\section{Introduction} A vehicular sensor network (VSN) combines the wireless communication provided by a vehicular ad-hoc network (VANET) with sensing devices installed in vehicles. Sensors available in vehicles gather data including locations, speeds, directions, accelerations, etc. Thus, the vehicles participating in a VSN can be used as sources of information to accurately determine traffic flow characteristics. Monitoring of road and traffic conditions has become an important application area of VSNs. This new technology creates a huge opportunity to extend the road-side sensor infrastructure of the existing traffic control, management and safety systems \cite{bplaczek:bib1,bplaczek:bib18}. A major drawback of the current-generation traffic monitoring systems is their narrow coverage due to high installation and maintenance costs. It is expected that VSNs will help to overcome these limitations. Unlike traditional wireless sensor networks, VSNs are not subject to major memory, processing, storage, and energy limitations. However, in dense urban road networks, where a large number of vehicles use the same transmission medium for many purposes, periodic transmission of all sensed data may consume the entire channel bandwidth, resulting in excessive congestion and delays in the communication network. These effects are a major impediment to time-constrained control tasks and safety-related services. Therefore, the efficient use of the wireless communication channel is one of the basic issues in the development of VSN applications \cite{bplaczek:bib1}. This paper presents and evaluates three data collection algorithms that use uncertainty estimates to reduce the data transmission in a VSN-based road traffic control system. The uncertainty-dependent data collection algorithms were inspired by the observation that in many cases the scope of real-time vehicular data potentially available in a VSN exceeds the needs of particular traffic control tasks. The underlying idea is to detect the necessity of data transfers on the basis of uncertainty evaluation. The advantage of the introduced approach is that it uses selective on-time queries instead of periodic data sampling. The rest of the paper is organised as follows. Related works are reviewed in Section 2. Section 3 describes the VSN-based road traffic control system. Algorithms for traffic data collection are presented in Section 4. Section 5 contains the results of an experimental study on data collection for traffic control in a road network. Finally, conclusions are given in Section 6. \section{Related works} The emergence of VSN technologies has made it possible to use novel, more effective techniques to deal with the problems of road traffic control. Several traffic control algorithms have been developed in this field for signalised intersections. These adaptive signal control schemes use real-time sensor data collected from vehicles (e.g. their positions and speeds) to minimise travel time and delay experienced by drivers at road intersections. Most methods are based on wireless communication between vehicles and road-side control nodes (e.g. \cite{bplaczek:bib2,bplaczek:bib3}). In a few of the proposed solutions, vehicle-to-vehicle communication is used to implement the traffic control \cite{bplaczek:bib4,bplaczek:bib5}. In the studies cited above, the real-time sensed data are assumed to be collected continuously from all vehicles in a certain area.
Such a periodic data sampling scheme may cause excessive congestion and latency in the communication network due to the bandwidth-limited wireless medium. Therefore, more research is needed to determine the input data sets as well as the sampling rates that are necessary for traffic control. In the literature, several methods have been introduced for wireless sensor networks that enable the optimisation of data collection procedures. Suppression-based techniques have been demonstrated to be useful in reducing the amount of sensor data transmitted for monitoring physical phenomena \cite{bplaczek:bib6}. Temporal suppression is the most basic method: sensor readings are transmitted only from those nodes where a change occurred since the last transmission \cite{bplaczek:bib7}. Spatial suppression methods aim to reduce redundant transmissions by exploiting the spatial correlation of sensor readings \cite{bplaczek:bib8}. If the sensor readings of neighbouring sensor nodes are the same or similar, the transmission of those sensed values can be suppressed. Model-based suppression methods use the divergence between actual measurements and model predictions to detect the necessity of data transfers \cite{bplaczek:bib9}. Implementing this approach requires a pair of dynamic models of the monitored phenomenon, with one copy distributed in the sensor network and the other at a base station. Another effective approach to the optimisation problem of data collection in sensor networks is the model-based querying approach, in which the sensor data are complemented by a probabilistic model of the underlying system \cite{bplaczek:bib10}. According to this methodology, sensors are used to acquire data only when the model is not sufficiently rich to answer the query with an acceptable confidence. Each query has to include user-defined error tolerances and target confidence bounds that specify how much uncertainty is acceptable in the answer. In \cite{bplaczek:bib11} an uncertainty-dependent data collection method was proposed for VSN-based traffic control systems. In this method, the necessity of data transfers is detected by uncertainty evaluation of traffic control decisions. The sensor data are transmitted from vehicles to the control node only at selected time moments. For the remaining periods of time, the sensor readings are replaced by the results of an on-line traffic simulation. The effectiveness of this method was verified in an experimental study on traffic control at an isolated intersection. \section{VSN-based road traffic control} The purpose of the VSN-based traffic control system is to manage the traffic flow by controlling traffic signals (Fig. 1). The VSN senses positions and velocities of vehicles in a road network. The control loop includes a data collection module, which sends selective on-time queries to retrieve the necessary traffic data from the VSN. At each time step, the set of data that has to be acquired is determined taking into account the uncertainty estimated during the decision-making procedure. The traffic model is an important component, which uses the acquired data for approximation of the current traffic state as well as prediction of its future evolution. The task of the decision-making module is to select an optimal control action, according to the control strategy, on the basis of the information delivered by the traffic model.
\begin{figure} \centering \includegraphics [height=2.22cm] {BPlaczek_Fig1.eps} \caption{VSN-based road traffic control system} \end{figure} \subsection{Traffic control strategy} In the presented study, a decentralised self-control strategy \cite{bplaczek:bib12} was applied to minimise travel times in a road network. The self-organised traffic control is based on an optimisation rule and a stabilisation rule. Both rules are executed in parallel for all intersections in the network in order to adapt the traffic control to local flow conditions. According to the self-organised traffic control strategy, consecutive control decisions are made in time steps of one second. A particular control decision determines which traffic stream should get a green signal at an intersection. The decision is made using the following formula: \begin{equation} \sigma = \left\{ \begin{array}{l l} \mathrm{head} \: \mathrm{\Omega} & \quad \mathrm{if} \: \mathrm{\Omega} \neq \emptyset\\ \mathrm{arg\:max}_i \pi_i & \quad \mathrm{otherwise,}\\ \end{array} \right. \end{equation} where $\sigma$ indicates the traffic stream which will get the green signal, $\mathrm{\Omega}$ is an ordered set containing indices of the traffic streams that have been selected using the stabilisation rule, and $\pi_i$ denotes the priority of stream $i$, which is calculated on the basis of the optimisation rule. The aim of the stabilisation rule is to ensure that every traffic stream is served at least once within a period $T_{max}$. To this end, for each traffic stream a service interval $Z_i$ is predicted as the sum of the preceding red time $r_i$ for stream $i$, the intergreen time $\tau^0_i$ before switching the green signal for stream $i$, and the green time $G_i$ required for vehicles in lane $i$ to pass the intersection: \begin{equation} Z_i=r_i+\tau^0_i+G_i. \end{equation} The index $i$ of a traffic stream joins the set $\mathrm{\Omega}$ as soon as $Z_i \geq T_{max}$. The optimisation rule aims at minimising waiting times by serving the incoming traffic as quickly as possible. According to this rule, the traffic stream with the highest priority index $\pi_i$ gets the green signal, provided that the set $\mathrm{\Omega}$ is empty. The priority index for stream $i$ is defined as \begin{equation} \pi_i =\frac{N_i}{\tau^\mathrm{pen}_{i,\sigma}+\tau_i+G_i}, \end{equation} where $N_i$ denotes the number of vehicles in lane $i$ that are expected to pass the intersection in time $\tau_i+G_i$, $\tau^\mathrm{pen}_{i,\sigma}$ is a penalty for switching from stream $\sigma$ to $i$, $\tau_i$ denotes the intergreen time after the green signal for stream $i$, and $\sigma$ is the index of the currently served traffic stream. For more detailed information on the self-organised traffic control strategy see the paper by L\"{a}mmer and Helbing \cite{bplaczek:bib12}.
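For concreteness, the decision rule (1)--(3) can be summarised by the following minimal sketch (Python; the names and data layout are ours, and crisp scalar estimates of $Z_i$, $N_i$ and the time terms are assumed, whereas the actual system operates on the fuzzy quantities introduced in the next subsection):

\begin{verbatim}
# Minimal sketch of the self-organised control decision (1)-(3),
# assuming crisp (non-fuzzy) per-stream estimates.

T_MAX = 120.0  # maximum service period [s], as in the experiments

def service_interval(r, tau0, G):
    # (2): predicted service interval Z_i = r_i + tau0_i + G_i
    return r + tau0 + G

def priority(N, tau_pen, tau, G):
    # (3): priority index pi_i = N_i / (tau_pen_i + tau_i + G_i)
    return N / (tau_pen + tau + G)

def control_decision(streams):
    # streams: dict i -> dict with keys r, tau0, G, N, tau, tau_pen
    # stabilisation rule: streams whose service interval reached T_MAX
    omega = [i for i, s in streams.items()
             if service_interval(s["r"], s["tau0"], s["G"]) >= T_MAX]
    if omega:
        return omega[0]               # (1), first case: head of Omega
    # (1), second case: optimisation rule, highest priority wins
    return max(streams, key=lambda i: priority(streams[i]["N"],
                                               streams[i]["tau_pen"],
                                               streams[i]["tau"],
                                               streams[i]["G"]))
\end{verbatim}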
\subsection{Traffic model} The traffic model in the control system is used to estimate the numbers of vehicles approaching an intersection ($N_i$) and to predict the required green times ($G_i$). In contrast to the original self-organised control method, which uses a macroscopic (fluid dynamic) traffic model \cite{bplaczek:bib12}, the proposed approach is based on the microscopic fuzzy cellular model \cite{bplaczek:bib13}. This modification enables a better utilisation of the data acquired from the VSN, concerning the parameters of particular vehicles. The fuzzy cellular traffic model was formulated as a hybrid system combining cellular automata and fuzzy calculus. It was based on a cellular automata approach to traffic modelling that ensures accurate simulation of real traffic phenomena \cite{bplaczek:bib14}. A characteristic feature of this model is that it uses fuzzy numbers to represent vehicle positions, velocities and other parameters. Moreover, the model transition from one time step to the next is based on the arithmetic of ordered fuzzy numbers. This approach retains the advantages of cellular automata models and eliminates their main drawbacks, i.e., the necessity of multiple Monte Carlo simulations and calibration issues \cite{bplaczek:bib15}. A traffic lane in the fuzzy cellular model is divided into cells that correspond to road segments of equal length. Road traffic streams at an intersection are represented as sets of vehicles. A vehicle $j$ in traffic lane $i$ is described by its position $X_{i,j}$ (occupied cell) and velocity $V_{i,j}$ (in cells per time step). The maximum velocity is defined by the parameter $V_{max}$. The velocities and positions of all vehicles are updated simultaneously in discrete time steps of one second. All the above-mentioned variables are expressed by fuzzy numbers. In this study it was assumed that the fuzzy numbers have trapezoidal or triangular membership functions, thus they are represented by four scalars and the notation $A=(a^{(1)},a^{(2)},a^{(3)},a^{(4)})$ is used. Arithmetic operations are computed for the fuzzy numbers using the following definition: \begin{equation} o(A,B)=(o(a^{(1)},b^{(1)}),o(a^{(2)},b^{(2)}),o(a^{(3)},b^{(3)}),o(a^{(4)},b^{(4)})), \end{equation} where $A$, $B$ are fuzzy numbers and $o$ stands for an arbitrary binary operation. The application of fuzzy calculus helps to deal with incomplete traffic data and enables straightforward determination of the uncertainty in simulation results \cite{bplaczek:bib16}. The main advantage of the fuzzy cellular model lies in the fact that the prediction of the parameters $N_i$ and $G_i$ is computationally efficient and that the results are also represented by fuzzy numbers, so their uncertainties can be easily evaluated.
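As a small illustration of how the componentwise arithmetic (4) propagates uncertainty through the model, consider the following sketch (Python; the representation and function names are ours, not part of the original model implementation):

\begin{verbatim}
# Trapezoidal fuzzy numbers A = (a1, a2, a3, a4), combined
# componentwise as in (4). A crisp value x is simply (x, x, x, x).

def fuzzy_op(A, B, op):
    # (4): apply a binary operation o componentwise
    return tuple(op(a, b) for a, b in zip(A, B))

def fuzzy_add(A, B):
    return fuzzy_op(A, B, lambda a, b: a + b)

# Example: a vehicle whose position is known only up to one cell,
# X = (12, 12, 13, 13), advancing by an uncertain velocity
# V = (1, 2, 2, 2) cells per time step, gets a wider position estimate.
X = (12, 12, 13, 13)
V = (1, 2, 2, 2)
print(fuzzy_add(X, V))   # -> (13, 14, 15, 15)
\end{verbatim}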
\section{Data collection algorithms} This section introduces three data collection algorithms for VSNs and defines the uncertainty estimates that enable the reduction of data transmission. The data collection algorithms are presented as components of the traffic control procedure, which was discussed in Section 3. It should also be noted here that the control procedure is executed independently for each intersection in the road network. First, some basic operations will be explained, which are common to the three proposed algorithms (see the pseudocodes below). The aim of the model \emph{update} operation is to approximate the current state of the traffic flow, i.e., the current positions of all vehicles approaching an intersection ($X_{i,j}$). This approximation is based on both the real traffic data acquired from the VSN and the results of real-time simulation. During the real-time simulation, the traffic model is used to estimate the missing positions of vehicles that were excluded from direct data acquisition. Besides the data on vehicle positions, the model update operation also has to take into account the real-time status of traffic control operations (i.e., the current traffic signals). As mentioned in the previous section, the traffic model is used to \emph{predict} the numbers of vehicles approaching an intersection ($N_i$) and the required green times ($G_i$). Values of these parameters for all lanes are predicted by a faster-than-real-time simulation using the approximation of the current traffic state to determine initial conditions. The prediction results are used to \emph{make control decision}, i.e., to determine which traffic stream should get a green signal at an intersection. Finally, the traffic control system \emph{executes the control decision} by switching the appropriate signals.

\begin{figure} \begin{Verbatim}[commandchars=\\\{\}]
for each time step
  update traffic model
  for each lane i=1..m
    for each vehicle j=1..n(i)
      if unc(X\textsubscript{i,j})>ut\textsubscript{pos} then acquire X\textsubscript{i,j}
  if new data collected then update traffic model
  for each lane i=1..m
    predict N\textsubscript{i}, G\textsubscript{i}
  make control decision
  execute control decision
\end{Verbatim} \caption{Pseudocode of data collection algorithm 1} \end{figure}

Figure 2 shows the pseudocode of the first data collection algorithm. According to this algorithm, the position of a vehicle is acquired from the VSN only if the uncertainty of the position $unc(X_{i,j})$, as approximated by the traffic model, is higher than a predetermined threshold $ut_{pos}$ (in cells). This approach is similar to the concept of model-based querying, which was mentioned in Section 2. Note that the vehicle position is represented by a fuzzy number $X_{i,j}=(x^{(1)}_{i,j},x^{(2)}_{i,j},x^{(3)}_{i,j},x^{(4)}_{i,j})$. To estimate its uncertainty, a definition based on the area under the membership function was adapted. Using this definition, the following formula was derived: \begin{equation} unc(X_{i,j})=0.5|x^{(1)}_{i,j}-x^{(2)}_{i,j}|+|x^{(2)}_{i,j}-x^{(3)}_{i,j}|+0.5|x^{(3)}_{i,j}-x^{(4)}_{i,j}| \end{equation} The second data collection algorithm (Fig. 3) estimates the uncertainty of the predicted green times, $unc(G_i)$, using a measure similar to that defined in (5). If, for a given lane $i$, the prediction uncertainty of $G_i$ is higher than a threshold value $ut_{pred}$ (in seconds), then the positions of vehicles in that lane are acquired. During the data acquisition, only those vehicles are taken into account whose positions cannot be precisely determined by the traffic model.

\begin{figure} \begin{Verbatim}[commandchars=\\\{\}]
for each time step
  update traffic model
  for each lane i=1..m
    predict N\textsubscript{i}, G\textsubscript{i}
    if unc(G\textsubscript{i})>ut\textsubscript{pred} then
      for each vehicle j=1..n(i)
        if unc(X\textsubscript{i,j})>0 then acquire X\textsubscript{i,j}
  if new data collected then update traffic model
  for each lane i=1..m
    predict N\textsubscript{i}, G\textsubscript{i}
  make control decision
  execute control decision
\end{Verbatim} \caption{Pseudocode of data collection algorithm 2} \end{figure}

The uncertainty of control decisions is used to detect the necessity of data transfers in the third data collection algorithm (Fig. 4). The decision uncertainty is estimated as the maximum of the uncertainties associated with the two rules of the control strategy, i.e., the stabilisation and the optimisation rule: $unc(decision)=\max(unc_{stab}, unc_{opt})$. The decision rules include comparison operations that have to be executed on fuzzy numbers, and thus the probabilistic approach to fuzzy number comparison \cite{bplaczek:bib17} is employed. This approach enables estimation of the probability $P$ with which one fuzzy number is less than, greater than, or equal to another fuzzy number. The probabilities are used for uncertainty estimation of traffic control decisions.
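Before turning to the decision-uncertainty formulas, the following minimal sketch (Python; the names are ours) illustrates the position-uncertainty measure (5) and the acquisition test used in the inner loop of algorithm 1 (Fig. 2):

\begin{verbatim}
# Sketch of the position-uncertainty measure (5) and the acquisition
# test of algorithm 1. Positions are trapezoidal fuzzy numbers
# X = (x1, x2, x3, x4); the threshold ut_pos is expressed in cells.

def unc(X):
    # (5): area under the trapezoidal membership function
    x1, x2, x3, x4 = X
    return 0.5 * abs(x1 - x2) + abs(x2 - x3) + 0.5 * abs(x3 - x4)

def vehicles_to_query(positions, ut_pos):
    # positions: dict (i, j) -> fuzzy position X_ij predicted by the model
    # returns the identifiers of vehicles whose position must be acquired
    return [veh for veh, X in positions.items() if unc(X) > ut_pos]

# Example: a precisely tracked vehicle and a poorly tracked one.
positions = {(1, 1): (20, 20, 20, 20), (1, 2): (5, 7, 9, 12)}
print(vehicles_to_query(positions, ut_pos=2))   # -> [(1, 2)]
\end{verbatim}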
A detailed discussion of the decision uncertainty estimation method can be found in \cite{bplaczek:bib11}; here only the resulting formulas are given, with short comments.

\begin{figure} \begin{Verbatim}[commandchars=\\\{\}]
for each time step
  update traffic model
  for each lane i=1..m
    predict N\textsubscript{i}, G\textsubscript{i}
  make control decision
  if unc(decision)>ut\textsubscript{dec} then
    for each lane i=1..m
      for each vehicle j=1..n(i)
        if unc(X\textsubscript{i,j})>0 then acquire X\textsubscript{i,j}
  if new data collected then update traffic model
  for each lane i=1..m
    predict N\textsubscript{i}, G\textsubscript{i}
  make control decision
  if unc(decision)<=ut\textsubscript{dec} then execute control decision
\end{Verbatim} \caption{Pseudocode of data collection algorithm 3} \end{figure}

Let $\sigma$ denote the result of the control decision, i.e., the index of the traffic stream which will get a green signal. The stabilisation rule determines $\sigma$ when $Z_{\sigma} \geq T_{max}$. It was assumed for this study that the condition $Z_{\sigma} \geq T_{max}$ is satisfied if the probability $P(Z_{\sigma} \geq T_{max})$ is above 0.5. Thus, in the opposite situation the optimisation rule is activated. These assumptions lead to the following definition of the stabilisation uncertainty: \begin{equation} unc_{stab}=\left\{ \begin{array}{l l} 2P(Z_\sigma<T_{max}) & \quad \mathrm{if} \: P(Z_{\sigma} \geq T_{max})>0.5\\ 2P(Z_\sigma \geq T_{max}) & \quad \mathrm{if} \: P(Z_{\sigma} \geq T_{max})\leq0.5.\\ \end{array} \right. \end{equation} The optimisation uncertainty corresponds to the comparisons that are necessary for finding the highest priority value $\pi_\sigma$. This uncertainty equals zero if the stabilisation rule determines the control decision: \begin{equation} unc_{opt}=\left\{ \begin{array}{l l} 0 & \quad \mathrm{if} \: P(Z_{\sigma} \geq T_{max})>0.5\\ \max_i 2P(\pi_\sigma < \pi_i) & \quad \mathrm{if} \: P(Z_{\sigma} \geq T_{max})\leq0.5.\\ \end{array} \right. \end{equation} The symbols used in (6) and (7) were defined in Section 3.1. The resulting values of the uncertainties $unc_{stab}$, $unc_{opt}$ and $unc(decision)$ lie in the range between 0 and 1. \section{Experimental results} The proposed data collection algorithms were applied to the VSN-based traffic control in a road network. The experiments were performed in a traffic simulator which was developed for this purpose on the basis of the Nagel-Schreckenberg stochastic cellular automaton \cite{bplaczek:bib14}. The structure of the simulated network is presented in Fig. 5. Roads are unidirectional, thus each intersection has two incoming traffic streams. Links between intersections consist of 40 cells, which correspond to a distance of 300 m. The maximal velocity of vehicles is 2 cells per time step, i.e., 54 km/h (the simulation time step is one second). The deceleration probability $p$ is 0.15. For the above settings of the Nagel-Schreckenberg traffic model, the obtained saturation flow at intersections is about 1700 vehicles per hour of green time. The self-organised traffic control was simulated assuming intergreen times $\tau$ of 5 s and a maximum period $T_{max}$ of 120 s. The architecture of the considered traffic control system includes two types of VSN nodes: control nodes and vehicle nodes. The fixed control nodes installed at intersections collect sensor data from the vehicle nodes and execute the traffic control procedure. Each vehicle in the system is equipped with a wireless communication unit and uses a GPS device to determine its position.
Every time a vehicle enters the road network, it has to register itself by sending a hello message. The data collection operation is initialised by the control node, which generates queries to acquire positions of vehicles approaching an intersection. \begin{figure} \centering \includegraphics [height=5cm] {BPlaczek_Fig5.eps} \caption{Simulated road network} \end{figure} Simulation results for the three data collection algorithms are compared in Fig. 6. The comparison takes into account the performance of traffic control and the number of data transfers from particular vehicles to the control nodes. The average delays of vehicles and the data transfer counts were determined from 3-hour traffic simulations. In this experiment, the traffic flow volumes were changed gradually in order to reproduce saturation levels (demand-capacity ratios) from 0 to 100\%. The highest accuracy of the traffic information in the control system was obtained using the first data collection algorithm with the threshold value $ut_{pos}=0$. This scenario results in the lowest delays and the highest number of data transfers. The delays grow drastically for algorithm 1 if a threshold value above 0 is used (see the plot for $ut_{pos}=5$). In comparison, the second algorithm with $ut_{pred}=5$ provides low delays and reduces the data transfer counts. However, for algorithm 2 with higher threshold values an increase of the delays is observed, especially at low saturation levels. The best results were obtained for algorithm 3, which enables a significant reduction in the data transfers without decreasing the performance of the traffic control. \begin{figure} \centering \includegraphics [height=5cm] {BPlaczek_Fig6.eps} \caption{Simulation results: average delay (left) and number of data transfers (right)} \end{figure} \section{Conclusion} In this paper three data collection algorithms were proposed for VSN-based traffic control systems. The effectiveness of the introduced algorithms was evaluated in an experimental study on traffic control in a road network. Experiments were carried out using a simulation environment. The tests confirmed that the proposed algorithms enable a reduction in data transmission for a wide range of traffic conditions. The most promising results were obtained for the algorithm using decision uncertainty to detect the necessity of data transfers.
\section{Introduction} The connections between electricity and probability are deep, and have provided many tools for understanding the behaviour of stochastic processes. In this note, we describe a new result in this direction from \cite{Croyres}, which states that if a sequence of spaces equipped with so-called `resistance metrics' and measures converges with respect to the Gromov-Hausdorff-vague topology, then the associated stochastic processes also converge. (In the non-compact case, the proof in \cite{Croyres} also requires a non-explosion condition.) All the relevant concepts will be introduced more carefully below, with the statement of the main result appearing as Theorem \ref{mainres}. This result generalises previous work on trees, fractals, and various models of random graphs (apart from the background in \cite{Croyres}, see also \cite{ALWtree, kig1}). Moreover, it is useful in the study of time-changed processes, including Liouville Brownian motion, the Bouchaud trap model and the random conductance model, on such spaces \cite{CHKtime}. Some of these examples will be sketched in Section \ref{applications}. I further conjecture that the result will be applicable to the random walk on the incipient infinite cluster of critical bond percolation on the high-dimensional integer lattice (see Section \ref{IICsec}). \section{Random walks on graphs and electrical networks}\label{rwsec} Before introducing the definition of a resistance metric and the associated stochastic process on a general space, it is helpful to recall the more elementary definition of effective resistance and the corresponding random walk on a graph. This is the purpose of the present section. We start with the definition of a random walk on a weighted graph. In particular, let $G=(V,E)$ be a finite, connected graph, equipped with (strictly positive, symmetric) edge conductances $(c(x,y))_{\{x,y\}\in E}$. Let $\mu$ be a finite measure on $V$ of full support. We then define the associated random walk $X$ to be the continuous-time Markov chain with generator $\Delta$, as defined by: \[(\Delta f)(x):=\frac{1}{\mu(\{x\})}\sum_{y:\:y\sim x}c(x,y)(f(y)-f(x)),\] where the sum is over vertices $y$ connected to $x$ by an edge in $E$, i.e.\ this is the process that jumps from $x$ to $y$ with rate $c(x,y)/\mu(\{x\})$. Note that the transition probabilities of the jump chain of $X$ are given by \[P(x,y)=\frac{c(x,y)}{c(x)},\] where $c(x):=\sum_{y:\:y\sim x}c(x,y)$, and so are completely determined by the conductances. The measure $\mu$ determines the time-scaling of the process. Common choices are to take $\mu(\{x\}):=c(x)$, which is the so-called \emph{constant speed random walk (CSRW)}, or $\mu(\{x\}):=1$, which is the \emph{variable speed random walk (VSRW)}. As illustrated by the example presented in Section \ref{trapsec}, these two processes can have quite different behaviour if the conductances are inhomogeneous.
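As a minimal illustration of these definitions, the following sketch (Python; the data layout and names are ours, not code from \cite{Croyres}) simulates a few jumps of the continuous-time walk on a weighted graph for either choice of $\mu$:

\begin{verbatim}
import random

# Minimal sketch: continuous-time random walk on a weighted graph.
# conduct[x][y] = c(x,y) > 0; mu[x] is the speed measure.
# CSRW: mu[x] = c(x) = sum_y c(x,y);  VSRW: mu[x] = 1.

def step(x, conduct, mu):
    # jump rate from x to y is c(x,y)/mu(x); total rate out of x is c(x)/mu(x)
    rates = {y: c / mu[x] for y, c in conduct[x].items()}
    total = sum(rates.values())
    hold = random.expovariate(total)          # exponential holding time
    r, acc = random.uniform(0, total), 0.0
    for y, rate in rates.items():             # pick the next vertex
        acc += rate
        if r <= acc:
            break
    return y, hold

# Example: a 3-cycle with one very large conductance.
conduct = {0: {1: 1.0, 2: 100.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 100.0, 1: 1.0}}
c = {x: sum(conduct[x].values()) for x in conduct}
csrw_mu = c                                   # CSRW speed measure
vsrw_mu = {x: 1.0 for x in conduct}           # VSRW speed measure
x, t = 0, 0.0
for _ in range(5):
    x, dt = step(x, conduct, csrw_mu)
    t += dt
print("CSRW after 5 jumps: vertex", x, "elapsed time", round(t, 3))
\end{verbatim}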
Suppose now we view $G$ as an electrical network with edges assigned conductances according to $(c(x,y))_{\{x,y\}\in E}$. If the vertices of the network are held at the potentials given by $f(x)$, then the total \emph{electrical energy} dissipated in the network is given by $\mathcal{E}(f,f)$, where $\mathcal{E}$ is the quadratic form on $V$ given by \[\mathcal{E}(f,g):=\frac12\sum_{x,y:x\sim y}c(x,y)\left(f(x)-f(y)\right)\left(g(x)-g(y)\right).\] Moreover, regardless of the particular choice of $\mu$, $\mathcal{E}$ is a \emph{Dirichlet form} on $L^2(V,\mu)$, and can be written as \[\mathcal{E}(f,g)=-\sum_{x\in V}(\Delta f)(x)g(x)\mu(\{x\}).\] Using the classical correspondence between Dirichlet forms and reversible Markov processes, it follows that there is a one-to-one correspondence between the electrical energy $\mathcal{E}$ (viewed as a Dirichlet form on $L^2(V,\mu)$) and the random walk $X$. (For the definition of a Dirichlet form, and background on the connections between such objects and Markov processes, see \cite{FOT}.) Suppose now that we wished to replace our network by a single resistor between two vertices $x$ and $y$. The resistance we should assign to this resistor, so that the same amount of current flows from $x$ to $y$ when voltages are applied to these vertices as in the original network, is given by the \emph{effective resistance}, which can be computed by setting \[R(x,y)^{-1}=\inf\left\{\mathcal{E}(f,f):\:f(x)=1,f(y)=0\right\}\] for $x\neq y$, and $R(x,x)=0$. Although it is not immediate from the definition, it is possible to check that $R$ is a metric on $V$, e.g.\ \cite{Tet}, and that it characterises the edge conductances uniquely, e.g.\ \cite{Kigdendrite}. The latter observation is important, because it means that, given an effective resistance $R$ on a graph, one can reconstruct the corresponding electrical energy operator $\mathcal{E}$. Thus, if one is also given a measure $\mu$, then by viewing $\mathcal{E}$ as a Dirichlet form on $L^2(V,\mu)$ as in the previous paragraph we also recover the random walk $X$. In summary, we have the following correspondences: \begin{center} \begin{tabular}{ccccc} Random walk $X$, & $\leftrightarrow$ & Dirichlet form $\mathcal{E}$ &$\leftrightarrow$&Effective resistance $R$\\ generator $\Delta$ & & on $L^2(V,\mu)$ && and measure $\mu$. \end{tabular} \end{center} \section{Resistance metrics and forms}\label{ressec} Building on the discussion of the previous section, it is now straightforward to introduce a resistance metric on a general space. After presenting the definition, we then explain the theory developed by Kigami in the context of analysis on low-dimensional fractals that links resistance metrics and stochastic processes (see \cite{kig1,Kig} for details). \begin{definition}[{\cite[Definition 2.3.2]{kig1}}] Let $F$ be a set. A function $R:F\times F\rightarrow \mathbb{R}$ is a \emph{resistance metric} if, for every finite $V \subseteq F$, one can find a weighted (i.e.\ equipped with conductances) graph with vertex set $V$ for which $R|_{V\times V}$ is the associated effective resistance.
\end{definition} As some first examples of resistance metrics, we have: \begin{itemize} \item the effective resistance metric on a graph; \item the one-dimensional Euclidean metric $|x-y|$ on $\mathbb{R}$ (this is not true in higher dimensions), or fractional powers of this, $|x-y|^{\alpha-1}$ for $\alpha\in(1,2]$ (see \cite[Chapter 16]{Kig}); \item any `shortest path' metric on a tree-like metric space (see \cite{ALWtree, Kigdendrite}); \item the resistance metric on the Sierpinski gasket, which can be constructed by setting, for `graph vertices' $x$, $y$ in the limiting fractal, \[R(x,y)=\lim_{n\rightarrow\infty}(3/5)^nR_n(x,y),\] where $R_n$ is the effective resistance on the level $n$ graph (see Figure \ref{sgpic}) considered with unit resistances along edges, and then using continuity to extend to the whole space. Resistance metrics can similarly be defined on various classes of fractals, see \cite{kig1} for background. \end{itemize} \begin{figure}[h] \begin{center} \vspace{-0pt} \scalebox{0.1}{\includegraphics{sg.eps}} \caption{Level 0, 1, 2 approximations to the Sierpinski gasket.}\label{sgpic} \end{center} \end{figure} Playing the role of the electrical energy in this general setting is the collection of resistance forms. We now state the definition of such objects. Whilst the definition is quite technical and we will not discuss the role of the various conditions in detail here, it is important because it gives a route to connecting the resistance metric with a stochastic process. \begin{definition}[{\cite[Definition 2.3.1]{kig1}}] Let $F$ be a set. A pair $(\mathcal{E},\mathcal{F})$ is a resistance form on $F$ if it satisfies the following conditions: \begin{description} \item[RF1] $\mathcal{F}$ is a linear subspace of the collection of functions $\{f:\:F\rightarrow \mathbb{R}\}$ containing constants, and $\mathcal{E}$ is a non-negative symmetric quadratic form on $\mathcal{F}$ such that $\mathcal{E}(f,f)=0$ if and only if $f$ is constant on ${F}$. \item[RF2] Let $\sim$ be the equivalence relation on $\mathcal{F}$ defined by saying $f\sim g$ if and only if $f-g$ is constant on $F$. Then $(\mathcal{F}/\sim,\mathcal{E})$ is a Hilbert space. \item[RF3] If $x\neq y$, then there exists an $f\in \mathcal{F}$ such that $f(x)\neq f(y)$. \item[RF4] For any $x,y\in F$, \[\sup\left\{\frac{|f(x)-f(y)|^2}{\mathcal{E}(f,f)}:\:f\in\mathcal{F},\:\mathcal{E}(f,f)>0\right\}<\infty.\] \item[RF5] For any $f\in\mathcal{F}$, if $\bar{f}:=(f\wedge 1)\vee 0$, then $\bar{f}\in\mathcal{F}$ and $\mathcal{E}(\bar{f},\bar{f})\leq \mathcal{E}(f,f)$. \end{description} \end{definition} The following theorem connects the notions of a resistance metric and a resistance form, and yields the stochastic process that will be of interest in the remainder of the article. In particular, it explains how the correspondences stated at the end of the previous section extend to the more general present setting. For simplicity of the statement, we restrict to the compact case. It is also possible to extend the result to locally compact spaces, though this requires a more careful treatment of the domain of the Dirichlet form. \begin{theorem}[{\cite[Theorems 2.3.4, 2.3.6]{kig1}, \cite[Corollary 6.4 and Theorem 9.4]{Kig}}] (a) Let $F$ be a set. There is a one-to-one correspondence between resistance metrics and resistance forms on $F$.
This is characterised by the relation: \begin{equation}\label{reschar} R(x,y)^{-1}=\inf\left\{\mathcal{E}(f,f):\:f(x)=1,f(y)=0\right\} \end{equation} for $x\neq y$, and $R(x,x)=0$.\\ (b) Suppose $(F,R)$ is a compact resistance metric space, and $\mu$ is a finite Borel measure on $F$ of full support. Then the corresponding resistance form $(\mathcal{E},\mathcal{F})$ is a regular Dirichlet form on $L^2(F,\mu)$, and so is naturally associated with a Hunt process $((X_t)_{t\geq 0},(P_x)_{x\in F})$. \end{theorem} As a first example of the connection between a resistance metric and a stochastic process (beyond the example of random walks on graphs already discussed), consider $F=[0,1]$ with $R$ the Euclidean metric, and let $\mu$ be a finite Borel measure of full support on $[0,1]$. Define \[\mathcal{E}(f,g)= \int_0^1 f'(x) g'(x) dx,\qquad \forall f,g\in\mathcal{F},\] where $\mathcal{F}=\{f\in C([0,1]):\: f\mbox{ is absolutely continuous and }f'\in L^2(dx)\}$. Then $(\mathcal{E},\mathcal{F})$ is the resistance form associated with $([0,1],R)$. Moreover, $(\mathcal{E},\mathcal{F})$ is a regular Dirichlet form on $L^2(\mu)$. Integrating by parts yields \[\mathcal{E}(f,g)= -\int_0^1 (\Delta f)(x) g(x) \mu(dx),\qquad \forall f\in \mathcal{D}(\Delta),\:g\in\mathcal{F},\] where $\Delta f=\frac{d}{d\mu}\frac{df}{dx}$, and $\mathcal{D}(\Delta)$ contains those $f$ such that: $f'$ exists and $df'$ is absolutely continuous with respect to $\mu$, $\Delta f\in L^2(\mu)$, and $f'(0)=f'(1)=0$. From this, we see that if $\mu(dx)=dx$, then the Markov process naturally associated with $\Delta$ is reflected Brownian motion on $[0,1]$. (For more general $\mu$, the relevant process is simply a time-change of Brownian motion according to the Revuz measure $\mu$.) Taking $R(x,y)=|x-y|^{\alpha-1}$ for $\alpha\in(1,2]$, we can also obtain $\alpha$-stable processes in this way (see \cite[Chapter 16]{Kig}). \section{Scaling limit result} In this section, we will present a simplified version of the result established in \cite{Croyres}, the aim of which was to establish scaling limits of stochastic processes associated with resistance forms. In the full result, a non-explosion condition was introduced in order to extend beyond the case of compact spaces that we consider here. Moreover, the result was also adapted to random spaces, and incorporated spatial embeddings. In \cite{CHKtime} a similar result was proved under more restrictive volume growth conditions, which were applied to further deduce a convergence statement regarding the local times of the processes in question. To introduce the result precisely, let us fix the framework. In particular, we write $\mathbb{F}_c$ for the collection of quadruples of the form $(F,R,\mu,\rho)$, where: $F$ is a non-empty set; $R$ is a resistance metric on $F$ such that $(F,R)$ is compact; $\mu$ is a locally finite Borel regular measure of full support on $(F,R)$; and $\rho$ is a marked point in $F$. We recall that a sequence of such spaces is said to converge in the (marked) Gromov-Hausdorff-Prohorov topology to some element of $\mathbb{F}_c$ if all the spaces can be isometrically embedded into a common metric space $(M,d_M)$ in such a way that: the embedded sets converge with respect to the Hausdorff distance, the embedded measures converge weakly, and the embedded marked points converge. The following result establishes that, if such convergence occurs, then we also obtain convergence of stochastic processes.
\begin{theorem}[{cf.\ \cite[Theorem 1.2]{Croyres}}]\label{mainres} Suppose that the sequence $(F_n,R_n,\mu_n,\rho_n)_{n\geq 1}$ in $\mathbb{F}_c$ satisfies \begin{equation}\label{ghp} \left(F_n,R_n,\mu_n,\rho_n\right)\rightarrow \left(F,R,\mu,\rho\right) \end{equation} in the (marked) Gromov-Hausdorff-Prohorov topology for some $(F,R,\mu,\rho)\in \mathbb{F}_c$. It is then possible to isometrically embed $(F_n,R_n)_{n\geq 1}$ and $(F,R)$ into a common metric space $(M,d_M)$ in such a way that \[P^n_{\rho_n}\left(\left(X^n_t\right)_{t\geq 0}\in\cdot\right)\rightarrow P_{\rho}\left(\left(X_t\right)_{t\geq 0}\in\cdot\right)\] weakly as probability measures on $D(\mathbb{R}_+,M)$ (that is, the space of cadlag processes on $M$, equipped with the usual Skorohod $J_1$-topology), where $((X^n_t)_{t\geq 0},(P^n_x)_{x\in F_n})$ is the Markov process corresponding to $(F_n,R_n,\mu_n,\rho_n)$, and $((X_t)_{t\geq 0},(P_x)_{x\in F})$ is the Markov process corresponding to $(F,R,\mu,\rho)$. \end{theorem} Of course, given the correspondence between measured resistance metric spaces and stochastic processes, as described in Section \ref{ressec}, one might intuitively expect that Gromov-Hausdorff-Prohorov convergence will give us all the information we need to obtain process convergence. To turn this expectation into a proof we use the fact that, for a process associated with a resistance metric, we have an explicit formula for its resolvent kernel. (This starting point was influenced by the one used as the basis of the corresponding argument for trees in \cite{ALWtree}.) In particular, for $(F,R,\mu,\rho)\in \mathbb{F}_c$, let \[G_xf(y)=E_y\int_0^{\sigma_x}f(X_s)ds\] be the resolvent of $X$ killed on hitting $x$, where we have written $\sigma_x$ for the hitting time of $x$. (NB.\ Processes associated with resistance forms hit points; the above expression is well-defined and finite.) We then have that \[G_xf(y)=\int_Fg_x(y,z)f(z)\mu(dz),\] where the resolvent kernel is given by \[g_x(y,z)=\frac{R(x,y)+R(x,z)-R(y,z)}{2}.\] (See \cite[Theorem 4.3]{Kig}.) In view of this expression, the metric measure convergence at (\ref{ghp}) readily gives convergence of resolvents. Relatively standard arguments subsequently yield semigroup convergence, which in turn gives convergence of finite dimensional distributions. To get from convergence of finite dimensional distributions to convergence in $D(\mathbb{R}_+,M)$, it remains to check tightness of the processes. For this, we again appeal to an explicit expression for a resolvent in terms of resistance. In particular, we have for any closed set $A$ that \[g_A(y,z)=\frac{R(y,A)+R(z,A)-R_A(y,z)}{2},\] where $g_A$ is the resolvent kernel for the process $X$ killed on hitting the set $A$, and $R_A(y,z)$ is the resistance from $y$ to $z$ when the set $A$ is `shorted', i.e.\ in defining the resistance between $y$ and $z$ similarly to (\ref{reschar}), we consider only functions that are constant on $A$. (Again, see \cite[Theorem 4.3]{Kig}.) 
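To illustrate the first of these identities concretely, here is a small numerical sketch (Python; the discretisation and names are ours) that evaluates the killed resolvent $G_xf(y)=\int_F g_x(y,z)f(z)\mu(dz)$ for the unit interval equipped with its Euclidean resistance metric and Lebesgue measure:

\begin{verbatim}
# Sketch: the resolvent of the process killed at a point, computed purely
# from the resistance metric via g_x(y,z) = (R(x,y) + R(x,z) - R(y,z))/2.
# Example space: F = [0,1], R(y,z) = |y - z|, mu = Lebesgue measure,
# with the integral approximated by a midpoint rule on a uniform grid.

def R(y, z):
    return abs(y - z)

def g(x, y, z):
    # resolvent kernel of the process killed on hitting x
    return 0.5 * (R(x, y) + R(x, z) - R(y, z))

def killed_resolvent(f, x, y, n=10000):
    # G_x f(y) = int_F g_x(y, z) f(z) mu(dz)
    h = 1.0 / n
    return sum(g(x, y, (k + 0.5) * h) * f((k + 0.5) * h) * h for k in range(n))

# With f = 1 this gives E_y[hitting time of x]; for x = 0, y = 0.5 the
# exact value for the process associated with ([0,1], R, Leb) is
# y(2 - y)/2 = 0.375.
print(killed_resolvent(lambda z: 1.0, x=0.0, y=0.5))   # ~0.375
\end{verbatim}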
From this expression for $g_A$, and using that $X$ admits local times $(L_t(x))_{x\in F,t\geq 0}$ (see \cite[Lemma 2.4]{CHKtime}) that satisfy $E_yL_{\sigma_A}(z)=g_A(y,z)$, where $\sigma_A$ is the hitting time of $A$, one can establish via Markov's inequality a general exit time estimate of the form: \[\sup_{x\in F}P_x\left(\sup_{s\leq t}R(x,X_s)\geq \varepsilon\right)\leq \frac{32 N(F,\varepsilon/4)}{\varepsilon}\left(\delta+\frac{t}{\inf_{x\in F}\mu(B_R(x,\delta))}\right),\] valid for any $\varepsilon,\delta>0$, where $B_R(x,\delta):=\{y\in F:\:R(x,y)<\delta\}$, and $N(F,\varepsilon)$ is the minimal size of an $\varepsilon$ cover of $F$ (see \cite[Lemma 4.3]{Croyres}). The exact form of this expression is not especially important. Rather, the crucial point is its straightforward dependence on simple metric-measure quantities. As a consequence, we find that if (\ref{ghp}) holds, then \[\lim_{t\rightarrow0}\limsup_{n\rightarrow\infty}\sup_{x\in F_n}P^n_x\left(\sup_{s\leq t}R_n(x,X^n_s)\geq \varepsilon\right)=0.\] Tightness of the sequence $X^n$ is then a standard application of Aldous' tightness criterion \cite[Theorems 16.10 and 16.11]{Kall}. \section{Applications}\label{applications} To complete this overview, we present several examples for which the resistance metric framework is particularly useful. For further details, examples and references, see \cite{Croyres, CHKtime}. \subsection{Trees} Consider a sequence of graph trees $(T_n)_{n\geq 1}$, where $T_n$ has vertex set $V(T_n)$, shortest path graph distance $R_n$ (noting that this is a resistance metric), counting measure on the vertices $\mu_n$ (placing mass one on each vertex), and root $\rho_n$. Suppose that for some divergent sequences $(a_n)_{n\geq 1},(b_n)_{n\geq 1}$, \[\left(V(T_n),a_n^{-1}R_n,b_n^{-1}\mu_n,\rho_n\right)\rightarrow\left(\mathcal{T},R,\mu,\rho\right),\] for some limit in $\mathbb{F}_c$. (NB.\ Under the assumptions stated, $(\mathcal{T},R)$ is a so-called `real tree', which is a natural metric space analogue of a graph tree.) It then holds that \[\left(X^{T_n}_{ta_nb_n}\right)_{t\geq 0}\rightarrow \left(X^\mathcal{T}_t\right)_{t\geq 0},\] where $X^{T_n}$ is the random walk associated with $(V(T_n),R_n,\mu_n,\rho_n)$. (Here and in the examples below, for the statement we suppose the state spaces are suitably isometrically embedded into a common metric space.) In particular, distributional versions of the result hold for: \begin{itemize} \item Critical Galton-Watson trees with finite variance conditioned on size, with $a_n=n^{1/2}$, $b_n=n$. Versions of the result for infinite variance Galton-Watson trees also hold, see \cite{Croyrogt}. \item The (non-lattice) branching random walk, where the underlying tree is a critical Galton-Watson tree with exponential tails for the offspring distribution, and the steps have a centred, continuous distribution with fourth order polynomial tail decay \cite{CroyHd}. \item $\Lambda$-coalescent measure trees \cite[Section 7.5]{ALWtree}. \item The uniform spanning tree in two dimensions, with $a_n=n^{5/4}$, $b_n=n^2$ (subsequential convergence was proved in \cite{BCK}, and the full convergence follows from \cite{HoldenSun}). See Figure \ref{ustfig}. \end{itemize} \begin{figure}[ht] \begin{center} \scalebox{.5}{\includegraphics{SRW5000colour.eps} \includegraphics{SRW50000colour.eps}} \end{center} \caption{The range of a realisation of the simple random walk on the uniform spanning tree of a $60\times60$ box (with wired boundary conditions), shown after 5,000 and 50,000 steps. From most to least crossed edges, colours blend from red to blue.
Picture: Sunil Chhita.}\label{ustfig} \end{figure} \subsection{Conjecture for critical percolation}\label{IICsec} One model for which an appealing conjecture can be made is the incipient infinite cluster of bond percolation on $\mathbb{Z}^d$ in high dimensions, that is, when $d>6$. In particular, it is expected that this model satisfies the same scaling properties as branching random walk, and thus one might anticipate that if $\mathrm{IIC}$ is the incipient infinite cluster (see \cite{HJ} for a construction), $R_{\mathrm{IIC}}$ is the resistance metric on this (when individual edges have unit resistance) and $\mu_{\mathrm{IIC}}$ is the counting measure on $\mathrm{IIC}$, then the rescaled sequence \[\left(\mathrm{IIC},n^{-2}R_{\mathrm{IIC}},n^{-4}\mu_{\mathrm{IIC}},0\right),\] satisfies a locally compact, distributional version of (\ref{ghp}), with the limit being (an unbounded version of) the continuum random tree, and so the associated random walks converge to Brownian motion on the latter space, cf.\ the conjecture of \cite{CroyHd}. See also the recent work of \cite{BCF} regarding the lattice branching random walk. \subsection{Critical random graph} One critical percolation model that can already be tackled with Theorem \ref{mainres} is that on the complete graph. In particular, let $G(n,1/n)$ be the Erd\H{o}s-R\'{e}nyi random graph at criticality, which is obtained by running bond percolation with edge retention probability $1/n$ on the complete graph with $n$ vertices. For the largest connected component $\mathcal{C}_1^n$ of $G(n,1/n)$, it can be checked that \[\left(\mathcal{C}_1^n,n^{-1/3}R_n,n^{-2/3}\mu_n,\rho_n\right)\rightarrow\left(F,R,\mu,\rho\right),\] where the limiting space can be described explicitly, cf.\ \cite{ABG}. Hence, as originally proved in \cite{Croycrg}, \[\left(X^n_{tn}\right)_{t\geq 0}\rightarrow \left(X_t\right)_{t\geq 0}.\] \subsection{Heavy-tailed random conductance model on fractals}\label{trapsec} Finally, consider the Sierpinski gasket graphs shown in Figure \ref{sgpic}. Suppose that we equip the edges of these graphs with random, i.i.d.\ edge conductances that satisfy \[P(c(x,y)\geq u)=u^{-\alpha},\qquad u\geq 1,\] for some $\alpha\in (0,1)$. One can then check that resistance homogenises, in the sense that, almost-surely, \[\left(V_n,(3/5)^nR_n\right)\rightarrow \left(F,R\right),\] in the Gromov-Hausdorff topology, where: $V_n$ is the vertex set of the $n$th level graph, $R_n$ is the effective resistance associated with the random conductances, $F$ is the Sierpinski gasket, and (up to a deterministic constant) $R$ is the effective resistance on the Sierpinski gasket introduced above, see \cite{CHKtime}. Recall from Section \ref{rwsec} that the VSRW associated with $(V_n,R_n)$, which has transition rates $\lambda_{xy}= c(x,y)$, is the process corresponding to $(V_n,R_n,\mu_n)$, where $\mu_n(\{x\})=1$. Since $3^{-n}\mu_n\rightarrow \mu$, where $\mu$ is the $(\ln3/\ln 2)$-dimensional Hausdorff measure on the Sierpinski gasket, it follows that the VSRW $X^n$ converges to the standard Brownian motion on the gasket, $X$ say: \[\left(X^n_{t5^n}\right)_{t\geq 0}\rightarrow \left(X_t\right)_{t\geq 0}.\] On the other hand, the associated CSRW has transition rates $\lambda_{xy}= c(x,y)/c(x)$, where $c(x):=\sum_{y:y\sim x}c(x,y)$. This corresponds to the space $(V_n,R_n,\nu_n)$, where $\nu_n(\{x\})=c(x)$.
Similarly to the convergence of i.i.d.\ sums to $\alpha$-stable subordinators, it further holds that \[3^{-n/\alpha}\nu_n=3^{-n/\alpha}\sum_{x\in V_n}c(x)\delta_{x}\rightarrow \nu=\sum_i v_i\delta_{x_i},\] in distribution, where $\{(v_i,x_i)\}$ is a Poisson point process on $(0,\infty)\times F$ with intensity $c v^{-1-\alpha}dv\mu(dx)$ (for some deterministic constant $c$). Hence the CSRW, $Y^n$ say (and indeed its jump chain, which is simply the discrete-time simple random walk amongst the same conductances), converges: \[\left(Y^{n}_{t (5/3)^n3^{n/\alpha}}\right)_{t\geq 0}\rightarrow \left(Y_t\right)_{t\geq 0},\] where the limiting process $Y$ is the so-called \emph{Fontes-Isopi-Newman (FIN)} diffusion on the limiting fractal, which is the time-change of the Brownian motion $X$ according to the Revuz measure $\nu$. The process $Y$ spends positive time at the atoms of $\nu$, which demonstrates that the trapping of the CSRW on edges of high conductance persists in the limit (a phenomenon which, as the result of the previous paragraph shows, does not occur for the related VSRW). For further details of this example, see \cite{CHKtime}.
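As a rough numerical illustration of the heavy-tail mechanism behind this limit (a Python sketch under our own assumptions, not taken from \cite{CHKtime}), one can sample conductances with $P(c\geq u)=u^{-\alpha}$ and observe that the normalised sums remain random and are dominated by a few large terms, in contrast with the law-of-large-numbers behaviour of the vertex counts underlying the VSRW:

\begin{verbatim}
import random

# Rough illustration (not from the paper): conductances with the tail
# P(c >= u) = u^{-alpha}, alpha in (0,1), have sums dominated by a few
# large terms, so 3^{-n/alpha} * sum_x c(x) stays random, whereas the
# normalised vertex counts behind the VSRW concentrate.

alpha = 0.5
random.seed(1)

def heavy_tailed(alpha):
    # inverse-transform sample of P(c >= u) = u^{-alpha}, u >= 1
    return random.random() ** (-1.0 / alpha)

for n in [6, 8, 10]:
    N = 3 ** n                      # order of the number of edges at level n
    c = [heavy_tailed(alpha) for _ in range(N)]
    total = sum(c)
    print("n =", n,
          "normalised sum:", round(total / 3 ** (n / alpha), 3),
          "largest-term share:", round(max(c) / total, 3))
\end{verbatim}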
\section{Introduction} In this paper we consider the numerical approximation of two one-dimensional, nonlocal systems of nonlinear partial differential equations (pde's) of dispersive wave type. The systems have been derived in \cite{BLS2008} as models describing the propagation of internal waves in a two-layer interface problem with rigid upper and lower boundaries, under two different regimes in the case of a shallow upper layer and small-amplitude deformations in the lower layer. \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{internalw3.eps} \caption{Idealized model of internal wave propagation in a two-layer interface problem: $\rho_{2}>\rho_{1}; d_{2}>d_{1}$; $\zeta(x,t)$ denotes the downward vertical displacement of the interface from its level of rest at $(x,t)$.} \label{ILWBO_fig1} \end{figure} The idealized model is sketched in Figure \ref{ILWBO_fig1}. It consists of two layers of inviscid, homogeneous, incompressible fluids with depths $d_{1}, d_{2}$ and densities $\rho_{1}<\rho_{2}$. The upper and lower layers are bounded above and below, respectively, by rigid lids, with the origin of the vertical coordinate at the top. In \cite{BLS2008} the Euler equations with interface are reformulated in terms of two nonlocal operators linking the velocity potentials associated with the two layers and evaluated at the interface. This approach is then used to derive different asymptotic models, with which the Euler system is consistent. The models are valid in specific physical regimes described in terms of the parameters \begin{eqnarray*} \epsilon=\frac{a}{d_{1}},\; \mu=\frac{d_{1}^{2}}{\lambda^{2}},\; \epsilon_{2}=\frac{a}{d_{2}},\; \mu_{2}=\frac{d_{2}^{2}}{\lambda^{2}}, \end{eqnarray*} where $a$ and $\lambda$ denote, respectively, a typical amplitude and wavelength of the interfacial deviation. In the present paper we focus on two of these regimes. As previously mentioned, the upper layer is assumed to be shallow ($\mu\ll 1$) while for the lower layer the deformations are assumed to be of small amplitude ($\epsilon_{2}\ll 1$). Under these conditions, two situations are considered: \begin{itemize} \item[(i)] The Intermediate Long Wave (ILW) regime: In this case, small amplitude deformations are additionally assumed for the upper layer; specifically, it is supposed that $$\mu\sim\epsilon^{2}\sim\epsilon_{2}\ll 1,\quad \mu_{2}\sim 1.$$ In 1D, the corresponding system in nondimensional, unscaled form is given by the equations \begin{equation}\label{ILW} \begin{array}{l} \left[1+\frac{\alpha}{\gamma}|D|{\rm coth}|D| \right]\zeta_t+\frac{1}{\gamma}\left((1- \zeta)u \right)_x+\frac{(\alpha-1)}{\gamma^2}(|D|{\rm coth}|D|)u_x=0\ ,\\ u_t+(1-\gamma)\zeta_x-\frac{1}{2\gamma}(u^2)_x=0\ , \end{array} \end{equation} where $\zeta=\zeta(x,t)$ denotes the interfacial deviation, $\gamma=\frac{\rho_{1}}{\rho_{2}}<1$, $\alpha\geq 1$ is a modelling parameter, and the nonlocal operator $|D|$ has the Fourier symbol $$\widehat{|D|f}(k)=|k|\widehat{f}(k),\; k\in\mathbb{R},$$ with $\widehat{f}(k)$ standing for the Fourier transform of $f$ at $k$. In (\ref{ILW}), $x$ and $t$ are proportional to distance along the fluid channel and to time, respectively, and $u=u(x,t)$ is a velocity variable.
\item[(ii)] The Benjamin-Ono (B-O) regime: This corresponds to the range of parameters $$\mu\sim\epsilon^{2}\sim\epsilon_{2}\ll 1,\quad \mu_{2}=\infty,$$ and the resulting 1D version of the corresponding system in nondimensional, unscaled form is \begin{equation}\label{BO} \begin{array}{l} \left[1+\frac{\alpha}{\gamma}|D| \right]\zeta_t+\frac{1}{\gamma}\left((1- \zeta)u \right)_x+\frac{(\alpha-1)}{\gamma^2}|D|u_x=0\ ,\\ u_t+(1-\gamma)\zeta_x-\frac{1}{\gamma}uu_{x}=0\ . \end{array} \end{equation} \end{itemize} The linear well-posedness and consistency of the Euler system with (\ref{ILW}) and (\ref{BO}) are considered in \cite{BLS2008}. The Cauchy problem for (\ref{ILW}) and (\ref{BO}) has been studied by Xu \cite{X}. In the case of (\ref{ILW}), Xu showed, among other results, local and long-time existence of solutions for $\alpha>1$. In addition, and for $\alpha>1$ as well, it is noted in \cite{X}, Remark 4.2, that these properties also hold for (\ref{BO}). Note that systems similar to (\ref{ILW}), (\ref{BO}) have been considered in \cite{CGK}. Existence of smooth solitary-wave solutions of (\ref{ILW}) and (\ref{BO}) was recently proved in \cite{AnguloS2019}, with arguments based on the implicit function theorem. Furthermore, the solitary waves of the ILW systems were proved to decay exponentially, while those of the B-O systems were shown to decay like $1/x^{2}$. The numerical generation of solitary waves of (\ref{ILW}) and (\ref{BO}) was studied in \cite{BonaDM2021}, where three iterative techniques, two of them based on the Petviashvili method, \cite{Petv1976,pelinovskys}, and the third given by the Conjugate-Gradient-Newton (CGN) method, \cite{Yang}, were introduced and their performance was compared. Approximations of some solitary-wave solutions for (\ref{ILW}) and (\ref{BO}), whose existence was not covered by the results in \cite{AnguloS2019}, were computed, and the resulting profiles were compared with solitary waves of the corresponding unidirectional ILW and BO equations. In this paper we discretize in space the periodic initial-value problem (ivp) for the systems (\ref{ILW}) and (\ref{BO}), using the spectral Fourier-Galerkin method, and prove, in Section \ref{sec2}, error estimates for the ensuing semidiscretizations. While there exist error analyses of spectral discretizations of the one-way ILW and B-O equations, cf. e.~g. \cite{PD}, we are not aware of any such analysis in the case of the systems. Section \ref{sec3} is devoted to the numerical generation of solitary waves of the systems. Here we modify the Petviashvili iteration, implemented in \cite{BonaDM2021}, by introducing a vector extrapolation method, \cite{sidi}, with the aim of accelerating the convergence. Numerical examples illustrate the performance of the resulting procedure. The following notation will be used. On the interval $(0,1)$, the inner product and norm on $L^{2}=L^{2}(0,1)$ are denoted by $(\cdot,\cdot)$ and $||\cdot ||$, respectively. For real $\mu\geq 0$, $H^{\mu}$ will stand for the $L^{2}$-based periodic Sobolev space on $[0,1]$ with the norm given by $$||g||_{\mu}=\left(\sum_{k\in\mathbb{Z}}(1+k^{2})^{\mu}|\widehat{g}(k)|^{2}\right)^{1/2},\; g\in H^{\mu}, $$ where $\widehat{g}(k)$ denotes the $k$th Fourier coefficient of $g$. For $1\leq p\leq\infty$, $W^{\mu,p}=W^{\mu,p}(0,1)$ stands for the Sobolev space of periodic functions on $(0,1)$ of order $\mu$, whose generalized derivatives are in $L^{p}=L^{p}(0,1)$.
The norm on $L^{\infty}$ will be denoted by $|\cdot |_{\infty}$, and that on $W^{\mu,\infty}$ by $||\cdot ||_{\mu,\infty}$. For an integer $N\geq 1$, $(\cdot,\cdot)_{N}$ will denote the Euclidean inner product in $\mathbb{C}^{2N}$, and the associated norm will be denoted by $||\cdot||_{N}$. \section{Error estimates of the spectral semidiscretizations} \label{sec2} In order to describe and analyze the spectral semidiscretizations of (\ref{ILW}) and (\ref{BO}), some preliminaries are needed. In the sequel we will assume that $\alpha>1$, so that the theory in \cite{X} is valid. We write the nonlocal operator in (\ref{ILW}) as \begin{eqnarray} g(D):=\frac{\alpha}{\gamma}|D|{\rm coth}|D|,\label{e21} \end{eqnarray} with symbol $g(k)=\frac{\alpha}{\gamma}|k|{\rm coth}|k|$. We observe that, for real $k$, $g(k)$ behaves like $\frac{\alpha}{\gamma}|k|$ for $|k|\gg 1$, and like $\frac{\alpha}{\gamma}\left(1+\frac{k^{2}}{3}+O(k^{4})\right)$ for small $|k|$. Consequently, the symbol of $(1+g(D))^{-1}$, i.~e. $\left(1+\frac{\alpha}{\gamma}|k|{\rm coth}|k|\right)^{-1}$, is bounded for all $k\in\mathbb{R}$, and is $O\left(\frac{1}{|k|}\right)$ as $|k|\rightarrow\infty$. On the other hand, in the B-O case the nonlocal term is \begin{eqnarray} g(D)=\frac{\alpha}{\gamma}|D|,\label{e21b} \end{eqnarray} whose symbol is simply $\frac{\alpha}{\gamma}|k|$, and we see again that the symbol of $(1+g(D))^{-1}$, i.~e. $\left(1+\frac{\alpha}{\gamma}|k|\right)^{-1}$, is bounded for all $k\in\mathbb{R}$ and is $O\left(\frac{1}{|k|}\right)$ as $|k|\rightarrow\infty$. We consider the periodic ivp for (\ref{ILW}) and (\ref{BO}) on the spatial interval $[0,1]$, written for $0\leq x\leq 1,\; 0\leq t\leq T$ in the form \begin{eqnarray} &&\zeta_{t}+\frac{1}{\gamma}(1+g(D))^{-1}\left(1+\frac{(\alpha-1)}{\alpha}g(D)\right)u_{x}=\frac{1}{\gamma}(1+g(D))^{-1}(\zeta u)_{x},\nonumber\\ &&u_{t}+(1-\gamma)\zeta_{x}=\frac{1}{2\gamma}\partial_{x}(u^{2}),\label{e22}\\ &&\zeta(x,0)=\zeta_{0}(x),\; u(x,0)=u_{0}(x),\nonumber \end{eqnarray} where $\zeta_{0}, u_{0}$ are given $1$-periodic smooth functions on $[0,1]$. We note again that for the ILW system (\ref{ILW}), $g(D)$ is given by (\ref{e21}), while for the B-O system it is given by (\ref{e21b}). We assume that the ivp (\ref{e22}) has a unique solution which is sufficiently smooth for the purposes of the error estimation. We introduce the nonlocal operators $$\mathcal{T}:=(1+g(D))^{-1},\quad \mathcal{J}:=(1+g(D))^{-1}\left(1+\frac{(\alpha-1)}{\alpha}g(D)\right).$$ We have previously examined the symbol of $\mathcal{T}$. The symbol of $\mathcal{J}$ is $\frac{1+\frac{(\alpha-1)}{\alpha}g(k)}{1+g(k)}$, which, in view of our assumptions on $\alpha$ and $\gamma$, is well defined and bounded for $k\in\mathbb{R}$. Let $N\geq 1$ be an integer, and consider the finite dimensional space \begin{eqnarray*} S_{N}={\rm span} \{e^{ikx},\; k\in\mathbb{Z}, -N\leq k\leq N\}. \end{eqnarray*} We recall several properties of the $L^{2}$-projection operator onto $S_{N}$, \begin{eqnarray*} P_{N}v=\sum_{|k|\leq N}\widehat{v}(k)e^{ikx}, \end{eqnarray*} where $\widehat{v}(k)$ is the $k$th Fourier coefficient of $v$. \begin{itemize} \item $P_{N}$ commutes with $\partial_{x}$. \item Given integers $0\leq j\leq \mu$, and for any $v\in H^{\mu}, \mu\geq 1$, \begin{eqnarray} ||v-P_{N}v||_{j}&\leq &CN^{j-\mu}||v||_{\mu},\label{epn1}\\ |v-P_{N}v|_{\infty}&\leq &CN^{1/2-\mu}||v||_{\mu},\label{epn2} \end{eqnarray} for some constant $C$ independent of $N$. \end{itemize}
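For readers who wish to implement the scheme, the following minimal sketch (Python with NumPy; it is not the authors' code, and the grid and wavenumber conventions are our own choices) shows how the ILW and B-O symbols $g(k)$, and the projection $P_{N}$, act on the discrete Fourier modes of a $1$-periodic grid function; this is the basic ingredient needed to evaluate the right-hand sides of the semidiscrete system introduced below:

\begin{verbatim}
import numpy as np

# Sketch: action of the nonlocal symbols and of P_N on a 1-periodic grid
# function, working with the integer wavenumbers of S_N.

M = 256                                    # grid points on [0, 1)
x = np.arange(M) / M
k = np.fft.fftfreq(M, d=1.0 / M)           # integer wavenumbers

def g_ilw(k, alpha, gamma):
    # symbol (alpha/gamma) |k| coth|k|, with its limiting value 1 at k = 0
    out = np.ones_like(k, dtype=float)
    nz = k != 0
    out[nz] = np.abs(k[nz]) / np.tanh(np.abs(k[nz]))
    return (alpha / gamma) * out

def g_bo(k, alpha, gamma):
    return (alpha / gamma) * np.abs(k)

def apply_symbol(sym, v):
    # multiply the Fourier coefficients of v by the symbol values
    return np.real(np.fft.ifft(sym * np.fft.fft(v)))

def project(v, N):
    # P_N: discard the Fourier modes with |k| > N
    vhat = np.fft.fft(v)
    vhat[np.abs(k) > N] = 0.0
    return np.real(np.fft.ifft(vhat))

# Example: apply T = (1 + g(D))^{-1} (ILW case) to a sample function.
u = np.cos(2 * np.pi * x)
Tu = apply_symbol(1.0 / (1.0 + g_ilw(k, alpha=1.5, gamma=0.9)), u)
print(project(Tu, N=32)[:3])
\end{verbatim}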
Given $0\leq j\leq \mu$, there exists a constant $C$ independent of $N$, such that for any $\psi\in S_{N}$ \begin{eqnarray} ||\psi||_{\mu}\leq CN^{\mu-j}||\psi||_{j},\; ||\psi||_{\mu,\infty}\leq CN^{1/2+\mu-j}||\psi||_{j}.\label{epn3} \end{eqnarray} In what follows, $C$ will denote a constant independent of $N$. Now we may define the semidiscretizations. The semidiscrete spectral Fourier-Galerkin approximation of (\ref{e22}) is defined by the real-valued functions $\zeta_{N},u_{N}:[0,T]\rightarrow S_{N}$ satisfying for $0\leq t\leq T$ \begin{eqnarray} &&\zeta_{N,t}+\frac{1}{\gamma}(1+g(D))^{-1}\left(1+\frac{(\alpha-1)}{\alpha}g(D)\right)u_{N,x}=\frac{1}{\gamma}(1+g(D))^{-1}\partial_{x}P_{N}(\zeta_{N} u_{N}),\nonumber\\ &&u_{N,t}+(1-\gamma)\zeta_{N,x}=\frac{1}{2\gamma}\partial_{x}P_{N}(u_{N}^{2}), \label{e23}\\ &&\zeta_{N}\big|_{t=0}=P_{N}\zeta_{0},\; u_{N}\big|_{t=0}=P_{N}u_{0}.\nonumber \end{eqnarray} The ode ivp (\ref{e23}) is implemented in its Fourier component form \begin{eqnarray*} &&\widehat{\zeta}_{N,t}+\frac{1}{\gamma}(1+g(k))^{-1}\left(1+\frac{(\alpha-1)}{\alpha}g(k)\right)(ik)\widehat{u}_{N}=\frac{1}{\gamma}(1+g(k))^{-1}(ik)\widehat{\zeta_{N} u_{N}},\\ &&\widehat{u}_{N,t}+(1-\gamma)(ik)\widehat{\zeta}_{N}=\frac{1}{2\gamma}(ik)\widehat{u_{N}^{2}},\\ &&\widehat{\zeta}_{N}(k,0)=\widehat{\zeta}_{0}(k),\; \widehat{u}_{N}(k,0)=\widehat{u}_{0}(k), \end{eqnarray*} where $\widehat{\zeta}_{N}=\widehat{\zeta}_{N}(k,t), \widehat{u}_{N}=\widehat{u}_{N}(k,t), -N\leq k\leq N, t\geq 0$ denote the Fourier coefficients of $\zeta_{N}$ and $u_{N}$, respectively. The ode ivp (\ref{e23}) clearly has a local-in-time solution; part of the proof of the following proposition is showing that this solution can be extended up to $t=T$. \begin{proposition} \label{pro21} (ILW systems.) Assume that the solution $\zeta, u$ of (\ref{e22}) for $\alpha>1$, and $g$ given by (\ref{e21}), is such that $\zeta, u\in H^{\mu}, \mu>3/2$ for $0\leq t\leq T$. Then, for $N$ sufficiently large, \begin{eqnarray} \max_{0\leq t\leq T}\left(||\zeta_{N}-\zeta||+||u_{N}-u||\right)\leq CN^{1-\mu}.\label{e24} \end{eqnarray} \end{proposition} \begin{proof} The general plan of the proof resembles that of \cite{DDS1}, where another class of asymptotic models for internal waves was analyzed. We let $\theta=\zeta_{N}-P_{N}\zeta,\rho=P_{N}\zeta-\zeta$, so that $\zeta_{N}-\zeta=\theta+\rho$, and $\xi=u_{N}-P_{N}u, \sigma=P_{N}u-u$, so that $u_{N}-u=\xi+\sigma$. Applying $P_{N}$ on both sides of the pde's in (\ref{e22}) and subtracting from the respective semidiscrete equations in (\ref{e23}) we obtain \begin{eqnarray} &&\theta_{t}+\frac{1}{\gamma}\mathcal{J}\xi_{x}=\frac{1}{\gamma}\mathcal{T}\partial_{x}P_{N}A,\nonumber\\ &&\xi_{t}+(1-\gamma)\theta_{x}=\frac{1}{2\gamma}\partial_{x}P_{N}B,\qquad t\geq 0\label{e25}\\ &&\theta\big|_{t=0}=0,\; \xi\big|_{t=0}=0,\nonumber \end{eqnarray} where it is straightforward to see that \begin{eqnarray*} A&=&u\rho+\zeta \sigma+u\theta+\zeta\xi+\sigma\theta+\rho\xi+\rho\sigma+\theta\xi,\\ B&=&u\sigma+u\xi+\sigma\xi+\frac{1}{2}\sigma^{2}+\frac{1}{2}\xi^{2}. \end{eqnarray*} Using a standard trick for rational functions, cf. e.~g. \cite{X}, we may write $$\mathcal{J}=(1+g(D))^{-1}\left(1+\frac{(\alpha-1)}{\alpha}g(D)\right)=\frac{\alpha-1}{\alpha}+\frac{1}{\alpha}(1+g(D))^{-1}=\frac{\alpha-1}{\alpha}+\frac{1}{\alpha}\mathcal{T},$$ and, therefore, may simplify the equations somewhat, having just one nonlocal operator in the problem.
We rewrite accordingly (\ref{e25}) as \begin{eqnarray} &&\theta_{t}+\frac{1}{\gamma}\frac{\alpha-1}{\alpha}\xi_{x}+\frac{1}{\alpha\gamma}\mathcal{T}\xi_{x}=\frac{1}{\gamma}\mathcal{T}\partial_{x}P_{N}A,\nonumber\\ &&\xi_{t}+(1-\gamma)\theta_{x}=\frac{1}{2\gamma}\partial_{x}P_{N}B,\qquad t\geq 0,\label{e26}\\ &&\theta\big|_{t=0}=0,\; \xi\big|_{t=0}=0.\nonumber \end{eqnarray} We will use the standard energy method to estimate $\theta$ and $\xi$. Taking the $L^{2}$ inner products of the semidiscrete equations in (\ref{e26}) with $\theta$ and $\xi$, respectively, we have \begin{eqnarray} &&(\theta_{t},\theta)+\frac{1}{\gamma}\frac{\alpha-1}{\alpha}(\xi_{x},\theta)+\frac{1}{\alpha\gamma}(\mathcal{T}\xi_{x},\theta)=\frac{1}{\gamma}(\mathcal{T}A_{x},\theta),\label{e27}\\ &&(\xi_{t},\xi)-(1-\gamma)(\theta,\xi_{x})=\frac{1}{2\gamma}(B_{x},\xi).\label{e28} \end{eqnarray} Hence, multiplying (\ref{e28}) by $\frac{\alpha-1}{\alpha\gamma (1-\gamma)}$ and adding to (\ref{e27}) we obtain \begin{eqnarray} \frac{1}{2}\frac{d}{dt}\left(||\theta||^{2}+\frac{\alpha-1}{\alpha\gamma (1-\gamma)}||\xi||^{2}\right)&=&-\frac{1}{\alpha\gamma}(\mathcal{T}\xi_{x},\theta)+\frac{1}{\gamma}(\mathcal{T}A_{x},\theta)\nonumber\\ &&+\frac{\alpha-1}{2\alpha\gamma^{2}(1-\gamma)}(B_{x},\xi).\label{e29} \end{eqnarray} Note that $\frac{\alpha-1}{2\alpha\gamma^{2}(1-\gamma)}>0$. Also note that the operator $\mathcal{T}\partial_{x}$, with symbol $\frac{ik}{1+g(k)}$, is bounded in $L^{2}$. Therefore we have by (\ref{e29}), as long as the solution of (\ref{e26}) (or (\ref{e23})) exists, that \begin{eqnarray} \frac{d}{dt}\left(||\theta||^{2}+||\xi||^{2}\right)\leq C\left(||\xi|| ||\theta||+||A|| ||\theta|| +|(B_{x},\xi)|\right).\label{e210} \end{eqnarray} We bound now the two last terms in the right-hand side of (\ref{e210}). By the definition of $A$ we have \begin{eqnarray*} ||A||&\leq & |u|_{\infty}||\rho||+|\zeta|_{\infty}||\sigma||+|u|_{\infty}||\theta||+|\zeta|_{\infty}||\xi||+|\sigma|_{\infty}||\theta||\\ &&+|\rho|_{\infty}||\xi||+|\rho|_{\infty}||\sigma||+|\theta|_{\infty}||\xi||. \end{eqnarray*} Let $t_{N}\in (0,T]$ be the maximal temporal instance such that \begin{eqnarray} |\theta|_{\infty}\leq 1,\; 0\leq t\leq t_{N}.\label{e211} \end{eqnarray} Then, by the approximation properties of $S_{N}$ (\ref{epn1}), (\ref{epn2}), the inverse inequalities (\ref{epn3}), and the fact that $\mu\geq 1$, we get by the above estimate of $||A||$ that for $0\leq t\leq t_{N}$ \begin{eqnarray} ||A||\leq C\left(N^{-\mu}+||\theta||+||\xi||\right),\label{e212} \end{eqnarray} where $C$ is independent of $N$. To bound the term $(B_{x},\xi)$ note that by periodicity \begin{eqnarray} (B_{x},\xi)=((u\sigma)_{x},\xi)+((u\xi)_{x},\xi)+((\sigma\xi)_{x},\xi)+(\sigma\sigma_{x},\xi).\label{e213} \end{eqnarray} Since $\mu\geq 3/2$ \begin{eqnarray*} |((u\sigma)_{x},\xi)|\leq |u_{x}|_{\infty}||\sigma|| ||\xi||+|u|_{\infty}||\sigma_{x}|| ||\xi||\leq CN^{1-\mu}||\xi||. \end{eqnarray*} Also, for the same reason \begin{eqnarray*} |((u\xi)_{x},\xi)|&= &\frac{1}{2}|(u_{x}\xi,\xi)|\leq \frac{1}{2}|u_{x}|_{\infty}||\xi||^{2}\leq C||\xi||^{2},\\ |((\sigma\xi)_{x},\xi)|&= &\frac{1}{2}|(\sigma_{x}\xi,\xi)|\leq \frac{1}{2}|\sigma_{x}|_{\infty}||\xi||^{2}\leq C||\xi||^{2}, \end{eqnarray*} and \begin{eqnarray*} |(\sigma\sigma_{x},\xi)|\leq |\sigma_{x}|_{\infty}||\sigma|| ||\xi||\leq CN^{\frac{3}{2}-2\mu}||\xi||\leq CN^{-\mu}||\xi||. 
\end{eqnarray*} From these estimates and (\ref{e213}) we conclude that \begin{eqnarray} |(B_{x},\xi)|\leq C\left(N^{2(1-\mu)}+||\xi||^{2}\right),\label{e214} \end{eqnarray} as long as the solution of (\ref{e26}) exists. Therefore, (\ref{e210}), (\ref{e212}) and (\ref{e214}) give for $0\leq t\leq t_{N}$ \begin{eqnarray*} \frac{d}{dt}(||\theta||^{2}+||\xi||^{2})\leq C\left(N^{2(1-\mu)}+||\theta||^{2}+||\xi||^{2}\right), \end{eqnarray*} from which, by Gronwall's inequality, we get \begin{eqnarray} ||\theta||+||\xi||\leq CN^{1-\mu},\label{e215} \end{eqnarray} for $0\leq t\leq t_{N}$, where $C$ is independent of $N$ and $t_{N}$. Since by (\ref{e215}) $|\theta|_{\infty}\leq CN^{3/2-\mu}$ and $\mu>3/2$, we infer that $t_{N}$ was not maximal in (\ref{e211}) for $N$ sufficiently large, and in the customary way the existence of solutions of (\ref{e25}) and the validity of (\ref{e215}) may be extended to $t=T$. The estimate (\ref{e24}) follows. \end{proof} As far as the B-O case is concerned, recall that the symbol of $(1+g(D))^{-1}$ is also bounded for all $k\in\mathbb{R}$ and is of $O\left(\frac{1}{|k|}\right)$ as $|k|\rightarrow\infty$. This was the basic property that we used in the error analysis of the spectral semidiscretization of (\ref{ILW}). Hence the proof of Proposition \ref{pro21} can be easily adapted to the B-O case for $\alpha>1$. Without proof we state: \begin{proposition} \label{pro31} (B-O systems.) Assume that the solution $\zeta, u$ of the periodic ivp for (\ref{BO}) for $\alpha>1$ is such that $\zeta, u\in H^{\mu}, \mu>3/2$ for $0\leq t\leq T$. Let $(\zeta_{N},u_{N})$ be the solution of the Fourier-Galerkin semidiscretization of the periodic ivp, defined by (\ref{e23}), where now $g$ is given by (\ref{e21b}). Then $(\zeta_{N},u_{N})$ exists uniquely up to $t=T$ and satisfies, for $N$ sufficiently large, \begin{eqnarray*} \max_{0\leq t\leq T}\left(||\zeta_{N}-\zeta||+||u_{N}-u||\right)\leq CN^{1-\mu}, \end{eqnarray*} for some constant $C$ independent of $N$. \end{proposition} \section{Solitary wave solutions} \label{sec3} The ILW and B-O systems (\ref{ILW}) and (\ref{BO}) have been shown to possess solitary-wave solutions. These are solutions $\zeta=\zeta(x-ct), u=u(x-ct), c\neq 0$, where $\zeta(X), u(X)\rightarrow 0$ as $|X|\rightarrow\infty$, and which satisfy the system \begin{eqnarray} &&-c(1+g(D))\zeta+\frac{1}{\gamma}\left(1+\frac{(\alpha-1)}{\alpha}g(D)\right)u=\frac{1}{\gamma}\zeta u,\nonumber\\ &&-cu+(1-\gamma)\zeta=\frac{1}{2\gamma}u^{2},\label{e31} \end{eqnarray} where $g(D)$ is given by (\ref{e21}) or (\ref{e21b}). The existence of smooth solutions of (\ref{e31}) was proved by Angulo-Pava and Saut, \cite{AnguloS2019}, for some range of speeds $c$, using the implicit function theorem. Properties of their asymptotic decay were also proved in the same paper, ensuring that in the ILW case the solitary waves decay exponentially, while in the B-O case, the decay is algebraic, like $1/|X|^{2}$. (Note that, in both cases, the asymptotic behaviour is the same as that of the solitary wave solutions of the corresponding unidirectional models.) The numerical generation of approximate solutions of (\ref{e31}) was studied in \cite{BonaDM2021}. We summarize the numerical technique used. Let $l>0$ be large enough, $N\geq 1$ be an even integer, and discretize the periodic problem for (\ref{e31}) on $[-l,l]$ with a Fourier collocation method based on the $N$ collocation points $x_{j}=-l+jh, j=0,\ldots,N-1, h=2l/N$.
The approximation to the solitary wave $(\zeta,u)$ is then represented by the nodal values $\zeta_{h}=(\zeta_{h,0},\ldots,\zeta_{h,N-1})^{T}$ and $u_{h}=(u_{h,0},\ldots,u_{h,N-1})^{T}$, where $\zeta_{h,j}\approx \zeta(x_{j}), u_{h,j}\approx u(x_{j}), j=0,\ldots, N-1$, and $\zeta_{h}, u_{h}$ satisfy the system \begin{eqnarray} S\begin{pmatrix} \zeta_{h}\\u_{h}\end{pmatrix}=F(\zeta_{h},u_{h}):=\frac{1}{\gamma}\begin{pmatrix} \zeta_{h}.u_{h}\\(u_{h}.^{2})/2\end{pmatrix},\label{e32} \end{eqnarray} where $S$ is the $2N$-by-$2N$ matrix \begin{eqnarray} S:=\begin{pmatrix} -c(I_{N}+g(D_{N}))&\frac{1}{\gamma}(I_{N}+\frac{\alpha-1}{\alpha}g(D_{N}))\\(1-\gamma)I_{N}& -cI_{N}\end{pmatrix},\label{e33} \end{eqnarray} Here $I_{N}$ is the $N$-by-$N$ identity matrix and $D_{N}$ is the $N$-by-$N$ Fourier pseudospectral differentiation matrix. The dots on the right-hand side of (\ref{e32}) signify Hadamard products. The system (\ref{e32}), (\ref{e33}) is implemented in its Fourier component form. Thus for $-N/2\leq k\leq N/2-1$, the $k$th discrete Fourier components of $\zeta_{h}$ and $u_{h}$, denoted by $\widehat{\zeta_{h}}(k), \widehat{u_{h}}(k)$, resp., satisfy the fixed point system \begin{eqnarray} \underbrace{\begin{pmatrix}-c(1+g(\widetilde{k}))&\frac{1}{\gamma}(1+\frac{\alpha-1}{\alpha}g(\widetilde{k}))\\(1-\gamma)& -c\end{pmatrix}}_{S(\widetilde{k})} \begin{pmatrix} \widehat{\zeta_{h}}(\widetilde{k})\\\widehat{u_{h}}(\widetilde{k})\end{pmatrix}=\underbrace{\frac{1}{\gamma}\begin{pmatrix} \widehat{\zeta_{h}.u_{h}}(\widetilde{k})\\\widehat{(u_{h}.^{2})/2}(\widetilde{k})\end{pmatrix}}_{\widehat{F(\zeta_{h},u_{h})}_{\widetilde{k}}},\label{e34} \end{eqnarray} where $\widetilde{k}=\pi k/l, -N/2\leq k\leq N/2-1$. In \cite{BonaDM2021} three methods for the iterative resolution of (\ref{e34}) were proposed: the Petviashvili iteration, the CGN method, and a variant of Petviashvili's method (called e-Petviashvili's method) obtained by solving $\zeta$ in terms of $u$ in the second equation of (\ref{e31}) and substituting into the first one. The resulting equation for $u$ (with quadratic and cubic terms) is iteratively solved with the method proposed in \cite{AlvarezD2014}, an extension of the Petviashvili scheme for nonlinearities which are superpositions of homogeneous functions with different degrees of homogeneity, cf. \cite{BonaDM2021} for details. \subsection{Numerical generation of solitary waves with acceleration methods} In the present paper we propose an alternative technique based on implementing the Petviashvili iteration combined with a vector extrapolation method, \cite{sidi}. The inclusion of the extrapolation has the general benefit of accelerating the convergence of the basic method used for the iteration. (In some cases, the process changes from divergent to convergent.) In the present case, the Petviashvili iteration, formulated as, cf. \cite{BonaDM2021}, \begin{eqnarray} m_{\nu}&=&\frac{\langle SZ^{[\nu]},Z^{[\nu]}\rangle_{N}}{\langle F(Z^{[\nu]}),Z^{[\nu]} \rangle_{N}},\nonumber\\ SZ^{[\nu+1]}&=&m_{\nu}^{2}F(Z^{[\nu]}),\; \nu=0,1,\ldots,\label{e35} \end{eqnarray} where $Z^{[\nu]}:=(\zeta_{h}^{[\nu]},u_{h}^{[\nu]})$, is combined with the so-called minimal polynomial extrapolation method (MPE), which may be described as follows (cf. \cite{smithfs,AlvarezD2015} and references therein for details).
From the iterates $Z^{[\nu]},\ldots,Z^{[\nu+l]}$ generated with the Petviashvili method (\ref{e35}), the extrapolation steps \begin{eqnarray} X_{\nu,l}=\sum_{j=0}^{l}\gamma_{j}Z^{[\nu+j]},\label{e34b} \end{eqnarray} are computed, where the coefficients $\gamma_{j}$ are of the form \begin{eqnarray} \gamma_{j}=\frac{c_{j}}{\displaystyle\sum_{i=0}^{l}c_{i}},\quad 0\leq j\leq l,\label{e34c} \end{eqnarray} with $c_{l}=1$, while the $c_{j}, j=0,\ldots,l-1$, are the solution, in the least-squares sense, of the system \begin{eqnarray*} \sum_{i=0}^{l-1}c_{i}W_{\nu+i}=\widetilde{W}_{\nu}, \end{eqnarray*} where $W_{j}=\Delta Z^{[j]}:=Z^{[j+1]}-Z^{[j]}, \widetilde{W}_{j}=-\Delta Z^{[j+l]}$. The method (\ref{e34b}), (\ref{e34c}) was originally formulated and analyzed for linear vector sequences in \cite{CabayJ1976}. The MPE method is typically implemented in cycling mode, \cite{smithfs}. For a fixed width of extrapolation $mw\geq 1$, the advance from the $\nu$th to the $(\nu+1)$th iterate is performed according to the following steps: \begin{itemize} \item Step 1: Compute $mw$ steps of (\ref{e35}) from $X^{[0]}=Z^{[\nu]}$: $Z^{[1]},\ldots,Z^{[mw]}$. \item Step 2: Compute the corresponding extrapolation steps (\ref{e34b}) from the iterations of Step 1. \item Step 3: Set $Z^{[\nu+1]}=X_{0,mw}$, $X^{[0]}=Z^{[\nu+1]}$, and go to Step 1. \end{itemize} The resulting procedure is controlled by iterating while the residual error \begin{eqnarray} RES(\nu)=||SZ^{[\nu]}-F(Z^{[\nu]})||_{N},\label{e36} \end{eqnarray} is above some tolerance. \subsection{Numerical experiments} We illustrate the iterative method described in the previous section with some numerical experiments. \begin{figure}[htbp] \centering \subfigure[] {\includegraphics[width=10cm,height=5.2cm]{ILW2.eps}} \subfigure[] {\includegraphics[width=10cm,height=5.2cm]{ILW3.eps}} \subfigure[] {\includegraphics[width=10cm,height=5.2cm]{ILW1.eps}} \caption{Numerical generation of solitary waves. ILW case with $\gamma=0.8, \alpha=1.2, c_{s}=0.52$. (a) $\zeta$ and $u$ numerical profiles; (b) Phase plot; (c) Residual error (\ref{e36}) vs. number of iterations for several values of the width of extrapolation $mw$.} \label{ILW1} \end{figure} In the case of the ILW system (\ref{ILW}), Figure \ref{ILW1}(a) shows the numerical solitary wave profile obtained with the Petviashvili method (without extrapolation) for $\gamma=0.8, \alpha=1.2$, $c=0.52$, while the corresponding phase plots are shown in Figure \ref{ILW1}(b). The effect of the extrapolation technique is observed in Figure \ref{ILW1}(c), which shows the behaviour of the residual error (\ref{e36}) as a function of the number of iterations, for several values of the width $mw$ ($mw=1$ would correspond to the iteration without extrapolation). Note that the inclusion of the vector extrapolation accelerates the convergence of the iteration by diminishing the number of iterations required for the residual to become smaller than a fixed error. \begin{figure}[htbp] \centering \subfigure[] {\includegraphics[width=10cm,height=5.2cm]{BO2.eps}} \subfigure[] {\includegraphics[width=10cm,height=5.2cm]{BO3.eps}} \subfigure[] {\includegraphics[width=10cm,height=5.2cm]{BO1.eps}} \caption{Numerical generation of solitary waves. B-O case with $\gamma=0.8, \alpha=1.2, c_{s}=0.57$. (a) $\zeta$ and $u$ numerical profiles; (b) Phase plot; (c) Residual error (\ref{e36}) vs.
number of iterations for several values of the width of extrapolation $mw$.} \label{BO1} \end{figure} The B-O case is illustrated in Figure \ref{BO1}: an approximate solitary-wave solution of (\ref{e31}) for $\gamma=0.8, \alpha=1.2$, and $c=0.57$ is shown in Figure \ref{BO1}(a), with the corresponding phase plot in Figure \ref{BO1}(b). By comparison with Figure \ref{ILW1}(b), we can observe the different type of decay to zero at infinity (algebraic versus exponential, cf. \cite{AnguloS2019}) of the solitary waves as trajectories homoclinic to the origin. The acceleration of the convergence for the B-O case is shown in Figure \ref{BO1}(c). The most remarkable reduction in the number of iterations for a given residual error is observed when passing from $mw=1$ (no acceleration) to $mw=2$ (cycling-mode extrapolation with two steps of the Petviashvili method). After that, for larger values of $mw$, the method continues accelerating the convergence, with a milder reduction in the number of iterations (see e.~g. \cite{smithfs,AlvarezD2015} for discussions about an optimal choice for $mw$). \section*{Acknowledgements} Vassilios Dougalis and Angel Dur\'an would like to acknowledge travel support, which made this collaboration possible, from the Institute of Mathematics (IMUVA) of the University of Valladolid, and the Institute of Applied and Computational Mathematics of FORTH. Angel Dur\'an was supported by Junta de Castilla y Le\'on and FEDER funds (EU) under Research Grant VA193P20. Leetha Saridaki was supported by the grant \lq\lq Innovative Actions in Environmental Research and Development (PErAn)\rq\rq (MIS5002358), implemented under the \lq\lq Action for the strategic development of the Research and Technological sector'' funded by the Operational Program \lq\lq Competitiveness, and Innovation'' (NSRF 2014-2020) and co-financed by Greece and the EU (European Regional Development Fund). The grant was issued to the Institute of Applied and Computational Mathematics of FORTH.
\section{Introduction} The ability to propagate a domain wall (DW) through a submicron magnetic wire using a magnetic field \cite{Ono1999,Allwood2002,Nakatani2003,Atkinson2003,Beach2005} or electric current \cite{Vernier2004,Yamaguchi2004,KLAUI2005,Meier2007,BEACH08} is the basis of several new spintronics devices \cite{Allwood2005,Parkin2008,HAYASHI08,Zutic2004,Xu2008}. Regarding the topic of current-induced DW dynamics, most is known about DWs in in-plane magnetized permalloy strips \cite{KLAUI08}. Recently, the focus has been shifting toward materials with high perpendicular magnetic anisotropy (PMA) \cite{RAVELOSONA2005,Boulle2008a,Moore2008,SanEmeterioAlvarez2010,Burrowes2009,Miron2011,KIM10,Koyama2011,Heinen2010}. Although field-driven DW motion is typically slow due to DW creep \cite{LEMERLE1998,METAXAS2007,KIM2010}, these materials might show faster current-induced DW motion, because they exhibit simple and narrow DWs potentially leading to large non-adiabatic spin torque contributions \cite{Thiaville2005,Zhang2004,Tatara2004}, or by the presence of Rashba fields stabilizing the DW structure during propagation \cite{Miron2011}. Furthermore, recent results indicate that the non-adiabaticity is strongly dependent on details of the perpendicular material, ranging from a negligible effect in Co/Ni \cite{Koyama2011} to a large contribution in Pt/Co multilayers \cite{Boulle2008a,Heinen2010}. These interesting observations call for more experiments on various material systems. Being able to control the position of DWs at will is essential for successful DW experiments or devices. One issue is the initial creation of a magnetic domain and its domain walls. A second issue is to control the exact pinning positions where a domain wall stops after propagation, which is needed in several memory and logic devices making use of spintronics \cite{Parkin2008,Zutic2004,Xu2008}. For the first issue of writing a domain at a controlled position, there are generally two possibilities: one should either apply a highly localized magnetic field, or locally modify the switching properties of the magnetic nanostrip to be able to write with a global field. A highly localized magnetic field poses restrictions to the experimental environment and therefore writing with a global field is often the desired option. For in-plane magnetized DW devices made of permalloy, one often designs a variation in shape, such as a bend in the wire \cite{Allwood2002,KLAUI2005} or a large pad at the end of the wire \cite{Atkinson2003,Shigeto1999,Thomas2005}. Due to shape anisotropy, these lead to preferential nucleation points when an external field is applied. For PMA materials however, there is a very strong perpendicular easy axis that dominates over shape-induced effects, by which nucleation preferably occurs at randomly distributed defects. For the second issue of controlled DW pinning, similar considerations apply in PMA materials: geometric variations can be used for DW pinning \cite{RAVELOSONA2005,Burrowes2009} but these shape-induced effects are rather weak and typically lead to deformations of the domain wall \cite{RAVELOSONA2005}, causing the DW to lose its one-dimensional (1D) character. In a recent study \cite{JeroenFIB} it was shown that both issues can be tackled at the same time by taking control over the parameter that governs the switching behavior: the PMA. The PMA is known to be reduced by irradiation with highly energetic ions \cite{Fassbender2008,Hyndman1,Chappert1998,Devolder2000,Devolder2001,Vieu2002}. 
Using a focused ion beam (FIB) of, for example, Ga \cite{JeroenFIB,Hyndman2,Aziz2005,Aziz2006} or He \cite{Markie} ions, the anisotropy can be controlled very locally (at a scale of a few nanometer). By locally reducing the anisotropy, the coercivity is also reduced and a DW nucleation area is made. Furthermore, it was shown that DWs tend to pin at a discontinuity in the anisotropy, i.e. the boundary of a Ga-irradiated area, solving the second issue. In the current paper, we provide further insights into this pinning of DWs at engineered anisotropy variations. First, we describe in detail the mechanism responsible for DW pinning at anisotropy variations, through the development of a 1D model in section \ref{sec:model}. Furthermore, the magnetic anisotropy of Pt/Co/Pt strips is experimentally determined as a function of Ga irradiation dose and Co layer thickness in section \ref{sec:Anisotropy}. Finally, in section \ref{sec:DWpinning}, we report a detailed experimental study on DW pinning at an anisotropy boundary, showing that the DW energy landscape in a nanostrip can basically be engineered at will on a nanometer scale. \section{Model of DW pinning}\label{sec:model} In this section, we investigate how DWs are pinned at anisotropy modulations by assuming a simple model system (figure \ref{Figure1}(a)). The system consists of a PMA strip of length $L$, width $w$, and thickness $t$. We assume that a single 1D Bloch DW is present in the strip, at a certain position $q$ along the $x$-axis. The strip has perpendicular magnetic anisotropy, but the anisotropy changes at $x=0$. We assume a linear transition between two values over a gradient length $\delta$ centered at $x=0$. The part $x<- \delta/2$ has an effective perpendicular anisotropy constant $K_{\rm{eff}}$ and the part $x>\delta/2$ has $K_{\rm{eff,0}} >K_{\rm{eff}}$ (figure 1(b)). The other relevant parameters $M_{\rm{s}}$ (saturation magnetization) and $A$ (exchange constant) are kept constant. Since the energy of a DW scales with the square root of the anisotropy, the anisotropy change at $x=0$ causes an energy barrier as sketched in figure 1(c). The larger the anisotropy difference, the larger this barrier. By applying an external field $H$, the potential landscape is tilted making it possible for the DW to escape as soon as the tilt slope cancels the maximum slope of the DW energy landscape. In the following, we derive expressions for the pinning field $H_{\rm{pin}}$ as a function of the anisotropy of the left part of the strip, $K_{\rm{eff}}$. We will discuss the two cases shown in figure \ref{Figure1}(b) and \ref{Figure1}(d). The situation of figure \ref{Figure1}(b), in the limit that the anisotropy step is small, is discussed in section \ref{sec:modelSmallStep}. In section \ref{sec:ModelIP}, we discuss the situation where the part $x<0$ has strong in-plane (shape) anisotropy, $K_{\rm{eff}} \ll 0$, as sketched in figure \ref{Figure1}(d). We compare the analytical model with full micromagnetic simulations and find exact agreement. \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\linewidth]{Figure1} \caption{(a) Sketch of a Bloch DW in a nanostrip and definition of the coordinate system. (b) Sketch of a step in the anisotropy along the strip direction $x$. Such a step leads to an energy barrier for a DW sitting to the left of the step, as sketched in (c). The barrier can be overcome by applying an external magnetic field that tilts the energy landscape. 
(d) Sketch of the anisotropy landscape in case the part $x\ll0$ has in-plane magnetic anisotropy ($K_{\rm{eff}}<0$). } \label{Figure1} \end{center} \end{figure} \subsection{Limit of small $K$ step} \label{sec:modelSmallStep} A DW centered at a position $q$ in a perpendicularly magnetized nanostrip has a standard Bloch profile, with the out-of-plane angle $\theta$ given by \cite{SLONCZEWSKI79} \begin{equation} \label{e:gallium:bloch} \theta(x) = \pm 2 \arctan \left[\exp\left(\frac{x-q}{\Delta}\right)\right], \end{equation} with $\Delta$ the DW width. This profile is not exactly valid in the vicinity of the anisotropy interface since the part of the DW residing in the low-$K$ region tends to widen, but this effect is negligible in the limit studied. Considering effective anisotropy and exchange contributions, the magnetic energy density is \cite{SLONCZEWSKI79} \begin{eqnarray}\label{energydens} w(x)&=A\left[\left(\frac{\partial \theta}{\partial x}\right)^2 + \left(\sin \theta \frac{\partial \phi}{\partial x}\right)^2 \right] + K(x) \sin^2 \theta \nonumber \\ &= \left(\frac{A}{\Delta^2} + K(x) \right) \mbox{sech}^2 \left(\frac{x-q}{\Delta}\right), \end{eqnarray} where $\phi$ is the in-plane angle of magnetization ($\phi=0$ for a Bloch DW), and $K(x)$ has the profile sketched in figure \ref{Figure1}(b), \begin{eqnarray} K (x) =& K_{\rm{eff}}& (x<-\delta/2), \nonumber\\ K (x) =& \frac{K_{\rm{eff,0}} + K_{\rm{eff}}}{2} + (K_{\rm{eff,0}} - K_{\rm{eff}})\frac{x}{\delta} \qquad & (-\delta/2\leq x \leq \delta/2), \\ K (x) =& K_{\rm{eff,0}}& (x>\delta/2). \nonumber \end{eqnarray} Because the DW width can be considered constant in the limit studied, the term $A/\Delta^2$ in (\ref{energydens}) can be omitted for simplicity. The total DW energy per unit cross-sectional area $\sigma_{\rm{DW}}$ of a DW centered at $q$ is then (up to a constant) given by \begin{eqnarray} \label{e:gallium:integral} \sigma_{\rm{DW}} (q) = \int_{-\infty}^{\infty} w(x) \mathrm{d}x = &\frac{\Delta}{\delta } \left(2 K_{\rm{eff,0}} \delta +(K_{\rm{eff,0}}-K_{\rm{eff}}) \,\Delta \right. \nonumber\\ & \left. \times \left(\ln\left[1+e^{-\frac{2 q+\delta }{\Delta }}\right] -\ln\left[1+e^{\frac{-2 q+\delta }{\Delta }}\right]\right)\right). \end{eqnarray} By applying an external magnetic field $H$ in the $z$-direction, the energy landscape of the domain wall is tilted due to the Zeeman energy, giving a total energy $\varepsilon(q)$ \begin{equation} \varepsilon(q) = \sigma_{\rm{DW}}(q) -2\mu_0 M_{\rm{s}} H q . \end{equation} For estimating the depinning field, we are interested in the derivative of the DW energy with respect to $q$, which should be negative at any position in order for the DW to depin, \begin{equation}\label{e:gallium:derivative} \frac{\mathrm{d}\varepsilon}{\mathrm{d} q} = \frac{2 (K_{\rm{eff,0}}-K_{\rm{eff}}) \Delta \sinh\left[\frac{\delta }{\Delta }\right]}{\delta \left(\cosh\left[\frac{2 q}{\Delta }\right]+\cosh\left[\frac{\delta }{\Delta }\right]\right)} - 2 \mu_0 M_{\rm{s}} H < 0\,. \end{equation} Hence, the maximum of $\frac{\mathrm{d}\varepsilon}{\mathrm{d} q}$ should be negative, \begin{equation} \max_{-\infty<q<\infty} \frac{\mathrm{d}\varepsilon}{\mathrm{d} q} = \left.\frac{\mathrm{d}\varepsilon}{\mathrm{d} q}\right|_{q=0} = (K_{\rm{eff,0}} - K_{\rm{eff}} )\frac{2\Delta}{\delta}\tanh\frac{\delta}{2\Delta}- 2 \mu_0 M_{\rm{s}} H < 0.
\end{equation} The DW thus depins for $H > H_{\rm{pin}}$, with \begin{equation} \label{e:gallium:pinningfield} H_{\rm{pin}} = \frac{K_{\rm{eff,0}} - K_{\rm{eff}} }{{2{\mu _0}{M_{\rm{s}}}}} \times \frac{{2\Delta }}{\delta }\tanh \frac{\delta }{2\Delta }. \end{equation} If the length scale of the anisotropy gradient $\delta$ is much smaller than the DW width $\Delta$, the pinning field is simply given by the difference of the anisotropy values, \begin{equation} \label{e:gallium:pinningfieldSharp} \lim_{\delta \rightarrow 0} H_{\rm{pin}} = \frac{K_{\rm{eff,0}} - K_{\rm{eff}} }{{2{\mu _0}{M_{\rm{s}}}}}. \end{equation} The opposite limit is also interesting; it turns out that the pinning field becomes zero if $\delta \gg \Delta$, \begin{equation} \label{e:gallium:pinningfieldWide} \lim_{\delta \rightarrow \infty } H_{\rm{pin}} = 0, \end{equation} which means that a DW will only pin if $\delta$ is at a length scale comparable to the DW width, typically in the range of $10$\,nm. \subsection{Limit of in-plane $K$} \label{sec:ModelIP} If the perpendicular uniaxial anisotropy is quenched completely, this results in an effective in-plane anisotropy. Therefore, the DW at the moment of depinning is not necessarily an `up' to `down' transition. If the effective in-plane anisotropy is small, the out-of-plane field that is applied to achieve DW injection is already enough to pull the magnetization fully out-of-plane and the origin of the DW pinning field is not physically different from the case studied in the previous section. However, if the in-plane shape anisotropy is strong, there will always be a 90$^{\circ}$ DW present at the interface, and reversal is simply initiated by nucleation of a DW at this interface, which then propagates through the out-of-plane part of the strip. In the following, we will attempt to model this situation by assuming that the in-plane anisotropy is so large that the spins are completely in-plane in the irradiated area, even though a perpendicular field is applied. This in fact corresponds to infinite in-plane anisotropy. Furthermore, it is assumed that the Bloch profile is still valid, but rescaled from the domain $\theta \in [0,\pi]$ to $\theta \in [0,\frac{\pi}{2}]$. The profile then reads (notice the factor 2 difference with (\ref{e:gallium:bloch})) \begin{equation} \theta(x) = \pm \arctan \left[\exp\left(\frac{x-q}{\Delta}\right)\right]. \end{equation} By micromagnetic simulations of an in-plane to out-of-plane transition in a strip, we verified that this profile is reasonably precise. To simplify the calculation, we only consider the case $\delta = 0$, because the precise shape of the anisotropy profile was found not to matter in the limit studied. The DW energy density reflects the change of easy axis at $x=0$: \begin{eqnarray} w(x) &= \frac{A}{4\Delta^2} & + \left| K_{\rm{eff}} \right| \cos^2 \theta \nonumber\\ & = \frac{A}{4\Delta^2} & + \left| K_{\rm{eff}} \right| \frac{1}{\exp\left(2\frac{x-q}{\Delta}\right)+1} \qquad \mbox{ ($x<0$),}\\ w(x) &= \frac{A}{4\Delta^2} & + K_{\rm{eff,0}} \sin^2 \theta \nonumber \\ & = \frac{A}{4\Delta^2} & + K_{\rm{eff,0}} \frac{\exp\left(2\frac{x-q}{\Delta}\right)}{\exp\left(2\frac{x-q}{\Delta}\right) +1} \qquad \mbox{ ($x>0$).} \end{eqnarray} In analogy with (\ref{e:gallium:derivative}), the derivative of $\sigma_{\rm{DW}}$ becomes \begin{equation} \frac{\mathrm{d}\sigma_{\rm{DW}}}{\mathrm{d} q} = \frac{K_{\rm{eff,0}}\exp\left(\frac{2q}{\Delta}\right) - \left|K_{\rm{eff}}\right|}{\exp\left(\frac{2q}{\Delta}\right) +1}.
\end{equation} This function is monotonically increasing and is maximal at $q\rightarrow\infty$. Therefore, the maximum slope of the energy barrier is given by \begin{equation} \max_{-\infty<q<\infty} \frac{\mathrm{d}\sigma_{\rm{DW}}}{\mathrm{d} q} = K_{\rm{eff,0}}. \end{equation} A more detailed analysis shows that at finite in-plane anisotropy, if a small $z$-component of magnetization is assumed for $x<0$, the maximum derivative is not at $\infty$ but close to $q=0$ (retaining the same magnitude), so that injection indeed occurs at the anisotropy interface. The derivative of the total energy again includes a Zeeman term, which now has half the original magnitude, because the $z$-component of magnetization is zero at one end of the DW. Therefore, \begin{equation} \label{e:gallium:lowlimitenergy} \max_{-\infty<q<\infty} \frac{\mathrm{d}\varepsilon}{\mathrm{d}q} = K_{\rm{eff,0}}- \mu_0 M_{\rm{s}} H, \end{equation} and the pinning field is found by equating this expression to zero, \begin{equation} \label{e:gallium:pinningfieldIP} H_{\rm{pin}} = \frac{K_{\rm{eff,0}}}{\mu_0 M_{\rm{s}}}. \end{equation} To test the validity of (\ref{e:gallium:pinningfield}) and (\ref{e:gallium:pinningfieldIP}), micromagnetic simulations \cite{llg} are performed on a strip with $w=60\,$nm, $t=1$\,nm, and length $L=400\,$nm. The simulation cell size is $4\times4\times1$\,nm$^3$. Reducing the simulation cell size did not significantly change the obtained results. The saturation magnetization $M_{\rm{s}}=1400$\,kA/m and the exchange constant $A=16\,$pJ/m. The uniaxial anisotropy constant of the right part of the strip was fixed at $K_0 = 1.5$\,MJ/m$^3$, yielding an effective anisotropy $K_{\rm{eff,0}} = K_0 - \frac{1}{2}\mu_0 N_z M_{\rm{s}}^2 = 0.305$\,MJ/m$^3$. The left part of the strip has a variable effective anisotropy $K_{\rm{eff}} < K_{\rm{eff,0}}$. The starting configuration is a DW that is artificially created at the boundary and then energetically relaxed at zero applied field. Then, the field is increased in small steps, and at each field step the LLG solver iterates until the torque on the magnetization is virtually zero. The result is shown in figure \ref{Figure2}. \begin{figure}[tbh] \begin{center} \includegraphics[width=0.7\linewidth]{Figure2} \caption{$H_{\rm{pin}}$ obtained from micromagnetic simulations of DW depinning at a sharp anisotropy step (open circles) or a gradual anisotropy increase (open squares and triangles). The solid and dotted lines show the limiting cases of the 1D model derived in the text. The filled circles are simulated nucleation fields of the left area, which dominate the switching of the entire strip if reversal is started from a saturated state (as in experiment). Part of the data adapted from \cite{JeroenFIB,Markie}. \label{Figure2}} \end{center} \end{figure} The situation $\delta \rightarrow 0$ is shown as open circles in figure \ref{Figure2}, and $H_{\rm{pin}}$ from the 1D model (\ref{e:gallium:pinningfieldSharp}) is plotted as a solid line. In the regime where the anisotropy difference is rather small, good agreement is found. We also see that as the anisotropy becomes negative (in-plane), the simulated data approaches the derived limit (\ref{e:gallium:pinningfieldIP}), shown as the dotted horizontal line. The situation of a finite length $\delta$ is also simulated, by changing the values of the anisotropy at the single-cell level. For instance, to simulate a length $\delta=20\,$nm, the anisotropy is step-wise increased over a width of 5 cells, which are each 4\,nm wide.
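As a rough numerical orientation, the limiting expressions of the 1D model are easy to evaluate directly. The short Python sketch below uses the simulation parameters quoted above ($M_{\rm{s}}=1400$\,kA/m, $A=16$\,pJ/m, $K_{\rm{eff,0}}=0.305$\,MJ/m$^3$) together with $\Delta=\sqrt{A/K_{\rm{eff,0}}}$; the function names and the example value of $K_{\rm{eff}}$ are purely illustrative, and the snippet is only meant to reproduce the limiting lines of equations (\ref{e:gallium:pinningfield}), (\ref{e:gallium:pinningfieldSharp}) and (\ref{e:gallium:pinningfieldIP}), not the micromagnetic data points of figure \ref{Figure2}.
\begin{verbatim}
import numpy as np

# Parameters of the micromagnetic simulations described in the text
mu0   = 4*np.pi*1e-7       # vacuum permeability (T m/A)
Ms    = 1.4e6              # saturation magnetization (A/m)
A     = 16e-12             # exchange constant (J/m)
Keff0 = 0.305e6            # effective anisotropy of the right part (J/m^3)
Delta = np.sqrt(A/Keff0)   # domain wall width parameter, ~7 nm

def Hpin(Keff, delta):
    """Pinning field (A/m) of the 1D model for an anisotropy step
    K_eff -> K_eff,0 smeared over a gradient length delta (m)."""
    if delta == 0.0:                       # sharp-step limit
        return (Keff0 - Keff)/(2*mu0*Ms)
    return (Keff0 - Keff)/(2*mu0*Ms) * (2*Delta/delta)*np.tanh(delta/(2*Delta))

def Hpin_inplane():
    """Pinning field (A/m) when the left part is strongly in-plane magnetized."""
    return Keff0/(mu0*Ms)

# Illustrative example: K_eff = 0.1 MJ/m^3, sharp step and a 20 nm wide gradient
for delta in (0.0, 20e-9):
    print(f"delta = {delta*1e9:4.0f} nm : mu0*Hpin = {mu0*Hpin(0.1e6, delta)*1e3:6.1f} mT")
print(f"in-plane limit      : mu0*Hpin = {mu0*Hpin_inplane()*1e3:6.1f} mT")
\end{verbatim}
The $\tanh$ factor makes explicit how a wider gradient reduces the pinning strength, in line with the finite-$\delta$ simulations in figure \ref{Figure2}.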
This step-wise approach is valid as long as the cell size is of the order of the exchange length. Plotting the 1D limit (\ref{e:gallium:pinningfield}) in this case is slightly more complicated because it also contains the DW width $\Delta$, which in turn depends on the anisotropy at the DW position. For the plotted lines, we simply used $\Delta = \sqrt{A/K_{\rm{eff,0}}} \approx 7\,$nm, which again shows excellent agreement in the evaluated limit. Interestingly, for larger anisotropy differences, we see that the pinning field in the simulations bends upwards from the limit. This is simply because $\Delta$ increases, as it is partially in a region with lower $K_{\rm{eff}}$. If we take into account this increasing $\Delta$, the 1D model also predicts this upturn, demonstrating the power of the 1D approach. In the experimental situation, starting from a saturated state, a DW does not readily exist but must first be nucleated. Therefore, simulations starting from the saturated state were also conducted, shown as the solid circles in figure \ref{Figure2}. It is consistently observed that the DW is nucleated in the left part of the strip. For relatively high $K_{\rm{eff}}$, the nucleation field is much higher than the pinning field and therefore dominates the switching field of the entire strip. The nucleation field in the simulations matches that of a Stoner-Wohlfarth particle and is to a good approximation given by the anisotropy field $H_{K_{\rm{eff}}} = 2 K_{\rm{eff}}/(\mu_0 M_{\rm{s}})$, plotted as the dashed line. We should note that this nucleation field has no quantitative meaning in experiments, where the switching behavior does not show coherent Stoner-Wohlfarth behavior, but is dominated by domains nucleating at random defects and their expansion by DW motion. To conclude this section, we have shown by analytical modeling and micromagnetic simulations that a DW can be pinned at an anisotropy boundary. The field strength needed for depinning depends linearly on the anisotropy difference if the boundary is not too high. Interestingly, it was shown that a DW can also be injected from a boundary between an in-plane and out-of-plane anisotropy region. Furthermore, not only the height of the anisotropy boundary, but also its spatial extent (width), is an extra parameter that tunes the pinning field, and should be at the length scale of the DW for pinning to occur. In the next sections, we study quantitatively how Ga FIB irradiation can be used to tune the anisotropy (Section \ref{sec:Anisotropy}), and how DW pinning and nucleation can be controlled using this tool (Section \ref{sec:DWpinning}). \section{Manipulating the anisotropy of Pt/Co/Pt} \label{sec:Anisotropy} Whereas it is widely accepted that Ga and He irradiation reduces the PMA of sputtered Pt/Co/Pt films, the evidence is usually indirect, i.e. through measurement of the coercive field. The anisotropy has been systematically measured as a function of He irradiation dose \cite{Devolder2000}, but to our knowledge, a systematic data set of anisotropy as a function of Ga dose is lacking. Performing a quantitative measurement of the anisotropy as a function of Ga dose is therefore interesting in its own right, as well as insightful for the interpretation of DW pinning and nucleation in section \ref{sec:DWpinning}. Common methods to quantitatively measure the anisotropy of magnetic samples make use of Stoner-Wohlfarth theory \cite{STONER1948}. Typically, an external field $H$ is applied under an angle $\alpha$ with the easy axis of magnetization.
The magnetization is pulled away from its favored direction, toward the field direction. The ease with which the magnetization can be pulled is a measure of the anisotropy. We use the Extraordinary Hall Effect (EHE) to measure $M_z(H,\alpha)$ on Hall crosses that have been irradiated with varying Ga doses, and obtain quantitative values for $K_{\rm{eff}}$ by fitting to the theoretical model \cite{Rosenblatt2010}. \subsection{Experimental Details} Samples containing four Hall crosses of 5\,$\hbox{\textmu}$m wide Pt(4\,nm) / Co($x$\,nm) / Pt(2\,nm) are deposited on a Si / SiO$_2$(100\,nm) substrate. The thickness of the Co layer is varied from 0.4 to 0.6 nm. The samples were fabricated using Electron Beam Lithography (EBL), sputtering and lift-off. On top of the branches of the Hall crosses, 20 nm thick Pt contacts are deposited using a second EBL step for electrical contact. A micrograph of the resulting sample is shown in figure \ref{Figure3}(a). After the deposition of the Pt contacts, the Hall crosses are irradiated with different Ga doses. The ions have an energy of 30 keV and a beam current of several pA is used. The dose is varied from $0.07 \times 10^{13}\,$ions/cm$^{2}$ to $1.3 \times 10^{13}\,$ions/cm$^{2}$. This dose range does not lead to significant etching, but only affects the Pt/Co interfaces \cite{Hyndman1,Hyndman2}. The irradiated region for each Hall cross is indicated in figure \ref{Figure3}(a). Four lock-in amplifiers are used to measure the EHE as a function of applied magnetic field on four different Ga-irradiated crosses at the same time. An AC current with a density of $\sim 3.0 \cdot 10^{9}\,$Am$^{-2}$ at a frequency of 5 kHz is sent through the strip. The external field is applied under a variable angle $\alpha$. The measured lock-in voltage consists of the EHE plus a small contribution of the ordinary Hall effect (OHE). Since the EHE is constant when the magnetization is saturated, we can use the measured signal slope at high perpendicular fields to subtract the OHE from all other measurements. \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{Figure3} \caption{(a) Pt/Co/Pt sample with four irradiated Hall crosses for EHE measurements; (b) Example of $M_z(H,\alpha)$ (open circles). The lines are the result of a global Stoner-Wohlfarth fit for all $\alpha$ up to 80$^{\circ}$. Higher $\alpha$ are not incorporated because of non-coherent magnetization reversal \cite{Rosenblatt2010}. The inset shows the experimental geometry.} \label{Figure3} \end{figure} Figure\,\ref{Figure3}(b) shows a typical measurement of $M_{z}/M_{\rm{s}}$ for various $\alpha$. All traces are fitted globally using a fitting routine based on energy minimization of the Stoner-Wohlfarth model. Input parameters within the model are the applied field $H$, the angle $\alpha$, the perpendicular magnetization $M_{z}$ and the saturation magnetization $M_{\rm{s}}$. The latter is estimated at $1.4 \times 10^{6}\,$A/m from SQUID measurements. The fit yields a value of the perpendicular anisotropy $K_{\rm{eff}}$. The second-order crystalline anisotropy is found to be negligible and therefore is not taken into account in the final fit. It can be seen in figure \ref{Figure3}(b) that for nearly in-plane fields ($\alpha > 80^{\circ}$) there is a strong deviation between the fits and the experimental data. This is known to arise from non-coherent magnetization reversal processes, wherein the structure no longer behaves as a single magnetic domain \cite{Rosenblatt2010,HUBERT98}.
To exclude this effect, only measurements up to an angle of 80\,$^{\circ}$ are incorporated in the fit. \subsection{Anisotropy of Ga-irradiated Pt/Co/Pt} \label{sec:EHEResults} \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{Figure4} \caption{The anisotropy constant $K_{\rm{eff}}$ as a function of the Ga irradiation dose for Pt/Co/Pt structures with varying Co thickness. The guides are exponential fits.} \label{Figure4} \end{figure} Figure \ref{Figure4} visualizes the effect of Ga irradiation and Co layer thickness on the anisotropy of Pt/Co/Pt structures. First we discuss the influence of the Co layer. It is observed that the anisotropy increases if the Co thickness is reduced from 0.6 to 0.5 nm. This inverse dependence on $t$ is expected, since $K_{\rm{eff}}$ arises from the surface anisotropy $K_s$ at the Pt/Co interfaces via $K_{\rm{eff}} = 2 K_s/t +K_v$ \cite{JOHNSON96}, where $K_v$ is negative and contains the contribution from shape anisotropy. However, the anisotropy of the 0.4\,nm Co sample does not differ significantly from the 0.5\,nm sample, meaning that growth-related phenomena are starting to play a role for such thin layers. Thinner layers are more ill-defined and therefore the interface anisotropy will decrease; this transition occurs right between 0.4 and 0.5\,nm. This is also reflected in a significantly lower coercivity of the 0.4\,nm samples in section \ref{sec:DWpinning}, again pointing to a more disordered layer with easy nucleation centers. As a function of Ga dose, we see a decrease of $K_{\rm{eff}}$ that is approximately linear at low dose, and less steep at high dose. For higher doses than shown, the remanence at zero field was significantly reduced and the Stoner-Wohlfarth model could not be applied. Eventually, the magnetization becomes completely in-plane (negative $K_{\rm{eff}}$). This transition to in-plane magnetization occurs at higher dose if the Co layer is thinner, because the anisotropy is higher to begin with. From a practical perspective this is very interesting, because the range of Ga doses that can be applied to tune the anisotropy increases by more than a factor of 2. Whereas the effect of Ga irradiation on the anisotropy is now quantified, the effect on other magnetic properties is not. \emph{A priori}, however, we do not expect a very significant effect, since Ga irradiation mainly affects the interfaces and $M_{\rm{s}}$ and $A$ are typically bulk parameters. The magnitude of the EHE signal is some measure of $M_{\rm{s}}$, and we observed no trend as a function of Ga dose. Less is known about the effect on $A$, but at least such an effect is not needed for explaining the results in the remainder of this paper. To conclude this section, it is seen that the anisotropy of Pt/Co/Pt samples increases for thinner Co layers, but this increase stops for very thin layers of $<0.5\,$nm. Interestingly, the reduction of anisotropy with low Ga dose remains constant irrespective of the starting anisotropy of the unirradiated film, i.e. the slope at low dose does not depend on the thickness in figure \ref{Figure4}. This is slightly counterintuitive, because if Ga irradiation reduces the surface anisotropy $K_s$ by the same amount regardless of thickness, this would translate to a $1/t$ dependence of the slope of $K_{\rm{eff}}$. From an experimental perspective this is a very useful result. By changing the Co thickness or the growth conditions, the tunable range of DW pinning fields can be expanded.
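For completeness, the Stoner--Wohlfarth fitting procedure used above to extract $K_{\rm{eff}}$ can be sketched in a few lines of Python. This is a minimal illustration rather than the actual analysis code: it assumes the standard single-domain energy density $E(\theta)=K_{\rm{eff}}\sin^2\theta-\mu_0 M_{\rm{s}} H\cos(\alpha-\theta)$, neglects second-order anisotropy (as in the final fits), and reads the measured $M_z(H,\alpha)$ traces from a placeholder file.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

mu0 = 4*np.pi*1e-7    # T m/A
Ms  = 1.4e6           # A/m, estimated from SQUID measurements (see text)

def mz_sw(H, alpha, Keff):
    """Equilibrium M_z/M_s of a single-domain (Stoner-Wohlfarth) magnet for a
    field H (A/m) applied at an angle alpha (rad) from the easy axis."""
    energy = lambda th: Keff*np.sin(th)**2 - mu0*Ms*H*np.cos(alpha - th)
    th_eq = minimize_scalar(energy, bounds=(0.0, np.pi/2), method='bounded').x
    return np.cos(th_eq)

def residual(Keff, data):
    """Sum of squared deviations between model and measured M_z/M_s."""
    return sum((mz_sw(H, a, Keff) - mz)**2 for H, a, mz in data)

# Placeholder file with columns H (A/m), alpha (rad), Mz/Ms; only traces with
# alpha <= 80 degrees are kept, as discussed above.
data = [row for row in np.loadtxt('ehe_traces.dat') if row[1] <= np.deg2rad(80)]

fit = minimize_scalar(lambda K: residual(K, data), bounds=(1e4, 1e6),
                      method='bounded')
print(f"K_eff = {fit.x/1e6:.3f} MJ/m^3")
\end{verbatim}
All traces are fitted globally in this way, with $K_{\rm{eff}}$ as the only free parameter.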
In section \ref{sec:DWpinning} we will further investigate the consequences of this tunability for the nucleation, pinning and injection of DWs in Pt/Co/Pt layers. \section{Controlling Domain Wall Nucleation and Pinning} \label{sec:DWpinning} In the present section the effects of Ga irradiation on DW nucleation and pinning are investigated experimentally. First, the experimental method is described. In the subsequent sections, DW nucleation and pinning are investigated as a function of Ga dose, strip width, Co layer thickness, and beam focus. It will turn out that both the height and the width of the DW energy barrier can be tuned by these parameters. \subsection{Experimental Details} The investigated structures are rectangular Pt(4\,nm) / Co($x$\,nm) / Pt(2\,nm) strips of 15\,$\times$2\,$\hbox{\textmu}$m$^{2}$, 10\,$\times$1\,$\hbox{\textmu}$m$^{2}$, 5\,$\times$0.5\,$\hbox{\textmu}$m$^{2}$ and 2.5\,$\times$0.25\,$\hbox{\textmu}$m$^{2}$. Different Co thicknesses $x= 0.4$, 0.5 and 0.6\,nm are used. The structures are grown on a Si~/~SiO$_2$(100\,nm) substrate by EBL, sputtering, and lift-off. After the fabrication of the Pt/Co/Pt layers, the left half of the strips is irradiated with Ga ions at a varying dose to reduce the anisotropy. Upon application of a magnetic field, a DW nucleates in this area and subsequently moves into the remainder of the strip. Wide-field Kerr microscopy \cite{HUBERT98} is used to study the effect of ion irradiation on nucleation and pinning of DWs. In the analysis we focus on the injection field $H_{\rm{in}}$, defined as the external field at which the DW penetrates into the non-irradiated part of the structure. Since the injection of a DW involves two processes with different typical field strengths (nucleation at a field $H_{\rm{n}}$ and depinning at a field $H_{\rm{pin}}$), the injection field corresponds to the maximum of these two fields. The magnetic field is swept from negative to positive and a sudden change in intensity of the Kerr signal occurs in the non-irradiated area when the DW is injected. Decent statistics are obtained by averaging $H_{\rm{in}}$ over 12 structures. The error bars in all figures where $H_{\rm{in}}$ is plotted against the irradiation dose represent the standard deviation of $H_{\rm{in}}$ from structure to structure. \subsection{Variable Ga dose and strip width} First, the effect of Ga irradiation is studied on strips with a fixed composition Pt(4\,nm) / Co(0.6\,nm) / Pt(2\,nm). Figure \ref{Figure5} shows exemplary Kerr images of the switching process in several 10\,$\times$1\,$\hbox{\textmu}$m$^{2}$ strips. The Kerr images of three different Ga doses are shown. In figure \ref{Figure6} the measured injection field is plotted as a function of Ga dose for structures of various sizes. \begin{figure}[p] \begin{centering} \includegraphics[width=0.7\linewidth]{Figure5} \caption{Kerr microscopy images of the magnetic switching behavior of 10\,$\times$1\,$\hbox{\textmu}$m$^{2}$ Pt / Co(0.6\,nm) / Pt structures for various doses of Ga irradiation. The irradiated regions are marked in (a). The magnetic contrast is enhanced by subtraction of a background image, which is obtained at zero field after saturation at high negative fields.} \label{Figure5} \end{centering} \end{figure} \begin{figure}[p] \begin{centering} \includegraphics[width=0.7\linewidth]{Figure6} \caption{DW injection field as a function of Ga dose for a Pt / Co(0.6\,nm) / Pt strip of variable width.
The lines are drawn as a guide to the eye.} \label{Figure6} \end{centering} \end{figure} Here we discuss the features observed in the Kerr images of figure \ref{Figure5}. The samples were saturated at negative field and the field was swept to positive saturation. Snapshots at different positive fields during the sweep are shown. In figure \ref{Figure5}(a) (dose 0.34\,$\times$10$^{13}$\,ions/cm$^{2}$), it is seen that at a certain field strength, the bright structures have switched completely while the dark structures have not. This is due to the statistical nature of domain nucleation in perpendicular materials, which occurs at random defects. At a slightly higher field (figure \ref{Figure5}(b)), 2 more structures have switched instantly. This means that a DW was nucleated in the irradiated area, which instantly moves into the remainder of the strip. In other words, the nucleation field is much higher than the pinning field, $H_{\rm{n}} > H_{\rm{pin}}$. The range of doses where this is the case is denoted by A in figure \ref{Figure6}. Clearly, $H_{\rm{n}}$ decreases with Ga dose due to the PMA reduction. In the snapshots taken at higher dose (0.41\,$\times$10$^{13}$\,ions/cm$^{2}$) in figure \ref{Figure5}(c), it is seen that a DW nucleated in the irradiated area pins at the boundary between the two regions in some strips. However, in other structures the DW moved instantly without pinning. This indicates that the field strengths associated with nucleation and pinning are approximately the same, $H_{\rm{n}} \approx H_{\rm{pin}}$. A significantly higher field is needed (figure \ref{Figure5}(d)) to depin all the trapped DWs. Looking at a slightly higher dose of 0.44\,$\times$10$^{13}$\,ions/cm$^{2}$ in figure \ref{Figure5}(e), a strong change in the nucleation of the DW is observed. Instead of the instantaneous switching that was observed before, the irradiated area now switches in many small domains, because we are getting close to the in-plane transition. By increasing the field as seen in figure \ref{Figure5}(f), a single domain will again appear and the corresponding DW is pinned for all structures at the shown field. Hence, $H_{\rm{n}} < H_{\rm{pin}}$. This regime is denoted B in figure \ref{Figure6}. In figure \ref{Figure6}, $H_{\rm{in}}$ as a function of Ga dose is plotted for structures of different sizes. Next to the discussed regimes A ($H_{\rm{n}}>H_{\rm{pin}}$), B ($H_{\rm{n}}<H_{\rm{pin}}$), we identify a third regime C where the pinning field converges to an asymptote, because the magnetization of the irradiated region becomes in-plane. The same 3 regimes were found in the micromagnetic model depicted in figure \ref{Figure2}. For the strips of 15$\times$2\,$\hbox{\textmu}$m$^{2}$, 10$\times$1\,$\hbox{\textmu}$m$^{2}$ and 5$\times$0.5\,$\hbox{\textmu}$m$^{2}$ the behavior is very similar. The 2.5\,$\times$0.25\,$\hbox{\textmu}$m$^{2}$ structures however behave somewhat differently. Although all the observed features are still present, it can be seen that these structures have a significantly lower nucleation field in regime A. Since all structures are grown and measured under the same conditions on the same wafer, this effect must be related to the decrease in size. Indeed, due to the limitations of the lithography method used, the roughness of the strips is very significant compared to the strip width, resulting in a rather poorly defined strip. 
The nucleation field is very sensitive to structural defects and is therefore reduced; the anisotropy itself might also be affected, leading to a change of the observed effects. The magnitude of the injection fields is roughly a factor 20 higher in the simulations/1D model compared to the experiments. This is not unusual, since the simulations do not include any thermal fluctuations. In room-temperature experiments, thermal fluctuations play a crucial role in all magnetization reversal phenomena. For example, the coercive field (responsible for the injection field in the high-$K$ range) is greatly reduced at finite temperatures, and originates from the nucleation of a small area followed by DW motion, instead of the Stoner-Wohlfarth type of switching in our model. In SQUID measurements, it was found that for a similar film, the coercivity at 5\,K is roughly 40 times larger than at room temperature. Also, the escape of a DW over an energy barrier (responsible for the DW injection in the low-$K$ region) is much easier at elevated temperatures, so lower fields are required for depinning. Therefore, only a qualitative comparison with the micromagnetic model can be made. \subsection{Variable Co layer thickness} \begin{figure}[htb] \begin{centering} \includegraphics[width=0.7\linewidth]{Figure7}{} \caption{(a) DW injection field in 1\,$\hbox{\textmu}$m wide strips as a function of Ga dose for different Co thicknesses. Kerr snapshots of (b) 0.6\,nm and (c) 0.5\,nm structures at the highest dose with full PMA, demonstrating that pinning is better tunable in a thinner Co layer.} \label{Figure7} \end{centering} \end{figure} Figure~\ref{Figure7} shows a comparison of $H_{\rm{in}}$ as a function of Ga dose for different Co thicknesses in Pt / Co($x$\,nm) / Pt structures of 10\,$\times$1\,$\hbox{\textmu}$m$^{2}$. The $x=0.4\,$nm structures clearly have a lower nucleation field. This is probably related to the growth quality of such ultrathin films. Interestingly, the pinning strength is very similar for the 0.5 and 0.6\,nm Co thicknesses. This is also what would be expected from the anisotropy measurements of figure \ref{Figure4}, because $K_{\rm{eff,0}}-K_{\rm{eff}}$ appeared to be rather insensitive to the layer thickness. The minimum of the curve, where $H_{\rm{pin}}=H_{\rm{n}}$, is found at a dose of 0.44\,$\times$10$^{13}$\,ions/cm$^{2}$ for both the 0.5\,nm and 0.6\,nm strips. For the 0.4\,nm structures $H_{\rm{n}}$ is lower (related to the growth quality of such thin layers), which shifts the minimum slightly to the left, to 0.31\,$\times$10$^{13}$\,ions/cm$^{2}$. Also, the DW pinning in regime B is lower for the 0.4\,nm strips, because the anisotropy is better retained at high doses compared to the 0.5\,nm sample (as seen in figure \ref{Figure4}), leading to a lower pinning barrier. In the high-dose regime (C), where the irradiated region has an in-plane magnetization, $H_{\rm{pin}}$ is theoretically given by $K_{\rm{eff,0}}/(\mu_0 M_{\rm{s}})$, so ultimately determined by the anisotropy of the untouched part $K_{\rm{eff,0}}$. Both $K_{\rm{eff,0}}$ and $H_{\rm{pin}}$ are significantly higher for the 0.5\,nm Co film, demonstrating that the theoretical model appears to have qualitative validity also in this regime. For the 0.4\,nm Co film, the pinning field at high dose is masked by the very low $H_{\rm{n}}$.
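To connect these observations with the zero-temperature model of section \ref{sec:model}, it is useful to note the field scales implied by typical anisotropy values; the numbers in the short sketch below are illustrative round values, not data read off figure \ref{Figure4}.
\begin{verbatim}
from math import pi

mu0, Ms = 4*pi*1e-7, 1.4e6       # T m/A, A/m

# Sharp-step regime: mu0*Hpin = (K_eff,0 - K_eff)/(2*Ms)
dK = 0.1e6                       # illustrative anisotropy difference (J/m^3)
print(dK/(2*Ms)*1e3, "mT")       # ~36 mT

# Regime C (irradiated part in-plane): mu0*Hpin = K_eff,0/Ms
Keff0 = 0.3e6                    # illustrative value (J/m^3)
print(Keff0/Ms*1e3, "mT")        # ~214 mT
\end{verbatim}
These zero-temperature estimates are an order of magnitude above the measured injection fields, consistent with the factor of roughly 20 between model and experiment discussed above.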
Consistent with the anisotropy measurements in figure \ref{Figure4}, it is seen from the Kerr images that for thin Co layers, much larger anisotropy differences can be obtained before the magnetization becomes in-plane. Because, theoretically, $H_{\rm{pin}}=(K_{\rm{eff,0}}-K_{\rm{eff}})/(\mu_{0}M_{\rm{s}})$, this means that the pinning strength of the anisotropy barrier can also be made much larger. Decreasing the Co thickness therefore leads to more controllable DW pinning. This is illustrated by figure \ref{Figure7}(c), which shows that DWs are consistently pinned in the 0.5\,nm Co strip for all the studied structures at the shown dose of $0.56\times10^{13}\,$ions/cm$^2$. At the same dose, the 0.6\,nm Co strip is already in-plane magnetized. The highest dose where the 0.6\,nm strips are fully perpendicular is $0.41\times10^{13}\,$ions/cm$^2$, and figure \ref{Figure7}(b) illustrates the unreliable pinning in these strips at that dose. For application as pinning sites, one typically would like to pin an existing domain wall without risking nucleation of a new domain wall. Therefore, one would require a significant gap between the highest $H_{\rm{n}}$ and the lowest $H_{\rm{pin}}$ of any of the structures. For the 0.6\,nm Co, this gap is virtually zero for any dose with full PMA. For 0.5\,nm, the gap is maximized at a dose of $0.56\times10^{13}\,$ions/cm$^2$, where it is 0.8\,mT. Interestingly, for the 0.4\,nm strips, full PMA extends to very high doses and the optimal gap is 4.7\,mT at a dose of $0.81\times10^{13}\,$ions/cm$^2$. \subsection{Tuning the width of the pinning barrier} In the previous sections we showed that the DW pinning field at a Ga irradiation boundary scales with $K_{\rm{eff,0}}-K_{\rm{eff}}$, where $K_{\rm{eff,0}}$ can be tuned by the Co layer thickness and $K_{\rm{eff}}$ by the Ga dose. However, equation \ref{e:gallium:pinningfield} suggests another parameter to tune the pinning field: the length scale of the anisotropy gradient $\delta$. It is expected that the pinning strength decreases with increasing $\delta$, because the energy barrier for DW propagation becomes less steep. Experimentally, $\delta$ is controlled by placing the sample away from the focal point. The distance to the focal point determines the FWHM of the beam, which is used as an estimate of $\delta$. Figure~\ref{Figure8} illustrates the behavior of the injection field in Pt / Co (0.5\,nm) / Pt as $\delta$ is varied from 0 (optimal beam focus) to $\approx 80$\,nm. Increasing $\delta$ clearly leads to a systematic decrease of $H_{\rm{pin}}$. The qualitative agreement with the theoretical result of figure \ref{Figure2} is striking. The fact that a slight change of $\delta$ leads to such clear effects is strong evidence that Ga irradiation creates pinning sites at a length scale comparable to the DW width. Using focused helium beams, an even smaller $\delta$ can be realized due to a better optimal focus, leading to stronger DW pinning \cite{Markie}. It is interesting to note that the minimum in $H_{\rm{in}}$ is also reduced when increasing $\delta$. A lesson to learn from this is that, in order to achieve DW injection at the lowest possible field, one should simply make $\delta$ as large as possible. \begin{figure}[htb] \begin{centering} \includegraphics[width=0.7\linewidth]{Figure8}{} \caption{DW injection field of 1\,$\hbox{\textmu}$m wide Pt / Co(0.5\,nm) / Pt structures. The width of the anisotropy barrier $\delta$ is controlled by changing the focus of the ion beam. 
As expected, the pinning strength is reduced for increasing $\delta$.} \label{Figure8} \end{centering} \end{figure} \section{Conclusion} In this paper, we analyzed in detail the pinning of a domain wall at engineered anisotropy variations. First, we analytically derived that a step in the magnetic anisotropy acts as an energy barrier for the DW. It was shown that the pinning field of a DW at such an anisotropy boundary increases with the anisotropy difference and decreases with the width of the boundary. The analytical model matches well with micromagnetic simulations. Then, it was shown that FIB irradiation with Ga ions can be used to control the magnetic anisotropy of a Pt/Co/Pt strip, and quantitative measurements were performed using the EHE effect. Thereafter, field-induced domain wall pinning and nucleation in irradiated Pt/Co/Pt nanostrips was studied using wide-field Kerr microscopy. The pinning behavior qualitatively reproduced all the features of the analytical model. The pinning of DWs was shown to be insensitive to the width of the strip in the range 0.5-2\,$\hbox{\textmu}$m. However, the thickness of the Co layer does provide another handle to tune DW pinning, since a thinner Co layer has higher intrinsic anisotropy, thereby increasing the range of anisotropy values that can be realized without destroying the PMA. Finally, it was shown that even the width of the anisotropy barrier, which according to our model has to be of the order of the DW width ($\sim10\,$nm), can be precisely tuned by reducing the focus of the ion beam. This leads to a lower injection field because the energy barrier for the DW becomes less steep. Engineered anisotropy defects can not only be used to controllably inject a DW at arbitrarily low fields, but also to provide tunable pinning sites for field- and current-induced domain wall motion in PMA strips. In the experiments reported in this paper, relatively large areas were irradiated with Ga, but also small defects could be made that act as pinning sites. These can be useful in DW-based memory or logic devices as an alternative to geometrically induced pinning sites \cite{Parkin2008,Zutic2004,Xu2008}, or for controlled experiments on current-induced DW depinning. Furthermore, we have recently shown by micromagnetic simulations that a DW pinned at an anisotropy boundary can be brought into steady oscillatory motion by a DC current \cite{oscillator}, which could be used as a microwave current source similar to spin torque oscillators. To conclude, control of the magnetic anisotropy at the nanoscale in general is a powerful tool in many magnetic nanodevices. \section{Acknowledgement} This work is part of the research programme of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO). This is an author-created, un-copyedited version of an article accepted for publication in Journal of Physics: Condensed Matter. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The definitive publisher-authenticated version is available online at http://dx.doi.org/10.1088/0953-8984/24/2/024216 \section*{References} \bibliographystyle{jphys}
\section{Introduction} Dualities are among the most fascinating, and the most useful, phenomena of physics. A duality expresses that the same physical system can be described in more than one way, with the two ``dual" descriptions consisting of entirely different degrees of freedom, interacting in different ways. Often, one dual description is strongly coupled while the other is weakly coupled. We have the intuition that near weak coupling we have an approximately free set of degrees of freedom, but as the coupling increases those degrees of freedom can bind to one another in such a way that they are no longer independent, and treating the problem in the original variables is no longer easy. Sometimes, however, it occurs that the bound states of the original degrees of freedom behave as a new set of variables that are themselves weakly coupled with respect to a different set of collective interactions, even though the original interactions are strong. Such a situation can enable physicists to run back to their beloved weakly-coupled perturbation theory in situations where it originally seemed wildly inappropriate, by switching to the new degrees of freedom. The gauge/gravity, or AdS/CFT, correspondence \cite{Maldacena:1997re, Gubser:1998bc, Witten:1998qj} is a particularly remarkable duality. The two dual sets of degrees of freedom and their interactions are profoundly different. On one side we have a theory of (in general quantum) gravity in a higher spacetime dimension, while in the other case we have a theory without gravity, in a lower spacetime dimension; the original and most famous realization is of a duality between string theory in an anti-de Sitter (AdS) space, and a conformal field theory (CFT) in one dimension less. One might think that a duality between a gravitational theory and a non-gravitational theory should not occur, and one might also think that a duality between physics in different spacetime dimensions {\em really} should not occur. Yet the gauge/gravity correspondence does both, and in fact, it is essential that it is both together: this correspondence provides a realization of the {\em holographic principle} \cite{tHooft:1993dmi, Susskind:1994vu, Bousso:2002ju}, which holds that any theory of quantum gravity is in some sense massively redundant, and should be describable in terms of fewer degrees of freedom living on some sort of ``boundary" of spacetime. Thus one member of the dual pair having gravity, and the same member living in a higher-dimensional space, are two sides of the same coin, and one is necessary for the other. The gauge/gravity correspondence is interesting in both directions. Starting from the non-gravitational field theory side, one has in principle a definition of quantum gravity in certain backgrounds, and can attempt to understand the emergence of spacetime from the CFT data. This program has many exciting developments and is described in the TASI 2017 summer school by lectures from Harlow \cite{Harlow:2018fse} and from Headrick; see also the TASI 2015 lecture notes from van Raamsdonk \cite{VanRaamsdonk:2016exw}. The other way to proceed is to attempt to use what we know about gravity to learn about non-gravitational field theory. 
In particular, the realization of the strong/weak coupling duality dichotomy in the case of the AdS/CFT correspondence is that when the field theory has a large number of degrees of freedom (``large N") and is strongly coupled, the dynamics on the gravity side reduce to classical general relativity interacting with other fields. Thus we can ignore the (admittedly fascinating) story of quantum gravity, and attempt to use what we know about classical gravity to learn things about strongly coupled field theories. This is the subject of this review. Such applications were also explored in the last two lectures of Erdmenger's AdS/CFT lectures at TASI 2017. For other reviews of such applications, see the monumental review by Hartnoll, Lucas and Sachdev \cite{Hartnoll:2016apf}, McGreevy's lectures from TASI 2015 \cite{McGreevy:2016myw}, lectures on AdS/CFT and heavy ions like \cite{CasalderreySolana:2011us, Adams:2012th, DeWolfe:2013cua} and others on applications to condensed matter such as \cite{McGreevy:2009xe, Hartnoll:2009sz}. There have been a number of useful reviews of AdS/CFT itself. The ``MAGOO" Physics Report \cite{Aharony:1999ti} is a twentieth-century classic, including background on the large-N limit and a discussion of AdS$_3$, among many other things. Klebanov's 1999 TASI lectures \cite{Klebanov:2000me} provide a lot of background on the string/brane systems that led to AdS/CFT, and a nice description of the remarkable holographic system arising from branes at a conifold singularity. D'Hoker and Freedman's lectures at TASI 2001 \cite{DHoker:2002nbb} provide encyclopedic detail of supersymmetry in AdS/CFT and higher-point correlation functions, among other things. Polchinski's 2010 TASI lectures \cite{Polchinski:2010hw} contain an alternate argument not involving strings or branes for the existence of a gravity dual starting with a field theory, and Penedones's 2015 TASI lectures \cite{Penedones:2016voo} take the viewpoint of starting with physics in AdS space and deriving that something looking like a dual CFT must exist. With all of these reviews out there, I will not attempt to be encyclopedic here. Instead I will try to provide an entry point into the subject, discussing some important basics and classic examples. Maybe a better name for these lectures would have been ``Gauge/gravity duality for applications," for I will focus on the gravity calculations and the nature of the duality, discussing the applications (in particular the strong nuclear interaction of quantum chromodynamics, and the theory of high-temperature superconductors and the associated strange metals) in a more cursory fashion, as motivation for the gravity calculations. Hopefully the reader will finish these lectures with a solid foundation in the subject, as well as feeling intrigued and ready to delve more deeply into the literature. \section{Gauge/Gravity duality} ``The AdS/CFT correspondence" is not the best name for such a profound and far-reaching duality. Besides being stuffed with two initially-mysterious acronyms, it also suffers from being overly specific: the correspondence can easily be generalized to cases where the gravity theory does not live in anti-de Sitter space, and the dual theory is not a conformal field theory. Nonetheless the name has stuck, and we will use it like everyone does. 
``The gauge/gravity correspondence" is a little better, since it is more general, and we always have a gravity theory on one side, and in all our examples where the other side is known it will be a gauge theory, so we will use this name as well, mostly interchangeably. In this section we will provide an overview of the correspondence, introduce anti-de Sitter space and consider the limits of validity of classical gravity, while mentioning a few famous examples of the duality. \subsection{Overview of the correspondence} Let us begin by providing an overview of how the AdS/CFT correspondence works. All of our examples will have the gravity theory living in a spacetime that is at least asymptotically anti-de Sitter, so we will assume this in what follows, though there are generalizations. The essential features of the duality are: \begin{itemize} {\item On one side of the correspondence we have a theory of gravity, with a metric and other fields. On the other side is a gauge theory. } {\item The same symmetries act on both sides of the correspondence. In particular, the isometries of the gravity theory geometry are exactly the spacetime symmetry group of the field theory. In this way a change of scale in the field theory is associated with motion through the extra dimension on the gravity side. Additionally, gauge groups of the gravity theory match global symmetries of the field theory. The field theory's gauge group is an exception; it does not appear on the gravity side.} {\item The AdS/CFT dictionary associates to each field in the gravity theory a gauge-invariant operator in the gauge theory, with the same symmetry properties.} {\item Doing physics in anti-de Sitter space requires not just initial conditions, but also boundary conditions at spatial infinity (``the boundary") to be specified. Setting these boundary conditions for gravity theory fields is mapped by the correspondence to adding sources or turning on expectation values for the dual operators. From this, correlation functions of gauge theory operators can be calculated by determining the gravity theory's response to changing boundary conditions. Intuitively, we say the gauge theory ``lives" at the boundary of the gravity theory.} {\item Black hole thermodynamics in the gravity theory is mapped to regular thermodynamics in the gauge theory: a black hole in AdS space with a certain Hawking temperature, entropy etc. corresponds to a state in the field theory with the same thermodynamic properties.} \end{itemize} One of the most important features of the correspondence is the particular nature of anti-de Sitter space, its boundary at infinity and the need for boundary conditions there, so let us introduce this geometry. \subsection{Anti-de Sitter and asymptotically anti-de Sitter spaces} Anti-de Sitter space is the maximally symmetric space with Lorentzian signature and negative curvature. It is notable for the importance of its ``boundary", the limit of the geometry at large spatial distance: unlike for example Minkowski space, where the boundary is infinitely far away and doesn't bother us, in AdS space the boundary may be reached by a signal which then returns in finite proper time. This makes the boundary act like a real place, and consequently we have to understand what's going on there. We will discuss anti-de Sitter space in $D \equiv d+1$ spacetime dimensions. There are many sets of coordinates that can be used to describe some or all of the space. 
The full (geodesically complete) geometry is called {\em global} AdS, and has a boundary with topology $R \times S^{d-1}$. For our applications, we will instead be interested in a subset of AdS space, the so-called {\em Poincar\'e patch}, whose metric can be written as \begin{eqnarray} \label{AdSMetric} ds^2 = {r^2 \over L^2} (-dt^2 + d\vec{x}^2) + {L^2 dr^2 \over r^2} \,, \end{eqnarray} where there are $d-1$ spatial coordinates $\vec{x}$, one time coordinate $t$ and the spatial ``radial coordinate" $r$, for a total of $d+1$ dimensions.\footnote{Another useful coordinate system uses the coordinate $z \equiv L^2/r$, where the metric looks like $ds^2 = {L^2 \over z^2} (-dt^2 + d\vec{x}^2 + dz^2)$ and the boundary is at $z=0$.} Here $L$ is the characteristic length scale of the geometry, called the AdS radius. The constant negative curvature is expressed by the Ricci scalar, \begin{eqnarray} R = - {d(d+1)\over L^2} \,. \end{eqnarray} Anti-de Sitter space is a solution to Einstein's equations with a cosmological constant, \begin{eqnarray} R_{\mu\nu} - {1 \over 2}R g_{\mu\nu} +\Lambda g_{\mu\nu} = 0\,, \quad \quad \Lambda = - {d (d-1) \over 2 L^2} \,. \end{eqnarray} Often this $D$-dimensional Einstein equation arises from the dimensional reduction of a higher-dimensional theory on a positive-curvature geometry like a sphere: in the higher dimension we will have $AdS_D \times S^q$, with a $D$-form or $q$-form field strength $F_D$ or $F_q$ (generalizations of the two-form field strength of electromagnetism) on the AdS or compact factor, respectively, inducing the cosmological constant when reduced to the lower-dimensional theory. The boundary of the Poincar\'e patch of anti-de Sitter space is at $r \to \infty$, and has topology $R^d$; one can see that the induced metric on a slice at fixed $r$ is just an overall constant times the $d$-dimensional Minkowski metric. A null ray may leave $r=0$, reach $r=\infty$ and return in finite proper time for a timelike observer at the origin. As a result, to understand physics in anti-de Sitter space it is not enough to specify initial conditions on a spacelike hypersurface like $t=0$; we must also set boundary conditions at infinity. Thus even for fixed bulk dynamics, each set of boundary conditions defines a different physical theory living in the space. In the AdS/CFT correspondence, these boundary conditions are intimately related to the connection between the gravity theory and the dual field theory. The dual field theory lives on a space with the topology of the AdS boundary, and it is often convenient to think of it as ``living on the boundary". For this reason, we will define the correspondence more precisely before we impose the AdS boundary conditions, so we can discuss what we are doing on both sides of the duality simultaneously. Once we put things inside the geometry, the metric will be deformed and it will not be precisely AdS space anymore. Sometimes we will want to put some pretty big things in there, like black holes. However, such deformations will still leave the geometry asymptotically approaching AdS at infinity. Thus we will be interested in asymptotically AdS geometries of the form \begin{eqnarray} \label{AsymptoticAdS} ds^2 = e^{2A(r)} (- h(r) dt^2 + d\vec{x}^2) + e^{2B(r)} dr^2 \,, \end{eqnarray} along with other fields depending on $r$, reducing to the AdS metric (\ref{AdSMetric}) as $r \to \infty$. 
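A standard example of such an asymptotically AdS geometry, anticipating the black hole thermodynamics mentioned in the overview, is the planar AdS-Schwarzschild solution, which in the form (\ref{AsymptoticAdS}) has \begin{eqnarray} e^{2A(r)} = {r^2 \over L^2} \,, \quad \quad h(r) = 1 - {r_0^d \over r^d} \,, \quad \quad e^{2B(r)} = {L^2 \over r^2 h(r)} \,, \end{eqnarray} describing a black brane with a horizon at $r = r_0$; as $r \to \infty$, $h \to 1$ and the metric indeed approaches (\ref{AdSMetric}). 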
Such a deformation breaks a certain set of symmetries of the space; as we discuss momentarily, this must correspond to breaking the analogous symmetries on the field theory side. It is possible to consider even less symmetric geometries, for example breaking spatial homogeneity; such examples are quite interesting, but we will not have time to consider them here. \subsection{Validity of classical gravity, large $N$ and some famous cases} A theory of gravity always has an associated Newton's constant $G_N$, which we may define as the coefficient of the Einstein term in the action: $S = (1 / 16 \pi G_N) \int d^Dx \sqrt{-g} \, R$. From $G_N$ we can define the Planck length as $G_N = \ell_P^{D-2}$, characterizing the length scale at which quantum gravity effects should appear. String theory has another length scale, the string scale $\ell_s$, characterizing the size of the string. The strongest form of the gauge/gravity duality asserts that the full quantum theory of gravity on a particular spacetime is dual to the appropriate field theory, for all values of their respective parameters. We, however, will stay within the classical gravity limit, which says that the AdS radius $L$, which is the characteristic length of the geometry, is much bigger than the Planck length $\ell_P$ and the string length $\ell_s$: \begin{eqnarray} \label{Limits} L \gg \ell_P, \ell_s \,. \end{eqnarray} Thus the quantum or stringy nature of the geometry is not visible on the characteristic scale of the curvature. In the dual field theory, this corresponds to the limit of a large number of degrees of freedom (large $N$) and strong coupling. In the case of the gauge theories we study, the large number of degrees of freedom means a large number of colors for the gauge group (for example, $SU(N)$ gauge theory with large $N$). Studying this large-$N$ limit as a way to simplify theories like QCD predates the AdS/CFT correspondence by two decades (for a brief review see \cite{Aharony:1999ti, McGreevy:2009xe}, or for more detail Coleman's lectures ``1/N" in \cite{Coleman}). To avoid divergent loop diagrams with an infinite number of fields running in the loop, one must simultaneously tune the coupling to zero while keeping fixed an effective coupling, the 't Hooft coupling, built from the square of the vanishing gauge coupling multiplied by the diverging number of colors. The limit (\ref{Limits}) corresponds to the large $N$, large 't Hooft coupling limit of the dual field theory. To be a little more precise, here are two fundamental examples of the AdS/CFT correspondence, both discovered in \cite{Maldacena:1997re}: \begin{itemize} {\item Type IIB string theory has, among other fields, a metric and a four-form gauge field with a self-dual five-form field strength. This theory on $AdS_5 \times S^5$ with $N$ units of five-form flux on each five-dimensional spacetime factor is dual to ${\cal N}=4$ Super-Yang-Mills theory in four spacetime dimensions with gauge group $SU(N)$, a particular non-abelian gauge theory cousin of QCD consisting of gauge fields, 4 fermions and 6 scalars all in the adjoint representation of $SU(N)$, that is also an exactly superconformal quantum field theory. The limit (\ref{Limits}) reduces us to Type IIB supergravity, and corresponds in the field theory to $N \to \infty$ with the 't Hooft coupling $\lambda \equiv g^2_{\rm YM} N$ fixed and large.} {\item Eleven dimensional M-theory has a metric and a three-form gauge potential with a four-form field strength. 
This theory on $AdS_4 \times S^7$ with $N$ units of four-form flux on $AdS_4$ corresponds to ABJM theory \cite{Aharony:2008ug}, a superconformal Chern-Simons-matter theory in three dimensions with gauge groups $U(N)_k \times U(N)_{-k}$, where $k$ is the Chern-Simons level. The limits (\ref{Limits}) again correspond to $N \to \infty$, $\lambda$ fixed and large where the 't Hooft coupling is now $\lambda \equiv N/k$.} \end{itemize} We will discuss these in a little more detail in section~\ref{NScalingSec}. Studying these very special, highly-symmetric superconformal theories in their strongly coupled limits can be interesting in its own right. ${\cal N}=4$ Super-Yang-Mills has been called ``the simple harmonic oscillator of the $21^{st}$ century" for its iconic role as a prototype gauge theory and quantum field theory that can be studied from many angles and generalized in many directions, while ABJM theory appears to play no less of a fundamental role in the pantheon of quantum field theories as the avatar of three-dimensional Chern-Simons-matter theories, and is less famous likely just due to our (admittedly understandable) preference for four-dimensional physics.\footnote{The final member of the triumvirate of highly special superconformal theories is the six-dimensional $(2,0)$ theory, which also has a gravity dual, and since the others can be obtained from it under dimensional reduction it may be the most special of all. But because it is the least-well understood from the field theory perspective, we will not use it as an example.} But we are also motivated by less symmetric strongly coupled systems: \begin{enumerate} {\item Quantum chromodynamics (QCD) is the theory of the strong nuclear force, but cannot be studied at low energies in perturbation theory as quantum electrodynamics (QED) can be, due to its strong coupling. The large-$N$ limit in gauge theories was developed in an attempt to make QCD tractable. Anything we can learn about its strong coupling behavior from AdS/CFT would be highly desirable.} {\item High-$T_c$ superconductors are strongly correlated electron materials that arise in the study of condensed matter systems. They are effectively two-spatial-dimensional, and their phase diagrams include a so-called ``strange metal" phase that appears not to be describable in terms of an effective quasiparticle description. Again, a holographic description would be most useful.} \end{enumerate} In what follows, we will use these systems as motivation and examples for what kind of physical systems the gauge/gravity correspondence can make contact with. \section{The AdS/CFT dictionary} The AdS/CFT correspondence connects a gravity theory living in (asymptotically) $AdS_{d+1}$ to a field theory living on $R^d$. The first part of the matching between the two theories is the matching of symmetries. After describing this we will establish the holographic dictionary between gravity theory fields and gauge theory operators, give the mathematical statement of the correspondence, and explore aspects of it in some detail for a single scalar field. \subsection{Symmetries} \medskip\noindent \underline{Spacetime symmetries} A Lorentz-invariant field theory living on $R^d$ is symmetric with respect to the $d$-dimensional Poincar\'e group, including translations, rotations and Lorentz boosts. 
When the theory is also conformal, the spacetime symmetry group is enhanced to the conformal group $SO(d,2)$, which includes the symmetries above along with the scale transformation, \begin{eqnarray} D: t \to \lambda t \,, \quad \vec{x} \to \lambda \vec{x}\,, \end{eqnarray} and the $d$ special conformal transformations, which we won't need to discuss. Conformal field theories describe physics invariant under this overall scale transformation, which means they lack any length or mass scale. The ultraviolet (high-energy) and infrared (low-energy) limits of a general quantum field theory are CFTs, and as such CFTs represent fundamental building blocks for the study of quantum field theory, as well as being relevant to physical systems from critical phenomena to string theory. $AdS_{d+1}$ is a higher-dimensional space, yet its isometry group --- the group of coordinate transformations that preserve the metric --- is also precisely $SO(d,2)$. The translations, rotations and Lorentz boosts on the coordinates $t, \vec{x}$ exactly match those acting on the analogous coordinates in the field theory; it is this matching of symmetries that really lets us identify the space the field theory lives on with a constant-$r$ slice of the gravity theory. Furthermore, the AdS analog of the scale transformation is \begin{eqnarray} \label{GravityScale} D: t \to \lambda t \,, \quad \vec{x} \to \lambda \vec{x}\,, \quad r \to {r \over \lambda} \,. \end{eqnarray} This acts as a scale transformation on the $t, \vec{x}$ coordinates, while also moving us in the radial direction; one can check directly that the metric (\ref{AdSMetric}) is invariant under this combined transformation. This identification leads us to one of the profound aspects of AdS/CFT, \begin{center} {\em Moving in the radial direction of the gravity theory corresponds to a change of scale in the field theory.} \end{center} As we shall see, {\em breaking} this symmetry on one side corresponds to breaking it on the other side, as well. Thus asymptotically AdS geometries that have something sitting at some value of $r$, or otherwise deviate from pure AdS as we move in the radial coordinate, correspond to a field theory where scale invariance is broken, and moving through the radial direction corresponds to moving through the scales of the theory. \medskip\noindent \underline{Global symmetries} A global symmetry $G$ of the field theory is realized by a {\em gauge} symmetry $G$ in the gravity theory: each global conserved current $J$ in the field theory is associated to a fluctuating gauge field $A$ on the gravity side. For an example of this, ${\cal N}=4$ Super-Yang-Mills has an $SO(6)$ global symmetry under which the gauge field is invariant, the fermions transform in the ${\bf 4}$ and the scalars in the ${\bf 6}$. The gravity dual is type IIB string theory on $AdS_5 \times S^5$, and in reducing from 10 to 5 dimensions, the spacetime metric modes with one index on the $S^5$ reduce to $SO(6)$ gauge fields, realizing the $SO(6)$ isometry group of the compact $S^5$ factor. This $SO(6)$ gauge symmetry of the gravity theory matches the $SO(6)$ global symmetry of the field theory. Similarly, ABJM theory has an $SO(8)$ symmetry that is matched to the $SO(8)$ isometry group of the compact $S^7$ in $AdS_4 \times S^7$ on the gravity side. \medskip\noindent \underline{Fermionic symmetries} Fermionic symmetries (``supersymmetries") must also match between the field theory and gravity sides. 
For ${\cal N}=4$ Super-Yang-Mills, the ${\cal N}=4$ four-dimensional spinor supercurrents give us 16 supercharges worth of fermionic symmetries; however, the exact conformal invariance of the theory leads to 16 more ``superconformal" fermionic symmetries. These 32 supercharges combine with the conformal group $SO(4,2) \simeq SU(2,2)$ and the global symmetry group\footnote{The fact that $SO(6)$ does not commute with the supercharges makes it an {\em R-symmetry group}. R-symmetries commute with the spacetime symmetries, but neither bosonic group commutes with the supercharges, so all of them are joined into one larger (super)group.} $SO(6) \sim SU(4)$ to make a supergroup called $SU(2,2|4)$. Meanwhile on the gravity side, type IIB string theory on $AdS_5 \times S^5$ also preserves 32 supercharges (the maximal amount of supersymmetry in ten dimensions) and again the overall group of symmetries is the supergroup $SU(2,2|4)$. We won't have anything to say about supergroups, but it is an essential check of the correspondence that they too match on both sides. \medskip\noindent \underline{Gauge symmetries} The field theory in general will also have a gauge symmetry --- the ``gauge" part in ``gauge/gravity correspondence". In the case of ${\cal N}=4$ Super-Yang-Mills, this is $SU(N)$. However, there is {\em no} appearance of this gauge group on the gravity side, except for the presence of the parameter $N$ as the total five-form flux. Is this a problem for the correspondence? It is not. A gauge symmetry, fundamentally, is not a true symmetry at all, even though we use the language of symmetries to describe it. Instead it is a redundancy of description; physical quantities must be gauge-invariant. When one has a duality, it is not necessary that the gauge symmetries match on both sides of the duality, since the different gauge symmetries are characteristic of the different degrees of freedom used. Only the gauge-invariant variables must match across the duality. This is the case with AdS/CFT: only gauge-invariant operators in the field theory will match with fields on the gravity side. \subsection{Field/operator correspondence} Having described the matching between the symmetries of the two sides of the correspondence, we next turn to the connection between the variables. A fundamental part of the AdS/CFT dictionary is that there is an association between each {\em field} $\phi(r, \vec{x}, t)$ of the gravity theory and a {\em gauge-invariant operator} ${\cal O}(\vec{x}, t)$ of the field theory: \begin{eqnarray} \label{Dictionary} \phi(r, \vec{x}, t) \quad \leftrightarrow \quad {\cal O}(\vec{x}, t) \,. \end{eqnarray} Consider for concreteness a scalar field $\phi(r, \vec{x}, t)$ on the gravity side; we will say more about other spin fields later. Assume its quadratic action takes the Klein-Gordon form, \begin{eqnarray} S_{\rm KG} = {1 \over 2 \kappa^2} \int d^{d+1}x \sqrt{-g} \left( - {1 \over 2}(\partial \phi)^2 - {1 \over 2}m^2 \phi^2 \right) \,, \end{eqnarray} where we have included an overall factor $1/2\kappa^2$ with mass dimension $d-1$; in many supergravity theories all terms share the same overall normalization with the Einstein-Hilbert action, which renders the scalars dimensionless. Newton's constant $G_N$ of the gravity theory is then related by $\kappa^2 = 8 \pi G_N$. This normalization will not affect the equations of motion, but the overall value of the action will be important, as we will see. 
The corresponding Klein-Gordon equation of motion is \begin{eqnarray} \label{KGEqn} \left( - {1 \over\sqrt{-g}} \partial_\mu \sqrt{-g} g^{\mu\nu} \partial_\nu + m^2 \right) \phi = 0 \,. \end{eqnarray} Near the boundary $r \to \infty$ the geometry approaches AdS space. The solution to (\ref{KGEqn}) then approaches \begin{eqnarray} \label{KGSoln} \phi(r \to \infty, \vec{x}, t) = {\alpha(\vec{x},t) L^{2\Delta_-} \over r^{\Delta_-}} + \cdots + {\beta(\vec{x},t)L^{2\Delta_+} \over r^{\Delta_+}} + \cdots \,, \end{eqnarray} where we defined the exponents \begin{eqnarray} \Delta_\pm \equiv {d \over 2} \pm \sqrt{\left(d \over 2\right)^2 + m^2 L^2} \,. \end{eqnarray} Since (\ref{KGEqn}) is a second-order differential equation, it has two independent solutions on the boundary, represented by the leading $\alpha(\vec{x}, t)$ and the subleading $\beta(\vec{x}, t)$. (Note that there may be other terms bigger than the $\beta(\vec{x},t)$ term, as indicated by the dots in the middle, but these all depend on $\alpha(\vec{x},t)$ and vanish in the $\alpha \to 0$ limit; $\beta$ is the leading independent term.) Since $\phi(r, \vec{x}, t)$ is a coordinate scalar, the scaling isometry (\ref{GravityScale}) indicates that $\alpha(\vec{x}, t)$ and $\beta(\vec{x}, t)$ must scale in such a way so as to cancel the transformations of $r^{\Delta_-}$ and $r^{\Delta_+}$; this implies that they behave as $d$-dimensional objects with dimensions $\Delta_-$ and $\Delta_+$, respectively. We inserted the factors of $L$ into (\ref{KGSoln}) so these are their engineering dimensions, as well.\footnote{Using the variable $z\equiv L^2/r$ absorbs all the factors of $L$, and this can be calculationally more convenient. We mostly stick with the variable $r$ as it emphasizes that the boundary lies at infinite distance, and it is more commonly used in the solutions we will encounter.} We must now impose boundary conditions at $r \to \infty$ to make our AdS theory well-defined. Constraining half the degrees of freedom at the boundary is sufficient. The simplest choice is to remove the leading term, $\alpha(\vec{x}, t) = 0$. More generally, we can constrain $\alpha(\vec{x}, t)$ to take specified values: \begin{eqnarray} \label{RegularQuant} \alpha(\vec{x}, t) = J(\vec{x}, t) \,, \end{eqnarray} where $J(\vec{x}, t)$ is chosen by us and fixed. The other near-boundary solution, $\beta(\vec{x}, t)$ is unspecified and allowed to fluctuate dynamically. Then, initial conditions on a spacelike hypersurface $\Sigma$ plus these boundary conditions lead to a unique time evolution for the field throughout spacetime. When the boundary conditions (\ref{RegularQuant}) are imposed, we refer to this as the {\em regular quantization}. We will associate the fluctuating mode $\beta(\vec{x}, t)$ with the dual field theory operator ${\cal O}$, in a way which will become more precise momentarily, and thus the dimension of ${\cal O}$ is the dimension of $\beta$, \begin{eqnarray} \label{RegularDim} \Delta_{{\cal O}} = \Delta_+ = {d \over 2} + \sqrt{\left( d \over 2\right)^2 + m^2 L^2}\,. \end{eqnarray} We are now prepared to make the statement of the gauge/gravity correspondence. 
It can be viewed as an equality of path integrals, where the gravity path integral with boundary conditions (\ref{RegularQuant}) on fields is equal to the field theory path integral with the same functions $J(\vec{x},t)$ turned on as {\em sources} for the operators \cite{Gubser:1998bc, Witten:1998qj}: \begin{eqnarray} \label{AdSCFT} Z_{\rm grav}[\phi; \alpha(\vec{x},t) = J(\vec{x},t)] = Z_{\rm CFT}[{\rm source \ for} \ {\cal O}(\vec{x},t) \ {\rm is} \ J(\vec{x},t) ] \,. \end{eqnarray} Of course, there is a difficulty in general with characterizing the left-hand-side of this equation at all. For arbitrary $N$ and $\lambda$ it is a quantum gravity theory which we are not sure how to write down.\footnote{In fact, away from the classical gravity limit, a reasonable way to proceed is to {\em define} the left-hand-side as whatever it needs to be to satisfy this equation. We are only comfortable doing this because of the nontrivial checks on the correspondence that can be carried out in the cases where we can characterize the left-hand-side.} In these lectures, however, we are working in the large-$N$, large 't Hooft coupling limit. In that case, the left-hand-side reduces to a saddle point, localizing the fields to solutions to the classical gravitational equations of motion. We then have \begin{eqnarray} \label{AdSCFT2} \exp{i S_{\rm grav}[\phi; \alpha = J]} = \left\langle \exp{i \int d^dx \; J(\vec{x}, t) {\cal O}(\vec{x}, t)} \right\rangle_{\rm CFT} \,. \end{eqnarray} Now the left-hand-side is to be understood as the classical gravitational action, with both quantum and stringy corrections neglected, evaluated on solutions of the classical equations of motion. The right-hand-side is unchanged from the previous expression, and just written in a different way. Let us say again what we have done in words: each gravity field is associated to a field theory operator. Imposing boundary conditions for the gravity fields corresponds to turning on sources for the corresponding operators. The field theory path integral with these sources is equal to the exponential of the gravity action with the corresponding boundary conditions. A few remarks: \begin{itemize} {\item The equations (\ref{AdSCFT}) and (\ref{AdSCFT2}) are written as if there is only one scalar field $\phi$ and one spinless dual operator ${\cal O}$, but in general there will be any number of fields of different spins. Each field will have a corresponding boundary condition corresponding to turning on a source for the dual operator. We will discuss the other spins more later, but for now will stick to the example of a scalar field $\phi$.} {\item Since $\alpha(\vec{x},t)$ has dimension $\Delta_- = d - \Delta_+$ as far as the coordinates $\vec{x}$ and $t$ are concerned, so must $J(\vec{x},t)$, and thus it has the correct dimension to be a source for the $\Delta_+$-dimensional operator ${\cal O}(\vec{x}, t)$ in $d$ dimensions.} {\item One can pass from a Lorentzian formulation to a Euclidean one by replacing the $i$ factors in (\ref{AdSCFT}) and (\ref{AdSCFT2}) with minus signs.} \end{itemize} We now flesh out this correspondence by studying a few important details. \subsection{Relevant operators and the Breitenlohner-Freedman bound} Looking at the formula (\ref{RegularDim}) for the operator dimension, we see that requiring $m^2 \geq 0$ gives us dual operators of dimension $\Delta \geq d$: that is, irrelevant and marginal operators. What about relevant operators? 
In quantum field theory in Minkowski space, we are used to requiring $m^2 \geq 0$ for a scalar field in a stable vacuum. If $m^2 < 0$, unstable modes will exist, signaling that we are not in the correct vacuum. In anti-de Sitter space, however, the story is a little different. As shown by Breitenlohner and Freedman, it is possible to have $m^2 < 0$ without leading to an instability. The total energy receives negative contributions from the mass term, but compensating positive contributions from the kinetic energy term. As a result, the net energy can be positive, with no instabilities \cite{Breitenlohner:1982jf, Breitenlohner:1982bm}. However, this is only possible if the mass-squared is not too negative. In order to have a perturbatively stable vacuum, scalar masses must satisfy the {\em Breitenlohner-Freedman (BF) bound}, \begin{eqnarray} m^2 L^2 \geq - {d^2 \over 4} \,. \end{eqnarray} The dimension formula (\ref{RegularDim}) continues to hold for allowed negative-mass cases, and we see that scalars with mass-squared going down to the BF bound get us dual operators with dimensions going down to $\Delta = d/2$; we now have found some of the relevant operators. (There is a way to take the dimension down even further, to $\Delta = d/2 - 1$, as we will see momentarily.) The case where the mass-squared precisely saturates the BF bound $m^2 L^2 = -d^2/4$ is special, because here $\Delta_+ = \Delta_-$ and the asymptotic solution (\ref{KGSoln}) no longer holds. Instead we find \begin{eqnarray} \label{BFBoundSaturating} \phi(r \to \infty, \vec{x}, t) = {\alpha(\vec{x},t) L^{d/2} \log r \over r^{d/2}} + {\beta(\vec{x},t) L^{d/2}\over r^{d/2}} + \cdots \,, \end{eqnarray} where a logarithm has shown up to distinguish the two independent solutions. We can take the boundary condition $\alpha(\vec{x},t) = J(\vec{x},t)$ to correspond to a source for a dimension-$d/2$ operator, as before. \subsection{Holographic renormalization and one-point functions} The gravity side of the correspondence involves the gravitational action $S_{\rm grav}$, evaluated on solutions to the equations of motion. Unfortunately, this turns out not to be finite! Let's see how this goes for our example of a single scalar field $\phi$, where $S_{\rm grav} = S_{\rm KG}$. We'll begin by regulating things, cutting off the spacetime at a large value $r=R$. Integrating $S_{\rm KG}$ by parts, we find \begin{eqnarray} \label{RegulatedKG} S_{\rm KG} = {1 \over 4\kappa^2 } \int d^{d+1}x \sqrt{-g} \phi \left( \square - m^2\right) \phi - {1 \over 4\kappa^2} \int_{r=R} d^dx \sqrt{-h}\, \phi n^\mu \partial_\mu \phi \,, \end{eqnarray} where the second term is a boundary term evaluated at $r=R$, with $h_{\mu\nu}$ the induced boundary metric and $n^\mu$ the outward-pointing unit normal vector. The bulk term vanishes on solutions to the equations of motion, but plugging in the asymptotic $\phi$ solution (\ref{KGSoln}) and using $\sqrt{-h} \sim r^d$, $n^r \sim r$ in $AdS_{d+1}$, we find that the boundary term diverges as we take $R \to \infty$. Thus it seems like our impressive fundamental gauge/gravity relation involves a left-hand-side that's infinite! But that's okay; the right-hand-side is infinite too. We are dealing with a quantum field theory after all, and quantum field theories have ultraviolet divergences if we take them to be valid to arbitrarily high energy scales (arbitrarily short distances). 
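To exhibit the divergence explicitly, one can insert the asymptotics (\ref{KGSoln}) into the boundary term of (\ref{RegulatedKG}); keeping only the two displayed modes (the terms hidden in the dots contribute further subleading divergences proportional to $\alpha$), the result is \begin{eqnarray} - {1 \over 4\kappa^2} \int_{r=R} d^dx \sqrt{-h}\, \phi\, n^\mu \partial_\mu \phi = {1 \over 4\kappa^2} \int d^dx \left( \Delta_-\, \alpha^2\, L^{4\Delta_- - d - 1}\, R^{\Delta_+ - \Delta_-} + d\, \alpha \beta\, L^{d-1} + \ldots \right) \,, \end{eqnarray} so the piece quadratic in the source grows as a positive power of the cutoff (for $\Delta_+ > \Delta_-$), while the $\alpha\beta$ cross term remains finite as $R \to \infty$. 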
So to make sense of AdS/CFT, we need to do what we always do with quantum field theory: we need to regulate it to render the divergences finite, and then renormalize by adding suitable counterterms so that we get finite results when the regulator is removed. Now we're just going to carry out this regularization and renormalization on the gravity side. The scale transformation (\ref{GravityScale}) tells us that large distances in gravity ($r \to \infty$) match up with short distances in field theory ($t, \vec{x} \to 0$) and thus it is natural that the short-distance, ultraviolet divergences of QFT show up as the long-distance divergences in AdS gravity. Cutting off the geometry at $r=R$ as in (\ref{RegulatedKG}) is precisely an ultraviolet regulator as far as the field theory is concerned. Since (\ref{RegulatedKG}) is regulated, our next step must be to renormalize by adding suitable counterterms. This is called ``holographic renormalization" \cite{Bianchi:2001de, Bianchi:2001kw} (for a review see \cite{Skenderis:2002wp}). We will demonstrate how it works in a particular example: let $d=3$ so we are dealing with $AdS_4/{\rm CFT}_3$ and take a scalar field with mass-squared $m^2 L^2 = -2$. (This example is relevant for the gravity dual to ABJM theory.) Now $\Delta_+ = 2$, $\Delta_- = 1$ and the asymptotic behavior of the scalar is \begin{eqnarray} \phi(r \to \infty, \vec{x}, t) = {\alpha(\vec{x},t)L^2 \over r} + {\beta(\vec{x},t) L^4\over r^2} + \cdots \,. \end{eqnarray} Plugging this into the KG action, the bulk term vanishes and we are left with the boundary term \begin{eqnarray} S_{\rm KG} = {L^2 \over 4 \kappa^2}\int d^3x \left( {R \alpha^2 \over L^2} + 3 \alpha \beta \right) \,, \end{eqnarray} which is implicitly evaluated at $r=R$, diverging as $R \to \infty$. We will deal with this divergence by adding a new piece to the action, a boundary counterterm, \begin{eqnarray} \label{BdyCounter} S_{\rm bdy} = - {1 \over 4\kappa^2 L} \int d^3x \sqrt{-h}\, \phi^2 \,. \end{eqnarray} The divergence of this boundary counterterm cancels the divergence of the on-shell Klein-Gordon action, leaving us with the finite result \begin{eqnarray} S_{\rm KG} + S_{\rm bdy} = {L^2 \over 4 \kappa^2} \int d^3x \, \alpha \beta \,. \end{eqnarray} Our choice of boundary counterterm is also intimately related to our boundary conditions (\ref{RegularQuant}) constraining $\alpha(\vec{x}, t)$ (for a nice discussion of this, see \cite{Marolf:2006nd}). The relationship comes from making sure our solution is a true extremum of the action, boundary terms included. Consider varying the field \begin{eqnarray} \phi(r, \vec{x}, t) \to \phi(r, \vec{x}, t) + \delta \phi(r, \vec{x}, t) \,. \end{eqnarray} This induces variations $\delta \alpha(\vec{x}, t)$ and $\delta \beta (\vec{x}, t)$. We want the action, evaluated on a solution, to be stationary under such a variation. The variation is \begin{eqnarray}\nonumber \delta S_{\rm KG} + \delta S_{\rm bdy} &=& {1 \over 2 \kappa^2}\int d^4x \sqrt{-g} \, \delta \phi \left( \square \phi + {2 \over L^2} \phi \right) \\ &+& {L^2 \over 2 \kappa^2}\int d^3x \left( {R\over L^2} (1 - 1) \alpha \delta \alpha + (1 -1) \alpha \delta \beta + (2 - 1) \beta \delta \alpha \right) \,. \end{eqnarray} Solving the Klein-Gordon equation ensures the bulk part is zero. Moreover, we can see that the boundary term (\ref{BdyCounter}) cancels a divergent part of the variation, as well as a finite $\delta \beta$ part. 
We are left with \begin{eqnarray} \label{SVariation} \delta S_{\rm KG} + \delta S_{\rm bdy} = {L^2 \over 2 \kappa^2 }\int d^3x \, \beta \delta \alpha \,. \end{eqnarray} For a general boundary condition this would not vanish, but for our boundary condition (\ref{RegularQuant}), $\alpha$ is not allowed to fluctuate and so its variation must be zero. Thus the full action, bulk plus boundary terms, is indeed stationary on the solution. Thus the boundary counterterm has done two things for us: \begin{enumerate} {\item Made the total action finite as the regulator is removed, and} {\item Made the total action stationary for solutions once our boundary conditions are imposed.} \end{enumerate} The variation of the action (\ref{SVariation}) also provides our path to correlation functions. Let's say we want to calculate the one-point function of the field theory operator ${\cal O}$ dual to $\phi$. To get this we vary with respect to the source: \begin{eqnarray} \langle {\cal O}(\vec{x}, t) \rangle = {1 \over i} {\delta \over \delta J(\vec{x}, t)} \left\langle \exp{i \int d^dx \, J(\vec{x}, t) {\cal O}(\vec{x}, t)} \right\rangle \Bigg|_{J = 0}\,, \end{eqnarray} where we assume we have normalized the partition function to 1 in the absence of sources. But using the correspondence, this can be translated into a statement about the gravity action and its response to varying its boundary conditions: \begin{eqnarray} \langle {\cal O}(\vec{x}, t) \rangle = {1 \over i} {\delta \over \delta \alpha(\vec{x}, t)} e^{i S_{\rm grav}} \Bigg|_{\alpha = 0} = {\delta S_{\rm grav}\over \delta \alpha(\vec{x}, t)} \Bigg|_{\alpha = 0}\,. \end{eqnarray} In our example of a single scalar field, (\ref{SVariation}) tells us this is simply \begin{eqnarray} \label{OnePointReg} \langle {\cal O}(\vec{x}, t) \rangle = {L^2 \over 2 \kappa^2 }\beta(\vec{x}, t) \,. \end{eqnarray} Thus for our example, we have \begin{eqnarray} \phi(r \to \infty, \vec{x}, t) = {L^2 J(\vec{x},t) \over r} + { 2 \kappa^2 L^2 \langle{\cal O} (\vec{x},t) \rangle\over r^2} + \cdots \,. \end{eqnarray} Just as the leading term near the boundary corresponds to the source for the dual operator, and is constrained by our boundary conditions, the subleading term which is allowed to fluctuate corresponds to the expectation value of the dual operator. This kind of relationship generalizes to all fields in all dimensions. We may think of the two terms as the stimulus and the response: the source term $\alpha(\vec{x}, t)$ pokes the system, and combined with the initial conditions this determines the response $\beta(\vec{x}, t)$ of the system. We will discuss how the source communicates its influence to the response in section~\ref{CorrelationSec}, where we discuss higher-point correlation functions. \subsection{Alternate quantization and the other relevant operators} By allowing our scalar mass-squared to go down to the Breitenlohner-Freedman bound, we have been able to find gravity duals for operators with dimension down to $\Delta = d/2$. However, some physical operators in known systems have dimension smaller than this. How can we realize these operators in the gravity dual? The answer turns out to be to change our boundary conditions. For a restricted range of $m^2$ values down to (but not including) the BF bound, \begin{eqnarray} \label{AltRange} - {d^2 \over 4} < m^2 L^2 \leq - {d^2 \over 4} + 1 \,, \end{eqnarray} it is possible to exchange the roles of $\alpha$ and $\beta$: now we will take $\beta$ to be fixed, and allow $\alpha$ to fluctuate. 
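As a concrete example, in $d=3$ the window (\ref{AltRange}) is $-9/4 < m^2L^2 \leq -5/4$, over which $\Delta_- = {3 \over 2} - \sqrt{{9 \over 4} + m^2L^2}$ ranges from $1/2$ up to (but not including) $3/2$: precisely the dimensions between the unitarity bound and the lowest value reached by the regular quantization. 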
Since we are changing our boundary conditions, we will have to change our boundary terms, as well. Our example of the field with $d=3$ and $m^2 L^2 = -2$ lies in the range (\ref{AltRange}). Consider the alternate boundary terms: \begin{eqnarray} S_{\rm bdy, alt} = {1 \over 4 \kappa^2 L} \int d^3x \sqrt{-h} \, \phi^2 + {1 \over 2 \kappa^2}\int d^3x \sqrt{-h} \, \phi n^\mu \partial_\mu \phi \,. \end{eqnarray} We have changed the sign of the original boundary term, and added a second term. This combination also renders the total action finite: \begin{eqnarray}\nonumber S_{\rm KG} + S_{\rm bdy, alt} &=& {L^2 \over 4 \kappa^2}\int d^3x \left( {R\over L^2} \left( 1 + 1-2 \right) \alpha^2 + \left( 3 +2 - 6 \right) \alpha \beta \right) \\ &=& - {L^2 \over 4 \kappa^2} \int d^3x \,\alpha \beta\,, \end{eqnarray} and leads to the variation of the action \begin{eqnarray} \label{AltSVariation} \nonumber \delta S_{\rm KG} + \delta S_{\rm bdy, alt} &=& {L^2 \over 2 \kappa^2 } \int d^3x \left( {R \over L^2}(1 +1 - 2) \alpha \delta \alpha + (1 + 1 - 3) \alpha \delta \beta + (2 +1 - 3) \beta \delta \alpha \right) \\ &=& - {L^2 \over 2 \kappa^2 }\int d^3x \, \alpha \delta \beta \,. \end{eqnarray} Thus the solutions leave the action stationary when we constrain $\beta(\vec{x}, t)$, \begin{eqnarray} \label{AltSource} \beta(\vec{x}, t) =J_{\rm alt}(\vec{x}, t) \,, \end{eqnarray} and allow $\alpha(\vec{x}, t)$ to fluctuate.\footnote{When working in asymptotically AdS space, the relation (\ref{AltSource}) can become contaminated by $\alpha$ terms unless one works in ``Fefferman-Graham" coordinates where $g_{rr} = L^2/r^2$ exactly.} The alternate quantization variation (\ref{AltSVariation}) then implies \begin{eqnarray} \langle {\cal O}_{\rm alt}(\vec{x}, t) \rangle =- {L^2 \over 2 \kappa^2}\alpha(\vec{x}, t) \,. \end{eqnarray} Now $\alpha$ and $\beta$ have switched roles. Thus the dual operator ${\cal O}_{\rm alt}$ must have the scaling dimension of $\alpha$, which means $\Delta_{{\cal O}_{\rm alt}} = \Delta_-$. \begin{figure} \begin{center} \includegraphics[scale=0.7]{DimensionMass} \caption{The plot of mass-squared $m^2$ of the gravity scalar $\phi$ versus the dimension $\Delta$ of the dual field theory operator ${\cal O}$, for AdS$_4$/CFT$_3$ ($d=3$). Heavy dots indicate cases with integer or half-integer dimension. Values of $\Delta$ associated to the regular quantization and alternate quantization are indicated, as is the value of $\Delta$ that saturates the BF bound $m^2 L^2 = -9/4$, and the unitarity bound $\Delta \geq 1/2$. ${\cal N}=8$ gauged supergravity in four dimensions has 70 scalars at $m^2L^2=-2$, 35 in alternate quantization dual to the $\Delta = 1$ operator Tr $X^2$, and 35 in regular quantization dual to the $\Delta = 2$ operator Tr $\lambda^2$. \label{fig:DimensionMass}} \end{center} \end{figure} Using masses over the range (\ref{AltRange}) thus brings us all the way down to $\Delta = d/2-1$. As it turns out, this is as low as you can go: unitarity and the conformal algebra of a CFT imply $\Delta \geq d/2-1$. The gravity side also stops there: if the mass-squared is outside the range (\ref{AltRange}), there are no boundary terms compatible with the alternate boundary conditions to render the action finite, and we must use the regular quantization. The alternate quantization may seem a little esoteric, but it is physically essential. Consider the example of M-theory on $AdS_4 \times S^7$, dual to ABJM theory. 
In the gravity limit M-theory becomes eleven-dimensional supergravity, and we can reduce this theory on $S^7$, producing towers of fields. The ``bottom" set of fields in these towers, consisting of the four-dimensional graviton and its superpartners, constitutes the fields of four-dimensional, ${\cal N}=8$ gauged supergravity. This theory has 70 scalars, all with $m^2 L^2 = -2$. It turns out that supersymmetry requires half of these scalars to use the regular quantization, and the other half the alternate quantization. This fits perfectly with the dual ABJM theory, where there are 35 scalar bilinears Tr $X^2$ with $\Delta = 1$, and 35 fermion bilinears Tr $\lambda^2$ with $\Delta = 2$. We see the importance of the choice of quantization, that is, of the boundary conditions: even if we kept the same fields on the gravity side, changing the boundary conditions would change the operator content of the field theory dual. The same bulk gravity action with different boundary conditions, and hence different boundary counterterms, is truly a different theory. To illustrate these ideas, in figure~\ref{fig:DimensionMass} we plot the relationship between the conformal dimension $\Delta$ and the mass-squared $m^2L^2$ for a scalar with $d=3$. We indicate the range of $\Delta$ corresponding to regular quantization, the range of $\Delta$ corresponding to alternate quantization, and the value $\Delta = 3/2$ where the associated mass saturates the BF bound $m^2L^2 = -9/4$. \subsection{Fields with spin} Bosonic fields on the gravity side like the metric and gauge fields also satisfy second order equations, and analogously to the scalar described above, one independent solution near the boundary corresponds to a source for the dual operator, while the other solution is proportional to the one-point function. As mentioned before when talking about symmetries, vector fields on the gravity side are dual to conserved currents on the field theory side: \begin{eqnarray} A_\mu \longleftrightarrow J^\mu \,. \end{eqnarray} Let us outline the holographic renormalization of the $A_\mu$ field, briefly, since many aspects are analogous to the case of the scalar. The quadratic Maxwell action \begin{eqnarray} S_{\rm Max} = {1 \over 2 \kappa^2} \int d^{d+1}x \sqrt{-g} \, \left( - {1 \over 4} F_{\mu\nu}F^{\mu\nu}\right)\,, \end{eqnarray} with $F_{\mu\nu} \equiv \partial_\mu A_\nu - \partial_\nu A_\mu$ leads to the equation of motion \begin{eqnarray} {1 \over \sqrt{-g}} \partial_\mu \sqrt{-g} g^{\mu\alpha} g^{\nu\beta} F_{\alpha \beta} = 0 \,. \end{eqnarray} In the gauge $A_r = 0$, the solution to the equations of motion near the boundary is \begin{eqnarray} \label{VecFieldExpand} A_i(r, \vec{x},t ) = \alpha_i(\vec{x},t ) L + {\beta_i(\vec{x},t ) L^{2d-3} \over r^{d-2}} + \ldots \,, \end{eqnarray} where $\partial^i \beta_i=0$, and $f_{ij} \equiv \partial_i \alpha_j - \partial_j \alpha_i$ and $g_{ij} \equiv \partial_i \beta_j - \partial_j \beta_i$ are constrained by $\partial^i f_{ij} = \partial^i g_{ij} = 0$. To see the scaling of $\alpha_i$ and $\beta_i$, pass to locally flat coordinates: \begin{eqnarray} A_{\hat{\imath}}(r, \vec{x},t ) = {\alpha_i(x) L^2 \over r} + {\beta_i(x) L^{2d-2} \over r^{d-1}} + \ldots \,, \end{eqnarray} where we see that $\alpha_i(\vec{x},t )$ should be dual to a source of dimension 1, and $\beta_i(\vec{x},t )$ to an operator of dimension $d-1$; again we have inserted factors of $L$ so the scaling dimensions match the engineering dimensions. 
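Note that $\Delta = d-1$ is exactly the dimension required of a conserved current: it makes the charge $Q = \int d^{d-1}x \, J^0$ dimensionless, as befits the generator of a symmetry, and it makes the source coupling $\int d^dx \, \alpha_i J^i$ dimensionless as well, consistent with $\alpha_i$ playing the role of a background gauge field of dimension 1 for the global symmetry. 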
The action turns out to be finite without the addition of any boundary term, and reduces on the equation of motion to \begin{eqnarray} S_{\rm Max} = (d-2){L^{d-1} \over 4 \kappa^2} \int d^dx \, \alpha^i \beta_i \,, \end{eqnarray} while the variation of the action becomes \begin{eqnarray} \delta S_{\rm Max} = (d-2){L^{d-1} \over 2 \kappa^2} \int d^dx \, \delta \alpha^i \beta_i \,. \end{eqnarray} Thus we identify $\alpha_i(\vec{x},t)$ as the fixed source for the current, and the one-point function is \begin{eqnarray} \langle J^i(\vec{x},t) \rangle = (d-2) {L^{d-1} \over 2 \kappa^2} \eta^{ij} \beta_j(\vec{x},t) \,, \end{eqnarray} completing our discussion of the spin-1 field. Finally, the (unique) metric is dual to the (also unique) energy-momentum tensor: \begin{eqnarray} g_{\mu\nu} \longleftrightarrow T^{\mu\nu} \,. \end{eqnarray} We will not go through the holographic renormalization of the metric tensor, which is somewhat complicated and involves adding boundary terms with a geometric meaning. For a further discussion of this, including the holographic Weyl anomaly, see \cite{Skenderis:2000in}. Thus we have described how bosonic fields match up with their corresponding operators. We will say a little about fermionic fields in the final section. \section{String theory origins of AdS/CFT} Before diving into studying correlation functions, let's take a side trip to think about how string theory motivated the AdS/CFT correspondence, and how this relates to the two distinct approaches to applying gravity systems to learn about gauge theories, which one can call ``top-down" and ``bottom-up". Our discussion of string theory will just skim the surface, but hopefully provide enough of a flavor to communicate the role it plays. To explore this more deeply, see for example \cite{Aharony:1999ti, Klebanov:2000me, DHoker:2002nbb}. \subsection{D-branes, M-branes and the gauge/gravity correspondence} String theory is the result of quantizing one-dimensional relativistic objects, called strings. Strings come in two varieties: closed strings, which loop back on themselves, and open strings, which have endpoints. The quantization of closed strings naturally leads to the dynamics of gravity (plus other fields), while the quantization of open strings leads to the dynamics of gauge theory (again, plus other fields). String theory needs to live in an unexpectedly large number of dimensions in order to avoid quantum inconsistencies: for the supersymmetric version of the theory, which contains fermionic excitations, this number is ten. A fundamental property of string theory is that a little piece of string does not ``know" whether it is part of a closed string or an open string unless it is at an endpoint; thus, the difference between gravity and gauge theory comes solely from the boundary conditions, as closed strings can have both left- and right-moving oscillations, while in open strings half the degrees of freedom are projected out and we are left with standing waves. In some sense, string theory is telling us that gravity is like gauge theory squared \cite{Kawai:1985xq}, and the two kinds of physics are intimately connected. The original understanding of open strings imposed Neumann boundary conditions on their endpoints in all directions, resulting in strings that could end anywhere. After some time, it was realized that one can instead impose Dirichlet boundary conditions in some directions, restricting the endpoints to live on certain lower-dimensional surfaces in spacetime \cite{Dai:1989ua}.
The next step was realizing that these surfaces that open strings can end on, so-called ``Dirichlet-branes" or ``D-branes", behave themselves as dynamical membrane objects that can carry energy, move around and interact, and have a particular kind of ``Ramond-Ramond" charge \cite{Polchinski:1995mt}. At low energies, the dynamics of the open strings living on the branes are a lower-dimensional gauge theory, and the fact that these strings can interact with closed strings moving throughout space indicates that the branes gravitate, as well as being sources for higher-index generalizations of abelian gauge fields. The birthplace of AdS/CFT was in these D-branes. Consider a stack of $N$ D3-branes\footnote{A $Dp$-brane traditionally denotes a brane with $p$ spatial dimensions, and thus $(p+1)$ spacetime dimensions.} on top of each other; these branes exist within the string theory called type IIB. The ability of each open string endpoint to end on any of the $N$ branes gives rise to the $N^2$ degrees of freedom of a non-abelian gauge theory, in this case the maximally supersymmetric gauge theory in four dimensions, ${\cal N}=4$ Super-Yang-Mills theory with $SU(N)$ gauge group. On the other hand, the energy in the branes curves space around them, altering the geometry. The breakthrough of Maldacena \cite{Maldacena:1997re} was to propose that the gauge theory living {\em on} the branes was {\em exactly equivalent} to the gravitational dynamics in the geometry {\em very close} to the branes; the two descriptions are redundant with each other. The near-horizon geometry of a stack of D3-branes is $AdS_5 \times S^5$, and from this emerged the proposal that type IIB string theory on this spacetime was exactly dual to ${\cal N}=4$ Super-Yang-Mills. From this origin of the correspondence, details of the field-operator map emerge naturally. In the original brane picture, there are couplings between the open string modes living on the brane, and closed string excitations propagating through spacetime. These couplings imply a connection between gauge-invariant field theory operators and gravity fields. Thus in this example, the AdS/CFT dictionary can be derived. While there are D-branes of other dimensionalities, the D3-branes are special in hosting an exactly conformal field theory. It turns out there are two other branes that have this property, though they are not exactly D-branes. One other remarkable idea that along with D-branes propelled the so-called ``second superstring revolution" of the mid-to-late 1990's was Witten's proposal that the strongly coupled limit of (ten-dimensional) type IIA string theory was not a string theory at all, but an eleven-dimensional theory containing membrane degrees of freedom, dubbed M-theory, whose low-energy limit was the already-discovered eleven-dimensional supergravity \cite{Witten:1995ex}. M-theory is now understood (albeit imperfectly) as being part of the web of dualities relating various string theories. It has no strings, but does have branes of its own, M2-branes and M5-branes. The low-energy field theories living on these branes are the three-dimensional ABJM theory, and the six-dimensional (2,0) theory; along with ${\cal N}=4$ SYM, these theories form the fundamental trio of maximally supersymmetric, exactly conformal field theories in more than two dimensions. The field-operator map could be read off for these theories as well--- although the (2,0) theory, in particular, is still not as well understood from the field theory point of view. 
These theories and their generalizations are called {\em top-down} realizations of AdS/CFT. Because they are motivated from string/M-theory, their field-operator dictionaries are known, and the footing they stand on is relatively firm. \subsection{Top-down vs bottom-up} We now come to the two different approaches for studying field theories using gravity: ``top-down" and ``bottom-up". In the top-down case, one uses a known gravity theory/field theory pair coming from string/M-theory. This has several advantages: \begin{itemize} {\item Because the dictionary is known precisely, you know exactly which operators in which field theory you are talking about.} {\item Because the dual theories come from string theory, you can be confident --- or at least, {\em more} confident --- that there aren't hidden pathologies affecting the system. } \end{itemize} But there are also drawbacks: \begin{itemize} {\item You are limited to studying one of the field theories that is known to have a string theory dual. These usually have a lot of supersymmetry and may not look exactly like known systems realized in nature.} {\item These systems are often complicated. For example, type IIB supergravity on $AdS_5 \times S^5$ has an infinite set of fields; even truncating to the graviton and its superpartners, one is left with five-dimensional ${\cal N}=8$ gauged supergravity, a theory with 15 gauge fields and 42 scalars, among other bose and fermi fields.} \end{itemize} On the flip side, one can pursue a ``bottom-up" model: here one just writes down a simple gravity theory with whatever properties are desired. Now instead of using a complex set of fields pre-supplied by string theory, you just use what you want. Put in a graviton. Need a conserved current? There's a $U(1)$ gauge field. Interested in a charged condensate? Add a charged scalar. The interactions can be tweaked on demand. Now the advantages have flipped: \begin{itemize} {\item The model is flexible, and can contain whatever fields and dynamics you want.} {\item The model can be simple, with no extraneous fields or complicated couplings.} \end{itemize} But so have the disadvantages: \begin{itemize} {\item Since string theory didn't give it to you, you don't know what the dual field theory is. At best you can say it's some large N gauge theory at strong coupling, with particular symmetries and an operator spectrum you created. You don't have a Lagrangian or even a list of fundamental fields.} {\item There might be some hidden pathology or issue with the theory that you can't see.} \end{itemize} In practice, both of these approaches are valuable. The bottom-up approach can focus directly on a particular desired property, and tune interactions and couplings to get exactly the phenomenon that one wants. On the other hand, the top-down approach makes precise predictions about known field theories and is on a firmer footing. Moreover, sometimes certain kinds of interactions or dynamics that you might not have thought of by yourself can be offered to you by the top-down model. In our examples in the second half of the lectures, we will explore both top-down and bottom-up approaches, and hopefully see the value of both. \section{Correlation Functions and RG flow geometries} \label{CorrelationSec} We will now turn to the study of two-point correlation functions. We will discuss what these look like in an exactly conformal field theory dual to AdS space.
Then, we will study a few new geometries that are only asymptotically AdS, discuss how the variation of their fields over the radial coordinate is associated to the breaking of scale invariance, and then discuss two-point functions in these systems as well. \subsection{Two-point functions and boundary conditions} Higher-point correlation functions can also be calculated from the gravity side. Consider a two-point function; this can be calculated as the variation with respect to the source of the one-point function, with sources not turned off until the end, \begin{eqnarray} \langle {\cal O}(x) {\cal O}(y) \rangle = {1 \over i} {\delta \langle {\cal O}(x)\rangle_J \over \delta J(y)} \Bigg|_{J=0} \,, \end{eqnarray} where $x =\{\vec{x},t\}$. For a scalar in the regular quantization, the generalization of the relations (\ref{RegularQuant}) and (\ref{OnePointReg}) to arbitrary $d$ and $\Delta$ are \begin{eqnarray} \label{GeneralRegQuant} J(x) = \alpha(x) \,, \quad \quad \langle{\cal O}(x)\rangle = (2 \Delta - d) {L^{d-1} \over 2 \kappa^2} \beta(x) \,, \end{eqnarray} which imply the two-point function\footnote{The alternate quantization version of (\ref{GeneralRegQuant}) switches $\alpha$ and $\beta$ and adds a sign to the expression for $\langle {\cal O} \rangle$.} \begin{eqnarray} i\langle {\cal O}(x) {\cal O}(y) \rangle = (2 \Delta-d) {L^{d-1} \over 2 \kappa^2} {\delta \beta(x) \over \delta \alpha(y)} \Bigg|_{\alpha = 0}\,. \end{eqnarray} In the near-boundary expansion, $\alpha(x)$ and $\beta(x)$ are independent. What causes one to depend on the other? To relate them, one must solve for $\phi(x,r)$ throughout the bulk, in general imposing a boundary condition far from the boundary, at the deep interior (infrared end) of the geometry. One will then have a functional relation like \begin{eqnarray} \label{Kernel} \beta(y) = \int d^dx K(x-y) \alpha(x) + {\cal O}(\alpha^2) \,, \end{eqnarray} with a kernel $K(x-y)$ proportional to the two-point function. Since translation invariance guarantees the two-point function depends only on the difference $x-y$, we can detangle the integral relation (\ref{Kernel}) by passing to momentum space, where we find the simple expression \begin{eqnarray} i\langle {\cal O}(p) {\cal O}(-p)\rangle = {1 \over (2\pi)^d} {\langle{\cal O}(p)\rangle \over J(p)}= {2 \Delta-d \over (2\pi)^d} {L^{d-1} \over 2 \kappa^2} {\beta(p) \over \alpha(p)}\,, \end{eqnarray} neglecting higher order terms in (\ref{Kernel}). We poke the gravity geometry with a source by imposing the $\alpha$ boundary condition, and the field responds with $\beta$; the two point-function is just the ratio. Thus interaction terms are in general not necessary to study two-point functions, since we only need the linearized equations to determine (\ref{Kernel}) to leading order. For higher-point functions we must go beyond linearized order in the equations of motion, and interactions start to play a role. Having established the structure of the two-point function, the task becomes to solve the scalar equation of motion in the appropriate background, imposing a boundary condition in the deep interior, and then read off the results at the boundary. The boundary condition in the interior thus becomes very important; which condition shall we pick? For zero temperature backgrounds, the linearized equation can be solved continuing to Euclidean space. In the deep interior there is generally one diverging solution and one regular solution, and the prescription is to choose the regular solution. 
We say we choose {\em regular boundary conditions}. We shall see some examples of this in the rest of the section. At finite temperature solutions can have different behavior in the deep interior, and then we need to find a different boundary condition, as we will discuss in section~\ref{RealTimeSec}. \subsection{$N$-scaling of correlation functions} \label{NScalingSec} The one-point functions all contain a factor $L^{d-1}/\kappa^2$ coming from the overall normalization of the gravity action. This ratio of two gravitational quantities must reduce to something purely field-theoretic. In the top-down models where we understand the dictionary directly, we can evaluate this explicitly, and discover it reduces to the scaling of the correlator with a power of $N$. In the case of type IIB on $AdS_5 \times S^5$ dual to ${\cal N}=4$ Super -Yang-Mills, one has \begin{eqnarray} L^4 = 4 \pi g_s N {\alpha'}^2 \,,\quad\quad\quad{1 \over \kappa^2} = {L^5 \over 64 \pi^4 g_s^2 {\alpha'}^4} \,. \end{eqnarray} We see that the string theory parameters $g_s$ (the string coupling) and $\alpha'$ (the string length squared) cancel out of our ratio, \begin{eqnarray} \label{LKappa} {L^3 \over \kappa^2} = {N^2 \over 4 \pi^2} \,. \end{eqnarray} Thus correlation functions in ${\cal N}=4$ SYM go like $N^2$ in the large-$N$ limit; this is a known result in large-$N$ gauge theories. In a bottom-up construction, one cannot calculate these factors from first principles. Nonetheless, in a bottom-up AdS$_5$ model, one generally imagines that the dual field theory is some non-Abelian gauge theory, with the implication that $L^3/\kappa^2 \propto N^2$ still holds. In more general top-down cases, if the gravity theory is of the form $AdS_{d+1} \times S^q$ associated to the backreaction of a set of $N$ $d$-dimensional branes, the AdS radius and gravitational constant will take the form \begin{eqnarray} L^{q-1} \propto N \ell_P^{q-1} \,, \quad \quad \quad {1 \over \kappa^2} \propto {L^q\over \ell_P^{d+q-1} }\,, \end{eqnarray} where $\ell_P$ is the Planck length of the higher-dimensional theory, defined by the higher-dimensional gravitational constant $\kappa_{\rm higher}^2 \propto \ell_P^{d+q-1}$. For ABJM theory and the six-dimensional (2,0) theory, one finds \begin{eqnarray} {L^2 \over \kappa^2} \sim N^{3/2} \quad \hbox{(ABJM)}\,, \quad \quad \quad{L^5 \over \kappa^2} \sim N^3 \quad \hbox{(2,0)} \,, \end{eqnarray} which are the characteristic powers of $N$ associated with these theories. \subsection{Two-point functions in AdS space} In pure AdS space, the massive Klein-Gordon equation is most easily solved in terms of the $z \equiv L^2/r$ coordinates. Working in Euclidean space and Fourier transforming the $d$ boundary coordinates $x$ to a momentum $p$, it takes the form \begin{eqnarray} z^2 \phi'' - (d-1) z \phi' - z^2 p^2 \phi - m^2 L^2 \phi = 0\,. \end{eqnarray} This has solution in terms of modified Bessel functions, \begin{eqnarray} \phi(z, p) = c_1 z^{d/2}K_\nu ( pz) + c_2 z^{d/2} I_\nu (p z) \,, \quad \quad \nu \equiv \sqrt{{d^2 \over 4} + m^2 L^2 }= \Delta_+ - {d\over2} \,. \end{eqnarray} In the deep interior $z \to \infty$, $I_\nu$ diverges as $e^{z}$ while $K_\nu$ is regular, going like $e^{-z}$. We satisfy our regularity prescription for the boundary condition by keeping the $K_\nu$ solution, \begin{eqnarray} \phi(r, p) = r^{-d/2} \, K_\nu \Big( {p L^2 \over r }\Big) \,. 
\end{eqnarray} Expanding this near the boundary, we find the ratio of the $\alpha$ and $\beta$ terms give for integer $\nu$ the 2-point function, \begin{eqnarray} \label{TwoPointLog} \langle {\cal O}(p) {\cal O}(-p)\rangle \sim (p^2)^\nu \log p^2 \,,\quad \quad \hbox{integer}\ \nu \,, \end{eqnarray} and for non-integer $\nu$, \begin{eqnarray} \label{TwoPointNoLog} \langle {\cal O}(p) {\cal O}(-p)\rangle \sim (p^2)^\nu \,, \quad \quad \hbox{non-integer}\ \nu \,, \end{eqnarray} In either case, the Fourier transform gives us in position space\footnote{In general (\ref{TwoPointLog}) will have additional analytic $p^{2\nu}$ terms, which become scheme-dependent contact terms in position space.} \begin{eqnarray} \langle {\cal O}(x) {\cal O}(x') \rangle = {C \over |x-x'|^{2\Delta}} \,, \end{eqnarray} up to a constant $C$ related to the normalization of ${\cal O}$. This is exactly the functional form required by conformal invariance in a CFT. Whether from the log or the non-integer power $\nu$, the correlation functions (\ref{TwoPointLog}), (\ref{TwoPointNoLog}) have a non-analyticity in $p^2$, which we can view as creating a branch cut at the origin in the complex $p^2$ plane. This has the interpretation of a continuum of states all the way down to zero energy; since this is an exactly conformal field theory with no preferred scale, excitations of all energies are possible. Three and higher point functions can also be computed, and have the forms conformal field theory demands. We will not continue down the road of higher point functions, but there is a lovely story there; see for example \cite{DHoker:2002nbb}. \subsection{RG flow geometries} Let's think about some slightly more complicated geometries than pure AdS: consider in the background a single scalar field varying in the radial direction, the ``active" scalar, \begin{eqnarray} \label{RGFlowScalar} \phi = \phi(r) \,. \end{eqnarray} In general this leads to an asymptotically AdS metric of the form (\ref{AsymptoticAdS}) with $h(r) = 1$: \begin{eqnarray} \label{RGFlow} ds^2 = e^{2A(r)} \eta_{ij} dx^i dx^j + e^{2B(r)} dr^2 \,. \end{eqnarray} These geometries preserve $d$-dimensional Poincar\'e invariance, but break scale invariance. Thus the state of the dual field theory evolves as one runs from the ultraviolet to the infrared, and spacetimes of the form (\ref{RGFlowScalar}), (\ref{RGFlow}) are known as ``renormalization group flow" (RG flow) geometries. As we will see, depending on the behavior of $\phi(r \to \infty)$ such a geometry sometimes corresponds to a non-vacuum state of a CFT (spontaneous breaking of conformal invariance), and sometimes it corresponds to a new, non-CFT theory altogether (explicit breaking of conformal invariance). We will illustrate RG flow geometries in the particular context of the five-dimensional ${\cal N}=8$ gauged supergravity theory, which is a truncation of type IIB supergravity on $AdS_5 \times S^5$ and hence is dual to ${\cal N}=4$ Super-Yang-Mills (specifically, to its lowest dimension operators that remain at strong coupling). The bosonic modes include the metric, the $SO(6)$ 15 gauge fields from the Kaluza-Klein reduction on the $S^5$, and 42 scalars. 
We summarize these scalars, their 10-dimensional origin, their $SO(6)$ quantum numbers, mass $m^2$, and associated dual operator and dimension $\Delta$ in the table, \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline 10D SUGRA fields & $SO(6)$ & $m^2 L^2$ & $\Delta$ & ${\cal N}=4$ operator \\ \hline $g_{\mu\nu} + F_5$ & ${\bf 20'}$ & $-4$ & 2 & Tr $X^{(i} X^{j)}$\\ $H_3$, $F_3$ & $ {\bf 10} \oplus {\bf \overline{10}}$ & $-3$ & 3 & Tr $\lambda \lambda$ \\ IIB dilaton & ${\bf 1} \oplus {\bf 1}$ & $0$ & $4$ & ${\cal L} = {\rm Tr}\ (F_{\mu\nu}F^{\mu\nu} + \cdots)$ \\ \hline \end{tabular} \end{center} Here $F_5$, $F_3$ and $H_3$ are antisymmetric tensor field strengths of type IIB supergravity, and $X$, $\lambda$ and $F$ are scalars, fermions and field strengths of ${\cal N}=4$ SYM, transforming in the ${\bf 6}$, ${\bf 4}$ and ${\bf 1}$ of $SO(6)$, respectively; the ${\bf 20'}$ representation of $SO(6)$ has the prime because there is already a representation called ${\bf 20}$. For more details on the fields and operators and their relations, see \cite{DHoker:2002nbb}. \bigskip \noindent \underline{\bf Coulomb branch flow} \noindent Consider a particular RG flow geometry with an ``active" scalar $\phi_{\bf 20'}(r)$ in the ${\bf 20'}$ of $SO(6)$, which has dual operator Tr $X^2$ with $\Delta = 2$. There is a supersymmetric solution preserving half the total supersymmetry (16 supercharges) with the form \cite{Freedman:1999gk} \begin{eqnarray} e^{A(r)} = {r \over L }\left( 1 + {\ell^2\over r^2} \right)^{1/6}\,, \quad e^{B(r)} = {L \over r}\left( 1 + {\ell^2\over r^2} \right)^{-1/3}\,, \quad \phi_{\bf 20'}(r) = -\sqrt{2\over 3} \log \left( 1 + {\ell^2\over r^2} \right) \,, \end{eqnarray} in terms of a parameter $\ell$, which when set to zero returns us to AdS space. This solution breaks $SO(6) \to SO(4) \times SO(2)$. The near-boundary expansion for the scalar is \begin{eqnarray} \phi_{\bf 20'} = 0 \times {\log r \over r^2} + {\ell^2 \over r^2} + \ldots \,. \end{eqnarray} Here we have explicitly written how the coefficient of the ``source" term in the near-boundary scalar expansion is zero (note this is a BF-bound saturating scalar with the form (\ref{BFBoundSaturating})). Hence there is zero source turned on, but there is an expectation value \begin{eqnarray} \langle {\rm Tr}\ X^2\rangle \sim \ell^2\,. \end{eqnarray} The interpretation for this geometry is that it is a state in ${\cal N}=4$ SYM where some expectation values have been turned on for the scalars $X$, spontaneously breaking conformal symmetry and moving out onto the Coulomb branch.\footnote{ ``Coulomb branch" is a term in the lingo of supersymmetric gauge theories indicating a scalar in the adjoint representation of the gauge group, a superpartner of the gauge fields, has gotten an expectation value.} This geometry has a singularity in the IR, and one might be concerned that this singularity renders the geometry problematic. However, it is known that this spacetime is the five-dimensional reduction of a well-behaved ten-dimensional spacetime with a disc of D3-brane sources. Consequently, the spacetime is considered an acceptable one despite its singularity. Let us take a look at a two-point function in this geometry. Because the active scalar $\phi_{\bf 20'}$ is nonzero in the background, its fluctuations mix with fluctuations of the metric trace, and are more complicated to deal with.
A simpler thing to look at is the fluctuations of the physical (transverse traceless) modes of the graviton; in any RG flow spacetime these always satisfy the massless Klein-Gordon equation. The solution for the linearized Klein-Gordon equation for a massless scalar $h$ can be found in terms of hypergeometric functions \cite{Freedman:1999gk}, \begin{eqnarray} h = \left( r^2 \over r^2 + \ell^2 \right)^a \: {}_2F_1 \Big( a,a;2+2a; {r^2 \over r^2 + \ell^2} \Big)\,, \quad \quad a \equiv - {1 \over 2} + {1 \over 2} \sqrt{1 + {L^4 p^2 \over \ell^2}}\,, \end{eqnarray} leading to the momentum-space two-point function \begin{eqnarray} \langle{\cal O}{\cal O}\rangle \sim p^4 \psi \left( {1 \over 2} + {1 \over 2} \sqrt{1 + {L^4 p^2 \over \ell^2}}\right) \,. \end{eqnarray} The digamma function $\psi$ moving out to complex arguments induces a logarithmic branch in the $s\equiv -p^2$ plane starting at $s = \ell^2/L^4$, describing a continuous spectrum of excitations above a mass gap of size $\ell/L^2$. Two-point functions for all the bosonic and fermionic modes of ${\cal N}=8$ gauged supergravity have been calculated in this background \cite{Freedman:1999gk, Bianchi:2000sm, DeWolfe:2012uv}, and all display the same continuous spectrum over a gap. \bigskip \noindent \underline{\bf ${\cal N}=1$ flow} \noindent Consider instead an active scalar $\phi_{\bf 10}(r)$ in the ${\bf 10}$ representation of $SO(6)$, thus dual to a fermionic bilinear Tr $\lambda^2$ in ${\cal N}=4$ SYM, with dimension $\Delta = 3$. There is an RG flow solution \cite{Girardello:1998pd}, \begin{eqnarray} e^{A(r)} = {r \over L} \left( 1 - {M^2 \over r^2} \right)^{1/2} \,, \quad e^{B(r)} = {L \over r} \,, \quad \phi_{\bf 10}(r) = {\sqrt{3} \over 2} \log \left( r + M \over r-M \right) \,, \end{eqnarray} where $M$ is a constant of units length. This spacetime breaks the symmetry $SO(6) \to SO(3)$ and 32 supercharges to 4 supercharges (${\cal N}=1$ supersymmetry in four dimensions). It may superficially look quite similar to the previous Coulomb branch spacetime, but its interpretation is quite different, as can be seen from the asymptotic expansion of the scalar: \begin{eqnarray} \phi_{\bf 10}(r \to \infty) = {\sqrt{3} M \over r} + { M^3 \over \sqrt{3} r^3} + \ldots \,. \end{eqnarray} The leading term indicates there is a source turned on for the operator Tr $\lambda^2$ (as well as the subleading term indicating it has an expectation value). Turning on a constant source for this operator is precisely adding a mass term for the fermion, with mass proportional to $M$. Thus this deformation corresponds not to going to a different state in ${\cal N}=4$ Super-Yang-Mills, but to modifying the Lagrangian for ${\cal N}=4$ Super-Yang-Mills by adding mass terms, explicitly breaking conformal invariance.\footnote{${\cal N}=1$ supersymmetry is preserved, meaning there must also be a mass term turned on for the six superpartner scalars $X^i$. The corresponding operator has high dimension at strong coupling and is not visible in our truncation, but we can infer it is sourced as well.} Again we can study the solution to a massless scalar $h$, equivalent to a physical graviton fluctuation. The solution is \begin{eqnarray} h = {\alpha^4 \over r^4} \: {}_2F_1 \Big( 2 + {|p|L^2 \over 2 M}, 2 - {|p|L^2 \over 2 M}; 2; 1 - {M^2 \over r^2} \Big) \,. 
\end{eqnarray} The correlation function is a little messy but the non-analytic part is \begin{eqnarray} \langle {\cal O} {\cal O} \rangle \sim L^4 p^2 ( L^4 p^2 - 4 M^2) \left(\cdots + \psi \Big( 2 + {|p|L^2 \over 2 M} \Big) + \psi \Big( 2 - {|p|L^2 \over 2 M}\Big) \right)\,, \end{eqnarray} which thanks to the digamma function has poles at \begin{eqnarray} s = {4 M^2 \over L^4} (n+2)^2 \,, \quad \quad n = 0, 1, 2, \ldots \end{eqnarray} Thus unlike the continuous spectrum over a gap of the Coulomb branch flow, here we have excitations only at particular momenta. Since conformal invariance is explicitly broken, we can interpret this as a gauge theory that confines, producing an infinite tower of gauge-invariant ``glueball" states at particular masses. Other fermionic and bosonic modes have been studied, and arrange themselves into similar towers of discrete states, with modes related by the preserved ${\cal N}=1$ SUSY having the same poles \cite{Bianchi:2000sm}. Thus from these examples we see \begin{itemize} {\item An RG flow geometry with a flowing scalar can correspond either to spontaneously breaking conformal invariance, going to a new state in the CFT, or explicitly breaking conformal invariance, modifying the Lagrangian to something different. The resulting excitations, a continuous spectrum or a discrete set of modes, fit the conformal and confining theories, respectively.} {\item Each geometry introduces a characteristic length scale beyond the AdS scale $L$, in these cases $\ell$ or $M$, over which the geometry varies in the radial direction. The existence of this length scale is associated to the breaking of scale invariance, as the spacetime isometry (\ref{GravityScale}) is broken. These distance scales are translated into energy scales $\ell/L^2$ and $M/L^2$ in the dual field theory.} \end{itemize} \section{Thermodynamics and AdS/CFT} We will now describe how the presence of a black hole in AdS space places the dual field theory in a thermal state, and discuss a bottom-up application of such systems to the phase diagram of QCD. \subsection{Black hole thermodynamics is real thermodynamics} One interesting thing we can ask about a quantum field theory is, how does the system behave when we give it a temperature $T$? A temperature sets an energy scale, and so correspondingly a length scale as well: conformal invariance is (spontaneously) broken by the state at nonzero temperature. Thus from our understanding of AdS/CFT, we anticipate that in the gravity dual something must sit inside the geometry at a certain radial distance $r_H$ associated to the scale $T$. In fact, it has been known since Hawking's work in the 70s that there is a metric configuration that has an associated temperature: a black hole. Hawking showed that quantum field theory in the background black hole geometry radiates particle quanta from the horizon as blackbody radiation at the ``Hawking temperature" $T_H$, proportional to the surface gravity at the horizon. In AdS/CFT, a black hole in the geometry at Hawking temperature $T_H$ is dual to the state of the field theory at temperature $T = T_H$. Since we are interested in configurations in the Poincar\'e patch, we will consider planar black holes (or ``black branes") in anti-de Sitter space, where the horizon at a moment of time is translationally invariant over the $\vec{x}$ coordinates.
Such planar black holes are particular cases of the asymptotically AdS metric (\ref{AsymptoticAdS}), where $h(r)$ is the horizon function, whose vanishing defines the location of the horizon $r = r_H$: \begin{eqnarray} h(r = r_H) =0\,. \end{eqnarray} In principle, one can determine the Hawking temperature of a given black hole geometry by studying quantum field theory in the curved space background, and making suitable Bogoliubov transformations between in and out states. Once we believe there is a Hawking temperature, however, there is an easier way to determine what it is. A real-time path integral can be turned into a finite-temperature partition function by continuing to Euclidean time $\tau \equiv i t$ and imposing that the imaginary time is periodic, \begin{eqnarray} \tau \sim \tau + {1 \over T} \,. \end{eqnarray} In flat space, any such periodicity can be chosen. The Euclidean version of a black hole, however, complains unless we pick the periodicity just right. Near the horizon we have $h(r) = h'(r_H) (r - r_H) + \ldots $, and assuming $A(r_H)$ and $B(r_H)$ are not zero or singular, the $\tau/r$ sector of the Euclideanized metric becomes, as $r \to r_H$, \begin{eqnarray} ds^2 \approx {e^{2B(r_H)}\over h'(r_H)} {dr^2 \over r-r_H} + e^{2A(r_H)} h'(r_H) (r-r_H) d\tau^2 \,. \end{eqnarray} Defining a new radial coordinate, \begin{eqnarray} \tilde{r} \equiv {2 e^{B(r_H)} \over \sqrt{h'(r_H)}} \sqrt{r-r_H} \,, \end{eqnarray} we get \begin{eqnarray} ds^2 \approx d\tilde{r}^2 + \tilde{r}^2 {e^{2A(r_H) - 2B(r_H)} h'(r_H)^2 \over 4} d\tau^2 \,. \end{eqnarray} If $\tau$ is periodic, this has the structure of a two-dimensional plane in polar coordinates, $ds^2 \approx d\tilde{r}^2 + \tilde{r}^2 d \theta^2$. However, there will be a conical singularity at the origin unless $\theta$ has the proper periodicity, $\theta \sim \theta + 2\pi$. Translating this into a statement about $\tau$, we must have \begin{eqnarray} \tau \sim \tau + {4 \pi \over e^{A(r_H)-B(r_H)} h'(r_H)} \,, \end{eqnarray} and thus the Hawking temperature must be \begin{eqnarray} \label{Temp} T = {e^{A(r_H)-B(r_H)} \over 4 \pi} h'(r_H) \,. \end{eqnarray} Once we have brought the gravity side to Euclidean space at this temperature, we must take the field theory to the same Euclidean periodicity, and so the field theory also lives at temperature $T$. The thermodynamic variable conjugate to the temperature is the entropy. From the laws of black hole mechanics, it is known that the quantity playing the role of entropy for a black hole is its horizon area divided by four times Newton's constant: \begin{eqnarray} S = {A_H \over 4 G_N} \,. \end{eqnarray} Since we have a translationally invariant planar black hole, the horizon area and the entropy are infinite. The entropy density per unit $(d-1)$-volume, however, is finite, and can be evaluated as \begin{eqnarray} \label{Entropy} s = {1 \over 4 G_N} {\int d^{d-1} x \sqrt{g_{d-1}} \over {\rm vol}} = {1 \over 4G_N} e^{(d-1)A(r_H)} \,. \end{eqnarray} Other thermodynamic variables that are often relevant are a chemical potential and charge density for a conserved charge. As we know from the AdS/CFT dictionary, a conserved current in the field theory requires a gauge field in the gravity theory. Consider a conserved $U(1)$ current $J^\mu$ (which could be a part of some larger nonabelian symmetry) and the associated gauge field $A_\mu$.
Specializing (\ref{VecFieldExpand}) to the $A_0$ component, the leading term is identified with the source (the chemical potential $\mu$) and the subleading term is the response (the charge density $\rho = \langle J^0 \rangle$), \begin{eqnarray} A_0 (r \to \infty) = \mu L + \ldots - {2 \kappa^2 L^{d-2} \over (d-2)} { \rho \over r^{d-2}} \,. \end{eqnarray} A constant term in a gauge field is of course not gauge-invariant; here we have made the assumption that $A_0(r_H) = 0$, in which case identifying $A_0(\infty) = \mu L $ is well-defined. Let's look at an example spacetime. The (planar) AdS-Schwarzschild black hole has the asymptotic AdS form (\ref{AsymptoticAdS}) with \begin{eqnarray} \label{AdSSch} e^A = e^{-B} = {r \over L} \,, \quad \quad h = 1 - {r_H^d \over r^d}\,. \end{eqnarray} The mass of the black hole is proportional to $r_H^d$. At large $r \gg r_H$, this reverts to AdS$_{d+1}$. The horizon function $h(r)$ has a zero at $r_H$, the location of the horizon. One can calculate the temperature, \begin{eqnarray} T = {d \over 4 \pi} {r_H \over L^2} \,, \end{eqnarray} and entropy density, \begin{eqnarray} s = {1 \over 4G} {r_H^{d-1} \over L^{d-1}} \,. \end{eqnarray} Once again, we see that the presence of a feature in the geometry at a particular $r_H$ has given us a corresponding energy scale in the field theory --- in this case the temperature --- at $r_H/L^2$. For a geometry including a chemical potential and charge density, we can generalize (\ref{AdSSch}) to an AdS-Reissner-Nordstr\"om solution, which is a {\em charged} planar black hole in AdS. We can write it as \begin{eqnarray} \label{AdSRN} e^A = e^{-B} = {r \over L} \,, \quad \quad h = 1 - {r_H^d + Q^2/r_H^{d-2} \over r^d} + {Q^2 \over r^{2d-2}}\,, \quad \quad \quad A_0 = \mu L \left( 1- {r_H^{d-2} \over r^{d-2}}\right) \,. \end{eqnarray} Here there are two independent parameters, the horizon radius and the charge parameter $Q$. The equations of motion relate the chemical potential $\mu$ and the charge density $\rho$ to $Q$ as \begin{eqnarray} \mu = \sqrt{2d-2 \over d-2}{Q \over L^2 r_H^{d-2}} \,, \quad\quad \rho ={\sqrt{(d-2)(2d-2)} \over (2 \kappa^2)}{Q \over L^{d-1} }\,, \end{eqnarray} while the temperature and entropy can be written \begin{eqnarray} T = {d \over 4 \pi }{r_H \over L^2} \left( 1 - {d-2 \over d} {Q^2 \over r_H^{2d-2}} \right) \,, \quad \quad s = {1 \over 4G} {r_H^{d-1} \over L^{d-1}} \,. \end{eqnarray} We may equally well take the two independent parameters to be the temperature $T$ and chemical potential $\mu$. Thus we see that field theory systems at nonzero temperature and density can be studied by poking and prodding a charged black hole living in AdS space. The same rules we discussed before about calculating correlation functions by varying boundary conditions apply here as well. Much of the mileage that has come from applying the gauge/gravity duality to strongly coupled field theories has come out of these black hole systems. \begin{figure} \begin{center} \includegraphics[scale=0.35]{qcdPD.pdf} \caption{A cartoon of the QCD phase diagram. Taken from \cite{DeWolfe:2010he}. \label{fig:PhaseDiag}} \end{center} \end{figure} \subsection{An example: the phase diagram of QCD} Quantum chromodynamics (QCD), the theory of the strong nuclear force, is one of the strongly coupled systems we are most interested in understanding. Its Lagrangian might at first seem like just an elaboration on the well-understood form of quantum electrodynamics (QED), but the non-abelian gauge fields change the game entirely.
The interactions are strongly coupled at low energies, and quarks and gluons are confined inside hadrons. The large N approach of generalizing to a large number of colors was invented to make QCD more tractable, and large N dynamics of gauge theories find a natural home in the gauge/gravity correspondence. It is natural to ask what we can learn about QCD from this point of view. One aspect of QCD that is interesting to ask about is its behavior at nonzero temperature and density. Here the density is for the conserved global $U(1)$ baryon number. At zero density, the powerful tool of lattice QCD can be brought to bear, and it is well-known from these studies that varying $T$ from low to high brings us from a region with confinement to a region with deconfinement. In addition, the chiral symmetry (the off-diagonal factor inside the global $SU(N_f) \times SU(N_f)$ symmetry associated to having $N_f$ flavors of quark) goes from being broken to being restored. Because the chiral symmetry is not exact --- it is broken in QCD by the nonzero quark masses, along with couplings to electromagnetism and the rest of the Standard Model --- this is not a true phase transition, but a crossover. As the liberation of quarks occurs with increasing temperature, the normalized entropy density $s/T^3$ grows rapidly but smoothly. It is natural to ask what happens when we introduce a nonzero baryon number density, or alternately a baryon number chemical potential $\mu$. It is generally believed that as $\mu$ increases, the crossover with $T$ should sharpen until at a certain $\mu_C$, it becomes a true phase transition. A line of first-order transitions is expected to appear, terminating at a critical point where the transition is second-order. A cartoon of the QCD phase diagram, including high-density color superconducting and color-flavor-locked phases we will not discuss, appears in figure~\ref{fig:PhaseDiag}. However, it turns out it is very hard to study $\mu \neq 0$ in lattice QCD. The chemical potential in Euclidean space introduces complex terms in the action, which are much harder to deal with --- the so-called ``sign problem". AdS/CFT, on the other hand, doesn't have a sign problem. It is perfectly happy to study nonzero $\mu$ --- all one has to do is turn on a gauge field in the presence of the AdS black hole. So it is natural to ask what the gauge/gravity correspondence can tell us. Of course, AdS/CFT is not a perfect probe either. In particular, we have no gravity dual for QCD itself, not even QCD with a large number of colors. The signature dualities of the gauge/gravity correspondence all involve conformal field theories, and QCD is not conformal. As we have mentioned, turning on certain scalar fields in the gravity background can break conformal invariance by turning on sources for field theory operators, but it is not known how to get to QCD precisely in a top-down fashion. So in this section, we will explore what a bottom-up model can do. Rather than trying to derive QCD from first principles, we will generate a recipe for a holographic theory that has the QCD properties we particularly want to explore --- namely its thermodynamics. Then we will cook up the recipe and see how it tastes. Since we are building a bottom-up model, we can have whatever gravity fields we want.
We will pick three: \begin{itemize} {\item The metric $g_{\mu\nu}$, dual to the energy and momentum $T_{\mu\nu}$.} {\item A $U(1)$ gauge field $A_\mu$, dual to the conserved current of baryon number $J^\mu$.} {\item An almost-massless scalar field $\phi$.} \end{itemize} The scalar field requires a little more explanation: it is there to model the running of the QCD coupling. In type IIB supergravity on $AdS_5 \times S^5$, there is a massless scalar dual to the ${\cal N}=4$ Lagrangian, for which turning on a source is just shifting the exactly marginal ${\cal N}=4$ SYM coupling constant. The QCD coupling, on the other hand, is not exactly marginal, but runs, though at higher energy it runs slowly. We introduce a scalar that is almost massless, so when we turn it on in the gravity background, it sources an almost marginal operator and introduces a slowly running coupling in the field theory. This is a very ``bottom-up" move --- we are not exactly reproducing the QCD coupling, but by introducing something that runs slowly, we hope to imitate its properties. Given these fields, we have a gravity Lagrangian \begin{eqnarray} \label{QCDPhaseL} {\cal L} = R - {1 \over 2} (\partial \phi)^2 - V(\phi) - {1 \over 4} f(\phi) F_{\mu\nu} F^{\mu\nu}\,, \end{eqnarray} for some potential $V(\phi)$ and gauge kinetic function $f(\phi)$. In principle these could be whatever we wanted, and each choice would define a different (unknown) strongly coupled dual field theory. What we will do is to choose these functions to match known lattice data, and then step off to $\mu \neq 0$ where lattice data has a hard time following. \begin{figure} \centerline{\includegraphics[width=6in]{GridPlots.pdf}}\begin{picture}(0,0)(0,0)\put(90,0){\Large (A)}\put(350,0){\Large (B)}\end{picture} \caption{Numerically generated black holes. Each dot represents a numerically generated solution. Red points are thermodynamically stable, while green points are thermodynamically unstable. Taken from \cite{DeWolfe:2010he}.} \end{figure} Lattice data makes predictions for the entropy density $s(T)$ and the quark susceptibility $\chi(T)\equiv {\partial \rho \over \partial \mu}(T)$. Making a choice of functions \cite{Gubser:2008ny, Gubser:2008yx, DeWolfe:2010he} \begin{eqnarray} \label{VChoice} V(\phi) = {-12 \cosh \gamma\phi + b\phi^2 \over L^2} \,, \quad\quad f(\phi) = {{\rm sech} \left[ {6 \over 5} (\phi-2) \right] \over {\rm sech}\, {12 \over 5}} \,, \end{eqnarray} with $\gamma = 0.606$ and $b = 2.057$, we can numerically generate a series of (uncharged) black hole solutions to the theory. These solutions are like generalizations of the AdS-Schwarzschild solution (\ref{AdSSch}), but with the scalar field turned on as well. Like AdS-Schwarzschild, the black hole geometries have a horizon and an associated temperature and entropy. Using (\ref{Temp}) and (\ref{Entropy}), we can determine the thermodynamics of the field theory states dual to these black holes. Although $\rho$ and $\mu$ vanish for uncharged black holes, it is possible to derive an expression for their derivative, the susceptibility $\chi$. Putting these together, thanks to the choices (\ref{VChoice}), this ensemble of black holes has thermodynamic properties matching well the predictions of lattice QCD. In particular, we have ``baked in" the crossover. Note that (\ref{VChoice}) is not in any sense a perfectly optimized solution in the space of all potentials; it is one choice that works pretty well.
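To see how ``almost marginal" the scalar actually is for this choice, one can expand the potential (\ref{VChoice}) around $\phi = 0$ and apply the mass/dimension relation quoted earlier. The following is a minimal Python sketch of that bookkeeping (it uses only the small-$\phi$ expansion of (\ref{VChoice}) and $\Delta(\Delta - d) = m^2L^2$; it is an illustrative check, not part of the numerics of \cite{DeWolfe:2010he}):
\begin{verbatim}
import math

# Parameters of V(phi) = (-12 cosh(gamma*phi) + b*phi^2)/L^2, in units L = 1
gamma, b, d = 0.606, 2.057, 4

# Small-phi expansion: V = -12 + (b - 6*gamma**2)*phi**2 + ..., and the quadratic
# piece is (1/2) m^2 phi^2, so m^2 L^2 = 2*(b - 6*gamma**2).  (The constant
# V(0) = -12/L^2 is just the AdS_5 cosmological constant term.)
m2L2 = 2.0 * (b - 6.0 * gamma**2)

# Dimension of the dual operator from Delta*(Delta - d) = m^2 L^2 (Delta_+ root)
Delta = d / 2.0 + math.sqrt(d**2 / 4.0 + m2L2)

print(f"m^2 L^2 = {m2L2:.3f}")   # ~ -0.293: slightly tachyonic, far above the BF bound -4
print(f"Delta   = {Delta:.3f}")  # ~ 3.93: close to marginal (Delta = 4), as desired
\end{verbatim}
With $\Delta \approx 3.93$, turning this scalar on indeed sources a nearly marginal operator, imitating the slow running of the QCD coupling at high energies.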
Having allowed lattice thermodynamics to fix the model (\ref{QCDPhaseL}), (\ref{VChoice}) at $\mu=0$, we proceed to use the model to generate charged black holes with $\mu \neq 0$. In practice, this is done by seeding values of $\phi$ and $dA_0/dr$ at the horizon and numerically solving the Einstein-Maxwell-dilaton equations of motion. One arrives at generalizations of the AdS-Reissner-Nordstr\"om solutions (\ref{AdSRN}), again with a nonzero scalar added. This ensemble of black holes can be analyzed for thermodynamics, to determine $s(T, \mu)$ and $\rho(T, \mu)$. The result is that AdS/CFT ``knows" that the crossover is supposed to sharpen into a first-order phase transition, and the ensemble of black holes have thermodynamics displaying this behavior. For example, $\rho(\mu)$ at $T > T_C$ displays crossover behavior, but at $T< T_C$, it instead becomes multi-valued, bending back on itself. This multivalued behavior is the hallmark of a first-order phase transition; the two positive-slope solutions at a given $\mu$ are the two competing phases, while the negative-slope solution is thermodynamically unstable. Which of the two stable solutions is physically realized is resolved by determining which minimizes the free energy, and given the definition of the correspondence, the free energy is just the classical gravity action evaluated on each solution. The locus on the phase diagram where the two solutions have exactly equal free energies is the first-order line. \begin{figure \begin{center} \includegraphics[scale=0.54]{aboveTcRhoMu.pdf} \includegraphics[scale=0.46]{atTcRhoMu.pdf} \includegraphics[scale=0.46]{belowTcRhoMu.pdf} \caption{The baryon density $\rho$ as a function of chemical potential $\mu$ for several values of $T$ near the critical point. Taken from \cite{DeWolfe:2010he}. \label{fig:rhoMu}} \end{center} \end{figure} One can then ask about the end of the first-order line, which should be a critical point. Critical points can be classified by the critical exponents by which susceptibilities diverge as it is approached. The QCD critical point is generally believed to be in the same universality class (sharing the same exponents) as the 3D Ising model, which also matches an ordinary liquid/gas critical point. Does AdS/CFT know about the critical exponents? The answer is a nice illustration of how much AdS/CFT ``knows", and what it doesn't know. The susceptibilities of the black hole ensemble do diverge near the critical point, with exponents that match the 3D Ising model --- but in the mean-field approximation. The corrections to the exponents from quantum fluctuations are invisible to the gravity theory \cite{DeWolfe:2010he}. So on the one hand, it is remarkable that an ensemble of black hole geometries, designed to imitate the known thermodynamics of QCD at zero density, ``knows" to sharpen the crossover into a line of phase transitions as the chemical potential increases, with a critical point in the anticipated universality class to boot. But the critical point is only in the mean-field limit. Presumably, the same large N, large coupling approximations that suppress quantum corrections on the gravity side and allow us to do classical GR, also suppress the quantum corrections to the critical exponents. If we could include quantum corrections to the gravity theory, the full exponents might reveal themselves, but as it is we are working in a semiclassical limit where fluctuations are suppressed. 
Still, these bottom-up models make the prediction that if a theory with a crossover like QCD is extended to nonzero density, a critical point should indeed appear. \section{Real-time Correlators and the Shear Viscosity of the Quark-Gluon Plasma} At low temperatures, QCD states are hadrons. As temperature increases, the theory moves through the crossover and is expected to eventually reach a state of liberated quarks and gluons. This is consistent with our understanding of asymptotic freedom, which tells us non-abelian gauge theories like QCD are weakly coupled at high energy. Such a phase of mostly-free quarks and gluons was dubbed ``the quark-gluon plasma" (QGP). Two decades ago it was expected that heavy ion collisions like those then being planned at the Relativistic Heavy Ion Collider (RHIC) would create a state of hadronic matter with a high effective temperature that would produce this quark-gluon plasma. When RHIC began collecting data for gold-gold collisions (and later LHC for lead-lead collisions), however, this was not what was observed: a state of matter consistent with almost-free color-charged particles did not appear. Instead, the hadronic matter behaved in a way consistent with being a fluid, with no quasiparticle description at all. This state of matter --- still called the quark-gluon plasma --- quickly freezes out into hadrons, but in the meantime seems to flow with a viscosity that is extremely small compared to its entropy density. The ratio of shear viscosity to entropy density for water is of order 2; for liquid helium, 0.7. The QGP shows up at $0.12 \pm 0.08$, as low as any substance ever seen (for reviews in our context see \cite{CasalderreySolana:2011us, Adams:2012th, DeWolfe:2013cua}). This observation seems to cry out for AdS/CFT to address it. The gauge/gravity duality specializes in strongly coupled gauge theories with no quasiparticle description. And in fact, a theoretical suggestion that the QGP might have such a low viscosity had emerged several years before the RHIC data came in. \subsection{Hydrodynamics and transport} In equilibrium, a substance can be characterized by its temperature $T$, as well as a chemical potential $\mu$ for any conserved current. Hydrodynamics describes a fluid close enough to equilibrium that a notion of a temperature can be assumed to hold locally: we assume there exists $T(x)$ (and $\mu(x)$). The dynamics of energy and momentum are captured by the energy-momentum tensor, which can be written near equilibrium in a derivative expansion. One may write this in a fully covariant form, but for us it is enough to choose a local rest frame where $T_{0i} = 0$, where $i$ runs over the $d-1$ spatial coordinates. Then to first order in derivatives, we have \begin{eqnarray} \label{FluidEMT} T_{00}&=& \varepsilon \,, \\ T_{ij}&=& \delta_{ij} p - \eta \Big( \partial_i u_j + \partial_j u_i - {2 \over d-1} \delta_{ij} \partial_k u^k \Big) - \xi \delta_{ij} \partial_k u^k \,. \end{eqnarray} Here $\varepsilon$ and $p$ are the energy density and pressure, characterizing the fluid at leading order, and $u_i$ is the fluid velocity vector. The parameters characterizing the terms first-order in derivatives are called {\em transport coefficients}: the shear viscosity $\eta$ and the bulk viscosity $\xi$. If a conserved current is present, the expansion of $J^\mu$ contains another transport coefficient, the conductivity. Below we will be interested in calculating the shear viscosity.
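As a quick consistency check, the $\eta$ term in (\ref{FluidEMT}) should be symmetric and traceless, so that it does not mix with the pressure or with the bulk viscosity term. A minimal sympy sketch (the velocity components below are arbitrary test functions, introduced purely for illustration):
\begin{verbatim}
import sympy as sp

d = 4                                    # spacetime dimension of the field theory
X = sp.symbols('x1:4')                   # the d-1 = 3 spatial coordinates
u = [sp.Function(f'u{i}')(*X) for i in range(3)]   # arbitrary velocity field u_i(x)

div_u = sum(sp.diff(u[k], X[k]) for k in range(3))

# eta term of (FluidEMT): partial_i u_j + partial_j u_i - (2/(d-1)) delta_ij div u
sigma = sp.Matrix(3, 3, lambda i, j: sp.diff(u[j], X[i]) + sp.diff(u[i], X[j])
                  - sp.Rational(2, d - 1) * sp.KroneckerDelta(i, j) * div_u)

print(sp.simplify(sigma.trace()))        # 0: traceless, so it does not overlap the bulk viscosity term
print(sp.simplify(sigma - sigma.T))      # zero matrix: the shear combination is symmetric
\end{verbatim}
With the structure of the derivative expansion in hand, we turn to how the shear viscosity is actually computed.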
This can be done using the theory of linear response: if we turn on a source, how does the system respond? Imagine adding a source $g^{\mu\nu}$ for the energy-momentum tensor as a perturbation to the Hamiltonian, \begin{eqnarray} H = H_0 + H' = H_0 + \int d^{d-1}x \, g^{\mu\nu} T_{\mu\nu}\,. \end{eqnarray} The time rate of change of $\langle T_{\mu\nu}\rangle$ is then given in terms of a commutator, \begin{eqnarray} {d \langle T_{\mu\nu} \rangle \over d t} = i \langle [ H, T_{\mu\nu} ]\rangle \,, \end{eqnarray} and integrating we find \begin{eqnarray} \delta \langle T_{\mu\nu}(\vec{x},t)\rangle &=& i \int_{t_0}^t dt' \, \Big\langle \Big[ \int d^{d-1}x' g^{\rho\sigma}(\vec{x}',t') T_{\rho\sigma}(\vec{x}',t'), T_{\mu\nu} (\vec{x},t)\Big]\Big\rangle\\ &=& \int d^dx' g^{\rho\sigma}(\vec{x}',t') G^R_{\rho\sigma,\mu\nu}(\vec{x}', t';\vec{x},t) \,, \end{eqnarray} where we defined the retarded Green's function, \begin{eqnarray} G^R_{\rho\sigma,\mu\nu}(\vec{x}', t';\vec{x},t) = i \theta(t-t')\langle [ T_{\rho\sigma}(\vec{x}',t'), T_{\mu\nu} (\vec{x},t)] \rangle \,, \end{eqnarray} whose $\theta$-function encodes the fact that the response must come after the source that provokes it. In momentum space, this becomes \begin{eqnarray} \delta \langle T_{\mu\nu} (\omega, \vec{k}) \rangle = g^{\rho\sigma}(\omega, \vec{k}) G^R_{\rho\sigma;\mu\nu}(\omega, \vec{k}) \,. \end{eqnarray} Thus the information on the response of the energy-momentum tensor to a perturbing source is contained in the retarded Green's function. We can determine the shear viscosity from a transverse traceless mode of the Green's function for the energy-momentum tensor in the zero momentum limit, for example $G^R(\omega) \equiv G^R_{xy;xy}(\omega, \vec{k}=0)$. Since the shear viscosity arises at first order in derivatives in (\ref{FluidEMT}), it will appear with $\omega$ as its coefficient. We can thus extract it using a so-called Kubo formula: \begin{eqnarray} \label{Kubo} \eta = - \lim_{\omega \to 0} {1 \over \omega} {\rm Im}\, G^R(\omega) \,. \end{eqnarray} Thus to calculate this transport coefficient in AdS/CFT, we need to calculate the retarded propagator $G^R$. \subsection{Real-time correlators in AdS/CFT} \label{RealTimeSec} In Euclidean AdS/CFT, including the RG flows we considered, the two solutions to fluctuation equations far from the boundary take the form of one regular, one divergent, and the AdS/CFT prescription is to choose the regular boundary condition. Black hole horizons in Lorentzian signature are different. There the typical solutions of linearized equations of motion near the black hole horizon $r_H$ take the form, \begin{eqnarray} \label{InfallingOutgoing} X(r,t, \vec{x})_{\omega;\:\vec{k} = 0} &=& e^{-i \omega t}x_0 (r -r_H)^{\pm i \alpha \omega} (1 + \ldots)\\ &=& x_0 e^{i \omega ( \pm \alpha \log (r-r_H) - t)} \,, \end{eqnarray} where $\alpha$ is a constant. These solutions can be identified as infalling and outgoing modes. What is the boundary condition to choose? The prescription of Son and Starinets \cite{Son:2002sd} is that keeping only the {\em infalling} modes corresponds to calculating the {\em retarded} Green's function. The reasoning behind this choice is that it corresponds to the dissipation of linear response: we poke the black hole system and let it relax, and watch the excited modes fall behind the horizon.
The fact that the information of the modes is lost into the black hole corresponds to the dissipative process; it is physical for modes to fall into the horizon, just as it is physical that response follow cause, as in the retarded propagator. Modes coming purely out of the horizon would correspond to the causality-reversed case of the advanced propagator. (One can make these arguments more rigorous; see \cite{Skenderis:2008dg}). Thus to calculate a retarded Green's function at finite temperature, we solve the linearized fluctuation equation in the Lorentzian signature black hole geometry, and impose {\em infalling} boundary conditions (the lower sign in (\ref{InfallingOutgoing})). \subsection{Calculating the shear viscosity to entropy density } The AdS/CFT result for the ratio of shear viscosity to entropy density was originally done by \cite{Kovtun:2004de} and can be calculated in a number of ways. Here we follow the calculation of \cite{Gubser:2008sz}. We specialize to a four-dimensional field theory with gravity dual involving an ordinary Einstein gravity, but do not make any further restriction on the theory. To calculate the retarded Green's function $G^R_{xy;xy}$, we will solve the fluctuation equation for the graviton mode $h_{xy} \equiv Z$ with $\omega \neq 0, \vec{k}=0$. As mentioned before, a transverse traceless graviton mode obeys the Klein-Gordon equation of a massless scalar, which in the asymptotically AdS background of (\ref{AsymptoticAdS}) takes the form \begin{eqnarray} \label{ZKG} Z'' + (4A' - B'+ {h'\over h}) Z' + \omega^2 {e^{2B-2A} \over h^2} Z =0\,. \end{eqnarray} Let us define the coefficient of the $Z'$ term as $p(r)$. The theory of ordinary differential equations then tells us that the quantity \begin{eqnarray} {\cal F} \equiv e^{\int p(r) dr} \times{\rm Wronskian} \,, \end{eqnarray} is independent of $r$. Choosing the solutions $Z$ and $Z^*$ to put into the Wronskian, we obtain \begin{eqnarray} {\cal F}= e^{4A-B} h \, {\rm Im} (Z^* Z') \,. \end{eqnarray} We will make our progress by evaluating this quantity in the UV (near the boundary) and in the IR (near the horizon). Near the boundary, the solution to (\ref{ZKG}) takes the form, \begin{eqnarray} \label{ZExpansion} Z(r \to \infty) = Z^{(0)}_{\rm UV} + \ldots + {Z^{(4)}L^8 \over r^4} + \ldots \,, \end{eqnarray} and with $Z^{(0)}_{\rm UV}$ identified as the source, holographic renormalization identifies the expectation value for the dual operator and the resulting Green's function are \begin{eqnarray} \langle {\cal O} \rangle = - 4 {L^3 \over 2 \kappa^2} Z^{(4)} \quad \quad \to \quad \quad G_R = - 4 {L^3 \over 2 \kappa^2} {Z^{(4)} \over Z^{(0)}_{\rm UV}} \,. \end{eqnarray} Near the boundary we also have \begin{eqnarray} e^{4A-B} h \to {r^5 \over L^5} \,. \end{eqnarray} In calculating the Im$(Z^* Z')$ part of ${\cal F}$, the coefficients of the unwritten middle terms in (\ref{ZExpansion}) are proportional to $Z^{(0)}_{\rm UV}$, and thus the corresponding cross terms are all real; the first non-real term involves $Z^{(0)*}_{\rm UV} Z^{(4)}$. Choosing a normalization where $Z^{(0)}_{\rm UV} = 1$, this is just proportional to the imaginary part of the retarded Green's function, \begin{eqnarray} \label{ImagGreen} {\rm Im} \, G^R = {1 \over 2 \kappa^2} {\cal F} \,. \end{eqnarray} Now turn to the IR. 
Near the horizon we have \begin{eqnarray} Z &=& Z^{(0)}_{\rm IR} (r - r_H)^{-i \alpha \omega} + \ldots\\ &=& Z^{(0)}_{\rm IR} ( 1 - i \alpha \omega \log (r-r_H) + \ldots) \,, \end{eqnarray} where \begin{eqnarray} \alpha = {e^{B-A} \over h'}\Bigg|_{r=r_H} \,, \end{eqnarray} which implies \begin{eqnarray} {\cal F}= - e^{3A(r_H)} \omega |Z^{(0)}_{\rm IR}|^2 \,. \end{eqnarray} Plugging into (\ref{ImagGreen}) and the Kubo formula (\ref{Kubo}), we find \begin{eqnarray} \eta = {e^{3A(r_H)} \over 16 \pi G_N} |Z^{(0)}_{\rm IR}|^2 \,, \end{eqnarray} where we used $2 \kappa^2 = 16 \pi G_N$. We see this is proportional to the gravitational expression for the entropy density in terms of the horizon area (\ref{Entropy}). Thus we have \begin{eqnarray} {\eta \over s} = {1 \over 4\pi} |Z^{(0)}_{\rm IR}|^2 \,. \end{eqnarray} To complete the calculation, we have to relate $Z^{(0)}_{\rm IR}$ to $Z^{(0)}_{\rm UV}$. In general one would not be able to do this without solving the fluctuation equation everywhere. However, here we have a trick up our sleeve. In the $\omega \to 0$ limit, the Klein-Gordon equation (\ref{ZKG}) becomes \begin{eqnarray} \partial_r (\log Z') + \partial_r (4A - B + \log h) = 0\,. \end{eqnarray} Integrating this, we obtain \begin{eqnarray} Z(r) = {\rm const} + {\rm const}' \int_r^\infty dr' {e^{-4A+B} \over h} (r')\,. \end{eqnarray} Going to the UV $r \to \infty$, we find \begin{eqnarray} {\rm const} = Z^{(0)}_{\rm UV} \,, \end{eqnarray} while going to the IR where $h \propto r-r_H$, \begin{eqnarray} Z = {\rm const} + {\rm const}'' \log (r-r_H) \,, \end{eqnarray} and thus const $= Z^{(0)}_{\rm IR}$. Putting these together we have \begin{eqnarray} Z^{(0)}_{\rm IR} = Z^{(0)}_{\rm UV} = 1\,, \end{eqnarray} and thus \begin{eqnarray} {\eta \over s} = {1 \over 4 \pi} \,. \end{eqnarray} The particularly simple form of the fluctuation equation in the low-energy limit allowed us to relate the UV and IR directly, and solve for the shear viscosity to entropy density ratio. (We note that for other transport coefficients, such a simplification does not occur.) Thus it is {\em universally} the case for any theory with an Einstein gravity dual that this ratio will obtain. This result represents another way to proceed given that we don't have an exact gravity dual of QCD: look for general features of large N gauge theories that span a wide class of cases. We do not have a gravity dual for QCD, but we now have seen that it is a generic feature of large-N, strongly coupled field theories that their viscosity to entropy density ratio is very low; so it is not unreasonable to expect that QCD might have this property as well, and indeed experimentally this turns out to be the case. \section{Holographic Superconductors} So far we have encountered systems with scalars turned on, leading to RG flow geometries, and systems with gauge fields turned on, leading to a density and chemical potential for the corresponding dual conserved charge. It is also possible to turn on scalars and gauge fields simultaneously. If the scalars are neutral with respect to the gauge field, these geometries are qualitatively similar to ones we have already studied; in fact the QCD phase diagram spacetimes we considered are examples of this class. If a charged scalar is turned on, however, things are qualitatively different. The corresponding $U(1)$ is now broken. 
Such geometries are generally referred to as {\em holographic superconductors} \cite{Gubser:2008px, Hartnoll:2008vx, Hartnoll:2008kx, Gubser:2008pf}, because of the spontaneous breaking of the symmetry. Probably a better name would be {\em holographic superfluids}, because the dual field theory current $J^\mu$ is global, not gauged. But people like to imagine it would be easy to weakly gauge the current, and the holographic superconductor name has stuck. Consider a gravity action of the form \begin{eqnarray} \label{ChargedScalarL} S_{\rm grav} = {1 \over 2 \kappa^2} \int d^{d+1}x \left( R - {1 \over 4} F_{\mu\nu} F^{\mu\nu} - |(\partial_\mu - i e A_\mu) \phi|^2 - V(\phi) \right)\,, \end{eqnarray} containing a charged scalar $\phi$. We can imagine the field $\phi$ is dual to the ``Cooper pair", the charged bosonic composite whose condensation leads to superconductivity. One class of solutions to (\ref{ChargedScalarL}) that always exists is the charged black hole, the AdS-Reissner-Nordstr\"om solution (\ref{AdSRN}) with vanishing scalar $\phi = 0$. One can choose any $T$ and $\mu$ for these solutions, and the $U(1)$ is unbroken. Since the dual theory is conformal, only the ratio $T/\mu$ really matters, and black holes with the same ratio will be coordinate-equivalent under the scale transformation. However, it turns out that other solutions exist for certain values of $T$ and $\mu$. To anticipate why this might be the case, let's look at some interesting properties of the AdSRN solution at zero temperature. If we choose parameters $Q^2=d/(d-2) r_H^{2d-2}$, we get $T=0$ but $\mu >0$, and the horizon function becomes \begin{eqnarray} h = 1 - {2d-2 \over d-2} {r_H^d \over r^d} + {d \over d-2} {r_H^{2d-2} \over r^{2d-2}} = {d (d-1) \over r_H^2} (r-r_H)^2 + {\cal O}((r-r_H)^3)\,. \end{eqnarray} The geometry still has a horizon, but there is now a double zero in the horizon function; this is the signature of an {\em extremal} black hole, which has the minimum possible mass for a given charge. Near the horizon, the extremal metric takes the form \begin{eqnarray} ds^2 = - {(r-r_H)^2 \over L_2^2} dt^2 + {L_2^2 d(r-r_H)^2 \over (r-r_H)^2} &+& {r_H^2 \over L^2} d\vec{x}^2 \,, \quad \quad A_0 = {\sqrt{2}\over L_2} (r-r_H ) \,, \\ L_2^2 &\equiv& {L^2 \over d(d-1)} \,. \end{eqnarray} which is AdS$_2 \times \mathbb{R}^{d-1}$, with the time and radial directions combining into AdS$_2$ with characteristic length $L_2$, and a constant electric field in the radial direction. Consider fluctuations of the charged scalar in this background. The scalar receives an additional effective contribution to its mass from the coupling to the electric field, which near the horizon takes a simple form: \begin{eqnarray} \label{meff} m^2_{\rm eff} = m^2 + e^2 g^{tt} A_0^2 = m^2 - 2 e^2 \,. \end{eqnarray} Now an interesting thing can occur: even if the mass $m^2$ satisfies the BF bound in AdS$_{d+1}$, it can be that the effective mass $m_{\rm eff}^2$ may violate the AdS$_2$ BF bound, \begin{eqnarray} \label{AdS2BF} m_{\rm eff}^2 L_2^2 \geq - {1 \over 4} \,, \end{eqnarray} if the electric charge $e$ of the scalar field is sufficiently strong. Such a violation suggests that while the scalar field may be stable in the AdS vacuum, it develops an instability near the horizon of a charged black hole. We might expect the scalar to condense, forming a condensate around the black hole. 
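As a small aside, the instability criterion just described is easy to package: using the effective mass (\ref{meff}) and $L_2^2 = L^2/(d(d-1))$, condensation is suggested whenever $m^2_{\rm eff} L_2^2 < -1/4$. A short sketch (with $m^2$ and $e^2$ quoted in units of $1/L^2$, an assumption of this illustration rather than a statement about any particular convention):
\begin{verbatim}
# Near-horizon AdS_2 BF-bound test for a charged scalar around extremal AdS-RN,
# using m_eff^2 = m^2 - 2 e^2 and L_2^2 = L^2 / (d (d-1)).
def ads2_bf_violated(d, m2L2, e2L2):
    meff2_L2sq = (m2L2 - 2.0 * e2L2) / (d * (d - 1))
    return meff2_L2sq < -0.25, meff2_L2sq

# The example used below: d = 3, m^2 L^2 = -2, e = 1
print(ads2_bf_violated(3, -2.0, 1.0))   # (True, -0.666...): instability expected
# A neutral massless scalar, for comparison:
print(ads2_bf_violated(3, 0.0, 0.0))    # (False, 0.0): no near-horizon instability
\end{verbatim}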
For $T$ nonzero but small, the near-horizon geometry is not precisely AdS$_2$, but the charge contribution to $m^2_{\rm eff}$ can be seen to get smaller as $T$ is increased, suggesting that this condensation phenomenon will be strongest at low temperatures. We have now motivated that there might be asymptotically AdS charged black hole solutions with a nonzero scalar field turned on, preferentially at lower temperatures. We can look for such solutions, imposing the choice that the scalar profile is only associated to the dual operator having an expectation value turned on, but no source; this means the solutions will be dual to states in the {\em same} dual field theory as the AdSRN backgrounds. It turns out these solutions do exist, and furthermore exist only below a certain critical temperature $T_c$, whose magnitude is set by the only other parameter associated to a mass scale, the chemical potential: $T_c \sim \mu$. As the temperature is decreased in this family of solutions, the value of the condensate $\langle {\cal O}\rangle$ grows. For example, consider the case $d=3$ with the scalar mass $m^2 L^2 = -2$, which shows up in ${\cal N}=8$ gauged supergravity and was our example when we went over holographic renormalization, and let it be in the regular quantization so the dual operator ${\cal O}_2$ has dimension 2. One finds solutions with asymptotic scalar field \begin{eqnarray} \phi(r\to \infty) = {0 \over r} + {{\cal O}_2 \over r^2} + \ldots \,, \end{eqnarray} with the condensate as a function of temperature given in figure~(\ref{fig:HoloSup}a) \cite{Hartnoll:2008vx}. \begin{figure} \begin{center} \includegraphics[scale=0.4]{HoloSup1.pdf} \includegraphics[scale=0.4]{HoloSup2.pdf} \caption{The value of the condensate $\langle {\cal O}_2\rangle$ compared to the critical temperature $T_c$ as temperature is varied, with $d=3$, $m^2 L^2 = -2$ and $e=1$, from \cite{Hartnoll:2008vx}. \label{fig:HoloSup}} \end{center} \end{figure} Thus there are two solutions to the same theory with the same $T/\mu$, one with broken $U(1)$ symmetry and one with unbroken. As in the QCD phase diagram discussion, this corresponds to two different states at the same point in the phase diagram. Which solution is actually preferred is a question of which minimizes the free energy, which as discussed previously is the (fully renormalized) classical gravitational action. Calculating this, one indeed finds that the solutions with the condensate are thermodynamically preferred. Thus at high temperatures, only AdSRN solutions exist and the dual field theory is in a regular phase, but as the temperature is lowered below $T_c$, the holographic superconductor solutions take over, and the theory enters a superfluid phase. We can understand these holographic superconductor solutions as the endpoint of the instability towards condensing scalars found from a violation of the near-horizon AdS$_2$ BF bound (\ref{AdS2BF}). One interesting question is how the condensate modifies the flow of charge. This can be studied in the conductivity, another transport coefficient similar to the shear viscosity discussed previously. This is captured in the spatial component of a frequency-dependent gauge field, \begin{eqnarray} \label{AConduct} A_x(\omega) = A_x^{(0)}(\omega) + {A_x^{(1)}(\omega) \over r} + \ldots \,. \end{eqnarray} In practice this mode mixes with the metric fluctuation mode $h_{tx}$, and they must be studied together as a coupled system.
From (\ref{AConduct}) one can then obtain the conductivity from Ohm's law $\langle \vec{J}\rangle =\sigma(\omega) \langle\vec{E}\rangle$, with $\langle E_x \rangle = \langle \dot{A}_x \rangle = -i \omega \langle A_x \rangle$, and thus \begin{eqnarray} \sigma(\omega) = {\langle J_x \rangle \over \langle E_x \rangle} = {i \over\omega} { A_x^{(1)}(\omega)\over A_x^{(0)}(\omega)} \,. \end{eqnarray} The $\omega \to 0$ limit gives the DC conductivity $\sigma\equiv \sigma(\omega=0)$ in a Kubo formula directly analogous to the one (\ref{Kubo}) for the shear viscosity. The result is plotted in figure~(\ref{fig:HoloSup}b) \cite{Hartnoll:2008vx} for a range of $T/T_c$; as $T$ decreases below the critical temperature, the low-frequency conductivity disappears, indicating an energy gap in the spectrum that eventually reaches size $\sim T_c$. Top-down holographic superconductors embedded in string theory have also been found \cite{Gubser:2009qm, Gauntlett:2009dn, Gubser:2009gp, Gauntlett:2009bh, Ammon:2010pg}. There the large set of gauge fields and charged scalar fields leads to multiple instabilities and intricate webs of branches of superconducting solutions; for a description in the ABJM case, see \cite{Donos:2011ut}. Zero-temperature limits of holographic superconductors are known to exist \cite{Horowitz:2009ij, Gubser:2009cg}, and can differ substantially from the extremal black holes we have already encountered as zero-temperature limits of non-superconducting systems. One top-down zero-temperature solution (which can be viewed either as part of eleven-dimensional supergravity reduced to four dimensions \cite{Gauntlett:2009dn, Gubser:2009gp, Gauntlett:2009bh}, or directly in four-dimensional maximally supersymmetric gauged supergravity \cite{Bobev:2010ib}) takes the form of a {\em domain wall} geometry, where the spacetime is asymptotically anti-de Sitter at both the UV and IR ends. In the UV, the characteristic length scale is the usual one $L_{\rm UV} = L$ associated to empty AdS space and the vacuum of ABJM theory, where the scalar fields vanish; in the IR, the running charged scalar approaches a different critical point of the supergravity potential, resulting in a different effective cosmological constant and a smaller AdS scale, $L_{\rm IR}$. The rest of the geometry interpolates between these two AdS limits. Analogs of this geometry with no gauge field were found in \cite{Freedman:1999gp}, and can be thought of as RG flow geometries where a relevant perturbation in the UV brings the theory to a new nontrivial fixed point in the IR. In the zero-temperature superconductor case, the asymptotics of the scalar only lead to a field theory expectation value for the dual operator, but the chemical potential constitutes the deformation of the theory. More generally, it is believed \cite{Gubser:2009cg} that zero-temperature holographic superconductors will interpolate between AdS in the UV, and a so-called Lifshitz geometry in the IR, with metric of the form \cite{Kachru:2008yh} \begin{eqnarray} ds^2_{\rm Lif} = -\left(r \over L_{\rm IR}\right)^{2z} dt^2 + {r^2 \over L_{\rm IR}^2} d\vec{x}^2 + {L_{\rm IR}^2 \over r^2} dr^2 \,, \end{eqnarray} which has a scaling symmetry with exponent $z$ treating space and time differently, \begin{eqnarray} D: t \to \lambda^z t \,, \quad \vec{x} \to \lambda \vec{x}\,, \quad r \to {r \over \lambda} \,.
\end{eqnarray} \section{Fermions and Strange Metals} \subsection{High-temperature superconductors} The so-called high-temperature (high-$T_c$) superconductors are interesting not just for their relatively high superconducting transition temperatures, but for a number of other interesting features of their phase diagrams, even outside the superconducting phase. A cartoon of the phase diagram is presented in figure~\ref{fig:PhaseDiag2}, where the vertical axis is the temperature, and the horizontal axis is a doping fraction of atoms in the lattice. Outside the superconducting region, at high doping the system acts like a traditional Fermi liquid: despite the interactions between electrons, it behaves as if transport is mediated by charged particles, which we can think of as electrons ``dressed'' by the interactions. At zero temperature and finite density the dressed electrons arrange themselves into a Fermi surface, with quasiparticle excitations that are asymptotically stable as their energies approach the Fermi surface, as is familiar from the usual Fermi theory of metals. \begin{figure} \begin{center} \includegraphics[scale=0.28]{PhaseDiag2.pdf} \caption{A cartoon of the high-$T_c$ superconductor phase diagram, showing the superconducting dome, as well as Fermi liquid, strange metal, pseudogap and antiferromagnetic phases; the last two are not discussed here. \label{fig:PhaseDiag2}} \end{center} \end{figure} On the other hand, another phase exists above the superconducting dome, with more unusual properties. In this ``strange metal'' phase, sharp Fermi surfaces exist, but there are no stable quasiparticles. Such a state of matter has been called a ``non-Fermi liquid'' and remains theoretically challenging. From what we've learned about QCD in the quark-gluon plasma phase, it is tempting to think that a strongly-coupled phase of matter with no quasiparticle description might be well-described by a gravity dual. Can the gauge/gravity correspondence see a non-Fermi liquid? To answer this question, we can look for Fermi surface singularities and the dispersion relations of their associated small fluctuations, by studying fermionic response in black hole gravity backgrounds. These backgrounds should have finite density, and the Fermi surface singularity should be cleanest at zero temperature (though finite temperature studies can be useful as well). To proceed, we will need to say something about AdS/CFT for fermions. For more discussion of this subject and its background, see for example \cite{Hartnoll:2016apf, McGreevy:2016myw, Iqbal:2011ae}. \subsection{Fermions and AdS/CFT} For fermionic excitations, the basic principle of AdS/CFT remains the same: each field on the gravity side contains boundary conditions corresponding to a source, while the response is free to fluctuate and is tied to the source through a boundary condition in the IR. However, because fermionic fields obey first order differential equations, things are encoded slightly differently: there is only one spinor's worth of boundary conditions from the higher-dimensional point of view, but this splits into two separate spinors from the lower-dimensional perspective, one of which is the source and the other the dual operator expectation value \cite{Iqbal:2009fd}. As with scalar fields, for certain values of the mass there are two possible quantizations. For a spin-1/2 field obeying the Dirac equation\footnote{We use conventions where $\{ \Gamma^\mu, \Gamma^\nu\} = - 2 g^{\mu\nu}$, with mostly plus signature metric.
A hat indicates a flat-space index.} \begin{eqnarray} (i \Gamma^\mu \nabla_\mu - m) \chi = 0 \,, \end{eqnarray} the near-boundary solutions can be written in terms of the projections \begin{eqnarray} \chi_\pm \equiv {1 \over 2} \left( 1 \pm i \Gamma^{\hat{r}} \right) \chi\,, \end{eqnarray} as \begin{eqnarray} \chi_+ (r \to \infty) &=& A_+(t, \vec{x}) L^{d-2mL-1/2} \, r^{-d/2+mL} + \ldots\,, \\ \chi_- (r \to \infty) &=& A_-(t, \vec{x}) L^{d+2mL-1/2} \, r^{-d/2-mL} + \ldots \,. \end{eqnarray} As with the scalar example, we inserted factors of $L$ so that the engineering dimensions of the spinors $A_\pm$ match their $d$-dimensional scaling dimensions, $\Delta = d/2 \mp mL$. Let us assume $m \geq 0$, otherwise we can effectively exchange $\chi_+$ and $\chi_-$. For $mL > 1/2$, we must take $A_+$ (the leading term) as the fixed spinor source: \begin{eqnarray} J_{\rm reg}(t, \vec{x}) = A_+(t, \vec{x}) \,, \end{eqnarray} and allow $A_-$ to fluctuate, corresponding to the expectation value and leading to a dual operator of dimension $\Delta_\chi = {d \over 2} + mL$. For the window $0 \leq mL \leq 1/2$, the alternate quantization becomes possible as well, with source \begin{eqnarray} J_{\rm alt}(t, \vec{x}) = A_-(t, \vec{x}) \,, \end{eqnarray} and dual operator dimension $\Delta_\chi = {d \over 2} - mL$. Thus the regular quantization with $A_+$ fixed gets us dimensions down to $\Delta = d/2$, and the alternate quantization fills in the window $(d-1)/2 \leq \Delta \leq d/2$, down to the unitarity bound.\footnote{Precisely at $m=0$, $\chi_+$ and $\chi_-$ have the same asymptotic scaling. Here both quantizations are possible, and they are equivalent in simple backgrounds but inequivalent in the presence of other interactions. This case is relevant for 4D ${\cal N}=8$ gauged supergravity, where SUSY can be used to select the regular quantization; see \cite{Breitenlohner:1982jf, Breitenlohner:1982bm} and for a modern discussion \cite{DeWolfe:2014ifa}.} From the definition, each of $\chi_\pm$ contains half the degrees of freedom of $\chi$. When the gravity theory is in an even dimension, one can choose a basis for the $(d+1)$-dimensional Clifford algebra where $\Gamma^{\hat{r}}$ is diagonal and $A_\pm$ reduce to half-dimensional spinors appropriate for the odd-dimensional CFT$_d$; for example in AdS$_4$/CFT$_3$, a four-component Dirac spinor $\chi$ in 4D decomposes into two-component Dirac spinors $\chi_\pm$ in 3D. Meanwhile, if the gravity theory is in an odd dimension, $\Gamma^{\hat{r}}$ is proportional to the chirality matrix in the $d$-dimensional Clifford algebra, and the $\chi_\pm$ spinors are chiral; so in AdS$_5$/CFT$_4$, a four-component Dirac spinor in 5D becomes two Weyl spinors in 4D. As with the bosonic case, in general we have to perform holographic renormalization for a fermion as well. Consider a Majorana fermion $\chi$ with $m=0$ in $d=3$, the case appropriate to 4D ${\cal N}=8$ gauged supergravity: here both the dual operator and its source have dimension $3/2$. We take the bulk + boundary action, \begin{eqnarray} S_{\rm Dirac} = {1 \over 2 \kappa^2}\int d^4x \sqrt{-g} {i \over 2} \bar\chi \Gamma^\mu \nabla_\mu \chi + {1 \over 4} \int d^3x \sqrt{-h} \, \bar\chi \chi \,, \end{eqnarray} which evaluated on solutions to the Dirac equation becomes, \begin{eqnarray} S_{\rm Dirac} = {L^2 \over 4 \kappa^2} \int d^3x \,\bar{A}_+ A_- \,,
\end{eqnarray} and leads to the action variation \begin{eqnarray} \delta S_{\rm Dirac} = {L^2 \over 2 \kappa^2}\int d^3x \, \bar{A}_- \delta A_+ \,, \end{eqnarray} which vanishes for the regular quantization $\delta A_+ = 0$, and leads to the one-point function \begin{eqnarray} \label{FermiOnePoint} \langle {\cal O}\rangle = {\delta S_{\rm Dirac} \over \delta \bar{J}} = {L^2 \over 2 \kappa^2 } A_- \,. \end{eqnarray} Changing the sign of the boundary term makes it suitable for the alternate quantization. \subsection{Holographic Fermi and non-Fermi liquids} Now that we understand fermionic correlation functions in AdS/CFT, we can search for Fermi surfaces in finite density systems by examining the fermionic Green's function \begin{eqnarray} G_R(k, \omega) \sim {A_- \over A_+} \,. \end{eqnarray} (Since $A_\pm$ are spinors this is strictly speaking a matrix of Green's functions, but a choice of gamma matrices can diagonalize it.) One then defines a Fermi surface at momentum $k_F$ as a singularity in the Green's function at $\omega =0$, the energy of the Fermi surface: \begin{eqnarray} G_R(\omega = 0, k = k_F) \to \infty \,. \end{eqnarray} Once a Fermi surface singularity is found, one can look for nearby fluctuations. For the extremal AdSRN geometries, there is a subtle order of limits issue between the near-horizon and small-$\omega$ expansions. Treating this with care relates the full Green's function $G_R$ near the Fermi surface to an auxiliary Green's function ${\cal G}(\omega)$ defined in the near-horizon $AdS_2$ region \cite{Faulkner:2009wj}. One then has the schematic form \begin{eqnarray} G_R (\omega, k) \sim {1 \over k_\perp - {1 \over v_F} \omega + \ldots + {\cal G}(\omega) }\,. \end{eqnarray} There is always a series in $\omega$, with higher order terms indicated by the ellipsis, but this may or may not be dominated by the IR Green's function \begin{eqnarray} {\cal G}(\omega) \sim \omega^{2 \nu_k} \,, \end{eqnarray} where $\nu_k$ is an effective $AdS_2$ dual operator dimension. If $\nu_{k_F} > 1/2$, the leading small-fluctuation singularity is given by the $\omega/v_F$ term, which is real and hence is associated to an asymptotically stable mode: this is Fermi liquid behavior. However, if $\nu_{k_F}<1/2$, the dispersion relation is dominated by the complex ${\cal G}(\omega)$, which leads to unstable modes whose decay widths are of the same order as their energies; these are not asymptotically stable and describe a non-Fermi liquid. In bottom-up models in the AdSRN geometry, it was found that by tweaking the fermion mass and charge, both Fermi liquid and non-Fermi liquid behavior could manifest \cite{Lee:2008xf, Liu:2009dm, Cubrovic:2009ye, Faulkner:2009wj}. One can also study a set of top-down geometries dual to both ${\cal N}=4$ SYM and to ABJM theory; these geometries in general have running neutral scalars as well, but share the extremal near-horizon AdS$_2$ property of AdSRN. Instead of a doping parameter, one may vary the ratios of chemical potentials (of which one has three for the $SO(6)$ of ${\cal N}=4$, and four for the $SO(8)$ of ABJM) to produce new geometries.
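Heuristically, the distinction between the two cases can be seen from the width-to-energy ratio of the would-be quasiparticle: near the Fermi surface the decay rate relative to the excitation energy scales like ${\rm Im}\,{\cal G}(\omega)/(\omega/v_F) \sim \omega^{2\nu_{k_F}-1}$, which vanishes as $\omega \to 0$ for $\nu_{k_F} > 1/2$ and grows for $\nu_{k_F} < 1/2$. A toy numerical illustration (the values of $\nu$ and $v_F$ are hypothetical, chosen only for illustration):
\begin{verbatim}
# Width-to-energy ratio ~ omega^(2 nu - 1) for a mode near the Fermi surface.
v_F = 1.0                                  # hypothetical Fermi velocity
for nu in (0.7, 0.3):                      # nu > 1/2  vs  nu < 1/2
    rows = ["%.0e: %.3g" % (w, w**(2*nu) / (w / v_F))
            for w in (1e-1, 1e-2, 1e-3, 1e-4)]
    print("nu = %.1f " % nu, rows)
# nu = 0.7: ratio ~ omega^{+0.4} -> 0   (long-lived quasiparticle, Fermi liquid)
# nu = 0.3: ratio ~ omega^{-0.4} grows  (no stable quasiparticle, non-Fermi liquid)
\end{verbatim}
With this picture in mind, we return to the top-down constructions.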
Holographic Fermi surfaces indeed appear in such top-down models for many fermionic fields, and interestingly, over a wide class of such top-down geometries, {\em every} fermion studied has non-Fermi liquid behavior \cite{DeWolfe:2011aa, DeWolfe:2012uv, DeWolfe:2014ifa}.\footnote{In certain special IR-singular geometries with vanishing entropy, there can be perfectly stable modes in an energy band, before the non-Fermi liquid behavior returns \cite{DeWolfe:2013uba, DeWolfe:2014ifa}.} Varying the chemical potentials can also produce geometries where $\nu_{k_F} \to 0$, which can be thought of as the divergence of a correlation length \cite{Iqbal:2011in}. This can indicate the boundary of a so-called ``oscillatory region'', inside which no Fermi surface singularities exist as the Fermi momentum moves off into the complex plane; these regions are characterized by $\log \omega$ terms and may be thought of as existing in regions where the fermion charge is strong enough to allow pair production of charged particles in the IR AdS$_2$, in a fermionic version of the bosonic instability seen in holographic superconductors \cite{Faulkner:2009wj}. For certain fermions in top-down models there can also be isolated points where the correlation length diverges, so-called pole-zero transitions, where lines of Fermi surfaces (poles in $G_R$) transmute into zeros in $G_R$ instead. \subsection{Fermionic response in holographic superconductors} Besides the non-superconducting states just described, we can consider how fermions behave in holographic superconductors as well. Conventional superconducting states develop a mass gap: one question we can ask is, does such a gap occur for fermionic fluctuations in holographic superconductors? Bottom-up fermions in superconducting backgrounds, with variable masses and charges, generically display bands of ungapped stable fermionic modes, with higher charge in general leading to more bands \cite{Gubser:2009dt}. It was suggested by \cite{Faulkner:2009am} that such bands of excitations could be made gapped by a particular gravity interaction of the schematic form \begin{eqnarray} \label{MajoranaBCS} S_{\rm Maj} = \int d^4x \, \phi \chi^T C \Gamma_5 \chi + \hbox{hermitian conjugate}, \end{eqnarray} where $\phi$ is the charged scalar active in the superconductor, and $\chi$ is the charged fermion; the fermion bilinear in this Majorana-type coupling carries net charge, and can be thought of as an interaction between a Cooper pair $\chi \chi$ and the background condensate. Such an interaction is strongly reminiscent of the BCS Hamiltonian for superconductivity, \begin{eqnarray} H_{\rm BCS} = \Delta c^\dagger c^\dagger + \hbox{hermitian conjugate} \,, \end{eqnarray} with $c^\dagger$ a fermion creation operator and $\Delta$ the condensate. In general, the excitations of particles and their antiparticles (holes) have mirrored bands crossing at $\omega = 0$; the ``Majorana BCS'' coupling (\ref{MajoranaBCS}) causes these bands to interact, and the resulting level repulsion pushes them away from the Fermi surface, leading to a gap (see figure~\ref{fig:LevelRepulsion}). \begin{figure} \begin{center} \includegraphics[scale=0.35]{levelRepulsion.pdf} \caption{A cartoon for the gapping mechanism of the Majorana BCS coupling. Without such a coupling, fermionic excitations generally exist crossing the dashed Fermi surface, leading to an ungapped state. The antiparticles/holes have an identical line of excitation, flipped across $\omega=0$.
Turning on the Majorana BCS coupling mixes these two energy bands, causing them to repel and leading to a gap. Taken from \cite{DeWolfe:2016rxk}. \label{fig:LevelRepulsion}} \end{center} \end{figure} Thus in bottom-up models of holographic superconductivity, the ungapped state is generic, but a gap can be produced by suitably small charges or by an interaction of the form (\ref{MajoranaBCS}). It is natural to ask what occurs in top-down systems. In one such zero-temperature model, an AdS$_4$ to AdS$_4$ domain wall describing a state in ABJM theory as described in the previous section, the fermionic excitations are gapped purely as a result of the charges being small \cite{DeWolfe:2016rxk}. In another, similar AdS$_4$ to AdS$_4$ domain wall state (technically not a superconductor since the scalar also has a source turned on, but geometrically very similar) \cite{Bobev:2011rv} the charges are large enough that without interactions an ungapped state would appear, but the theory provides a multi-field generalization of the Majorana BCS coupling (\ref{MajoranaBCS}), which conspires to precisely gap the fermionic modes \cite{DeWolfe:2015kma, DeWolfe:2016rxk}. Thus, just as bottom-up models of non-superconducting finite density states could be Fermi liquids or non-Fermi liquids, but top-down systems appear to be exclusively non-Fermi liquids, bottom-up models of superconducting states can be gapped or ungapped, but top-down systems appear as exclusively gapped, for one reason or another. Whether this is conspiracy or coincidence is not entirely clear. \subsection{Limits of the correspondence} There is an interesting tension between these phenomena and what is visible in the bosonic variables. In general bosonic response functions like the conductivity do not ``see" the corresponding Fermi surfaces; for example, a certain response to charged impurities called Friedel oscillations is expected to lead to singularities in the conductivity at $2k_F$, but these are not seen \cite{Blake:2014lva, Henriksson:2016gfm}. The oscillatory region and pole-zero transition behaviors, associated with something like a correlation length divergence and hence presenting the appearance of a quantum phase transition in the fermionic variables, do not show up in the thermodynamics or the bosonic response functions. It has been speculated that the fermionic response may be in some sense subleading in $N$ and hence invisible to the leading-$N$ bosonic response, and that leading order in $N$ requires a bulk condensation of fermionic fields, the so-called electron stars \cite{Hartnoll:2010gu,Hartnoll:2010ik}. However, the fermionic correlators appear at the same, leading order in $N$ as their bosonic counterparts; this is manifest in top-down models, where they show up at $N^2$ and $N^{3/2}$ respectively, essentially since they come from the same supergravity action as the bosons. This is apparently in conflict with the idea that the fermionic response is subleading in $N$. This disconnect can be traced to the fact that two-point functions in the large-$N$ limit are calculated as linearized fluctuation equations on the gravity side, and so bosonic and fermionic gravity modes are ignorant of each other to this order; yet according to the deep reshuffling of degrees of freedom inherent in holography, they should each ``know" about both bosonic and fermionic degrees of freedom in the field theory. 
As the large N limit suppressed the quantum corrections to the scaling exponents in the holographic critical point, leaving mean field values, this mean field behavior of large N also seems to wash out the bosonic and fermionic responses' knowledge of each other. We have seen that the AdS/CFT correspondence is a remarkable duality, and can be very useful for modeling the behavior of field theories at strong coupling without a quasiparticle description. We may not have an exact gravity dual for QCD or a laboratory system, but there are a number of lines of attack open to us: we can engineer a model holographic system designed to mimic the true one (as with the QCD phase diagram), we can look for universal behavior across a wide class of holographic models (as with the viscosity to entropy ratio), or we can try to find holographic models of broad classes of dynamics (as with superconductors and non-Fermi liquids). There are many other applications both already done and still yet to be done, and hopefully these lectures have provided a useful introduction to this broad and fascinating subject. \acknowledgments I am grateful to the organizers of TASI 2017, Mirjam Cveti\v{c} and Igor Klebanov, for inviting me to lecture and for organizing a wonderful school, and to Tom DeGrand and Emily Flanagan for running it so smoothly. I would also like to thank my fellow speakers for delivering a series of fascinating and enjoyable lectures. I would like to thank collaborators old and new who have taught me a lot about AdS/CFT, particularly Dan Freedman, Steve Gubser, Oscar Henriksson and Chris Rosen. Most of all I would like to thank the students, for their stimulating questions that made it such a pleasure to interact and talk about physics. My research is supported by the Department of Energy under Grant No.~DE-FG02-91-ER-40672. I would like to take a moment to remember Joe Polchinski. Joe had a long history with TASI, co-organizing TASI 1992 with Jeff Harvey where he delivered a now-classic set of lectures on effective field theory and the Fermi surface, speaking on D-branes at TASI 1996 right after they helped launch the second superstring revolution, giving a terrific set of talks on AdS/CFT at TASI 2010, and co-organizing the wonderful TASI 2015 summer school with Pedro Vieira, where he spoke on the black hole information problem. Without his discovery of D-branes, we would not have been led to the gauge/gravity correspondence as we were, and without his powerful thinking and mischievous sense of humor, the fields of quantum field theory, string theory and black hole information, among others, would have been much poorer. I learned of his death as these notes were being completed, and though they can only pretend to aspire to the level of clarity and insight that Joe would invariably bring to a subject, I would like to dedicate these lectures to him. \bibliographystyle{JHEP}
\section{Introduction} Although the formation of stars in multiple systems is known to be a major channel of star-formation (e.g.\ Duch\^ene et al.\ 2007 for a recent review), our understanding of the way multiple stellar systems form remains comparatively poorer than our comprehension of isolated star-formation. To improve this situation, it is necessary to identify and characterize multiple systems in the earliest phases of their evolution (preferably during the Class 0 and I stages). Progress has been slow, unfortunately, because very few existing instruments have enough sensitivity and angular resolution to detect and resolve very embedded systems even in the nearest star-forming regions. Moreover, young stars in multiple systems are surrounded by circumstellar and/or circumbinary disks, and often drive powerful, episodic jets that must be correctly interpreted before a given system can be properly characterized. As a consequence, the number of very young systems for which the binarity is clearly established, and the system parameters are well measured, is extremely limited. Arguably one of the most promising cases is that of IRAS~16293--2422 (e.g.\ Wootten 1989; Mundy et al.\ 1992; Ceccarelli et al.\ 2000; Chandler et al.\ 2005). IRAS 16293--2422 is a well-studied very young low-mass protostellar system located in Lynds 1689N, a dark cloud in the Ophiuchus star-forming complex at $d$ = 120 pc (Loinard et al.\ 2008). It has a total luminosity of about 15 L$_{\odot}$\ and has never been detected shortward of $\lambda$ = 12 $\mu$m (e.g.\ Ceccarelli et al.\ 1998, J{\o}rgensen et al.\ 2008). These characteristics make IRAS~16293--2422 a {\it bona fide} Class 0 source with an age of only a few 10$^4$ yr (Andr\'e et al.\ 2000). It has been suspected to be multiple since Wootten (1989) and Mundy et al.\ (1992) found it to be double both at millimeter and centimeter wavelengths, and Mizuno et al.\ (1990) discovered that it powered a multi-lobe outflow system. More recent observations (Hirano et al.\ 2001; Castets et al.\ 2001; Chandler et al.\ 2005) confirmed these early findings, and it is now well-established that IRAS 16293--2422 is indeed a very young multiple system. In all centimeter observations with an angular resolution better than \msec{0}{3} obtained before 2006, IRAS 16293--2422 comprised three radio sources called A1, A2 and B (Wootten 1989, Loinard 2002, Chandler et al.\ 2005 --see Fig.\ 1). Components A1 and A2 are located to the south-east of the system and are separated from each other by about \msec{0}{34}, whereas component B is located about 5$''$ to the north-west of the A1/A2 pair (Fig.\ 1). Using archival VLA observations, Loinard (2002) and Chandler et al.\ (2005) have shown that the position angle between A2 and A1 has increased roughly linearly from about 45$^\circ$ in the late 1980s to about 80$^\circ$ in 2003--2005. During that same timespan, the separation between the two sources has remained constant at about \msec{0}{34}. Two different interpretations have been proposed for this relative motion (see Loinard et al.\ 2007 for details). According to the first one, A1 and A2 are two protostellar sources in a nearly circular Keplerian orbit, seen almost exactly face-on. 
In the alternative possibility, A1 is interpreted as a shock resulting from the impact of a strongly precessing (or wobbling) jet driven by a third --as-yet undetected-- protostar in the system, presumably a companion of A2\footnote{The jet driven by A2 itself is not aligned with the current position of A1, so if A1 is indeed a shock feature, it must be driven by a different source (Chandler et al.\ 2005; Loinard et al.\ 2007).}. Of course, in the former case, the position angle between A2 and A1 should keep increasing indefinitely, whereas in the latter, the change of position angle with time should decelerate, and eventually reverse its course because the jet must oscillate around an equilibrium position. In a recent 1.3 cm image, IRAS~16293--2422 was unexpectedly found to comprise four radio sources rather than the usual three (Fig.\ 1c --Loinard et al.\ 2007). While components B and A1 were at the expected positions and had the expected morphologies, component A2 appeared to have split into two sub-condensations dubbed A2$\alpha$ and A2$\beta$. Since the line joining A2$\alpha$ to A2$\beta$ was at a position angle of about 62$^\circ$, very similar to the direction of both the large-scale flow and the thermal jet known to be driven by A2, Loinard et al.\ (2007) argued that A2$\alpha$ and A2$\beta$ traced a recent bipolar ejection from A2. If this is the case, then A2$\alpha$ and A2$\beta$ should move symmetrically away from A2, along the direction of the jet, at velocities typical of the winds driven by low-mass protostars (tens to hundreds of km s$^{-1}$, depending on the inclination of the jet). This could be easily tested with new observations. As the previous discussion shows, the very nature of some of the sources associated with IRAS 16293--2422 remains unknown, and the exact number of protostars contained in the system is still uncertain. In this article, we will present and analyze new high-resolution, high-sensitivity, radio continuum observation that will be used to further investigate the structure of IRAS~16293--2422. \section{Observations} Two new 3.6 cm observations of IRAS~16293--2422 were obtained on 2007, August 14 (2007.62), and 2008, December 13 (2008.95) with the {Very Large Array} (VLA) of the {National Radio Astronomy Observatory} (NRAO) in its most extended (A) configuration. The standard 3.6 cm continuum frequency setup was used, and both circular polarizations were recorded simultaneously. The absolute flux density was set using observations of 3C 286. For improved flux accuracy, we did not assume 3C 286 to be a point source, but instead used a model image provided by NRAO. The phase calibrator was PKS J1625--2527 whose absolute position is expected to be accurate to about 2 milli-arcseconds. The data were collected during the VLA/EVLA transition period, so the array consisted of a mixture of ``old'' VLA antennas, and of antennas already equipped with new electronics. The non-matched bandpass shapes between VLA and EVLA antennas produced significant closure errors on VLA/EVLA baselines. To correct these errors, we measured baseline-dependent gains using the observations of the phase calibrator. The images obtained after applying these baseline-dependent gains are almost identical to those produced by simply flagging the VLA/EVLA baselines. 
To optimize the angular resolution, the calibrated visibilities were imaged using uniform weighting (ROBUST parameter set to --5), resulting in synthesized beams of \msec{0}{35} -- \msec{0}{45} and $\sim$ \msec{0}{17} in the north-south and east-west directions, respectively (see Table 1). The r.m.s.\ noise levels in the final images are 45--50 $\mu$Jy beam$^{-1}$. In the following sections, these new observations will be compared to a similar 3.6 cm observation obtained in 2003.65, and to 0.7 cm and 1.3 cm images obtained in 2005.20 and 2006.11, respectively (see Fig.\ 1 --all three images were published previously by Loinard et al.\ 2007). The characteristics of these images are listed in Table 1 together with those of the new 3.6 cm images published here. The flux of source B is known to be fairly constant with time at any given wavelength (Chandler et al.\ 2005), so measurements of the total flux of component B provides a valuable self-consistency check on the overall calibration. The flux found for component B in the three 3.6 cm observations included in the present work are reported in the last column of Table 1; they are indeed fully consistent with each other, and with previously published figures (Chandler et al.\ 2005). Observations at $\lambda$ = 0.7 and 1.3 cm, designed to measure (in combination with the 3.6 cm data presented here) the spectral index of A2$\alpha$ and A2$\beta$ were requested and approved both in the 2007.62 and 2008.95 runs. They were actually collected in the former run, but could not be properly calibrated because of poor weather conditions. Because of scheduling limitations, no multi-wavelengths data could be obtained in the 2008.95 run. \section{Nature and origin of A2$\alpha$ and A2$\beta$} \subsection{Structure and astrometry} In the two new 3.6 cm observations, component B is at its expected position, with its usual morphology (Fig.\ 1) and flux (Sect.\ 2). The structure of component A, however, has changed significantly since early 2006 --when the last VLA observation prior to those presented here was obtained (Fig.\ 1c). In the observation obtained in mid-2007 (Fig.\ 1d), component A of IRAS~16293--2422 appears to contain four radio sources (see particularly the zoom on this region shown in Fig.\ 2). Although somewhat blended with the rest of the emission, A1 is still clearly discernable to the east of the system (Fig.\ 2). A radio source is also visible again at the expected position of A2. Note that this was not the case in the 2006.11 observation, where the emission associated with A2 was blended with that of A2$\alpha$. The two additional sources in the system (indeed, the two brightest ones) are located on each side of A2, and we identify them with A2$\alpha$ and A2$\beta$. They are clearly not at the same positions as in the 2006.11 observation. Instead, they have moved away from A2 in a roughly symmetrical manner, A2$\alpha$ towards the south-west, and A2$\beta$ towards the north-east. In the observation obtained at the end of 2008, component A is (again, but fortuitously) composed of three sources (Figs.\ 1e and 2). Source A2 is still clearly identified, while source A2$\alpha$ has moved further from A2 towards the south-west. Component A2$\beta$ has also kept moving away from A2 (towards the north-east), but now appears to be blended with A1 (Fig.\ 2). To further investigate the nature and properties of A2$\alpha$ and A2$\beta$, it is interesting to compare their relative positions at the various epochs. 
In 2006.11, A2$\alpha$ and A2$\beta$ were separated by \msec{0}{166} $\pm$ \msec{0}{003} at a position angle of 62$^\circ$ $\pm$ 2$^\circ$. In the 2007.62 image, however, the separation has increased to \msec{0}{455} $\pm$ \msec{0}{011}, but the position angle has not changed significantly: it is now measured to be 61$^\circ$ $\pm$ 3$^\circ$. From the change in their separation, we can estimate the velocity at which A2$\alpha$ and A2$\beta$ are moving away from one another to be 109 $\pm$ 4 km s$^{-1}$. The errors quoted here and later in the paper only account for the positional uncertainties of the ejecta; they do not include the errors on the distance $d$ to the source. If the velocity since ejection has been constant at the value estimated above, the ejection must have occurred 0.87 $\pm$ 0.04 yr before the 2006.11 observation (or 2.38 $\pm$ 0.11 yr before the 2007.62 observation), i.e.\ in 2005.24 $\pm$ 0.04. This is --as expected-- between the epochs of the 0.7 and 1.3 cm observations shown in Figs.\ 1 and 2, but only very shortly after the 0.7 cm data were gathered. Since A2 and A2$\alpha$ are well separated in the last two 3.6 cm observations, one can also consider the evolution of the relative position between these two sources. In 2007.62, they were separated by \msec{0}{300} $\pm$ \msec{0}{015} at a position angle of 49$^\circ$ $\pm$ 4$^\circ$, whereas in the 2008.95 observation, the separation was \msec{0}{494} $\pm$ \msec{0}{014} and the position angle 61$^\circ$ $\pm$ 3$^\circ$. Thus, A2$\alpha$ appears to be moving away from A2 at 82 $\pm$ 9 km s$^{-1}$. Assuming again that the velocity has not changed appreciably, the ejection must have occurred 3.41 $\pm$ 0.37 yr before the 2008.95 observation, i.e.\ in 2005.54 $\pm$ 0.37. This is in good agreement with the date estimated above from the relative motion between A2$\alpha$ and A2$\beta$, suggesting that the assumption of constant velocity is reasonable. Note that the relative velocity between A2 and A2$\alpha$ derived here is somewhat larger than half of the velocity between A2$\alpha$ and A2$\beta$ calculated above. This shows that A2$\alpha$ is moving away from A2 somewhat faster than A2$\beta$\footnote{One could argue that this result might also be consistent with an acceleration of the velocity of the ejecta since the relative velocity between A2$\alpha$ and A2 is based on more recent observations than the estimate of the relative velocity between A2$\alpha$ and A2$\beta$. We favor the interpretation given in the text because the two recent 3.6 cm observations (Fig.\ 2) clearly show that A2$\alpha$ has moved farther from A2 than A2$\beta$.}. Indeed, one can estimate the relative velocity between A2$\beta$ and A2 by combining the present result and the relative velocity between A2$\alpha$ and A2$\beta$ calculated earlier. We obtain (assuming again constant velocities) 26 $\pm$ 10 km s$^{-1}$. Thus A2$\alpha$ appears to move away from A2 about three times faster than A2$\beta$. Bipolar ejections usually produce somewhat more symmetric patterns. It should be mentioned, however, that the north-east and south-west lobes of the molecular outflow driven by A2 have long been known to be very asymmetric. For instance, SiO emission is very strong in the direction of the north-east lobe and nearly absent towards the south-west counterpart (e.g.\ Castets et al.\ 2001; Hirano et al.\ 2001).
Since SiO is a good tracer of shocks between jets and circumstellar material, this most likely indicates that the region to the south-west of component A contains relatively little dense gas capable of decelerating the ejecta. In summary, A2$\alpha$ and A2$\beta$ appear to behave kinematically exactly as would be expected if they were ejecta from A2: they are moving (in projection) at 30--80 km s$^{-1}$ away from A2 along the direction (P.A.\ $\sim$ 60$^\circ$) of the outflow known to be powered by A2. The true velocity of the jet must be of the order of the escape velocity from A2. As we will see in Sect.\ 4, A2 is likely to be a $\sim$ 1.5 M$_{\odot}$\ protostar. The radius at which jets are launched is usually believed to be a few stellar radii ($\sim$ 3 $R_*$), and very young stars are a few times larger than their main sequence counterparts of the same mass. It is, therefore, reasonable to assume that the radius of the protostar associated with A2 is about 3 R$_{\odot}$, and that the escape velocity should be calculated at $\sim$ 10 R$_{\odot}$. Under these assumptions, we obtain $V_{esc}$ $\approx$ 240 km s$^{-1}$. To obtain a rough estimate of the orientation of the jet, we assume that this value provides a reasonable estimate of the true current velocity of the jet (this would require, in particular, that the jet has suffered little deceleration since it was launched). The projected velocity of the ejecta is only 30--80 km s$^{-1}$, so the jet powered by A2 must be oriented along a direction only 10$^\circ$--15$^\circ$ from the line of sight. We conclude that A2 drives a flow oriented almost along the line of sight, and that A2$\alpha$ and A2$\beta$ are ejecta along that flow. Episodic bipolar mass ejections are known to occur in young stars (e.g.\ Marti et al.\ 1995). To our knowledge, this is the first time, however, that an ejection is actually observed from the very beginning: we seem to have witnessed the very birth of a Herbig-Haro pair. \subsection{Properties of the ejecta} The centimeter emission produced by winds and ejecta from low-mass stars is thought to be nearly entirely of free-free origin (e.g.\ Anglada 1995, Shang et al.\ 2004). As detached clumps, A2$\alpha$ and A2$\beta$ are likely less dense than the so-called thermal jets associated with the central regions of winds driven by young stars. As a consequence, the free-free emission from A2$\alpha$ and A2$\beta$ is likely to be optically thin. In the absence of simultaneous multi-frequency observations (see Sect.\ 2), it is somewhat hazardous to estimate their spectral index and ascertain the characteristics of the emission. We note, however, that our data are fully consistent with optically thin free-free emission. The source A2$\beta$ was well-resolved in the 2006.11 1.3 cm data and in the 2007.62 3.6 cm observations. The spectral index derived from these two observations is $\alpha$ = --0.09 $\pm$ 0.05, in excellent agreement with the expected value ($\alpha$ = --0.1) for optically thin free-free emission. To further constrain the properties of the ejecta, we will concentrate on A2$\alpha$, because it is well-resolved from the other sources in both of our 3.6 cm data sets. Within the errors, the 3.6 cm flux of A2$\alpha$ does not appear to have changed much between the two observations (0.62 $\pm$ 0.09 mJy in 2007.62 and 0.93 $\pm$ 0.10 mJy in 2008.95 --Table 2).
The angular size of the emission (deconvolved from the synthesized beam) was found to be \msec{0}{21} $\times$ \msec{0}{08} in the 2008.95 data, whereas the emission was only resolved in one direction in the 2007.62 observations. In the resolved dimension, the angular size was \msec{0}{14}, whereas in the other direction, the emission came from a region smaller than \msec{0}{17}. Thus, the mean angular size of the emission was about \msec{0}{14} in 2008.95 and less than about \msec{0}{15} in 2007.62. Assuming optically thin free-free emission, the mass of ionized gas can be calculated from the radio flux as (e.g.\ Rodr\'{\i}guez et al.\ 1980): \begin{equation} {M_i \over M_\odot} = 3.39 \times 10^{-5} \left( {S_\nu \over 1~\mbox{mJy}} \right)^{0.5} \left( {\nu \over 1~\mbox{GHz}} \right)^{0.05} \left( {T_e \over 10^4~\mbox{K}} \right)^{0.175} \left( {\theta \over 1''} \right)^{1.5} \left( {d \over 1~\mbox{kpc}} \right)^{2.5}. \end{equation} \noindent Using the numbers above and $T_e$ = 10$^4$ K, we obtain, for A2$\alpha$, $M_i$ $\approx$ (1.00 $\pm$ 0.05) $\times$ 10$^{-8}$ M$_{\odot}$\ using the 2008.95 observations (once again, the quoted uncertainty does not include the errors on the distance to the source), and $M_i$ $\lesssim$ 0.9 $\times$ 10$^{-8}$ M$_{\odot}$\ using the 2007.62 data. Assuming that both ejecta have similar masses, the bipolar ejection event reported here corresponds to a total mass of about 2 $\times$ 10$^{-8}$ M$_{\odot}$. From the observed radio flux, one can also calculate the electron density $n_e$ of the ejecta (e.g.\ Rodr\'{\i}guez et al.\ 1980): \begin{equation} {n_e \over 1~\mbox{cm}^{-3}} = 7.8 \times 10^{3} \left( {S_\nu \over 1~\mbox{mJy}} \right)^{0.5} \left( {\nu \over 1~\mbox{GHz}} \right)^{0.05} \left( {T_e \over 10^4~\mbox{K}} \right)^{0.175} \left( {\theta \over 1''} \right)^{-1.5} \left( {d \over 1~\mbox{kpc}} \right)^{-0.5}. \end{equation} \noindent With the observed parameters of the emission, we get $n_e$ = 4.4 $\times$ 10$^{5}$ cm$^{-3}$ for A2$\alpha$. Interestingly, the recombination timescale at that density is about 6 months. Since the ejecta have remained ionized at least since 2006.11 (when they were first detected) and most certainly since their creation around 2005.3 (see above), some mechanism must provide energy to keep them ionized. The most likely candidates are shocks either with the surrounding medium or internal to the jets. To remain ionized, the ejecta require an ionization rate of $\sim 10^{42}$ s$^{-1}$. Assuming that 13.6 eV of energy are required per ionization, a power of $\sim 2 \times 10^{31}$ erg s$^{-1}$ is needed. If this power is produced by the kinetic energy of the ejecta, we expect that the ejecta should decelerate at a rate of about 12.5 km s$^{-1}$ yr$^{-1}$ (for an initial velocity of 240 km s$^{-1}$ and a mass of $10^{-8}$ $M_\odot$). The true deceleration would be somewhat smaller if the ejecta were only partially ionized, as seems to be commonly the case (e.g.\ Podio et al.\ 2009). In any case, since the jet appears to be only 10--15$^\circ$ from the line of sight, the projected deceleration would only be 2--3 km s$^{-1}$ yr$^{-1}$ and would be undetectable with the existing observations. \subsection{Origin of the ejecta} Detached clumps have long been known to exist in the jets driven by young stars. They can be created in (at least) two ways. One possibility is if the driving source experiences episodic increases in its mass loss rate.
This hypothesis is often the preferred one to explain the presence of symmetric pairs of well-defined HH knots in evolved outflows (e.g.\ Arce \& Goodman 2002). An alternative possibility is if the mass loss rate remains constant, but the ejection velocity increases abruptly from an initial value $v_i$ to a final (larger) one $v_f$. In this situation, a working surface where material accumulates is created at the interface between the two winds. This naturally creates a gas condensation which can eventually become a detached clump (Masciadri \& Raga 2001). Both of these mechanisms are plausible scenarios for the creation of A2$\alpha$ and A2$\beta$, and it would be interesting to be able to distinguish between them. Ejection and accretion are believed to be intimately linked in protostars, so if A2$\alpha$ and A2$\beta$ were created during an episode of increased mass loss, one would expect that increased accretion would also have been occuring. Assuming that the mass ejection rate is about 10 times smaller than the accretion rate (e.g.\ Hartmann \& Kenyon 1996), then the total mass accreted during this episode must have been about 2 $\times$ $10^{-7}$ M$_{\odot}$. Very young protostars derive much of their luminosity from accretion, so one could wonder if such an episode of increased accretion might have produced a detectable increase in the total luminosity of IRAS~16293--2422. To try and answer that question, one must characterize the timescale of the ejection/accretion episode. From Fig.\ 2, it is clear that by 2007.62, the ejecta had become detached from A2. As a consequence, a conservative upper limit on the timescale of the ejection event is 2.34 yr, the time elapsed between the ejection and the 2007.62 observation (Sect.\ 3.1). The corresponding lower limit on the accretion rate during the event would be $\dot{M}_{min}$ $\approx$ 8.5 $\times$ 10$^{-8}$ M$_{\odot}$\ yr$^{-1}$, and the corresponding lower limit on the excess accretion luminosity $L_{acc, min}$ $\approx$ 1.3 L$_{\odot}$\ (here we have assumed that all the accretion energy is released on the stellar surface, and that the stellar radius is 3 R$_{\odot}$). The bolometric luminosity of IRAS~16293--2422 is about 15 L$_{\odot}$\ (Sect.\ 1), so the increased accretion should have produced a $\sim$ 10\% increase in the total luminosity of the source during the assumed 2.34 yr duration of the event. Of course, the duration of the event might have been significantly shorter than 2.34 yr. An alternative way of estimating the characteristic timescale is the following. The ejecta are currently about \msec{0}{14} ($\equiv$ 17 AU) across (they were likely smaller in the past since A2$\beta$ appeared to be unresolved in the 1.3 cm observations obtained in 2006.11). A clump of that size moving at $\sim$ 240 km s$^{-1}$ would become fully detached from its ejecting star in about 1.06 $\times$ 10$^{7}$ s ($\equiv$ 0.34 yr, just about 4 months). If this provides a good estimate of the true duration of the ejection event, then the mass accretion rate during the event would have been $\dot{M}$ $\approx$ 6 $\times$ 10$^{-7}$ M$_{\odot}$\ yr$^{-1}$, and the corresponding excess accretion luminosity $L_{acc}$ $\approx$ 9 L$_{\odot}$. This would have produced a 60\% increase in the total luminosity of the source during the 4 month duration of the event. 
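For reference, the arithmetic behind these two estimates can be reproduced with standard cgs constants; the stellar mass (1.5 M$_{\odot}$), stellar radius (3 R$_{\odot}$), accreted mass (2 $\times$ 10$^{-7}$ M$_{\odot}$) and event durations are the values adopted above, and the short script below is only a restatement of those assumptions:
\begin{verbatim}
# Accretion-luminosity estimates for the A2alpha/A2beta ejection event (cgs units)
G, Msun, Rsun, Lsun, yr = 6.674e-8, 1.989e33, 6.957e10, 3.828e33, 3.156e7

M_star, R_star = 1.5*Msun, 3.0*Rsun   # assumed mass and radius of A2 (Sect. 3.1, 4)
M_acc  = 2.0e-7*Msun                  # total mass accreted during the event
L_bol  = 15.0                         # bolometric luminosity of the system [Lsun]

for label, dt_yr in [("upper limit, 2.34 yr", 2.34), ("crossing time, 0.34 yr", 0.34)]:
    Mdot  = M_acc / (dt_yr*yr)                 # accretion rate [g/s]
    L_acc = G*M_star*Mdot/R_star / Lsun        # L = G M Mdot / R, in Lsun
    print(label,
          "Mdot = %.1e Msun/yr" % (Mdot*yr/Msun),
          "L_acc = %.1f Lsun" % L_acc,
          "increase = %.0f%%" % (100*L_acc/L_bol))
# -> roughly 1.3 Lsun (about 10%) and 9 Lsun (about 60%), as quoted in the text
\end{verbatim}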
We conclude that if the creation of the A2$\alpha$/A2$\beta$ pair is the result of an increase in the mass accretion/ejection rates of component A2, their birth should have been accompanied by a 10--60\% increase in the total luminosity of IRAS~16293--2422. Most of the luminosity of IRAS~16293--2422 is radiated in the far-infrared and sub-millimeter parts of the electromagnetic spectrum, so the corresponding luminosity increase should be sought there. In particular, there have been several sub-millimeter observations of IRAS~16293--2422 obtained with the Sub-Millimeter Array (SMA) in the last few years (T.\ Bourke, private communication). Note that IRAS~16293--2422 is a multiple system (Sect.\ 1) and that the only member of the system which should have experienced an increase in luminosity is component A2. While components A1 and A2 are not resolved in SMA observations, component A is well resolved from component B (e.g.\ Chandler et al.\ 2005). Moreover, components A and B happen to have similar sub-millimeter fluxes (e.g.\ Chandler et al.\ 2005). As a consequence, the total sub-millimeter luminosity increase for component A in SMA images ought to be at least 20--120\% and should be detectable. We note that an ejection/accretion event such as the one considered in the present discussion would remain fairly modest in comparison with the more spectacular FUOr events which produce increases in the luminosity, mass accretion rates, and mass ejection rates of several orders of magnitude (Hartmann \& Kenyon 1996). In this sense, we would only have witnessed a ``mini-outburst''. The very fact that such a mini-outburst would have been observed, however, would indicate that such events most likely occur quite frequently. They may, therefore, offer more tractable evidence of the relation between accretion and outflow phenomena than the more dramatic, but much less common, FUOr events. If the creation of the A2$\alpha$/A2$\beta$ pair is related to a modification of the jet ejection speed without an associated increase in mass accretion rate, no change in the luminosity of IRAS~16293--2422 is expected. Thus the presence or absence of an associated luminosity increase would be a good discriminant between the two possibilities. We note that a $\sim$ 30\% increase in the velocity of a fairly modest underlying jet (with $\dot{M}$ of a few 10$^{-7}$ M$_{\odot}$\ yr$^{-1}$) lasting for a few months would be sufficient to accumulate a mass of order 10$^{-8}$ M$_{\odot}$\ in a working surface. \section{Relative motion between A1 and A2} The motion of A1 relative to A2 between the late 1980s and 2005 has been investigated in detail by Loinard (2002) and Chandler et al.\ (2005). Neither the 1.3 cm observation obtained in 2006.11 (where A2 is blended with A2$\alpha$) nor the 2008.95 data (where A1 is blended with A2$\beta$) can be used to track the relative motion between A1 and A2 further. However, A1 and A2 are well resolved in the 2007.62 observation. The value of their relative position angle in this new observation is 90 $\pm$ 3$^\circ$, significantly larger than the position angle in 2003--2005 (Chandler et al.\ 2005, Loinard et al.\ 2007 -- Fig.\ 3). It is, however, in good agreement with the general evolution of the position angle since the late 1980s. The separation between A1 and A2 in the 2007.62 3.6 cm observation is \msec{0}{365} $\pm$ \msec{0}{010}, in good agreement with all previous measurements (Fig.\ 3). 
Thus, the position angle between A1 and A2 has now changed by more than 40$^\circ$ since the late 1980s (from less than 50$^{\circ}$ then, to 90$^{\circ}$ now). Moreover, there is no indication in the data that the rate of change is, in any way, decelerating. Although further monitoring will be needed in the coming years and decades, these characteristics are already difficult to reconcile with the idea of a precessing jet. Indeed, the maximum precession angles typically observed in low-mass young stars are less than 10$^\circ$, and rarely exceed 15$^\circ$ (e.g.\ Matthews et al.\ 2006). The relative motions between A2 and A1 are more readily explained in terms of a Keplerian orbit between two protostars. The fit shown in Fig.\ 3b implies a rate of position angle change with time of 1.98$^\circ$ yr$^{-1}$, corresponding to an orbital period of 182 yr. For a circular orbit in the plane of the sky\footnote{The fact that the jet from A2 is likely nearly along the line of sight (Sect.\ 3.1) would be consistent with an A1/A2 orbit nearly in the plane of the sky; in young binary systems, a coplanarity between the orbital plane and the orientation of the disks corresponds to the most stable configuration.} (which would produce the observed linear increase of the position angle), the total mass of the A1+A2 system would be almost exactly 2 M$_{\odot}$\ (assuming a separation of \msec{0}{34} and a distance to Ophiuchus of 120 pc --Loinard et al.\ 2008). Based on an analysis of the absolute proper motions, Loinard (2002) and Chandler et al.\ (2005) have argued that the center of mass of the A1/A2 system must be significantly closer to A2 than to A1 in this Keplerian scheme, implying that A2 must be significantly more massive than A1. A reasonable estimate would be 1.5 M$_{\odot}$\ for A2 and 0.5 M$_{\odot}$\ for A1. This would be in reasonable agreement with the bolometric luminosity of IRAS~16293--2422 ($\sim$ 15 L$_{\odot}$). In a recent submillimeter observation, yet another compact source was detected toward component A (Chandler et al.\ 2005). This object (called Ab) is located about \msec{0}{64} to the northeast of the A1/A2 pair (see Fig.\ 1). Since it has only been detected at one wavelength so far, the nature of Ab is difficult to assess. Chandler et al.\ (2005) argued that it might be another protostellar source in the system, but the lack of a strong compact counterpart at 0.7 cm in recent VLA data (Loinard et al.\ 2007) might favor an interpretation in terms of a starless clump. If Ab were a protostar, then component A would overall be (at least) a triple system, whereas it might only be double if Ab is a starless clump. In any case, taking into account the fact that component B --to the north-west of the system-- is also associated with a protostar (e.g.\ Rodr\'{\i}guez et al.\ 2005; Chandler et al.\ 2005) and that components A and B are located in the center of a common, dense, centrally condensed, envelope (e.g.\ Looney et al. 2003), we must conclude that IRAS~16293--2422 is most likely a very young hierarchical multiple system. IRAS~16293--2422 has long been known to drive a multi-lobe outflow system composed of two compact bipolar flows at P.A.\ $\sim$ 60$^\circ$ and $\sim$ 110$^\circ$, and a larger monopolar lobe located farther east (Mizuno et al.\ 1990). Recent high-resolution SMA observations of the two compact outflows strongly suggest that both are driven from within the A component of IRAS~16293-2422 (Yeh et al.\ 2008). 
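Returning briefly to the orbital solution, the arithmetic behind the $\sim$ 2 M$_{\odot}$\ estimate is easily checked. The following minimal sketch (Python; the face-on circular orbit, the \msec{0}{34} separation, and the 120 pc distance are the same assumptions as in the text) recovers the quoted total mass:
\begin{verbatim}
# Keplerian mass from the fitted position-angle rate, assuming a circular
# orbit in the plane of the sky (as in the text).
dPA_dt  = 1.98               # deg/yr, from the fit in Fig. 3b
P       = 360.0 / dPA_dt     # orbital period in yr (~182 yr)
a_AU    = 0.34 * 120.0       # separation (arcsec) x distance (pc) = 40.8 AU

M_total = a_AU**3 / P**2     # Kepler's third law in AU, yr, Msun units
print("P = %.0f yr, a = %.1f AU, M(A1+A2) = %.2f Msun" % (P, a_AU, M_total))
# -> about 2.1 Msun, consistent with the ~2 Msun quoted above.
\end{verbatim}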
As mentioned earlier, the outflow at P.A.\ $\sim$ 60$^\circ$ is now known to be driven by A2. Since the A1/A2 pair is most likely a compact binary system, it is tempting to associate the flow at P.A.\ $\sim$ 110$^\circ$ with A1. Interestingly, while the dynamical ages of the two flows are comparable (500 to 1,000 yr), the mechanical luminosity of the flow at P.A.\ $\sim$ 60$^\circ$ (from A2) is about twice that of the flow at P.A.\ $\sim$ 110$^\circ$ (that we tentatively attribute to A1). This would be consistent with the higher mass of A2 as compared to A1. \section{Conclusions and perspectives} In this article, we presented two new high-quality 3.6 cm images of the young protostellar system IRAS~16293--2422 obtained in August 2007 and December 2008 with the Very Large Array. These observations confirm that the radio sources A2$\alpha$ and A2$\beta$ recently identified in the system are ejecta from the protostar A2, and that we seem to have witnessed the very birth of a pair of Herbig-Haro knots. The mass of each of the ejecta is estimated to be $\sim$ 10$^{-8}$ M$_{\odot}$. If the creation of the ejecta was related to an increase in mass accretion rate, the birth of A2$\alpha$/A2$\beta$ must have been accompanied by an increase in the total luminosity of IRAS~16293--2422 of 10--60\%. Source A2 itself, which was blended with A2$\alpha$ in recent observations, is again visible in the data. This allows us to further monitor the relative motion between A1 and A2, and to provide very suggestive evidence that the A1/A2 pair is a tight binary system. Including component B to the north-west of the system, IRAS~16293--2422, therefore, appears to be a very young hierarchical multiple system. Observations similar to those presented here obtained in the coming few years to decades ought to provide a very accurate determination of the mass of the various protostars in IRAS~16293--2422, and a very detailed characterization of this nearby very young multiple stellar system. \acknowledgements L.L., L.F.R., and P.A.\ acknowledge the financial support of DGAPA, UNAM and CONACyT, M\'exico. D.J.W. acknowledges partial support from NASA Origins of Solar Systems Program Grant NAG5-11777. NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
\section{Introduction} This paper is a contribution to the celebration of the 150th anniversary of the birth of Sergey Alexeyevich Chaplygin (1869--1942), the renowned Russian physicist, mathematician, and mechanical engineer. Amongst many other topics, Chaplygin studied the dynamics of a sphere rolling on a plane. For this {\emph{Chaplygin top}}, the mass distribution is eccentric, the three moments of inertia are distinct, and the geometric centre does not, in general, lie on any of the principal axes. A special case of this system was studied by Edward Routh \cite{Routh05}. The Routh sphere is a spherical body with a non-uniform distribution of mass, free to roll without slipping on a plane surface. Its centre of mass is offset from the geometric centre, but it has an axis of symmetry through both these points, and equal moments of inertia about all axes orthogonal to the symmetry axis. This distinguishes it from the more general case studied by Chaplygin \cite{Chaplygin03}. Routh \cite{Routh05} showed that the Routh sphere has two constants of motion in addition to the energy, and is an integrable system. The integrals or constants of motion, known as Jellett's constant and Routh's constant, have been treated in many studies. We mention, in particular, the important contributions \cite{BoMa15, Cushman98, Byungsoo11, Koslov02}. A simple proof that Jellett's and Routh's constants are integrals of the motion is given in Gray and Nickel \cite{GrayNickel00}. However, as remarked by these authors, ``The precise physical significance of the Routh constant remains elusive \dots\ [and] it might be useful to try to find a direct connection between this constant of the motion and the underlying symmetries of the system'' \cite[p.~826]{GrayNickel00}. This explicit connection is established in the present work. Emmy Noether discovered a fundamental connection between symmetries or invariances of dynamical systems and conserved quantities or integrals of the motion. For a historical review, see \cite{KS11}. In her seminal paper \cite{Noether18}, Noether derived an identity, valid whenever the action of the system has an invariance. In the case of extremal flow, in which the Euler-Lagrange or d'Alembert-Lagrange equations apply, this leads to a Noetherian conservation law. This is true both for systems with holonomic constraints and for systems with nonholonomic constraints that are linear in the velocities. We will show in this paper how the integrals of the Routh sphere arise from Noether's invariance identity, and will derive expressions for the symmetry transformations associated with these constants. \textcolor{black}{ As a further demonstration of the power and utility of Noether's theorem, we examine in \S\ref{sec:ChapBall} the problem of the Chaplygin ball on a rotating turntable, recently studied in \cite{BBM18}. 
Using a systematic approach, we deduce the four known integrals and their associated symmetries directly from Noether's invariance identity.} \section{The invariance identity} Associated with invariance of the action functional under transformations of the dependent and independent variables there is an identity, the \emph{invariance identity.} \textcolor{black}{We restrict ourselves, at the expense of generality but for simplicity of presentation, to the case when the transformation does not depend on the velocities.} Then the invariance identity may be expanded in powers of the velocity variables \textcolor{black}{$\dot q^\mu, \,\,\mu=1, \ldots, N$, where $N$ is the number of degrees of freedom}, to yield a set of differential equations. If these can be solved, they provide the generators of a coordinate transformation that can be used to construct a constant of the motion. For a dynamical system with a Lagrangian function, let us define the action functional $$ S = \int_{t_1}^{t_2} L\left(q(t),\frac{\dd q(t)}{\dd t},t\right)\, \dd t \,. $$ We consider a continuous transformation of the independent and dependent variables \textcolor{black}{$$ t \to T(q,t;\alpha), \qquad q^\mu \to Q^\mu(q,t;\alpha)\,, $$ where $\alpha \in \mathbb{R}$ is a free parameter. The case $\alpha = 0$ corresponds to the identity transformation, with $T(q,t;0) = t$ and $Q^\mu(q,t;0) = q^\mu$. We form the action $S^\prime$ using the new variables but the same functional form of the Lagrangian $L$: $$ S^\prime = \int_{T_1}^{T_2} L\left(Q(T),\frac{\dd Q(T)}{\dd T},T\right)\,\dd T\,, $$ where $Q^\mu(T)$ (with slight abuse of notation) stands for the new variable as a function of the new time. We consider the case where the action is invariant under the transformation: $S^\prime = S$.} For an infinitesimal perturbation, we write $$\begin{array}{ccccccc} q^\mu(t) &\longrightarrow& Q^\mu(T) &=& q^\mu(t) &+& \epsilon\, \xi^\mu(q,t) \,,\\ t &\longrightarrow& T &=& t &+& \epsilon\, \tau(q,t) \,. \end{array}$$ The coefficients of $\epsilon$ are called the generators of the transformation. \textcolor{black}{They form the components of a vector field $\left(\xi^\mu(q,t) , \tau(q,t)\right)$, called an infinitesimal Noether symmetry.} We expand the integrand of $S^\prime$ and express it as an integral with respect to $t$. Then the following \relax{invariance identity} results: \begin{equation} \frac{\partial L}{\partial q^\mu}\xi^\mu + p_\mu \dot\xi^\mu + \frac{\partial L}{\partial t}\tau - H \dot\tau = 0 \label{eq:InvId1} \end{equation} where $p_\mu={\partial L}/{\partial\dot q^\mu}$ is the conjugate momentum, the Hamiltonian is $H = p_\mu \dot q^\mu - L$, and the Einstein summation convention is employed. This identity was first derived by Emmy Noether \cite{Noether18}. Eq.~(\ref{eq:InvId1}) can be written in a completely equivalent but more illuminating form: \begin{equation} \frac{\dd}{\dd t}\biggl[p_\mu \xi^\mu - H\tau \biggr] = (\xi^\mu-\dot q^\mu \tau) \left[ \frac{\dd}{\dd t}\frac{\partial L}{\partial\dot q^\mu} - \frac{\partial L}{\partial q^\mu} \right]\,. \label{eq:InvId2} \end{equation} \subsection*{Extremal or on-shell motion} \textcolor{black}{The term in square brackets on the right hand side of Eq.~(\ref{eq:InvId2}) is the Euler-Lagrange operator acting on the Lagrangian: \[E_{\mu}[L] \equiv \frac{\dd}{\dd t}\frac{\partial L}{\partial\dot q^\mu} - \frac{\partial L}{\partial q^\mu} \,. 
\]} For a holonomic system, this expression vanishes, so the following conservation law holds: \begin{equation} \frac{\dd}{\dd t}\biggl[p_\mu \xi^\mu - H\tau \biggr] = 0\,. \label{eq:Noether1} \end{equation} For a general nonholonomic system, little can be said. However, if the $M$ constraints are linear in the velocities, so that \textcolor{black}{$$ \gamma^\kappa \equiv A^\kappa_\mu(q,t) \dot q^\mu + B^\kappa(q,t) = 0\,, \quad \kappa = 1, \ldots, M, $$} then the d'Alembert-Lagrange equations may be written in the form $$ \left[ \frac{\dd}{\dd t}\frac{\partial L}{\partial \dot q^\mu} - \frac{\partial L}{\partial q^\mu} \right] = \lambda_\kappa \frac{\partial \gamma^\kappa}{\partial \dot q^\mu} = \lambda_\kappa A^\kappa_\mu \,. $$ The right hand side of Eq.~(\ref{eq:InvId2}) then becomes \textcolor{black}{$$ (\xi^\mu-\dot q^\mu \tau) \left[ \frac{\dd}{\dd t}\frac{\partial L}{\partial \dot q^\mu} - \frac{\partial L}{\partial q^\mu} \right] = (\xi^\mu-\dot q^\mu \tau) \lambda_\kappa A^\kappa_\mu = \lambda_\kappa (A^\kappa_\mu \xi^\mu + B^\kappa \tau) \,. $$} If we assume that the \textcolor{black}{infinitesimal Noether symmetry} respects the constraints, namely if \begin{equation} A^\kappa_\mu \xi^\mu + B^\kappa \tau = 0,\qquad \kappa = 1, \ldots, M \,, \label{eq:symm_constraints} \end{equation} then this expression vanishes. As a consequence, the right hand side of Eq.~(\ref{eq:InvId2}) vanishes for on-shell flow. We conclude that, for both holonomic systems and systems subject to nonholonomic constraints that are linear in the velocities, even with inhomogeneous terms, Eq.~(\ref{eq:InvId2}) reduces to the conservation law, Eq.~(\ref{eq:Noether1}) \cite{Bahar87}. \section{Routh sphere} \begin{figure}[h] \begin{center} \includegraphics[width=0.75\linewidth]{./Fig01-Routh-Sphere.jpg} \caption{Geometry and primary coordinates for the Routh sphere. Geometric centre $\mathbf{C}$, mass centre $\mathbf{O}$ and point of contact $\mathbf{P}$. In this configuration, $\mathbf{I}$ and $\mathbf{i}$ point into the page and $\phi = -\pi/2$.} \label{fig:RS} \end{center} \end{figure} The dynamics of the Routh sphere are discussed in many texts on classical mechanics. The original study is \cite{Routh05}. In this paper we follow the notation of \cite{LyBu09} and \cite{LyBu13}. There are six degrees of freedom: the configuration of the body is given by $(X,Y,Z)$, the coordinates of the centre of mass, and the three Euler angles $(\theta,\phi,\psi)$. The unit orthogonal triad in the space frame is $\{ \mathbf{I}, \mathbf{J}, \mathbf{K} \}$ and the unit orthogonal triad in the intermediate frame is $\{ \mathbf{i}, \mathbf{j}, \mathbf{k} \}$ with $\mathbf{i}$ horizontal and $\mathbf{k}$ fixed along the axis of the body (see Fig.~\ref{fig:RS}). \textcolor{black}{The holonomic constraint that the geometric centre must remain at unit distance above the underlying plane is used to eliminate the variable $Z$, leading to an effective system with $N=5$ degrees of freedom.} Assuming unit mass and unit radius, the Lagrangian of the Routh sphere is \smallskip \[ L = \textstyle\frac{1}{2}\bigl[(I_1+a^2s^2)\dot\theta^2 + (I_1 s^2+I_3 c^2 )\dot\phi^2 + (2I_3 c)\dot\phi\dot\psi + (I_3)\dot\psi^2 + \dot X^2 + \dot Y^2 \bigr] - ga(1-c) \] \smallskip\noindent where $s=\sin\theta$, $c=\cos\theta$ and other notation is conventional. We note that $L$ is independent of both $\phi$ and $\psi$. We assume that $I_1 = I_2 \ne I_3$. 
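Because the bookkeeping below involves several closely related expressions, a symbolic check may be useful. The following SymPy sketch (our own verification aid, with symbol names chosen to mirror the notation above) builds this Lagrangian and returns the conjugate momenta quoted below:
\begin{verbatim}
# Minimal SymPy sketch: build the Routh-sphere Lagrangian written above and
# read off the conjugate momenta p_mu = dL/d(qdot_mu).
import sympy as sp

I1, I3, a, g = sp.symbols('I1 I3 a g', positive=True)
th, thd, phd, psd, Xd, Yd = sp.symbols(
    'theta thetadot phidot psidot Xdot Ydot')
s, c = sp.sin(th), sp.cos(th)

L = sp.Rational(1, 2)*((I1 + a**2*s**2)*thd**2
                       + (I1*s**2 + I3*c**2)*phd**2
                       + 2*I3*c*phd*psd + I3*psd**2
                       + Xd**2 + Yd**2) - g*a*(1 - c)

for qd in (thd, phd, psd, Xd, Yd):
    print(qd, '->', sp.simplify(sp.diff(L, qd)))
# p_theta = (I1 + a^2 s^2) thetadot
# p_phi   = (I1 s^2 + I3 c^2) phidot + I3 c psidot
# p_psi   = I3 c phidot + I3 psidot ;  p_X = Xdot ;  p_Y = Ydot
\end{verbatim}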
\textcolor{black}{There are $M=2$ nonholonomic constraints, which are linear and homogeneous in the velocities, corresponding to rolling motion without slipping:} \begin{eqnarray} \dot X &=& \phantom{-}h s_\phi \dot\theta - a s c_\phi \dot\phi - s c_\phi \dot\psi \label{eq:Xdot} \\ \dot Y &=& - h c_\phi \dot\theta - a s s_\phi \dot\phi - s s_\phi \dot\psi \label{eq:Ydot} \end{eqnarray} where $c_\phi=\cos\phi$, $s_\phi=\sin\phi$ and $h=1-ac$ is the height of the centre of mass. We write these constraints in the form $\gamma^\kappa \equiv A^\kappa_\mu \dot q^\mu = 0$ where $\dot q^\mu = \left(\dot\theta,\dot\phi,\dot\psi,\dot X, \dot Y\right)$ and $$ A^\kappa_\mu = \left[ \begin{matrix} - h s_\phi & asc_\phi & sc_\phi & 1 & 0 \\ h c_\phi & ass_\phi & ss_\phi & 0 & 1 \end{matrix} \right]\,. $$ For reference, we note that $$ \dot X^2 + \dot Y^2 = h^2\dot\theta^2 + s^2(a\dot\phi+\dot\psi)^2 \,. $$ However, we cannot use this to eliminate $\dot X$ and $\dot Y$ from the Lagrangian as the constraints are nonholonomic \cite{Flannery05}. The conjugate momenta are defined in terms of the Lagrangian: $p_\mu=\partial L/\partial \dot q^\mu$. For the Routh sphere they are \begin{eqnarray*} p_\theta &=& (I_1+a^2s^2)\dot\theta \\ p_\phi &=& (I_1 s^2+I_3 c^2 )\dot\phi + (I_3 c)\dot\psi \\ p_\psi &=& (I_3 c )\dot\phi + (I_3)\dot\psi \,. \end{eqnarray*} We also have $p_X = \dot X$ and $p_Y = \dot Y$. Since the determinant of the coefficients (the Hessian) is $(I_1 + a^2s^2) I_1 I_3 s^2$, we can solve for the velocities: \begin{eqnarray*} \dot\theta &=& p_\theta / (I_1+a^2s^2) \\ \dot\phi &=& (p_\phi -c p_\psi)/I_1 s^2 \\ \dot\psi &=& (-c/I_1 s^2) p_\phi + ((I_1 s^2+I_3 c^2)/I_1 I_3 s^2) p_\psi \end{eqnarray*} and, of course, $\dot X = p_X$ and $\dot Y = p_Y $. \subsection*{Invariance} We note that $\phi$, $\psi$, $X$ and $Y$ are all ignorable coordinates. Thus, $L$ is invariant with respect to infinitesimal variations of these coordinates. For free-slip boundary conditions, where there are no constraints linking the momenta, there are four conserved quantities \[ \bigl\{ p_\phi, p_\psi, p_X, p_Y \bigr\} \] corresponding to these four coordinates. Since the Lagrangian does not depend explicitly on $t$, invariance under a transformation of the form $t^\prime = t + \epsilon\tau$ \textcolor{black}{with $\tau$ constant} leads, in the usual way, to conservation of the energy. We therefore assume a transformation of the space coordinates, \begin{eqnarray*} {\phi}^\prime &=& \phi + \epsilon\, \xi^\phi(\theta) \\ {\psi}^\prime &=& \psi + \epsilon\, \xi^\psi(\theta) \end{eqnarray*} where the generators are functions of $\theta$, so that \[ \dot\xi^\phi = \frac{\dd\xi^\phi}{\dd\theta} \dot\theta \qquad \mbox{and} \qquad \dot\xi^\psi = \frac{\dd\xi^\psi}{\dd\theta} \dot\theta \,. \] The constraints also require variations of $X$ and $Y$ of the form \begin{eqnarray*} {X}^\prime &=& X + \epsilon\, \xi^X(\theta,\phi) \\ {Y}^\prime &=& Y + \epsilon\, \xi^Y(\theta,\phi) \end{eqnarray*} so that $\xi^X$ and $\xi^Y$ depend on $\phi$ as well as $\theta$. Explicitly, the constraints imply \begin{equation} \xi^X = -s c_\phi (a \xi^\phi + \xi^\psi ) \qquad \mbox{and} \qquad \xi^Y = -s s_\phi (a \xi^\phi + \xi^\psi ) \,. \label{eq:zetaXY} \end{equation} We note that $c_\phi\xi^X+s_\phi\xi^Y = -s(a\xi^\phi+\xi^\psi)$, independent of $\phi$. 
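These generators were constructed precisely so that Eq.~(\ref{eq:symm_constraints}) holds; a short symbolic check (again SymPy, a sketch with $\xi^\phi$ and $\xi^\psi$ left as arbitrary symbols) confirms that the generators of Eq.~(\ref{eq:zetaXY}) satisfy the rolling constraints:
\begin{verbatim}
# Check that xi^X, xi^Y of Eq. (zetaXY), with xi^theta = 0, satisfy the
# rolling constraints A^kappa_mu xi^mu = 0 for arbitrary xi^phi, xi^psi.
import sympy as sp

th, phi, a = sp.symbols('theta phi a')
xiph, xips = sp.symbols('xi_phi xi_psi')
s, c = sp.sin(th), sp.cos(th)
sph, cph = sp.sin(phi), sp.cos(phi)
h = 1 - a*c

xi = sp.Matrix([0,                          # xi^theta
                xiph, xips,
                -s*cph*(a*xiph + xips),     # xi^X
                -s*sph*(a*xiph + xips)])    # xi^Y
A = sp.Matrix([[-h*sph, a*s*cph, s*cph, 1, 0],
               [ h*cph, a*s*sph, s*sph, 0, 1]])
print((A*xi).applyfunc(sp.expand))          # -> Matrix([[0], [0]])
\end{verbatim}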
The time derivatives are \begin{eqnarray*} \dot\xi^X &=& \bigl[-c \,c_\phi(a\xi^\phi+\xi^\psi) -s c_\phi(a\xi^\phi_{,\theta}+\xi^\psi_{,\theta})\bigr] \dot\theta + \bigl[ s s_\phi(a\xi^\phi+\xi^\psi)\bigr]\dot\phi \\ \dot\xi^Y &=& \bigl[-c\, s_\phi(a\xi^\phi+\xi^\psi) -s s_\phi(a\xi^\phi_{,\theta}+\xi^\psi_{,\theta})\bigr] \dot\theta - \bigl[ s c_\phi(a\xi^\phi+\xi^\psi)\bigr]\dot\phi \,. \end{eqnarray*} Again, we note that $c_\phi\dot\xi^X+s_\phi\dot\xi^Y$ is independent of $\phi$. The invariance identity, Eq.~(\ref{eq:InvId1}), now becomes \[ p_\phi \dot\xi^\phi + p_\psi \dot\xi^\psi + p_X \dot\xi^X + p_Y \dot\xi^Y = 0 \,. \] Substituting the above values we get, for the unconstrained variables, \[ p_\phi \dot\xi^\phi + p_\psi \dot\xi^\psi = [(I_1 s^2+I_3 c^2 )\xi^\phi_{,\theta} + (I_3 c )\xi^\psi_{,\theta} ]\dot\theta\dot\phi + [(I_3 c )\xi^\phi_{,\theta} + (I_3 )\xi^\psi_{,\theta} ]\dot\theta\dot\psi \] and, for the constrained variables, \[ p_X\dot\xi^X + p_Y\dot\xi^Y = \bigl[s(a\xi^\phi+\xi^\psi)+as^2(a\xi^\phi_{,\theta}+\xi^\psi_{,\theta})\bigr]\dot\theta\dot\phi + \bigl[sc(a\xi^\phi+\xi^\psi)+s^2(a\xi^\phi_{,\theta}+\xi^\psi_{,\theta})\bigr]\dot\theta\dot\psi \,. \] Note that this expression is independent of $\phi$. Adding these two expressions and setting the coefficients of $\dot\theta\dot\phi$ and $\dot\theta\dot\psi$ separately to zero gives two ODEs for $\xi^\phi$ and $\xi^\psi$: \begin{eqnarray} (I_1 s^2+I_3 c^2 + a^2 s^2 )\frac{\dd \xi^\phi}{\dd\theta} + (I_3 c + a s^2 )\frac{\dd \xi^\psi}{\dd\theta} + s(a\xi^\phi+\xi^\psi) &=& 0 \,, \label{eq:sym1} \\ (I_3 c + a s^2 )\frac{\dd \xi^\phi}{\dd\theta} + (I_3 + s^2 )\frac{\dd \xi^\psi}{\dd\theta} + sc (a\xi^\phi+\xi^\psi) &=& 0\,. \label{eq:sym2} \end{eqnarray} These are the symmetry equations for the Routh sphere. We can write them as \begin{equation} \mathsf{F}\,\frac{\dd{\boldsymbol{\xi}}}{\dd\theta} = \mathsf{G}\,{\boldsymbol{\xi}} \label{eq:gensys} \end{equation} where ${\boldsymbol{\xi}} = (\xi^\phi,\xi^\psi)^{\mathrm T}$ and the coefficient matrices are \begin{equation*} \mathsf{F} = \begin{bmatrix} I_1 s^2+I_3 c^2 + a^2 s^2 & I_3 c + a s^2 \\ I_3 c + a s^2 & I_3 + s^2 \end{bmatrix} \qquad\mbox{and}\qquad \mathsf{G} = - \begin{bmatrix} a s & s \\ a s c & s c \end{bmatrix} \,. \end{equation*} The determinant of the matrix $\mathsf{F}$ is $I_1 s^2/\rho^2$, where, writing $f = c - a$, $$ \rho = \frac{1}{\sqrt{s^2+I_3+(I_3/I_1)f^2}} \,. $$ So $\mathsf{F}$ is invertible and the symmetry equations may be written as ${\dd\boldsymbol{\xi}}/{\dd\theta} = \mathsf{H}\,\boldsymbol{\xi}$, where $\mathsf{H} = \mathsf{F}^{-1}\mathsf{G}$. Explicitly, \begin{equation} \frac{\dd}{\dd\theta} \begin{pmatrix} \xi^\phi \\ \xi^\psi \end{pmatrix} = \left(-\frac{\rho^2 s}{I_1}\right) \begin{bmatrix} a(I_3 +h) & (I_3 + h) \\ a(I_1 c - I_3 c -ha) & (I_1 c - I_3 c -ha) \end{bmatrix} \begin{pmatrix} \xi^\phi \\ \xi^\psi \end{pmatrix}\,. \label{eq:Heqns} \end{equation} \subsection*{Solution of the symmetry equations} One solution of Eqs.~(\ref{eq:sym1}) and (\ref{eq:sym2}) is immediately obvious by inspection: take both $\xi^\phi$ and $\xi^\psi$ constant, with $\xi^\phi = 1$ and $\xi^\psi = - a$. Then $(a\xi^\phi+\xi^\psi)=0$ so, by virtue of Eq.~(\ref{eq:zetaXY}), both $\xi^X$ and $\xi^Y$ vanish. The Noetherian constant associated with this transformation is \begin{equation} C_J = p_\mu\xi^\mu = p_\phi - a p_\psi \,, \label{eq:CJellet} \end{equation} which is Jellett's constant. Once a solution of Eqs.~(\ref{eq:Heqns}) is known, another one can be found. 
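Before constructing that second solution, the algebra leading to Eq.~(\ref{eq:Heqns}) can be confirmed symbolically. The sketch below (SymPy; a verification aid only, not part of the derivation) checks both the determinant of $\mathsf{F}$ and the explicit form of $\mathsf{H} = \mathsf{F}^{-1}\mathsf{G}$:
\begin{verbatim}
# Verify det(F) = I1 s^2 / rho^2 and F^{-1} G = H as given in Eq. (Heqns).
import sympy as sp

th, a, I1, I3 = sp.symbols('theta a I1 I3', positive=True)
s, c = sp.sin(th), sp.cos(th)
h, f = 1 - a*c, c - a
rho2 = 1/(s**2 + I3 + (I3/I1)*f**2)         # rho^2

F = sp.Matrix([[I1*s**2 + I3*c**2 + a**2*s**2, I3*c + a*s**2],
               [I3*c + a*s**2,                 I3 + s**2]])
G = -sp.Matrix([[a*s,   s],
                [a*s*c, s*c]])
H = -(rho2*s/I1)*sp.Matrix([[a*(I3 + h),             I3 + h],
                            [a*(I1*c - I3*c - h*a),  I1*c - I3*c - h*a]])

print(sp.simplify(F.det() - I1*s**2/rho2))    # -> 0
print((F.inv()*G - H).applyfunc(sp.simplify)) # -> zero matrix
\end{verbatim}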
Suppose there are two linearly independent solutions $(\xi^\phi_1,\xi^\psi_1)^{\rm T}$ and $(\xi^\phi_2,\xi^\psi_2)^{\rm T}$. The Wronskian is defined to be the determinant \[ W(\theta) = \left| \begin{matrix} \xi^\phi_1 & \xi^\phi_2 \\ \xi^\psi_1 & \xi^\psi_2 \end{matrix} \right| = \xi^\phi_1\xi^\psi_2 - \xi^\phi_2\xi^\psi_1 \,. \] It is easily shown that \[ \frac{\dd W}{\dd \theta} = \Tr(\mathsf{H})\, W, \] where $\Tr(\mathsf{H}) = \mathsf{H}_{11} + \mathsf{H}_{22}$. This has a solution $W(\theta) = C \exp[\int \Tr(\mathsf{H})\,\dd \theta ]$. The explicit form of $\mathsf{H}$ is implied from Eq.~(\ref{eq:Heqns}) so that $\Tr(\mathsf{H}) = (-{\rho^2 s}/{I_1})[I_1 c - I_3(c-a)]$. This can be integrated to yield $W(\theta)= C\rho$, with $C$ a constant depending on the normalisation choice for the linearly independent solutions. Then using the definition of $W$ we find that \[ \xi^\phi_2(\theta) = \xi^\phi_1(\theta) \int^\theta \frac{\mathsf{H}_{12}(\theta)}{\xi^\phi_1(\theta)^2} W(\theta)\,\dd\theta\,. \] In the present case, $\xi^\phi_1(\theta) = 1$, $\mathsf{H}_{12}(\theta) = (-\rho^2 s / I_1)(I_3+h)$ and we make the convenient choice $W(\theta)= I_1 \rho$. We find, by direct integration, the solution $\xi^\phi_2(\theta) = (c-a)\rho$ and thence, since $W = a\xi^\phi_2+\xi^\psi_2$, we get \[ \left( \begin{matrix} \xi^\phi_2 \\ \xi^\psi_2 \end{matrix} \right) = \left( \begin{matrix} f\rho \\ (I_1-af)\rho \end{matrix} \right)\,, \] where we write $f = c - a$. Eq.~(\ref{eq:zetaXY}) gives $\xi^X$ and $\xi^Y$. Then the Noetherian constant is \begin{equation} C_R = p_\mu\xi^\mu = \left[\frac{I_1}{I_3}\right]\frac{p_\psi }{\rho}\,, \label{eq:CRouth} \end{equation} which is Routh's constant. We can now write the general solution of Eq.~(\ref{eq:Heqns}) as \[ \left( \begin{matrix} \xi^\phi \\ \xi^\psi \end{matrix} \right) = A_1\left( \begin{matrix} \xi^\phi_1 \\ \xi^\psi_1 \end{matrix} \right) + A_2\left( \begin{matrix} \xi^\phi_2 \\ \xi^\psi_2 \end{matrix} \right) = \left( \begin{matrix} A_1 + A_2 f\rho \\ -a A_1 + A_2(I_1-af)\rho \end{matrix} \right)\,. \] \section{Recovering the symmetry from a known constant} Suppose we know that $C=p_\mu\xi^\mu$ is a constant of the motion. Then \begin{equation} \frac{\partial p_\mu}{\partial p_\nu}\xi^\mu = \left[ \frac{\partial p_\phi}{\partial p_\nu}\xi^\phi + \frac{\partial p_\psi}{\partial p_\nu}\xi^\psi + \frac{\partial p_X}{\partial p_\nu}\xi^X + \frac{\partial p_Y}{\partial p_\nu}\xi^Y \right] = \frac{\partial C}{\partial p_\nu} \label{eq:dCdp} \end{equation} provides a system of equations for the generators $\xi^\mu$. For unconstrained motion the momenta are independent and it follows that ${\partial p_\mu}/{\partial p_\nu}=\delta_\mu^\nu$, so that \[ \xi^\nu = \frac{\partial C}{\partial p_\nu} \,. \] For constrained motion, the generators are interconnected and a linear system of equations must be solved. We can write the constraints Eqs.~(\ref{eq:Xdot})--(\ref{eq:Ydot}) in terms of momenta: \begin{eqnarray*} p_X &=& \phantom{-}s_\phi\left(\frac{h}{I_1+a^2s^2}\right)p_\theta + c_\phi\left[\left(\frac{f}{I_1s}\right)p_\phi-\left(\frac{fc}{I_1s}+\frac{s}{I_3}\right)p_\psi\right]\,, \\ p_Y &=& -c_\phi\left(\frac{h}{I_1+a^2s^2}\right)p_\theta + s_\phi\left[\left(\frac{f}{I_1s}\right)p_\phi-\left(\frac{fc}{I_1s}+\frac{s}{I_3}\right)p_\psi\right]\,. 
\end{eqnarray*} We also recall that the generators satisfy the constraints Eq.~(\ref{eq:zetaXY}): \begin{equation*} \xi^X = -s c_\phi (a \xi^\phi + \xi^\psi ) \qquad \mbox{and} \qquad \xi^Y = -s s_\phi (a \xi^\phi + \xi^\psi ) \,. \end{equation*} These expressions allow us to eliminate the momenta $p_X$ and $p_Y$ and the generators $\xi^X$ and $\xi^Y$ from Eq.~(\ref{eq:dCdp}) and obtain expressions relating $\xi^\phi$ and $\xi^\psi$: \begin{eqnarray} \xi^\phi - \left(\frac{f}{I_1}\right)(a\xi^\phi+\xi^\psi) &=& \frac{\partial C}{\partial p_\phi} \label{eq:dCdp1} \\ \xi^\psi + \left(\frac{fc}{I_1}+\frac{s^2}{I_3}\right)(a\xi^\phi+\xi^\psi) &=& \frac{\partial C}{\partial p_\psi} \,. \label{eq:dCdp2} \end{eqnarray} Let us apply Eqs.~(\ref{eq:dCdp1})--(\ref{eq:dCdp2}) to the Jellett and Routh constants. For the Jellett constant, $C_J=(p_\phi-ap_\psi)$, we have $({\partial C_J}/{\partial p_\phi},{\partial C_J}/{\partial p_\psi})=(1,-a)$ and the solution is immediately obvious by inspection: \begin{equation} \boldsymbol{\Xi}_J \equiv \left( \begin{matrix} \xi^\theta \\ \xi^\phi \\ \xi^\psi \\ \xi^X \\ \xi^Y \end{matrix} \right) = \left( \begin{matrix} 0 \\ 1 \\ -a \\ 0 \\ 0 \end{matrix} \right) \,. \label{eq:Jgens} \end{equation} The coordinates $X$ and $Y$ of the centre of mass do not vary. An interpretation of this vector will be given in \S\ref{sec:interp} below. For the Routh constant, Eq.~(\ref{eq:CRouth}), we have ${\partial C_R}/{\partial p_\phi}= 0$ and ${\partial C_R}/{\partial p_\psi} = I_1/(I_3\rho)$, and Eqs.~(\ref{eq:dCdp1})--(\ref{eq:dCdp2}) become \begin{eqnarray*} \xi^\phi - \left(\frac{f}{I_1}\right)(a\xi^\phi+\xi^\psi) &=& 0 \\ \xi^\psi + \left(\frac{fc}{I_1}+\frac{s^2}{I_3}\right)(a\xi^\phi+\xi^\psi) &=& \left[\frac{I_1}{I_3}\right] \frac{1}{\rho} \,. \end{eqnarray*} Eliminating $\xi^\psi$ gives us an expression for $\xi^\phi$: \begin{equation*} \frac{1}{f} \left[ I_3 + s^2 + (I_3/I_1)f^2 \right] \xi^\phi = \frac{1}{\rho} \,. \end{equation*} Simplifying this we get the infinitesimal Noether symmetry \begin{equation} \boldsymbol{\Xi}_R \equiv \left( \begin{matrix} \xi^\theta \\ \xi^\phi \\ \xi^\psi \\ \xi^X \\ \xi^Y \end{matrix} \right) = \rho \left( \begin{matrix} 0 \\ f \\ (I_1-af) \\ -I_1 s c_\phi \\ - I_1 s s_\phi \end{matrix} \right)\,. \label{eq:Rgens} \end{equation} \section{Interpretation of the Routh sphere symmetries} \label{sec:interp} Each infinitesimal Noether symmetry associated with a constant of the motion has a geometrical interpretation, obtained by integrating it to construct a finite transformation depending on one free parameter. Let us call this free parameter $\alpha$. \subsection*{Jellett symmetry} For the Jellett constant, the Noether symmetry (\ref{eq:Jgens}) leads to the equations $$ \frac{\dd \theta}{\dd \alpha} = 0, \quad \frac{\dd \phi}{\dd \alpha} = 1, \quad \frac{\dd \psi}{\dd \alpha} = -a, \quad \frac{\dd X}{\dd \alpha} = 0, \quad \frac{\dd Y}{\dd \alpha} = 0\,. $$ This has solution $$ \theta=\theta_0, \quad X = X_0, \quad Y = Y_0 \quad \text{(constants)}, $$ $$ \phi(\alpha) = \alpha + \phi_0, \quad \psi(\alpha) = - a\, \alpha + \psi_0\,. $$ We consider the virtual motion corresponding to this free parameter $\alpha$. 
The angular velocity is simply: $$ \mbf{\Omega} = \frac{\dd \phi}{\dd \alpha} \tbf{K} + \frac{\dd \psi}{\dd \alpha} \mbf{k} = \tbf{K} - a\,\mbf{k} = - \mbf{r} \,, $$ where $\tbf{K}$ is the unit vector in the vertical direction in the inertial frame, and $\mbf{k}$ is the unit vector in the body frame, pointing along the symmetry axis of the body. The \emph{contact vector} $\mbf{r}$ points from the centre of mass $\mbf{O}$ to the contact point $\mbf{P}$ (see Fig.~\ref{fig:RS}). It follows that $\mbf{\Omega}$ is the vector pointing from the contact point $\mbf{P}$ to the centre of mass $\mbf{O}$. Since the position of the centre of mass is fixed, while the Euler angle $\phi$ changes at a constant rate, we deduce that the angular velocity $\mbf{\Omega}$ precesses uniformly about the vertical axis $\tbf{K}$, describing a cone. The period of this precession is $\Delta \alpha = 2\pi$, the same as the period of the angle $\phi$. The period of the $\psi$ angle is $2 \pi/a$, which is almost never commensurate with $2 \pi$. Hence, the motion is generically quasi-periodic. \subsection*{Routh symmetry} For the Routh constant, the Noether symmetry (\ref{eq:Rgens}) leads to the equations \begin{equation} \frac{\dd \theta}{\dd \tilde\alpha} = 0, \quad \frac{\dd \phi}{\dd \tilde\alpha} = \rho f, \quad \frac{\dd \psi}{\dd \tilde\alpha} = \rho (I_1 - a f), \quad \frac{\dd X}{\dd \tilde\alpha} = -\rho I_1 s\,c_\phi, \quad \frac{\dd Y}{\dd \tilde\alpha} = -\rho I_1 s\,s_\phi \,. \label{eq:solro} \end{equation} Observing that, for $\theta$ constant, $\rho$ is a positive constant, we will use the rescaled parameter $\rho \tilde\alpha$ as our free parameter $\alpha$ from here on. We can solve the first three equations directly: \begin{equation} \theta = \theta_0 \quad \text{(constant)} \,, \qquad \phi(\alpha) = f \alpha + \phi_0 \,, \quad \psi(\alpha) = (I_1 - a f) \alpha + \psi_0 \,, \label{eq:symmthphps} \end{equation} where $f$ depends on $\theta$ and is thus constant. As in the case of the Jellett symmetry, the angles $\phi$ and $\psi$ change at constant rates, with ratio $\dd\psi/\dd \phi = - a + I_1/f$, again incommensurate in general. As $\theta$ varies from $0$ to $\pi$, this ratio may take arbitrary values outside the open interval $(-a - I_1/(1+a), -a + I_1/(1-a))$. In particular, as $I_1 >0$ it follows that $\dd\psi/\dd \phi \neq -a$, which shows that the Routh case does not contain the Jellett case. Let us write the equations for $X$ and $Y$, the last two equations of (\ref{eq:solro}), explicitly, using the partial solutions just found: \begin{equation} \frac{\dd X}{\dd \alpha} = -I_1 s\,\cos(f \alpha + \phi_0) \,, \qquad \frac{\dd Y}{\dd \alpha} = -I_1 s\,\sin(f \alpha + \phi_0) \,. \label{eq:symmXY} \end{equation} The solution to these is immediate: letting $(X_0, Y_0)$ be the value of $(X,Y)$ at $\alpha = 0$, we have $$ X(\alpha) = -\frac{I_1 s}{f} \left[\sin(f \alpha + \phi_0) - \sin(\phi_0)\right]+ X_0\,, \qquad Y(\alpha) = \frac{I_1 s}{f}\left[\cos(f \alpha + \phi_0) - \cos(\phi_0)\right] + Y_0\,. $$ The interpretation of this solution is as follows: \begin{itemize} \item If $f\neq0$ then the projection of the centre of mass onto the underlying plane describes a circle of radius $R = {I_1 s}/{|f|}$, centred at $\left(X_0 + ({I_1 s}/{f})\sin\phi_0, Y_0 - ({I_1 s}/{f})\cos\phi_0\right)$, with period $\Delta \alpha = 2 \pi / |f|$. Noting that $I_1$ and $s$ are non-negative, the sense of rotation of this circular motion is positive if $f>0$ and negative if $f<0$. 
An interesting case is when the parameters $I_1, a$ and the angle $\theta$ are such that $I_1 - a f = 0$, which requires $f>0$ in particular. Then the ball does not spin with respect to its symmetry axis: $\psi(\alpha) = \psi_0$ for all $\alpha$, and thus the motion corresponds to the ball spinning in the positive sense with respect to the vertical axis $\tbf{K}$: the vector $\mbf{k}$ along the body's symmetry axis precesses about the vertical $\tbf{K}$ with period $\Delta \alpha$. \item If $f=0$, namely if we choose $\theta = \cos^{-1}a$ (which is always possible), then there is no circular motion (the radius tends to infinity): the azimuthal angle $\phi$ is now constant while the ball spins and therefore the centre of mass moves on a straight line. The solution of (\ref{eq:symmthphps}) and (\ref{eq:symmXY}) in this case is $$ \phi = \phi_0, \quad \psi(\alpha) = I_1 \alpha + \psi_0 \,, \quad X(\alpha)= - I_1 s \alpha \cos (\phi_0) + X_0 \,, \quad Y(\alpha) = - I_1 s \alpha \sin (\phi_0)+ Y_0 \,, $$ so the centre of mass moves in a straight line as the Routh sphere rolls. \end{itemize} \section{{Chaplygin ball on a rotating turntable}} \label{sec:ChapBall} The dynamics of a Chaplygin ball on a rotating turntable were analysed in \cite{BBM18}. The centre of mass of the ball coincides with the geometric centre and $I_1 = I_2 \ne I_3$. The holonomic constraint confines the geometric centre to remain at unit distance above the underlying plane so that the vertical velocity of the centre of mass vanishes. Assuming unit mass and unit radius for the Chaplygin ball, the Lagrangian is \smallskip \[ L = \textstyle\frac{1}{2}\bigl[ I_1\,\dot\theta^2 + (I_1 s^2+I_3 c^2 )\dot\phi^2 + (2I_3 c)\dot\phi\dot\psi + (I_3)\dot\psi^2 + \dot X^2 + \dot Y^2 \bigr] \] \smallskip\noindent where $s=\sin\theta$, $c=\cos\theta$ as above. The potential energy is constant and is taken to be zero. We note that, as for the Routh sphere, $L$ is independent of both $\phi$ and $\psi$. There are two nonholonomic constraints, which are linear and homogeneous in the velocities, corresponding to rolling motion without slipping with respect to the rotating turntable: \begin{eqnarray} \dot X &=& \phantom{-} s_\phi \dot\theta - s c_\phi \dot\psi - \Omega Y \label{eq:XdotCB} \\ \dot Y &=& - c_\phi \dot\theta - s s_\phi \dot\psi + \Omega X \label{eq:YdotCB} \end{eqnarray} where $c_\phi=\cos\phi$ and $s_\phi=\sin\phi$, as above, and $\Omega$ is the (constant) angular velocity of the rotating turntable. We write these constraints in the form $$ \gamma^\kappa \equiv A^\kappa_\mu \dot q^\mu + B^\kappa_\mu q^\mu = 0 $$ where $q^\mu = \left(\theta,\phi,\psi,X,Y\right)$ and $\dot q^\mu = \left(\dot\theta,\dot\phi,\dot\psi,\dot X, \dot Y\right)$. Thus, $$ A^\kappa_\mu = \left[ \begin{matrix} - s_\phi & 0 & s c_\phi & 1 & 0 \\ c_\phi & 0 & s s_\phi & 0 & 1 \end{matrix} \right] \,\qquad\mbox{and}\qquad B^\kappa_\mu = \left[ \begin{matrix} 0 & 0 & 0 & 0 & \Omega \\ 0 & 0 & 0 & -\Omega & 0 \end{matrix} \right] \,. $$ We describe a {\bf systematic method} to find Noether symmetries and corresponding constants for the Chaplygin ball on a rotating turntable. 
\begin{enumerate} \item We require the symmetries to satisfy the nonholonomic constraints (\ref{eq:XdotCB}) and (\ref{eq:YdotCB}): \begin{eqnarray} - s_\phi \xi^\theta + s c_\phi \xi^\psi + \xi^X + \Omega Y \tau &=& 0 \,, \label{eq:con1} \\ c_\phi \xi^\theta + s s_\phi \xi^\psi + \xi^Y - \Omega X \tau &=& 0 \,. \label{eq:con2} \end{eqnarray} Note that the component $\xi^\phi$ is absent from these equations. The constraints are two linear algebraic equations for the six symmetry components $(\xi^\theta, \xi^\phi, \xi^\psi, \xi^X, \xi^Y, \tau)$, which reduce the number of independent symmetry components to four. \item We make an \emph{ansatz} for some symmetry components. For example, we might require $\xi^\phi$ to be the only non-vanishing component. \item In the invariance identity (\ref{eq:InvId1}), we substitute the symmetry components that are known from the ansatz. This yields a differential equation for the remaining symmetry components. \item We solve the equation for these components. We can then construct the corresponding conserved quantities, using the invariance identity in the form (\ref{eq:InvId2}). \end{enumerate} \subsection*{Symmetry for the vertical component of angular momentum} Noting that $\xi^\phi$ does not occur in the nonholonomic constraints (\ref{eq:con1}) and (\ref{eq:con2}), we seek a symmetry $$ (\xi^\theta, \xi^\phi, \xi^\psi, \xi^X, \xi^Y, \tau) = ( 0 , \xi^\phi, 0 , 0 , 0 , 0 ) \,. $$ This symmetry automatically satisfies the nonholonomic constraints. Now, because $\phi$ is an ignorable coordinate, the invariance identity (\ref{eq:InvId1}) becomes $p_\phi \dot\xi^\phi = 0$, with solution $\xi^\phi = $ constant. Then the invariance identity in the form (\ref{eq:InvId2}) becomes $\dd p_\phi/\dd t = 0$ so the $\phi$-component of angular momentum \begin{equation} L_Z \equiv p_\phi \label{eq:IntLZ} \end{equation} is an integral of the motion. \subsection*{Symmetries for horizontal components of angular momentum} Noting the unit coefficients of $\xi^X$ and $\xi^Y$ in constraints (\ref{eq:con1}) and (\ref{eq:con2}), we seek two types of symmetry: \begin{eqnarray} (\xi^\theta, \xi^\phi, \xi^\psi, \xi^X, \xi^Y, \tau) &=& (\xi^\theta, \xi^\phi , \xi^\psi, 1 , 0 , 0 ) \,, \label{eq:xiX} \\ (\xi^\theta, \xi^\phi, \xi^\psi, \xi^X, \xi^Y, \tau) &=& (\xi^\theta, \xi^\phi , \xi^\psi, 0 , 1 , 0 ) \label{eq:xiY} \,. \end{eqnarray} We consider these symmetries in turn. Substituting (\ref{eq:xiX}) in the constraints, we easily solve for $\xi^\theta$ and $\xi^\psi$: \begin{equation} \xi^\theta = s_\phi \,, \qquad \xi^\psi = -c_\phi/s \,. \label{eq:xithetaphi} \end{equation} These immediately give us expressions for $\dot\xi^\theta$ and $\dot\xi^\psi$: $$ \dot\xi^\theta = c_\phi \dot\phi \,, \qquad \dot\xi^\psi = (c c_\phi /s^2)\dot\theta + (s_\phi/s)\dot\phi \,. $$ Using these in the invariance identity (\ref{eq:InvId1}), which is $(\partial L/\partial\theta)\xi^\theta + p_\mu \dot\xi^\mu = 0$, we obtain an equation for $\dot\xi^\phi$: $$ \dot\xi^\phi = -\left( \frac{c_\phi}{s^2} \dot\theta + \frac{c s_\phi}{s} \dot\phi \right) \,. $$ This implies that $\xi^\phi$ is a function of $\theta$ and $\phi$ only. We obtain $$ \frac{\partial\xi^\phi}{\partial\theta} = -\frac{c_\phi}{s^2} \,, \qquad \frac{\partial\xi^\phi}{\partial\phi} = -\frac{c s_\phi}{s} \,. 
$$ These are easily seen to satisfy the compatibility condition ${\partial^2\xi^\phi / \partial\theta\partial\phi} = {\partial^2\xi^\phi / \partial\phi\partial\theta}$ and we immediately have the solution \begin{equation} \xi^\phi = \frac{c c_\phi}{s} \,. \label{eq:xipsi} \end{equation} The final step is to substitute (\ref{eq:xithetaphi}) and (\ref{eq:xipsi}) into the invariance identity (\ref{eq:InvId2}) to obtain the Noether integral \begin{equation} L_Y \equiv s_\phi p_\theta + \left(\frac{c c_\phi}{s}\right) p_\phi - \left(\frac{c_\phi}{s}\right) p_\psi + p_X \,. \label{eq:int2} \end{equation} A similar analysis starting from symmetry (\ref{eq:xiY}) yields the Noether integral \begin{equation} L_X \equiv c_\phi p_\theta - \left(\frac{c s_\phi}{s}\right) p_\phi + \left(\frac{s_\phi}{s}\right) p_\psi - p_Y \,. \label{eq:int3} \end{equation} \subsection*{Symmetry for an integral involving the energy} To obtain integrals which are non-linear in the velocities, we need to assume $\tau \neq 0$. We seek a symmetry such that $$ (\xi^\theta, \xi^\phi, \xi^\psi, \xi^X, \xi^Y, \tau) = ( 0 , 0 , 0 , \xi^X, \xi^Y, \tau) \,. $$ The constraints (\ref{eq:con1}) and (\ref{eq:con2}) then become \begin{eqnarray} \xi^X + \Omega Y\, \tau &=& 0 \,, \label{eq:xiconX} \\ \xi^Y - \Omega X\, \tau &=& 0 \,. \label{eq:xiconY} \end{eqnarray} The invariance identity (\ref{eq:InvId1}) is then $$ p_X \dot\xi^X + p_Y \dot\xi^Y - H \dot\tau = 0\,. $$ Differentiating the constraints (\ref{eq:xiconX}) and (\ref{eq:xiconY}) and substituting for $\dot\xi^X$ and $\dot\xi^Y$, we get $$ [H - \Omega(X \dot Y - \dot X Y ) ] \dot\tau = 0\,. $$ This is satisfied for constant $\tau$. Therefore, the invariance identity (\ref{eq:InvId2}) gives us the integral \begin{equation} J \equiv H - \Omega L_{\mathrm{O}} \,, \label{eq:Jint} \end{equation} where $L_{\mathrm{O}} \equiv (X \dot Y - \dot X Y ) = \mathbf{K}\mathbf{\cdot}(\mathbf{R}\boldsymbol{\times}\mathbf{\dot R})$ is the angular momentum due to the centre of mass about the origin of the space frame ($\mathbf{R}=X\mathbf{I}+Y\mathbf{J}$ is the position vector of the point of contact in the space frame). \subsection*{Physical interpretation of the integrals} The angular momentum about the centre of mass is $$ \mathbf{L}_\mathrm{C} \equiv \mathbb{I}_\mathrm{C} \boldsymbol{\omega} = (I_1\omega_1,\, I_2\omega_2,\, I_3\omega_3)\,, $$ with components written in the body frame. Following \cite{BBM18}, we compute the angular momentum about the point of contact, which is, in our notation, $$ \mathbf{L}_\mathrm{P} = \mathbb{I}_\mathrm{C} \boldsymbol{\omega} + \mathbf{K}\boldsymbol{\times}(\boldsymbol{\omega\times}\mathbf{K}) - \Omega\mathbf{R} \,. $$ We note that both the second and third terms on the right are horizontal vectors. It was shown by \cite{BBM18} that, in the body frame, \begin{equation} \left(\frac{\dd\mathbf{L}_\mathrm{P}}{\dd t}\right)_\mathrm{B} = \mathbf{L}_\mathrm{P}\boldsymbol{\times\omega}. \label{eq:LP} \end{equation} Therefore, in the space frame, $$ \left(\frac{\dd\mathbf{L}_\mathrm{P}}{\dd t}\right)_\mathrm{S} = \left(\frac{\dd\mathbf{L}_\mathrm{P}}{\dd t}\right)_\mathrm{B} + \boldsymbol{\omega\times}\mathbf{L}_\mathrm{P} =\boldsymbol{0}\,. $$ It therefore follows that $F_1=\mathbf{I}\boldsymbol{\cdot}\mathbf{L}_\mathrm{P}$, $F_2=\mathbf{J}\boldsymbol{\cdot}\mathbf{L}_\mathrm{P}$ and $F_3=\mathbf{K}\boldsymbol{\cdot}\mathbf{L}_\mathrm{P}$ are integrals of the motion. 
Computation of $F_3$ is simple, since only the first term of (\ref{eq:LP}) contributes: $\mathbf{K}\boldsymbol{\cdot}\mathbf{L}_\mathrm{P} = p_\phi$. Expressions for the remaining integrals can be computed: \begin{eqnarray*} \mathbf{I}\boldsymbol{\cdot}\mathbf{L}_\mathrm{P} &=& c_\phi p_\theta - \left(\frac{c s_\phi}{s}\right) p_\phi + \left(\frac{s_\phi}{s}\right) p_\psi - p_Y \,,\\ \mathbf{J}\boldsymbol{\cdot}\mathbf{L}_\mathrm{P} &=& s_\phi p_\theta + \left(\frac{c c_\phi}{s}\right) p_\phi - \left(\frac{c_\phi}{s}\right) p_\psi + p_X \,. \end{eqnarray*} We see that the three components of $\mathbf{L}_\mathrm{P}$ in the space frame are precisely the three integrals $(L_X, L_Y,L_Z)$ that we have derived from Noether's theorem. In \cite{BBM18}, another integral, similar to the Jacobi integral, was found: $$ E = \textstyle\frac{1}{2} \boldsymbol{\omega} \mathbf{\cdot} \bigl[ \mathbb{I}_\mathrm{C}\boldsymbol{\omega} + \mathbf{K}\mathbf{\times}(\boldsymbol{\omega}\mathbf{\times}\mathbf{K}) \bigr] - \textstyle\frac{1}{2}\Omega^2(X^2+Y^2) \,. $$ They cite the origin of this integral as \cite{FGS18}. It is straightforward to show that $E$ is identical to the integral $J$ in (\ref{eq:Jint}), which we found using Noether's theorem. \subsection*{Interpretation of the symmetries} We proceed as in Section \ref{sec:interp} to find finite versions of the four infinitesimal symmetries just found. \noindent $\bullet$ \textbf{Symmetry for $L_Z$.} In terms of the free parameter $\alpha$ of the symmetry, we get the equation $$\frac{d\phi}{d\alpha} = 1\,,$$ with solution $\phi(\alpha) = \alpha$. The remaining coordinates $(\theta, \psi, X, Y)$ are kept constant. This corresponds geometrically to the spinning of the ball about the point of contact, at a constant angular velocity ${d\phi}/{d\alpha} = 1$. \noindent $\bullet$ \textbf{Symmetry for $L_Y$.} Reading off the coefficients of $p_\mu$ from equation (\ref{eq:int2}), we get the equations $$ \frac{d \theta}{d\alpha} = \sin \phi\,, \qquad \frac{d \phi}{d\alpha} = \frac{\cos \theta \cos \phi}{\sin \theta} \,, \qquad \frac{d \psi}{d\alpha} = - \frac{\cos \phi}{\sin \theta} \,, \qquad \frac{d X}{d\alpha} = 1, \qquad \frac{d Y}{d\alpha} = 0 \,. $$ One immediately gets $X(\alpha) = \alpha$ and $Y=\text{constant}$. This suggests that the geometric interpretation of this symmetry corresponds to a rotation of the ball such that $Y$ is constant and $X$ changes linearly. To see this, consider the equations for $\theta$ and $\phi$. They provide the first integral $$ \sin \theta \cos \phi = y_0\quad \text{(constant)}, $$ which validates this interpretation. A less obvious result follows from the equation for $\psi$, which can be solved by quadrature, giving the implicit first integral $$ \tan(\psi-\psi_0) = \cos\theta \cot \phi \qquad \qquad (\psi_0 = \text{constant}) \,. $$ \noindent $\bullet$ \textbf{Symmetry for $L_X$.} Reading off the coefficients of $p_\mu$ from equation (\ref{eq:int3}), we get the equations $$ \frac{d \theta}{d\alpha} = \cos \phi \,, \qquad \frac{d \phi}{d\alpha} = -\frac{\cos \theta \sin \phi}{\sin \theta} \,, \qquad \frac{d \psi}{d\alpha} = \frac{\sin \phi}{\sin \theta} \,, \qquad \frac{d X}{d\alpha} = 0 \,, \qquad \frac{d Y}{d\alpha} = -1 \,. $$ Here the interpretation of the symmetry corresponds to a rotation of the ball such that $X$ is constant and $Y$ changes linearly. 
In a similar fashion to the results obtained for $L_Y$, we obtain the following first integrals: $$ \sin \theta \sin \phi = x_0 \quad \text{(constant)}, \qquad\quad \tan(\psi-\psi_0) = - \cos\theta \tan \phi \qquad\quad (\psi_0 = \text{constant}) \,. $$ \section{Discussion} The key property of the infinitesimal Noether symmetries found for the Routh sphere and the Chaplygin ball is that they respect the nonholonomic constraints. In the more general case of the {Chaplygin top} or the Rock'n'roller, it is not known whether an infinitesimal Noether symmetry that respects the nonholonomic constraints exists. If such a symmetry existed, then a constant of motion could be constructed via equation (\ref{eq:Noether1}). For example, it is possible to show for these more general cases that the transformation $$\phi \to \phi + \epsilon$$ (while keeping all other variables unchanged, including $X$ and $Y$) is an infinitesimal Noether symmetry. However, this symmetry does not respect the nonholonomic constraints (\ref{eq:Xdot})--(\ref{eq:Ydot}) (with velocities replaced by the generators). In fact, from equation (\ref{eq:InvId2}) we obtain $$\frac{\dd p_\phi}{\dd t} = a s (\lambda_1 c_\phi + \lambda_2 s_\phi)\,,$$ where $\lambda_1$ and $\lambda_2$ are the multipliers associated with the constraints (\ref{eq:Xdot}) and (\ref{eq:Ydot}) respectively. This example shows that a Noether symmetry is potentially useful even if it does not respect the nonholonomic constraints: it provides direct formulas for the total time derivative of quantities, which in principle could be exploited for applications such as finding Lyapunov functions. {Another avenue of research is the understanding of the Lie algebra between the Noether symmetries that we found for nonholonomic systems. In the case of holonomic systems, it is well known that the Lie bracket between two symmetries is another symmetry. This leads to a method for finding new integrals starting from known ones \cite{BH03}. However, when nonholonomic constraints are imposed, the usual Lie bracket between two Noether symmetries does not necessarily produce another Noether symmetry. Further research on the relation between Poisson brackets and symmetries (see \cite{Cushman98, BandT13} for studies in the context of the Routh sphere), is needed to generalise the Lie bracket as a method to produce new Noether symmetries.} \section*{Acknowledgement} We thank the reviewers for valuable comments, which have helped us to improve the paper. We are grateful to Vakhtang Putkaradze for fruitful discussions about Noether's Theorem and its use in the analysis of integrable systems.
\section{Introduction} \label{sec:intro} In the past decade, images of very young planetary nebulae (PNs) and proto-planetary nebulae (PPNs) have revealed an unexpected diversity of morphological classes. Many of these objects appear to exhibit a level of complexity that cannot be accounted for in terms of the Generalized Interacting Stellar Winds model (GISW; Balick \& Frank 2002 and references therein). Of particular interest are objects exhibiting point-symmetric, multi-polar, and ``butterfly'' morphologies, as well as bipolar and multi-polar objects exhibiting highly collimated ``jet-like'' outflows. The appearance of these collimated and sometimes multi-polar outflows in so many PPNs has led to the suggestion that high-speed jets operate during the late asymptotic giant branch (AGB) and/or post-AGB evolutionary phases of the central star (Sahai \& Trauger 1998). While the GISW model can account for narrow jets (Icke et al. 1992; Mellema \& Frank 1997; Borkowski, Blondin, \& Harrington 1997), it assumes the winds are radiatively driven. Radiative acceleration cannot, however, account for these flows, since a number of observational studies demonstrate a momentum excess of a factor $\sim 10^3$ between the observed outflow momentum and what can be attributed to stellar radiation pressure (Bujarrabal et al. 2001). Moreover, it is difficult to attribute the degree of observed collimation to a large-scale dust torus as is usually required in the GISW model. In addition, the problem of accounting for the precession necessary for the production of point-symmetric flows remains to be solved [for an accretion disk based model see Icke (2003)]. For these reasons, the suggestion has been made that PPN jets and collimated outflows are magnetically driven (Blackman et al. 2001a, 2001b; Frank \& Blackman 2004; Matt, Frank, \& Blackman 2006; Frank 2006). Magnetically driven models couple rotation to a magnetic field. Jets are therefore bound to flow along the rotational axis of the central object and it is difficult to see how multiple jets of similar size can be driven by such a mechanism. We discuss these models and this issue in more detail at the end of the paper. \par Observationally, clumps and collimated flows occur in many stellar outflows though not always together. The outflows in Wolf-Rayet (WR) nebulae are clumpy, but jets are not observed. In young stellar objects (YSOs), jets and collimated bipolar outflows are quite common, and while they can often be clumpy, the jet beams---distinct from the bow shocks which they drive---are often apparent, stretching all the way back to the stellar source. In mature PN, clumps are often seen [as in NGC 2392 (Eskimo), NGC 6853 (Dumbbell) and NGC 7293 (Helix)]. Fully articulated jets are, however, very rare. We note that ionization shadows and ``mass loaded'' flows behind clumps can give the appearance of jets. In some mature PN such as the Cat's Eye nebula, structures appear (some of which fall under the term FLIERS) which may be the remnants of poleward-directed flows. In HST images of many PPNs, the outlines of reflected light are often bipolar, but within these boundaries the illuminated gas seems irregular. Thin jets (as opposed to thin finger-shaped lobes) are rarely seen directly except (perhaps) in OH231.8+04.2 (Calabash). However, pairs or sets of knots lying along or near the apparent symmetry axes are not unusual (M1-92, IRAS 20028+3910, IRAS 16594-3656, Hen 3-1475). 
Thus the creation of continuous jets, as in the case of YSOs, does not seem to be the norm in PNs and PPNs. \par Clumps or ``bullets'' driven into the surrounding media have been found to be an effective explanation for some stellar outflow structures. In Poludnenko, Frank, \& Mitran (2004) the authors modeled the strings of $\eta$ Car as bullets of high-speed material ejected by the star. The simulations showed that long, thin morphologies similar to jets were readily obtained along with multiple rings associated with vortex shedding and the break-up of the clump. The authors suggested that such ``impulsive'' models may be useful in PPNs as well. Such a scenario is very different from the jet-driven explanation for PPN/PN. In this paper we seek to explore the usefulness of the clump picture. \par Soker (2000) has analytically explored the role of jets in PNs. In the excellent study of Lee \& Sahai (2003), simulations of jets as the drivers of PPNs were presented, including detailed comparisons with observations. Our goals in this paper are more modest. In what follows we take a first step in the exploration of the clumps vs. jets issue by examining 2-dimensional and 3-dimensional pairs of simulations, with each pair consisting of either a steady jet impinging upon a circumstellar gaseous medium, or a clump of gas which is fired ballistically through the same medium along a trajectory corresponding to the direction of flow in the steady jet \footnote{In a related study, Raga et al.\ (2007) have also recently presented a model of the ``3D structure of a radiative, cosmic bullet flow.''}. Both the jet and the clump are assumed to be magnetically launched though no attempt to model the launch mechanism is made here, and the simulations are purely hydrodynamic. For the present, we are merely interested in examining how the clump and jet differ in their effect on the surrounding circumstellar medium. As we will show, the jet and clump models show differences which require further study, but the clumps provide at least as good an account of the key observational characteristics, if not a better one. Given the fact that in some cases multiple outflows are seen in a single object (such as CRL 618), the clump model may be more plausible since what are often interpreted as multiple ``jets'' could instead arise naturally from the fragmentation of an explosively driven, polar-directed shell. We note here that none of the widely accepted magnetically launched outflow models would create continuous multiple jets of similar or equal age driven in slightly different directions. We note also that new models of binary stars in the context of PNs (Nordhaus \& Blackman 2006, Nordhaus, Blackman \& Frank 2007) show the extent to which envelope ejection can be shaped by gravitational interactions. In Nordhaus \& Blackman (2006), Common Envelope scenarios which lead to aspherical mass loss (including disk creation and possible MHD launching) were articulated. In Nordhaus, Blackman \& Frank (2007), Common Envelope models were explored as the source of differential rotation in the primary, which could drive strong dynamo-supported magnetic fields. These models showed that while single stars may, in some cases, be able to support a strong field over AGB timescales, binary interactions were highly effective at creating the fields needed to power PPN outflows at the evolutionary moment when they will be required. As we will see, such models provide strong theoretical support for the scenario we argue for in this paper. 
In section \ref{sec:CompInit} we provide information concerning the numerical methods used, details of the jet and clump models, and a discussion of the initial conditions. In section \ref{sec:Results} we discuss the results of our simulations and in section \ref{sec:Discussion} we summarize our conclusions. \section{Computational Methods and Initial Conditions} \label{sec:CompInit} We have carried out two pairs of hydrodynamic simulations (one ``medium-resolution'' 3D pair and one ``high-resolution'' 2.5D pair). Each pair consists of a jet and a clump, with the two parameterized to be as similar to one another as possible. Specific parameter values for the jet, clump, and ambient medium for each simulation are given in Table~\ref{tab:t1}. The simulations are performed using the AstroBEAR code, which is an extension of the BEARCLAW adaptive mesh refinement (AMR) package for solving conservation laws. [For a detailed description of the AstroBEAR package see section 3 of Cunningham, Frank, \& Blackman (2006).] The domain is a rectangular box with a square cross-section and with the $x$-axis chosen to intersect the center of the left square face of the domain. The origin of coordinates is placed at this point of intersection. The clump and jet are launched along the $x$-axis and placed so that their centers coincide with it. The jet was modeled in 3D with a circular cross-section of radius $r_{\rm j}$ (in 2.5D the jet cross-section reduces to a line-segment of length $2r_{\rm j}$ and a thickness of one computational cell). The jet was launched into the domain from a set of fixed cells along the domain boundary. To prevent the expansion of the jet inflow boundary with time, a ring of zero velocity with outer radius $1.125r_0$ was maintained around the jet-launching region. The velocity profile of the jet was smoothed about a nominal value $v_{{\rm j},0}$ according to \begin{equation} \label{eq:jetvprof} v_{\rm j} = v_{{\rm j},0} \left[ 1-\left(1-s\right) \left(\frac{r}{r_0}\right)^2 \right], \end{equation} where $s$ is a shearing parameter taking values between $0$ and $1$; for the jet simulation presented here it is set to $s=0.9$. The clump was modeled in 3D as a spherical over-density of radius $r_{\rm c} = r_{\rm j}$. (The sphere reduces to a circle in 2.5D.) Its initial position in the domain is chosen so that its center is located at the point \begin{equation} \label{eq:clumpcent} {\mathbf r}_{\rm {c},0} (x,y,z) = (2r_{\rm c},0,0), \end{equation} and the density of the clump as a function of location within the clump is \begin{equation} \label{eq:clumpdens} n_c(r) = n_a(r) + n_0 \left[ 1 - \left( \frac{ \left|{\mathbf r}- {\mathbf r}_{c,0} \right|} {r_0 } \right)^2 \right], \end{equation} where \begin{equation} \label{eq:ambdens} n_{\rm a}(r) = {\rm min} \left( n_0,\frac{n_0r_0^2}{r^2} \right), \end{equation} is the ambient number density profile in regions of the domain unoccupied by jet or clump gas, and where $r^2=x^2+y^2+z^2$, $n_0$ is the nominal ambient number density, and $r_0$ is a characteristic length taken to be equal to the jet or clump radius. The 3D (2.5D) simulations are carried out on a base grid with a resolution of 6 (12) cells per jet/clump radius and with two levels of AMR refinement, providing an effective resolution of 24 (48) cells per jet/clump radius.
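For concreteness, the initial state defined by equations (\ref{eq:jetvprof})--(\ref{eq:ambdens}) can be written down in a few lines of Python. The fragment below is only an illustrative sketch (the grid dimensions are arbitrary, lengths are measured in units of $r_0$, and this is not the AstroBEAR implementation itself):
\begin{verbatim}
import numpy as np

# Nominal parameters from Table 1 (lengths in units of r0 = 500 AU)
n0, r0 = 500.0, 1.0      # ambient number density [cm^-3], jet/clump radius
v_j0, s = 100.0, 0.9     # peak jet velocity [km/s], shear parameter

# Coarse Cartesian grid, for illustration only
x = np.linspace(0.0, 20.0, 240)
y = np.linspace(-3.0, 3.0, 72)
z = np.linspace(-3.0, 3.0, 72)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

# Ambient profile: n_a(r) = min(n0, n0 r0^2 / r^2), r measured from the origin
r_sph = np.sqrt(X**2 + Y**2 + Z**2)
n = np.minimum(n0, n0 * r0**2 / np.maximum(r_sph, 1.0e-6)**2)

# Spherical clump centred at (2 r0, 0, 0): n_c = n_a + n0 [1 - (|r - r_c0|/r0)^2]
d = np.sqrt((X - 2.0 * r0)**2 + Y**2 + Z**2)
inside = d <= r0
n[inside] += n0 * (1.0 - (d[inside] / r0)**2)

# Jet inflow profile on the x = 0 face: v_j(r) = v_j0 [1 - (1 - s)(r/r0)^2]
# inside r0, surrounded by a zero-velocity ring out to 1.125 r0
r_cyl = np.sqrt(Y[0]**2 + Z[0]**2)
v_inflow = np.where(r_cyl <= r0,
                    v_j0 * (1.0 - (1.0 - s) * (r_cyl / r0)**2), 0.0)
\end{verbatim}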
In all cases, radiative cooling is modeled using the atomic line cooling function of Dalgarno \& McCray (1972), and we do not attempt to follow the detailed ionization dynamics or chemistry of the cooling gas. Given that both models give rise to similarly expanding shells of shock-heated gas, we do not expect this simplification to materially affect our conclusions. \clearpage \begin{table*}[h] \scriptsize \begin{tabular}{llll} \hline \hline Model & Parameter & Value (2.5D) & Value (3D) \\ \hline \\ Jet & Radius, $r_j$ \dotfill & $500 {\rm\ AU}$ & $500 {\rm\ AU}$ \\ & Computational cells per $r_j$ \dotfill & $48$ & $24$ \\ & Number density, $n_j$ \dotfill & $500 {\rm\ cm}^{-3}$ & $500 {\rm\ cm}^{-3}$ \\ & Peak velocity, $v_{j,0}$ \dotfill & $100 {\rm\ km\ s^{-1}}$ & $100 {\rm\ km\ s^{-1}}$ \\ & Temperature, $T_j$ \dotfill & $200 {\rm\ K}$ & $200 {\rm\ K}$ \\ & Nominal ambient density, $n_a$\dotfill & $500 {\rm\ cm}^{-3}$ & $500 {\rm\ cm}^{-3}$ \\ & Ambient temperature, $T_a$ \dotfill & $200 {\rm\ K}$ & $200 {\rm\ K}$ \\ & Shear parameter, $s$ \dotfill & $0.9$ & $0.9$ \\ \vspace{-6pt} \\ \hline \\ \vspace{-12pt} \\ Clump & Radius, $r_c$ \dotfill & $500 {\rm\ AU}$ & $500 {\rm\ AU}$ \\ & Computational cells per $r_c$ \dotfill & $48$ & $24$ \\ & Nominal number density, $n_0$ \dotfill & $500 {\rm\ cm}^{-3}$ & $500 {\rm\ cm}^{-3}$ \\ & Velocity, $v_c$ \dotfill & $100 {\rm\ km\ s^{-1}}$ & $100 {\rm\ km\ s^{-1}}$ \\ & Temperature, $T_c$ \dotfill & $200 {\rm\ K}$ & $200 {\rm\ K}$ \\ & Nominal ambient density, $n_a$\dotfill & $500 {\rm\ cm}^{-3}$ & $500 {\rm\ cm}^{-3}$ \\ & Ambient temperature, $T_a$ \dotfill & $200 {\rm\ K}$ & $200 {\rm\ K}$ \\ \vspace{-0.35cm} \\ \hline \hline \end{tabular} \caption{\normalsize Simulation Parameters \label{tab:t1} } \normalsize \end{table*} \clearpage \section{Results} \label{sec:Results} Results of our simulations are presented in figures \ref{fig:f1}$-$\ref{fig:f6}. In Figures \ref{fig:f1} and \ref{fig:f2} we present the results of the medium-resolution (24 cells per radius) 3D simulations of one jet and one clump, respectively. The length of the domain in these simulations is 20 computational units, with one computational unit corresponding to a physical scale of 500 AU. (One computational unit is also the value chosen for the radii of the jet and clump.) In each figure the upper image shows a plot of emission integrated along the line of sight, which in the case of these figures is perpendicular to the plane of the image. The lower image in each figure is a plot of the logarithm of density in a plane coincident with the $x$-$y$ plane. The images show the jet and clump near the end of their respective runs, at time $t\simeq 498 {\rm\ yr}$ for the case of the clump and at time $t\simeq 636 {\rm\ yr}$ for the case of the jet. The resolution is seen to be sufficient to capture vortex-shedding events in both simulations. It is also evident from these images that while one can easily distinguish jet from clump in the density maps, the emission maps are quite similar. We note that the clump gives rise to a somewhat more collimated flow, while the jet bow shock expands laterally at a greater rate than the clump bow shock. The jet also lags behind the clump in its forward motion. This is likely due to the streamlining that occurs as the head of the clump is reduced in size as material is ablated away via its interaction with the ambient medium.
\par Due to limits on computational resources, it was necessary to restrict the resolution and run time of the 3D simulations from which the images in figures \ref{fig:f1} and \ref{fig:f2} are taken. The simulations end just as the vortex-shedding events begin to have an interesting effect on the nebular environment. To explore this stage further we carried out the second pair of 2.5D simulations mentioned above. In these high-resolution simulations, the effective resolution was doubled to 48 cells per jet/clump radius, the length of the domain was doubled, and the transverse dimensions of the domain were enlarged in an attempt to accommodate the lateral expansion of the jet/clump bow shock (this latter adjustment was successful only for the case of the clump). Results from these simulations are presented in figures \ref{fig:f3} through \ref{fig:f6}. In figures \ref{fig:f3} and \ref{fig:f4} we again present images of the logarithm of density for the jet and clump, respectively---this time reflected about the axis of symmetry. Both figures show the simulations at various stages of the flow. We observe that the differences found between the two models in the 3D simulations---i.e. the faster domain crossing time and the higher degree of collimation exhibited by the clump---are seen again in these images. The vortex shedding, however, is now captured with greater clarity for both jet and clump, and we begin to see significant qualitative differences in the manner in which these events unfold. In particular we note that shedding events are much more frequent in the case of the clump. These results mirror those found by Poludnenko, Frank, \& Mitran (2004). It is noteworthy that their study used a different integration scheme from the one used here. AstroBEAR has a number of schemes built into it; in the Poludnenko et al. study a wave propagation scheme was used (LeVeque 1997), while here a MUSCL-Hancock method is used. The fact that the basic morphology of clumps driving bow shocks dominated by vortex shedding events is recovered using both schemes gives us confidence in this aspect of the dynamics. \subsection{Morphology} To get a better sense of how the differences between the models might appear observationally, we present integrated emission maps for the jet and clump, respectively, in figures \ref{fig:f5} and \ref{fig:f6}. These figures were produced by calculating the line-of-sight emission from the data cube obtained by rotating the cylindrically symmetric data set about the axis of symmetry. The intensity shown, which does not distinguish among cooling lines, was determined according to: \begin{equation} I_{i,j} = \sum_k n_{i,j,k}^2\Lambda(T_{i,j,k}), \end{equation} where $i$, $j$, and $k$ refer to the $x$, $y$, and $z$ directions in the final data cube created by rotating $n(r,z)$ and $T(r,z)$ about the axis of symmetry, and $\Lambda$ is the cooling function. The images shown correspond to the final frames in each of figures \ref{fig:f3} and \ref{fig:f4}, respectively. Each figure provides two views of the data: one in which the angle of inclination of the symmetry axis with respect to the image plane, $\theta$, is $0^\circ$; and one in which it is $20^\circ$. One difference between the jet and clump cases appears in the shape of the head of the bow shock. A clump has a finite reservoir of mass which interacts with the ambient medium. As the clump propagates down the grid, it drives a (bow) shock wave into the ambient medium. A second shock passes through the clump, heating and compressing it.
When cooling is present, this ``transmitted shock'' first leaves the clump flattened. As material is then ablated away via the interactions with the ambient medium, the remaining clump material becomes dense and streamlined in the direction of propagation. At later times in the simulation the dense core of the clump drives a {\bf V}-shaped bow shock head. In the case of a jet the situation is different. The jet head drives a bow shock into the ambient medium, and a transmitted shock, called the jet shock, propagates back into the jet material. Decelerated jet material flows transverse to these shocks, inflating a cocoon behind the wings of the bow shock. Unlike the clump, however, there is always more high-speed material behind the jet shock/bow shock pair to resupply the interaction. Thus, with material continuously flowing into the cocoon, the bow shock head remains wider and takes on a flatter, more {\bf U}-shaped configuration. Such a distinction between {\bf V}- and {\bf U}-shaped flows may be important in comparing with observations. We note that the 2.5D and 3D simulations both show this difference. We note, however, that the axial symmetry will tend to enhance features on the axis. Our 3D runs do not yet have the resolution to accurately track the break-up of the clump. Thus the {\bf V} and {\bf U} bow shock head distinction must be considered less than conclusive and awaits further study. Vortex shedding provides another morphological distinction. In the case of the clump, the relatively frequent shedding events have led to a series of thin, irregularly spaced rings of enhanced intensity centered about the symmetry axis. These are reminiscent of the ring-like structures observed in some collimated PPN outflows [see for example Trammell \& Goodrich (2002)]. The shedding events occurring in the jet simulation lead to similar structures, but these are less frequent and somewhat more band-like in character. The qualitative differences in the manner in which these rings form in the outflows, depending on whether one models them as jets or clumps, might suggest a means of distinguishing between the two models in observations. One must, once again, be careful not to over-interpret these results due to limits on the resolution and the fact that these simulations are 2.5D. We thus conclude that the high-resolution 2.5D simulations lend weight to the assertion that clumps and/or jets can account for observed ringed structures, but neither can be ruled out as a model for the collimated outflows observed in the environments of PPNs. In the meantime, we note that this conclusion in itself is important with respect to PPN studies, as we will discuss in the last section. It is also noteworthy that Lee \& Sahai (2003) attempted to model the rings via a pulsed jet. Each ring became associated with an ``internal working surface'' where faster-moving material swept over slower-moving material. The internal shocks lead to transverse motions of shocked material, which impinge upon the bow shock. As might be expected, the strength of the emission from these shocks decreased as the pulse traveled down the length of the beam. Such dimming of the rings with distance from the source is not what is observed in CRL 618. The clump, on the other hand, produces the opposite kind of pattern, as ablation events on the clump lead to rings that are bright closer to the head of the bow shock.
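Schematically, the synthetic maps of figures \ref{fig:f5} and \ref{fig:f6} are built by rotating the axisymmetric $n(r,z)$ and $T(r,z)$ data about the symmetry axis into a Cartesian cube and then summing $n^2\Lambda(T)$ along the line of sight. The Python fragment below is only a minimal sketch of that procedure (the helper names are ours, the sampling is nearest-cell, a toy cooling curve is used, and the inclination rotation and AMR bookkeeping of the actual analysis are omitted):
\begin{verbatim}
import numpy as np

def axisym_to_cube(n_rz, T_rz, r):
    """Rotate axisymmetric fields n(r,z), T(r,z) about the z (symmetry)
    axis into Cartesian cubes, sampling the nearest radial cell."""
    nr = len(r)
    xy = np.linspace(-r[-1], r[-1], 2 * nr)
    X, Y = np.meshgrid(xy, xy, indexing="ij")
    ir = np.clip(np.searchsorted(r, np.sqrt(X**2 + Y**2)), 0, nr - 1)
    return n_rz[ir], T_rz[ir]          # shapes (2nr, 2nr, nz)

def emission_map(n_cube, T_cube, lam):
    """Integrated emission I_{i,j} = sum_k n^2 Lambda(T) along the y axis
    (theta = 0; a non-zero inclination would require rotating the cube
    first, e.g. with scipy.ndimage.rotate)."""
    return np.sum(n_cube**2 * lam(T_cube), axis=1)

# Tiny usage example with made-up data and a toy cooling curve
r = np.linspace(0.0, 1.0, 32)
n_rz = np.ones((32, 64))
T_rz = np.full((32, 64), 1.0e4)
image = emission_map(*axisym_to_cube(n_rz, T_rz, r),
                     lam=lambda T: 1.0e-22 * (T / 1.0e4) ** 0.5)
\end{verbatim}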
\subsection{Kinematics} In addition to comparisons of morphology, it is also important to consider the flow kinematics, since observed flows are often seen to exhibit ``Hubble-flow'' characteristics---that is, the flow velocity is observed to increase linearly with respect to distance from the flow origin. To address this issue we present in figure \ref{fig:f7} plots of the $x$-component of the flow velocity averaged over the directions transverse to the flow, $\left<v_x\right>$. Velocity tracers were not used in our simulations. In order, therefore, to differentiate between mildly perturbed ambient gas and gas that is fully involved in the flow, values of velocity $\lesssim 0.01 v_0$ were ignored, where $v_0$ is the initial velocity of the clump or jet gas. The top row of the figure shows, from left to right, the results for the 3D jet and clump, respectively, while the bottom row shows the results for the 2.5D jet and clump. The data for these plots are taken from times near the end of the simulation, when the flows have crossed most of the domain. In the two plots involving clumps, there are large regions of the flow for which the variation of velocity with distance is roughly linear. The jets, on the other hand, fail to show this behavior altogether. Comparison of the 3D clump plots to the 2.5D case yields an interesting result. Recall that the first vortex-shedding events in both clump and jet were observed to occur in the 3D simulations shortly before the end of the simulation. Because of this they do not have time to perturb the flow in a way that might be noticeable in these plots. However, when we examine this phenomenon kinematically in the extended spatial domain allowed by the 2.5D simulation, we find that while the vortex shedding events appear to perturb the kinematics of both jet and clump, these perturbations do not alter the overall qualitative character of the flows in either case. To further examine the kinematics of our simulated flows, we have also produced a set of synthetic position-velocity (PV) diagrams for the 2.5D simulations. These are presented in figure \ref{fig:f8}. Once again, as in figures \ref{fig:f5} and \ref{fig:f6}, we present our results in pairs corresponding to values of $0^\circ$ and $20^\circ$ for the angle of inclination, $\theta$, of the flow symmetry axis with respect to the image plane. These plots were produced by calculating the velocity structure along the line of sight, with the ``slit'' placement taken to be along the projected axis of symmetry. Results for the jet are given in the first row of the figure, and results for the clump are given in the second row. In these images, the differences between the clump and jet are even more striking. For either angle of inclination, the velocity structure of the jet cannot be said to be even approximately Hubble-like. The clump, however, continues to exhibit line-of-sight velocity structure indicative of a linear increase with distance along the projected direction of flow. The effect is particularly apparent in the case of the flow which is inclined with respect to the image plane. These results, and those of figure \ref{fig:f7}, suggest that it may be possible to distinguish outflows driven by steady jets from those driven by explosive events through a careful examination of their kinematics. One caveat which must be considered in these results is the role of emission.
Because our models do not track emission from individual species, we cannot separate the emission at the bow shock from that within the jet or from the shocked jet material. In Ostriker et al. (2001), a model for the emission from a jet-driven bow shock was presented which showed a characteristic spur pattern in synthetic PV diagrams. The spur exhibits a rapid drop in velocity away from the tip of the bow shock. Lee \& Sahai (2003) found a range of patterns in their jet simulations, which in some cases took on the spur morphology. Thus our results are suggestive of the differences between jets and clumps and indicate that clumps appear to be better, in general, at recovering quasi-linear increases in velocity along the nebular outflow lobe. \subsection{Kinematic Models} \label{sec:KinMod} In order to interpret our results we consider the time-dependent distortion of the clump gas during the evolution of the outflow. Strongly radiative, hypersonic clouds of any geometry will be rapidly compressed into a thin ballistic sheet after ejection by the outflow progenitor. We therefore consider the motion of a cylindrically symmetric disk with surface density $\chi(r)$ and velocity $v(r,t)$, where $v(r,0)=v_0$, to model the time-dependent evolution of the clump gas. The equation of motion for a differential ring of the disk under the ram pressure of the ambient gas of density $\rho$ is given by: \begin{equation} \rho v^2(r,t) = -\chi \frac{dv}{dt}. \label{eq:newton} \end{equation} Because the outflow bow shock is convex, most of the outflow-entrained ambient gas will be swept outside of the path of the clump into the bow shock. We therefore neglect accretion of ambient material onto the clump and the kinematics of ambient material ejection in this model. Because the disk is hypersonic in a strongly cooling environment, we consider the model disk to be ballistic, neglecting pressure forces. For simplicity we also take the density of the ambient gas to be constant. Thus the equation of motion for a differential ring of clump gas with radius $r$ integrates to: \begin{equation} v=\frac{v_0}{1+\rho v_0 t/\chi}. \label{eq:eqvkin} \end{equation} The distance traversed by the ring is given as: \begin{equation} L(\chi,t)=\int_0^t v(t')\, dt'=\frac{\chi}{\rho}\ln\left[1+\frac{\rho v_0 t}{\chi}\right]. \label{eq:eqLkin} \end{equation} The quantities $v$, $t$, and $L$ all refer to the same ring. What differentiates one ring of material from another is the parameter $r$. Now, at some late time $t$, we imagine the rings to have been distributed over the length of the outflow, with this distribution depending on $r$. For this fixed value of $t$, we are interested in plotting the velocity of each ring against its corresponding distance. The $r$-dependence of $v$ and $L$ enters into the expressions for these quantities through the surface density $\chi$. We therefore model $\chi$ by assuming that the clump of gas from which our disk formed was initially spherical, of constant volume mass density $\sigma$, and compressed in such a way that all material within the volume of the clump and lying along a given line passing through the clump in the direction of its motion remains on this line after compression. Then, \begin{equation} \chi(r)=2\sigma r_0\sqrt{1-(r/r_0)^2}, \end{equation} where $r_0$ is the radius of the clump/disk.
Introducing the dimensionless quantities: \begin{equation} \tilde r= \frac{r}{r_0}, \end{equation} \begin{equation} \tilde v = \frac{v}{v_0}, \end{equation} \begin{equation} \tilde t = \frac{\rho v_0 t}{2\sigma r_0}, \end{equation} and \begin{equation} \tilde L = \frac{\rho}{2\sigma r_0}L, \end{equation} our parametric equations are: \begin{equation} \tilde v=\left[ 1+\frac{\tilde t}{\sqrt{1-{\tilde r}^2}} \right]^{-1}, \end{equation} and \begin{equation} \tilde L = \sqrt{1-{\tilde r}^2}\ln \left[ 1+\frac{\tilde t}{\sqrt{1-{\tilde r}^2}} \right]. \end{equation} The value of $\tilde t$ is chosen by assuming $\sigma\gtrsim 2\rho$, and by noting that at late times $v_0 t\gtrsim r_0$. In our 2.5D simulations we have $v_0t/r_0 \sim 4\tilde t \simeq 20$, making $\tilde t\simeq 5$. Figure \ref{fig:f9} shows a plot of $v$ vs $L$ in astronomically relevant units with this choice of $\tilde t$. For purposes of comparison, a line with an appropriately chosen slope and intercept is plotted as well. In spite of the simplicity of our analytical model, a comparison of this plot with the lower-right-hand plot of figure \ref{fig:f7} reveals good agreement between the two. Both curves exhibit a small concave curvature for small $L$ while becoming increasingly linear with increasing $L$ (the downward turn in the 2.5D simulation-based plot results from extending the plot into regions not yet reached by the clump). We also find that the range of $L$-values over which the curve shows linear behavior increases for increasing $\tilde t$, implying a correlation between the kinematical ages of PPN outflows and the extent to which they are observed to exhibit Hubble-like flows. We take this calculation as further evidence that our simulations are accurately capturing the dynamics of the flow, and as further support for the relevance of bullet models for PPNs. \clearpage \begin{figure*}[ht] \vskip 18pt \includegraphics[angle=0, width=6.0in, keepaspectratio=true, trim= 0 0 0 0 clip=true] {f1.eps} \caption{3D Jet at time $t\simeq 636 {\ \rm yr}$, top: integrated emission assuming atomic line cooling; bottom: base-ten logarithm of density. \label{fig:f1}} \end{figure*} \begin{figure*}[ht] \vskip 18pt \includegraphics[angle=0, width=6.0in, keepaspectratio=true, trim= 0 0 0 0 clip=true] {f2.eps} \caption{3D clump at time $t\simeq 498 {\ \rm yr}$, top: integrated emission assuming atomic line cooling; bottom: base-ten logarithm of density. \label{fig:f2}} \end{figure*} \begin{figure*}[ht] \vskip 18pt \includegraphics[angle=0, width=6.0in, keepaspectratio=true, trim= 0 150 0 0 clip=true] {f3.eps} \caption{Density of 2.5D Jet at times $t\simeq 336, 628, 896, {\rm and\ } 1165 {\rm \ yr}$. The data is shown here reflected about the axis of symmetry. \label{fig:f3}} \end{figure*} \begin{figure*}[ht] \vskip 18pt \includegraphics[angle=0, width=6.0in, keepaspectratio=true, trim= 0 150 0 0 clip=true] {f4.eps} \caption{Density crosscut of 2.5D Clump at times $t\simeq 208, 477, 753,{\rm and\ } 1082 {\rm \ yr}$. The data is shown here reflected about the axis of symmetry. \label{fig:f4}} \end{figure*} \begin{figure*}[ht] \vskip 18pt \includegraphics[angle=0, width=6.0in, keepaspectratio=true, trim= 0 300 0 0 clip=true] {f5.eps} \caption{Integrated emission of 2.5D Jet at time $t\simeq 1132 {\rm \ yr}$. The data is shown here rotated about the axis of symmetry and with angles of inclination of the symmetry axis with respect to the image plane of $\theta=0^\circ$ (top) and $\theta=20^\circ$ (bottom).
\label{fig:f5}} \end{figure*} \begin{figure*}[ht] \vskip 18pt \includegraphics[angle=0, width=6.0in, keepaspectratio=true, trim= 0 300 0 0 clip=true] {f6.eps} \caption{Integrated emission of 2.5D Clump at time $t\simeq 1082 {\rm \ yr}$. The data is shown here rotated about the axis of symmetry and with angles of inclination of the symmetry axis with respect to the image plane of $\theta=0^\circ$ (top) and $\theta=20^\circ$ (bottom). \label{fig:f6}} \end{figure*} % \begin{figure*}[ht] \vskip 18pt \includegraphics[angle=0, width=6.0in, keepaspectratio=true, trim= 0 300 0 0 clip=true] {f7.eps} \caption{Comparisons of weighted average parallel flow velocity, $\left<v_x\right>$, vs distance from the flow origin for clump and jet for the 3D simulations (top row), calculated at time $t\simeq 636 {\rm \ yr}$, and the 2.5D simulations (bottom row), calculated at time $t\simeq 1165 {\rm \ yr}$. (See the text for an explanation of the weighting.) \label{fig:f7}} \end{figure*} % \begin{figure*}[ht] \vskip 18pt \includegraphics[angle=0, width=6.0in, keepaspectratio=true, trim= 0 300 0 0 clip=true] {f8.eps} \caption{Position-velocity diagrams at time $t\simeq 1082 {\rm \ yr}$ for the 2.5D Jet (top row) and the 2.5D Clump (bottom row), assuming inclination angles of $\theta=0^\circ$ (left) and $\theta=20^\circ$ (right). \label{fig:f8}} \end{figure*} \begin{figure}[ht] \includegraphics[width=0.75\textwidth] {f9.eps} \caption{$V(r)$ vs $L(r)$ for $\tilde t=5$. See section \ref{sec:KinMod} for an explanation of this figure. \label{fig:f9}} \end{figure} \clearpage \section{Discussion and Conclusions} \label{sec:Discussion} In this paper we have examined the results of two pairs of simulations intended to model the gross morphological and kinematical properties of PPNs. Our primary purpose was to ask not if we could distinguish between jet and clump models, but instead to ascertain if clump models could perform equally well at recovering these properties. Below we explain the justification for this more explicitly and present the conclusions of our study. As was discussed in the introduction, MHD models of PN shaping have been explored by a variety of authors and in a variety of forms. As we will explain below, it is the imperative of MHD models, particularly magneto-centrifugal launch and collimation scenarios, which motivates this paper. Two distinct classes of model for the magnetic shaping of winds in PNs and PPNs have been suggested to date. First there is the Magnetized Wind Bubble (MWB) model, originally proposed by Chevalier \& Luo (1994) and studied numerically by R\'o\.zyczka \& Franco (1996) and Garcia-Segura et al. (1999). In these models, an initially weak toroidal magnetic field is embedded in a radiatively driven wind. This configuration has been shown capable of accounting for a wide variety of outflow morphologies, including highly collimated jets. These models clearly demonstrate the importance of magnetic fields. They cannot, however, account for the excess momentum in the flows (Bujarrabal et al. 2001) because the fields are weak and simply ride along in a radiatively driven wind. \par The other class of model invokes so-called Magnetocentrifugal Launching (MCL; Blandford \& Payne 1982; Pelletier \& Pudritz 1992). This paradigm has, for many years, been explored as the mechanism driving jets in young stellar objects (YSOs), micro-quasars, and active galactic nuclei (AGNs). The MCL paradigm assumes the presence of a rotating central gravitating object (which may or may not include an accretion disk).
In the case of a disk, plasma is threaded by a magnetic field whose poloidal component is in co-rotation with the disk. Disk-coronal gas is then subject to the centrifugal force, which accelerates the gas, flinging it out along field lines. The magnetized plasma eventually expands to a configuration where the toroidal component of the field dominates and hoop stresses collimate the flow. Thus the MCL paradigm accounts for both the origin of the wind and the means of collimation. \par The success of the MCL paradigm in modeling jets associated with YSOs and AGNs has led some authors to suggest applying the idea in the context of PNs and PPNs (Blackman et al. 2001a; Frank \& Blackman 2004). Most recently it has been shown that the observed total energy and momentum in PPNs can be recovered with disk wind models using existing disk formation scenarios via binary interaction (Frank \& Blackman 2004 and references therein). \par Most theoretical investigations of the MCL paradigm assume a steady-state flow. Observations suggest, however, that acceleration times for the flows are as much as an order of magnitude shorter than typical kinematical PPN ages (Bujarrabal et al. 2001). This implies that the mechanism responsible for the observed flows may operate explosively, i.e., the time over which the mechanism acts is short compared to the lifetime of the flow. Moreover, it has been suggested by Alcolea et al. (2001) that such a scenario would also provide the most straightforward explanation for the ``Hubble law'' kinematics observed in some PPN outflows (Balick \& Frank 2002; Bujarrabal, Alcolea, \& Neri 1998; Olofsson \& Nyman 1999). The MCL paradigm can act transiently, however, when linked with the rapid evolution of its source, as for example in the case of the proposed mechanisms for gamma-ray bursts (GRBs; Piran 2005) and supernovae (SNe). This scenario has been investigated by a number of authors (Klu\'zniak \& Ruderman 1998; Wheeler et al. 2002; Akiyama et al. 2003; Blackman et al. 2006). In these scenarios differential rotation twists an initially weak poloidal field, thereby generating and amplifying a toroidal field. When the toroidal component reaches a critical value, it drives through the stratified layers of the collapsing core, carrying trapped material with it. The hoop stresses associated with such a field also serve to collimate the flow. Recently, Matt, Frank, \& Blackman (2006) examined numerically a simplified version of this idea, which was originally suggested for PPNs in Blackman et al. (2001b). In these studies the authors began with a gravitating core, threaded by an initially poloidal field, set rotating at 10\% of the escape speed within an envelope of ionized gas. As the simulation progressed, the resulting toroidal field was sufficiently strong to drive a complete and rapid expulsion of the gaseous envelope. Since the initial conditions assumed in Matt, Frank, \& Blackman (2006) are applicable to either a young PPN or a collapsing protopulsar, it is reasonable to ask whether such transient events are occurring in the early stages of the formation of PNs, and whether such events can serve as well as steady-state jets in accounting for the complex morphologies observed in such systems. We note that these classes of model are sometimes referred to as ``magnetic towers'' or ``springs'' because it is the gradient of the toroidal field pressure which drives the outflow.
Again we note that the magnetic fields needed for our scenario can be delivered by binary interactions, as has been demonstrated by Nordhaus \& Blackman (2006) and Nordhaus, Blackman, \& Frank (2007). There is growing evidence to suggest that magneto-centrifugal launch models are appropriate for PNs and, more importantly, PPNs (Vlemmings et al. 2006). Taken together with the evidence that many PPNs have short acceleration time scales, for which $\tau_{\rm acc} < 0.1 \tau_{\rm dyn}$, this suggests that some PPNs may be considered to have arisen from explosive, or at least impulsive, depositions of momentum and energy into the surrounding circumstellar environments. We argue that, together, these lines of evidence suggest that many PPNs may be shaped by shells which fragment into clumps rather than by multiple jets. There is a subset of PPNs and young PNs with multiple lobes of roughly similar size. These include CRL 2688, CRL 618, IRS 19024+004, IRS 09371+1212, M1-37, and He2-47. It is natural to try to interpret these structures initially as resulting from the action of jets. However, consideration of magneto-centrifugal launching models shows this to be unlikely. In all forms of the model, gravitational binding energy is tapped via rotational motions of the central source about some axis $\mathbf \omega$, and is converted into outflow kinetic energy using the magnetic fields as a ``drive belt''. The existence of a quasi-stable rotational axis is a requirement of the models in order to produce a continuous outflow. Multiple jets of equal length are difficult to imagine in such a scenario, as the jets would then each require their own rotational engine with separate alignments. Even so-called magnetic tower models, which drive the jet by winding up an initially weak poloidal field, require a net spin axis such that $B_\phi \sim 2 \pi n_\phi B_p$, where $n_\phi$ is the number of turns about $\mathbf \omega$. The production of multiple bow shocks from clumps or bullets driven by a transient MCL process is not as difficult to envision. In Matt, Frank, \& Blackman (2006), it was shown that the static envelope or atmosphere of a star could be entirely driven off of a rotating magnetized core. These models relied on the magnetic tower ``spring'' mechanism, with the envelope becoming compressed into a thin shell which rides at the front of the expanding magnetic tower. Such a thin accelerating shell would be subject to a variety of instabilities, including the Rayleigh-Taylor, Thin-Shell (Vishniac 1993), and Non-linear Thin Shell (Vishniac 1994) modes, all of which would be modified by the presence of an ordered magnetic field, which would impose a long coherence length onto the resulting flow. The precise details of such fragmentation in this situation have yet to be calculated and stand as an open problem. Given the impulsive acceleration of a dense, radiatively cooling shell into a lower density environment, it is likely that the shell would fragment into a number of high Mach number clumps directed along the poles (and perhaps the equator; see Matt, Frank, \& Blackman 2006 and CRL 6888). A potential challenge to this model would be the creation of fragmentation modes that can, in some cases, produce roughly equivalent clumps in terms of propagation direction on either side of the source, as is seen in some objects. Given that caveat, however, the ability of the explosive magnetic tower models already explored in the literature to drive unstable shells makes them an attractive means of producing high Mach number clumps.
As we have shown, these clumps, propelled into the surrounding media, then drive bow shocks which do at least as good a job as, if not better than, jets in recovering the gross morphological and kinematic observations. Similar conclusions have recently been reached by Raga et al. (2007). In summary, the results presented here add weight to an emerging paradigm in which transient (explosive) MCL processes act as the driver for PPN evolution in some cases. The fact that such magnetic launch mechanisms are already favored by some theorists to explain supernovae and gamma-ray bursts (Piran 2005) makes all the more compelling the notion that lower-energy analogues of the processes believed to be occurring during the penultimate stages of massive stars' evolution are also occurring in low- and intermediate-mass stars. \acknowledgements The authors thank Orsola De Marco and Pat Huggins for providing insights in a number of ways. This work was supported by Jet Propulsion Laboratory Spitzer Space Telescope theory grant 051080-001, NSF grant AST-0507519, Hubble Space Telescope theory grant 11251, and the Laboratory for Laser Energetics.
\section{Introduction} \label{sec:intro} The first detection of a gravitational wave signal, GW150914, by the LIGO and Virgo detectors incited renewed interest in the evolution of binary systems including degenerate stars and black holes (BH) \citep{Abbott2016}. It showed that stellar-mass black holes (StMBH) with masses between $2.5 M_\odot$ and several tens of solar masses should be present in binary systems in significant numbers, unless we were lucky to observe an exceedingly rare and fortuitous event. The detectable burst lasted for $\sim$0.2 s, but it probably took billions of years for the binary system to go through the stages of evolution leading to this event, releasing approximately three solar masses of energy in the form of gravitational radiation. It is now up to observational astrophysics to confirm this scenario by finding StMBH in Galactic binaries. Black holes in tight binaries with regular star companions are observable as powerful and variable X-ray sources. There are a few hundred known objects of this type in the Milky Way, but only two dozen have been dynamically confirmed as StMBH \citep{cas}. The observable phenomena are caused not by the StMBH itself but by the high-energy processes in the accreted material. The orbital periods range from less than 3 hours to a month, but most are shorter than 1 day. Most of these tight X-ray binaries include a dwarf donor companion, but a few systems with giant companions are also known \citep{li}. The estimated masses of the BH companions are greater than $\sim2.7$ $M_\odot$. Such tight systems must be the result of a long-term ($>10^9$ yr) tidal evolution or of rare dynamical events. However, it is reasonable to expect that the majority of Galactic StMBH reside in wider pairs with normal stars (separation $<1$ au for main sequence companions and a few astronomical units for red giants), where there may be no or very few observable effects. There are three main possible ways to detect such binaries. \begin{itemize} \item Precision astrometry can reveal the reflex orbital motion of the stellar companion around the barycenter of the system. Based on the theoretical expectations of the rate of failed supernovae \citep[e.g.][]{Woosley1986}, \citet{gou} predicted that $\sim$30 binaries containing StMBH remnants are present in the Hipparcos catalog, but none has been found, owing to the limited sensitivity. Some candidate binaries have been reprocessed \citep{Goldin2006,Goldin2007}, but the large uncertainty of the orbital parameters precluded definite detection of StMBHs. The renewed interest in this direction is focused now on the {\it Gaia} mission, which is expected to discover StMBH in scores \citep{Mashian2017,Kinugawa2018}. \item Radial velocity (RV) variations can reveal large-amplitude orbital motion caused by an StMBH. Some systems in the 9th Catalogue \citep{SB9} have high values of the mass function, indicating a possible StMBH companion, but they are all of lower grades of reliability, with the exception of the well-studied X-ray binary HIP 18350 = X Per. More recent discoveries include candidates, such as the SB1 star AS 386, with $P = 131$ days, $K_1$ of 52 km~s$^{-1}$, and a mass function of 1.9 $M_\odot$ \citep{Khohlov2018}. A candidate non-interacting system with a RG component was reported by \citet{tho}, and possibly more will be identified in the extensive APOGEE survey \citep{bad}. \item Precision photometry of eclipsing binaries reveals the presence of massive companions via the eclipse time variation effect.
The {\it Kepler} main mission provided sufficient data for the characterization of 222 triple systems \citep{Borkovits2016}. A few of those have high mass functions, suggesting a massive but invisible tertiary companion. However, the accuracy of this method is compromised by the large uncertainty of the period for long-term effects and the possible interference from persistent, differentially rotating photospheric spots. \end{itemize} \begin{deluxetable*}{cc c c c l } \tablecaption{List of candidates \label{tab:list}} \tablewidth{0pt} \tablehead{ \colhead{TYC} & \colhead{HD} & \colhead{$V$} & \colhead{Spectral} & \colhead{$\overline{\omega}$\tablenotemark{b}} & \colhead{Note} \\ & & \colhead{(mag)} & \colhead{type\tablenotemark{a}} & \colhead{(mas)} & } \startdata 7381-433-1 & 318347 & 10.02 & G0 & 0.729(0.052) & Emission star \\ 7390-1610-1 & 324668 & 9.71 & K0 & 1.009(0.042) & Constant RV \\ 9299-1080-1 & \ldots & 9.74 & ? & 1.232(0.025) & SB1, 81 day \\ 6948-350-1 & 206092 & 9.76 & G9III & 2.592(0.046) & SB2, 4.37 days \enddata \tablenotetext{a}{As given in Simbad} \tablenotetext{b}{Parallaxes and errors are from the {\it Gaia} DR2 \citep{Gaia}.} \end{deluxetable*} In this work, we are exploiting the second method, i.e., the RV measurements of the reflex orbital motion. Our targets are selected from the extensive RV survey of southern sky red giants that had been suggested to serve as reference stars for the {\it Space Interferometry Mission} (SIM) \citep{Makarov2015}. The goal of this survey was to vet spectroscopically single stars, but a large fraction of binaries was discovered. Only three to four individual observations were typically made of each star in a campaign lasting 753 days; the sample size is 1134. We selected from the above survey four stars with very large RV variation for further monitoring, with the aim to confirm their large amplitudes and determine the orbits. A large minimum mass of the secondary component derived from the spectroscopic orbit would provide a strong indication that the companion could be a StMBH and the object is a red giant (RG)+BH binary. The periods of such binaries are expected to be longer than $\sim$100 days owing to the large radii of giants. Binaries with shorter orbital periods should go through the common envelope stage and end up as merged stars (prior to the core collapse) or as tight low-mass X-ray binaries \citep{iva}. At longer periods than $\sim$100 days, a giant star of 2 $M_\odot$ orbited by a 5 $M_\odot$ BH would have an RV amplitude of $K_1 = 60$ km~s$^{-1}$ if the orbit is seen edge-on. At longer periods, $P$, the amplitude decreases as $P^{-1/3}$. A 2 yr SIM grid survey could reveal RG+BH binaries with periods up to $\sim$5 yr. \begin{figure} \plotone{fig1.eps} \caption{Location of the four candidates on the $M_V, V-K$ color-magnitude diagram. The lines are isochrones for solar metallicity and ages of 1 and 4 Gyr from \citet{Dotter2008}. De-reddening corrections are not applied. \label{fig:CMD} } \end{figure} None of our targets turned out to contain spectroscopically detectable massive companions. Nevertheless, the claimed large RV variation had to be verified and explained, revealing some intrinsically interesting and rare stars. The targets and our observing method are briefly introduced in Section~\ref{sec:obs}. The following Section~\ref{sec:obj} presents our results regarding each star. The general discussion and conclusions are given in Section~\ref{sec:disc}. 
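For reference, the expected reflex amplitude quoted above follows from the standard spectroscopic-binary relation $K_1 = (2\pi G/P)^{1/3}\, M_2 \sin i \,(M_1+M_2)^{-2/3}(1-e^2)^{-1/2}$. The short Python sketch below (an illustrative calculation only, not part of our data processing) reproduces the $\sim$60 km~s$^{-1}$ estimate for a 2 $M_\odot$ giant with a 5 $M_\odot$ companion on an edge-on 100 day orbit, and the $P^{-1/3}$ decline at longer periods:
\begin{verbatim}
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
DAY = 86400.0        # seconds per day

def rv_semi_amplitude(m1, m2, period_days, incl_deg=90.0, ecc=0.0):
    """RV semi-amplitude K1 [km/s] of a primary of mass m1 [Msun]
    induced by a companion of mass m2 [Msun] with the given period."""
    p = period_days * DAY
    k1 = ((2.0 * np.pi * G / p) ** (1.0 / 3.0)
          * m2 * M_SUN * np.sin(np.radians(incl_deg))
          / ((m1 + m2) * M_SUN) ** (2.0 / 3.0)
          / np.sqrt(1.0 - ecc ** 2))
    return k1 / 1.0e3

print(rv_semi_amplitude(2.0, 5.0, 100.0))    # roughly 60 km/s, edge-on
print(rv_semi_amplitude(2.0, 5.0, 1000.0))   # smaller by a factor 10^(1/3)
\end{verbatim}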
\section{Observational data} \label{sec:obs} \subsection{Targets} Four stars with large RV variations were selected from the SIM grid survey. Table~\ref{tab:list} provides basic data on these stars: their common identifiers (they allow for the retrieval of other information from Simbad), visual magnitudes, spectral types, parallaxes, and short notes. Figure~\ref{fig:CMD} shows the location of these stars on the $(M_V, V-K)$ color-magnitude diagram (CMD), computed using {\it Gaia} DR2 parallaxes \citep{Gaia} and the $K$ magnitudes given by Simbad. No de-reddening corrections are applied. The stars are elevated above the main sequence, confirming the giant status. \subsection{CORALIE RVs} The RV survey of SIM grid stars was conducted by D.~Queloz and D.~S\'egransan using the CORALIE echelle spectrometer at the 1.2 m Euler telescope in La Silla. The RVs were determined by cross-correlating the reduced spectra with the binary mask \citep{CORALIE}. The cross-correlation function (CCF) contains a dip produced by all of the absorption lines included in the mask. Its position defines the RV, while the width of the dip depends on the width of the stellar lines (hence on the projected rotation velocity). The analysis by \citet{Makarov2015} shows the high overall quality of these RVs: the mean absolute error is 34 m~s$^{-1}$, and the intrinsic RV jitter caused by the atmospheres of red giants is of the same order. Observations reported here revealed that the large RV variability of our candidates was, mostly, caused by occasional erroneous measurements in the CORALIE data. As noted by D.~S\'egransan (2018, private communication), the CCF may not contain a valid dip for several reasons: very noisy spectrum (e.g. taken through the clouds), absence of absorption lines matching the mask, e.g. a star of early spectral type, or a large RV that falls outside the normally computed CCF window. In such cases, the processing software measures a wrong RV using the local CCF minimum. Outlying measurements could be identified and rejected by the low dip contrast. Problematic detections are further discussed in Section~\ref{sec:obj}. To confirm the overall high quality and variability of the SIM RV survey, we compared the data with the mean RVs from the Gaia Data Release 2 (DR2). We cross-matched 1133 stars in DR2, but only 1084 have RV measurements there. Selecting only stars with a probability of binarity (given in SIM RV) below 0.9, the resulting sample counts 697 stars. Fig. \ref{dr2.fig} shows the histogram of the RV unit weight error, i.e., the observed difference of RV(SIM) and RV(DR2) divided by the quadratic sum of their errors provided in both catalogs. The theoretically expected Normal$[0,1]$ PDF is shown for reference. In order to make the center of the empirical distribution coincide with 0, we added a common zero-point shift of $0.27$ km~s$^{-1}$\, to all individual differences. The presence of a systematic bias in DR2 measurements was discussed by \citet{kat}. Fig. \ref{dr2.fig} confirms that the bulk of RV measurements of constant stars are as precise as their standard errors suggest in both catalogs. \begin{figure} \plotone{SIMRV_DR2RV.eps} \caption{Histogram of normalized RV differences between the SIM RV survey and Gaia DR2 catalog for 697 common stars with binarity probability less than 0.9. The thick line shows the Normal$[0,1]$ distribution for comparison. A common zero-point shift of $+0.27$ km~s$^{-1}$\, was applied to all RV(SIM) $-$ RV(Gaia) differences. 
The core of the empirical distribution is narrower than the expected distribution, indicating that the standard errors of mean RV may be slightly overestimated. \label{dr2.fig} } \end{figure} \subsection{CHIRON observations} The observations reported here were conducted at the 1.5 m telescope located at Cerro Tololo (Chile) and operated by the SMARTS consortium. Ten hours of observing time were allocated through the National Optical Astronomy Observatory (NOAO). Spectra were taken by the telescope operator in the service mode. The optical echelle spectrometer CHIRON \citep{CHIRON} was used in the fiber mode with a spectral resolution of 30000. On each visit, a single 5-minute exposure of the star was taken, accompanied by the spectrum of the comparison lamp for wavelength calibration. The data were reduced by the pipeline written in IDL. The RVs are derived from the reduced spectra by cross-correlation with a binary mask based on the solar spectrum, similarly to the CORALIE RVs. We used 39 echelle orders in the spectral range from 4500 to 6500\AA, which is relatively less contaminated by telluric lines. More details are provided by \citet{Tok2016}. The RVs delivered by this procedure should be on the absolute scale if the wavelength calibration is good. A comparison of CHIRON RVs with several RV standards revealed a small offset of $+0.16$ km~s$^{-1}$ \citep{Tok2018}; in the following this offset is neglected. The mean RVs of the star with constant RV, HD 324668, measured by CORALIE and CHIRON, differ only by 43 m~s$^{-1}$. Table~\ref{tab:orb} gives the elements of two spectroscopic orbits derived from the CHIRON RVs. The notations are standard, as is the method of orbit calculation by weighted least squares. Individual RVs are given in Table~\ref{tab:rv}, published in full electronically. \begin{deluxetable*}{l ccc c ccc c c } \tablecaption{Orbital elements \label{tab:orb}} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{$P$} & \colhead{$T_0$} & \colhead{$e$} & \colhead{$\omega$} & \colhead{$K_1$ } & \colhead{ $K_2$} & \colhead{ $\gamma$} & \colhead{$N$} & \colhead{rms} \\ & \colhead{(day)} & \colhead{-2400000} & & \colhead{(deg)} & \colhead{km~s$^{-1}$\,} & \colhead{km~s$^{-1}$\,} & \colhead{km~s$^{-1}$\,} & & \colhead{km~s$^{-1}$\,} } \startdata TYC 9299-1080-1 & 80.99 & 58394.76 & 0 & 0 & 18.41 & \ldots & 317.71 & 13 & 0.37 \\ & $\pm$0.02 & $\pm$0.08 & fixed & fixed & $\pm$0.13 & \ldots & $\pm$0.08 & \ldots & \ldots \\ HD 206092 & 4.37545& 58271.347 & 0 & 0 & 43.88 & 111.69 & 27.35 & 17 & 1.96 \\ & $\pm$0.00008 & $\pm$0.008 & fixed & fixed & $\pm$0.87 & $\pm$1.05 & $\pm$0.50 & \ldots & 1.55 \enddata \end{deluxetable*} \begin{deluxetable}{l ccc} \tablecaption{Radial velocities \label{tab:rv}} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{JD} & \colhead{RV} & \colhead{Comp.} \\ & \colhead{$-2400000$} & \colhead{km~s$^{-1}$\,} & } \startdata HD 324668 & 58260.8470 &-32.651 & \\ HD 324668 & 58270.7916 &-32.624 & \\ HD 324668 & 58341.5915 &-32.590 & \\ HD 206092 & 58260.912 & -6.810 & a \\ HD 206092 & 58260.912 & 112.381 & b \\ HD 206092 & 58271.899 & 59.516 & a \\ HD 206092 & 58271.899 & -50.498 & b \enddata \end{deluxetable} \subsection{Speckle interferometry} All targets were observed on 2018.4 in the $I$ band using the speckle camera at the 4.1 m SOAR telescope. The angular resolution (minimum detectable separation) was 50\,mas, and the dynamic range (maximum magnitude difference) is about 4 mag at 0\farcs15 separation. 
The instrument and observing technique are described in \citet{SOAR}. No companions were detected. \section{Notes on individual objects} \label{sec:obj} \subsection{HD 318347} Hydrogen emission in the spectrum of this star was noted long ago. Simbad gives the spectral type G0, matching the red color $V-K = 2.96$ mag. However, the star is featured in the catalog of Galactic OB stars by \citet{Reed2003}. The four CHIRON spectra taken over 165 days (from JD 2458260 to JD 2458425) do not have absorption lines typical of late-type stars, apart from the strong sodium absorptions, apparently of interstellar origin. The H$\alpha$ line shows a strong and wide emission with a double peak (Fig.~\ref{fig:HD318347}). The strength of the emission and the contrast of the two peaks change on the time scale of a fortnight. The SIM grid catalog lists six RVs ranging from $-245$ to $323$ km~s$^{-1}$, all with small errors not exceeding 25 m~s$^{-1}$. Suspiciously, two RVs measured with CORALIE on the same night, JD 2453591, differ by 54 km~s$^{-1}$. These RVs were probably derived from the CCFs without valid dips, given the lack of absorption lines in the spectrum. Most likely, HD~318347 is a highly reddened early-type star located in the Galactic plane at a distance of 1370 pc. However, \citet{hou} find, based on LAMOST data, that most of the double-peaked H$\alpha$ emission line stars appear in binaries. A short-period cataclysmic variable cannot be excluded. It is possible that the unidentified {\it Fermi} Large Area Telescope (LAT) source 2FGL J$1746.5-3228$ \citep{nol} is associated with the {\it Swift} X-ray source J$174645.4-323746$ \citep{pag} and with HD~318347, which is located $5.8$ arcsec away, a little more than the estimated positional uncertainty of the former. The 10~\AA\ width of the H$\alpha$ emission implies gas motions at $\sim500$ km~s$^{-1}$, possibly associated with accretion onto the stellar surface. Circumstellar material can cause additional extinction as well as an infrared excess. \begin{figure} \plotone{fig2.eps} \caption{Two echelle orders in the spectra of HD~318347 containing the H$\alpha$ line (top) and the sodium D lines (bottom), normalized by the blaze function. Note the strong variability of the H$\alpha$ emission line strength in the spectra taken on four different MJD epochs. \label{fig:HD318347} } \end{figure} \subsection{HD 324668} This K0 giant is located in the Galactic plane at a distance of 1~kpc. The average of the three CHIRON RVs is $-32.622$ km~s$^{-1}$ with an rms scatter of 31 m~s$^{-1}$. They match perfectly the two RVs measured by CORALIE, with an average of $-32.665$ km~s$^{-1}$. However, the third CORALIE RV of $+276.3$ km~s$^{-1}$ is highly discrepant, earning this star the title of spectroscopic binary in Simbad. We can only guess whether this discrepancy was caused by pointing at another star in this crowded sky region or by some other problem. Obviously, this red giant has a constant RV. Incidentally, it confirms the excellent agreement between the RV zero points of CORALIE and CHIRON. \subsection{TYC 9299-1080-1} \begin{figure} \plotone{fig3.eps} \caption{The CCF (top) and RV curve (bottom) of TYC 9299-1080-1. The crosses mark two RVs from CORALIE and one from {\it Gaia}. \label{fig:TYC} } \end{figure} This star, also known as CD$-$72 1472, is located at a distance of 812~pc and has a Galactic latitude of $-25^\circ$. All three CORALIE RVs are mutually discordant: $+300.6$, $-273.9$, and $+552.6$ km~s$^{-1}$.
They were measured over a time span of 696 days. The first CHIRON RVs have shown a slow trend, inspiring hope that this star has a long period and a large RV amplitude. However, further observations have shown that the RV varies with a period of 81 days (Fig. \ref{fig:TYC}). Elements of the circular single-lined orbit derived from 13 CHIRON RVs are presented in Table~\ref{tab:orb}. The {\it Gaia} median RV of $311.0$ km~s$^{-1}$ roughly matches our orbit, assuming an epoch of 2015.5. The variability of the Gaia RV detections can be inferred from the elevated error of the median ($5.7$ km s$^{-1}$), which implies a single-measurement standard deviation of $\sim 19$ km s$^{-1}$. The first CORALIE RV fits the orbit crudely. D.~S\'egransan (2018, private communication) provided another RV measured by CORALIE on JD 2454699.52, which matches the orbit perfectly. We attribute the two discrepant CORALIE RVs to the large RV of this star, placing the true dip outside the nominal CCF window. This explains why this object has two wrong RVs. However, the small errors of these erroneous measurements reported by the CORALIE pipeline, 22 and 16 m~s$^{-1}$, are perplexing. For the mass of the primary star in the range from 1 to 2 $M_\odot$, the minimum mass of the companion is from 0.48 to 0.73 $M_\odot$. The companion could be a normal solar-type dwarf or a white dwarf. The large radius of the giant primary has caused tidal orbit circularization. Its unusual feature is the large center-of-mass RV of 317.7 km~s$^{-1}$. The parallax and proper motion correspond to the tangential velocity of 217 km~s$^{-1}$, hence the star moves with a total velocity of 384 km~s$^{-1}$ relative to the Sun. Considering the Galactic rotation velocity at the solar radius, 236$\pm$3 km~s$^{-1}$\, \citep{Kawata2019}, and the solar peculiar velocity $(U_\odot,V_\odot,W_\odot) = (6.0,10.6,6.5)$ km~s$^{-1}$\, \citep{Bobylev2014}, we derive the galactocentric velocity of TYC 9299-1080-1, $(U,V,W)_{\rm gal} = (302.9, 10.0, 66.2)$ km~s$^{-1}$\,. This binary star certainly belongs to the Galactic halo. It moves almost straight toward the galactic center and will pass in its vicinity. However, the motion is not fast enough for a runaway or extragalactic star. Our numerical integration of the Galactic orbit indicates that it moves on a highly extended orbit between $\sim100$ pc and 20 kpc from the center, which will be reached in less than 20 Myr. \subsection{HD 206092} \begin{figure} \plotone{fig4.eps} \caption{The CCF on JD 2458356 (top) and the RV curve (bottom) of HD 206092. The squares and full line denote the primary component Aa, the triangles and dashed line correspond to the secondary Ab, and the crosses and plus signs depict the CORALIE RVs. \label{fig:HD206092} } \end{figure} Located at a distance of 386 pc, this object is the closest of our four candidates. Its association with a ROSAT X-ray source was noted, while \citet{Kiraga2012} found photometric variability with a period of 2.1876 days, presumably caused by rotational modulation. The first CHIRON spectra revealed wide double lines in rapid motion (Fig.~\ref{fig:HD206092}). The system's location in the CMD (Fig.~\ref{fig:CMD}) indicates that the primary is an evolved star, probably a red giant. Thus, this system belongs to the rare type of RS CVn-type binaries characterized by high levels of chromospheric and X-ray activity. Further monitoring established the orbital period of 4.38 days. 
The orbit (Table~\ref{tab:orb}) is circular, with amplitudes of 44 and 112 km~s$^{-1}$. The residuals of the CHIRON RVs to the orbit are relatively large because the CCF dips are wide and have somewhat irregular shapes, presumably caused by starspots. The photometric period found by \citet{Kiraga2012} corresponds to half of the orbital cycle, indicating ellipsoidal variability caused by tidally distorted stars in a close binary. The Balmer lines H$\alpha$ and H$\beta$ in the spectra of HD 206092 are shallow, being filled by emission; the sodium D lines also have emission in their cores produced in the chromosphere. In this case, the large RV variability detected by CORALIE is explained by the orbital motion. We compared these six RVs with our orbit and found that one is highly discrepant, three correspond to the primary component Aa, and two match the secondary Ab. These RVs were used with low weights to improve the accuracy of the period. The RV errors reported by CORALIE for this star are unrealistically small. The orbit corresponds to the minimum masses $M_1 \sin^3 i$ and $M_2 \sin^3 i$ of 1.22 and 0.48 $M_\odot$, or a mass ratio of 0.39. Yet, the areas of the CCF dips are comparable, indicating a light ratio of 0.82 (see Fig.~\ref{fig:HD206092}). Assuming a mass sum of 2 $M_\odot$, the semi-major axis of the relative orbit is 14.2 $R_\odot$. The wide CCFs suggest that the components almost touch each other; the large luminosity and the red color correspond to large stellar radii. The roughly estimated $v\sin i$ (projected rotational velocity) is 40 km~s$^{-1}$\,; hence, the radii of both components are $R>3.5 R_\odot$, assuming synchronous rotation. The unusually bright (for its mass ratio) secondary component Ab must be transferring mass to the primary component Aa. We tried to find analogs of HD 206092 by selecting binaries with the following parameters: period $<10$ days, $V-K>2$ mag, and $M_V < 2.5$ mag. The only analog we found is RS~CVn (HD 114519), the prototype of its class, which is defined as chromospherically active, short-period binaries with primaries evolved off the main sequence \citep{fek}. The corresponding parameters for RS CVn are $P=4.80$ days, $M_V$ = 2.5 mag, $V-K=2.16$ mag, and spectral type K0IV. Not surprisingly, the object is a powerful source of coronal X-ray emission listed in the ROSAT catalog of bright sources. Looking at evolved binary systems with somewhat longer periods, the large mass ratio and similar line strengths of HD 206092 suggest that it is similar to chromospherically active semi-detached eclipsing binaries such as AR Mon and RZ Cnc \citep{pop}. Although AR Mon and RZ Cnc have significantly longer periods (20 days) than HD 206092, they appear to be in a similar mass transfer evolutionary stage. Another similar large mass ratio system, which has a shorter period of 10.7 days, is RV Lib \citep{imb}. Using a ROSAT-determined count rate of CR$=0.176(0.023)$ cts s$^{-1}$ and a hardness ratio of HR1$=0.44(0.13)$, we derive an X-ray luminosity of $L_X=3.34(0.066)\times 10^{31}$ erg s$^{-1}$, where the standard error is estimated from the formal uncertainties of the CR, HR1, and Gaia parallax. RS CVn-type binaries are the most luminous (and, on average, the hardest) sources of coronal X-ray radiation \citep{mak}, but HD 206092 is almost twice as luminous as the brightest X-ray star within 50 pc of the Sun, which is II Peg, also an RS CVn-type binary. Another useful comparison is the prototype star RS CVn (F6IV+G8IV) with $L_X=1.67(0.01)\times10^{31}$ erg s$^{-1}$.
This puts HD 206092 into the category of outstandingly active and X-ray luminous field stars. \section{Discussion and conclusions} \label{sec:disc} \subsection{Expected number of RG+BH binaries} In the past, considerable effort has been spent to predict the number of binaries containing compact objects, focusing mostly on potential sources of gravitational waves generated by merging binaries with two compact components \citep[e.g.][]{B2002}. These works use binary population synthesis. The results depend strongly on the initial assumptions such as binary statistics and are also affected by the uncertainties in the binary and stellar evolution. The same population synthesis approach was used more recently to estimate the number of star-BH binaries detectable by {\it Gaia} through astrometric effects of an unseen companion on the visible star. Predictions made by different groups differ by two orders of magnitude, depending on the assumptions and models used \citep{Kinugawa2018}. All massive binaries with orbital separations less than 20 au are affected by the mass transfer and mass loss that changes their orbits \citep{Moe2017}. Large velocities acquired by the remnants during supernova explosions (kicks), on the order of hundreds of km~s$^{-1}$, likely destroy all binaries except the tightest ones. Hence, it is possible that RG+BH binaries with periods longer than $\sim$100 days do not exist\footnote{Several {\it low-mass} X-ray binaries with giant components have been identified, however. For example, symbiotic X-ray binaries are a rare class of low-mass, hard X-ray binaries that consist of a neutron star accreting mass from an M giant. Two systems so far have been analyzed, V2116 Oph \citep{hin6} and V934 Her \citep{hin18} and both have long periods of 3.2 and 12.0 years, respectively. Thus, at least for supernova remnant binaries that result in neutron stars, some long-period systems do survive.}. Neglecting both orbital evolution and kicks, we estimate the fraction of RG+BH progenitors, i.e. the upper limit of the fraction of RG+BH binaries among red giants. The progenitors of BH components have masses of $M_{0} > 20 M_\odot$ \citep{B2002}, while the giants have typical masses between $M_1 = 2 M_\odot$ and $M_2 = 3 M_\odot$. Using the Salpeter mass function, $f(M) \propto M^{-\alpha}$ with $\alpha = 2.35$, we evaluate the fraction of stars more massive than $M_0$ relative to stars with masses between $M_1$ and $M_2$, $f_* = 0.106$. Now, some fraction of stars with $M_{0} > 20 M_\odot$ are binaries with secondary components in the same mass range as our giants, between $M_1$ and $M_2$, and with periods from 100 to $10^3$ days. These binaries are potential progenitors of RG+BH objects when their primary components become BHs and the secondaries turn into giants. The fraction of such progenitors relative to field stars in the same mass range (which also become giants) can be estimated using the recent analysis of binary statistics by \citet{Moe2017}. About $f_B=0.2$ of massive stars have binary companions of all masses in the above period range. At those periods, the distribution of the mass ratio $q$ does not follow the Salpeter function, although it still grows as $q^{-1.5}$ at $q>0.3$. Therefore, the fraction of massive stars with suitable parameters is $f_B f_q$, where $f_q \approx 0.2$ is the fraction of companions in the selected mass range between $M_1$ and $M_2$. Summarizing for each primary star in this mass range we get $f_* f_B f_q = 0.004$ potential progenitor binaries. 
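For clarity, the fraction $f_*$ quoted above follows directly from integrating the Salpeter mass function over the two mass ranges, \[ f_* = \frac{\int_{M_0}^{\infty} M^{-\alpha}\,{\rm d}M}{\int_{M_1}^{M_2} M^{-\alpha}\,{\rm d}M} = \frac{M_0^{\,1-\alpha}}{M_1^{\,1-\alpha}-M_2^{\,1-\alpha}} = \frac{20^{-1.35}}{2^{-1.35}-3^{-1.35}} \simeq 0.106 , \] so that $f_* f_B f_q \simeq 0.106 \times 0.2 \times 0.2 \simeq 0.004$.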
A sample of 1000 giants is not expected to contain more than four progenitors of RG+BH binaries with intermediate periods. Given the evolution of binary orbits and the destructive effects of the supernova kicks, the expected number of RG+BH binaries should be much less than the number of their progenitors. Hence, the non-detection of such objects in the SIM grid survey is natural. \subsection{Related work} Recently, \citet{Murphy2018} introduced a new observing technique by deriving quasi-spectroscopic orbits from the timing of stellar pulsations. They presented a sample of 314 such orbits for primary stars of A/F spectral types and periods between 100 and 1500 days, progenitors of the red giants studied here. The size of the parent sample was 2224, twice as big as the SIM grid sample. They estimated that a fifth of these binaries have degenerate white dwarf secondaries, while their primaries are products of mass transfer (blue stragglers). None of the binaries had a large mass function indicative of the BH secondary. This non-detection of BH secondaries agrees with our result. Using population synthesis, \citet{Kinugawa2018} estimated the number of binaries with BH components detectable astrometrically by {\it Gaia}. Their assumptions regarding binary statistics differ from the latest analysis by \citet{Moe2017} in several important respects. The predicted fraction of BH binaries with periods between 50 days and 5 yr (similar to the period range explored here) in a simulated sample of $10^5$ progenitor binaries ranges from 0.013 to 0.028, depending on the metallicity (more BH binaries in metal-poor population). The majority of these binaries contain stars less massive than 2 $M_\odot$. We stress that these estimates are highly uncertain. \subsection{Conclusions} The large RV variability of some red giants detected by the CORALIE survey of SIM grid stars was intriguing and called for further investigation. We found that this detection is spurious, resulting from flukes in data reductions. Theoretical estimates, still highly uncertain, indicate that the fraction of red giants containing an StMBH remnant cannot exceed $10^{-2}$, and likely is orders of magnitude smaller. Yet, observational limits on the existence of such RG+BH binaries should be placed independently of the theory. Our study contributes to establishing such limits by non-detection of RG+BH candidates in a sample of 1000 stars. Among the four candidates studied here, two can be of independent interest: the red-giant binary TYC 9299-1080-1 with a large spatial velocity of 384 km~s$^{-1}$ and the semi-detached binary HD~206092 of the rare RS CVn type. \section*{Acknowledgments} We thank the telescope operator R.~Hinohosa for taking the data. D.~S\'egransan has kindly helped us to understand the origin of discrepant CORALIE RVs and provided additional unpublished RV of TYC 9299-1080-1.
\section{Introduction} Recent progress in studying inflation in string theory and supergravity has led to a notable revival of interest in cosmic strings \cite{Kibblerev,PolchIntro,DavKib}. It is now believed that cosmic string formation is generic in both supersymmetric grand unified theories \cite{Jeannerot} and brane inflation \cite{SarTye,DvalVil,PolchStab}, where the inflaton potential is of the hybrid type \cite{hybrid} and the inflationary phase ends with a phase transition, leaving behind a network of topological (or semilocal \cite{UrrAchDav}) cosmic strings. Although such string networks cannot be solely responsible for the formation of the observed structure in the universe \cite{Battye,Battye1}, they can still act as subdominant contributors. Indeed, a recent study~\cite{BHindKU} finds that a $\Lambda$CDM model with a flat spectrum of scalar perturbations and a network of (field theory simulated) cosmic strings contributing to the Cosmic Microwave Background (CMB) anisotropy at the $10\%$ level, provides an excellent fit to the observational data (see also Refs.~\cite{Battye,Battye1, Battye2,Battye3} for related work). On the other hand, significantly higher contribution from strings is inconsistent with the CMB, so, thinking on the positive side, one can use this fact to constrain the parameter space of inflationary models with network production. Perhaps more optimistically, one can hope to identify characteristic observational signatures of the string networks appearing in different models, and then try to look for them observationally, in an attempt to point out a direction towards the correct class of models. From this point of view, brane inflation is of particular interest, being one of the most well-developed inflationary models in string theory, and also producing cosmic strings with distinct properties. Indeed, string networks in this setup evolve in a higher-dimensional space, and this can have important effects on the probability of string intercommutation \cite{JoStoTye2,PolchProb,Jackson} and on string velocities \cite{EDVOS}, resulting in a significant enhancement of network string densities \cite{intprob,Sak}. Further, the networks produced in these scenarios are of the $(p,q)$-type \cite{DvalVil,PolchStab}, and strings can interact in more complicated ways than ordinary Abelian cosmic strings, forming $Y$-shaped junctions. In the most well-developed models \cite{KKLMMT}, the strings are evolving in a warped spacetime with one or more `throat' regions, resulting from a combination of $D$-branes and fluxes present in the compactification. This warping gives rise to potentials for the string positions in the internal dimensions, so one expects that the strings get confined in a region around the bottom of the throat. However, this localisation process has not yet been studied in detail. The purpose of this paper is to study the dynamics and evolution of strings in the vicinity of such throat regions. What we find is that, although there are, indeed, potential terms pulling the strings towards the bottom of the throat, the classical evolution does not have enough (Hubble) friction to guarantee that they actually fall on it and stabilise there. Instead, depending on the impact parameter and velocity, a string can simply deflect, bounce and escape to infinity, enter a series of bounces, or form a bound orbit around the minimum. 
These possibilities can have important implications for the evolution of string networks, because the probability of string intercommutings is inversely proportional to the effective volume available to the string~\cite{PolchProb,Jackson}. Thus, if the strings are not confined at the bottom of the warping throat, this probability gets further suppressed leading to further enhancement in the network density. The structure of the paper is as follows. In section \ref{dynamics}, we consider the dynamics of strings evolving in a spacetime that is a warped product of a Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) universe with a static toroidal space. We write down the Nambu-Goto equations of motion in this background and identify a number of potential and friction terms, which tend to pull the strings towards highly warped regions. In section \ref{VOS}, we use these equations to develop a model for studying a simple string configuration (one in which the strings are straight in the internal dimensions) moving near a minimum of the warping potential (section \ref{VOS}). We point out the weakness of Hubble friction and, choosing a simple warping function, we solve the model for different initial conditions obtaining a sample of string trajectories, which include deflections, bounces and bound orbits. In section \ref{IIB} we try to make contact with more realistic IIB compactifications, by considering a slightly different setup, in which the metric is a warped product of Minkowski spacetime and an unspecified 6-dimensional Riemannian manifold. We perform a qualitative analysis in terms of one-dimensional motion in an effective potential, finding the same general types of orbits. We discuss the appearance of Hubble friction in this setup, in terms of cosmological expansion in the effective 4D theory, and comment on how the results of section \ref{VOS} can be understood in this picture also. We summarise our results and discuss their implications for string evolution in section \ref{discuss}. \section{\label{dynamics}String Dynamics in Warped Spacetime} We start by considering a cosmic string propagating in a warped $(D+1)$-dimensional FLRW spacetime with metric \begin{equation}\label{warped} ds^2=g_{\mu\nu}{\rm d}x^\mu {\rm d}x^\nu = h^{-1/2}({\bf l}) \left[N(t)^2 {\rm d}t^2-a(t)^2 {\rm d}{\bf x}^2\right]-h^{1/2} ({\bf l})b(t)^2 {\rm d}{\bf l}^2 , \end{equation} where the warp factor $h$ is a function of the internal coordinates ${\bf l}$. The motion of the string generates a two-dimensional timelike surface, the string worldsheet $x^\mu=x^\mu(\zeta^\alpha)$, parametrised by the worldsheet coordinates $\zeta^\alpha$, $\alpha=0,1$. The dynamics is given by the Nambu-Goto action \begin{equation}\label{nambu} S=-\mu \! \int \! \sqrt{-\gamma}\, d^2\zeta \, , \end{equation} where $\mu$ is the string tension and $\gamma$ the determinant of $\gamma_{\alpha\beta}=g_{\mu\nu}\partial_\alpha x^\mu \partial_\beta x^\nu$, the pullback of the background metric (\ref{warped}) on the worldsheet. The action (\ref{nambu}) enjoys 2D worldsheet diffeomorphism invariance, which can be used to fix the gauge by imposing two conditions on the worldsheet coordinates. For our discussion it will be convenient to work in the \emph{transverse temporal gauge}: \begin{equation}\label{gauge} \zeta^0=t, \quad \dot x^\mu x_\mu^{\prime}=0 \, , \end{equation} where a dot (resp. prime) denotes differentiation with respect to the timelike (resp. spacelike) worldsheet coordinate $\zeta^0$ (resp. $\zeta^1$). 
This gauge choice imposes that $\dot x$ is normal to the string, allowing an interpretation in terms of the physically relevant transverse string velocity, while identifying worldsheet and background times. The equations of motion derived from the action (\ref{nambu}) in this gauge are: \begin{equation}\label{eom_expand} \frac{\partial}{\partial t}\left(\frac{\dot x^{\mu}{x^{\prime}}^2} {\sqrt{-\gamma}}\right) + \frac{\partial}{\partial \zeta} \left( \frac{x^{\prime \mu}\dot x^2}{\sqrt{-\gamma}}\right) + \frac{1} {\sqrt{-\gamma}} \Gamma^{\mu}_{\nu\sigma}\left({x^{\prime}}^2 \dot x^{\nu} \dot x^{\sigma} + \dot x^2 x^{\prime \nu} x^{\prime \sigma}\right) = 0 , \end{equation} with $\mu,\nu,\sigma$ running from $0$ to $D$. In the following we shall use the notation: \begin{equation}\label{index_not} \begin{array}{llrcl} \mu,\nu=0,1,2,...,D& & & & \\ i,j=1,2,3 &,&{\bf x}&\equiv& x^i \\ \ell,m=4,5,...,D &,&{\bf l}&\equiv& x^\ell\,. \\ \end{array} \end{equation} The Christoffel symbols of the metric (\ref{warped}) are: \begin{equation}\label{christ} \begin{array}{lll} \Gamma^0_{00}=\frac{\dot N}{N} & \Gamma^i_{00}=0 & \Gamma^\ell_{00}=-\frac{1}{4}N^2\frac{h_{,\ell}}{b^2h^2} \\ \Gamma^0_{0i}=0 & \Gamma^i_{0j}=\frac{\dot a}{a}\delta^i_j & \Gamma^\ell_{0i}=0 \\ \Gamma^0_{0\ell}=-\frac{1}{4}\frac{h_{,\ell}}{h} & \Gamma^i_{0\ell}=0 & \Gamma^\ell_{0m}=\frac{\dot b}{b}\delta^\ell_m \\ \Gamma^0_{ij}=N^{-2} a \dot a \delta_{ij} & \Gamma^i_{jk}=0 & \Gamma^\ell_{ij}=\frac{1}{4}\frac{a^2 h_{,\ell}}{b^2h^2}\delta_{ij} \\ \Gamma^0_{\ell m}=hN^{-2}b\dot b\delta_{\ell m} & \Gamma^i_{\ell m}=0 & \Gamma^\ell_{mn}=\frac{1}{4h}(\delta_{\ell m} h_{,n} + \delta_{\ell n} h_{,m} - \delta_{mn} h_{,\ell}) \\ \Gamma^0_{i\ell}=0 & \Gamma^i_{j\ell}=-\frac{1}{4}\frac{h_{,\ell}}{h} \delta_{ij} & \Gamma^\ell_{im}=0 \,\,. \\ \end{array} \end{equation} We define a scalar $\epsilon$, the string energy per unit coordinate length (per unit tension), by: \begin{equation}\label{eps_warped} \epsilon=\frac{{-x^{\prime}}^2}{\sqrt{-\gamma}}={\left(\frac{h^{-1/2} a^2{{\bf x}^{\prime}}^2+h^{1/2}b^2{{\bf l}^{\prime}}^2}{h^{-1/2}N^2- h^{-1/2}a^2{\dot{\bf x}}^2-h^{1/2}b^2{\dot {\bf l}}^2} \right)}^{1/2} \end{equation} and note that due to the gauge choice $\gamma_{01}\equiv \dot x^\mu x_\mu^\prime=0$ we also have $\dot x^2/\sqrt{-\gamma}=\epsilon^{-1}$. 
With this notation, the $0$, $i$ and $\ell$ components of the equation of motion (\ref{eom_expand}) become: \begin{eqnarray} &&\dot\epsilon=-\epsilon \left\{\frac{\dot N}{N}+\frac{a\dot a}{N^2} \left[\dot{\bf x}^2-{\left(\frac{{\bf x}^{\prime}}{\epsilon}\right)}^2 \right]+h\frac{b\dot b}{N^2}\left[\dot{\bf l}^2-{\left(\frac{{\bf l}^ {\prime}}{\epsilon}\right)}^2\right]-\frac{{\bf l}\cdot\nabla h({\bf l})}{4h}\right\} \label{eom_eps_warp}\\ &&\ddot{\bf x}+\left\{\frac{2\dot a}{a}-N^{-2}\left\{ N\dot N+a\dot a \left[\dot{\bf x}^2 - {\left(\frac{{\bf x}^{\prime}}{\epsilon} \right)}^2\right]+hb\dot b \left[\dot{\bf l}^2-{\left(\frac{{\bf l}^ {\prime}}{\epsilon}\right)}^2 \right] \right\} \right\}\dot{\bf x} \nonumber \\ &&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+\frac{1}{4h}\left({\bf l}^\prime \cdot\nabla h({\bf l})\right)\epsilon^{-2}{\bf x}^\prime={\left( \frac{{\bf x}^{\prime}}{\epsilon}\right)}^{\prime}\epsilon^{-1} \label{eom_x_warp}\\ &&\ddot{\bf l}+\left\{ \frac{2\dot b}{b}-N^{-2}\left\{ N\dot N+a\dot a \left[\dot{\bf x}^2-{\left(\frac{{\bf x}^{\prime}}{\epsilon} \right)}^2\right]+hb\dot b\left[\dot{\bf l}^2-{\left(\frac{{\bf l}^ {\prime}}{\epsilon}\right)}^2\right]\right\}+3\frac{\dot{\bf l}\cdot \nabla h({\bf l})}{4h}\right\}\dot{\bf l}\nonumber \\ &&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; -\frac{N^2\nabla h({\bf l})}{4b^2h^2} +\frac{a^2\nabla h({\bf l})}{4b^2h^2}(\dot{\bf x}^2-\epsilon^{-2}{{\bf x}^\prime}^2)-\frac{\nabla h({\bf l})}{4h}(\dot{\bf l}^2-\epsilon^{-2} {{\bf l}^\prime}^2) \nonumber \\ &&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; -\frac{1}{2h}\left({\bf l}\cdot\nabla h({\bf l})\right)\epsilon^{-2}{\bf l}^\prime={\left(\frac{{\bf l}^ {\prime}}{\epsilon}\right)}^{\prime}\epsilon^{-1} \,. \label{eom_l_warp} \end{eqnarray} Comparing to the corresponding equations \cite{EDVOS} in the unwarped case $h({\bf l})=1$, one observes that the effect of warping is to introduce factors of $h$ (in the $\dot b$ terms) and new potential terms proportional to $\nabla h({\bf l})$. This is in agreement with the intuitive expectation that, since energy is minimised at highly warped regions, there should be forces driving the string towards those regions. However, the dynamics by which this localisation mechanism may operate has not been studied in detail. The primary purpose of this paper is to explore the effect these potential terms may have on string evolution. In the next section we will study this problem by considering the relevant macroscopic velocity equations for simple warping potentials. \section{\label{VOS}Effect of Warping on String Evolution: A Simple Model} In this section we shall study the effect of warping on the evolution of string networks, using a macroscopic velocity-dependent network model analogous to the models of Refs.~\cite{vos,vosk,EDVOS}. The picture we have in mind is a network of strings produced at the end of brane inflation \cite{DvalTye,DvalShafSolg,BMNQRZ,Garc-Bell, JoStoTye1,KKLMMT}. Assuming that reheating is efficient, the strings can be taken to evolve in a radiation-dominated universe, but there is also a compact manifold of internal dimensions, which can have significant impact on the evolution of the network \cite{JoStoTye2,EDVOS,PolchProb}. In explicit constructions \cite{KKLMMT}, the metric is a warped product, and the warping factor is a function of the internal coordinates, giving rise to `throats' of local minima in the internal manifold. 
In this section we will consider a simplified situation in which the extra dimensions are toroidally compactified, so the metric is of the form (\ref{warped}). This will allow us to write down explicit evolution equations for the strings and obtain numerical solutions given a choice of warping factor. The idea is to average the equations of motion (\ref{eom_eps_warp})-(\ref{eom_l_warp}) over a network of Nambu-Goto strings to obtain macroscopic evolution equations for the root-mean-squared (rms) velocities of string segments. Let us first consider the $\ddot {\bf x}$ equation (\ref{eom_x_warp}). This differs from the corresponding unwarped spacetime equation \cite{EDVOS} in two ways: first, there is a factor of $h$ multiplying $b\dot b(\dot{\bf l}^2-{{\bf l}^{\prime}}^2 \epsilon^{-2})$, and second, there is the new term: \begin{equation}\nonumber \frac{1}{4h}\left({\bf l}^\prime\cdot\nabla h({\bf l})\right) \epsilon^{-2}{\bf x}^\prime\,. \end{equation} For stabilised extra dimensions we have $\dot b=0$, so the first of the above terms is zero in both unwarped and warped backgrounds. Further, in order to obtain the macroscopic velocity equation, one has to dot with $\dot{\bf x}$ and average over the string network, so the second term yields a contribution proportional to $\langle {\bf x}^\prime\cdot\dot{\bf x}\rangle$, where the angled brackets denote `energy weighted averaging' obtained by integrating over the worldsheet with weighting function $\epsilon$, and normalising with respect to the total string energy. But, since the 3-vectors $\dot{\bf x}$ and ${\bf x}^\prime$ are uncorrelated, one expects $\langle{\bf x}^\prime\cdot\dot{\bf x}\rangle$ to randomly change sign with no large-scale correlations as one moves along the string, and so this term gives zero when averaged over large scales. This has been tested numerically in the case of a 3+1 FLRW model in Ref.~\cite{EDVOS}. Finally, there is an implicit dependence on the warp factor through the modified definition of $\epsilon$ in equation (\ref{eps_warped}), which, however, disappears when one moves to physical variables. Indeed, defining the rms (peculiar) string velocities: \begin{equation}\label{v_x_warp} v_x^2=\left\langle\left(\frac{h^{-1/4}a {\rm d}{\bf x}}{h^{-1/4}N {\rm d}t} \right)^2 \right\rangle = \left\langle\left(\frac{{\rm d} {\bf x}}{{\rm d} \tau}\right)^2 \right\rangle \equiv \langle\dot {\bf x}^2 \rangle \end{equation} and \begin{equation}\label{v_l_warp} v_{\ell}^2=\left\langle\left(\frac{h^{1/4}{\rm b d}{\bf l}}{h^{-1/4} N{\rm d}t}\right)^2\right \rangle = \left\langle\left(\frac{h^{1/2} b {\rm d} {\bf l}}{a {\rm d}\tau} \right)^2\right\rangle \equiv \langle h b^2 {\dot {\bf l} }^2 / a^2 \rangle \, , \end{equation} where `conformal' time $\tau$\footnote{In this section we use the notation $\dot{}\equiv\frac{\rm d}{{\rm d}\tau}$.} corresponds to the slicing $N\!=\!a$, the evolution equation for the 3-dimensional velocity $v_x$ in terms of the `proper' time ${\rm d}s=h^{-1/4}N {\rm d}t$ is identical to that of the unwarped case, namely: \begin{equation}\label{v_xdt} v_x \frac{{\rm d} v_x}{{\rm d}s}=\frac{k_x v_x}{R}(1-v^2)-\left( 2-w_\ell^2\right) \frac{1}{a}\frac{{\rm d}a}{{\rm d}s}(1-v^2) {v_x}^2 - \frac{1}{a} \frac{{\rm d}a}{{\rm d}s} {v_\ell}^2{v_x}^2 \, . 
\end{equation} Here, $v^2=v_x^2+v_\ell^2$ and $w_\ell$ is a string orientation parameter \begin{equation}\label{wl} w_\ell=\left\langle \frac{ h b^2 {{\bf l}^{\prime}}^2 } {a^2 {{\bf x}^{\prime}}^2 + h b^2 {{\bf l}^{\prime}}^2 } \right \rangle^{1/2} , \end{equation} quantifying the degree to which the strings lie in the extra dimensions ${\bf l}$. The 3-momentum parameter $k_x$ is defined by \begin{equation}\label{k_x} \frac{k_x v_x (1-v^2)}{R}=\left\langle \dot {\bf x}\cdot {\bf u} \left( 1-\dot{\bf x}^2-\frac{h b^2 \dot{\bf l}^2 }{a^2}\right) \right\rangle\,, \end{equation} where ${\bf u}$ is the physical curvature vector and $R$ the average radius of curvature of the string network (see Ref.~\cite{EDVOS} for details). We now consider the $\ddot{\bf l}$ equation (\ref{eom_l_warp}). This contains several extra terms, some of which survive after averaging over the worldsheet. In particular there are terms proportional to $\nabla h({\bf l})$, which can be thought of as a force driving the strings towards the minima of the warping potential. Indeed, for a static string configuration, the worldsheet action reduces to a potential\footnote{There is also a dependence on the dilaton, which is ignored here (see later discussion).} $V({\bf l})=\mu h^{-1/2}$ \cite{PolchProb} and the corresponding force $F=-\nabla V({\bf l})$ is proportional to $\nabla h({\bf l})h^{-3/2}$. An equation like (\ref{v_xdt}) only yields information about the time evolution of the magnitude of the velocity but, here, we are also interested in its direction. We will thus seek to construct a vector equation for the internal velocities ${\bf v}_{\ell}$. For simplicity, we will choose a special configuration in which the strings are oriented normally to the extra dimensions, that is we will set ${\bf l}^\prime=0$. In this way we eliminate effects arising from string curvature in the extra dimensions (the right hand side of (\ref{eom_l_warp})) as well as corrections proportional to $w_{\ell}$ (see equation (\ref{v_xdt})), concentrating only on the effects of the warped background. One may worry that this choice could suppress effects that might be relevant in the following analysis, but it turns out that this is not the case. The effects of string bending in the extra dimensions have been studied in Refs.~\cite{EDVOS,intprob} and, on macroscopic\footnote{In this context `macroscopic' refers to scales greater than the string correlation length.} scales, can be described by a non-zero $w_{\ell}$ and an effective renormalisation of the string tension $\mu_{\rm eff}>\mu$, both of which will not be important in the following. On the other hand, on small scales that are relevant in the present study, this bending can produce string velocities in the extra dimensions. However, these will simply add to string kinetic energies and can only strengthen our conclusions, which will be based on the absence of an efficient damping mechanism in these dimensions. By concentrating on this special configuration ${\bf l}^\prime=0$, equation (\ref{eom_l_warp}) simplifies considerably and this will allow us to study the dynamics of strings near a minimum of the warping potential \cite{thesis}. We define the physical velocity in the extra dimensions as the $(D-3)$-vector \begin{equation}\label{v_l_warp_vect} {\bf v}_{\ell}=h^{1/2} b \dot{\bf l} / a \,, \end{equation} which, due to the chosen string orientation, does not depend on the spacelike worldsheet coordinate $\zeta$. 
Then, by forming $\dot{\bf v}_{\ell}$ and using the equations of motion (\ref{eom_eps_warp}), (\ref{eom_l_warp}) we obtain: \begin{equation}\label{v_l_warp_vecteqn} \frac{{\rm d}{\bf v}_{\ell}}{{\rm d}s}=-\left[\frac{1}{a}\frac{{\rm d} a}{{\rm d}s}\left(1-2v_x^2-v_\ell^2\right)+\frac{{\bf v}_{\ell}\cdot \nabla h({\bf l})}{4bh^{5/4}}\right]{\bf v}_{\ell}+\left(2-2 v_x^2 \right) \frac{\nabla h({\bf l})}{4bh^{5/4}} \, . \end{equation} We can use this equation to study the dynamics of a straight string moving in a warping potential $h({\bf l})$. Fig.~\ref{vlvec} shows the relevant phase diagram for two simple choices of warping potentials, namely $h({\bf l})=A-B\tanh^2({\bf l})$ and $h({\bf l})=[A+B\ln(|{\bf l}|)]/{\bf l}^4$, in the simplest case of one extra dimension and assuming a constant 3-dimensional velocity $v_x$. As expected, the strings are driven towards the minimum of the potential at the centre, but the damping provided by Hubble expansion is too weak to guarantee that they actually reach it. Instead, equation (\ref{v_l_warp_vecteqn}) suggests a picture in which the string oscillates around the minimum, rather than quickly falling on it and stabilising. In view of the results of Ref.~\cite{EDVOS} this is not surprising: there, it was found that Hubble damping couples very weakly to the extra dimensional velocities and is generally insufficient to cause significant redshifting of velocities in the internal dimensions. This may seem to contradict the intuition one has from inflation, where oscillations of the inflaton around the minimum of the potential at the end of inflation are efficiently damped. Here the situation is similar, as the internal position of the string corresponds to a scalar field in four dimensions. However, unlike the case of inflation where cosmology is scalar field dominated and the damping is efficient, here the oscillations take place during radiation domination, so the damping term is much weaker and decays as $t^{-1}$ (see Fig.~\ref{damping}). \begin{figure} \includegraphics[height=2.5in,width=2.8in]{phase_tanh2_long.eps} \includegraphics[height=2.5in,width=2.8in]{phase_log_long.eps} \caption{\label{vlvec} String trajectory in two-dimensional phase space $(\ell,v_{\ell})$, i.e. in the case of a single extra dimension $\ell$, assuming a constant 3D velocity $v_x$. The two plots correspond to different warping potentials, with warp factors $h(\ell)=A-B\tanh^2(\ell)$ (left) and $h(\ell)=[A+B\ln(|\ell|)]/\ell^4$ (right). Starting at a distance away from the potential minimum (located at $\ell=0$ in the former case and $\ell=0.8$ in the latter) and with initial velocity towards it, the string oscillates around the tip, but the motion is only weakly damped by Hubble expansion.} \end{figure} \begin{figure} \includegraphics[height=2.5in,width=2.8in]{damping_infl.eps} \includegraphics[height=2.5in,width=2.8in]{damping_rad.eps} \caption{ \label{damping} Effect of Hubble damping on scalar field oscillations $\varphi(t)$ in both scalar field dominated and radiation dominated cosmology. In the scalar field dominated case (left), Hubble damping has approximately constant magnitude, but in radiation domination (right) cosmological friction is much less efficient as it scales like $t^{-1}$. 
In the case of strings the situation is more dramatic, because the Hubble term also comes with a factor of $1-2v_x^2-v_\ell^2$ (see equation (\protect\ref{v_l_warp_vecteqn})) and so, as 3D velocities evolve towards their scaling value, this term becomes zero.} \end{figure} Let us briefly discuss the effect of the 3-dimensional velocity $v_x$ on the orbits of the string in $({\bf l}, {\bf v}_\ell)$ phase-space. In the case of a single compact dimension $\ell$, equation (\ref{v_l_warp_vecteqn}) reads: \begin{equation}\label{v_l_warp_scaleqn} \frac{{\rm d} v_{\ell}}{{\rm d}s}=-\frac{1}{a}\frac{{\rm d} a}{{\rm d}s}\left(1-2v_x^2-v_\ell^2\right) v_\ell+\left(2-2 v_x^2- v_\ell^2 \right)\frac{\nabla h(\ell)}{4bh^{5/4}} \, , \end{equation} where $v_\ell$ can also take negative values. Note that the first term, corresponding to Hubble damping, comes with a coefficient of $(1-2v_x^2-v_\ell^2)$, which can be much smaller than 1 for $v_x^2 \lesssim 1/2$. The strength of this term depends on the magnitude of $v_x$ and decays as $t^{-1}$ as the universe expands. On the other hand, the potential term comes with a coefficient of $(2 -2v_x^2-v_\ell^2)$, which is always greater than unity due to the constraint $v^2\le 1/2$. This term is not diluted by cosmic expansion. The evolution is therefore dominated by this potential term but there is also a transient Hubble damping effect, which operates for a few Hubble times until it effectively dies away, and whose strength depends on the 3-dimensional velocity $v_x$ (Fig.~\ref{vx_dependence}). It is therefore important to treat $v_x$ as a dynamical variable rather than a constant parameter. Also, as we saw, the evolution of $v_x$ is governed by equation (\ref{v_xdt}), which depends on $v_\ell$, and thus the assumption of constant $v_x$ is at best an approximation. We will therefore couple equation (\ref{v_l_warp_scaleqn}) to the evolution equation for the rms $v_x$, which, for the special orientation we chose ($w_\ell=0$), becomes: \begin{equation}\label{v_xdt_w_l_zero} \frac{{\rm d} v_x}{{\rm d}s}=\frac{k_x}{R}(1-v^2)- \frac{1}{a}\frac{{\rm d}a}{{\rm d}s}(2-2v_x^2-v_\ell^2) v_x \, . \end{equation} \begin{figure} \includegraphics[height=2.3in,width=2.6in]{orbit_vx_0.eps} \includegraphics[height=2.3in,width=2.6in]{orbit_vx_03.eps} \includegraphics[height=2.3in,width=2.6in]{orbit_vx_05.eps} \includegraphics[height=2.3in,width=2.6in]{orbit_vx_064.eps} \caption{\label{vx_dependence} Effect of 3D velocity $v_x$ on the orbits of the string in $(\ell,v_{\ell})$ phase space, for the case of one extra dimension $\ell$. The warping factor is taken to be of the form $h(\ell)=A-B\tanh^2(\ell)$. Treating the 3D velocity as a constant parameter, each plot corresponds to a different value of $v_x$. From top left to bottom right: $v_x=0$, $0.3$, $0.5$ and $0.64$. Clearly, as one increases $v_x$, Hubble damping becomes less and less important, since the coefficient of the friction term in equation (\protect\ref{v_l_warp_scaleqn}) decreases. As a result, the orbit is more stable for larger values of $v_x$. To model this effect, the 3D velocity should be treated as a dynamical variable rather than a constant parameter.} \end{figure} This yields very interesting dynamics, with $v_x$ exhibiting damped oscillations around its average value (Fig.~\ref{vlvecvx}). This effect, however, is too small to be of observational interest and is further suppressed by cosmic expansion.
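For readers who wish to reproduce qualitative orbits of this kind, the following minimal sketch (not the code used for the figures) integrates equations (\ref{v_l_warp_scaleqn}) and (\ref{v_xdt_w_l_zero}) for the toy warp factor $h(\ell)=A-B\tanh^2(\ell)$, under the simplifying assumptions of a radiation-era scale factor $a\propto s^{1/2}$, a static internal space ($b=1$), a constant curvature term $k_x/R$, and a position advanced via ${\rm d}\ell/{\rm d}s = v_\ell/(b\,h^{1/4})$; all parameter values are illustrative. \begin{verbatim}
# Minimal illustrative sketch (not the authors' code): integrating the coupled
# velocity equations near a toy warped minimum h(l) = A - B*tanh(l)^2.
# Assumptions: a ~ s^(1/2) (radiation era), b = 1, constant k_x/R, and
# dl/ds = v_l / (b h^(1/4)); all numerical values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

A, B, b, kx_over_R = 2.0, 1.0, 1.0, 0.02   # toy warp factor, A > B > 0

def h(l):  return A - B * np.tanh(l)**2
def dh(l): return -2.0 * B * np.tanh(l) / np.cosh(l)**2

def rhs(s, y):
    l, vl, vx = y
    H = 0.5 / s                            # (1/a) da/ds for a ~ s^(1/2)
    dl  = vl / (b * h(l)**0.25)
    dvl = -H * (1 - 2*vx**2 - vl**2) * vl \
          + (2 - 2*vx**2 - vl**2) * dh(l) / (4 * b * h(l)**1.25)
    dvx = kx_over_R * (1 - vx**2 - vl**2) - H * (2 - 2*vx**2 - vl**2) * vx
    return [dl, dvl, dvx]

# start away from the minimum (l = 0), moving towards it
sol = solve_ivp(rhs, (1.0, 100.0), [1.0, -0.2, 0.3], max_step=0.01)
# sol.y[0] vs sol.y[1] traces the (l, v_l) phase-space orbit; sol.y[2] shows
# the small modulation of v_x by the internal oscillation.
\end{verbatim}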
Initially, the string oscillates around the warped minimum, being (weakly) damped by Hubble expansion, while its 3-dimensional velocity $v_x$ is modulated by the oscillation. After a few revolutions, the damping dies away and the phase-space orbit stabilises. \begin{figure} \includegraphics[height=2.3in,width=2.6in]{phase_tanh2_vx.eps} \includegraphics[height=2.3in,width=2.6in]{vx_osc_tanh2.eps} \includegraphics[height=2.3in,width=2.6in]{phase_log_vx.eps} \includegraphics[height=2.3in,width=2.6in]{vx_osc_log.eps} \caption{\label{vlvecvx} String trajectories (left) near the warped minimum in two-dimensional phase space $(\ell,v_{\ell})$ for the same warping potentials and initial conditions as in Fig.~\protect\ref{vlvec}, but now treating $v_x$ as a dynamical variable. The 3D velocity $v_x$ oscillates due to its coupling to $v_\ell$ (see equation (\protect\ref{v_xdt_w_l_zero})), but this effect is small (right). The upper plots are for warping factor $h(\ell)=A-B\tanh^2(\ell)$, while the lower ones are for $h(\ell)=[A+B\ln(|\ell|)]/\ell^4$.} \end{figure} In this simplest case of a single extra dimension that we have studied so far, the string has to pass through the minimum in coordinate space in every cycle, but in the presence of more internal dimensions there will generally be a non-zero impact parameter. One naturally expects that angular momentum conservation will lead to deflecting/bouncing orbits around the tip of the warped throat. This will be studied in some detail in the next section. For now, we simply plot a sample of string trajectories (now in physical space), obtained by solving equations (\ref{v_l_warp_vecteqn}) and (\ref{v_xdt_w_l_zero}) in the case of two internal dimensions (Fig.~\ref{2_extra_dims}). As shown, depending on the initial conditions, the string is deflected or bounces back, and can either escape from the throat or enter a series of bounces around the tip. \begin{figure} \includegraphics[height=1.5in,width=1.6in]{orbit_06_0_0_03.eps} \includegraphics[height=1.5in,width=1.6in]{orbit_06_0_0_04.eps} \includegraphics[height=1.5in,width=1.6in]{orbit_06_06_04_03.eps} \includegraphics[height=1.5in,width=1.6in]{orbit_m2_m5_03_04.eps} \includegraphics[height=1.5in,width=1.6in]{orbit_m6_m5_03_04.eps} \includegraphics[height=1.5in,width=1.6in]{orbit_7_m5_m03_04.eps} \caption{\label{2_extra_dims} A sample of string trajectories in physical space, in the case of two extra dimensions $\ell_1$ and $\ell_2$. Different trajectories correspond to different initial conditions for the position ${\bf l}=(\ell_1,\ell_2)$ and velocity ${\bf v}_{\ell}=(v_{\ell 1},v_{\ell 2})$. In all cases the warping factor is $h({\bf l}) =A-B\tanh^2(|{\bf l}|)$. The possibilities (depending on the initial conditions) include deflections, bounces and bound orbits with negligible Hubble friction.} \end{figure} The key point of this section, which is the central idea of the present paper, is that the classical evolution does not possess a strong damping term to guarantee that the strings quickly migrate to the tip of the warping throat and stabilise there. Instead, depending on the velocity and impact parameter, a string passing near a warped throat will generally experience a mere deflection or a bounce around the potential minimum. Note that, in the above, we have ignored any possible dependencies on the dilaton $\Phi({\bf l})$.
In the case of $(p,q)$-strings \cite{PolchStab} for example, taking into account the dilaton dependence gives rise to a factor of $(p^2+q^2 e^{-2\Phi({\bf l})})^{1/2}$ \cite{PolchProb} in the potential $V({\bf l})$. This could lead to a dependence of the minimum on $p$ and $q$, hence on the type of string, with potentially important implications for the intercommuting properties of different types of strings. However, for the strongly warped backgrounds one is usually interested in (e.g. \cite{KKLMMT}), the variation of the dilaton is negligible and thus we have safely ignored this effect in our discussion. \section{\label{IIB}IIB Compactifications} In the previous section, we considered string motion in a background that was a warped product of an expanding FLRW universe with a compact internal manifold. Here, we would like to make contact with explicit constructions in string theory, in particular type IIB fluxed compactifications that have been used to realise cosmological inflation \cite{KKLMMT,Pajer,BDKMcA}. These typically involve a metric that is a warped product of Minkowski spacetime with a Calabi-Yau manifold, and time-dependence of the background arises in the effective 4D description. Indeed, in the effective theory, a number of scalar fields appear, which can couple to the 4D metric. In the constructions of \cite{KKLMMT,BDKMcA}, all scalar fields are dynamically stabilised apart from one, corresponding to the position of a mobile $D$-brane in the 10D picture, which plays the role of the inflaton in the effective theory. The Einstein frame metric takes the following general form: \begin{equation}\label{metric_E} ds^2 = h^{-1/2}(l)\eta_{\mu\nu} {\rm d}x^\mu {\rm d}x^\nu -h^{1/2}(l) g_{\ell m} {\rm d}l^\ell {\rm d}l^m , \end{equation} where $\eta_{\mu\nu}$ is the 4D Minkowski metric and $g_{\ell m}$ the metric on the internal Calabi-Yau space. The warp factor $h$ depends only on the internal coordinates $l$. Explicit solutions of this type exist, for example Klebanov-Tseytlin (KT)~\cite{KlebTseyt} and Klebanov-Strassler (KS)~\cite{KlebStras} geometries, but, for the general discussion that follows, we leave the exact geometry unspecified. We will assume, however, as in the above solutions, that the internal manifold has a group of angular symmetries allowing us to define a radial coordinate $r\equiv l^r$, and that the warp factor $h$ depends only on this radial coordinate. A moving string on this warped background is described by the Nambu-Goto action (\ref{nambu}) with metric (\ref{metric_E}). The motion of $D$-branes in warped backgrounds has been well-studied~\cite{Kutasov,KacMcAl,Germani,EaGrTaZa}. The solutions found in those cases include deflections, bounces, and bound orbits, like in the previous section. Unlike the case of $D3$-branes, which are spacetime-filling and only have velocities in the internal space, strings are rather different, as there are also two transverse directions where the string can move, giving rise to 3D velocities that can dynamically interfere with the internal ones (see previous section). We consider this case in more detail below. Defining the radial direction $r\equiv l^r$, we write the internal metric as: \begin{equation}\label{int_metric} g_{\ell m} {\rm d}l^\ell {\rm d}l^m = g_{rr} {\rm d}r^2 + g_{\theta\phi} {\rm d}l^\theta {\rm d}l^\phi , \end{equation} where the indices $\theta$ and $\phi$ run over the angular internal coordinates. 
The internal speed of the string is then $\dot l^2 = g_{rr} \dot r^2 + g_{\theta\phi} \dot l^\theta \dot l^\phi$, and the action, in the transverse temporal gauge, reads: \begin{equation}\label{action_sym} S=-\mu \int h^{-1/2}(r) \sqrt{\left[1-\dot{\bf x}^2 - h(r) (g_{rr} \dot r^2 + g_{\theta\phi} \dot l^\theta \dot l^\phi)\right]\left({\bf x}^{\prime 2}+h(r)g_{\ell m} l^{\prime\ell} l^{\prime m}\right)} \,\, d^2\zeta \, . \end{equation} As in the previous section, we will consider the special configuration in which the string has $l^\prime=0$. As we have chosen the angular coordinates $l^\phi$ to correspond to spacelike Killing vectors, the following momenta are conserved: \begin{equation}\label{momenta} \pi_\phi \equiv \frac{\partial{\cal L}}{\partial\dot l^\phi} = \frac{\mu\sqrt{{\bf x}^{\prime 2}}}{\sqrt{1-\dot{\bf x}^2 - h(r) (g_{rr} \dot r^2 + g_{\theta\omega} \dot l^\theta \dot l^\omega)}} h^{1/2}(r) g_{\phi\theta} \dot l^\theta \, . \end{equation} Also, time translational invariance implies that the energy \begin{equation}\label{energy} {\cal E}\equiv {\bf p}\cdot\dot{\bf x} + \rho \dot r + \pi_\phi \dot l^\phi - {\cal L} = \frac{\mu h^{-1/2}(r)\sqrt{{\bf x}^{\prime 2}}}{\sqrt{1-\dot{\bf x}^2 - h(r) (g_{rr} \dot r^2 + g_{\theta\phi} \dot l^\theta \dot l^\phi)}} \end{equation} is conserved, where ${\bf p}$ and $\rho$ are the canonical momenta associated to ${\bf x}$ and $r$ respectively. Then, defining \begin{equation}\label{Pi_of_r} \Pi^2(r) \equiv g^{\theta\phi} \pi_\theta \pi_\phi \, , \end{equation} we can write \begin{equation}\label{E_of_Pi} {\cal E}=\mu\sqrt{{\bf x}^{\prime 2}} h^{-1/2}(r)\left(\frac{1+ \Pi^2(r)/\mu^2 {\bf x}^{\prime 2}}{1-\dot{\bf x}^2-h(r)g_{rr} \dot r^2}\right)^{1/2} \, , \end{equation} or equivalently: \begin{equation}\label{rdot2} \dot r^2 = \frac{g^{rr}}{h(r)} \left[ 1 - \left( \frac{\mu^2 {\bf x}^{\prime 2}+\Pi^2(r)}{h(r){\cal E}^2} + \dot{\bf x}^2 \right) \right] \, . \end{equation} For the setup we are interested in, the strings are macroscopic in the Minkowskian directions and homogeneous over the short length-scales relevant to the warping scale in the internal dimensions. Further, as we saw in the previous section, the coupling between 3D and internal velocities is weak, so one could take constant $\dot{\bf x}^2$ as an approximation. Here, we will consider the special case of a straight string ${\bf x}^{\prime 2}=1$. Translational invariance along the transverse string directions implies that the corresponding momenta $\bf p$ are conserved, and we can write: \begin{equation}\label{rdot2_simple} \dot r^2 = \frac{g^{rr}}{h(r)} \left( 1 - \frac{\mu^2 +\Pi^2(r)+h(r){\bf p}^2}{h(r){\cal E}^2} \right) \, . \end{equation} The right-hand-side is a function of the radial coordinate only, and so equation (\ref{rdot2_simple}) describes the one-dimensional motion of a particle in an effective potential \begin{equation}\label{eff_pot} \dot r^2 + V_{\rm eff}(r) = 0 \, , \end{equation} where $V_{\rm eff}(r)$ is minus the right-hand-side of (\ref{rdot2_simple}). Physical motion is restricted to regions where $V_{\rm eff}\le 0$, and the zeros of the effective potential correspond to turning points, where the string reverses its (radial) direction of motion. Let us first consider the case $\Pi^2(r)={\bf p}^2=0$. The effective potential is as in Ref.~\cite{Kutasov}, where the dynamics of $D$-branes moving in the vicinity of $NS5$-branes has been analysed and explicit solutions have been obtained for $h(r)=1+c/r^2$, $c={\rm const}$. 
The motion is restricted to the region: \begin{equation}\label{constr_Pi_p_zero} h(r)\ge \frac{\mu^2}{{\cal E}^2} \, . \end{equation} Since $h(r\rightarrow\infty)=1$, for $\mu<{\cal E}$ the constraint is empty and the string can escape to infinity. On the other hand, if $\mu>{\cal E}$ the string does not have enough energy to escape the potential, so it will reach a finite maximum distance, where $V_{\rm eff}=0$, and then reverse its motion to return to $r$=0. Now consider non-zero angular momentum, $\Pi^2(r)>0$, in which case the effective potential is: \begin{equation}\label{Veff_ang_mom} V_{\rm eff} = \frac{g^{rr}}{h(r)} \left( \frac{\mu^2 +\Pi^2(r)}{h(r){\cal E}^2} - 1 \right) \, . \end{equation} There is now a new possibility in the case ${\cal E}>\mu$: since $\Pi^2(r)$ scales with the inverse metric (see equation (\ref{Pi_of_r})), the angular momentum increases as the string approaches the potential minimum and can dominate over the tension term at short distances. The effect of the angular momentum is to reduce the radial velocity of the string, until it reaches $V_{\rm eff}=0$, where it bounces and then escapes to infinity. For ${\cal E}<\mu$, the string does not have enough energy to escape the attractive potential, and at large enough distances the effective potential approaches a constant value: \begin{equation}\label{Veff_assympt} V_{\rm eff} \simeq g^{rr}\left( \frac{\mu^2 }{{\cal E}^2}-1 \right) \, . \end{equation} The string reaches a maximum distance and then returns to $r=0$. The angular momentum generally slows down the approach to $r=0$, and solutions include examples where the string spirals an infinite amount of times before reaching the potential centre \cite{Kutasov}. Bound orbits can also exist, depending on the structure of $h(r)$. Indeed, bound orbits were found in the case of KT and KS backgrounds in Ref.~\cite{EaGrTaZa}, where $D3$-brane motion was studied in detail using the Dirac action, and including the relevant Wess-Zumino term. Let us finally focus on the 3D momenta ${\bf p}^2$. These have the effect of `renormalising' the second term in equation (\ref{Veff_ang_mom}) by a correction of ${\bf p}^2/{\cal E}^2= \dot{\bf x}^2 \equiv v_x^2$, that is: \begin{equation}\label{Veff_p2} V_{\rm eff} = \frac{g^{rr}}{h(r)} \left[ \frac{\mu^2 +\Pi^2(r)}{h(r){\cal E}^2}-\left(1-v_x^2\right) \right] \, . \end{equation} In other words, some of the kinetic energy of the string is in the transverse motion in the Minkowskian directions, so it is harder for the string to escape to infinity. At large $r$ the effective potential approaches \begin{equation}\label{Veff_p2_assympt} V_{\rm eff} \simeq g^{rr}\left[ \frac{\mu^2 }{{\cal E}^2} - \left( 1-v_x^2\right)\right] \, , \end{equation} and so the string needs to have an energy greater than the relativistic mass, ${\cal E}^2>\mu^2/(1-v_x^2)$, in order to escape. For small distances, where the angular momentum term dominates, the motion changes direction at smaller $r$, as the radial motion of the string is slower than in the $v_x=0$ case. Note that as $v_x\rightarrow 1$ we must have $\dot r \rightarrow 0$, due to the constraint $1-v^2>0$ arising from the square root in the action (\ref{action_sym}). Let us now compare our results with the findings of the previous section. There is general agreement in the type of orbits that can arise, namely deflections, bounces and bound orbits, but there are also some important differences. 
In particular, in this section the 3D velocities $v_x$ were constant, while in section \ref{VOS} there was a weak dependence of $v_x$ on the internal velocity $v_\ell$. By looking at equation (\ref{v_xdt_w_l_zero}), this dependence can be traced to the string curvature and Hubble friction terms. While the macroscopic model of the previous section enabled us to allow for correlation-scale curvature giving rise to rms string 3D velocities, here we have considered a straight string with zero curvature. The most important difference from the previous section is the inclusion of friction terms due to cosmic expansion. This, for example, changes the dependence of the orbits on $v_x$, as, in the case of no friction, a small $v_x$ generally implies a larger value for the radial velocity in the internal dimensions (as we just saw), but when one includes Hubble friction a small $v_x$ also comes with a stronger damping term on $v_\ell$, which can lead to a smaller internal velocity (Fig.~\ref{vx_dependence}). Here, there is no Hubble friction in the 10D picture, since the metric is a warped product of Minkowski (as opposed to FLRW in section \ref{VOS}) spacetime with a compact internal manifold. To make contact with the previous section, we can move to an effective 4D description by `integrating out' the compact dimensions in the worldsheet action. Splitting the metric into a 4D and an internal part, $g^{(4)}_{\mu\nu}$ and $g^{(6)}_{\ell m}$ respectively (which include the relevant warping factors), the induced metric on the worldsheet is: \begin{equation}\label{ind_metric_10D} \gamma^{(10)}_{\alpha\beta}=g^{(4)}_{\mu\nu} \partial_\alpha x^\mu \partial_\beta x^\nu + g^{(6)}_{\ell m} \partial_\alpha l^\ell \partial_\beta l^m \, . \end{equation} Defining the induced 4D metric as \begin{equation}\label{ind_metric_4D} \gamma^{(4)}_{\alpha\beta}=g^{(4)}_{\mu\nu} \partial_\alpha x^\mu \partial_\beta x^\nu \, , \end{equation} one can factorise it in the worldsheet Lagrangian, to obtain an effective 4D string action: \begin{eqnarray} -\frac{\cal L}{\mu}&=&\sqrt{-{\rm det}\gamma^{(10)}}=\sqrt{ -{\rm det} [\gamma^{(4)}_{\alpha\beta}(\delta^\beta_\gamma + \gamma_{(4)}^{\beta\delta} \, \partial_\delta l^\ell\partial_\gamma l^m \, g^{(6)}_{\ell m} )]} \nonumber \\ &=&\sqrt{-{\rm det}\gamma^{(4)}} \sqrt{ {\rm det}(\delta^\beta_\gamma+\gamma_{(4)}^{\beta\delta} \, \partial_\delta l^\ell\partial_\gamma l^m \, g^{(6)}_{\ell m})} \label{factorise} \, . \end{eqnarray} Then, using \begin{equation}\label{expand_matrix} \sqrt{{\rm det}({\bf 1} + {\bf M})} = 1 + \frac{1}{2}{\rm Tr}({\bf M}) - \frac{1}{4}{\rm Tr}({\bf M}^2) + \frac{1}{8}({\rm Tr}{\bf M})^2 + {\cal O} ({\bf M}^3) \, , \end{equation} one finds kinetic terms for the worldsheet scalar fields $l^\ell$. One can similarly obtain a low-energy Einstein-Hilbert term, starting from the 10D gravitational action. The 4D metric will then couple to any scalar fields arising from the compactification. In the setups we are interested in, all scalar fields are stabilised except one, which corresponds to the position of a mobile $D$-brane moving towards an anti-$D$-brane at the bottom of the warping throat. The interaction between the brane-anti-brane pair gives rise to a potential for the scalar field, which, subject to fine tuning, can satisfy the slow-roll conditions for inflation. Thus, as the branes approach each other, the scalar field drives inflation in the effective 4D description.
The inflationary phase ends with the collision of the branes and the production of an interacting network of cosmic $D$- and $F$-strings \cite{DvalVil,PolchStab}. The universe enters a radiation-dominated era, so the strings soon find themselves evolving in a power-law FLRW, rather than inflationary, background. We can then understand the essence of our previous results also in this picture: firstly, the string action we found contains kinetic terms for worldsheet scalar fields $l^\ell$, which from the 10D point of view correspond to the positions of the strings in the internal manifold. These can be thought of as worldsheet currents, which are known to result in a reduction of the velocity of strings \cite{book}. We found the same effect from the 10D point of view in section \ref{VOS}, where some of the kinetic energy of the string was in the internal directions, so the 3D motion of strings was reduced as a result of the constraint $v^2\lesssim 1$ (local) or $v^2\lesssim 1/2$ (for rms velocities in networks) \cite{EDVOS}. Further, as the strings evolve in an expanding background, there is Hubble friction, which could in principle kill these worldsheet excitations. In the 10D picture, we found, by considering the string equations of motion, that Hubble damping in the expanding dimensions couples only weakly to the internal excitations and is insufficient to damp them away. From the low-energy point of view we have just considered, this is still the case because Hubble damping is important on large scales, while the worldsheet scalar excitations operate over much shorter length-scales, over which the background can be taken to be flat. Further, Hubble damping is becoming less and less important (scaling as $t^{-1}$) on fixed scales, so it can only have a transient effect at early times. This is in sharp contrast with the case of inflation, where damping can have a much more significant impact (Fig.~\ref{damping}). \section{\label{discuss}Discussion} Let us summarise and comment on our results. In the first part of this paper, we investigated the effect of warping on string evolution, in the case where the background is a warped product of a FLRW universe with a static, toroidal internal space. Starting from the Nambu-Goto equations of motion for strings evolving in this warped background, we identified a number of extra terms that tend to pull the strings towards the bottom of the throat. We then obtained equations for the velocity evolution of string segments, and, by solving them near the minimum of a warping potential, we quantified the tendency of strings to move towards the minimum. We noted that, in classical theory, there is not enough damping to guarantee that the strings actually reach the potential minimum and stabilise. Instead, our analysis supports a picture in which strings oscillate around the bottom, being only weakly damped by cosmological expansion, rather than quickly migrating to it. During these oscillations, we have found that the 3D string velocity $v_x$ also exhibits oscillatory modulation in its magnitude, due to its coupling to $v_\ell$, but this effect is too small to be of observational significance. Including angular momentum, and considering different initial conditions, we have found a number of different string trajectories that include deflections, bounces, and bound orbits around the minimum.
We then moved on to study string motion in 10D warped backgrounds, like the ones arising in IIB compactifications in brane inflation, where the metric is a warped product of Minkowski spacetime and a Calabi-Yau manifold. Through a qualitative analysis in terms of an effective potential for one-dimensional radial motion, we found similar string trajectories. Then, by integrating out the internal dimensions, we obtained kinetic terms of worldsheet scalars, corresponding to the internal string positions in the 10D picture. In the effective 4D picture, one can then understand the slowing down of strings from a slightly different point of view, namely in terms of worldsheet currents as in superconducting strings. Hubble friction is then inefficient on short scales and decays as $t^{-1}$ during radiation/matter domination. In both pictures, our classical analysis points out the absence of strong enough damping to ensure that a generic string trajectory around the potential minimum would be one spiraling towards it, losing energy on the way, and falling on it. Instead, we find that generic trajectories near the tip involve a series of bounces/deflections with insignificant kinetic energy damping. This could clearly have important implications for string evolution, in particular it could further reduce the average probability of string intercommutations \cite{JoStoTye2,PolchProb}. The effect may not be as dramatic as it looks at first sight, because, in the typical brane inflation setup, the branes collide at the tip of the throat, and so the strings are actually produced close to the bottom. Thus, if the energy of the produced string is not enough to escape the potential pull, the string can enter a series of bounces, or a bound orbit, its motion being confined within a maximum distance from the centre. However, depending on the initial internal velocity of the string, this distance may be large enough for it to be a bad approximation to consider the string located at the bottom. It is also possible to produce strings with enough energy to escape the throat region, though this looks statistically unlikely. Indeed, a simple classical estimate obtained by comparing the string potential and kinetic energies (arising from expanding the energy (\ref{energy}) in powers of $v_\ell^2\equiv h(r)(g_{rr} \dot r^2 + g_{\theta\phi} \dot l^\theta \dot l^\phi)$ to write it as rest mass plus potential and kinetic energy) suggests that the string will escape for $v_\ell^2\gtrsim (h-1)(1-v_x^2)$. Deep in the warping potential where $(h-1)\gg 1$, this is a rare possibility for a scaling network, as there is a `Virial theorem' imposing that $v_x^2+v_\ell^2\simeq 1/2$. However, since we are now considering strings that were just produced with velocities $v_x, v_\ell$ at the brane collision and had no time to `virialise', this condition does not apply. Unfortunately, the details of the brane collision and annihilation process are at present poorly understood and one cannot quantify the transverse velocity distribution of the produced strings. Presumably, the collision is highly non-adiabatic, giving rise to significant string velocities in the transverse directions, but, due to the Brownian 3D spatial structure of the produced network and local energy conservation, these are expected to be subdominant compared to the corresponding 3D velocities~\cite{EDVOS}.
It follows, therefore, that the main concern here is not about the strings escaping the warping potential, but, rather, entering a series of bounces and deflections about the bottom of the throat, remaining within a maximum distance from it. This distance defines a volume factor, which suppresses the string intercommuting probability. Under the assumption that the strings are localised at the bottom of the throat, the corresponding volume factor appearing in the relevant string amplitude is determined by the `thickness' of the string, or better, the extent of the wavefunction characterising the fluctuations of the string position around the bottom~\cite{PolchProb,Jackson}, which is of order a few string lengths. Here, the volume factor can be much larger due to the classical motion of the string, the relevant distance scale being much greater than the string scale\footnote{It is, of course, still smaller than the compactification scale.}. Therefore, depending on the details of the brane collision occurring in the final stages of brane inflation (in particular, on the energy transferred to translational degrees of freedom in the internal dimensions), the intercommuting probabilities for cosmic superstrings may be further reduced, with potentially important implications for string network scaling values \cite{JoStoTye2,Sak,intprob}. Given the uncertainties in the details of the final stages of brane inflation (in particular, the collision and annihilation of the branes), it is not possible at present to quantify the expected suppression in the intercommuting probability. It is clear, however, from the above discussion that this suppression can easily be of one order of magnitude or more. Thus, combined with recent evidence~\cite{intprob,Vanch_loops} that the scaling string density goes with $P^{-2/3}$ (weaker than initially anticipated), the effects discussed in this paper would push up the predicted string densities in these scenarios closer to the initially anticipated levels, obtained by using a larger probability $P$ but stronger dependence of $\rho$ on $P$. The key point of the no-damping result obtained in this study is that the Nambu-Goto equations of motion imply that Hubble friction in the internal dimensions comes with a velocity-dependent coefficient, which quickly goes to zero as string velocities evolve. Any other damping term which could operate over cosmological timescales would be enough to ensure string stabilisation at the bottom of the throat, though it seems difficult to motivate such a friction mechanism in classical theory. One may wonder whether some other mechanism could operate in quantum theory, providing an efficient damping term. An interesting possibility would be quantum decay to lighter particles, but one would need to know in detail how the worldsheet scalars couple to the standard model and/or other light fields. \begin{acknowledgments} The work presented here was initiated by a question of G. Efstathiou. I would like to thank Paul Shellard for collaboration during the early stages of this project. This paper has also benefited from discussions with Fernando Quevedo, Jose Blanco-Pillado and Ivonne Zavala. I would like to thank Ed Copeland and Anne Davis for their comments and encouragement. I acknowledge financial support from the Cambridge Newton Trust and the EC Marie Curie Research Training Network ENRAGE. This work is also supported in part by MEC, research grant FPA2007-66665. \end{acknowledgments} \bibliographystyle{JHEP.bst}
2,869,038,156,138
arxiv
\section{Preliminaries} In this paper will be given some metrical notions in synthetic differential geometry(SDG). We shall show that a metrical geometry in SDG is, in general, similar with a classical one. Most of results will concern to so called "global" properties, what means that we will work with elements aparted from each other. All notions of SDG are taken from \cite{Kock_SDG}. As it was shown in \cite{Kock_SDG} the following theory (a specially Axiom 1) is not compatible with the axiom of excluded third so it have not models in sets but it have so called "well adapted models" in cartesian closed categories. Father all settings will be in some cartesian closed category $\cal E$. As it was shown in \cite{Kock_SDG} we can do them using an ordinary set theoretical language. As in \cite{Kock_SDG} we shall assume that a geometric line is a nondegenerate commutative ring $R$ of line type in $\cal E$, i.e satisfies\\ \begin{axm} {\bf 1} {\it Let $D =\{ x\in R \ |\ x^2=0 \}$.\\ For all $g:D\rightarrow R$ are exist the unique $a, b\in R$, such, that for all $ d\in D$ is valid $g(d)=a+d\cdot b$.} \end{axm} The object $D$ is "generic tangent vector". To define a metrical notions we have to make some further assumptions about properties of $R$. First of all we shall assume, that on $R$ are given two orders, agreed with the structure of the ring: \begin{enumerate} \item the strict order $ < $ such, that $\forall x\in R\ \ \ \lnot (x < x) $ \item the weak order $\leq\ $ such, that $\forall x\in R\ \ \ (x\leq x) $ \end{enumerate} Connected with each other by axioms: \[ \forall x, y\in R\ \ \ \lnot (x < y)\Rightarrow y\leq x \] \[ \forall x, y, z\in R\ \ \ x < y \land y\leq z \Rightarrow x < z \] In a standard manner we shall define intervals. \[ (x,y)=\{z\in R\ \ |\ \ x < z \land z < y \} \] \[ [x,y]=\{z\in R\ \ |\ \ x\leq z \land z\leq y \} \] We denote by $InvR=\{x\in R\ | \ \exists \ \ y\in R\ \ x\cdot y=1 \} $ -- object of convertible elements in $R$.\\ We shall assume that the following formula is valid. \begin{equation}\label{equ_Inv} \forall x\in R\ \ \ x\in InvR\ \ \iff\ \ x < 0 \lor x>0 \end{equation} We shall assume, that:\footnote{Under $\bigwedge_{i=1}^n$ we understand $\underbrace{\land \ldots \land}_n $, and under $\bigvee_{i=1}^n$ we do $\underbrace{\lor \ldots \lor}_n $} \begin{enumerate} \item $R$ is a local ring, i.e \begin{equation}\label{equ_local} \forall x\in R \ \ \ x\in InvR\ \ \ \lor x-1\in InvR. \end{equation} \item $R$ is a field of quotients, i.e \begin{equation}\label{equ_fild} \forall x_1, \ldots, x_n \in R\ \ \ \lnot (\bigwedge_{i=1}^n x_i=0) \Rightarrow \bigvee_{i=1}^n x_i \in InvR. \end{equation} \item $R$ is a formally real ring i.e \begin{equation}\label{equ_freal} \forall x_1, \ldots, x_n \in R\ \ \ \bigvee_{i=1}^n x_i \in InvR \Rightarrow \sum_{i=1}^n x_i^2 \in InvR. \end{equation} \item $R$ is a Pythagorean ring i.e \begin{equation}\label{equ_Pyth} \forall x_1, \ldots, x_n \in R\ \ \ \sum_{i=1}^n x_i^2 \in InvR \Rightarrow \exists \sqrt{\sum\nolimits_{i=1}^n x_i^2}\in InvR. 
\end{equation} \item $R$ is a Archimedean ring i.e \begin{equation} \forall x\in R\ \ \ x < 0 \lor x < 1 \lor x < 2 \lor \ldots \end{equation} \end{enumerate} \begin{axm}[Axiom of integration.]\\ {\it For any $f:[0,1] \rightarrow R$ exists unique $g:[0,1] \rightarrow R$ such, that $g^\prime \equiv f$ and $g (0)=0$.}\\ \end{axm} We shall denote $\int\limits_0^1 f (t) dt:=g (1) $.\\ As it is shown in \cite{Kock_SDG}, all these assumptions are realized in well adapted models for $R$.\\ As it is shown in \cite{McLarty}, from (\ref{equ_local}) and (\ref{equ_fild}) follows that \begin{equation}\label{equ_main} \forall x\in R\ \ \ x<0\ \ \lor\ \ \ (\forall \varepsilon >0\ \ \ -\varepsilon < x < \varepsilon)\ \ \lor x>0 \end{equation} We shall denote \[ R^ +=\{x\in R\ \ |\ \ x>0 \} \] \[ R^-=\{x\in R\ \ |\ \ x < 0 \} \] \[ R^\varepsilon=\{x\in R\ \ |\ \ \forall \varepsilon >0\ \ \ -\varepsilon < x < \varepsilon \} \] It is easy to see that (\ref{equ_main}) can be written as follows: \begin{equation}\label{equ_-+} R=R^- \cup R^\varepsilon \cup R^+ \end{equation} \section{Linear algebra}\label{sec_LA} As in the basis of our reasonings is the ring $R$ and its properties, for consideration of a metric we needs in some results from intuitionistic linear algebra. The initial items of information on this question are taken from C.Mulvey "Intuitionistic algebra and representation of rings"\cite{Mulvey_IA} and A.Heyting "Intuitionism"\cite{Heyting_I}, but, as these works contains only a few results on the theme, some of them we had to prove. \vspace{3\baselineskip} \subsection{Apartness relation on the ring $R$} The apartness relation in the intuitionistic mathematics is the positive form of the not equality relation. It have been entered and investigated by Heyting (see for example \cite{Heyting_I}). In this paragraph we shall give definition of apartness relation on the ring $R$ and investigate its properties. It is necessary to notice, that relation given below, not completely satisfies to Heyting axioms of apartness, and therefore, we had to check up its properties anew. \begin{dfn} {\rm We shall speak, that $a, b\in R $ are {\it apart} and write $a\# b$, if $a-b\in InvR$. } \end{dfn} \begin{note} From the (\ref{equ_Inv}) follows, that \[ a\# b \iff a < b \lor a > b. \] \end{note} \renewcommand{\theenumi}{\arabic{enumi}} \begin{prp} {\rm The apartness relation on $R$ has following properties}: \begin{enumerate} \item $ a=b \Rightarrow \lnot (a\# b) $. \item $ \lnot (a=b) \iff a\# b $. \item $ a\# b \Rightarrow (a\# c) \lor (b\# c)\ \ \ \forall c\in R$. \end{enumerate} \end{prp} \begin{prv} \begin{enumerate} \item Obviously. \item The necessity is follows from (\ref{equ_fild}).\\ The sufficiency is obvious. \item $a\# b \Rightarrow a-b \in InvR $ \\ So as $R$ is local ring, we have that for any $x\in R$ and $r\in InvR$ \[ x\in InvR\ \ \lor\ \ r-x\in InvR.\] If we put $r=a-b$ and $x=a-c$, we shall receive \[ a-c \in InvR\ \ \lor \ \ c-b \in InvR. \] What means that $a\# c \lor b\# c$. $\Box $ \end{enumerate} \end{prv} \begin{note} Heyting in \cite{Heyting_I} defines an apartness relation, as a relation satisfying to conditions: \begin{enumerate} \item $ a\# b \Rightarrow \lnot (a=b) $. \item $ \lnot (a\# b) \Rightarrow a=b$. \item $ a\# b \Rightarrow a\# c \lor b\# c\ \ \ \forall c\in R$. \end{enumerate} \end{note} As we have noticed, the apartness relation on $R$ differs from what was considered by Heyting, but despite of, for it are also executed following positive statements. 
\begin{prp}\label{prp_OT2} {\rm Are executed:} \begin{enumerate} \item $ a\# b \Rightarrow (a+c)\# (b+c) \ \ \ \forall c\in R$. \item $ a\# b, c\# 0 \Rightarrow a c\# b c $. \end{enumerate} \end{prp} \begin{prv} The proof of these statements is based on compatibility of the order $ < $ with the structure of the ring. \begin{enumerate} \item $ a\# b \iff (a<b) \lor (a>b) \Rightarrow $ $ a+c < b+c \lor a+c > b+c \Rightarrow (a+c) \# (b+c) $. \item $ a\# b$ and $ c\# 0 \Rightarrow (a<b) \lor (a>b)$ and $ c>0 \lor c<0$\\ $c>0$ $ \Rightarrow (a c < b c \lor a c > b c) $ $\Rightarrow a c \# b c $.\\ $c < 0$ is similar.$\Box $ \end{enumerate} \end{prv} \begin{prp}\label{prp_OT3} {\rm Are executed:} \begin{enumerate} \item $ a\cdot b \# 0 \Rightarrow a \# 0 \land b \# 0$. \item $ a+b \# 0 \Rightarrow a \# 0 \lor b \# 0 $. \item $ a\cdot b \# c\cdot d \Rightarrow (a \# c) \lor (b \# d) $. \end{enumerate} \end{prp} \begin{prv} The proof of this statement is based on the results of Proposition \ref{prp_OT2} and similar to the appropriate proof in (\cite[\S 4.1.3]{Heyting_I}). \end{prv} \subsection{Systems of linear equations} All the theorems below are proven by Heyting \cite{Heyting_I}. Their proofs are based on positive properties of apartness relation (Statements \ref{prp_OT2}, \ref{prp_OT3}). Let $A=(a_{ij})$ be the matrix of a system of linear equations with coefficients from $R$. \begin{equation}\label{equ_SLE} \sum_{k=1}^n a_{ik}x_k=b_i \ \ \ (i=1, \ldots, n.) \end{equation} Let $d$ be the determinant of $A$. If $d\# 0$, it is possible to decide the system (\ref{equ_SLE}) using the Cramer's rule \[ x_k=\frac{d_k}{d} \] Decision is unique in a following exact sense: \begin{teo}[ Heyting \S 4.2.1 ]\label{teo_SLE} If $p_1, \ldots, p_n$ are such numbers, that for some $j$ takes place $p_j\# d_j / d$, then it is possible to find such $i$, that \[ \sum_{k=1}^n a_{ik}p_k\# b_i. \] \end{teo} \begin{dfn} {\rm A matrix $A$ has a {\it rank} $r$, if at least one of it minors of the order $r$ is apart from a zero, while all minors of the order $r+1$ are equal to a zero. } \end{dfn} For a system of similar equations \begin{equation}\label{equ_OSLE} \sum_{k=1}^n a_{ik}x_k=0 \ \ \ (i=1, \ldots, m.) \end{equation} we have the theorem. \begin{teo}[ Heyting \S 4.2.4 ] \label{teo_OSLE} If rank of a matrix $A=(a_{ik}) $ is equal $n$, then for any $u_1, \ldots, u_n$, such, that $u_k\# 0 $ at least for one $k$, will be though one $i$, such, that \[ \sum_{k=1}^n a_{ik}u_k \# 0. \] \end{teo} The inverse theorem is valid too. \begin{teo}[Heyting \S 4.2.4 ] \label{teo_OSLE_1} If for any $u_1,\ldots, u_n$, such, that $u_k\# 0 $ and at least for one $k$ left-hand part of a system (\ref{equ_OSLE}) is aparted from a zero, then rank of $A$ is equal $n$. \end{teo} \subsection{$R$-modules with an apartness relation} In this paragraph we give abstract definition of apartness relation on $R$-modules, which generalize properties of apartness relation on $R$ and prove a theorem about dimension of $R$-module's basis. Let $V$ be $R$-module. We shall name the elements of $V$ as {\it vectors}. \begin{dfn}\label{dfn_OTV} {\rm Binary relation $\#$ on $V$, satisfying to conditions: \begin{enumerate} \item $ \bar{a}=\bar{b}\Rightarrow \lnot (\bar{a}\# \bar{b}) $. \item $ \lnot (\bar{a}=\bar{b}) \iff \bar{a}\# \bar{b}$. \item $ \bar{a}\# \bar{b}\Rightarrow \bar{a}\# \bar{c} \lor \bar{c}\# \bar{a}$ \end{enumerate} where $\bar{a} ,\bar{b}, \bar{c} \in V $, will be called an {\it apartness relation} on $V$. 
} \end{dfn} Further we shall give positive concepts, equivalent to classical concepts of linear dependence and linear independence of vectors. These definitions are given by analogy to appropriate definitions of Heyting. \begin{dfn} {\rm Let $V$ be a $R$-module. We shall speak, that vectors $\bar{a}_1, \ldots, \bar{a}_m \in V$ are {\it strongly linearly dependent}, if exists $\lambda_i \in R$ apart from a zero such, that \[ \lambda_1 \cdot \bar{a}_1+\ldots+\lambda_m \cdot \bar{a}_m=0 \] } \end{dfn} \begin{dfn} {\rm Let $V$ be a $R$-module with given on it apartness relation. We shall speak, that vectors $\bar{a}_1, \ldots, \bar{a}_m \in V$ are {\it mutually free }, if from that at least one of $\lambda_i$ apart from a zero, follows, that \begin{equation} \lambda_1 \cdot \bar{a}_1+\ldots+\lambda_m \cdot \bar{a}_m \# 0 \end{equation} } \end{dfn} We shall give the following definition: \begin{dfn} {\rm Let $V$ be a $R$-module with given on it an apartness relation. A system of mutually free vectors such, that any vector from $V$ can be expressed as a linear combination of vectors of this system will be called a {\it basis} of $V$. } \end{dfn} We shall prove the following theorem by analogy to the classical proof which is given in the book \cite{Lang_A}, using our definitions and properties of the ring $R$. \begin{teo}\label{teo_base} Let $V$ be a finitely generated $R$-module. Then, any two bases of $V$ have identical dimension, that is, contain identical number of vectors. \end{teo} \begin{prv} Let $\{v_1, \ldots, v_p\}$ be a basis of $V$, $p\geq 1$. To prove, that any other basis consists from $p$ of elements, is enough to prove that if $\{w_1, \ldots, w_r\}$ is a system of mutually free vectors, then $r\leq p$. Inverse inequality may be proven similarly. We shall prove by induction.\\ As $\{v_1, \ldots, v_p\}$ is a basis, the vector $w_1$ can be recorded as follows \begin{equation}\label{equ_w} w_1=c_1 v_1+\ldots+c_p v_p\ \ \ c_1, \ldots, c_p \in R. \end{equation} As $\{w_1, \ldots, w_r\}$ are mutually free, we have $w_1\# 0$. Assumption, that all $c_i=0$, lead to the contradiction. Consequently \[ \lnot (\bigwedge_{i=1}^p c_i=0) \] Whence, from (\ref{equ_fild}), we shall receive, that exists $i$ such, that $c_i\# 0$. We shall consider, for definiteness, that it is $c_1\# 0$. Then $v_1$ lies in the space generated by $\{w_1, v_2, \ldots, v_p\}$, which coincides with all $V$. We shall show, that the vectors $\{w_1, v_2, \ldots, v_p\}$ are mutually free. Actually, if we shall consider the linear combination $\lambda_1 w_1+\sum_{i=2}^p \lambda_i v_i $, such, that at least one $\lambda_i \# 0$, using (\ref{equ_w}), we shall receive \begin{equation}\label{equ_v} \lambda_1 c_1 v_1+\sum_{i=2}^p (\lambda_i+c_i \lambda_1) v_i. \end{equation} In a force of (\ref{equ_main}), for $\lambda_1 $ we have two cases: \begin{enumerate} \item $\lambda_1 \# 0 $. In this case $\lambda_1 \cdot c_{1} \# 0$ as $c_1$ also apart from a zero. \item $\lambda_1 \in R^\varepsilon $. In this case $\lambda_1 \cdot c_{i} $ also belongs to $R^\varepsilon $, and, hence, $\lambda_i+c_i \lambda_1 \# 0$. \end{enumerate} In any case, from the mutual freedom of vectors $\{v_1, \ldots, v_p\}$, we receive, that the linear combination (\ref{equ_v}) is aparted from a zero, and, hence, the vectors $\{w_1, v_2, \ldots, v_p\}$ are mutually free. We shall assume on a induction, that after suitable renumbering of $v_i$ we have found $w_1, \ldots, w_k\ \ (k < p) $ such, that $\{w_1, \ldots, w_k, v_{k+1}, \ldots, v_p \}$ is a basis of $V$. 
We shall present $w_{k+1}$ as follows \[ w_{k+1}=c_1 w_1+\ldots+c_k w_k+c_{k+1} v_{k+1}+\ldots c_p v_p, \] where at least one $c_i$ apart from a zero. We shall assume, that it $c_{k+1}\# 0$. Using a similar reasons, we shall change $v_{k+1}$ on $w_{k+1}$ and again receive a basis. We shall repeat this procedure till $k$ became equal to $r$. Whence we have, that $r\leq p$.\ Hence the theorem is proved.$\Box $ \end{prv} \subsection{Algebraic properties of the $R^n$} In this paragraph we shall give definition of an apartness relation on $R^n$ and a notion of basis of $R^n$ . First of all let us note, that $R^n$ is an $R$-module in a natural way. \begin{dfn} {\rm A vectors $\bar{a}, \bar{b} \in R^n $ are {\it apart}, $\bar{a}\# \bar{b}$, if $a_i$ and $b_i$ are apart in $R$, at least for one $i$. } \end{dfn} \begin{prp}\label{prp_RnOT} {\rm The relation introduced above is really the apartness relation on $R^n$ in the sense of Definition~\ref{dfn_OTV}. } \end{prp} \begin{prv} Let us check up conditions 1 -- 3 of specified definition. \begin{enumerate} \item Obviously. \item The necessity is follows from (\ref{equ_fild}).\\ The sufficiency is obvious. \item Let $\bar{a}\# \bar{b}$, hence $a_i \# b_i$, at least for one $i$. Let $\bar{c}\in R^n $, then from the properties of apartness in $R$ we have \[ a_i \# c_i \ \lor \ c_i \# b_i. \] Whence we receive, that $\bar{a} \# \bar{c} \lor \bar{c} \# \bar{a}$.$\Box $ \end{enumerate} \end{prv} Further we shall give the theorem from Heyting, which describe properties of mutually free vectors in $R^n$. \begin{teo}[ Heyting \S 4.3.1 ] \label{teo_FV1} For vectors \[ \bar{a}_i=(a_{i1}, \ldots, a_{in}) \ \ \ (i=1,2, \ldots, p) \] were mutually free, is necessary and sufficiently, that the matrix made from their coefficients had rank $p$. \end{teo} Let us consider a system of $n$ mutually free vectors $\bar{e}_1, \ldots, \bar{e}_n$ in $R^n$. Then the matrix made from coefficients of these vectors, under the Theorem \ref{teo_FV1}, has rank $n$ and its determinant apart from a zero. From the Theorem \ref{teo_SLE} follows, that any vector from $R^n$ can be expressed as a linear combination of vectors $\bar{e}_1, \ldots, \bar{e}_n$. >From the Theorem \ref{teo_base} follows, that any basis of $R^n$ consists from $n$ mutually free vectors. Let $\{\bar{e}_1, \ldots, \bar{e}_n\}$ and $\{\bar{f}_1, \ldots ,\bar{f}_n \}$ are two bases in $R^n$. Lay out vectors of the second basis through the first. \begin{eqnarray*} \bar{f}_1 &= & c^1_1 \bar{e}_1+\ldots +c^n_1 \bar{e}_n \\ \ldots & & \ldots \ \ \ \ \ \ \ \ \ \ldots \\ f_j &= & c^1_j \bar{e}_1+\ldots +c^n_j \bar{e}_n \\ \ldots & & \ldots \ \ \ \ \ \ \ \ \ \ldots \\ \bar{f}_1 &= & c^1_n \bar{e}_1+\ldots +c^n_n \bar{e}_n \end{eqnarray*} The matrix $C=(c^i_j)$, made from coefficients of decomposition, is {\it the matrix of transition} from basis $\{\bar{e} _1, \ldots, \bar{e}_n\}$ to basis $\{\bar{f}_1, \ldots, \bar{f}_n \}$. The matrix $C$ is convertible and its determinant apart from a zero. Let $\bar{x}\in R^n $ in a basis $\{\bar{e}_1, \ldots,\bar{e}_n\}$ has the form \[ \bar{x}=x^1 \bar{e}_1+\ldots+x^n \bar{e}_n \] And in basis $\{\bar{f}_1, \ldots, \bar{f}_n \}$: \[ \bar{x}=y^1 \bar{f}_1+\ldots+y^n \bar{f}_n \] Then the coordinates of $x$, in the new and in the old bases, are connected among themselves as follows: \[ x^i=\sum_{j=1}^n c^i_j y^j \ \ \ (i=1, \ldots, n). \] \subsection{The space of linear forms} In this paragraph we will give a definition of apartness relation and of basis of the space of linear forms. 
\begin{dfn} {\rm A {\it linear form} on $R^n$ is a map $f:R^n\rightarrow R$ such, that \[ \forall r\in R\ \ \forall \bar{x}\in R^n:\ f(r\cdot \bar{x}) =r\cdot f(\bar{x}) \] \[ \forall \bar{x},\bar{y}\in R^n\ \ f(\bar{y}+\bar{x})= f(\bar{x})+f(\bar{y}) \] } \end{dfn} \begin{dfn} {\rm The {\it space of all linear forms} $R^{n*}$ is the subobject of $R^{R^n}$ with a structure of $R$-module on it given by formulas \[ (f+g)(\bar{x})=f(\bar{x})+g(\bar{x})\] \[ (r\cdot f)(\bar{x})=r\cdot f(\bar{x}) \] } \end{dfn} Thus $R^{n*}$ has a structure of $R$-module. Let $\{\bar{e}_1, \ldots, \bar{e}_n\}$ be a basis of $R^n$. We shall define the linear forms $f^i$ as follows: \[ f^i(\bar{e}_j)=\delta^i_j. \] Obviously, an any linear form $h$ can be expressed as a linear combination of $f^i$'s: \[ h (\bar{x})=\sum_{i=1}^n h_i f^i (\bar{x}), \] where $h_i=h (\bar{e}_i) $. \begin{dfn} {\rm We shall speak, that $f\in R^{n*}$ {\it is aparted from a zero(linear form) } and write $f\# 0$, if exists $i$ such, that $f(\bar{e}_i)\# 0$, where $\{\bar{e}_1, \ldots, \bar{e}_n \}$ is basis of $R^n$.\\ $f\# g \iff f-g\# 0$. } \end{dfn} \begin{note} This definition does not depend on choice of basis. Really, let $\bar{e}_i $ be such basis vector that $f(\bar{e}_i) \# 0 $ and \[ \bar{a}_i=c_1 \bar{h}_1+\ldots+c_n \bar{h}_n \] in a new basis $\{\bar{h}_1, \ldots, \bar{h}_n \}$. Then we have \[ f(\bar{e}_i)=c_1 f(\bar{h}_1)+\ldots +c_n f(\bar{h}_n)\] Whence from the Proposition \ref{prp_OT3} follows that $f(\bar{h}_j) \# 0$. \end{note} \renewcommand{\theenumi}{\arabic{enumi}} \begin{prp} {\rm The relation introduced above is the apartness relation in $R^{n*}$ in the sense of Definition~\ref{dfn_OTV}. } \end{prp} \begin{prv} Proof is similar to the proof of Statement \ref{prp_RnOT}.$\Box $\\ \end{prv} We have that $R^{n*}$ is a $R$-module with apartness relation, hence, a notions of a mutual freedom and of a basis are defined on $R^{n*}$ and the theorem \ref{teo_base} is valid. Also valid the following theorem \begin{teo}\label{teo_Rank} {\rm The linear forms $g^1, \ldots, g^p \in R^{n*}$ are mutual free iff rank of a matrix \[ \left( \begin{array}{ccc} g^1(\bar{e}_1) & \ldots & g^1(\bar{e}_n) \\ \ldots & \ldots & \ldots \\ g^p(\bar{e}_1) & \ldots & g^p(\bar{e}_n) \end{array} \right) \] is equal to $p$. } \end{teo} \begin{prv} The direct consequence of the Theorems \ref{teo_OSLE}, \ref{teo_OSLE_1}. $\Box $\\ \end{prv} \begin{clr} System of linear forms $f^i$ is basis of $R^{n*}$, which is {\it dual} to the basis $\{\bar{e}_1, \ldots, \bar{e}_n \}$. \end{clr} Let $\{\bar{e}_1, \ldots, \bar{e}_n \}$ be a basis of $R^n$ and $\{f^1, \ldots, f^n\}$ its dual basis of $R^{n*}$. Let $y\in R^{n*}$ and \[ y=y_1 f^1+\ldots+y_n f^n \] Let $\{\bar{h}_1, \ldots, \bar{h}_n \}$ be another basis of $R^n$ and $\{j^1, \ldots, j^n\}$ its dual basis of $R^{n*}$. Let \[ y=z_1 j^1+\ldots+z_n j^n \] Then the coordinates of $y$ in the new and old bases are connected as follows \[ z_l=\sum_{s=1}^n c^s_l y_s \ \ \ (l=1, \ldots, n) \] \section{Metric in synthetic differential geometry} In this section we shall define metric concepts within a context of SDG. \vspace{3\baselineskip} \subsection{The metric properties of $R^n$} In this paragraph we shall define metric concepts on $R^n$. 
\begin{dfn} {\rm A map $(\cdot ,\cdot):R^n\times R^n \rightarrow R$ which satisfies the following conditions: \begin{enumerate} \item $\bar{x} \# 0 \Rightarrow \exists \bar{y} \in R^n: $ $ (\bar{x}, \bar{y} )\# 0 $\\ $\bar{x} =0 \Rightarrow (\bar{x} , \bar{x} )=0 $ \item $(\bar{x}, \bar{y})=(\bar{y}, \bar{x}) $ \item $(\bar{x} +\bar{y} ,\bar{z})=(\bar{x} ,\bar{z}) +(\bar{y} ,\bar{z}) $ \item $(\lambda \cdot \bar{x} ,\bar{y})=\lambda \cdot (\bar{x} ,\bar{y}) $ \end{enumerate} where $\lambda \in R$\ , \ $\bar{x}, \bar{y}, \bar{z}\in R^n$, will be called a {\it scalar product} on $R^n$. } \end{dfn} So as $R$ is a Pythagorean (\ref{equ_Pyth}) and a formal real (\ref{equ_freal}) ring we may define a norm of vector as follows \begin{dfn} {\rm Let $\bar{x}\in R^n$ such that $\bar{x}\# 0 $.Then a number $ \| \bar{x} \|=\sqrt{(\bar{x} ,\bar{x})}$. will be called a {\it norm} of the vector $\bar{x}$. } \end{dfn} \begin{dfn} {\rm We shall speak, that the vectors $\bar{x} ,\bar{y} \in R^n$, such that $\bar{x} \# 0, \bar{y} \# 0$, are {\it orthogonal} \ if $(\bar{x}, \bar{y})=0$. } \end{dfn} We shall call the $R^n$ with a scalar product $(\cdot, \cdot)$ as Euclidean space if $(\bar{x}, \bar{x} ) > 0$ for all $\bar{x} \# 0$ and as pseudo-Euclidean if $(\bar{x}, \bar{x} )$ may be both positive and negative. Let $\{\bar{e}_1, \ldots, \bar{e}_n\}$ be a basis of $R^n$ . We shall denote $g_{ij}=(\bar{e}_i, \bar{e}_j) \ \ \ (i,j=1, \ldots, n)$, The scalar product $\bar{x}, \bar{y}\in R^n$ can be recorded as \[ (\bar{x} ,\bar{y})=\sum_{i,j=1}^n g_{ij}\cdot x^i y^j. \] \begin{teo}\label{teo_det} Determinant of the matrix $\{ g_{ij}\}$, apart from a zero. \end{teo} \begin{prv} We shall determine the following linear forms by formulas: \[ \ell^i (\bar{x})=(\bar{e}_i ,\bar{x}). \] Let us show, that $\ell^i (\bar{x}) $ are mutually free. For this purpose is enough to show that the form $\ell(\bar{x})$, defined as the linear combination \begin{equation}\label{equ_sum} \ell (\bar{x})=\sum_{i=1}^n \lambda_i \ell^i (\bar{x}) \end{equation} where at least one $\lambda_i \# 0$, is aparted from a zero. By linearity we can write \[ \ell(\bar{x})=(\bar{a} ,\bar{x}) \] where $\bar{a}=\sum_{i=1}^n \lambda_i \bar{e}_i.$ In a force of the definition of $\lambda_i $ we have that $\bar{a}$ apart from a zero, and consequently $(\bar{a} ,\bar{a}) \# 0 $. Thus \[ \ell (\bar{a}) \# 0 \] From (\ref{equ_fild}) may be deduce that exists $\bar{e}_i$ such, that \[ \ell (\bar{e}_i) \# 0.\] Hence $\ell (x) $ is aparted from a zero. It means, that $\ell^i (\bar{x}) $ are mutually free and, under the Theorem \ref{teo_Rank} \[ \det{ \{ g_{ij}\} }\# 0.\] $\Box $ \end{prv} \subsection{Tangent bundle of $R^n$} As in \cite{Kock_SDG} we shall assume that a tangent bundle to $R^n$ is the object ${R^n}^D$ (exponential object). Let us denote it as $TR$. >From Axiom 1 follows that $TR \cong R^n\times R^n$, whence a tangent vector to $R^n$ in a point $a=(a_1, \ldots, a_n) $ is a map $t:D\rightarrow R^n$ of the form \[ t(d)=(a_1+d\cdot b_1, \ldots, a_n+d\cdot b_n) \] where $\bar{b}=(b_1, \ldots, b_n) \in R^n$. The {\it main part} $\gamma:TR\rightarrow R^n $ is a map such that $\gamma (t)=\bar{b}$.It establishes isomorphism of $R$-modules $T_aR^n \stackrel{\gamma}{\cong}R^n$. \begin{dfn} {\rm We shall speak, that a vector $t\in TR^n$ is {\it apart from a zero} ($t\#0 $), if the main part $\gamma (t) $ apart from a zero in $R^n$. } \end{dfn} We shall give definition of scalar product in $T_aR^n$. 
\begin{dfn} {\rm As {\it scalar product} of two tangent vectors to $R^n$ in a point $a$ we shall name scalar product of their main parts in $R^n$, that is \[ <\cdot ,\cdot>:TR^n \times_{R^n}TR^n \stackrel{\gamma \times \gamma}{\longrightarrow}R^n \times R^n \stackrel{(\cdot, \cdot)}{\longrightarrow}. \] Let $\|t\|=\sqrt{< t, t >}$ be a {\it norm(module)} of a vector $t$, such, that $t\#0 $. } \end{dfn} \begin{dfn} {\rm We shall name a map $c:[a,b] \rightarrow R^n$ as {\it curve} on $R^n$. } \end{dfn} \begin{dfn} {\rm The tangent vector $\dot{c}(t):D\rightarrow R^n$, such that \[ \dot{c} (t)(d)=c(t+d)=(c_1 (t+d,\ldots, c_n (t+d) \] \[ =(c_1 (t), \ldots, c_n (t))+d\cdot (c_1'(t),\ldots ,c_n'(t))= c(t)+d\cdot c' (t).\] we shall name as {\it speed vector} of a curve $c$ in a point $t\in [a,b]$} \end{dfn} The module of a speed vector defines a map $\| \dot{c}\|:[a,b] \rightarrow R $ \[ \| \dot{c} (t) \|=\sqrt{< \dot{c}(t), \dot{c}(t) >} \] \begin{dfn} {\rm A {\it length of a curve} $c:[a,b]\rightarrow R^n $ such, that $\dot{c}(t) \# 0\ \ \ \forall t\in [a,b] $, given on a interval $[a, b]$ such, that $a\leq b$ is integral from a module of its speed vector, i.e. $\ell (c)=\int\limits_a^b \| \dot{c}(t) \| dt $. } \end{dfn} \begin{exm} Let $f:[a,b] \rightarrow R$ where $a\leq b$. We shall find a length of the curve $c(t)=(t,f(t))$ which is a graph of the function $f$. We have $\dot{c}(t)=(t, f(t))+(d, d\cdot f'(t)) $, hence length $\ell(c)=\int\limits_a^b \sqrt{1+f'^2 (t)}dt$.\\ Notice, that in this case $\dot{c}(t) \# 0\ \ \ \forall t\in [a,b] $ because $1^2+f'^2 (t) \# 0$. \end{exm} \begin{prp} {\rm The length of a curve does not depend from parametrization.} \end{prp} \begin{prv} Follows in a standard manner from properties of replacement of variables in integral.$\Box $ \end{prv} \subsection{An element of curve's arch} In this paragraph we shall deduce the classical formula for differential of a element of curve's arch on the plane $R\times R$. At the beginning we shall give the following definition: \begin{dfn} {\rm Let $M$ be an arbitrary object in $\cal E$\, and $f:M\rightarrow R$. The {differential} of $f$ is the composition \[ df:M^D\stackrel{f^D}{\longrightarrow}R^D \stackrel{\gamma}{\longrightarrow}R, \] where $\gamma$ is the main part. } \end{dfn} Let us consider a curve $c:[0,1] \rightarrow R $. In coordinates it is $c(t)=(x(t), y(t)) $. We shall assume, that a speed vector $\dot{c}(t) \# 0\ \ \ \forall t\in [0,1] $. We have \[ \dot{c}(t)=c(t+d)=c(t) + d\cdot c'(t),\] where $c'(t)=(x'(t), y'(t))$. The length of the curve $c$ will be equal to \[ \ell(c)=\int\limits_0^1 \sqrt{\| \dot{c}(\tau) \| } d\tau=\int\limits_0^1 \sqrt{x'^2 (\tau)+y'^2 (\tau)}d\tau. \] We shall define a {\it length of curve's arch} $s:[0,1] \rightarrow R$ as \[ s(t)=\int\limits_0^t \sqrt{x'^2 (\tau)+y'^2 (\tau)}d\tau. \] On property of integral with a variable bound we have \[ s'(t)=\sqrt{x'^2 (t)+y'^2 (t)}. \] Hence, we receive, that \[ s(t+d)=s(t)+d\cdot \sqrt{x'^2 (t)+y'^2 (t)}\ \ \ \forall d\in D \] Now let us consider differential of $s(t)$. On definition it is a map \[ds:[0,1]^D\stackrel{s^D}{\longrightarrow}R^D \stackrel{\gamma}{\longrightarrow}R. 
\] \[ ds(a+d\cdot b)=\gamma(s(a+d\cdot b)=\] \[=\gamma (s (a)+d\cdot b\sqrt{x'^2 (t)+y'^2(t)}) =b\cdot \sqrt{x'^2 (t)+y'^2 (t)} \] Similarly for differentials $dx$ and $dy$ we have: \[ ds(a+d\cdot b)=b\cdot \sqrt{x'^2 (t)+y'^2 (t)} \] \[ dx(a+d\cdot b)=b\cdot x' (t) \] \[ dy(a+d\cdot b)=b\cdot y' (t) \] Hence, using operations of addition and multiplication of functions from $R^{[ 0,1 ]}$, we receive the classical formula for differential of a element of curve's arch: \begin{equation} ds^2=dx^2+dy^2 \end{equation} Having conducted replacement of coordinates $x(u,v) ,y(u,v)$, by similar reasons, it is possible to show, that \[ dx=\frac{\partial x}{\partial u}\cdot du+ \frac{\partial x}{\partial v}\cdot dv \] \[ dy=\frac{\partial y}{\partial u}\cdot du+ \frac{\partial y}{\partial v}\cdot dv \] and to deduce the formula: \begin{equation} ds^2=E\cdot du^2+2 F\cdot du dv+G\cdot dv^2 \end{equation} \[ E=(\frac{\partial x}{\partial u})^2 +(\frac{\partial y}{\partial u})^2 \] \[ F=\frac{\partial x}{\partial u}\cdot\frac{\partial x}{\partial v}+ \frac{\partial y}{\partial u}\cdot\frac{\partial y}{\partial v} \] \[ G=(\frac{\partial x}{\partial v})^2+(\frac{\partial y}{\partial v})^2 \] \subsection{Riemannian structure on formal manifold} A notion of formal manifold in SDG is a generalization of a classical notion of a $C^\infty$-manifold. In this paragraph we shall show, that on formal manifold it is possible to develop a Riemannian geometry. Let $M$ be a $n$-dimensional formal manifold\cite{Kock_SDG} and $\{U_i\stackrel{\varphi_i}{\longrightarrow}M\}$ be a cover of $M$ by formally etale monomorphisms, where $U_i$ are model objects, i.e. formal etale subobjects in $R^n$. The pair $(U_i,\varphi_i) $ will be called {\it a local card} on $M$. So as $\varphi $ is monomorphism, it is convertible on the image $\varphi (U) $, and, hence, $\varphi^{-1}:\varphi (U) \rightarrow U $ is determined. We shall consider tangent bundle $TM=M^D$. As $M$ is a formal manifold, is valid, that $T_{p}M\cong R^n$ for each $p\in M$. Let $v:D\rightarrow M$ be a tangent vector to $M$ in a point $p$ and $(U,\varphi) $ be a local card such, that $\varphi^{-1}(p)=0$. In this case $\varphi^{-1}\circ v$ is a tangent vector to $U$ in $0$. Since $U$ is subobject of $R^n$ it is valid that $TU\cong U\times R^n$ and, hence, the vector can be recorded as $\varphi^{-1} \circ v(d)=(0,\ldots ,0)+d\cdot (v_1, \ldots, v_n)$, where $(v_1,\ldots ,v_n)\in R^n$. We shall denote through $\partial_i$ vectors \[ \partial_i(d)=(0, \ldots , 0)+d\cdot (0, \ldots , \stackrel{i}{1}, \ldots , 0) \] The vectors $\partial_i \circ \varphi$ will form a basis of $T_{p}M$. \begin{dfn} {\rm We shall speak, that vectors $u, v\in T_{p}M$ are {\it aparted } if $(\varphi^{-1}\circ u) \# (\varphi^{-1}\circ v) $ in $T_{0}U$. } \end{dfn} \begin{dfn}\label{dfn_Riem} {\rm Let $M$ be a formal manifold. A map $g:TM\times_M TM\rightarrow R $ will be called {\it a metric tensor (a Riemannian structure)} on $M$ if following conditions are executed. \begin{enumerate} \item $v\# 0 \Rightarrow \exists u: g(v, u)\# 0 $\\ $v=0 \Rightarrow g(v, v)=0 $ \item $g(v, w)=g(w, v) $ \item $g(u+v, w)=g(u, w)+g(v, w) $ \item $g(\lambda \cdot v, w)=\lambda \cdot g (v, w) $ \end{enumerate} where $\lambda \in R$, $v, w, u \in TM $ so that $v(0)=w(0)=u(0)$. } \end{dfn} \begin{dfn} {\rm Let $v\in TM $ such, that $v\# 0$. Then a number $\| v\| =\sqrt{g(v,v)}$ will be called a {\it norm} of a vector $v$. 
} \end{dfn} We shall call the M with a metric tensor $g$ as Riemannian space if $g(v, v) > 0$ for all $v \# 0$ and as pseudo-Riemannian if $g(v, v)$ may be both positive and negative. Let us consider a tangent space $T_{p}M$ for some $p\in M$. Then a map $g^p:T_{p}M\times T_{p}M\rightarrow R $ defined as $g^{p}(u,v)=g(u,v)$, for $u, v\in T_{p}M$, is a scalar product on $T_{p}M$. We shall denote \[ g^p_{ij}=g^p(\partial_i ,\partial_j).\] In a force of the Theorem \ref{teo_det} we have that $det{(g^p_{ij})}\# 0 $. For any $u, v\in T_{p}M $ we have \[ g^p (u, v)=g^p_{ij}\cdot u^i v^j, \] where $ u^i, v^j$ are coordinates of vectors $u, v$ at decomposition on basis $\{ \partial_i \}$. As well as in case of $R^n$ it is possible to define a curve on $M$ as a map $c:[a,b] \rightarrow M$, with a speed vector in a point $t\in [a,b] $ equal to $\dot{c}(t)(d)=c(t+d)$. We shall assume, that $a\leq b$ and $\dot{c}(t) \# 0 \ \ \ \forall t\in [a,b]$.Then it is possible to define a length $\ell(c)$ of a curve $c(t)$ as \[ \ell (c)=\int\limits_a^b \sqrt{g (\dot{c}(t), \dot{c}(t))}dt \] \begin{note} It is interesting to note, that in general we can't define on $M$ internal metric $\rho $ as \[ \rho (p, q)=\inf_{c p\frown q}{\ell (c)}. \] The reason is that $R$ is not order complete, and therefore the existence of $\inf_{c p\frown q}{\ell (c)}$ needs to be proved positive. \end{note} \subsection{Models of Riemannian structures on formal manifolds} The well adapted model\cite{Kock_SDG} of SDG is in such category $\cal E$\ that exist functor $i:Mf\rightarrow$ $\cal E$\ from a category of $C^\infty$-manifolds to $\cal E$, which allow to compare a classical differential geometry with a synthetic one. In this paragraph we shall show, that in well adapted models exist a Riemannian structure on formal manifolds of the kind $M=i({\cal M})$. Let us consider a well adapted model $i:Mf\rightarrow$$\cal E$, there $Mf$ is a category of $C^\infty$-manifolds. Let $\cal R$ be the field of real numbers with a natural Riemannian structure on it and $\cal M$ be a $C^\infty$-manifold with given on it a Riemannian structure $g:T{\cal M}\times_{\cal M}T{\cal M}\rightarrow{\cal R}$ \begin{enumerate} \item $v\# 0 \Rightarrow \exists u: g(v, u)\# 0 $\\ $v=0 \iff g(v, v)=0 $ \item $g(v,w)=g(w,v) $ \item $g(u+v, w)=g(u,w)+g(v,w) $ \item $g(\lambda \cdot v, w)=\lambda \cdot g(v,w) $ \end{enumerate} Where $\lambda \in \cal R$, and $v, w, u \in T{\cal M}$ such that $v(0)=w(0)=u(0) $. We shall assume, that $g$ is $C^\infty$-mapping. It means, that \[ g\in Hom_{Mf}(T{\cal M}\times_{\cal M}T{\cal M},{\cal R}). \] We shall consider $i(g):i(T{\cal M} \times_{\cal M} T{\cal M}) \rightarrow i({\cal R}) $. The diagram \begin{eqnarray*} T\cal M\times_{\cal M} T\cal M & \longrightarrow & T\cal M \\ \downarrow & & \downarrow \pi \\ T\cal M & \stackrel{\pi }{\longrightarrow} & \cal M \end{eqnarray*} is transversal pull back in $Set$, hence, in a force of Axiom A of well adapted models\cite{Kock_SDG}, it is preserved by $i$, i.e \[ i(T{\cal M}\times_{\cal M}T{\cal M})=i(T{\cal M}) \times_{i({\cal M})} i(T{\cal M}). \] We shall denote $M=i({\cal M}), R=i({\cal R}) $. It is known that $M$ is a formal manifold. By virtue of Axiom C of well adapted models\cite{Kock_SDG} we have that $i(T{\cal M}) \cong Ti({\cal M})=TM$. In a result we receive, that \[ i(g):TM\times_M TM\rightarrow R \] The conditions (1) - (4) of the Riemannian structure $g$ on $\cal M$ can be expressed in the form of commutativity of the appropriate diagrams in $Mf$. 
Functor $i:Mf\rightarrow$ $\cal E$\ save them and, hence, the conditions of Definition \ref{dfn_Riem} of a Riemannian structure on formal manifold will be executed for $i(g)$. Thus we have shown, that on formal manifolds of kind $i({\cal M})$, in well adapted models, exist a Riemannian structure of kind $i(g)$. \section{Einstein's equations of a field in SDG} In this section we show that it is possible to write Einstein's equations of a field in SDG. For this we need in some new notions of SDG. \subsection{Connection in SDG} In this paragraph we shall give a notion and some properties of connection in SDG. This results are taken from \cite{KockReyes_Conn}. First of all let us make note about tensors in SDG. For any $R$-modules $U$ and $V$ we can use a classical definition of qtensor product \cite{Kasch}. For this definition all algebraic properties will be valid and all algebraic operations will be definable. Hence they will be valid for $R^n$. Moreover, from the Theorem \ref{teo_det} follows that the operations of rising and lowering indexes are definable for $R^n$ too. Let us see an object $M$ in $\cal E$. We define an infinitesimal object $D \lor D$ as \[ D \lor D=\{(x, y)\in R\times R\ |\ x\cdot y=0 \} \] It easy to see that \[ D\lor D \subseteq D\times D \subseteq R\times R.\] We denote the inclusion $D\lor D \subseteq D\times D$ as $j$. Let us see the object $M^{D \lor D}$. For infinitesimal linear object \cite{Kock_SDG} $M$ ( for a example for formal manifold ) we have \[ M^{D \lor D} \cong M^D \times_M M^D.\] So we can see the elements of $M^{D \lor D}$ as a "crosses" of tangent vectors in each point of $M$. Let ${M^j}$ be the restriction map \[ M^{D\times D} \stackrel{M^j}{\longrightarrow} M^{D\lor D}.\] Then we give the following \begin{dfn} {\rm The connection $\nabla$ on a tangent bundle $M^D \stackrel{\pi}{\longrightarrow} M$ of the object $M$ is a section of the restriction map $ M^j$, i.e $ M^{D\lor D} \stackrel{\nabla}{\longrightarrow} M^{D\times D}$. } \end{dfn} Geometrically, this definition may be understood as a complementation of a "cross" to a infinitesimal "net" (an element of the object $M^{D\times D} $ ) or as parallel transport of the second vector along the infinitesimal segment of the line given by the first vector ( elements of the $ M^{D\lor D} $). Bases on this definition in \cite{KockReyes_Conn} are given a definition of curvature $k$ of a connection $\nabla$ on a tangent bundle $\pi:M^D\rightarrow M$ as a map \[ k:(M^D)^{D\times D}\longrightarrow M^D .\] In a case than $M$ is a etale subobject $U\rightarrow R^n$ in $R^n$ it is possible to write a connection on tangent bundle $\pi:U^D\rightarrow U $ in coordinates\cite{KockReyes_Conn}. More exact, connection $\nabla$ became a map \[ U^D\times_U U^D \cong U\times R^n\times R^n \stackrel{\nabla}{\longrightarrow} U\times R^n\times R^n\times R^n \cong U^{D\times D}.\] Sine connection is a section of the map $M^j$, which in coordinates has form \[ (u,v_1,v_2,v_3)\longmapsto (u,v_1,v_2),\] it can be written as \[ \nabla(u,v_1,v_2) = (u,v_1,v_2,\bar{\nabla} (u,v_1,v_2)).\] If $\nabla$ is affine connection\cite{KockReyes_Conn} then $\bar{\nabla} (u,v_1,v_2))$ is bilinear from $v_1,v_2$. And hence, can be defined by 3$n$ indexed family of functions $\Gamma_{ij}^k:U\rightarrow R$. 
If $u\in U$ and $\{e_i\}$ is the canonical base in $R^n$ then, as it show in \cite{KockReyes_Conn}, the curvature $k$ of a connection $\nabla$ has form \[ R_{kij}^l = \frac{\partial}{\partial x^i} \Gamma_{jk}^l (u) - \frac{\partial}{\partial x^j} \Gamma_{ik}^l (u) + \sum_{\alpha} \Gamma_{ik}^{\alpha} (u)\cdot \Gamma_{j \alpha}^l (u) - \Gamma_{jk}^{\alpha} (u)\cdot \Gamma_{i \alpha}^l (u), \] where $\Gamma_{ij}^k (u)$ is a $k$-th coordinate of $\bar{\nabla} (u,e_i,e_j)$. This formula is equivalent to the classical one. \subsection{Riemann -- Christoffel's tensors} In this paragraph we define a tensor of curvature on a formal manifold with a metrical tensor. Let us see a formal manifold $M$ with a metrical tensor $g$. Since each local card of $M$ is a etale subobject of $R^n$ we can define connection on it by given $\Gamma_{ij}^k$. Let $U$ be a local card. We define $\Gamma_{ij}^k$ (Christoffel's symbols of a second kind) in a classical manner using a coefficients of metrical tensor. For this we define Christoffel's symbols of a first kind by formulas \[ [ij, k] \equiv \frac{1}{2} \left( \frac{\partial g_{ik}}{\partial x^j} + \frac{\partial g_{jk}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^k} \right)\ \ \ (i, j, k = 1,\ldots ,n), \] where $g_{ij}$ are coefficients of metrical tensor in $U$. And then we define $\Gamma_{ij}^k$ by formulas \[ \Gamma_{ij}^k \equiv g^{k\alpha} [ij, \alpha],\] where $g^{k\alpha}$ are coefficients of contvariant metrical tensor in $U$. So we define the connection all local cards. Hence we define connection $\nabla$ on $M$. The coefficients $R_{kij}^l$ of the curvature $k$ defines co called Riemann -- Christoffel's tensors of the second kind. Lowering down indexes we shall receive associated tensor \[ R_{ijkl} \equiv g_{i\alpha} R_{jkl}^\alpha, \] Riemann -- Christoffel's tensors of the first kind. >From the definition of these tensors follows that they possessed of all classical properties of symmetry under the indexes change. \subsection{Einstein's equations} Let us see 4 dimension pseudo-Riemann formal manifold $M$ with metric tensor $g$ and let $\nabla$ be the connection on $M$ constructed as above. Using the tensor operations we may define Ricci's tensor $R_{ij}$ by formula $R_{ij} = R_{ij\alpha}^\alpha$ and Einstein's tensor by formula $ G_j^i \equiv R_j^i - \frac{1}{2} \delta_j^i R$, where $R \equiv g^{ij} R_{ij}$ and $R_j^i = g^{ki} R_{kj}$. Having the definitions of this tensors me may write the Einstein's equations of a field as \[ - \kappa T_{ij} = R_{ij} - \frac{1}{2} R g_{ij},\] where $T_{ij}$ is tensor of energy-impulse and $\kappa$ is a constant. So we have shown that SDG may be viewed as base for consideration of general theory of relativity. Particular it gives an ability to construct an intuitionistic models of general relativity in a toposes which are the well adapted models for SDG.
2,869,038,156,139
arxiv
\section{Introduction} A quantum computer is a device engineered to utilize the complexity of a many-particle wavefunction for the purpose of solving computational problems. For specific problems, quantum algorithms are predicted to surpass the ability of classical information processing \cite{ShorsAlgorithm, FeynmanQC, GroversAlgorithm, AaronsonBosonSample2011, QuantSimResources, HarrowLinearSysEqn2009} but the computational space solvable to quantum algorithms has yet to be rigorously explored due to the absence of a working physical architecture. Experimental implementations of small quantum algorithms in systems containing under $10$ qubits have been exhibited in a variety of architectures \cite{DJ3QNMR, ShorQCNMR, DJQCatomic, DJQCphotonic, QFTQCatomic, GroversQCatomic, GroverDJQCsuperconducting, DJQCNVcenter, ShorsQubitRecycling, ShorQCatomiceff, DebnathSmallQCIons2016}. However, realization of a large-scale algorithm consisting of hundreds or thousands of qubits will require protocols that protect the quantum states from sources of decoherence. Quantum error correction (QEC) is a viable method for protecting of quantum states from sources of decoherence \cite{QEC1,QEC2,QEC3}. Error correction routines embed logical qubits into subspaces of a multi-qubit Hilbert space and uses active feedback to remove entropy from the system. An enticing selection for an error correction protocol is the surface code \cite{SCorig} which exhibits an error correction threshold in the circuit model between $0.5\% - 1 \%$ for depolarizing Pauli noise \cite{RaussendorfClusterState12007,RaussendorfClusterState22007,FowlerSurfaceCodeThresh2009,SCThresholdsStephens2014}. This threshold represents the error rate below which logical gates and memories can be made arbitrarily good by increasing the distance of the surface code. Here we examine the smallest surface code with nine data qubits and eight ancilla qubits, known as surface-17 \cite{Tilted13SC,Tilted17SC,TomitaLowDSC2014}. In principle only a single ancilla qubit could be used over and over, but the gains from parallelism are even apparent in studies comparing 8 ancilla qubits to 6 ancilla qubits \cite{TomitaLowDSC2014}. With 10-20 qubits, a number of QEC codes can be implemented fault-tolerantly including the 5-qubit code \cite{Laflamme5qubit1996}, Steane [[7,1,3]] \cite{SteaneSteaneCode1996,SteaneCSScodes1999}, Bare [[7,1,3]] \cite{LiBareAnc2017}, the Bacon-Shor [[9,1,3]] \cite{BaconBaconShor2006,ShorBaconShor1995}, or the twisted surface \cite{YoderSurfaceCodeTwist2017} code. We chose to study the surface code because the memory pseudothreshold, the error rate below which the encoded qubit outperforms the physical qubit, is superior to the 5-qubit code, the Steane code, and the Bare code, and comparable to the Bacon-Shor and twisted surface code \cite{YoderSurfaceCodeTwist2017}. Atomic ions have proven to be high-fidelity qubits for quantum information processing. The internal states of the ions are controlled by the application of electromagnetic radiation with lasers \cite{HayesEntangleOptComb2010} or microwaves \cite{OspelkausMicrowaveGates2011,WarringIndividMicroAddr2013}. Two-qubit gates are performed by conditionally exciting the coupled motion of ions in the chain dependent on the ion's internal states \cite{CiracZollerGate,MolmerSorensenHotIons,BellStatePrep,GatesWarmIons}. 
The normal modes of motion are nonlocal allowing interactions between any ions in the chain without requiring additional overhead from moving information through local couplings \cite{SCqubitsNature} or storing qubits in auxiliary states \cite{ShorsSuperconducting}. This arbitrary connectivity without altering the intrinsic nature of the qubit adds modularity at the hardware level which relaxes software constraints on compilation when building up high level algorithms from the hardware primitives \cite{DebnathSmallQCIons2016, Linke5qubitComp2017}. Qubits can be encoded into either optical states \cite{RoosCaOpticalQubit1999}, Zeeman states \cite{SpectRbZeemanQubit2011,RusterCaZeemanQubit2016}, or hyperfine states of the ions \cite{BallanceControlHFQubits2016,BrownHighFidBe2011,NoekHighFidYb2013}. For this study, information will be stored in the hyperfine ``clock" states of $^{171}$Yb$^{+}$. While single-qubit operations in this system have displayed error rates below the surface code pseudothreshold \cite{CompPulseYb, NoekHiFidSPAMYb2013} reported from Tomita and Svore \cite{TomitaLowDSC2014}, two-qubit gate fidelities are limited by a number of factors including spontaneous Raman scattering during gates and residual entanglement between the internal state and the motional modes of the ion. Compensation pulses have been developed with a predicted error rate due to scattering of $10^{-4}$ \cite{SpontScatErrYb} and control sequences have been implemented exhibiting single- and two-qubit gate fidelities of $99.9\%$ using the hyperfine ground states of trapped $^9$Be$^+$ \cite{BrownHighFidBe2011} and $^{43}$Ca$^{+}$ \cite{BallanceControlHFQubits2016} ions, respectively. However, quantum control applied to a scaled-up five-ion chain consisting of $^{171}$Yb$^{+}$ qubits currently exhibits two-qubit error rates of $2\%$ \cite{DebnathSmallQCIons2016}. This current error rate is well above the reported pseudothreshold for the surface-17 code, but gates with 99.9\% fidelity should be achievable. Atomic ion experiments have already demonstrated classical error correction \cite{ChiaveriniBitFlipZZXXZX2004,SchindlerRepCpde2011}, encoding logical states for quantum error correction \cite{NiggSteaneEnc2014}, and fault-tolerant quantum error detection \cite{Linke422Ions2016}. In addition, multiple theoretical studies have examined implementation of quantum error correcting codes with trapped ions. Architectural studies with large distance codes have looked at chains of ions connected by shuttling \cite{WinelandExpIssueIons1998,KielpinskiQCCD2002,LekitscheMicroQCCD2017} or by optical interconnects \cite{ChristophEntanglePhotonIon2003,MoehringEntangleIonQubitPhoton2007,DuanEntanglePhotonColloq2010,NickersonTopQCNoisyNetwork2013,MonroeMUSIQC2014}. Studies of smaller codes include the Steane [[7,1,3]] code in a two-dimensional shuttling architecture \cite{TomitaSteaneIonTrap2013} and more recently a linear shuttling architecture \cite{AbuNadaSteaneIonTrap2014,BermudezITQCwithSteane2017}. The work of Tomita and Svore \cite{TomitaLowDSC2014} used optimistic ion trap parameters to study the surface-17 code on a linear chain. Here we consider a more realistic model based on a near-term implementation in a linear chain of Yb$^{+}$ ions. Our model includes many additional physical details such as the necessity to physically separate measured ions from data ions and the distance dependence of two-qubit gates along the ion chain. 
This study provides an assessment of the feasibility of implementing the surface-17 error correcting code on a linear trap holding a chain of $^{171}$Yb$^{+}$ ions. Furthermore, this study will provide target fidelities for experimentalists to realize error correction with the 17-qubit surface code. The paper is structured in the following manner. First, a resource efficient implementation of the surface code, the 17-qubit rotated surface code, is explained. Following that, the ion trap architecture will be defined and a map between the linear ion chain and the two-dimensional surface code is provided. The remainder of the paper will focus on error correction. Efficiently simulable models of ion trap error sources will be outlined followed by an examination of results from decoding methods tailored for such errors. \section{The 17-Qubit Surface Code} The surface code allows for high-threshold fault-tolerant quantum computation in a two dimensional architecture \cite{SCorig,SCThresholdsStephens2014,RaussendorfClusterState12007,RaussendorfClusterState22007,FowlerSurfaceCodeThresh2009}. The surface code is constructed by a square lattice arrangement of data qubits where the faces of the lattice represent the stabilizer generators of the error correcting code with $X$ and $Z$ type weight-4 stabilizers alternating in a checkerboard-like pattern throughout the lattice. In this arrangement, measurements are local and logical operators are non-local operators that span the surface and terminate on one of two types of boundaries, an $X$ and a $Z$ type, which label the type of stabilizers occupying the four terminating edges of the planar code. There are two choices of edge operators: weight-3 triangles or weight-2 links depending on how the bulk of the surface code is oriented \cite{Tilted13SC}. The logical $Z$ ($X$) operator spans the two $Z$ ($X$) boundaries. The code distance, the weight of the lowest weight Pauli operator that maps elements of one logical basis state to another state, has an intuitive representation as the length (in number of data qubits) of the boundaries of the square lattice arrangement of the code. \begin{figure}[!b] \centering \begin{subfigure}[b]{0.25\textwidth} \includegraphics[width=\textwidth]{17Qubit_SurfaceCode.pdf} \caption{} \label{fig:17QSC} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \hspace{22mm} \mbox{ \Qcircuit @C=0.75em @R=1.5em { \lstick{\mathrm{ancilla} \; \ket{0}} & \qw & \targ & \targ & \gate{\textcolor{red}{Z}} & \targ & \targ & \meter\\ \lstick{\circled{1}} & \qw & \ctrl{-1} & \qw & \qw & \qw & \qw & \qw \\ \lstick{\circled{2}} & \qw & \qw & \qw & \qw & \ctrl{-2} & \qw & \qw & \lstick{\textcolor{red}{Z}} \\ \lstick{\circled{4}} & \qw & \qw & \ctrl{-3} & \qw & \qw & \qw & \qw \\ \lstick{\circled{5}} & \qw & \qw & \qw & \qw & \qw & \ctrl{-4} & \qw & \lstick{\textcolor{red}{Z}} \\ } } \caption{} \label{fig:measerrorprop1} \end{subfigure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=0.6\textwidth]{SCerrorprop.pdf} \caption{} \label{fig:measerrorprop2} \end{subfigure} \caption{a) Planar layout of the 17-qubit surface code. White (black) circles represent data (ancilla) qubits and dark (light) faces denote $X$ ($Z$) stabilizer generators. b) A single error on an ancilla propagating to a two-qubit error on the data; typically not fault-tolerant. 
c) The ancillary error from b) propagates in a direction perpendicular to the direction of the logical operator which is equivalent to a single-qubit error \emph{from the perspective of the logical operator} retaining fault-tolerance with bare ancilla. All images were adapted from Tomita and Svore \cite{TomitaLowDSC2014}. } \label{fig:17QSCfull} \end{figure} The 17-qubit surface code is shown in figure \ref{fig:17QSC} \cite{Tilted17SC}. The white (black) circles represent data (ancilla) qubits and the dark (light) faces of the lattice dictate the $X$-type ($Z$-type) stabilizer generators of the code. The 13-qubit version of the surface code is constructed by removing the ancillary qubits on the boundaries of the 17-qubit code and scheduling stabilizer measurements in a manner that each of the ancillary qubits are used to measure both a weight-2 and weight-4 stabilizer \cite{Tilted13SC}. This work focused on the 17-qubit version because the greater circuit depth for stabilizer measurement in the 13-qubit code adversely affects error correction \cite{TomitaLowDSC2014}. The resource win for the surface codes is the ability to use bare ancilla for fault-tolerant measurement of the stabilizer generators. The scheduling of the two-qubit gates following an N-like pattern about the face of a weight-4 $Z$ stabilizer allows for the cases where single-ancilla qubit to two-data qubit error propagation (``hook" errors) occur in a direction perpendicular to the direction of logical $Z$ operator as shown in figures \ref{fig:measerrorprop1} and \ref{fig:measerrorprop2}. This error is equivalent to a single-qubit error \emph{from the perspective of the $Z$ logical operator}, thus retaining fault-tolerance during syndrome measurement. Scheduling the two-qubits gates during the measurement of an $X$ stabilizer in a Z-like pattern gives a similar result for the logical $X$ operator. Many other error correction routines require the use of many-qubit ancillary states to ensure fault-tolerance. Shor error correction requires many-qubit states, known as cat states, to fault-tolerantly measure stabilizers which increase the number of gates and require a number of ancilla equivalent to the sum of the operator weights of all the stabilizers to perform measurements in parallel \cite{ShorCatState}. This would require 20 ancillary qubits to perform error correction in parallel with the surface code. Steane \cite{SteaneSteaneEC1997} and Knill \cite{KnillKnillEC2005} error correction both require an ancillary logical state for fault-tolerance requiring 17 ancillary qubits. Recent work has shown that the use of ``flag" ancillary qubits can reduce the number of ancillary qubits required for fault-tolerance \cite{ChaoFlagQubits12017,ChaoFlagQubits22017} but still would which corresponds to 12 ancillary qubits (in parallel) in our surface code setting. However, a variation on the surface code, the twisted surface code, implements flag qubits and is constructed in a manner that requires only 15 total qubits with a small loss in pseudothreshold relative to the surface code \cite{YoderSurfaceCodeTwist2017}. We chose to focus on gate fidelities and pseudothresholds; thus our choice of code. \section{Mapping the Surface Code to an Ion Chain} \begin{figure} \centering \includegraphics[width=\textwidth]{IonTrapOperationsAlpha.png} \caption{Graphical representation of the ion trap architecture and syndrome measurement operations. 
Ions are trapped in a mixed arrangement where data/ancilla qubits represented in white/black as in figure \ref{fig:17QSC}. There are three zones: a Logic, SPAM, and Storage zone. (Top) The Logic zone where qubit gates are applied. (Bottom) The SPAM zone where state preparation and measurement is performed, scattering photons. The Storage zone does not have a unique task but serves to sufficiently distance qubits from the SPAM zone.} \label{fig:IonTrap} \end{figure} To perform error correction with the surface code, the required operations are single-qubit gates ($H$), two-qubit gates ($CNOT$), state initialization ($\ket{0}$ state), and measurement ($Z$-basis). Single-qubit gates are performed by the application of laser fields \cite{HayesEntangleOptComb2010} or microwave radiation \cite{OspelkausMicrowaveGates2011,WarringIndividMicroAddr2013} to manipulate the hyperfine states of trapped $^{171}$Yb$^{+}$ ($^{2}$S$_{1/2}$$\ket{F=0;m_F=0} \leftrightarrow$ $^{2}$S$_{1/2}\ket{F=1;m_F=0}$ transition) which can drive arbitrary single-qubit rotation gates. High fidelity (compared to other schemes), fast two-qubit gates are performed by the application of counter-propagating laser fields achieving entanglement through the coupling of the internal states with the motional modes of the ion crystal through a method known as the M\o lmer-S\o rensen gate which engineers an $XX$ entangling gate \cite{SorensenPRLMSgate1999,SorensenPRAMSgate2000,DebnathSmallQCIons2016}. Controlled-NOT ($CNOT$) gates can be built from M\o lmer-S\o rensen gates and available single-qubit rotations \cite{MaslovCircuitCompIT2017} (see figure \ref{fig:MStoCNOT}). State initialization and measurement are performed by applying laser beams resonant with the $^2$S$_{1/2} \leftrightarrow \, ^{2}$P$_{1/2}$ transition. For $\ket{0}$ state preparation, qubits are optically pumped out of the $^2$S$_{1/2} \ket{F=1}$ state into the $^2$P$_{1/2} \ket{F=1}$ manifold which, with high probability, falls into the $^2$S$_{1/2} \ket{F=0}$ state \cite{WinelandExpIssueIons1998,TheoryAtomicSpec,YbstructureReadout}. For measurement in the $Z$-basis, a $^2$S$_{1/2} \ket{F=1} \leftrightarrow \, ^2$P$_{1/2}\ket{F=0}$ cycling transition is induced where the discrepancy between scattered photon counts of the qubit states serves as readout \cite{WinelandExpIssueIons1998,TheoryAtomicSpec,YbstructureReadout}. Note that the state preparation and measurement processes scatter photons that should not interact with surrounding ions. This requirement introduces an additional operation, ion shuttling \cite{BlakestadXjunction2009,MoehringYjunct2011,BowlerFastShuttle2012,WaltherFastShuttle2012,WrightXtrap2013,ShuYtrap2014}, which will be used to separate qubits in memory from the scattered photons during measurement/preparation. An alternative approach would be to use two ion species so the data ions are insensitive to the fluorescence of the measurement ions \cite{Tan2SpeciesLogicMgBe2015,HumeThesisCaBeControl2010}, but we avoided this method due to technical issues in shuttling mixed-ion crystals. There exist many ion trap architectures containing both one-dimensional and two-dimensional ion layouts. 
For a first generation implementation of a logical qubit consisting of atomic ions, a trapped linear chain of ions was favored over two-dimensional architectures due to technological challenges in the latter, such as additional ion heating when shuttling through trap junctions \cite{BlakestadXjunction2009,MoehringYjunct2011,WrightXtrap2013,ShuYtrap2014}, high idle ion heating rates \cite{Sterling2DTrap2014}, and single-ion addressing/readout difficulties in two-dimensional trap layouts. The linear trap is composed of at least three zones: a Logic, State Preparation and Measurement (SPAM), and Storage zone (figure \ref{fig:IonTrap}). Ion shuttling along the axial direction of the trap allows the 17-ion chain to be arbitrarily split into three separate linear chains of ions inhabiting each of the three zones. The Logic zone is where all single- and two-qubit gates are applied. The central SPAM zone is where state preparation and measurement operations are performed. The Storage zone serves the purpose that its name implies and is required due to the geometric constraint of having the ions confined in a linear chain. In addition to these three zones, one or two additional zones may cap the ends of the trap, holding atomic ions for sympathetic cooling of the motional modes of the qubit ions \cite{LarsonSympHg1986,KielpinskiSympTheory2000,BarrettSymCoolLogic2003}. Now we illustrate how a round of stabilizer measurements would proceed in such an architecture. Initially, all of the qubit ions would be prepared and cooled in the SPAM zone as an initialization step. After initialization, all 17 ions are shuttled into the Logic zone. In the Logic zone, the circuit measuring the stabilizers of the surface-17 code would be executed, ideally in a parallel fashion. After the application of all the gates, groups of ancillary qubits would be shuttled to the SPAM zone for measurement. During the measurement, only ancillary qubits occupy the SPAM zone and any data qubits would be stored in either the Logic or Storage zone, sufficiently far away from the SPAM zone. The ancillary qubits in the SPAM zone will be measured in parallel and in sets dictated by the data/ancilla assignment of the qubits in the ion chain. Following readout of all ancillary qubits, all qubits will be shuttled back to the Logic zone and the process is repeated. Such an implementation raises the question: how should the qubits in the surface code be assigned to the linear chain of ions? We are particularly interested in configurations that minimize the gate times (errors) of the error correction circuit. To proceed, we must first discuss two-qubit gates. \begin{figure}[t!] \centering \includegraphics[width=0.8\textwidth]{IonConfiguration.pdf} \caption{Equilibrium positions of the 17 ions along the $z$ direction when $l_0=25\,\mu$m and $\gamma_4=0.86$.} \label{fig:IonConfiguration} \end{figure} \begin{figure}[b!] \includegraphics[width=\textwidth]{MSlasersAlpha.png} \caption{Experimental setup (left) and energy level diagrams for the M\o lmer-S\o rensen gate implemented with counter-propagating laser fields. 
In the experiment, two beams are shone on the ion, with one beam along ${\bf k}_1$ and the other, containing both red- and blue-detuned components, along ${\bf k}_2$.} \label{fig:MSlasers} \end{figure} For computation of the two-qubit gate times, current gate protocols \cite{DebnathSmallQCIons2016} and motional decoupling techniques \cite{MotionDecoupling} were modeled; the latter contributes significantly to the distance dependence of the gate times. The calculation of the gate time of an ion pair is outlined below. In the weak trap limit, a Paul trap can be well approximated by a pseudo-harmonic potential (see e.g.\ Ref.~\cite{LeibfriedQDSingleIon2003}). Here we consider ions in a linear Paul trap along the $z$ direction $\left(\omega_z \ll \omega_x,\,\omega_y\right)$. With a harmonic trap potential, the spacing between ions in the chain will be nonuniform, which can lead to an undesired transition into a zigzag shape \cite{SchifferPhaseTransAnisotropic1993,DubinTheoryCoulombCrystal1993}, as well as difficulty in cooling the many low-frequency modes. To overcome this problem, an additional quartic potential can be added to the $z$ direction \cite{LinQCAnharmIonTrap2009}, giving the total potential energy: \begin{equation} V = \sum_i \left(-\frac{1}{2} \alpha_2 \, z_i^2 + \frac{1}{4} \alpha_4 \, z_i^4\right) + \sum_{i < j} \frac{e^2}{4 \pi \epsilon_0 \left|z_i - z_j\right|} \end{equation} where $\alpha_2,\,\alpha_4 > 0$ are two parameters characterizing the strength of the quadratic and the quartic potentials. The ion configuration is then fully determined by a length unit $l_0 \equiv \left(e^2/4\pi \epsilon_0 \alpha_2 \right)^{1/3}$ and a dimensionless parameter $\gamma_4 \equiv \alpha_4 l_0^2 / \alpha_2$. For $N=17$ $^{171}$Yb$^{+}$ ions, we choose $\gamma_4=0.86$ to minimize the relative standard deviation of the ion spacings. An average ion distance of about $8.2\,\mu$m can then be realized by setting $l_0 = 25\,\mu$m. The equilibrium configuration of the 17 ions is shown in figure \ref{fig:IonConfiguration}. The two-qubit entangling gate is implemented with a spin-dependent force on the two ions via the transverse collective modes. For example, we can use the transverse modes in the $x$ direction, whose $k$-th normalized mode vector is denoted as ${\bf b}_j^k$ with a mode frequency $\omega_k$, where the index $j$ runs over all ions $\left(j=1,\,2,\cdots, N\right)$. The creation and annihilation operators corresponding to this collective mode are denoted as $\hat{a}_k^\dag$ and $\hat{a}_k$, respectively. The transverse trap frequency is set to a typical value $\omega_x = 2\pi \times 3\,$MHz and the temperature is set to $k_B T = \hbar \omega_x$, giving an average phonon number of $\bar{n} \approx 0.5$ for each transverse mode. This can be easily achieved with Raman sideband cooling. The spin-dependent forces are generated by counter-propagating laser beams on the two ions that we choose to entangle (see figure \ref{fig:MSlasers}). 
The Hamiltonian, in the interaction picture, can be represented as: \begin{equation} \hat{H}_I = \hbar \sum_{j} \tilde\Omega_j \sum_k \eta_k {\bf b}_j^k \sin \mu t \left(\hat{a}_k e^{-i \omega_k t} + \hat{a}_k^{\dag} e^{i \omega_k t} \right) \hat{\sigma}_j^x \end{equation} where we further define the Lamb-Dicke parameter $\eta_k \equiv \Delta k \sqrt{\hbar/2m\omega_k}$, $\Delta k$ is the difference in the wavevectors of the counter-propagating Raman beams, $\mu$ is the two-photon detuning, and $\hat{\sigma}_j^x$ is the $\hat{\sigma}^x$ Pauli matrix on ion $j$. For the $^{171}$Yb$^{+}$ qubit transitions, the laser beams have wavelengths around $\lambda = 355\,$nm \cite{CampballUltrafastGates2010} and for counter-propagating pairs $\Delta k = 2k$, hence the Lamb-Dicke parameter $\eta_k \approx 0.111$. In the above equation, $\tilde\Omega_j$ is the effective Rabi frequency of the Raman transition pairs shown in figure \ref{fig:MSlasers} ($\tilde\Omega_j \approx \Omega_1\Omega_3/\Delta = \Omega_1\Omega_2/\Delta$ where $\Delta$ is the single-photon detuning from the excited state). From now on we will drop the tilde notation for simplicity. Note that one of the laser beams contains two frequency components and we assume that the two Raman transition pairs have the same effective Rabi frequency $\Omega_j$, opposite detunings $\pm\mu$, and opposite wavevector differences $\pm\Delta k$. This is known as the phase-insensitive geometry \cite{LeePhaseControlIon2005}. \begin{figure}[t!] \centering \includegraphics[width=0.8\textwidth]{GateTimesParameters.pdf} \caption{Example pulse sequences with discrete Rabi frequencies for performing an entangling gate between two ions in the 17 ion chain: nearest neighbor ions (Pair 8 and 9, Pair 1 and 2) and ions separated by a distance of 7 ion spacings (Pair 5 and 12, Pair 10 and 17). Thanks to the nearly uniform ion spacings, the required gate times for ion pairs at the same distance are roughly the same. In general, the ion pair at a larger distance requires a longer gate time $\tau$ due to their weaker coupling.} \label{fig:gatetimeLuming} \end{figure} The time evolution under the above Hamiltonian can be written as \cite{LeePhaseControlIon2005,LinQCAnharmIonTrap2009}: \begin{equation} \hat{U}_I(\tau) = \exp\left(i\sum_{j} \hat{\phi}_j(\tau)\hat{\sigma}_j^x + i\sum_{i<j}\Theta_{ij}(\tau)\hat{\sigma}_i^x \hat{\sigma}_j^x\right) \end{equation} where $\hat{\phi}_j(\tau) = -i\sum_k [\alpha_j^k(\tau)\hat{a}_k^\dag - {\alpha_j^k}^*(\tau)\hat{a}_k]$. The parameters $\alpha_j^k$ and $\Theta_{ij}$ are purely numbers related to the phase space displacement of the motional states after the gate and angle of the entanglement gate, respectively. For the following calculations, we assume that $\Omega_j$ is the same for both ions and we divide it into segments with equal durations; that is, a piecewise constant $\Omega(t)$ (see figure \ref{fig:gatetimeLuming}). With a suitable choice of detuning $\mu$, gate time $\tau$, and Rabi frequency $\Omega(t)$, we can suppress all the $\alpha_j^k(\tau)$ terms and realize an ideal entangling gate $e^{\pm i \pi \hat{\sigma}_i^x\hat{\sigma}_j^x/4}$ with high fidelity. Here, we focus on the intrinsic gate infidelity caused by the residual coupling to multiple phonon modes after the entangling gate. Other noise sources from technical control errors are not included for this calculation. 
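As an aside, the equilibrium positions entering these calculations (figure \ref{fig:IonConfiguration}) can be reproduced by directly minimizing the dimensionless form of the trap potential given above. The listing below is a minimal numerical sketch of that minimization, assuming NumPy/SciPy and expressing all lengths in units of $l_0$; it is an illustration for reproducibility, not the code used for the gate-time calculations in this work.

\begin{lstlisting}[language=Python]
# Minimal sketch: equilibrium positions of N = 17 ions in the quartic trap, in
# dimensionless units where lengths are in l0 and the potential reads
# V = sum_i(-z_i^2/2 + gamma4*z_i^4/4) + sum_{i<j} 1/|z_i - z_j|.
import numpy as np
from scipy.optimize import minimize

N, gamma4 = 17, 0.86

def potential(z):
    trap = np.sum(-0.5 * z**2 + 0.25 * gamma4 * z**4)
    zi, zj = np.meshgrid(z, z)
    pair_dists = np.abs(zi - zj)[np.triu_indices(N, k=1)]
    return trap + np.sum(1.0 / pair_dists)      # Coulomb term with e^2/(4 pi eps0) = 1

z0 = np.linspace(-3.0, 3.0, N)                   # initial guess: a uniformly spaced chain
result = minimize(potential, z0, method='BFGS')
print(np.round(np.sort(result.x), 3))            # equilibrium positions in units of l0
\end{lstlisting}

Multiplying the resulting positions by $l_0 = 25\,\mu$m gives the physical configuration; the near-uniform spacings obtained this way are why, below, the gate time depends essentially only on the ion separation.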
Figure \ref{fig:gatetimeLuming} shows example calculations of the gate sequences for different ion pairs: two nearest-neighbor pairs and another two separated by 7 ion spacings. Because the ion spacings have been configured to be nearly uniform, the gate times do not vary much for ion pairs with the same separation. In figure \ref{fig:GateTimeGraph}, we show the calculated minimal gate times for ion pairs at distances of 1, 3, 5, 7, 9, and 11 ion spacings. To find these ``minimal" gate times, we searched over different detunings and numbers of segments in steps of $10\,\mu$s and screened for solutions with an infidelity below $3\times 10^{-6}$. We further require the effective Rabi frequency $\Omega_j(t)$ to be below $2 \pi \times 1\,$MHz, which is comparable to the values in current experiments. We note that the gate time limits here are not fundamental and alternative approaches could lead to faster two-qubit gates \cite{LeungFMgates2017,SchaferFastLogicIon2017}. \begin{figure}[b!] \centering \begin{subfigure}[t]{0.45\textwidth} \includegraphics[width=\textwidth]{GateTimePlot.pdf} \caption{Gate times with respect to ion distance} \label{fig:GateTimeGraph} \end{subfigure} \begin{subfigure}[t]{0.5\textwidth} \includegraphics[width=\textwidth]{ionconnectivity.pdf} \caption{Qubit to ion maps} \label{fig:ionconnectivity} \end{subfigure} \caption{a) Two-qubit gate times calculated as a function of the ion distance in the linear chain. Extrapolation of this data was used to calculate the time required to measure the error syndrome. b) Ion chains mapped to a circle. The node labels correspond to the qubit numbers in figure \ref{fig:17QSC}. The labels refer to different optimizations with configurations shown in figure \ref{fig:GateTimeTable}.} \end{figure} The underlying connection graph of a trapped linear ion chain is a fully connected graph \cite{BrownCoDesignQC2016}. Therefore, there are many ways to map the surface-17 code to the linear ion chain. A natural mapping is to split the ion chain into groups of data and ancillary qubits, which appears advantageous because it minimizes ion shuttling times: all measurements can be performed in parallel and the Storage zone is not required. However, the data-ancilla distance between ions in this configuration is larger, which, as we have shown above, results in slower two-qubit gates. We fit the gate time as a function of the ion distance (figure \ref{fig:GateTimeGraph}) to a linear function, yielding: \begin{equation} t_g = 10 + 38 d \end{equation} where $t_g$ is the gate time ($\mu$s) and $d$ is the ion distance. As we can see from figure \ref{fig:gatetimeLuming}, the nearest-neighbor calculated gate time (ion pair 8 and 9) corresponds to $d=1$, which gives $t_g \approx 50\,\mu$s. Single-qubit gates can be performed in parallel with a gate time of $10 \, \mu s$. With this relationship between ion distance and gate times, we screened for the optimal ion chain configurations using a simulated annealing algorithm that minimized several parameters of interest. Three parameters were minimized: the maximum ion distance between entangled ions (M), the average ion distance between entangled ions (A), and the total time for one round of syndrome measurement in parallel (T), corresponding to the second letter in the labels in figure \ref{fig:GateTimeTable}. 
In addition, the optimizations were performed with the constraint that the data and ancilla qubits are either kept separate (S) or allowed to mix (M), corresponding to the first letter in the labels in figure \ref{fig:GateTimeTable}. The corresponding connection graphs for two optimized chains (SM and MT) are shown in figure \ref{fig:ionconnectivity}. The time to split the chain and shuttle it between neighboring zones was assigned to be $100 \, \mu$s \cite{WaltherFastShuttle2012,BowlerFastShuttle2012,BermudezITQCwithSteane2017}. Therefore, splitting ions in the chain from the Logic zone and shuttling the ions to the SPAM or Storage zone requires a time of $100 \, \mu$s or $200\, \mu$s, respectively. This time follows from an assumed lowest axial frequency of $200\,$kHz, which implies that splitting/merging of subsets of ions in the chain can occur at a rate approaching this frequency. Once the ion chain is split, the transport speed is expected to be limited to $7\,$m/s, assuming a $50\,$kHz update rate in the transport waveforms \cite{ShuYtrap2014}. Therefore, the remaining $95 \, \mu$s allows for the chains to be separated by a distance of $665 \, \mu$m, which provides excellent separation between the detection lasers and the data qubits. It is assumed that operations can happen in parallel, so that part of an ancillary ion subchain can be shuttled to the Storage zone while the other ions within the same subchain remain in the detection zone. The rejoining of the ions shuttled to the detection zone is assumed to occur in parallel with the next splitting operation, which leads to a fixed cost of $100 \, \mu$s for rejoining the chain as well. The final assumption is that a three-way split requires the same amount of time as a single splitting operation of the ion chain due to this parallelism, which is reasonable given the small contribution of splitting operations to the total zone-to-zone movement time. The measurement time was also fixed to $100 \, \mu$s, which is a lax requirement on the experimental apparatus and allows for high-fidelity state detection \cite{NoekHiFidSPAMYb2013}. 
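To illustrate the type of search involved, the listing below gives a minimal simulated-annealing sketch of the qubit-to-ion assignment problem using the linear gate-time fit above. It is only an illustration: the list of data--ancilla pairs shown is a small subset of the surface-17 gate list (the full list can be read off figure \ref{fig:MSsyndromecirc}), and the actual cost functions behind figure \ref{fig:GateTimeTable} also account for shuttling, measurement, and parallelism.

\begin{lstlisting}[language=Python]
# Minimal sketch: simulated annealing over qubit-to-ion assignments, scoring an
# ordering by the serial two-qubit gate time with t_g = 10 + 38*d (microseconds).
import math
import random

PAIRS = [(6, 16), (7, 16), (0, 11), (1, 11), (3, 11), (4, 11), (1, 9), (2, 9)]  # partial list

def serial_logic_time(order):
    pos = {q: i for i, q in enumerate(order)}          # ion index of each qubit
    return sum(10 + 38 * abs(pos[a] - pos[b]) for a, b in PAIRS)

def anneal(order, steps=20000, t0=500.0):
    current, best = list(order), list(order)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9        # linear cooling schedule
        i, j = random.sample(range(len(current)), 2)   # propose swapping two ions
        candidate = list(current)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = serial_logic_time(candidate) - serial_logic_time(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if serial_logic_time(current) < serial_logic_time(best):
                best = list(current)
    return best, serial_logic_time(best)

print(anneal(list(range(17))))
\end{lstlisting}

Analogous searches over the full gate list, with the timing assumptions above folded into the cost, produced the orderings reported in figure \ref{fig:GateTimeTable}.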
\begin{figure} \scriptsize \centering \begin{tabular}{ |c | c | c | c | c | c |} \hline {\bf Label} & {\bf Logic} & {\bf Shuttle} & {\bf Measure} & {\bf Total} & {\bf Ion Ordering} \\ \hline \multirow{2}{*}{SM} & 7650 & \multirow{2}{*}{200} & \multirow{2}{*}{100} & 7950 & \multirow{2}{*}{\tiny \circled{$\;1\;$} \circled{$\;2\;$} \circled{$\;5\;$} \circled{$\;8\;$} \circled{$\;0\;$} \circled{$\;4\;$} \circled{$\;3\;$} \circled{$\;6\;$} \circled{$\;7\;$} \dcircled{$\;9\;$} \dcircled{12} \dcircled{11} \dcircled{14} \dcircled{15} \dcircled{13} \dcircled{10} \dcircled{16}}\\ & \emph{3920} & & & \emph{4220} & \\ \hline \multirow{2}{*}{SA} & 7240 & \multirow{2}{*}{200} & \multirow{2}{*}{100} & 7640 & \multirow{2}{*}{\tiny \circled{$\;0\;$} \circled{$\;2\;$} \circled{$\;6\;$} \circled{$\;8\;$} \circled{$\;1\;$} \circled{$\;4\;$} \circled{$\;3\;$} \circled{$\;7\;$} \circled{$\;5\;$} \dcircled{11} \dcircled{12} \dcircled{10} \dcircled{15} \dcircled{13} \dcircled{14} \dcircled{$\;9\;$} \dcircled{16}}\\ & \emph{4140} & & & \emph{4440} & \\ \hline \multirow{2}{*}{MM} & 3080 & \multirow{2}{*}{1200} & \multirow{2}{*}{500} & 4780 & \multirow{2}{*}{\tiny \circled{$\;5\;$} \dcircled{15} \circled{$\;2\;$} \dcircled{12} \dcircled{14} \dcircled{$\;9\;$} \circled{$\;8\;$} \circled{$\;1\;$} \circled{$\;4\;$} \circled{$\;7\;$} \dcircled{11} \circled{$\;3\;$} \dcircled{13} \dcircled{16} \circled{$\;0\;$} \dcircled{10} \circled{$\;6\;$}}\\ & \emph{1690} & & & \emph{3390} & \\ \hline \multirow{2}{*}{MA} & 2300 & \multirow{2}{*}{1800} & \multirow{2}{*}{800} & 4900 & \multirow{2}{*}{\tiny \circled{$\;2\;$} \dcircled{$\;9\;$} \circled{$\;1\;$} \dcircled{12} \circled{$\;5\;$} \dcircled{15} \circled{$\;8\;$} \dcircled{14} \circled{$\;4\;$} \dcircled{11} \circled{$\;0\;$} \dcircled{10} \circled{$\;3\;$} \dcircled{13} \circled{$\;7\;$} \dcircled{16} \circled{$\;6\;$}}\\ & \emph{1170} & & & \emph{3770} & \\ \hline \multirow{2}{*}{MT} & 4300 & \multirow{2}{*}{700} & \multirow{2}{*}{300} & 5300 & \multirow{2}{*}{\tiny \dcircled{10} \dcircled{15} \dcircled{$\;9\;$} \circled{$\;5\;$} \circled{$\;0\;$} \circled{$\;1\;$} \dcircled{11} \dcircled{12} \dcircled{14} \circled{$\;7\;$} \circled{$\;4\;$} \circled{$\;3\;$} \circled{$\;8\;$} \circled{$\;2\;$} \circled{$\;6\;$} \dcircled{13} \dcircled{16}}\\ & \emph{2320} & & & \emph{3320} & \\ \hline \end{tabular} \caption{Trap operation times and ion arrangements optimized for an array of parameters. The first letter of the label refers to S=separate and M=mixed arrangements of data and ancilla qubits. The second letter of the label refers to the parameter minimized, with M=maximum distance between entangled ions, A=average distance between entangled ions, and T=parallel total gate time. All values are reported in microseconds and the numbers in roman and \emph{italics} refer to the gate time of the operations performed in serial and parallel, respectively. Parallel operations allow for two simultaneous two-qubit gates exciting the independent $x$ and $y$ radial modes and fully parallel single-ion operations. Single-qubit gates, parallel measurement/state preparation, and shuttling between neighboring zones require $10$ $\mu s$, $100$ $\mu s$, and $100$ $\mu s$ ($5$ $\mu s$ split and $95$ $\mu s$ shuttle time), respectively.} \label{fig:GateTimeTable} \end{figure} The time required to measure the error syndrome for different optimized configurations is shown in figure \ref{fig:GateTimeTable}. 
The gate times (Logic) for the chain configurations where the data and ancilla qubits are separate (labels SM and SA) are substantially longer than those of the mixed configurations. The mixed configurations (MM, MA, and MT) have longer chain manipulation and measurement times. The longer times are due to the inability to perform all measurements in parallel for a mixed arrangement; only subchains consisting of neighboring ancilla can be measured in parallel. An example of a parallel step for the mixed configuration is shown in figure \ref{fig:IonTrap}, where the ion configuration corresponds to the ion chain label MT. Ions with labels 11, 12, and 14 in the surface code are measured in parallel in this measurement step. The neighboring data qubits at the ends of the subchain of three ancillary qubits restrict measurement of other ancillary qubits in this architecture. The entanglement gates outlined above allow for parallel implementation. Two simultaneous entanglement gates can be performed on two independent pairs of ions by exciting the $x$ and $y$ radial modes, respectively, for each pair. Single-qubit operations are completely parallel for both the serial and parallel implementations. The parallel operation times are shown in italics in figure \ref{fig:GateTimeTable}. For the detailed calculations below, we chose the ion chain configuration that gives the minimal total syndrome measurement time (serial or parallel), MT. Note that the MM and MA configurations have shorter serial Logic times, so they will perform better than the MT configuration under the gate-based error model outlined below. \section{Modeling Ion Trap Error Sources} \label{sec:ErrorModels} For accurate assessment of error correction in an ion trap quantum computer, appropriate error models must be developed to simulate noise sources in the physical architecture. This section provides the components for building up such complexity. The Kraus operator representation will be used to describe the components of the quantum error channel. A graphical representation of the full ion trap error model is shown in figure \ref{fig:ITErrorModel}. \subsection{Depolarizing Error Model} The depolarizing error model is a standard error model used in simulations of quantum error correcting codes. After each single-qubit (two-qubit) gate in the quantum circuit used to measure the stabilizers, an element sampled from the one-qubit (two-qubit) Pauli group is applied. The one- and two-qubit Kraus channels are of the form: \begin{equation} \begin{split} & E_{1,d} = \left\{ \sqrt{1-p} \, I, \; \sqrt{\frac{p}{3}} \, X, \; \sqrt{\frac{p}{3}} \, Y, \; \sqrt{\frac{p}{3}} \, Z \right\} \\ & E_{2,d} = \left\{ \sqrt{1-p} \, II, \; \sqrt{\frac{p}{15}} \, IX, \; \sqrt{\frac{p}{15}} \, IY, \; \sqrt{\frac{p}{15}} \, IZ, \; \sqrt{\frac{p}{15}} \, XX, ... , \; \sqrt{\frac{p}{15}} \, ZZ \right\} \end{split} \label{eqn:depchannel} \end{equation} where $p$ is the \emph{error rate} of the error channel. Furthermore, the application of perfect two-qubit gates still allows for certain errors to propagate from single- to two-qubit errors. 
For measurement of the stabilizers, the $CNOT$ (controlled-$X$) is the two-qubit gate used, and it transforms two-qubit Pauli errors in the following manner: \begin{equation} \left\{ XI,\; XX, \; IZ,\; ZZ \right\} \rightarrow \left\{ XX, \; XI, \; ZZ,\; IZ \right\} \end{equation} where the first (second) operator acts on the control (target). The $Y$ error rules can be built from the relation $Y = i XZ$. The stabilizer circuits in this work are built using only the $CNOT$ as the two-qubit gate. This error model allows for errors on both the data and ancilla qubits, which translate into errors in the measurement of stabilizers during syndrome extraction. Furthermore, preparation and measurement errors are modeled by the application of a single-qubit depolarizing error channel after preparation gates and before measurement. This model will serve as a baseline error model for assessment of error correction. \subsection{Coherent Over-Rotation of the M{\o}lmer-S{\o}rensen Gate} The first step in adding complexity to the error model entails compiling the two-qubit logic gates in the abstract quantum circuit into experimental entangling gates. The M{\o}lmer-S{\o}rensen (MS) entangling gate \cite{SorensenPRLMSgate1999,SorensenPRAMSgate2000} was chosen for this purpose due to its faster gate times and higher gate fidelities relative to other entangling gate schemes \cite{DebnathSmallQCIons2016}. The MS gate uses a bichromatic laser field, tuned close to the upper and lower motional sidebands of a qubit transition, to induce a two-photon transition that couples the $\ket{00} \leftrightarrow \ket{11}$ and $\ket{10} \leftrightarrow \ket{01}$ qubit states \cite{SorensenPRLMSgate1999,SorensenPRAMSgate2000}. In the computational basis, the unitary operator associated with the M{\o}lmer-S{\o}rensen gate is: \begin{equation} XX \left(\chi\right) =\left( \begin{array}{c c c c} \mathrm{cos}\left(\chi \right) & 0 & 0 & -i \, \mathrm{sin}\left(\chi \right) \\ 0 & \mathrm{cos}\left(\chi \right) & -i \, \mathrm{sin}\left(\chi \right) & 0 \\ 0 & -i \, \mathrm{sin}\left(\chi \right) & \mathrm{cos}\left(\chi \right) & 0 \\ -i \, \mathrm{sin}\left(\chi \right) & 0 & 0 & \mathrm{cos}\left(\chi \right) \end{array} \right) \end{equation} where the parameter $\chi$ depends on the gate time applied to the specific ion pair \cite{DebnathSmallQCIons2016}. The absolute value of the angle, $|\chi|$, may be set to any real number between $0$ and $\pi/2$ by varying the power of the laser in the experiment \cite{DebnathSmallQCIons2016}. The sign of $\chi$ depends on the laser detuning, which is chosen based on the normal modes of the ion pair \cite{DebnathSmallQCIons2016}. The $CNOT$ gate can be achieved by assigning $\chi = \pm \pi/4$ and sandwiching the two-qubit unitary between single-qubit gates, as shown in figure \ref{fig:MStoCNOT} \cite{MaslovCircuitCompIT2017}. The M{\o}lmer-S{\o}rensen unitary implemented during the $CNOT$ can equivalently be written as: \begin{equation} XX\left(\chi\right) = \mathrm{exp} \left( - i \, \chi \, XX \right) = \mathrm{cos}\left(\chi\right)\, II - i \, \mathrm{sin}\left(\chi\right) \, XX \label{eqn:MSexp} \end{equation} where we attempt to set $\chi = \pi/4$ with the laser field. However, due to experimental error, a small over-rotation (by an angle $\alpha$) may be applied about the $XX$ axis, so that the gate actually applied in equation \ref{eqn:MSexp} has an angle $\chi+\alpha$. 
This error will be simulated by a probabilistic error channel of the form: \begin{equation} E_{2,xx} = \left\{ \sqrt{1-p_{xx}} \, II,\; \sqrt{p_{xx}} \, XX \right\} \end{equation} where the probability of the channel above is a function of the over-rotation angle. For example, one possible relation between $p_{xx}$ and $\alpha$ is obtained by the Pauli twirled approximation, which results in $p_{xx} = \mathrm{sin}^2 \left(\alpha\right)$ \cite{GutierrezIncohCohNoiseEC2016}. It is also possible to choose $p_{xx}$ such that the Pauli approximation to the over-rotation satisfies additional constraints \cite{PuzzuoliHonestApproxRealModels2014,GutierrezApproxRealError2013}. Furthermore, the single-qubit rotation gates in the circuit (figure \ref{fig:MStoCNOT}) can also suffer over-rotations, although typically to a much lesser degree. The over-rotations can be modeled in an analogous way, giving three distinct gate-dependent error channels: \begin{equation} \begin{split} & E_{1,x} = \left\{ \sqrt{1-p_x} \, I, \; \sqrt{p_x} \, X \right\} \\ & E_{1,y} = \left\{ \sqrt{1-p_y} \, I, \; \sqrt{p_y} \, Y \right\} \\ & E_{1,z} = \left\{ \sqrt{1-p_z} \, I, \; \sqrt{p_z} \, Z \right\} \end{split} \end{equation} which are applied after every single-qubit rotation gate $R_X(\theta)$, $R_Y(\theta)$, and $R_Z(\theta)$, respectively. For simulations, the error rates for the single-qubit gates are a factor of 10 lower than those corresponding to two-qubit gates, representing observed single- and two-qubit gate fidelities \cite{BallanceControlHFQubits2016,BrownHighFidBe2011}. \begin{figure} \captionsetup{width=0.9\textwidth} \centering \mbox{ \Qcircuit @C=0.7em @R=2.1em { & \ctrl{1} & \qw \\ & \targ & \qw }} ~ \mbox{ \Qcircuit @C=0.6em @R=0.25em { & \\ & \\ & \\ & \\ & \qw \\ & \qw \\ & \qw } } ~ \mbox{ \Qcircuit @C=1em @R=0.8em { &\gate{R_Y\left(\pm \frac{\pi}{2} \right)} & \multigate{1}{XX \left(s \, \frac{\pi}{4} \right)} & \gate{R_X\left(s \, \frac{\pi}{2} \right)} & \gate{R_Y\left(\mp \frac{\pi}{2} \right)} & \qw \\ & \qw & \ghost{XX \left(s \, \frac{\pi}{4} \right)} & \gate{R_X\left(\mp \frac{\pi}{2} \right)} & \qw & \qw }} \caption{The construction of a $CNOT$ logic gate from a M{\o}lmer-S{\o}rensen entangling gate and single-qubit gates, following \cite{MaslovCircuitCompIT2017}. The quantity $s$ is ion-specific and equal to the sign of the experimental interaction parameter $\chi$.} \label{fig:MStoCNOT} \end{figure} \subsection{Motional Mode Heating} In addition to control errors, the applied field from the M\o lmer-S\o rensen gate can result in motional heating of the ions, which impacts the fidelity of the two-qubit gate. Modeling heating as a coupling of the motional states of the ions to an infinite temperature bath \cite{TurchetteIonHeatingBath2000}, Ballance et al. characterized the impact of motional heating on the error of a two-qubit entangling gate, $\epsilon_h$, giving: \begin{equation} \epsilon_h = \frac{\dot{\bar{n}} t_g}{2 K} \label{eqn:EfromHeat} \end{equation} where $\dot{\bar{n}}$ is the average change in the thermal occupation number of the gate mode, $t_g$ is the gate time, and $K$ is the number of loops in phase space traversed by the ions during the gate \cite{BallanceControlHFQubits2016}. 
We chose to study the low $K$ limit ($K = 1,2$) of equation \ref{eqn:EfromHeat}, modeling heating errors with Kraus operators applied after every MS gate: \begin{equation} E_{2,h} = \left\{ \sqrt{1-p_h} \, II,\; \sqrt{p_h} \, X X \right\} \end{equation} where the probabilities are ion-dependent: $p_h = r_{heat} \times t_{MS}$, where $r_{heat}$ is the heating rate and $t_{MS}$ is the duration of the M\o lmer-S\o rensen gate. It is important to note that this model is pessimistic with respect to ion heating, even in the low $K$ limit, and the choice of coupling modes can increase $K$ by $1-2$ orders of magnitude \cite{ROzeriErrSpontScatt2007,HayesMSErrorSupp2012}. \subsection{Background Depolarizing Noise} For the stable ``clock" states of the hyperfine qubits, errors arise from the application of gates. In addition to systematic over/under-rotations of the applied laser field, instabilities in the control of the qubits (laser field drifts, magnetic field fluctuations, etc.) can lead to stochastic error processes that we will model with a depolarizing error channel. One such natural stochastic process that has been shown to be a contributing source of error is scattering during the application of the gate \cite{OzeriScattErrYb2007,Gea-BanaclocheHyperRamaScatt2005}. To model the effects of spontaneous Raman and Rayleigh photon scattering, we will apply a single-qubit depolarizing channel (equation \ref{eqn:depchannel}) to every qubit involved in a gate (single- or two-qubit) after that gate. \subsection{Dephasing Errors} While the ions are located at positions in the trap where the DC electric fields vanish, they may still be exposed to oscillating electric fields from blackbody radiation, laser fields, or motion around the field-free point in the oscillating trap field \cite{LudlowOptClocks2014}. The application of the oscillating electric field shifts the energy of each of the states of the two-level qubit system by the AC Stark effect, which introduces dephasing errors in the applied gates. This effect is observed for both single- and two-qubit gates. We choose to model these dephasing errors as a single-qubit channel of the form: \begin{equation} E_{d} = \left\{ \sqrt{1-p_d} \, I, \; \sqrt{p_d} \, Z \right\} \end{equation} where the channel is applied to each qubit involved in single- and two-qubit gates, with $p_d = r_d \times t_g$ for each gate, where $r_d$ is the dephasing rate and $t_g$ is the time of the applied gate. We make the approximation that single- and two-qubit dephasing errors occur at a constant rate. This is certainly not exact: the dephasing rates will differ between two-qubit gates and will likely not equal the single-qubit dephasing rate. However, given that single-qubit gates have higher fidelities than two-qubit gates, this serves as a pessimistic approximation consistent with our level of abstraction. \begin{figure}[b!] \includegraphics[width=\textwidth]{ErrorModel.pdf} \caption{Graphical representation of the ion trap error model implemented for the 17-qubit surface code simulations. Rotation errors (for both single- and two-qubit gates) occur about the axis of rotation of the applied gate. Motional mode heating errors (for two-qubit gates) manifest themselves as $XX$ errors after the applied M\o lmer-S\o rensen gate with a probability proportional to the time of the applied gate (ion/qubit dependent). 
Depolarizing errors and dephasing errors are applied independently to each qubit involved in the gate, with a static probability $p_{dep}$ for depolarizing errors and a probability proportional to the gate time for dephasing errors.} \label{fig:ITErrorModel} \end{figure} \subsection{Ancilla Preparation and Measurement Errors} \label{sec:SPAM} For the ion trap error model, measurement errors were modeled by a single-qubit depolarizing channel applied before the measurement with a probability equivalent to that of the single-qubit over-rotation errors of the single-qubit gates. Preparation errors were modeled with a single-qubit depolarizing channel applied immediately after the preparation of the state but with a probability equivalent to the background depolarizing channel. All states are prepared and measured in the $+Z$ basis, which can be performed with high fidelity \cite{HartyMagClock2014}. Note that this implementation is not ideal given that both state preparation and state readout rely on the same scattering processes. However, the preparation and measurement errors should not be the dominant source of failure in the simulations, consistent with single-qubit gate, preparation, and readout fidelities of $\ge 99.9\, \%$ \cite{HartyMagClock2014}. Furthermore, state preparation/measurement is a high-fidelity operation (relative to two-qubit gates), so the inflated state preparation errors will give a pessimistic simulation of the fault-tolerance of the surface code on ion traps relative to the physical architecture. These claims are reinforced in section \ref{sec:SingleErrorThresh}. \section{Error Correction for Ion Trap Errors} To perform error correction on the surface code, classical decoding algorithms have been developed to determine the most appropriate correction operation to perform given the limited information about the encoded state from the syndrome. Various decoders are available that trade off classical efficiency and observed error threshold. We apply a few decoders for error correction on the surface code below and discuss their performance. For all simulations, we implemented a Monte Carlo simulation of the surface code using an importance sampling method described in Ref. \cite{LiBareAnc2017}. \subsection{Integration into Ion Trap Hardware} When choosing a decoding method to integrate into a physical architecture, there is much to consider that extends beyond the (pseudo)threshold. Processing, memory, and runtime requirements of the decoder play a role in the feasibility of implementing error correction with an experimental control system. \subsubsection{Lookup Table Decoder} The simplest decoder is a lookup table that maps a syndrome configuration to the lowest weight Pauli error corresponding to the syndrome. 
We may represent an error configuration ${\bf e}$ as a binary (row) vector in $\mathbb{F}_2^{18}$, where the first/last 9 elements of the vector correspond to $X$-type/$Z$-type errors on the data qubits; for instance: \begin{equation} {\bf e}(2563)= \left[0\;0\;0\;0\;0\;0\;1\;0\;1\;0\;0\;0\;0\;0\;0\;0\;1\;1 \right] = X_6 Z_7 Y_8 \end{equation} Given two matrices, $H$ and $G^T$, that correspond to the binary representation of the $X$-type and $Z$-type stabilizers, respectively, one may define a mapping matrix $T$ between error configurations ${\bf e}$ and binary syndrome (column) vectors ${\bf s}$: \begin{equation} T = \left( \begin{array}{cc} H & 0\\ 0 & G^T \end{array} \right) = \left( \begin{array}{ccccccccc ccccccccc} 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\ \end{array} \right) \end{equation} Iterating over all elements of $\mathbb{F}_2^{18}$ and applying $T$, we constructed a lookup table Tab$\left[s\right] =\lfloor e \rfloor$, where $\lfloor e \rfloor = \mathrm{argmin}_{\{e \,:\, T{\bf e}^T = {\bf s}\}} \left(|e|\right)$ is the minimum-weight error configuration corresponding to the syndrome string $s$. With a slight abuse of notation, we denote $|\cdot|$ as the Hamming weight of the error string $e$, with the caveat that $Y$-type errors are assigned the same weight as $X$- and $Z$-type errors. Those familiar with the CSS construction of quantum error correcting codes will recognize $H$ and $G^T$ as the parity check matrices of $\mathcal{C}$ and $\mathcal{C}^{\perp}$ of a classical linear error correcting code used to construct the 17-qubit surface code \cite{MikeNIke}. All of the rules of the full lookup table (Tab$\left[s\right]$) can be constructed with two 16-element tables, each with keys corresponding to the $X$-type and $Z$-type stabilizer measurements, respectively. For circuit-level noise, the lookup table above is not sufficient for fault-tolerance. A set of syndrome processing rules must be imposed to ensure that measurement errors do not result in faulty corrections that introduce errors onto the data qubits. An example of a typical set of rules is shown below ($a$, $b$, and $c$ are syndrome outcome strings): \begin{figure}[h!] \centering \begin{minipage}{0.5\textwidth} \mbox{ \Qcircuit @C=0.7em @R=2.1em { & \gate{a} & \qw & \gate{b} & \qw & \gate{a\neq b:c} \ar@{--}[]+<2.8em,1em>;[]+<2.8em,-1em> & \qw & \gate{a} & \qw & \gate{b} & \qw & \gate{a\neq b:c} & \qw & \hdots}} \\ \vspace{2mm} \end{minipage} \vspace{-3mm} \end{figure} \\ where two rounds of stabilizer measurement are performed and, if the first two measurement outcomes disagree, a third round of stabilizer measurement is performed. Correction is applied based upon the final measurement performed. We chose to employ a different set of fault-tolerant syndrome processing rules that can, on average, reduce the depth of the circuit required to perform a fault-tolerant correction by one round of stabilizer measurement. The routine: \begin{figure}[h!] 
\centering \begin{minipage}{0.40\textwidth} \mbox{ \Qcircuit @C=0.7em @R=2.1em { & \gate{a} & \qw & \gate{a\neq 0: b} \ar@{--}[]+<2.8em,1em>;[]+<2.8em,-1em> & \qw & \gate{a} & \qw & \gate{a \neq 0: b} & \qw & \hdots}} \end{minipage} \vspace{-3mm} \end{figure} \\ performs one round of stabilizer measurements and performs a correction based on the following round of stabilizer measurements ($b$) only if the first round was non-trivial $\left(a \neq 0 \right)$. These two sets of rules yield equivalent results for the 17-qubit surface code under circuit-level depolarizing noise. \subsubsection{Minimum Weight Perfect Matching} For topological codes, minimum weight matching algorithms have been shown to be a useful heuristic technique for performing error correction \cite{FowlerMatchingSC2012, EdmondsPaths1965, EdmondsMatroid1965}. For the distance 3 surface code, the minimum weight perfect matching rules can be encoded into a lookup table that presents a correction operation based on three rounds of syndrome measurement (for circuit-level depolarizing noise) \cite{TomitaLowDSC2014}. \begin{figure}[t!] \centering \begin{tabular}{|c | c | c |} \hline {\bf Decoder} & {\bf Level-1 Pseudothreshold} & {\bf Computational Time ($s$)}\\ \hline Lookup Table & $3.0 \times 10^{-3}$ & $1.1 \times 10^{-7}$ \\ \hline Matching (table) & $5.5 \times 10^{-3}$ & $1.43 \times 10^{-6}$ \\ \hline \end{tabular} \caption{Performance of the two lookup table style decoders considered for implementation into a near-term quantum error correction experiment. Lookup table style decoders were chosen due to their easy integration into the control software of an ion trap system.} \label{fig:DecoderPerf} \end{figure} \begin{figure} \centering \footnotesize \begin{subfigure}[t]{0.25\textwidth} \hspace{3mm} \mbox{ \Qcircuit @C=0.8em @R=.94em { \lstick{0} & \multigate{8}{MS(X)} & \gate{R_X(-\frac{\pi}{2})} & \qw \\ \lstick{1} & \ghost{MS(X)} & \qw & \qw \\ \lstick{2} & \ghost{MS(X)} & \gate{R_X(+\frac{\pi}{2})} & \qw \\ \lstick{3} & \ghost{MS(X)} & \gate{R_X(-\frac{\pi}{2})} & \qw \\ \lstick{4} & \ghost{MS(X)} & \qw & \qw \\ \lstick{5} & \ghost{MS(X)} & \gate{R_X(+\frac{\pi}{2})} & \qw \\ \lstick{6} & \ghost{MS(X)} & \gate{R_X(-\frac{\pi}{2})} & \qw \\ \lstick{7} & \ghost{MS(X)} & \qw & \qw \\ \lstick{8} & \ghost{MS(X)} & \gate{R_X(+\frac{\pi}{2})} & \qw \\ }} \caption{$X$-type stabilizers} \end{subfigure} \hspace{10mm} \begin{subfigure}[t]{0.45\textwidth} \hspace{3mm} \mbox{ \Qcircuit @C=0.8em @R=0.65em { \lstick{0} & \gate{R_Y(+\frac{\pi}{2})} & \multigate{8}{MS(Z)} & \gate{R_X(+\frac{\pi}{2})} & \gate{R_Y(-\frac{\pi}{2})} & \qw \\ \lstick{1} & \gate{R_Y(+\frac{\pi}{2})} & \ghost{MS(Z)} & \gate{R_X(+\frac{\pi}{2})} & \gate{R_Y(-\frac{\pi}{2})} & \qw \\ \lstick{2} & \gate{R_Y(+\frac{\pi}{2})} & \ghost{MS(Z)} & \gate{R_X(+\frac{\pi}{2})} & \gate{R_Y(-\frac{\pi}{2})} & \qw \\ \lstick{3} & \gate{R_Y(+\frac{\pi}{2})} & \ghost{MS(Z)} & \qw & \gate{R_Y(-\frac{\pi}{2})} & \qw \\ \lstick{4} & \gate{R_Y(+\frac{\pi}{2})} & \ghost{MS(Z)} & \qw & \gate{R_Y(-\frac{\pi}{2})} & \qw \\ \lstick{5} & \gate{R_Y(+\frac{\pi}{2})} & \ghost{MS(Z)} & \qw & \gate{R_Y(-\frac{\pi}{2})} & \qw \\ \lstick{6} & \gate{R_Y(+\frac{\pi}{2})} & \ghost{MS(Z)} & \gate{R_X(-\frac{\pi}{2})} & \gate{R_Y(-\frac{\pi}{2})} & \qw \\ \lstick{7} & \gate{R_Y(+\frac{\pi}{2})} & \ghost{MS(Z)} & \gate{R_X(-\frac{\pi}{2})} & \gate{R_Y(-\frac{\pi}{2})} & \qw \\ \lstick{8} & \gate{R_Y(+\frac{\pi}{2})} & \ghost{MS(Z)} & \gate{R_X(-\frac{\pi}{2})} & \gate{R_Y(-\frac{\pi}{2})} & \qw \\ 
}} \caption{$Z$-type stabilizers} \end{subfigure} \vspace{2mm} \begin{subfigure}[t]{0.6\textwidth} \centering \begin{lstlisting}[backgroundcolor = \color{softblue}] MS(X) MS(Z) GATE s ID1 ID2 GATE s ID1 ID2 PREP 16 PREP 10 XX + 6 16 XX - 0 10 XX - 7 16 XX + 3 10 MEAS 16 MEAS 10 PREP 11 PREP 12 XX + 0 11 XX - 1 12 XX - 1 11 XX - 4 12 XX + 3 11 XX + 2 12 XX - 4 11 XX + 5 12 MEAS 11 MEAS 12 PREP 14 PREP 13 XX + 4 14 XX - 3 13 XX - 5 14 XX - 6 13 XX + 7 14 XX + 4 13 XX - 8 14 XX + 7 13 MEAS 14 MEAS 13 PREP 9 PREP 15 XX + 1 9 XX - 5 15 XX - 2 9 XX + 8 15 MEAS 9 MEAS 15 \end{lstlisting} \end{subfigure} \caption{(Top) The syndrome extraction circuit for the 17-qubit surface code compiled with M\o lmer-S\o rensen entangling gates and single-qubit ion trap operations, where the ancillary qubit wires have been suppressed. The number of single-qubit gates required for the circuit is 30, which is a substantial reduction relative to the naive implementation. (Bottom) The primitive gate operations compiling the $MS(X)$ and $MS(Z)$ gates above. The values ID1 and ID2 correspond to the qubit indices to which the gate is applied, as defined in figure \ref{fig:17QSC}. The PREP gate projects ancillary qubits into the $\ket{0}$ state and all MEAS gates are $Z$-basis measurements (see section \ref{sec:SPAM}). The XX gates are M\o lmer-S\o rensen gates. The parameter $s$, which is dictated by the sign of the experimental interaction parameter, was taken as a free parameter during compilation. The assignment of $s$ for each gate is shown explicitly. Note that this is an intuitive representation of the stabilizer measurements and does not indicate the order of operations in our architecture (recall that all entangling gates are performed and then preparation/measurement gates).} \label{fig:MSsyndromecirc} \end{figure} \subsubsection{Decoder Performances} Figure \ref{fig:DecoderPerf} shows the performance of the two lookup table style decoders, standard lookup and matching rule derived lookup, considered for implementation in a near-term experimental quantum error correction routine. Lookup table decoders were chosen for their easy integration into existing ion trap experimental controls, which have restricted logic/memory available; other techniques, such as maximum likelihood decoding \cite{BravyiMLD2014,HeimCircuitLevelMLD2016} or deeper-memory matching algorithms \cite{FowlerMatchingSC2012, EdmondsPaths1965, EdmondsMatroid1965}, would require additional processing power to implement and integrate into an experiment. The lookup table decoder was favored over the matching table because it requires one fewer round of stabilizer measurement to perform fault-tolerant error correction, with a level-1 pseudothreshold comparable to that of the matching table (figure \ref{fig:DecoderPerf}). Because current estimates of the syndrome extraction indicate it is relatively slow (figure \ref{fig:GateTimeTable}), the ability to choose a correction fault-tolerantly from a minimal number of experimental operations is important to maintain coherence of the encoded information. The lookup table was implemented in all further simulations because of its ease of integration into ion trap controls and its requirement of at most two rounds of syndrome measurement to perform fault-tolerant error correction. 
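As a concrete illustration of the lookup-table construction described above, the listing below builds the table by brute force. It is a minimal sketch assuming NumPy, not the control-system implementation: the $H$ and $G^T$ blocks are the ones given earlier, and the weight function counts a $Y$ error (simultaneous $X$ and $Z$ on one qubit) once, as in the text. This construction is performed offline; only the resulting table needs to reside in the experimental control hardware.

\begin{lstlisting}[language=Python]
# Minimal sketch: build the syndrome -> minimum-weight-error lookup table by
# iterating over all 2^18 binary error configurations and applying the matrix T.
import itertools
import numpy as np

H = np.array([[0,1,1,0,0,0,0,0,0],
              [1,1,0,1,1,0,0,0,0],
              [0,0,0,0,1,1,0,1,1],
              [0,0,0,0,0,0,1,1,0]])
GT = np.array([[1,0,0,1,0,0,0,0,0],
               [0,1,1,0,1,1,0,0,0],
               [0,0,0,1,1,0,1,1,0],
               [0,0,0,0,0,1,0,0,1]])
T = np.block([[H, np.zeros_like(GT)], [np.zeros_like(H), GT]])

def weight(e):
    return int(np.count_nonzero(e[:9] | e[9:]))    # Y-type errors count once

table = {}
for bits in itertools.product((0, 1), repeat=18):
    e = np.array(bits, dtype=int)
    s = tuple(T.dot(e) % 2)                        # 8-bit syndrome of this error
    if s not in table or weight(e) < weight(table[s]):
        table[s] = e

print(len(table))                                  # number of distinct syndromes (2^8)
\end{lstlisting}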
\subsection{Error Correction on Ion Traps} Now that a fast, low-memory, high-performance decoder has been identified, we turn our attention to using it to perform error correction on the 17-qubit surface code under the influence of ion trap errors. First, we must map the abstract quantum circuit used for error correction in the surface code to a circuit that implements gates available in an ion trap quantum computer, specifically single-qubit rotations and M\o lmer-S\o rensen gates. Next, we will discuss the influence of the individual ion trap error sources (outlined in section \ref{sec:ErrorModels}) on the fault-tolerance of the surface code mapped to a linear ion chain, highlighting the experimental parameter regimes that would allow for fault-tolerance of the surface code implementation. Finally, we analyze the error subset probabilities from the importance sampling simulations to understand the roles of the competing error sources and to gain insight into those that are most detrimental to the error correcting properties of the code. \subsection{Surface-17 Syndrome Extraction Circuit Gate Compilation} As shown in figure \ref{fig:MStoCNOT}, the two-qubit gates in the syndrome extraction circuit for the 17-qubit surface code must be decomposed into single-qubit rotation gates and two-qubit M\o lmer-S\o rensen gates. In addition, Hadamard gates are required during the measurement of the $X$-type stabilizers, which can be decomposed into rotation gates in two equivalent ways: $H \equiv R_Y\left(-\frac{\pi}{2} \right) R_X\left(\pi \right)$ or $H \equiv R_X\left(-\pi \right) R_Y\left(\frac{\pi}{2} \right)$. Note that the implementation of the rotation gates constructing the $CNOT$ gate allows for some freedom in the direction of the rotation, which can be used to reduce the number of primitive gates (an outline of the ion trap compilation techniques can be found in \cite{MaslovCircuitCompIT2017}). The parameter $s \in \left\{+1,\, -1 \right\}$ in the circuit is dictated by the sign of the interaction parameter $\chi$ between the two ions, which is determined by the experimental apparatus. At our layer of abstraction, the value of $s$ is left as a free parameter. Applying such a compilation method allowed for the reduction of the number of single-qubit gates from 48 in the naive implementation to 30 in the compiled circuit; the number of entangling gates cannot be reduced in the error correction routine, leaving 24 M\o lmer-S\o rensen gates. A representation of the compiled syndrome extraction circuit is shown in figure \ref{fig:MSsyndromecirc}, where the ancillary wires have been suppressed. This circuit was used for all further results. \subsection{Single Error Source Dominant Effects} \label{sec:SingleErrorThresh} In this section, we characterize the influence of the error sources in the limit where each error type is the dominant source of error. Therefore, the simulations that generate the following pseudothresholds vary the single- and two-qubit error rates (recall that $p_x = p_y = p_z = p_{xx}/10 $) while holding the heating, depolarizing, or spin dephasing error rates constant. Our goal is to find a parameter range under which, again in this limit of a dominant error source, fault-tolerant retention of the encoded information would be possible. 
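For reference, the way the rate-based error sources enter each simulated gate can be summarized with a short sketch, assuming NumPy; the parameter values are of the order of the critical values discussed in this section and the code is an illustration of the channels of section \ref{sec:ErrorModels}, not a definitive implementation of our simulator.

\begin{lstlisting}[language=Python]
# Minimal sketch: sample the gate-dependent errors inserted after one
# Moelmer-Soerensen gate, following the channels of section 5.
import numpy as np

rng = np.random.default_rng()

def errors_after_ms_gate(t_gate, p_xx=1e-3, r_heat=25.0, p_dep=8e-4, r_d=15.0):
    """Return a list of (qubit, Pauli) faults to insert after the two-qubit gate."""
    faults = []
    if rng.random() < p_xx:                 # over-rotation, twirled to an XX error
        faults += [(0, 'X'), (1, 'X')]
    if rng.random() < r_heat * t_gate:      # motional heating, modeled as XX
        faults += [(0, 'X'), (1, 'X')]
    for q in (0, 1):                        # per-qubit background depolarizing + dephasing
        if rng.random() < p_dep:
            faults.append((q, rng.choice(['X', 'Y', 'Z'])))
        if rng.random() < r_d * t_gate:
            faults.append((q, 'Z'))
    return faults

print(errors_after_ms_gate(t_gate=50e-6))   # nearest-neighbor gate, roughly 50 us
\end{lstlisting}

The heating and dephasing probabilities are proportional to the ion-pair-dependent gate time, which is why the chain ordering chosen in the previous section feeds directly into the effective error rates.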
In all instances, a two-qubit gate fidelity of $\ge 99.9\%$ and an error source error rate below a critical rate are necessary to allow for fault-tolerance (see figure \ref{fig:singlesourcethresh}). We discuss those critical rates for each error source below. \begin{figure}[t!] \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{heating.pdf} \caption{Ion heating} \label{fig:singlesourceheating} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{paulibath.pdf} \caption{Background Depolarizing Noise} \label{fig:singlesourcedep} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{stark.pdf} \caption{Spin dephasing} \label{fig:singlesourcedephase} \end{subfigure} \caption{The influence of ion trap error sources on the fault-tolerance of the 17-qubit surface code. For each plot, only the labeled error source was introduced in the simulation in addition to gate errors. To achieve fault-tolerance, a two-qubit gate fidelity of $\ge 99.9\%$ and an error in the gate from the specific error source below a critical value (green curves) are required.} \label{fig:singlesourcethresh} \end{figure} Ion heating was characterized by a parameterized representation of the heating rate $\dot{\bar{n}}/2K$ where $\dot{\bar{n}}$ is the heating (in quanta$/s$) of the gate motional mode and $K$ is the number of loops in phase space traversed by the M\o lmer-S\o rensen gate. As shown in figure \ref{fig:singlesourceheating}, fault-tolerance is not achieved at a heating rate above $25$ quanta$/s$, which corresponds to a motional mode heating rate during the gate of $100$ and $200$ quanta$/s$ for $K=2$ and $K=4$, respectively. A heating rate ($\dot{\bar{n}}$) of about $58$ quanta$/s$ has been observed for a single $^{9}$Be$^+$ ion on a room temperature surface trap \cite{HiteArSurfaceTrap2012} and a silicon based trap in a cryogenic environment used to trap individual $^{40}$Ca$^{+}$ ions exhibited heating rates as low as $0.33$ quanta$/s$ ($0.6(2)$ quanta$/s$ on average) \cite{NiedermayrCryoTrap2014}. Note that macroscopic traps exhibit significantly lower heating rates relative to surface traps; for instance, a single trapped $^{111}$Cd$^{+}$ ion exhibited a heating rate of $2.48$ quanta$/s$ for a room temperature macroscopic trap \cite{DeslauriersMacroCdHeating2004}. However, additional difficulties arise for macroscopic traps in engineering a system that allows for the ion separation, addressing, and detection required for an error correction protocol. Also, the use of sympathetic cooling ions has been shown to reduce motional mode heating effects on $T_2^*$ \cite{Wang10minT22017}, a method that could reduce the heating rates of the idle computational qubits during the error correction routine. The depolarizing error channel was applied to simulate stochastic error processes. One such process of interest is spontaneous Raman and Rayleigh scattering, which results in single- and two-qubit gate errors. Figure \ref{fig:singlesourcedep} displays an upper limit on the scattering rate (per gate) of $8 \times 10^{-4}$ to allow for fault-tolerance when scattering errors dominate. Ozeri et al. have shown that gate errors due to Raman scattering occur at a rate less than $10^{-4}$ for single-qubit gates, but two-qubit gates have scattering error rates on the order of $10^{-2}$ in their experimental setup for various species of trapped ions \cite{OzeriScattErrYb2007}. 
These achieved scattering rates are still above the theoretical lower bound on the scattering rates for single- and two-qubit gates for $^{171}$Yb$^{+}$ by 3 and 7 orders of magnitude, respectively \cite{OzeriScattErrYb2007}, showing potential for improvement, especially in the two-qubit case. Rayleigh scattering errors are less substantial, resulting in error rates per gate orders of magnitude below the Raman scattering error rates for heavy ions such as $^{171}$Yb$^{+}$ \cite{OzeriScattErrYb2007}. Spin dephasing was modeled by assuming a constant dephasing rate, so that the dephasing probability scales linearly with the time of the applied gate. The upper bound on the error rate (figure \ref{fig:singlesourcedephase}) corresponds to a dephasing rate of $15 \, s^{-1}$. These values are related to $T_2^*$ \cite{Wang10minT22017}. Note that the use of magnetic clock transitions \cite{LangerMagClock2005,HartyMagClock2014}, decoherence free subspaces \cite{Schmidt-KalerDFSCa2003}, or sympathetic cooling ions \cite{Wang10minT22017} during computation has been observed to increase the $T_2$ coherence times of the qubits to the order of seconds. A 5-qubit system that has implemented small quantum algorithms \cite{DebnathSmallQCIons2016} and the $\llbracket4,2,2\rrbracket$ error detection code \cite{Linke422Ions2016} with hyperfine qubits exhibits a $T_2^*$ of $\approx0.5\, s$ \cite{Linke5qubitComp2017}, but further magnetic field stabilization could improve this, as shown in \cite{Wang10minT22017}, which demonstrates a coherence time of over 10 minutes for trapped $^{171}$Yb$^{+}$ ions. \subsection{Competing Error Sources: Dominant Errors} To characterize the dominant error sources contributing to the logical error rate in the 17-qubit surface code in the case where multiple error sources are competing, we take advantage of the importance sampling technique. We briefly outline the importance sampling method, highlighting the use of error subsets that will be independently analyzed to gain insight into the effect of each error source on the logical error rate of the encoded state. This will then be followed by an analysis of the statistically significant error subsets, which will be used to characterize the most malignant errors contributing to the failure rate of the error correcting circuit. \subsubsection{Importance Sampling} \label{sec:importsample} This method is an adaptation of that in \cite{LiBareAnc2017}, extended to the case where multiple error sources are present during the simulation. The method relies on approximating the logical error rate as a sum of statistically weighted logical error rates of error subsets. For low enough physical error rates, few subsets need to be sampled in order to obtain an accurate approximation, which makes the approach considerably more efficient than standard direct Monte Carlo sampling. The subsets are indexed according to the number of errors present in the circuit. For instance, for the standard depolarizing error model, the subsets would be indexed according to the number of single- and two-qubit errors present in the circuit. Sampling error configurations corresponding to the number of errors for this subset and calculating the fraction of configurations resulting in a logical error gives an effective subset error rate $A_{s,t}$. Multiplying this subset logical error rate by the total statistical weight of the error subset will provide the subset's contribution to the total logical error rate. 
The statistical weight is computed as follows: \begin{equation} W_{s,t} = \left( \begin{array}{c} n_s \\ s \end{array} \right) p_s^{s}(1-p_s)^{n_s - s} \left( \begin{array}{c} n_t \\ t \end{array} \right) p_t^{t}(1-p_t)^{n_t - t} \end{equation} where $s$ and $t$ are the numbers of single- and two-qubit errors in the circuit being considered, respectively; these are also the indices of the subset. The values $n_s$ and $n_t$ are the numbers of single- and two-qubit fault-points in the circuit, respectively, and $p_s$ and $p_t$ are the single- and two-qubit error channel probabilities. Estimating the logical error rate then constitutes calculating the following sum: \begin{equation} \displaystyle p_L \approx \sum_{(s,t)\,:\,W_{s,t}\geq\lfloor W \rfloor} W_{s,t} \, A_{s,t} \label{eqn:pLapprox} \end{equation} where subsets with statistical weights below a chosen cutoff value, $\lfloor W \rfloor$, are omitted from the sum. Note that, with this method, the sampling of each error subset only needs to be performed once to generate a logical error curve. We altered the method above to handle situations where errors of equivalent types have different error rates; such is the situation for our ion heating and dephasing error models with ion-dependent gate times, which influence the error rate per qubit. To motivate this point, consider the quantum circuit in figure \ref{fig:samplecircexample}. \begin{figure*}[t!] \centering \mbox{ \Qcircuit @C=1.em @R=0.8em { \lstick{} & \qw & \multigate{1}{a} & \multigate{1}{b} & \multigate{1}{c} & \qw & \qw \\ \lstick{} & \qw & \ghost{a} & \ghost{b} & \ghost{c} & \qw & \qw \\ }} \caption{Circuit containing three two-qubit gates, labeled $a$, $b$, and $c$, with error rates $p_a$, $p_b$, and $p_c$, respectively.} \label{fig:samplecircexample} \end{figure*} This circuit contains three two-qubit gates, $a$, $b$, and $c$, with different error rates $p_a$, $p_b$, and $p_c$, respectively. The weight of the $(0,2)$ subset would then be: \begin{equation} p_a p_b (1-p_c) + p_a p_c (1-p_b) + p_b p_c (1-p_a) \end{equation} so the subset weight calculation requires one more ingredient: a sum over all error configurations $e$ containing exactly $n$ errors (the set $f_n$): \begin{equation} \displaystyle W_n = \sum_{e\in f_n} \prod_{k\in e} p_k \prod_{j \not\in e} (1-p_j) \end{equation} When we adapt this approach to heating errors in an ion trap circuit, we get the following calculation of the subset weight: \begin{equation} \left( \begin{array}{c} n_s \\ s \end{array} \right) p_s^{s}(1-p_s)^{n_s - s} \left( \begin{array}{c} n_t \\ t \end{array} \right) p_t^{t}(1-p_t)^{n_t - t} \; \sum_{e\in f_h} \prod_{k\in e} p_k \prod_{j \not\in e} (1-p_{j}) \label{eqn:subetweightexample} \end{equation} where $f_h$ is the set of heating-error configurations containing exactly $h$ errors and $p_k$ is the individual error rate of the heating channel following the $k$-th entangling gate in the simulation, which depends on the ion pair and hence the gate time. We have taken into account the influence of the different rates for the calculation of the subset weights, but this also has an influence on the sampled subset logical error rates, as certain error configurations will be more probable than others. 
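The unequal-rate weight $W_n$ above can be checked directly against the $(0,2)$ example; the listing below is a minimal sketch using only the Python standard library, with $p_a$, $p_b$, $p_c$ as in figure \ref{fig:samplecircexample} taking illustrative values.

\begin{lstlisting}[language=Python]
# Minimal sketch: subset weight W_n for gates with unequal error rates, i.e. the sum
# over all configurations of exactly n faulty gates of prod(p_faulty)*prod(1 - p_ok).
from itertools import combinations

def subset_weight(rates, n):
    total = 0.0
    for faulty in combinations(range(len(rates)), n):
        term = 1.0
        for k, p in enumerate(rates):
            term *= p if k in faulty else (1.0 - p)
        total += term
    return total

p_a, p_b, p_c = 1e-3, 2e-3, 3e-3               # illustrative two-qubit error rates
print(subset_weight([p_a, p_b, p_c], 2))        # p_a*p_b*(1-p_c) + p_a*p_c*(1-p_b) + p_b*p_c*(1-p_a)
\end{lstlisting}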
Because the heating error rates are linearly proportional to the gate times in our error model, we have chosen to sample heating error configurations from a gate time weighted distribution of error configurations giving a corresponding logical error rate of $A_{s,t,h}$. With the new subset weights and subset logical error rates, the estimation of the total logical error rate naturally extends from equation \ref{eqn:pLapprox}. Note that heating adds an extra subset label: ($s$,$t$,$h$). The indices $s$, $t$, and $h$ represent the number of single-qubit gate, two-qubit gate, and heating errors sampled, respectively. Recall that the ion trap error model from section \ref{sec:ErrorModels} contains 5 distinct error sources. Therefore, we extended the concepts from equations \ref{eqn:pLapprox} and \ref{eqn:subetweightexample} to calculate the logical error rate of the 17-qubit surface code under the influence of single-qubit gate, two-qubit gate, ion heating, background depolarization, and dephasing errors. The analysis below will include 5 index subsets ordered with the indices listing the number of single-qubit gate, two-qubit gate, heating, background depolarization, and dephasing errors sampled in the circuit, in that order. \begin{figure}[t!] \begin{subfigure}[t!]{0.5\textwidth} \includegraphics[width=\textwidth]{subsetswts1e-3.pdf} \caption{$10^{-3} \leq W < 10^{-2}$} \label{fig:subsetweightsa} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{subsetswts1e-4.pdf} \caption{$10^{-4} \leq W < 10^{-3}$} \label{fig:subsetweightsb} \end{subfigure} \begin{subfigure}[t!]{0.5\textwidth} \includegraphics[width=\textwidth]{subsetswts1e-5.pdf} \caption{$10^{-5} \leq W < 10^{-4}$} \label{fig:subsetweightsc} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{subsetswts1e-6.pdf} \caption{$10^{-6} \leq W < 10^{-5}$} \label{fig:subsetweightsd} \end{subfigure} \caption{The subset logical error rates and subset statistical weights above the cutoff of $10^{-6}$ corresponding to events expected to be sampled from a random distribution at least once out of a million samples. The data is separated into four plots according to the order of magnitude of the subset statistical weights, which are plotted in red. The logical error rates for the subsets are plotted in blue. Note that the product of the subset weight and its corresponding logical error rate dictates the subset's contribution to the total logical error rate of the code. For calculation of the statistical weights of the subsets, the single-qubit gate error rate $\left(p_y = p_x = p_z \right)$, two-qubit gate error rate $\left(p_{xx}\right)$, rate of heating $\left(r_{heat}\right)$, background depolarizing noise error rate $\left(p_{dep}\right)$, and rate of dephasing $\left( r_d \right)$ were $10^{-4}$, $10^{-3}$, $25$ quanta$/s$, $8 \times 10^{-4}$, and $15$ $s^{-1}$, respectively, which correspond to the parameters allowing for the logical error rate equivalent to the unencoded two-qubit gate error rate in section \ref{sec:SingleErrorThresh}. A plot containing similar subset information for data beyond the subset cutoff is shown in figure \ref{fig:allsubsets}.} \label{fig:SubsetTab} \end{figure} \begin{figure}[b!] 
\begin{subfigure}{\textwidth} \centering \mbox{ \Qcircuit @C=0.8em @R=.94em { \lstick{\ket{\psi}} & \multigate{1}{MS} & \qw \\ \lstick{\ket{0}} & \ghost{MS} & \qw }} \parbox{0.05\textwidth}{ \centering $\Rightarrow$\\ \vspace{-8mm} } \mbox{ \Qcircuit @C=0.8em @R=.94em { \lstick{} & \multigate{1}{\textcolor{red}{E_2}} & \qw \\ \lstick{} & \ghost{\textcolor{red}{E_2}} & \qw }} \parbox{0.05\textwidth}{ \centering $\Rightarrow$\\ \vspace{-8mm} } \mbox{ \Qcircuit @C=0.8em @R=.94em { \lstick{} & \gate{R_X(\pm\frac{\pi}{2})} & \gate{R_Y(\pm \frac{\pi}{2})} & \qw \\ \lstick{} & \qw & \qw & \qw }} \parbox{0.05\textwidth}{ \centering $\Rightarrow$\\ \vspace{-8mm} } \mbox{ \Qcircuit @C=0.8em @R=.94em { \lstick{} & \multigate{1}{\textcolor{red}{E_2'}} & \qw \\ \lstick{} & \ghost{\textcolor{red}{E_2'}} & \qw }} \caption{Faults existing after a M\o lmer-S\o rensen gate are trans-\\formed by single-qubit rotations gates and their errors.} \end{subfigure} \vspace{3mm} \begin{subfigure}{0.5\textwidth} \centering $\begin{array}{c c c} \underline{E_2} & & \underline{E_2'} \\ II & \rightarrow & \textcolor{newgreen}{ZI} \\ XI & \rightarrow & \textcolor{blue}{II} \\ YI & \rightarrow & \textcolor{newgreen}{YI} \\ ZI & \rightarrow & \textcolor{newgreen}{XI} \\ XX & \rightarrow & \textcolor{red}{IX} \\ XY & \rightarrow & \textcolor{red}{IY} \\ XZ & \rightarrow & \textcolor{blue}{IZ} \\ YX & \rightarrow & \textcolor{orange}{YX} \\ \end{array}$ ~ $\begin{array}{c c c} \underline{E_2} & & \underline{E_2'} \\ YY & \rightarrow & \textcolor{orange}{YY} \\ YZ & \rightarrow & \textcolor{newgreen}{YZ} \\ ZX & \rightarrow & \textcolor{orange}{XX} \\ ZY & \rightarrow & \textcolor{orange}{XY} \\ ZZ & \rightarrow & \textcolor{newgreen}{XZ} \\ IX & \rightarrow & \textcolor{orange}{ZX} \\ IY & \rightarrow & \textcolor{orange}{ZY} \\ IZ & \rightarrow & \textcolor{newgreen}{ZZ} \\ \end{array}$ \caption{$R_X \left(\pm \frac{\pi}{2} \right)$ Gate Error: $X$} \label{fig:RXerrors} \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering $\begin{array}{c c c} \underline{E_2} & & \underline{E_2'} \\ II & \rightarrow & \textcolor{newgreen}{YI} \\ XI & \rightarrow & \textcolor{newgreen}{XI} \\ YI & \rightarrow & \textcolor{newgreen}{ZI} \\ ZI & \rightarrow & \textcolor{blue}{II} \\ XX & \rightarrow & \textcolor{orange}{XX} \\ XY & \rightarrow & \textcolor{orange}{XY} \\ XZ & \rightarrow & \textcolor{newgreen}{XZ} \\ YX & \rightarrow & \textcolor{orange}{ZX} \\ \end{array}$ ~ $\begin{array}{c c c} \underline{E_2} & & \underline{E_2'} \\ YY & \rightarrow & \textcolor{orange}{ZY} \\ YZ & \rightarrow & \textcolor{newgreen}{ZZ} \\ ZX & \rightarrow & \textcolor{red}{IX} \\ ZY & \rightarrow & \textcolor{red}{IY} \\ ZZ & \rightarrow & \textcolor{blue}{IZ} \\ IX & \rightarrow & \textcolor{orange}{YX} \\ IY & \rightarrow & \textcolor{orange}{YY} \\ IZ & \rightarrow & \textcolor{newgreen}{YZ} \\ \end{array}$ \caption{$R_Y \left(\pm \frac{\pi}{2} \right)$ Gate Error: $Y$} \label{fig:RYerrors} \end{subfigure} \vspace{2mm} \begin{subfigure}{\textwidth} \centering \textcolor{blue}{$\bullet$ Corrected} \hspace{2mm} \textcolor{newgreen}{$\bullet$ Single Data Error} \hspace{2mm} $\bullet$ \textcolor{orange}{Flagged Error} \hspace{2mm} \textcolor{red}{$\bullet$ Meaurement Error} \end{subfigure} \caption{The errors in rotation gates will transform the faults that exist after a two-qubit gate. 
The transformation of an existing two-qubit Pauli error (ignoring the phase) after a single-qubit gate error on wires that contain an $R_X \left(\pm \frac{\pi}{2} \right)$ and an $R_Y \left(\pm \frac{\pi}{2} \right)$ gate is shown in b.) and c.). The first and second elements of the Pauli error correspond to the errors on the data and ancilla qubit, respectively. There are two types of single-qubit over-rotation errors, $X$ and $Y$, which transform Pauli errors according to b.) and c.), respectively. Measurement errors can introduce errors into the code through faulty corrections when errors occur in the following round of stabilizer measurement. Flagged errors are detectable errors that are not always corrected properly, but the error stays local to the qubit given that the following syndrome measurement is accurate. Single data errors are favorable because the error stays local to the data qubit and can be more easily corrected in the next round of measurement, since no faulty information was sent to the decoder. Self-correction may also occur for specific errors. Applying the transformation $II \leftrightarrow ZI$ on the $E_2'$ values in c.) gives the resulting errors on wires containing only $R_Y \left(\pm \frac{\pi}{2} \right)$ gate errors. } \label{fig:sqgateerrors} \end{figure} \subsubsection{Competing Error Sources: Sampling Subset Analysis} \label{sec:subsetanal} For the importance sampling simulations of the 17-qubit surface code, a subset weight cutoff of $\lfloor W \rfloor = 10^{-6}$ was used and $30,000$ samples were collected for the calculation of each subset's logical error rate $A_{s,t,h,dep,z}$. This weight cutoff corresponds to events expected to be sampled at least once out of a million randomly sampled events, which is sufficient for near-term error correction experiments. To calculate the statistical weights of the subsets, a single-qubit gate error rate $\left(p_y = p_x = p_z \right)$, two-qubit gate error rate $\left(p_{xx}\right)$, rate of heating $\left(r_{heat}\right)$, background depolarizing noise error rate $\left(p_{dep}\right)$, and rate of dephasing $\left( r_d \right)$ of $10^{-4}$, $10^{-3}$, $25$ quanta$/s$, $8 \times 10^{-4}$, and $15$ $s^{-1}$ were chosen, respectively, which correspond to the error rates that exhibit a logical error rate equal to the two-qubit gate error rate (see the green curves in figure \ref{fig:singlesourcethresh}). The logical error rates and statistical error weights calculated for each subset are presented in figure \ref{fig:SubsetTab}. An expanded set of subsets beyond this cutoff was run and is shown in figure \ref{fig:allsubsets} (see Supplementary Material). The goal of this analysis is to parse out situations where certain error sources show a dominant contribution to the failure rate of the quantum error correcting circuit. The logical error rates for each of the subsets sampled are shown in blue in figure \ref{fig:SubsetTab}. The error subsets containing two-qubit gate or heating errors tend to have higher logical error rates than other subsets containing a comparable number of errors. This occurs due to the ability of errors of this type to generate measurement faults in the circuit. The M\o lmer-S\o rensen gate transforms single-qubit data errors in the following manner: $ZI \leftrightarrow YX$ and $YI \leftrightarrow -ZX$, where the data and ancilla qubit errors are the first and second elements, respectively.
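These transformation rules can be checked directly (a minimal sketch, independent of our simulation code; the overall sign depends on the phase convention chosen for the gate) by conjugating the Pauli errors with a maximally entangling M\o lmer-S\o rensen gate $\exp(i\frac{\pi}{4}X\otimes X)$.
\begin{verbatim}
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Maximally entangling Molmer-Sorensen gate, exp(i*pi/4 * X(x)X)
MS = (np.eye(4) + 1j * np.kron(X, X)) / np.sqrt(2)

def propagate(E):
    """An error E occurring before the gate appears as MS E MS^dagger after it."""
    return MS @ E @ MS.conj().T

print(np.allclose(propagate(np.kron(Z, I)),  np.kron(Y, X)))   # ZI -> YX
print(np.allclose(propagate(np.kron(Y, I)), -np.kron(Z, X)))   # YI -> -ZX
\end{verbatim}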
A two-qubit gate or heating error makes preexisting errors undetectable, which is a particularly malignant case. The tendency towards measurement errors in the ion trap error model indicates that implementing a decoder that makes a correction based on more syndrome measurement rounds may show an above-average performance boost in error correction. Error subsets containing single-qubit gate errors tend to have lower logical error rates than other subsets with a comparable number of errors. To understand why this is the case, we explore the effect the errors have on error correction. Figure \ref{fig:sqgateerrors} shows how single-qubit gate errors transform preexisting errors in the circuit. The particularly malignant case is when there is a measurement error, which can introduce errors into the code. For each single-qubit fault point, there are only two elements of the two-qubit Pauli group that are transformed in a manner that would result in a measurement error. In fact, half of the elements of this group result in single-qubit errors (or no errors) on data that can be readily decoded in the following step of stabilizer measurement (see figure \ref{fig:sqgateerrors}). The remaining errors are detectable but not necessarily corrected properly (this depends on the location where the fault occurs). However, these errors do alert the decoder to the location of an error on the code, which is favorable, and the faulty correction on these qubits will not propagate errors in a malignant manner given that the next round of stabilizer measurement is correct. Note that one of the malignant errors transformed in figure \ref{fig:RXerrors} ($R_X(\pm \frac{\pi}{2})$ gate error) is $XX$, which is the form of the two-qubit gate and heating errors. Therefore, compiling out the $R_X (\pm \frac{\pi}{2})$ gates seems to have also boosted the ability of the decoder to decode two-qubit gate and heating errors, in addition to the obvious performance boost from having fewer fault points in the compiled circuit. Another alarming malignant configuration in figure \ref{fig:RYerrors} is the $ZX$ error, which is the result of the M\o lmer-S\o rensen transformation of $YI$ (a single-qubit data error). However, this fault requires two single-qubit $Y$ errors, which have a low statistical weight of occurrence (see figure \ref{fig:SubsetTab}). How else can we use this information to improve error correction? One obvious extension is to use such statistics to develop decoders targeted at such errors. For instance, this information about the failure rates \emph{at the logical level} may be used to bias transition matrix elements of a maximum likelihood decoder to include information about the influence of error cosets on the code's performance instead of only considering the statistical weights of the error cosets \cite{TuckettSCBiasNoise2017}. Code considerations when optimizing the ion chain layout could serve to bound the effects of the gate-time-dependent error sources. Specifically, optimizing the ion chain to assign the most distant qubits (with the longest gate times) to weight-2 stabilizers can reduce the influence of anisotropic error sources. In a scenario where two-qubit gate and heating errors dominate, assigning the faultiest gates to the weight-2 $X$-type stabilizers would bound the influence of the most probable heating errors to single-qubit $X$-errors on the data.
While this does not apply to our current error model, where dephasing ($Z$) and heating ($XX$) errors both have the same dependency on gate time, this is probably not the case experimentally, and the greater the anisotropy of the errors, the more one can reduce their effect. The subset statistical weights (probabilities of occurrence) are shown in red in figure \ref{fig:SubsetTab}. These statistical weights give insight into the likelihood of sampling particular error events. Recall that the subset's contribution to the total logical error rate of the code (used to generate the pseudothreshold plots in figure \ref{fig:singlesourcethresh}) is the product of the subset weight and subset logical error rate (see section \ref{sec:importsample}). Only ten points above the subset weight of $10^{-3}$ (figure \ref{fig:subsetweightsa}) have a significant contribution to the total logical error rate; that is, this small collection of subsets can be used to completely recreate the pseudothreshold plots in figure \ref{fig:singlesourcethresh}. In fact, the two subsets, $(0,0,1,0,1)$ and $(0,0,1,1,0)$, have the largest contribution to the encoded logical error rate and bound the logical error rate to $p_L \approx \sum A_{s,t,h,dep,z} \times W \approx 1 \times 10^{-3}$, which corresponds to a two-qubit gate fidelity of $99.9 \%$ (recall that the gate error rate for calculation of the subset weights was $10^{-3}$). This essentially recreates our calculation of a $99.9 \%$ two-qubit gate fidelity for fault-tolerance that used more subsets. Therefore, changes in the statistics of the dominating subsets have a significant influence on the observed pseudothreshold of the quantum error correcting code and can be taken into account when implementing a decoding algorithm. This also illustrates the concern that a success metric such as the (pseudo)threshold only represents the mean statistics of an underlying error model \cite{BarnesFailureDistr2017}. \section{Conclusions} We studied the feasibility of implementing quantum error correction with the 17-qubit surface code on a linear chain of atomic qubits. Optimization of the ion chain showed a preference for mixed data/ancilla configurations to reduce the gate times for syndrome extraction. We showed that the 17-qubit surface code contains enough structure to allow for the use of two 16-key lookup tables for error correction, with a respectably high pseudothreshold of $3 \times 10^{-3}$ for circuit-level depolarizing noise. The lookup table decoder is easily integrated into the logic of the ion trap controls and decodes at a rate much faster ($\mathcal{O}\left(ns\right)$) than any physical operation on the qubits. When modeling ion trap error sources, it was shown that a two-qubit gate fidelity of $\ge 99.9 \, \%$ is required in the cases where ion heating, scattering, or spin dephasing are the dominant error sources. Furthermore, the parameter regimes that allow for fault-tolerant error correction are not outlandish for such error sources. Finally, we took advantage of the error subset data required for our simulations to parse out trends that occur when multiple error sources are present during error correction. We found that two-qubit gate and heating errors are the most malignant error sources and that single-qubit gate errors are manageable in our ion trap error model. We also speculated on how this subset information can be used to reduce the influence of malignant error sources on error correction.
Similar calculations have recently been done for the Steane [[7,1,3]] code in a linear chain of ions that also allows for the rotation of ion crystals. The calculations presented in \cite{BermudezITQCwithSteane2017} use a different Pauli error model for ion trap errors that emphasizes memory errors. The key difference in approach is that our model includes enough ancillae such that a full syndrome measurement takes place during a single measurement time. The serialization of measurements in \cite{BermudezITQCwithSteane2017}, when combined with intrinsic memory errors, requires a lower physical qubit error rate in order to achieve a break even logical error rate. To better assess the performance of quantum error correcting codes in real systems, more detailed physical error models are warranted \cite{GutierrezIncohCohNoiseEC2016}. A promising approach is to use realistic error channels with quantum trajectories to avoid simulating the entire density matrix. As recently shown for superconducting systems and the surface-17 code, a smart choice in circuit representation allows the entire 17 qubit system to be modeled with only 10 active qubits \cite{OBrienSurface17DensityMatrix2017}. In the future, we plan to apply this technique to experimentally derived error models for ion traps to help assess which coherent errors have the most deleterious effect. \section{Acknowledgements} We thank Chris Monroe, Jungsang Kim, and Norbert Linke for useful discussions. This work was supported by the Office of the Director of National Intelligence - Intelligence Advanced Research Projects Activity through ARO contract W911NF-10-1-0231 and the National Science Foundation grant PHY-1415461. \bibliographystyle{apsrev}
\section{Introduction} Convolutional neural networks heavily rely on equivariance of the convolution operator under translation. Weights are shared between different spatial positions, which reduces the number of parameters and pairs well with the often occurring underlying translational transformations in image data. It naturally follows that a large amount of research is done to exploit other underlying transformations and symmetries and provide deep neural network models with equivariance or invariance under those transformations (\emph{cf.}} \def\Cf{\emph{Cf.}~Figure~\ref{fig:Overview}). Further, equivariance and invariance are useful properties when aiming to produce data representations that disentangle factors of variation: when transforming a given input example by varying one factor, we usually aim for equivariance in one representation entry and invariance in the others. One recent line of methods that aim to provide a relaxed version of such a setting are \emph{capsule networks}. Our work focuses on obtaining a formalized version of capsule networks that guarantees those properties as well as bringing them together with \emph{group equivariant convolutions} by \citet{Cohen:2016}, which also provide provable equivariance properties under transformations within a group. In the following, we will shortly introduce capsule networks, as proposed by \citeauthor{Hinton:2011} and \citeauthor{Sabour:2017}, before we outline our contribution in detail. \subsection{Capsule networks} Capsule networks~\citep{Hinton:2011} and the recently proposed routing by agreement algorithm~\citep{Sabour:2017} represent a different paradigm for deep neural networks for vision tasks. They aim to hard-wire the ability to disentangle the pose of an object from the evidence of its existence, also called \emph{viewpoint equi- and invariance} in the context of vision tasks. This is done by encoding the output of one layer as a tuple of a pose vector and an activation. Further, they are inspired by human vision and detect linear, hierarchical relationships occurring in the data. Recent advances describe the dynamic routing by agreement method that iteratively computes how to route data from one layer to the next. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{overview} \caption{The task of dynamic routing for capsules with concepts of equivariant pose vectors and invariant agreements. Layers with those properties can be used to build viewpoint invariant architectures, which disentangle factors of variation.}\label{fig:Overview} \end{figure} One capsule layer receives $n$ pose matrices $\mathbf{M}_i$, which are then transformed by a trainable linear transformation $\mathbf{W}_{i,j}$ to cast $n$ votes for the pose of the $j$th output capsule: \begin{equation*} \mathbf{V}_{i,j} = \mathbf{M}_i \cdot \mathbf{W}_{i,j}. \end{equation*} The votes are used to compute a proposal for an output pose by a variant of weighted averaging. The weights are then iteratively refined using distances between votes and the proposal. Last, an agreement value is computed as output activation, which encodes how strong the votes agree on the output pose. The capsule layer outputs a set of tuples $(\mathbf{M}, a)$, each containing the pose matrix and the agreement (as activation) of one output capsule. \subsection{Motivation and contribution} General capsule networks do not come with guaranteed equivariances or invariances which are essential to guarantee disentangled representations and viewpoint invariance. 
We identified two issues that prevent exact equivariance in current capsule architectures. First, the averaging of votes takes place in a vector space, while the underlying space of poses is a manifold. The vote averaging of vector space representations does not produce equivariant mean estimates on the manifold. Second, capsule layers use trainable transformation kernels defined over a local receptive field in the spatial vector field domain, where the receptive field coordinates are agnostic to the pose. They lead to non-equivariant votes and, consequently, non-equivariant output poses. In this work, we propose possible solutions for these issues. Our contribution can be divided into the following parts. First, we present group equivariant capsule layers, a specialized kind of capsule layer whose pose vectors are elements of a group $(G,\circ)$ (\emph{cf.}~Section~\ref{sec:capsules}). Given this restriction, we provide a general scheme for dynamic routing by agreement algorithms and show that, under certain conditions, equivariance and invariance properties under transformations from $G$ are mathematically guaranteed. Second, we tackle the issue of aggregating over local receptive fields in group capsule networks (\emph{cf.}~Section~\ref{sec:pooling}). Third, we bring together capsule networks with group convolutions and show how the group capsule layers can be leveraged to build convolutional neural networks that inherit the guaranteed equi- and invariances, as well as producing disentangled representations (\emph{cf.}~Section~\ref{sec:groupconv}). Last, we apply this combined architecture as a proof of concept application of our framework to MNIST datasets and verify the properties experimentally. \section{Group equivariant capsules}\label{sec:capsules} We begin with essential definitions for group capsule layers and the properties we aim to guarantee. Given a Lie group $(G, \circ)$, we formally describe a group capsule layer with $m$ output capsules by a set of function tuples \begin{equation} \{(L_p^j(\mathbf{P}, \mathbf{a}), L_a^j(\mathbf{P}, \mathbf{a})) \,\, | \,\, j \in\{1,\ldots,m\}\} \textrm{.} \end{equation} Here, the functions $L_p$ compute the output pose vectors while functions $L_a$ compute output activations, given input pose vectors $\mathbf{P} = (\mathbf{p}_1,...,\mathbf{p}_n) \in G^n$ and input activations $\mathbf{a} \in \mathbb{R}^n$. Since our goal is to achieve global invariance and local equivariance under the group law $\circ$, we define those two properties for a single group capsule layer (\emph{cf.}~Figure~\ref{fig:Overview}). First, the function computing the output pose vectors of one layer is \emph{left-equivariant} regarding applications of the group law if \begin{equation} \label{eq:equivariance_req} L_p(\mathbf{g}\circ\mathbf{P}, \mathbf{a}) = \mathbf{g}\circ L_p(\mathbf{P}, \mathbf{a}), \,\,\,\,\,\,\, \forall \mathbf{g} \in G. \end{equation} Second, the function computing activations of one layer is \emph{invariant} under applications of the group law $\circ$ if \begin{equation} \label{eq:invariance_req} L_a(\mathbf{g}\circ\mathbf{P}, \mathbf{a}) = L_a(\mathbf{P}, \mathbf{a}), \,\,\,\,\,\,\, \forall \mathbf{g} \in G. \end{equation} Since equivariance is transitive, it can be deduced that stacking layers that fulfill these properties preserves both properties for the combined operation.
Therefore, if we apply a transformation from $G$ on the input of a sequence of those layers (\emph{e.g.}~a whole deep network), we do not change the resulting output activations but produce output pose vectors which are transformed by the same transformation. This amounts to fulfilling the vision of locally equivariant and globally invariant capsule networks. \subsection{Group capsule layer} We define the group capsule layer functions as the output of an iterative routing by agreement, similar to the approach proposed by \citet{Sabour:2017}. The whole algorithm, given a generic \emph{weighted average operation} $\mathcal{M}$ and a \emph{distance measure} $\delta$, is shown in Algorithm~\ref{alg:routing}. \begin{algorithm} \caption{Group capsule layer} \begin{algorithmic}\label{alg:routing} \STATE{} \textbf{Input}: poses $\mathbf{P} = (\mathbf{p}_1, \ldots, \mathbf{p}_n) \in G^n$, activations $\mathbf{a} = (a_1, \ldots, a_n) \in \mathbb{R}^n$ \STATE{} \textbf{Trainable parameters}: transformations $\mathbf{t}_{i,j}$ \STATE{} \textbf{Output}: poses $\hat{\mathbf{P}} = (\hat{\mathbf{p}}_1, \ldots, \hat{\mathbf{p}}_m) \in G^m$, activations $\hat{\mathbf{a}} = (\hat{a}_1, \ldots, \hat{a}_m) \in \mathbb{R}^m$ \STATE{} -------------------------------------------------------------------------------------------------------------------- \STATE{} $\mathbf{v}_{i,j} \leftarrow \mathbf{p}_i \circ \mathbf{t}_{i,j}$ \hfill for all input capsules $i$ and output capsules $j$ \STATE{} $\hat{\mathbf{p}}_j \leftarrow \mathcal{M}((\mathbf{v}_{1,j}, \ldots, \mathbf{v}_{n,j}),\mathbf{a})$ \hfill $\forall j$ \FOR{$r$ iterations} \STATE{} $w_{i,j} \leftarrow \sigma(-\delta(\hat{\mathbf{p}}_j, \mathbf{v}_{i,j})) \cdot a_i$ \hfill $\forall i , j$ \STATE{} $\hat{\mathbf{p}}_j \leftarrow \mathcal{M}((\mathbf{v}_{1,j}, \ldots, \mathbf{v}_{n,j}),\mathbf{w}_{:,j})$ \hspace{8.02cm} $\forall j $ \ENDFOR% \STATE{} $\hat{a}_j \leftarrow \sigma(-\frac{1}{n}\sum_{i=1}^n \delta(\hat{\mathbf{p}}_j, \mathbf{v}_{i,j}))$ \hfill $\forall j$ \STATE{} Return $\hat{\mathbf{p}}_1, \ldots, \hat{\mathbf{p}}_m$, $\hat{\mathbf{a}}$ \end{algorithmic} \end{algorithm} Generally, votes are cast by applying trainable group elements $\mathbf{t}_{i,j}$ to the input pose vectors $\mathbf{p}_{i}$ (using the group law $\circ$), where $i$ and $j$ are the indices for input and output capsules, respectively. Then, the agreement is iteratively computed: First, new pose candidates are obtained by using the weighted average operator $\mathcal{M}$. Second, the negative, shifted $\delta$-distance between votes and pose candidates is used for the weight update. Last, the agreement is computed by averaging the negative distances between votes and the new pose. The functions $\sigma$ can be chosen to be some scaling and shifting non-linearity, for example $\sigma(x) = \texttt{sigmoid}(\alpha \cdot x + \beta)$ with trainable $\alpha$ and $\beta$, or as softmax over the output capsule dimension. \paragraph{Properties of $\mathcal{M}$ and $\delta$} For the following theorems we need to define specific properties of $\mathcal{M}$ and $\delta$. The mean operation $\mathcal{M}:G^n \times \mathbb{R}^n \rightarrow G$ should map $n$ elements of the group $(G,\circ)$, weighted by values $\mathbf{x} =(x_1,...,x_n) \in \mathbb{R}^n$, to some kind of weighted mean of those values in $G$.
Besides closure, $\mathcal{M}$ should be \emph{left-equivariant} under the group law, formally: \begin{equation} \mathcal{M}(\mathbf{g}\circ\mathbf{P}, \mathbf{x}) = \mathbf{g}\circ \mathcal{M}(\mathbf{P}, \mathbf{x}), \,\,\,\,\,\,\, \forall \mathbf{g} \in G \textrm{,} \end{equation} as well as invariant under permutations of the inputs. Further, the distance measure $\delta$ needs to be chosen so that transformations $\mathbf{g} \in G$ are $\delta$-distance preserving: \begin{equation} \delta(\mathbf{g}\circ\mathbf{g}_1,\mathbf{g}\circ\mathbf{g}_2) = \delta(\mathbf{g}_1,\mathbf{g}_2), \,\,\,\,\,\,\, \forall \mathbf{g} \in G \textrm{.} \end{equation} Given these preliminaries, we can formulate the following two theorems. \newtheorem{theorem1}{Theorem} \begin{theorem1}\label{theorem1} Let $\mathcal{M}$ be a weighted averaging operation that is equivariant under left-applications of $\mathbf{g} \in G$ and let $G$ be closed under applications of $\mathcal{M}$. Further, let $\delta$ be chosen so that all $\mathbf{g}\in G$ are $\delta$-distance preserving. Then, the function $L_p(\mathbf{P}, \mathbf{a}) = (\hat{\mathbf{p}}_1, \ldots, \hat{\mathbf{p}}_m)$, defined by Algorithm~\ref{alg:routing}, is equivariant under left-applications of $\mathbf{g} \in G$ on input pose vectors $\mathbf{P} \in G^n$: \begin{equation} \label{eq:equivariance_req2} L_p(\mathbf{g}\circ\mathbf{P}, \mathbf{a}) = \mathbf{g}\circ L_p(\mathbf{P}, \mathbf{a}), \,\,\,\,\,\,\,\,\,\,\,\, \forall \mathbf{g} \in G \textrm{.} \end{equation} \end{theorem1} \begin{proof} The theorem follows by induction over the inner loop of the algorithm, using the equivariance of $\mathcal{M}$, $\delta$-preservation and group properties. The full proof is provided in the appendix. \end{proof} \begin{theorem1}\label{theorem2} Assume the same conditions as in Theorem~\ref{theorem1}. Then, the function $L_a(\mathbf{P}, \mathbf{a}) = (\hat{a}_1, \ldots, \hat{a}_m)$ defined by Algorithm~\ref{alg:routing} is invariant under joint left-applications of $\mathbf{g} \in G$ on input pose vectors $\mathbf{P} \in G^n$: \begin{equation} \label{eq:invariance_req2} L_a(\mathbf{g}\circ\mathbf{P}, \mathbf{a}) = L_a(\mathbf{P}, \mathbf{a}), \,\,\,\,\,\,\, \forall \mathbf{g} \in G \textrm{.} \end{equation} \end{theorem1} \begin{proof} The result follows by applying Theorem~\ref{theorem1} and the $\delta$-distance preservation. The full proof is provided in the appendix. \end{proof} Given these two theorems (and the method proposed in Section~\ref{sec:pooling}), we are able to build a deep group capsule network, by a composition of those layers, that guarantees global invariance in output activations and equivariance in pose vectors. \subsection{Examples of useful groups} Given the proposed algorithm, $\mathcal{M}$ and $\delta$ have to be chosen based on the group and its element representation. A canonical application of the proposed framework on images is achieved by using the two-dimensional rotation group $SO(2)$. We chose to represent the elements of $G$ as two-dimensional unit vectors, $\mathcal{M}$ as the renormalized, Euclidean, weighted mean, and $\delta$ as the negative scalar product. Further higher-dimensional groups include the three-dimensional rotation group $SO(3)$ as well as $GL(n,\mathbb{R})$, the group of general invertible matrices. Other potentially interesting applications of group capsules are translation groups. Further discussion about them, as well as other groups, can be found in the appendix.
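To make the $SO(2)$ instantiation concrete, the following minimal sketch (a placeholder implementation; the choice of $\sigma$ and all names are ours, not those of our reference implementation) realizes Algorithm~\ref{alg:routing} with poses represented as two-dimensional unit vectors, $\mathcal{M}$ as the renormalized weighted Euclidean mean and $\delta$ as the negative scalar product.
\begin{verbatim}
import numpy as np

def compose(p, t):
    # group law of SO(2) on unit-vector representatives
    return np.array([p[0]*t[0] - p[1]*t[1], p[0]*t[1] + p[1]*t[0]])

def weighted_mean(votes, w):
    # renormalized, Euclidean, weighted mean (maps back onto the unit circle)
    m = (w[:, None] * votes).sum(axis=0)
    return m / (np.linalg.norm(m) + 1e-12)

def delta(p, v):
    return -np.dot(p, v)          # negative scalar product

def sigma(x, alpha=1.0, beta=0.0):
    return 1.0 / (1.0 + np.exp(-(alpha * x + beta)))

def group_capsule_layer(P, a, T, r=2):
    """P: (n,2) input poses, a: (n,) activations, T: (n,m,2) trainable
    group elements. Returns (m,2) output poses and (m,) activations."""
    n, m, _ = T.shape
    V = np.array([[compose(P[i], T[i, j]) for j in range(m)] for i in range(n)])
    P_hat = np.array([weighted_mean(V[:, j], a) for j in range(m)])
    for _ in range(r):
        W = np.array([[sigma(-delta(P_hat[j], V[i, j])) * a[i]
                       for j in range(m)] for i in range(n)])
        P_hat = np.array([weighted_mean(V[:, j], W[:, j]) for j in range(m)])
    a_hat = np.array([sigma(-np.mean([delta(P_hat[j], V[i, j])
                                      for i in range(n)])) for j in range(m)])
    return P_hat, a_hat
\end{verbatim}
Rotating all input poses by a common $\mathbf{g}$ rotates the returned poses by $\mathbf{g}$ and leaves the activations unchanged up to floating point precision, as stated in Theorems~\ref{theorem1} and~\ref{theorem2}.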
\paragraph{Group products} It should be noted that using the direct product of groups allows us to apply our framework for group combinations. Given two groups $(G,\circ_G)$ and $(H, \circ_H)$, we can construct the direct product group $(G,\circ_G) \times (H, \circ_H) = (G \times H, \circ)$, with $(\mathbf{g}_1, \mathbf{h}_1) \circ (\mathbf{g}_2, \mathbf{h}_2) = (\mathbf{g}_1 \circ_G \mathbf{g}_2, \mathbf{h}_1 \circ_H \mathbf{h}_2)$. Thus, for example, the product $SO(2) \times (\mathbb{R}^2,+)$ is again a group. Therefore, Theorem~\ref{theorem1} and~\ref{theorem2} also apply for those combinations. As a result, the pose vectors contain independent poses for each group, keeping information disentangled between the individual ones. \section{Spatial aggregation with group capsules}\label{sec:pooling} This section describes our proposed spatial aggregation method for group capsule networks. As previously mentioned, current capsule networks perform spatial aggregation of capsules, which does not result in equivariant poses. When the input of a capsule network is transformed, not only the deeper pose vectors change accordingly. Since vector fields of poses are computed, the positions of those pose vectors in $\mathbb{R}^n$ might also change based on the transformation, formally modeled using the concept of induced representations \citep{Cohen:2018b}. The trainable transformations $\mathbf{t}$ however, are defined for fixed positions of the local receptive field, which is agnostic to those translations. Therefore, the composition of pose vectors and trainable transformations to compute the votes depends on the input transformation, which prevents equivariance and invariance. Formally, the votes $\mathbf{v}_i$ computed in a capsule layer over a local receptive field can be described by \begin{equation} \begin{split} \mathbf{v}_i = \mathbf{g} \circ p(\mathbf{g}^{-1}(\mathbf{x}_i)) \, \circ \, t(\mathbf{x}_i) \textrm{,} \end{split} \end{equation} where $\mathbf{x}_i$ is a receptive field position, $p(\mathbf{x}_i)$ the input pose at position $\mathbf{x}_i$, $t(\mathbf{x}_i)$ the trainable transformation at position $\mathbf{x}_i$, and $\mathbf{g}$ the input transformation. It can be seen that we do not receive a set of equivariant votes $\mathbf{v}_i$ since the matching of $p(\cdot)$ and $t(\cdot)$ varies depending on $\mathbf{g}$. A visual example of the described issue (and a counterexample for equivariance) for an aggregation over a $2 \times 2$ block and $G=SO(2)$ can be found in Figures~\ref{fig:Spatial-Aggregationa} and \ref{fig:Spatial-Aggregationb}. \begin{figure}[t] \centering \begin{subfigure}[b]{0.29\textwidth} \centering \includegraphics[width=\textwidth]{spatial1} \caption{Non-rotated input and poses}\label{fig:Spatial-Aggregationa} \end{subfigure} ~ \hfill \begin{subfigure}[b]{0.29\textwidth} \centering \includegraphics[width=\textwidth]{spatial2} \caption{Rotated input, false matching}\label{fig:Spatial-Aggregationb} \end{subfigure} ~ \hfill \begin{subfigure}[b]{0.29\textwidth} \centering \includegraphics[width=\textwidth]{spatial3} \caption{Pose-aligned $t$-kernels}\label{fig:Spatial-Aggregationc} \end{subfigure} \caption{Example for the spatial aggregation of a $2\times2$ block of $SO(2)$ capsules. Figure (a) shows the behavior for non-rotated inputs. The resulting votes have full agreement, pointing to the top. 
Figure (b) shows the behavior when rotating the input by $\pi/2$, where we obtain a different element-wise matching of pose vectors $p(\cdot)$ and transformations $t(\cdot)$, depending on the input rotation. Figure (c) shows the behavior with the proposed kernel alignment. It can be seen that $p$ and $t$ match again and the result is the same full pose agreement as in (a) with equivariant mean pose, pointing to the left.}\label{fig:Spatial-Aggregation} \end{figure} \vspace{-0.1cm} \paragraph{Pose-aligning transformation kernels} As a solution, we propose to align the constant positions $\mathbf{x}_i$ based on the pose before using them as input for a trainable transformation generator $t(\cdot)$. We can compute $\bar{\mathbf{p}} = \mathcal{M}(\mathbf{p}_1,\ldots,\mathbf{p}_n, \mathbf{1})$, a mean pose vector for the current receptive field, given local pose vectors $\mathbf{p}_1,\ldots,\mathbf{p}_n$. The mean poses of transformed and non-transformed inputs differ by the transformation $\mathbf{g}$: $\bar{\mathbf{p}} = \mathbf{g} \circ \bar{\mathbf{q}}$. This follows from equivariance of $\mathcal{M}$, invariance of $\mathcal{M}$ under permutation, and from the equivariance property of previous layers, meaning that the rotation applied to the input directly translates to the pose vectors in deeper layers. Therefore, we can apply the inverse mean pose $\bar{\mathbf{p}}^{-1} = \bar{\mathbf{q}}^{-1} \circ \mathbf{g}^{-1}$ to the constant input positions $\mathbf{x}$ of $t$ and calculate the votes as \begin{equation} \begin{split} \mathbf{v}_i = \mathbf{g} \circ p(\mathbf{g}^{-1}(\mathbf{x}_i)) \circ t((\bar{\mathbf{q}}^{-1}\circ\mathbf{g}^{-1})(\mathbf{x}_i)) = \mathbf{g} \circ p(\hat{\mathbf{x}}_i) \circ t(\bar{\mathbf{q}}^{-1}(\hat{\mathbf{x}}_i)) \textrm{,} \end{split} \end{equation} as shown as an example in Figure~\ref{fig:Spatial-Aggregationc}. Using this construction, we use the induced representation as inputs for $p(\cdot)$ and $t(\cdot)$ equally, leading to a combination of $p(\cdot)$ and $t(\cdot)$ that is independent from $\mathbf{g}$. Note that $\bar{\mathbf{q}}^{-1} \in G$ is constant for all input transformations and therefore does not lead to further issues. In practice, we use a two-layer MLP to calculate $t(\cdot)$, which maps the normalized position to $n \cdot m$ transformations (for $n$ input capsules per position and $m$ output capsules). The proposed method can also be understood as pose-aligning a trainable, continuous kernel window, which generates transformations from $G$. It is similar to techniques applied for sparse data aggregation in irregular domains \citep{Gilmer:2017}. Since commutativity is not required, it also works for non-abelian groups (\emph{e.g.}} \def\Eg{\emph{E.g.}~$SO(3)$). As an additional benefit, we observed significantly faster convergence during training when using the MLP generator instead of directly optimizing the transformations $\mathbf{t}$. \vspace{-0.1cm} \section{Group capsules and group convolutions}\label{sec:groupconv} \vspace{-0.1cm} The newly won properties of pose vectors and activations allow us to combine our group equivariant capsule networks with methods from the field of group equivariant convolutional networks. We show that we can build sparse group convolutional networks that inherit invariance of activations under the group law from the capsule part of the network. Instead of using a regular discretization of the group, those networks evaluate the convolution for a fixed set of arbitrary group elements. 
The proposed method leads to improved theoretical efficiency for group convolutions, improves the qualitative performance of our capsule networks and is still able to provide disentangled information. In the following, we shortly introduce group convolutions before presenting the combined architecture. \paragraph{Group convolution} Group convolutions (G-convs) are a generalized convolution/correlation operator defined for elements of a group $(G,\circ)$ (here for Lie groups with underlying manifold): \begin{equation} \left[f \star \psi \right] (\mathbf{g}) = \int_{\mathbf{h}\in G} \sum^K_{k=1} f_k(\mathbf{h})\psi(\mathbf{g}^{-1}\mathbf{h}) \,\, d\mathbf{h} \textrm{,} \end{equation} for $K$ input feature signals, which behaves equivariant under applications of the group law $\circ$ \citep{Cohen:2016, Cohen:2018}. The authors showed that they can be used to build group equivariant convolutional neural networks that apply a stack of those layers to obtain an equivariant architecture. However, compared to capsule networks, they do not directly compute disentangled representations, which we aim to achieve through the combination with capsule networks. \subsection{Sparse group convolution} An intuition for the proposed method is to interpret our group capsule network as a sparse tree representation of a group equivariant network. The output feature map of a group convolution layer $\left[f \star \psi \right] (\mathbf{g})$ over group $G$ is defined for each element $\mathbf{g} \in G$. In contrast, the output of our group capsule layer is a set of tuples $(\mathbf{g},a)$ with group element $\mathbf{g}$ (pose vector) and activation $a$, which can be interpreted as a sparse index/value representation of the output of a G-conv layer. In this context, the pose $\mathbf{g}$, computed using routing by agreement from poses of layer $l$, serves as the hypothesis for the relevance of the feature map content of layer $l+1$ at position $\mathbf{g}$. We can now sparsely evaluate the feature map output of the group convolution and can use the agreement values from capsules to dampen or amplify the resulting feature map contents, bringing captured pose covariances into consideration. Figure~\ref{fig:Combination} shows a scheme of this idea. \begin{figure}[t] \centering \begin{subfigure}[b]{0.36\textwidth} \centering \includegraphics[width=\textwidth]{network} \caption{Sparse group convolution.} \label{fig:Combination} \end{subfigure} \hfill \begin{subfigure}[b]{0.62\textwidth} \centering \includegraphics[width=\textwidth]{receptive} \caption{Handling of local receptive fields with different poses.} \label{fig:Spatial_combination} \end{subfigure} \caption{(a) Scheme for the combination of capsules and group convolutions. Poses computed by dynamic routing are used to evaluate group convolutions. The output is weighted by the computed agreement. The invariance property of capsule activations is inherited to the output feature maps of the group convolutions. (b) Realization of the sparse group convolution. 
The local receptive fields are transformed using the calculated poses $L_p$ before being aggregated using a continuous kernel function $\psi$.} \end{figure} We show that when using the pose vector outputs to evaluate a G-conv layer for group element $\mathbf{g}$ we inherit the invariance property from the capsule activations, by proving the following theorem: \begin{theorem1}\label{theorem3} Let $L_p(\mathbf{P}, \mathbf{a})$ be the pose vector outputs of a group capsule layer for group $G$, $f:G \rightarrow \mathbb{R}$ an input signal, and $\psi:G \rightarrow \mathbb{R}$ a filter. Then, the group convolution $\left[f\star \psi\right]$ is invariant under joint left-applications of $\mathbf{g} \in G$ on capsule input pose vectors $\mathbf{P} \in G^n$ and signal $f$: \begin{equation} \left[(\mathbf{g}\circ f)\star \psi\right](L_p(\mathbf{g}\circ\mathbf{P}, \mathbf{a})) = \left[f\star \psi\right](L_p(\mathbf{P}, \mathbf{a})). \end{equation} \end{theorem1} \begin{proof} The invariance follows from Theorem~\ref{theorem1}, the definition of group law application on the feature map, and the group properties. The full proof is provided in the appendix. \end{proof} The result tells us that when we pair each capsule in the network with an operator that performs pose-normalized convolution on a feature map, we get activations that are invariant under transformations from $G$. We can go one step further: given a group convolution layer for a product group, we can use the capsule output poses as an index for one group and densely evaluate the convolution for the other, leading to equivariance in the dense dimension (follows from equivariance of group convolution) and invariance in the capsule-indexed dimension. This leads to our proof of concept application with two-dimensional rotation and translation. We provide further formal details and a proof in the appendix. Calculation of the convolutions can be performed by applying the inverse transformation to the local input using the capsule's pose vector, as shown in Figure~\ref{fig:Spatial_combination}. In practice, it can be achieved, \emph{e.g.}, by using the grid warping approach proposed by \citet{Henriques:2017} or by using spatial graph-based convolution operators, \emph{e.g.}~from~\cite{Fey:2018}. Further, we can use the iteratively computed weights from the routing algorithm to perform \emph{pooling by agreement} on the feature maps: instead of using max or average operators for spatial aggregation, the feature map content can be dynamically aggregated by weighting it with the routing weights before combining it. \section{Related work} Different ways to provide deep neural networks with specific equivariance properties have been introduced. One way is to share weights over differently rotated filters or augment the input heavily by transformations \citep{Zhou:2017, Weiler:2018}. A related but more general set of methods are the group convolutional networks \citep{Cohen:2016, Dielemann:2016} and their applications such as Spherical CNNs in $SO(3)$ \citep{Cohen:2018} and Steerable CNNs in $SO(2)$ \citep{Cohen:2017}, which both result in special convolution realizations. Capsule networks were introduced by \citet{Hinton:2011}. Lately, dynamic routing algorithms for capsule networks have been proposed \citep{Sabour:2017, Hinton:2018}. Our work builds upon their methods and vision for capsule networks, as well as connecting them to group equivariant networks.
Further methods include harmonic networks \citep{Worrall:2017}, which use circular harmonics as a basis for filter sets, and vector field networks \citep{Marcos:2017}. These methods focus on two-dimensional rotational equivariance. While we chose an experiment which is similar to their approaches, our work aims to build a more general framework for different groups and disentangled representations. \section{Experiments} \vspace{-0.1cm} We provide proof of concept experiments to verify and visualize the theoretic properties shown in the previous sections. As an instance of our framework, we chose an architecture for rotational equivariant classification on different MNIST datasets \citep{LeCun:1998}. \vspace{-0.1cm} \subsection{Implementation and training details} \vspace{-0.1cm} \paragraph{Initial pose extraction} An important subject which we did not tackle yet is the first pose extraction of a group capsule network. We need to extract pose vectors $\mathbf{p} \in G$ with activations $\mathbf{a}$ out of the raw input of the network without eliminating the equi- and invariance properties of Equations~\ref{eq:equivariance_req} and~\ref{eq:invariance_req}. Our solution for images is to simply compute local gradients using a Sobel operator and taking the length of the gradient as activation. For the case of a zero gradient, we need to ensure that capsules with only zero inputs also produce a zero agreement and an undefined pose vector. \vspace{-0.2cm} \paragraph{Convolution operator} As convolution implementation we chose the spline-based convolution operator proposed by \citet{Fey:2018}. Although the discrete two- or three-dimensional convolution operator is also applicable, this variant allows us to omit the resampling of grids after applying group transformations on the signal $f$. The reason for this is the continuous definition range of the B-spline kernel functions. Due to the representation of images as grid graphs, these kernels allow us to easily transform local neighborhoods by transforming the relative positions given on the edges. \vspace{-0.2cm} \paragraph{Dynamic routing} In contrast to the method from \citet{Sabour:2017}, we do not use softmax over the output capsule dimension but the sigmoid function for each weight individually. The sigmoid function makes it possible for the network to route information to more than one output capsule as well as to no output capsule at all. Further, we use two iterations of computing pose proposals. \vspace{-0.2cm} \paragraph{Architecture and parameters} Our canonical architecture consists of five capsule layers where each layer aggregates capsules from $2\times2$ spatial blocks with stride $2$. The learned transformations are shared over the spatial positions. We use the routing procedure described in Section~\ref{sec:capsules} and the spatial aggregation method described in Section~\ref{sec:pooling}. We also pair each capsule with a pose-indexed convolution as described in Section~\ref{sec:groupconv} with ReLU non-linearities after each layer, leading to a CNN architecture that is guided by pose vectors to become a sparse group CNN. The numbers of output capsules are $16$, $32$, $32$, $64$, and $10$ per spatial position for each of the five capsule layers, respectively. In total, the architecture contains 235k trainable parameters (145k for the capsules and 90k for the CNN). 
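For orientation, the capsule stack can be summarized by the following schematic configuration (illustrative only; the field names are ours and do not correspond to our implementation):
\begin{verbatim}
# Five capsule layers; each aggregates 2x2 spatial blocks with stride 2 and is
# paired with a pose-indexed convolution followed by a ReLU non-linearity.
capsule_stack = [
    dict(out_capsules=16, block=(2, 2), stride=2),
    dict(out_capsules=32, block=(2, 2), stride=2),
    dict(out_capsules=32, block=(2, 2), stride=2),
    dict(out_capsules=64, block=(2, 2), stride=2),
    dict(out_capsules=10, block=(2, 2), stride=2),
]
\end{verbatim}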
The architecture results in two sets of classification outputs: the agreement values of the last capsule layer as well as the softmax outputs from the convolutional part. We use the spread loss as proposed by \citet{Hinton:2018} for the capsule part and standard cross entropy loss for the convolutional part and add them up. We trained our models for $45$ epochs. For further details, we refer to our implementation, which is available on Github\footnote{Implementation at: \url{https://github.com/mrjel/group_equivariant_capsules_pytorch}}. \subsection{Results} \vspace{-0.1cm} \paragraph{Equivariance properties and accuracy} We confirm equivariance and invariance properties of our architecture by training our network on non-rotated MNIST images and test it on images, which are randomly rotated by multiples of $\pi/2$. We can confirm that we achieve exactly the same accuracies, as if we evaluate on the non-rotated test set, which is $99.02\%$. We also obtain the same output activations and equivariant pose vectors with occasional small numerical errors $<0.0001$, which confirms equi- and invariance. This is true for capsule and convolutional outputs. When we consider arbitrary rotations for testing, the accuracy of a network trained on non-rotated images is $89.12 \%$, which is a decent generalization result, compared to standard CNNs. \begin{table} \begin{subtable}[t]{0.48\textwidth} \begin{tabular}{lccc} \toprule & MNIST & AffNist & MNIST \\ & rot. (50k) & & rot. (10k) \\ \midrule CNN(*) & 92.30\% & 81.64\% & 90.19\% \\ Capsules & 94.68\% & 71.86\% & 91.87\% \\ \textbf{Whole} & \textbf{98.42\%} & \textbf{89.10\%} & \textbf{97.40\%} \\ \bottomrule \end{tabular} \caption{Ablation experiment results} \label{tab:table1_a} \end{subtable} \hspace{\fill} \begin{subtable}[t]{0.47\textwidth} \begin{tabular}{lc} \toprule & Average pose \\ & error [degree] \\ \midrule Naive average poses & 70.92 \\ Capsules w/o recon. loss & 28.32 \\ \textbf{Capsules with recon. loss} & \textbf{16.21} \\ \bottomrule \end{tabular} \caption{Avg. pose errors for different configurations} \label{tab:table1_b} \end{subtable} \caption{(a) Ablation experiments for the individual parts of our architecture including the CNN without induced pose vectors, the equivariant capsule network and the combined architecture. All MNIST experiments are conducted using randomly rotated training and testing data. (b) Average pose extraction error for three scenarios: simple averaging of initial pose vectors as baseline, our capsule architecture without reconstruction loss, and the same model with reconstruction loss.} \end{table} For fully randomly rotated training and test sets we performed an ablation study using three datasets. Those include standard MNIST dataset with $50$k training examples and the dedicated MNIST-rot dataset with the $10$k/$50$k train/test split \citep{Larochelle:2007}. In addition, we replicated the experiment of \citet{Sabour:2017} on the affNIST dataset\footnote{affNIST: \url{http://www.cs.toronto.edu/~tijmen/affNIST/}}, a modification of MNIST where small, random affine transformations are applied to the images. We trained on padded and translated (not rotated) MNIST and tested on affNIST. All results are shown in Table~\ref{tab:table1_a}. We chose our CNN architecture without information from the capsule part as our baseline (*). Without the induced poses, the network is equivalent to a traditional CNN, similar to the grid experiment presented by \cite{Fey:2018}. 
When trained on a non-rotated MNIST, it achieves 99.13\% test accuracy and generalizes weakly to a rotated test set with only 58.79\% test accuracy. For training on rotated data, results are summarized in the table. The results show that combining capsules with convolutions significantly outperforms both parts alone. The pose vectors provided by the capsule network guide the CNN, which significantly boosts the CNN for rotation invariant classification. We do \emph{not} reach the state-of-the-art of 99.29\% in rotated MNIST classification obtained by \cite{Weiler:2018}. In the affNIST experiment we surpass the result of 79\% from \citet{Sabour:2017} with much less parameters (235k vs. 6.8M) by a large margin. \vspace{-0.2cm} \paragraph{Representations} We provide a quantitative and a qualitative analysis of generated representations of our MNIST trained model in Table~\ref{tab:table1_b} and Figure~\ref{fig:results}, respectively. We measured the average pose error by rotating each MNIST test example by a random angle and calculated the distance between the predicted and expected poses. The results of our capsule networks with and without a reconstruction loss (\emph{cf.}} \def\Cf{\emph{Cf.}~next paragraph) are compared to the naive approach of hierarchically averaging local pose vectors. The capsule poses are far more accurate, since they do not depend equally on all local poses but mostly on those which can be explained by the existence of the detected object. It should be noted that the pose extraction was not directly supervised---the networks were trained using discriminative class annotations (and reconstruction loss) only. Similar to \cite{Sabour:2017}, we observe that using an additional reconstruction loss improves the extracted representations. In Figure~\ref{fig:out_poses} we show output poses for eleven random test samples, each rotated in $\pi/4$ steps. It can be seen that equivariant output poses are produced in most cases. The bottom row shows an error case, where an ambiguous pattern creates false poses. We provide a more detailed analysis for different MNIST classes in the appendix. Figure~\ref{fig:deep_poses} shows poses after the first (top) and the second (bottom) capsule layer. \begin{figure}[t] \begin{minipage}[l]{0.45\columnwidth} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{grid.png} \caption{Output pose vectors for rotated inputs}\label{fig:out_poses} \end{subfigure} \end{minipage} \hfill{} \begin{minipage}[r]{0.45\columnwidth} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{grid_pose.png}\\ \caption{Poses after first and second capsule layer}\label{fig:deep_poses} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{grid_recon.png} \caption{Reconstruction with transformed poses}\label{fig:recon_poses} \end{subfigure} \end{minipage} \caption{Visualization of output poses (a), internal poses (b), and reconstructions (c). (a) It can be seen that the network produces equivariant output pose vectors. The bottom row shows a rare error case, where symmetries lead to false poses. (b) Internal poses behave nearly equivariant, we can see differences due to changing discretization and image resampling. (c) The original test sample is on the left. Then, reconstructions after rotating the representation pose vector are shown. 
For the reconstruction, we selected visually correct reconstructed samples, which was not always the case.}\label{fig:results} \end{figure} \vspace{-0.1cm} \paragraph{Reconstruction} For further verification of disentanglement, we also replicated the autoencoder experiment of \citet{Sabour:2017} by appending a three-layer MLP to convolution outputs, agreement outputs, and poses and train it to reconstruct the input image. Example reconstructions can be seen in Figure~\ref{fig:recon_poses}. To verify the disentanglement of rotation, we provide reconstructions of the images after we applied $\pi/4$ rotations to the output pose vectors. It can be seen that we have fine-grained control over the orientation of the resulting image. However, not all representations were reconstructed correctly. We chose visually correct ones for display. \vspace{-0.2cm} \section{Limitations} \vspace{-0.3cm} Limitations of our method arise from the restriction of capsule poses to be elements of a group for which we have proper $\mathcal{M}$ and $\delta$. Therefore, in contrast to the original capsule networks, arbitrary pose vectors can no longer be extracted. Through product groups though, it is possible to combine several groups and achieve more general pose vectors with internally disentangled information if we can find $\mathcal{M}$ and $\delta$ for this group. For Lie groups, an implementation of an equivariant Karcher mean would be a sufficient operator for $\mathcal{M}$. It is defined as the point on the manifold that minimizes the sum of all weighted geodesic distances \citep{Nielsen:2012}. However, for each group there is a different number of possible realizations from which only few are applicable in a deep neural network architecture. Finding appropriate candidates and evaluating them is part of our future work. \vspace{-0.2cm} \section{Conclusion} \vspace{-0.3cm} We proposed group equivariant capsule networks that provide provable equivariance and invariance properties. They include a scheme for routing by agreement algorithms, a spatial aggregation method, and the ability to integrate group convolutions. We proved the relevant properties and confirmed them through proof of concept experiments while showing that our architecture provides disentangled pose vectors. In addition, we provided an example of how sparse group equivariant CNNs can be constructed using guiding poses. Future work will include applying the proposed framework to other, higher-dimensional groups, to come closer to the expressiveness of original capsule networks while preserving the guarantees. \subsubsection*{Acknowledgments} \vspace{-0.2cm} Part of the work on this paper has been supported by Deutsche Forschungsgemeinschaft (DFG) within the Collaborative Research Center SFB~876 \emph{Providing Information by Resource-Constrained Analysis}, projects B2 and A6. \small \bibliographystyle{plainnat}
\section{Introduction} \label{sec:intro} The effect of localized vibrations (phonons) in the electronic transport properties of nanoscale devices is attracting increasing attention (for a review see \cite{review}). Such effects have been identified in different systems like atomic contacts and atomic chains \cite{agrait}, semiconducting quantum dots \cite{weig}, carbon nanotubes \cite{CNT} and other molecular junctions \cite{park,smit,zhitenev}. Other systems in which a strong electron-phonon coupling can lead to polaronic effects dominating the electronic transport are organic semiconductors \cite{ortmann,ortmann2}. In spite of this variety, from the theoretical point of view all these situations can be qualitatively described by the rather simple Anderson-Holstein model. This model consists of a single resonant electronic level coupled to fermionic leads and to a localized phonon mode \cite{holstein}. Even in the more simple spinless case, this model corresponds to a non-trivial strongly correlated system in an out of equilibrium situation. This model can be regarded as ``paradigmatic" of an electronic system interacting with bosonic excitations. For instance, the same model was proposed by Langreth to describe the problem of photoemission through core-holes in metals \cite{langreth}. This model has been extensively analyzed by different theoretical approaches \cite{glazman,wingreen,flensberg,mitra} but there is still no exact solution available except for some limiting cases. Comparison with numerically exact methods like numerical renormalization group or quantum Monte Carlo is possible only for certain range of parameters both for equilibrium \cite{hewson,jeon,cornaglia,liliana} and more recently for nonequilibrium situations \cite{rabani,ferdinand1,anders,iterativepi}. Within this model one can distinguish between two different regimes depending on the strength of the electron-phonon coupling. For sufficiently weak coupling a lowest order perturbation theory is applicable \cite{viljas,egger,entin}. This situation is suitable to describe the case of atomic contacts and atomic chains \cite{frederiksen,laura}. As the electron-phonon coupling increases higher order diagrams, including vertex corrections, become of importance as discussed in Ref. \cite{ness2}. In the opposite regime, the so-called polaronic regime, perturbation theory breaks down and other type of approaches are necessary \cite{galperin,braig,vonoppen,alvaro,dong}. The analysis of the transport properties of this model has been more recently extended to the case of noise, and, more generally, to its full counting statistics (FCS). These studies have been mainly restricted to the perturbative regime \cite{remi,schmidt,haupt,urban}. Although there exist some studies of the noise properties in the polaronic regime \cite{galperin2,paperPTA} it is desirable to develop simple methods to analyze the crossover from the perturbative to the polaronic case. Two simple approximations have been proposed to describe the polaronic regime: the so-called single particle approximation (SPA) and the polaron tunneling approximation (PTA). Both approaches correspond to simple decoupling schemes which allow an analytical evaluation of the electronic Green functions. This simplicity has allowed for instance to extend PTA to analyze the transient behavior of this model yielding results in a remarkable good agreement with numerically exact ones \cite{Ferdinand}. 
In spite of their several advantages, both approximations exhibit some pathological features. This is particularly noticeable in their spectral properties at low frequencies (SPA) and high frequencies (PTA). Although there exist other methods to describe this polaronic regime based either on the equation of motion technique \cite{galperin,carmina1} or other diagrammatic techniques \cite{carmina2,zazunov,konig,aligia}, these methods require a more involved numerical evaluation and are therefore not easy to extend to more complex situations like the calculation of the FCS or the analysis of the transient behavior in the non-stationary case. The aim of the present work is to develop a simple method for describing the crossover region from the polaronic to the perturbative regime. Ideally, this method should recover the good features of SPA and PTA discussed above while eliminating their pathologies. By analyzing the exact perturbation series with respect to the tunneling to the leads we identify a family of diagrams which gives the dominant contribution in the polaronic regime and which can be summed up exactly. We will denote this approach as dressed tunneling approximation (DTA) \cite{master-thesis} as it corresponds to {\it dressing} the leads self-energy with the polaronic cloud. In spite of being derived for describing the polaronic regime we show that this approximation gives a reasonable description of the crossover region while exhibiting an increasing deviation from the perturbative results in the corresponding limit. The manuscript is organized as follows: in Sect. \ref{sec:theory} we introduce the model Hamiltonian and the basic Green functions formalism which allows us to calculate the different electronic and transport properties. In Sect. \ref{sec:pert} we analyze the diagrammatic expansion of the relevant Green functions in the polaronic limit and briefly introduce the known simple approximations like PTA and SPA. Sect. \ref{sec:DTA} is devoted to introducing the DTA, discussing the arguments for its derivation and giving the main expressions for the system Green functions. The corresponding results are described in Sect. \ref{sec:res} where the spectral densities are compared with other approaches. In this section we also analyze the DTA results for the transport properties like the current, the differential conductance and the noise. Finally, we summarize the main results of this work in Sect. \ref{sec:conclusion}. \section{Model and basic theoretical formulation} \label{sec:theory} We consider the simplest spinless Anderson-Holstein model in which a single electronic level is coupled to a localized vibrational mode. Electrons can tunnel from this resonant level into a left (L) and a right electrode (R). We shall generically refer to this central region, which can represent either a molecule, an atomic chain or a quantum dot, as the ``dot" region. The corresponding Hamiltonian is given by $H=H_{leads}+H_{dot}+H_T$, with (in natural units, $\hbar=k_B=e=m_e=1$) \begin{equation} H_{dot}=\left[\epsilon_0+\lambda\left(a^\dagger+a\right)\right]d^\dagger d+\omega_0\;a^\dagger a \;, \label{Hv} \end{equation} where $\epsilon_0$ is the bare electronic level, $\lambda$ is the electron-phonon coupling constant and $\omega_0$ is the frequency of the localized vibration. The electron (phonon) creation operator in the dot is denoted by $d^{\dagger}$ ($a^{\dagger}$). 
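As a quick numerical aside (not part of the original formulation, and with arbitrarily chosen parameter values), Eq. (\ref{Hv}) can be diagonalized in the singly occupied sector using a truncated phonon basis; the ground-state energy then approaches the polaron-shifted value $\epsilon_0-\lambda^2/\omega_0$, which appears naturally after the Lang-Firsov transformation discussed below. A minimal Python sketch, assuming only \texttt{numpy}, reads
\begin{verbatim}
import numpy as np

# Minimal sketch: exact diagonalization of H_dot, Eq. (Hv), in the singly
# occupied sector (d^dagger d = 1) with a truncated phonon Fock space.
# Parameter values are arbitrary illustration choices.
eps0, lam, omega0 = 0.5, 1.0, 1.0   # bare level, e-ph coupling, phonon frequency
nph = 40                            # phonon cutoff; increase to converge

a = np.diag(np.sqrt(np.arange(1, nph)), k=1)   # phonon annihilation operator
adag = a.T

# With one electron on the dot: H = eps0 + lam*(a^dagger + a) + omega0*a^dagger a
H = eps0 * np.eye(nph) + lam * (adag + a) + omega0 * (adag @ a)
E0 = np.linalg.eigvalsh(H)[0]

print("ground-state energy (numerics):", E0)                  # ~ -0.5 here
print("polaron-shifted level         :", eps0 - lam**2 / omega0)
\end{verbatim}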
On the other hand, $H_{leads}=\sum_{j k}\epsilon_{j k} c_{j k}^\dagger c_{j k}$ corresponds to the non-interacting leads Hamiltonian ($j\equiv L,R$) where $\epsilon_{j k}$ are the leads electron energies and $c^{\dagger}_{j k}$ are the corresponding creation operators. The bias voltage applied to the junction is imposed by shifting the chemical potentials of the electrodes, $V=\mu_L-\mu_R$.\newline \hspace*{5mm}The tunneling processes are described by \begin{equation} H_T=\sum_{k} \left(t_{L k} \;c_{L k}^{\dagger}\;d+t_{R k} \;c_{R k}^{\dagger}\;d+\mbox{h.c.}\right) \; , \end{equation} where $t_{j k}$ are the tunneling amplitudes. To address the polaronic regime it is convenient to perform the so-called Lang-Firsov unitary transformation \cite{LangFirsov} which allows one to eliminate the linear term in the electron-phonon coupling \cite{Mahan} \begin{equation} \tilde{H}=S H S^\dagger , \quad S=e^{g d^\dagger d (a^\dagger - a)} ,\quad g=\frac{\lambda}{\omega_0} \;. \end{equation} Using this transformation \begin{equation} \tilde{H}_{dot} =\tilde{\epsilon}\;d^\dagger\;d \; + \; \omega_0 a^\dagger a \;, \end{equation} where $\tilde{\epsilon}=\epsilon_0-\lambda^2/\omega_0$. The tunneling Hamiltonian is transformed as \begin{equation} \tilde{H}_T=\sum_{k} \left(t_{L k}\;c_{L k}^{\dagger}\;X d+t_{R k} \;c_{R k}^{\dagger}\;X d+\mbox{h.c.}\right) \; , \end{equation} where $X = \exp{\left[g (a - a^{\dagger})\right]}$ is the phonon cloud operator. On the other hand, the free leads Hamiltonian remains invariant. For later use it is useful to introduce the tunneling rates $\Gamma_{j} = \mbox{Im} \sum_k |t_{j k}|^2/(\omega - i0^+ - \epsilon_{j k})$ which are approximated by constants in the so-called wide band approximation. To deal with the transport properties of this model it is convenient to use the Keldysh nonequilibrium formalism \cite{Keldysh}. The basic quantities required to calculate the electronic and transport properties are the dot Green functions \begin{equation} G^{\alpha\beta}(t,t')=-i\left\langle T_{\cal{C}}\{X(t) d(t) X^{\dagger}(t') d^\dagger(t')\}\right\rangle\;, \label{G0} \end{equation} where $T_{\cal{C}}$ is the time ordering operator in the Keldysh contour and $\alpha, \beta \equiv +,-$ denote the different branches of the contour. A slight modification in the Keldysh formulation allows one to address directly the noise properties of the system and more generally its FCS \cite{FCS,FCS-2}. This is achieved by introducing a ``counting-field" $\nu$, which changes sign on the two branches of the contour and which enters as a phase factor modulating the tunnel Hamiltonian. As the current is conserved in our two-terminal device one can choose to introduce the counting field in either the left or the right tunneling term. For definiteness we choose to include it in the left term and accordingly we define \begin{equation} \tilde{H}^{\nu}_T =\sum_{k} \left(e^{i\nu/2} t_{L k}\;c_{L k}^{\dagger}\;X d+t_{R k} \;c_{R k}^{\dagger}\;X d+\mbox{h.c.}\right) \;, \end{equation} and the Keldysh Green functions (GFs) in the presence of the counting field are \begin{widetext} \begin{equation} G^{\alpha\beta(\nu)}(t,t')=-i\left\langle T_{\cal{C}} \{X(t) d(t) X^{\dagger}(t') d^\dagger(t') e^{-i\int_{\cal{C}} \tilde{H}^{\nu}_T (\tau) d\tau} \} \right\rangle_0\;, \label{G0nu} \end{equation} \end{widetext} where the subscript $0$ indicates averaging over the states of the uncoupled Hamiltonians $\tilde{H}_{dot}$ and $H_{leads}$. 
More generally, the FCS can be obtained from a Cumulant Generating Function (CGF), \begin{equation} \chi(\nu)=\left\langle T_{\cal{C}} \;e^{-i\int_c dt \tilde{H}^{\nu}_T (t)}\right\rangle_0 \; . \end{equation} Formally $\chi(\nu)$ can be expanded as $\chi(\nu)=\sum_q e^{iq\nu}P_q$, where $P_q$ is the probability of a charge $q$ being transferred through the system. The cumulants can thus be computed using \begin{equation} \left\langle\delta^n \;q\right\rangle=\left.(-i)^n\;\frac{\partial^n}{\partial\;\nu^n}\ln\chi(\nu)\right|_{\nu=0}. \label{cumulants} \end{equation} In a stationary situation the first cumulant ($n=1$) in (\ref{cumulants}) corresponds to the mean current \begin{eqnarray} I_L &=& \int \frac{d\omega}{2\pi} \sum_k \left[ t_{Lk} g^{+-}_{L k}(\omega)G^{-+}(\omega) \right. \nonumber\\ && - \left. t^*_{Lk} g^{-+}_{L k}(\omega)G^{+-}(\omega) \right] , \label{generalI} \end{eqnarray} where $g^{\alpha\beta}_{j k}$ are the isolated leads GFs. A symmetrized expression of the current involving only the dot spectral density can be deduced using current conservation, leading to \cite{Meir} \begin{equation} I=\frac{8\Gamma_L\Gamma_R}{\Gamma} \int d\omega \left[f_L(\omega)-f_R(\omega)\right] A(\omega)\; , \label{ispectral} \end{equation} where $\Gamma=\Gamma_L+\Gamma_R$ and the spectral function is $A(\omega)=-\mbox{Im}(G^{R}(\omega))/\pi$, $G^R$ being the retarded dot Green function. On the other hand, the second cumulant corresponds to the current noise and can be written as \begin{eqnarray} S_L &=& i\int\frac{d\omega}{2\pi} \sum_k \frac{\partial}{\partial\nu}\left[t_{L k} G^{-+(\nu)}(\omega) g_{L k}^{+-}(\omega)\;e^{i\nu} \right. \nonumber\\ && \left. \left.-t^*_{L k} G^{+-(\nu)}(\omega) g_{L k}^{-+}(\omega)\;e^{-i\nu}\right]\right|_{\nu=0} . \label{generalnoise} \end{eqnarray} Symmetrized expressions for the noise in different approximations are given in Appendix \ref{appendix}. It is also convenient to define the unperturbed polaron correlator in Keldysh space \begin{equation} \Lambda^{\alpha\beta}(t,t')=\left\langle T_C e^{g\left(a(t)-a^\dagger(t)\right)}e^{-g\left(a(t')-a^\dagger(t')\right)}\right\rangle_0 . \end{equation} \section{Diagrammatic expansions in the polaronic regime} \label{sec:pert} An exact solution to the problem of determining the GFs entering the calculations of the various transport properties is still unknown. In the polaronic regime a perturbative expansion in the hopping to the leads would be appropriate. The lowest order diagrams in this expansion are shown in Fig. \ref{total_pert}. Therefore, the natural starting point for studying the system in this regime is provided by the so-called atomic limit \cite{more-atomic}. This is defined as the limit when the tunneling rates between the dot and the leads tend to zero. The Green functions in this limit can be calculated exactly and correspond to the zero order term in the expansion depicted in Fig. \ref{total_pert}, and thus their Keldysh components can be computed as ${G^{(0)\alpha\beta}(\omega)=G_{0}^{\alpha\beta}(\omega)\otimes\Lambda^{\alpha\beta}(\omega)}$, where $G_0^{\alpha\beta}$ are the bare dot GFs and $\otimes$ represents the convolution product. 
In frequency domain its retarded component has the form \begin{equation} G^{(0) R}(\omega)=\sum_{k=-\infty}^{\infty}\frac{\alpha_k n_0 +\alpha_{-k} (1-n_0)}{\omega-\tilde{\epsilon}+k\;\omega_0+i\eta} \; , \label{zerorder} \end{equation} where $n_0$ represents the average occupation number of the level of the dot, $\eta$ is an infinitesimal and $\alpha_k$ is a coefficient that, at finite temperature can be written as \begin{equation} \alpha_k=e^{-g^2(2\;n_p+1)}\;I_k\left(2g^2\sqrt{n_p(1+n_p)}\right)\;e^{k\beta\omega_0/2} , \label{alpha} \end{equation} $I_k$ being the modified Bessel function of the first kind which is symmetric in the $k$ argument ($I_k=I_{-k}$) and $n_p$ is the Bose factor $1/(e^{\beta\omega_0} - 1)$ with $\beta = 1/T$. At zero temperature this coefficient can be simplified as \begin{equation} \alpha_k =\left\{\begin{array}{rrr} e^{-g^2}\frac{g^{2k}}{k!}&\mbox{if }k\geq0\\ 0&\mbox{if }k<0 \end{array}\right. \label{alpha2} \end{equation} \begin{figure} \begin{center} \begin{minipage}{1\linewidth} \includegraphics[width=1\textwidth]{figure1} \end{minipage} \end{center} \caption{(Color online) Lowest order Feynman diagrams of the perturbative expansion in the tunneling Hamiltonian. The solid lines represents the free dot GF ($G_0$), dashed lines represent the free lead GF ($g_k$, $k=L,R$), the wavy lines the polaron correlator ($\Lambda$) and the crosses the hopping events.} \label{total_pert} \end{figure} There exist in the literature two simple ways to include the effects of finite tunneling to the leads starting from the atomic limit. These are the so called Polaron Tunneling Approximation (PTA) and the Single Particle Approximation (SPA), which are briefly described below. Within PTA the phonons are assumed to be excited and deexcited instantaneously when the electrons tunnel from the leads to the dot \cite{paperPTA}. Diagrammatically, this approximation corresponds to summing up the series depicted in Fig. \ref{PTA} (a), i.e. can be expressed in a Dyson-like equation in Keldysh space (for simplicity we concentrate in the $\nu=0$ case for the discussion within this and the next section) \begin{equation} \textbf{G}_{PTA}=\textbf{G}^{(0)} + \textbf{G}^{(0)} {\bf \Sigma}_0 \textbf{G}_{PTA}, \label{Dyson} \end{equation} where the self-energy ${\bf \Sigma}_0={\bf \Sigma}_{0L}+{\bf \Sigma}_{0R}$, with \begin{equation} {\bf \Sigma}_{0j}(\omega) = i \Gamma_j \left(\begin{array}{cc} 2f_j(\omega)-1 & -2 f_j(\omega) \nonumber\\ -2 (f_j(\omega) - 1) & 2f_j(\omega)-1 \end{array} \right). \end{equation} \begin{figure} \begin{minipage}{1\linewidth} \includegraphics[width=1\textwidth]{figure2} \end{minipage} \caption{(Color online) Feynman diagrams for (a) PTA (b) DSPA. In both cases, the zero order corresponds to the atomic limit GF (\ref{zerorder}).} \label{PTA} \end{figure} This approximation provides a good description of the spectral properties at low energies and for situations close to half-filling \cite{Ferdinand}. In particular it satisfies the Friedel sum rule (FSR) connecting the spectral density at zero energy with the dot charge, which implies that for the symmetric case ($\tilde{\epsilon} = 0$) $A_{PTA}(0) = 1/\pi\Gamma$ \cite{alvaro}. However, the spectral density at higher energies is somewhat pathological as it exhibits phonon side band peaks of vanishing width but with a constant height. On the other hand, as has been shown in Ref. \cite{Ferdinand} this approximation provides a good description of the non-stationary evolution of the model at short time scales. 
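A useful consistency check on Eqs. (\ref{alpha}) and (\ref{alpha2}) is the sum rule $\sum_k \alpha_k = 1$, which guarantees that the atomic-limit spectral function of Eq. (\ref{zerorder}) carries unit total weight distributed over the phonon side bands. The short Python sketch below (our own illustration, with arbitrary parameter values; the modified Bessel function is taken from \texttt{scipy.special.iv}) evaluates both the finite- and zero-temperature expressions and verifies this normalization numerically:
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import iv             # modified Bessel function I_k

def alpha_T(k, g, beta, omega0):
    """Finite-temperature side-band weight, Eq. (alpha)."""
    n_p = 1.0 / (np.exp(beta * omega0) - 1.0)            # Bose factor
    z = 2.0 * g**2 * np.sqrt(n_p * (1.0 + n_p))
    return np.exp(-g**2 * (2*n_p + 1)) * iv(k, z) * np.exp(k * beta * omega0 / 2)

def alpha_0(k, g):
    """Zero-temperature limit, Eq. (alpha2): Poissonian weights for k >= 0."""
    return np.exp(-g**2) * g**(2*k) / factorial(k) if k >= 0 else 0.0

g, omega0, beta = 1.0, 1.0, 5.0          # arbitrary example parameters
ks = range(-40, 41)                       # truncation of the side-band index

print("sum_k alpha_k (finite T):", sum(alpha_T(k, g, beta, omega0) for k in ks))
print("sum_k alpha_k (T = 0)   :", sum(alpha_0(k, g) for k in ks))
# both sums equal 1 up to the truncation error in |k| <= 40
\end{verbatim}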
The SPA provides another simple picture of the polaronic regime. The general idea of this approximation is to directly decouple the electronic and the polaron degrees of freedom in the dot GFs. In the simplest form of this approximation \cite{flensberg} the retarded GF is given by \begin{equation} G^{R}_{SPA}(\omega) = \sum_{k=-\infty}^{\infty} \frac{\alpha_k n_0 + \alpha_{-k} (1 - n_0)}{\omega - \tilde{\epsilon} + k \omega_0 + i \Gamma} \;. \label{SPA} \end{equation} This expression is formally equivalent to broaden the poles of the atomic GF of Eq. (\ref{zerorder}) with the bare (frequency independent) tunneling rates to the leads. From this expression it is clear that this approximation does not satisfy the expected behavior at low frequencies as it does not fulfill the FSR nor reproduce the polaronic narrowing of the resonances close to the Fermi level. However, it does not exhibit the pathological behavior of the side band peaks at higher energies characteristic of PTA and recovers the exact results in the limit of a fully occupied or fully empty dot \cite{SPA-exact,glazman}. In a more transparent diagrammatic way the SPA decoupling scheme corresponds to the Feynman diagrams showed in Fig. \ref{PTA} (b) in which the bare dot Green function is dressed with the tunneling self-energy up to infinite order and then convoluted with the polaron correlator $\Lambda$ \cite{zazunov}. The resulting GFs are, however, not completely equivalent to the ansatz of Eq. (\ref{SPA}). The Keldysh components within this diagrammatic SPA (DSPA) are given by \begin{eqnarray} G_{DSPA}^{+-}(\omega)=-\sum^{\infty}_{k=-\infty}\alpha_k \frac{\Sigma^{+-}_0(\omega+k\;\omega_0)}{\textfrak{D}(\omega+k\;\omega_0)}\nonumber\\ G_{DSPA}^{-+}(\omega)=-\sum^{\infty}_{k=-\infty}\alpha_{-k} \frac{\Sigma^{-+}_0(\omega+k\;\omega_0)}{\textfrak{D}(\omega+k\;\omega_0)} \; , \end{eqnarray} where $\textfrak{D}(\omega) =(\omega-\tilde{\epsilon})^2 + \Gamma^2$. \section{Dressed Tunneling Approximation} \label{sec:DTA} It is thus desirable to develop a simple approximation that would exhibit the properties of the PTA at low energies and of SPA at high energies. For this purpose let us analyze the full diagrammatic expansion illustrated in Fig. \ref{total_pert}, taking as an example the second order diagram represented again in Fig. \ref{MSPA} (a). In the evaluation of this diagram there appear products of polaron correlators of the type $\Lambda(t,t_1) \Lambda^{-1}(t,t_2)$, where the time arguments $t_1$ and $t_2$ correspond to the exit and entrance of the electrons from the dot to the leads. As in the limit of strong electron-phonon coupling the lifetime of the electronic states in the dot is much larger than the one in the electrodes, it is then reasonable to make the approximation $\Lambda(t,t_1) \Lambda^{-1}(t,t_2) \sim 1$ (see Fig.\ref{MSPA} (a)). This can be more rigorously justified from the fact that in the wide band approximation the retarded leads self-energies are localized in time representation, i.e. $\Sigma^R_{0j}(t,t') \propto \theta(t-t')\delta(t-t')$. With this prescription the diagrammatic expansion reduces to the one illustrated in Fig. \ref{MSPA} (b) which can be evaluated exactly. One should notice that the cancellation of the ``crossing" polaron lines in the diagrammatic expansion implies that vertex corrections can be neglected in this limit. An approximate evaluation of these corrections in the $\lambda/\omega_0 <1$ regime was undertaken in Ref. \cite{zazunov}. 
As can be observed, the resulting approximation is equivalent to {\it dressing} the leads self-energies within DSPA with the polaron correlators, i.e. ${\tilde{\Sigma}^{\alpha\beta}(\omega)= \Sigma^{\alpha\beta}_0(\omega)\otimes\Lambda^{\alpha\beta}(\omega)}$. In this way, the self-energy components can be written as \begin{eqnarray} \tilde{\Sigma}^{+-}_{L,R}(\omega)=\sum^{\infty}_{k=-\infty}\alpha_k \Sigma^{+-}_{0\;L,R}(\omega+k\;\omega_0)\nonumber\\ \tilde{\Sigma}^{-+}_{L,R}(\omega)=\sum^{\infty}_{k=-\infty}\alpha_{-k} \Sigma^{-+}_{0\;L,R}(\omega+k\;\omega_0) \;. \end{eqnarray} Within the wide-band approximation these self-energy components are purely imaginary quantities. The resulting GFs can then be straight-forwardly evaluated as \begin{figure} \begin{minipage}{1\linewidth} \includegraphics[width=1\textwidth]{figure3} \end{minipage} \caption{(Color online) Feynman diagrams for the DTA approximation. Panel a) indicates the simplifying approximation on the second order diagram and panel b) represents the diagrammatic series which is included within DTA. } \label{MSPA} \end{figure} \begin{eqnarray} G_{DTA}^{+-}(\omega)=-\sum^{\infty}_{k=-\infty}\alpha_k \frac{\tilde{\Sigma}^{+-}(\omega+k\;\omega_0)}{\tilde{\textfrak{D}}(\omega+k\;\omega_0)}\nonumber\\ G_{DTA}^{-+}(\omega)=-\sum^{\infty}_{k=-\infty}\alpha_{-k} \frac{\tilde{\Sigma}^{-+}(\omega+k\;\omega_0)}{\tilde{\textfrak{D}}(\omega+k\;\omega_0)} \label{GF_DTA} \end{eqnarray} where $\tilde{\Sigma}^{\alpha\beta} = \tilde{\Sigma}^{\alpha\beta}_L + \tilde{\Sigma}^{\alpha\beta}_R$, and \begin{eqnarray} \tilde{\textfrak{D}}(\omega)=\left|\omega-\tilde{\epsilon}-\tilde{\Sigma}^{R}(\omega)\right|^2 , \end{eqnarray} where \begin{eqnarray} \tilde{\Sigma}^R(\omega)=\sum^{\infty}_{\substack{k=-\infty\\ j=L,R}}\frac{i\;\alpha_k}{2\pi}\int d\omega'\left[\frac{\Sigma_{0j}^{+-}(\omega')}{\omega+k\;\omega_0-\omega'+i\eta}\right.\nonumber\\ + \left.\frac{\Sigma_{0j}^{-+}(\omega')}{\omega-k\;\omega_0-\omega'+i\eta}\right]. \end{eqnarray} With these components, the spectral function can be determined as \begin{equation} A_{DTA}(\omega) =\frac{1}{2\pi i} \left[G^{+-}_{DTA}(\omega) - G^{-+}_{DTA}(\omega)\right] \;. \end{equation} It should be noticed that a similar approach but derived from a decoupling procedure within the equation of motion of the system GFs was presented recently in Ref. \cite{dong}. We also point out that in all the preceding approximations the basic assumption of having a equilibrium phonon distribution was made for the evaluation of the polaron correlators. A simpler version of this approximation can be obtained within the same spirit as in the SPA discussed in the previous section. Within this approximation (that we call approximated DTA (ADTA)), $G^R$ can be written as \begin{equation} G^R_{ADTA}(\omega) = \sum_{k=-\infty}^{\infty}\frac{\alpha_k\;n_0+\alpha_{-k}\;(1-n_0)} {\omega+k\omega_0-\tilde{\epsilon}-\tilde{\Sigma}^R(\omega+k\omega_0)} \;. \end{equation} From this expression it is clear that within this approximation the pole structure of the atomic limit is preserved but with a broadening determined by $\mbox{Im}\tilde{\Sigma}^R$. As for large frequencies this effective broadening tends to $\Gamma$ one recovers the SPA result in this limit. However, this effective broadening is strongly reduced with respect to $\Gamma$ for low energies. 
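The behavior of this effective broadening can be illustrated numerically. For the purely imaginary wide-band self-energies considered here one has $-\mbox{Im}\tilde{\Sigma}^R(\omega)=\left[\tilde{\Sigma}^{-+}(\omega)-\tilde{\Sigma}^{+-}(\omega)\right]/2i$, and the following Python sketch (an illustration of ours for the zero-temperature, unbiased, electron-hole symmetric case, with arbitrary parameter values) shows the crossover from the polaronic value $e^{-g^2}\Gamma$ at the Fermi level, quoted below, to the bare rate $\Gamma$ at high energies:
\begin{verbatim}
import numpy as np
from math import factorial

# Sketch (illustration only): effective broadening -Im Sigma~^R(omega) built
# from the dressed components Sigma~^{+-} and Sigma~^{-+} given above, at
# zero temperature for a symmetric, unbiased junction.
g, omega0, Gamma = 1.0, 1.0, 0.2          # arbitrary parameters
kmax = 60                                  # side-band cutoff

def alpha(k):                              # zero-temperature weights, Eq. (alpha2)
    return np.exp(-g**2) * g**(2*k) / factorial(k) if k >= 0 else 0.0

def fermi(w):                              # T = 0 Fermi function, mu = 0
    return np.where(w < 0.0, 1.0, np.where(w > 0.0, 0.0, 0.5))

w = np.linspace(-12.0, 12.0, 2401)
gamma_eff = np.zeros_like(w)
for k in range(-kmax, kmax + 1):
    f = fermi(w + k * omega0)
    gamma_eff += Gamma * (alpha(k) * f + alpha(-k) * (1.0 - f))

i0 = np.argmin(np.abs(w))
print("-Im Sigma^R(omega=0)     :", gamma_eff[i0])        # -> exp(-g^2)*Gamma
print("exp(-g^2)*Gamma          :", np.exp(-g**2) * Gamma)
print("-Im Sigma^R(|omega| large):", gamma_eff[0], gamma_eff[-1])  # -> Gamma
\end{verbatim}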
In fact, for $\omega \rightarrow 0$ and for the symmetric case $|\mbox{Im}\tilde{\Sigma}^R| \rightarrow e^{-g^2} \Gamma$, thus yielding the correct polaronic reduction in the resonance width at the Fermi level \cite{carmina1}. \section{Results} \label{sec:res} This section contains the predictions of the DTA for different physical quantities compared with other approaches. \subsection{Spectral density} \label{subsec::spectral} \begin{figure} \begin{minipage}{1\linewidth} \includegraphics[width=1.0\textwidth]{figure4.eps} \end{minipage} \caption{(Color online) Spectral density for the symmetric case, zero bias voltage and zero temperature with $\Gamma=0.2\;\omega_0$ and $g=1.0$. The results correspond to the different approximations: DTA (full line, green or light gray), ADTA (full line, violet or dark gray), SPA (dotted line) and PTA (dashed line).} \label{DTA-vs-SPA-PTA} \end{figure} We first analyze the results for the dot spectral density $A(\omega)$. Fig. \ref{DTA-vs-SPA-PTA} shows the comparison of the DTA results for $A(\omega)$ with those from PTA and SPA for an electron-hole and left-right symmetric case in the polaronic regime. As can be observed, while the DTA and the PTA results tend to coincide at low energies (central resonance), they increasingly deviate at higher order resonances. In contrast, for these higher order resonances the DTA spectral density gradually converges to the SPA one. Therefore, as commented above, DTA contains the good features of both approximations but without their pathologies. The same conclusion is valid for the simpler ADTA method, as can be seen in Fig. \ref{DTA-vs-SPA-PTA}. \begin{figure} \begin{minipage}{1.0\linewidth} \includegraphics[width=1.0\textwidth]{figure5.eps} \end{minipage} \caption{(Color online) Evolution of the spectral density at zero temperature with increasing voltage within DTA. From top to bottom $V = 1$, 3, 9 $\omega_0$. The other parameters are as in the lower panel of Fig. \ref{interpolation}. The dotted line corresponds to the SPA result which is voltage independent.} \label{spectral-vs-voltage} \end{figure} In Fig. \ref{spectral-vs-voltage} we analyze the evolution of the spectral density with the applied bias voltage. As can be observed, the main effect of the applied bias is to gradually reduce the height of the phonon peaks. Remarkably, the DTA spectral density appears to evolve towards the SPA one which is voltage independent (indicated by the dotted curve in Fig. \ref{spectral-vs-voltage}). As large voltages correspond to high energies one would expect that SPA should become exact in the limit $V \rightarrow \infty$. A similar convergence to the SPA is obtained in the limit $|\tilde{\epsilon}| \rightarrow \infty$, corresponding to the exact result for a fully empty or a fully occupied dot case \cite{SPA-exact}. \begin{figure} \begin{minipage}{1.0\linewidth} \includegraphics[width=1\textwidth]{figure6.eps} \end{minipage} \caption{(Color online) Spectral density at zero temperature in the DTA (full line) and the ISA (triangles) from Ref. \cite{carmina2} for the symmetric case with $\Gamma=0.2\omega_0$, $V=\omega_0$ and two values of the coupling constant $g=0.5$ (upper panel) and $g=1$ (lower panel).} \label{interpolation} \end{figure} Another interesting property of DTA is that it reasonably describes the transition from the polaronic to the weak electron-phonon coupling regimes. This is illustrated in Fig. 
\ref{interpolation} in which the DTA spectral density is shown for two values of the parameter $g = 1$ (lower panel) and $g=0.5$ (upper panel). For comparison we also show in these plots the corresponding results obtained by the interpolative self-energy approach (ISA) of Ref. \cite{us1993} and extended to the non-equilibrium Holstein model in Ref. \cite{carmina2}, which is constructed in order to interpolate between the second-order perturbation theory and the atomic limit. It is remarkable that the two approximations which are derived following such different criteria would so closely coincide in both regimes. \subsection{Current and Noise} \label{subsec::current} We analyze in this subsection the results from the DTA for several transport properties. As shown in the inset of Fig. \ref{intMC} for moderate values of $g$ ($g \sim 0.8$) the current-voltage characteristic starts to exhibit a step-like behavior. For the electron-hole and left-right symmetric case shown in Fig. \ref{intMC} the most pronounced features appear at $V \sim 2n\omega_0$ \cite{paperPTA,carmina2,anders}. It is interesting to note that the DTA results for the current quantitatively agree in this range of parameters with numerically exact results from diagrammatic MC calculations from Ref. \cite{rabani}, indicated by the symbols in the inset of Fig. \ref{intMC}. It should be also mentioned that, as shown in Appendix \ref{appendix}, DTA fulfills the current conservation condition. This condition is trivially fulfilled by PTA where no inelastic processes are included, but not for instance by DSPA. In the case of DSPA the violation of current conservation can be demonstrated explicitly (see Appendix \ref{appendix}). However, as SPA and ADTA consist in an ansatz for the retarded GFs the left and right currents cannot be calculated separately but just using the symmetrized expression of Eq. (\ref{ispectral}). Their non-conserving character can be inferred nevertheless from the violation of the FSR. In order to have a more detailed analysis of the features in the IV characteristics it is convenient to calculate the differential conductance. This quantity is represented in the main panel of Fig. \ref{intMC} for the same parameters as for the inset. We also show for comparison the corresponding results for the SPA and the ISA. Several features are worth noticing: 1) the zero bias conductance within DTA reaches the unitary limit as it corresponds to an electron-hole symmetric case. This condition, which is directly related to the FSR, is also fulfilled by ISA but not by SPA which yields a smaller conductance value. 2) There appears a conductance step at $V\sim \omega_0$. This step corresponds to the onset of inelastic processes due to phonon emission, which is absent within SPA (neither PTA, not shown in Fig. \ref{intMC}, exhibits this feature), and 3) There appears a more pronounced feature at $V \sim 2\omega_0$ corresponding to the side-band peaks in the spectral density. It should be noticed that the precise shape of this feature is extremely sensitive to the presence of a finite broadening of the logarithmic singularities in the real part of the electron self-energies. In fact, a finite broadening leads to a dip in the differential conductance at $V = 2\omega_0$ \cite{dong} which however tends to disappear as the broadening is reduced to zero. 
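The origin of the features at $V \sim 2n\omega_0$ can be visualized with a simple quadrature of Eq. (\ref{ispectral}). The following Python sketch (a toy illustration of ours, not the full DTA calculation: it uses the voltage-independent SPA spectral density of Eq. (\ref{SPA}) at zero temperature with a symmetric bias window, so the inelastic step at $V\sim\omega_0$ discussed above is absent) yields a numerical $dI/dV$ that develops a feature whenever a phonon side band enters the bias window:
\begin{verbatim}
import numpy as np
from math import factorial

# Toy illustration of Eq. (ispectral): I(V) = (8 Gamma_L Gamma_R / Gamma)
# * int dw [f_L - f_R] A(w) at T = 0 with symmetric bias mu_{L,R} = +/- V/2,
# using the voltage-independent SPA spectral function of Eq. (SPA).
g, omega0, Gamma = 0.8, 1.0, 0.1
GammaL = GammaR = Gamma / 2.0
eps_t, n0 = 0.0, 0.5                       # electron-hole symmetric case
kmax = 30

def alpha(k):                               # zero-temperature weights, Eq. (alpha2)
    return np.exp(-g**2) * g**(2*k) / factorial(k) if k >= 0 else 0.0

w = np.linspace(-8.0, 8.0, 8001)
A = np.zeros_like(w)
for k in range(-kmax, kmax + 1):
    weight = alpha(k) * n0 + alpha(-k) * (1.0 - n0)
    A += weight * (Gamma / np.pi) / ((w - eps_t + k * omega0)**2 + Gamma**2)

# cumulative integral of A, then I(V) as the integral over (-V/2, V/2)
cumA = np.concatenate(([0.0], np.cumsum(0.5 * (A[1:] + A[:-1]) * np.diff(w))))
V = np.linspace(0.0, 6.0, 601)
I = 8.0 * GammaL * GammaR / Gamma * (np.interp(V / 2, w, cumA)
                                     - np.interp(-V / 2, w, cumA))
G = np.gradient(I, V)        # dI/dV; features appear near V ~ 2*n*omega0

print("zero-bias conductance of this toy SPA model:", G[0])
\end{verbatim}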
\begin{figure} \begin{minipage}{1\linewidth} \includegraphics[width=1\textwidth]{figure7} \end{minipage} \caption{(Color online) Zero temperature differential conductance within DTA (full line, green or light gray), ISA (dotted line) and SPA (dots) for $\tilde{\epsilon}=0$, $\Gamma=0.1 \omega_0$ and $g=0.8$. The inset shows the current for the same parameters but with a finite temperature $T=0.04 \omega_0$ for comparison with diagrammatic Monte Carlo data (full squares) from \cite{rabani}.} \label{intMC} \end{figure} The inelastic features at $V \sim \omega_0$ become more pronounced as $\Gamma$ is increased. This is illustrated in Fig. \ref{increase-gamma} where the conductance is shown for fixed $g$ and increasing values of $\Gamma$. An interesting issue, which has been addressed repeatedly in the literature is the transition from a step up to a step down in the conductance at the inelastic threshold. For instance, this transition was analyzed in the perturbative regime for the electron-phonon coupling in Refs. \cite{laura,frederiksen}. As shown in Fig. \ref{increase-gamma} the DTA fairly reproduces the step up feature at small values of $\Gamma$ but the transition to the step down behavior at large $\Gamma$ is somewhat masked by the presence of the logarithmic singularity in the real part of the electron self-energy. For comparison we show in Fig. \ref{increase-gamma} the corresponding results obtained with ISA which by construction reproduces the expected step down feature in the large $\Gamma$ in agreement with perturbation theory in $g$. \begin{figure} \begin{minipage}{1\linewidth} \includegraphics[width=1\textwidth]{figure8} \end{minipage} \caption{(Color online) Zero temperature conductance within DTA for $\tilde{\epsilon}=0$, $g=0.5$ and (from bottom to top) $\Gamma=0.1$, 0.2, 0.5, 1.0, 2.0 and 4.0 $\omega_0$. The dashed lines show the corresponding results for ISA.} \label{increase-gamma} \end{figure} The behavior of the differential conductance as $\tilde{\epsilon}$ is varied is shown in Fig. \ref{DTA-EOM}. It is interesting to analyze the evolution of the features at the inelastic threshold $V = \omega_0$. As can be observed in the first inset of Fig. \ref{DTA-EOM}, the initial step up feature for the symmetric case evolves into a step down as $\tilde{\epsilon}$ approaches $\omega_0/2$ where the elastic resonance condition $V/2 = \tilde{\epsilon}$ coincide with the inelastic threshold. In Fig. \ref{DTA-EOM} we compare the DTA results with those of the EOM method obtained in Ref. \cite{carmina1}. As can be observed there is a remarkable agreement between the two methods in this range of parameters. Additionally, the results exhibit a second inelastic threshold at $V \sim 2\omega_0$ which can be more clearly appreciated when $\tilde{\epsilon}$ is shifted from zero energy (see right inset of Fig. \ref{DTA-EOM}). This feature cannot be recovered by other methods like PTA, SPA or ISA. \begin{figure} \begin{minipage}{1\linewidth} \includegraphics[width=1\textwidth]{figure9} \end{minipage} \caption{(Color online) Zero temperature conductance within DTA for $g=1$, $\Gamma = 0.1 \omega_0$ and (from left to right) increasing values of $\tilde{\epsilon}= 0.0$, 0.1, 0.2, 0.3, 0.4 and $0.5 \omega_0$. The dashed lines show the corresponding results for the EOM method of Ref. \cite{carmina1}. The insets correspond to a blow up of the first and second phonon resonance.} \label{DTA-EOM} \end{figure} We next analyze the results for the current noise within DTA. Fig. 
\ref{noise-vs-V} shows the noise and the differential noise, $\partial S/\partial V$ as a function of voltage for increasing values of $\Gamma$ at fixed $g$ for the same parameter choice as in Fig. \ref{increase-gamma} for the differential conductance. There is an overall behavior of $\partial S/\partial V$ which is maintained for all values of $\Gamma$: it starts from a zero value at $V=0$ as it corresponds to a perfect transmitting channel at zero temperature, there is then a maximum at around $V \sim 2\Gamma$ followed by a decay as expected for a Lorentzian resonance. In addition the noise exhibits features at $V \sim n \omega_0$ as the conductance. However, there appear interesting differences. For instance, the feature at the inelastic threshold at $V \sim \omega_0$ evolves as a function of $\Gamma$ from a step up at small values, to a step down at intermediate ones and eventually again to a step up at large $\Gamma$. As in the case of the conductance the features at large $\Gamma$ are masked by the logarithmic singularities in the real part of the self-energies and can only be identified by analyzing the behavior of the noise on the neighborhood of the inelastic threshold. This double change of sign in the step is qualitatively in agreement with previous analysis of the noise in the perturbative regime \cite{remi,schmidt,haupt} and which has been partially confirmed by experimental results for transport through small molecules \cite{kumar}. \begin{figure} \begin{minipage}{1\linewidth} \includegraphics[width=1\textwidth]{figure10} \end{minipage} \caption{(Color online) Zero frequency noise within DTA for the same parameters as in Fig. \ref{increase-gamma}. The lower panel corresponds to the differential noise $\partial S/\partial V$. From top to bottom in the upper panel $\Gamma = 0.1$, 0.2, 0.5, 1.0, 2.0 and 4.0 $\omega_0$.} \label{noise-vs-V} \end{figure} As a final issue we discuss the zero-bias limit for the noise. In this limit the noise is purely due to thermal fluctuations and one should recover the fluctuation-dissipation theorem (FDT), stating that $S(0) = 4 T G(0)$. This can be proved analytically within DTA (and also for the DSPA) as shown in Appendix \ref{appendix}. Fig. \ref{noise-vs-T} illustrates the behavior of the thermal noise as a function of temperature for increasing values of $\Gamma$ at fixed $g$. As can be observed, all curves converge to the value $S(0)/4TG_0 = 1$ in the zero-temperature limit, thus indicating the fulfillment of FDT within this approach. \begin{figure} \begin{minipage}{1\linewidth} \includegraphics[width=1\textwidth]{figure11} \end{minipage} \caption{(Color online) Thermal noise as a function of temperature within DTA for the same parameters as in Figs. \ref{increase-gamma} and \ref{noise-vs-V}. The noise is normalized as $S(0)/4TG_0$ in such a way as to illustrate the fulfillment of the FDT in the zero-temperature limit.} \label{noise-vs-T} \end{figure} \section{Conclusion} \label{sec:conclusion} In this work we have presented and analyzed a theoretical approach for the non-equilibrium transport properties of nanoscale systems coupled to metallic electrodes with strong electron-phonon interactions. We have shown that this method, that we have called DTA, provides analytical expressions for the system GFs which are as simple as previous analytical approximations for the polaronic regime like SPA and PTA. 
We show, however, that the DTA eliminates the most remarkable pathologies of these two previous approximations in the low energy (SPA) and high energy (PTA) regimes. By comparison with other methods we have shown that DTA additionally reproduces the correct behavior in the crossover regime, $\lambda^2/\omega_0 \lesssim \Gamma$. Only in the limit $\lambda^2/\omega_0\Gamma \ll 1$ does this approximation progressively deviate from the results provided by perturbation theory in the electron-phonon coupling. Some exactly known limits of the model, like the fully empty or fully occupied dot case, are also recovered within DTA. On the other hand, we have shown that DTA provides results for the current and the differential conductance in good agreement with results from other more elaborate methods, including numerically exact methods like diagrammatic Monte Carlo. In addition, DTA can be formulated in a way which allows one to extract the noise properties and, more generally, the FCS of the model. We have provided an analysis of the main features of the voltage-dependent shot noise and also of the thermal noise. For this last case we have shown analytically that DTA fulfills the fluctuation-dissipation theorem. We have also demonstrated that DTA satisfies current and noise conservation laws. This is a remarkable property in view of the simplicity of the approximation and its non self-consistent character \cite{self-consistency}. For future applications, the simplicity of the method could allow one to address more complex situations like the non-stationary response of the model as studied in Ref. \cite{Ferdinand}. On the other hand, although the method has been derived for the simplest single-level model, the same ideas could in principle be extended to models including several dot levels coupled to multiple phonon modes like the one discussed in Ref. \cite{brandes}. One can also envisage improving the present approximation by including non-equilibrium effects in the phonon distribution, as discussed for instance in Ref. \cite{galperin}. \begin{acknowledgments} We would like to thank A. Zazunov and R. Avriller for very useful discussions. We are indebted to K. F. Albrecht for sending us the quantum Monte Carlo data for the comparisons. We also thank Spanish MINECO for financial support under project FIS2011-26516. \end{acknowledgments}
\section{Introduction} \label{Intro} Binary bismuthide $\beta$-PdBi$_2$ has attracted much interest recently as a promising candidate of topological superconductor (TS)\cite{Imai,chu15,Herrera,Sakano15,Kacmarcik,LuXin16}. Topological superconductivity is a new state of matter possessing symmetry-protected surface states while the bulk states are fully gapped by superconducting pairing\cite{Hasan10,Cava10,FuPRB,FuPRL}. The Majorana fermions are believed to exist on the surface or vortex core in such TSs, which may not only be of scientific importance, but also can lead to a wide-ranging applications in microelectronic devices and quantum computing. The centrosymmetric stoichiometric $\beta$-PdBi$_2$ ($T_c$$\sim$ 5 K) was claimed to be topologically nontrivial in view of the observation of the topologically-protected surface modes by spin- and angle-resolved ARPES\cite{Sakano15}. However, no Andreev bound states associated with Majorana fermions are detectable through point-contact spectroscopy\cite{LuXin16}, in sharp contrast to the cases in Cu-intercalated Bi$_2$Se$_3$\cite{Ando11,Ando11PCS} and In-doped SnTe\cite{Ando13}. On the other hand, it becomes the common wisdom that spin-orbit interaction (SOI) in heavy elements is crucial for the topological states. It is therefore heuristic to ask what if we replace Pd by heavier Pt element with enhanced SOI. \begin{figure} \includegraphics[width=9cm,keepaspectratio=true]{fig1.eps} \caption{(Color online) Crystal lattice of PtBi$_2$. (a) The primitive unit cell for hexagonal PtBi$_2$. The coordinates of Pt(1) and Pt(2) are ($\frac{a}{3},\frac{2a}{3}, 0.92c)$ and ($\frac{2a}{3},\frac{a}{3}, 0.08c)$, respectively. (b) The structure as seen from a perspective along the $c$-axis. } \label{Fig1} \end{figure} In this study, we substituted Pt for Pd in PdBi$_2$ and found that this new material actually crystallizes in a distinct structure. Unlike $\beta$-PdBi$_2$ which has the tetragonal structure in an $I$4/$mmm$ space group, PtBi$_2$ crystallizes in space group P-3 with a hexagonal unit cell of $a$=$b$=6.553$\AA$, $c$=6.165$\AA$\cite{book-PtBi2}. It is also different from its homologue PtBi superconductor ($T_c$=1 K) with a monoclinic unit cell\cite{Matthias}. The in-plane resistivity of PtBi$_2$ shows metallic behaviors down to 2 K, the lowest temperature studied in this work. The intra-plane and inter-plane magnetization displays pronounced anisotropy, being diamagnetic with field aligned along the plane and paramagnetic when field is perpendicular to the plane. The magnetoresistance (MR) and Hall resistivity measured on the same sample both show two types of carriers and the former one scales well to the semi-classical Kohler's rule\cite{Kohler1938,NieLuo02}. \section{Experimental} \label{Exp} PtBi$_2$ single crystals were fabricated via a melt-growth method. The starting materials of high purity, Bi powder(4N) and Pt powder (4N), were mixed thoroughly in the prescribed molar ratio of Bi:Pt = 2:1 (2 g in total weight). All these preparations were performed in a glove box filled with protective argon gas (both H$_2$O and O$_2$ contents were limited below 0.1ppm). The mixtures were loaded and sealed in an evacuated quartz tube. This quartz tube was then heated to 700$^\circ$C quickly in a sintering furnace and kept at this temperature for 48h, before being slowly cooled down to 450$^\circ$C(3$^\circ$C/h), and finally being quenched into cold water. 
Large pieces of dark-gray plate-like PtBi$_2$ single crystals, typically 7-8 mm in length, were harvested. Energy dispersive x-ray (EDX) spectrometry confirms the stoichiometric ratio of the chemical composition (32.8 : 67.2 $\pm$ 3.0\% in molar percentage for Pt:Bi). The structure of the crystals was characterized by powder x-ray diffraction (XRD) at room temperature using a Rigaku diffractometer with Cu $K\alpha$ radiation and a graphite monochromator. Lattice parameters were obtained by Rietveld refinements. The magnetization was measured by vibrating sample magnetometry using a Quantum Design MPMS-5 system. Measurements of MR and Hall effect were performed on the same sample by changing the field polarities. The component even in field was taken as the MR, and the odd component was taken as the Hall resistivity. \begin{figure} \includegraphics[width=9cm,keepaspectratio=true]{fig2.eps} \caption{(Color online) Panels (a) and (b) represent the powder XRD patterns and single crystal XRD diffraction peaks, respectively. The asterisks in panel (a) mark the possible impurity phases. The optical image of a single-crystal sample is shown in the inset of panel (b).} \label{Fig2} \end{figure} \begin{figure} \includegraphics[width=9.2cm,keepaspectratio=true]{fig3.eps} \caption{(Color online) (a) Zero-field resistivity curve down to 2K. The low-$T$ resistivity fits to $\rho=\rho_0+AT^2$ very well below $\sim$35K (see the upper-left inset). (b) and (c) show the in-plane and inter-plane susceptibility under a field of 5 kOe, respectively.} \label{Fig3} \end{figure} \section{Results} \label{Results} The schematic view of the crystal structure of PtBi$_2$ is shown in Fig. 1. It crystallizes in a hexagonal structure with the space group P-3 (No.147). Its structure consists of alternate stacking of 2D Pt layers and bismuth bilayers along the $c$-axis. In one primitive unit cell, there are three Pt atoms, one being located at the corner of the polyhedron and the other two labelled as Pt(1) and Pt(2) in Fig. 1. The Bi atoms are trigonally coordinated. The XRD pattern of the PtBi$_2$ crystal is presented in Fig. 2. A small trace of an impurity phase, marked by the asterisks in panel (a), was detectable in the powder X-ray pattern, and only (00$\ell$) diffraction peaks were observed in the single-crystal XRD pattern, indicating good $c$-axis orientation of the as-grown samples. The calculated lattice parameters are $a$=$b$=6.553$\AA$, $c$=6.165$\AA$, consistent with previously reported results\cite{book-PtBi2}. \begin{figure*} \includegraphics[width=18cm,keepaspectratio=true]{fig4.eps} \caption{(Color online) The magnetoresistance (upper panel) and Hall resistivity (lower panel), both measured on the same crystal with the same electrical contacts, at several selected temperatures. The red solid curves delineate the fits to the two-band carrier model.} \label{Fig4} \end{figure*} Zero-field in-plane resistivity is plotted in Fig. 3. The room temperature resistivity is about 0.12 m$\Omega$cm and it is metallic down to the lowest temperature we measured (2K). The residual resistivity ratio is approximately 50 for our samples, indicative of good sample quality. The samples are further characterized by the susceptibility measurements discussed below. Remarkably, the magnetization of the sample shows large anisotropy with respect to the field orientations. As illustrated in Fig. 
3, the in-plane magnetization $\chi_{ab}$ is diamagnetic and varies little with $T$ down to 20K, below which it displays a significant upturn, whereas the inter-plane $\chi_c$ is paramagnetic instead and increases linearly with decreasing $T$, followed by a downward trend below 20K. The origin of these intriguing magnetization behaviors is not clear. The magnetoresistive and Hall response of a material can open a avenue for exploring the dispersion and dynamics of the charge carriers. First, in PtBi$_2$, it is noted that the absolute value of the MR, defined as $\frac{\Delta\rho}{\rho}$, is rather large, reaching $>$400\% at 2K in a magnetic field of 9T. This large MR implies a rather large electron mean free path, hence a long relaxation time. However, this MR is damped very fast with increasing $T$, as seen from the upper panels of Fig. 4. Second, in single-band metals, the MR at small fields is usually quadratic and the Hall resistivity varies linearly with field. However, in the two-band Drude model, on the assumption of the field-independent carrier density and relaxation time, $\frac{\Delta\rho}{\rho}(H)$ and $\rho_{xy}(H)$ can be written as\cite{Greene07,Rullier-Albenque09,Hussey10,Rullier-Albenque12} \begin{eqnarray} \frac{\Delta\rho}{\rho}=\frac{\sigma_h\sigma_e(\sigma_hR_h-\sigma_eR_e)^2H^2}{(\sigma_h+\sigma_e)^2+\sigma_h^2\sigma_e^2(R_h+R_e)^2H^2}\label{eqn:one} \end{eqnarray} \begin{eqnarray} \rho_{xy}(H)=\frac{\sigma_h^2R_h+\sigma_e^2R_e+\sigma_h^2\sigma_e^2R_hR_e(R_h+R_e)H^2}{(\sigma_h+\sigma_e)^2+\sigma_h^2\sigma_e^2(R_h+R_e)^2H^2}H\label{eqn:two} \end{eqnarray} \noindent where $\sigma_e(h)$ and $R_e(h)$ are electrical conductivity and Hall coefficient for electron (hole) band, respectively. The MR and the Hall signal for PtBi$_2$ sample are exemplified in Fig 4 at some selected temperatures. Although the individual curves can be fitted with the above two-band equations reasonably well, plotted as the red solid line in each panel, we failed to model these two transport coefficients simultaneously with the same set of four parameters. These difficulties may arise from the simple assumption of the field independent charge carrier density and scattering, in analogy to the case in cuprates\cite{Greene07}. Nevertheless, given the quality of our fitting and the strong non-linearity of the Hall resistivity, we strongly believe that the transport properties of this compound are governed by two-band charge carriers. In standard metals, the MR $\Delta \rho$/$\rho$ at a certain temperature under a field $H$ has a general form known as the Kohler's rule\cite{Kohler1938,NieLuo02}: $\Delta \rho$/$\rho$=$f$($H/\rho$). This rule can be derived from Boltzmann transport theory, on the assumption of constant carrier number with $T$ and a single scattering rate on the Fermi surface. From this rule, $\Delta \rho$/$\rho$ is literally independent of $T$ such that the plots of $\Delta \rho/\rho_0$ as a function of $H/\rho$ at distinct temperatures will collapse onto a single curve. Interestingly, this rule, albeit its semiclassical origin, was found to be well obeyed in a large number of metals from conventional metals to some quantum matters. These involve the metals with two types of carriers\cite{NieLuo02}, the pseudogap phase of the underdoped cuprates\cite{Greven14}, quasi-one-dimensional metals\cite{Narduzzo07,Xu15} as well as some topological semimetals\cite{Coldea}. We examined this rule in PtBi$_2$ (Fig. 
5) and found that it is well obeyed in this material, over a wide field range (up to 9T) and a broad $T$ window (2K-100K; above 100K the MR tends to be negligible). Moreover, the longitudinal MR in PtBi$_2$ (H $\|$ I $\|$ $ab$) also shows two types of charge carriers and the validity of Kohler's rule (data not shown). \begin{figure} \includegraphics[width=9cm,keepaspectratio=true]{fig5.eps} \caption{(Color online) The Kohler plot for the MR data from Fig. 4. } \label{Fig5} \end{figure} \section{Discussion and Conclusion} \label{Discussions} In recent work by Sakano \textit{et al.}, several topologically-protected surface states were observed by spin-resolved ARPES in the TS candidate $\beta$-PdBi$_2$\cite{Sakano15}. These non-trivial surface bands include one crossing the Fermi level and another forming the Dirac cone state 2 eV below the Fermi level. It was noted that these topological surface states are \textit{all} derived as a consequence of SOI, although their respective microscopic details may be different. In PtBi$_2$, the SOI ought to be stronger. Owing to its good metallicity, however, the electrical transport is \textit{overall} dominated by its bulk electrons and it looks more like a conventional good metal from a transport perspective. In this material, the possible quantum linear MR arising from the degenerate Dirac fermions in the quantum limit is not observed up to 9T\cite{Abrikosov98,Ong10}. Interestingly, this material was reported to superconduct below 150 mK\cite{PtBi2-0.15K}, $\sim$40 times lower than $T_c$ in PdBi$_2$. How the SOI changes the electronic structure of PtBi$_2$, and whether it induces non-trivial surface states, awaits more investigations, both theoretical and experimental. To summarize, we synthesized single crystals of the stoichiometric bismuthide PtBi$_2$ by a solid-state reaction method. The samples were carefully characterized by combined procedures of XRD, (magneto-)transport and susceptibility measurements. This compound shows prominent two-band transport behaviors with no clear signature from the possible surface states. However, the high-quality single crystals are now ready for prospective advanced experiments, especially those with more surface sensitivity. \begin{acknowledgments} The authors would like to thank C. M. J. Andrew, A. F. Bangura for stimulating discussions. This work is sponsored by the National Key Basic Research Program of China (Grant No. 2014CB648400), and by National Natural Science Foundation of China (Grant No. 11474080, U1432135, 11611140101). X.X. would also like to acknowledge the financial support from the Distinguished Young Scientist Funds of Zhejiang Province (LR14A040001) and an open program from Wuhan National High Magnetic Field Center (2015KF15). \end{acknowledgments}
\section{Introduction} \mylabel{sec-intro} Suppose $\check{X}$ is a smooth surface and $\check{D}=\check{D}_1+\ldots + \check{D}_k$ is a divisor with each $\check{D}_i$ smooth. Suppose $\check{E}$ is a bundle provided with filtrations $\check{F}^i_{\cdot}$ along the $\check{D}_i$, and parabolic weights $\alpha ^i_{\cdot}$. If $\check{D}$ has normal crossings, this defines a locally abelian parabolic bundle on $(\check{X},\check{D})$ and the parabolic Chern classes have been calculated as explained in the previous part. Suppose that the singularities of $\check{D}$ contain some points of higher multiplicity. For the present work we assume that these are as simple as possible, namely several smooth branches passing through a single point with distinct tangent directions. The first basic case is a triple point. Blowing up the multiple points separates the branches; let $\varphi : X \rightarrow \check{X}$ denote this birational transformation, and let $D_i\subset X$ denote the strict transforms of the $\check{D}_i$. Assuming for simplicity that there is a single multiple point, denote by $D_0$ the exceptional divisor. Now $D=D_0+\cdots + D_k$ is a divisor with normal crossings. Suppose $E$ is a vector bundle on $X$ with $$ E|_{X-D_0} = \varphi ^{\ast}(\check{E})|_{X-D_0}. $$ The filtrations $\check{F}^i_{\cdot}$ induce filtrations of $\varphi ^{\ast}(\check{E})|_{D_i}$ and hence of $E|_{D_i-D_i\cap D_0}$, which then extend uniquely to filtrations $F^i_{\cdot}$ of $E|_{D_i}$. Associate to these filtrations the same parabolic weights as before. Up until now we have already made a choice of extension of the bundle $E$. Choose furthermore a filtration $F^0_{\cdot}$ of $E|_{D_0}$ and parabolic weights associated to $D_0$. Having made these choices we get a parabolic bundle on the normal crossings divisor $(X,D)$, which determines parabolic Chern classes. We are particularly interested in the invariant $\Delta$ which combines $c_1$ and $c_2$ in such a way as to be invariant by tensoring with a line bundle. The goal of this paper is to provide a convenient calculation of $\Delta$ and then investigate its dependence on the choices which have been made above. In particular we would like to show that $\Delta$ achieves its minimum and to calculate this minimum, which can be thought of as the Chern invariant associated to the original parabolic structure on the multiple point singularity $(\check{X},\check{D})$. The main difficulty is to understand the possible choices for $E$. For this we use the technique of Ballico-Gasparim \cite{Ballico} \cite{BallicoGasparim1} \cite{BallicoGasparim2}. \section{Calculating the invariant $\Delta$ of a locally abelian parabolic bundle} Recall from \cite{Taher} the formulas for the parabolic first and second Chern characters of a locally abelian parabolic bundle $E$ in codimension one and two, $ch_{1}^{Par}(E)$ and $ch_{2}^{Par}(E)$. Let $X$ be a smooth projective variety over an algebraically closed field of characteristic zero and let $D$ be a strict normal crossings divisor on $X$. Write $D = D_{1} + ... + D_{n}$ where the $D_{i}$ are the irreducible smooth components, meeting transversally. We sometimes denote by $\mathcal{S}:= \{ 1,\ldots , n\}$ the set of indices for components of the divisor $D$. For $i = 1, ..., n$, let $\Sigma_{i}$ be finite linearly ordered sets with notations $ \eta_{i} \leq ... \leq \sigma \leq \sigma ' \leq \sigma '' \leq ... \leq \tau_{i}$ where $\eta_{i}$ is the smallest element of $\Sigma_{i}$ and $\tau_{i}$ the greatest element of $\Sigma_{i}$. 
Let $\Sigma '_{i}$ be the set of connections between the $\sigma$'s i.e $$ \Sigma '_{i} = \{(\sigma, \sigma '),\ s.t\ \sigma < \sigma '\ and\ there \ exist \ no \ \sigma ''\ with\ \sigma < \sigma ''< \sigma' \}. $$ Consider the {\em tread functions} $ m_{+} :\Sigma'_i\rightarrow \Sigma_i$ and $m_{-} : \Sigma'_i \rightarrow \Sigma_i$ if $\lambda = (\sigma, \sigma') \in \Sigma'_{i} $ then $ \sigma = m_{-}(\lambda), \sigma' = m_{+}(\lambda)$. In the other direction, consider the {\em riser functions} $C_{+} : \Sigma _i- \{\tau _i\} \rightarrow \Sigma'_i $ and $C_{-} : \Sigma_i - \{\eta_i\} \rightarrow \Sigma '_i $ such that $C_{+}(\sigma) = (\sigma, \sigma ')$ where $ \sigma ' > \sigma$ the next element and $C_{-}(\sigma) = (\sigma '', \sigma)$ where $\sigma '' < \sigma$ the next smaller element. $$ $$ For any parabolic bundle $E$ in codimension one, and two, the parabolic first, second Chern characters $ch_{1}^{Par}(E), and \ ch_{2}^{Par}(E),\ $ are obtained as follows: $$ $$ $\centerdot \ ch^{Par}_{1}(E) := \ ch_{1}^{Vb}(E) \ - \ \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \Sigma '_{i_{1}}}\alpha_{i_{1}}(\lambda_{i_{1}}).rank(Gr^{i_{1}}_{\lambda_{i_{1}}}).[D_{i_{1}}]$ $$ $$ $\centerdot \ ch^{Par}_{2}(E) := \ ch^{Vb}_{2}(E) \ -\ \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \sum'_{i_{1}}} \alpha_{i_{1}}(\lambda_{i_{1}}).(\xi_{i_{1}})_{\star}\left(c_{1}^{D_{i_{1}}}(Gr^{i_{1}}_{\lambda_{i_{1}}})\right) $ $$ $$ $\hspace{2.2cm} + \ \frac{1}{2} \ \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \sum'_{i_{1}}} \alpha^{2}_{i_{1}}(\lambda_{i_{1}}).rank(Gr^{i_{1}}_{\lambda_{i_{1}}}).[D_{i_{1}}]^{2}$ $$ $$ $ \hspace{2.2cm} + \ \frac{1}{2}\sum_{i_{1} \neq i_{2}} \sum_{\aatop{\lambda_{i_{1}} }{ \lambda_{i_{2}}}} \sum_{p \in Irr(D_{i_{1}} \cap D_{i_{2}})} \alpha_{i_{1}}(\lambda_{i_{1}}).\alpha_{i_{2}}(\lambda_{i_{2}}).rank_{p}(Gr^{i_{1}, i_{2}}_{\lambda_{i_{1}}, \lambda_{i_{2}}}).[D_{p}].$ $$ $$ Where: $\\$ $\centerdot \ ch_{1}^{Vb}(E), ch^{Vb}_{2}(E)$ denotes the first, second, Chern character of vector bundles $E$. $\centerdot \ Irr(D_{I})$ denotes the set of the irreducible components of $D_I:=D_{i_{1}} \cap D_{i_{2}} \cap ... \cap D_{i_{q}}$. $\centerdot \ \xi_{I}$ denotes the closed immersion $D_{I} \longrightarrow X$, and $\xi_{I,{\star}} : A^k(D_{I}) \longrightarrow A^{k+q}(X)$ denotes the associated Gysin map. $\centerdot$ \ Let $p$ be an element of $Irr(D_{i} \cap D_{j}).$ Then $rank_{p}(Gr^{I}_{\lambda})$ denotes the rank of $Gr^{I}_{\lambda}$ as an $\mathcal{O}_{p}$-module. $\centerdot \ [D_{i_j}] \in A^1(X)\otimes \mathbb{Q}$, and $[D_{p}] \in A^2(X)\otimes \mathbb{Q}$ denote the cycle classes given by $D_{i_j}$ and $D_{p}$ respectively. \begin{definition} Let $Gr^{i_{1}}_{\lambda_{i_{1}}}$ be a bundle over $D_{i_1}$. Define the degree of $Gr^{i_{1}}_{\lambda_{i_{1}}}$ to be $$ \emph{deg} (Gr^{i_{1}}_{\lambda_{i_{1}}}) := (\xi_{i_{1}})_{\star}\left(c_{1}^{D_{i_{1}}}(Gr^{i_{1}}_{\lambda_{i_{1}}})\right). $$ \end{definition} \begin{definition} The invariant $\Delta$, which is a normalized version of $c_2$ designed to be independent of tensorization by line bundles. It is defined by $$ \Delta = c_{2} - \frac{r - 1}{2r}c_{1}^{2}. $$ Recall that: $ch_{2} = \frac{1}{2}c_{1}^{2} - c_{2}$ $\Longrightarrow$ $c_{2} = \frac{1}{2}c_{1}^{2} - ch_{2}$. Therefore $$ \Delta = \frac{1}{2}c_{1}^{2} - ch_{2} - \frac{1}{2}c_{1}^{2} + \frac{1}{2}c_{1}^{2} + \frac{1}{2r}c_{1}^{2} = \frac{1}{2r}ch_{1}^{2} - ch_{2}. 
$$ \end{definition} Then $\Delta^{Par}(E) = \frac{1}{2r}ch^{Par}_{1}(E)^{2} - ch^{Par}_{2}(E) $ $$ $$ $ = \frac{1}{2r} \left[ch^{Vb}_{1}(E) \ - \ \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \Sigma '_{i_{1}}}\alpha_{i_{1}}(\lambda_{i_{1}}).rank(Gr^{i_{1}}_{\lambda_{i_{1}}}).[D_{i_{1}}]\right]^{2}$ $$ $$ $ -\ ch^{Vb}_{2}(E) \ +\ \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \sum'_{i_{1}}} \alpha_{i_{1}}(\lambda_{i_{1}}).\emph{deg} (Gr^{i_{1}}_{\lambda_{i_{1}}}) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2} \ \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \sum'_{i_{1}}} \alpha^{2}_{i_{1}}(\lambda_{i_{1}}).rank(Gr^{i_{1}}_{\lambda_{i_{1}}}).[D_{i_{1}}]^{2}$ $$ $$ $ \hspace{2.2cm} - \ \frac{1}{2}\sum_{i_{1} \neq i_{2}} \sum_{\aatop{\lambda_{i_{1}} }{ \lambda_{i_{2}}}} \sum_{p \in Irr(D_{i_{1}} \cap D_{i_{2}})} \alpha_{i_{1}}(\lambda_{i_{1}}).\alpha_{i_{2}}(\lambda_{i_{2}}).rank_{p}(Gr^{i_{1}, i_{2}}_{\lambda_{i_{1}}, \lambda_{i_{2}}}).[D_{p}].$ $$ $$ $\hspace{2.2cm} = \frac{1}{2r}ch^{Vb}_{1}(E)^{2} $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{r} \left[ch^{Vb}_{1}(E) \right] \cdot \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \Sigma '_{i_{1}}}\alpha_{i_{1}}(\lambda_{i_{1}}).rank(Gr^{i_{1}}_{\lambda_{i_{1}}}).[D_{i_{1}}]$ $$ $$ $ \hspace{2.2cm} + \ \frac{1}{2r}\sum_{i_{1} \neq i_{2}} \sum_{\aatop{\lambda_{i_{1}} }{ \lambda_{i_{2}}}} \sum_{p \in Irr(D_{i_{1}} \cap D_{i_{2}})} \alpha_{i_{1}}(\lambda_{i_{1}}).\alpha_{i_{2}}(\lambda_{i_{2}}). rank(Gr^{i_{1}}_{\lambda_{i_{1}}})rank(Gr^{i_{2}}_{\lambda_{i_{2}}}).[D_{p}].$ $$ $$ $\hspace{2.2cm} + \ \frac{1}{2r}\sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \Sigma '_{i_{1}}}\sum_{\lambda'_{i_{1}} \in \Sigma '_{i_{1}}}\alpha_{i_{1}}(\lambda_{i_{1}}).\alpha_{i_{1}}(\lambda'_{i_{1}}). rank(Gr^{i_{1}}_{\lambda_{i_{1}}}).rank(Gr^{i_{1}}_{\lambda'_{i_{1}}}).[D_{i_{1}}]^{2}$ $$ $$ $\hspace{2.2cm} - \ ch^{Vb}_{2}(E) \ +\ \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \sum'_{i_{1}}} \alpha_{i_{1}}(\lambda_{i_{1}}).\emph{deg} (Gr^{i_{1}}_{\lambda_{i_{1}}}) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2} \ \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \sum'_{i_{1}}} \alpha^{2}_{i_{1}}(\lambda_{i_{1}}).rank(Gr^{i_{1}}_{\lambda_{i_{1}}}).[D_{i_{1}}]^{2}$ $$ $$ $ \hspace{2.2cm} - \ \frac{1}{2}\sum_{i_{1} \neq i_{2}} \sum_{\aatop{\lambda_{i_{1}} }{ \lambda_{i_{2}}}} \sum_{p \in Irr(D_{i_{1}} \cap D_{i_{2}})} \alpha_{i_{1}}(\lambda_{i_{1}}).\alpha_{i_{2}}(\lambda_{i_{2}}).rank_{p}(Gr^{i_{1}, i_{2}}_{\lambda_{i_{1}}, \lambda_{i_{2}}}).[D_{p}].$ \begin{proposition} $\Delta^{Par}(E) = \Delta^{Vb}(E) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{r} ch^{Vb}_{1}(E) \cdot \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \Sigma '_{i_{1}}}\alpha_{i_{1}}(\lambda_{i_{1}}).rank(Gr^{i_{1}}_{\lambda_{i_{1}}}).[D_{i_{1}}]$ $$ $$ $ \hspace{2.2cm} + \ \frac{1}{2r}\sum_{i_{1} \neq i_{2}} \sum_{\aatop{\lambda_{i_{1}} }{ \lambda_{i_{2}}}} \sum_{p \in Irr(D_{i_{1}} \cap D_{i_{2}})} \alpha_{i_{1}}(\lambda_{i_{1}}).\alpha_{i_{2}}(\lambda_{i_{2}}). rank(Gr^{i_{1}}_{\lambda_{i_{1}}})rank(Gr^{i_{2}}_{\lambda_{i_{2}}})[D_{p}].$ $$ $$ $\hspace{2.2cm} + \ \frac{1}{2r}\sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \Sigma '_{i_{1}}}\sum_{\lambda'_{i_{1}} \in \Sigma '_{i_{1}}}\alpha_{i_{1}}(\lambda_{i_{1}}).\alpha_{i_{1}}(\lambda'_{i_{1}}). 
rank(Gr^{i_{1}}_{\lambda_{i_{1}}}).rank(Gr^{i_{1}}_{\lambda'_{i_{1}}}).[D_{i_{1}}]^{2}$ $$ $$ $\hspace{2.2cm} + \ \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \Sigma'_{i_{1}}} \alpha_{i_{1}}(\lambda_{i_{1}}).\emph{deg} (Gr^{i_{1}}_{\lambda_{i_{1}}}) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2} \ \sum_{i_{1} \in \mathcal{S}}\sum_{\lambda_{i_{1}} \in \Sigma'_{i_{1}}} \alpha^{2}_{i_{1}}(\lambda_{i_{1}}).rank(Gr^{i_{1}}_{\lambda_{i_{1}}}).[D_{i_{1}}]^{2}$ $$ $$ $ \hspace{2.2cm} - \ \frac{1}{2}\sum_{i_{1} \neq i_{2}} \sum_{\aatop{\lambda_{i_{1}} }{ \lambda_{i_{2}}}} \sum_{y \in Irr(D_{i_{1}} \cap D_{i_{2}})} \alpha_{i_{1}}(\lambda_{i_{1}}).\alpha_{i_{2}}(\lambda_{i_{2}}).rank_{y}(Gr^{i_{1}, i_{2}}_{\lambda_{i_{1}}, \lambda_{i_{2}}}).[D_{y}].$ \end{proposition} \section{Parabolic bundles with full flags} \mylabel{sec-parasurface} We use the fact that $X$ is a surface to simplify the above expressions, by assuming that the parabolic filtrations are full flags. \begin{proposition} If $E'$ is a locally free sheaf over $X - \lbrace P \rbrace$, then there exists a unique extension to a locally free sheaf $E$ over $X$, i.e.~such that $E\vert_{X - \lbrace P\rbrace} = E'$. \end{proposition} \begin{proposition} If we have a strict sub-bundle of $E\vert_{D_{i} - \lbrace P \rbrace}$ then there exists a unique extension to a strict sub-bundle of $E\vert_{D_{i}}$. \end{proposition} \begin{remark} It follows from these propositions that if $(E', F'^{i}_{\alpha_{i}})$ is a parabolic structure over $(X - \lbrace P \rbrace, D - \lbrace P \rbrace)$, then we obtain a bundle $E$ over $X$ together with filtrations $\lbrace F'^{i}_{\alpha_{i}} \rbrace$ of $E\vert_{D_{i}}$ by strict sub-bundles. \end{remark} \begin{definition} $Hom(\mathcal{O}_{\mathbb{P}^{1}}(-m_{i}), \mathcal{O}_{\mathbb{P}^{1}}^{\oplus2}) = H^{0}(\mathbb{P}^{1}, \mathcal{O}(-m_{i})^{\vee}\otimes \mathcal{O})^{\oplus2} = H^{0}(\mathbb{P}^{1}, \mathcal{O}(m_{i}))^{\oplus2}.$ \end{definition} For example, subbundles of a rank two trivial bundle may be expressed very explicitly. \begin{proposition} Consider a pair of polynomials $(A_{i}, B_{i}) \in H^{0}(\mathbb{P}^{1}, \mathcal{O}(m_{i}))^{\oplus2}$. The corresponding sub-sheaf is saturated iff $m_{i} = \max(\deg(A_{i}), \deg(B_{i}))$ and $\gcd(A_{i}, B_{i}) = 1$. In that case $$ (A_{i}, B_{i}) : \mathcal{O}_{\mathbb{P}^{1}}(-m_{i}) \longrightarrow \mathcal{O}_{\mathbb{P}^{1}}^{\oplus2} $$ is an injection of sheaves, an isomorphism onto the saturated sub-sheaf in question. \end{proposition} \begin{lemma} For every filtration $0 \subseteq F^{i}_{\sigma_{i}} \subseteq F^{i}_{\sigma'_{i}} \subseteq \cdots \subseteq F^{i}_{\tau_{i}} \subseteq E\vert_{D_{i}}$ there exists a complete flag $0 \subseteq \widehat{F}_{1} \subseteq \cdots \subseteq \widehat{F}_{r}= E\vert_{D_{i}}$ such that $F^{i}_{\sigma_{i}} = \widehat{F}_{k(\sigma_{i})}$ for all $\sigma_{i} \in \Sigma_{i}$, where the function $k : \Sigma_{i} \longrightarrow \lbrace 0, 1, \ldots, r\rbrace$ is given by $k(\sigma_{i}) = rank(F^{i}_{\sigma_{i}})$. \end{lemma} In view of this lemma, we will now suppose that all the filtrations are complete flags. The weights should then form an increasing sequence but not necessarily strictly increasing. In particular we will change notation and denote the filtration of $E\vert_{D_{i}}$ by $$ 0= F^i_0 \subseteq F^i_1\subseteq ....\subseteq F^i_r = E\vert_{D_{i}} . $$ In this case $\Sigma _i = \{ \sigma _i^0,\ldots , \sigma _i^r\}$ and $\Sigma '_i = \{ \lambda _i^1,\ldots , \lambda _i^r\}$. These sets now have the same number of elements for each $i$ so we can return to a numerical indexation.
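To fix ideas, consider for instance the rank two case $r = 2$, which will be our main focus later on. A complete flag along $D_i$ is then simply the choice of a rank one subbundle, $$ 0 = F^i_0 \subseteq F^i_1 \subseteq F^i_2 = E\vert_{D_{i}}, \qquad rank (F^i_1) = 1, $$ and the two weights $\alpha_i(\lambda_i^1) \leq \alpha_i(\lambda_i^2)$ are attached to the successive quotients $F^i_1$ and $E\vert_{D_{i}}/F^i_1$ respectively. If the original parabolic structure along $D_i$ had only the trivial filtration, one simply repeats the same weight, $\alpha_i(\lambda_i^1) = \alpha_i(\lambda_i^2)$; this is allowed precisely because the sequence of weights is required to be increasing but not strictly increasing.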
We denote $$ Gr^i_k(E\vert_{D_{i}}) := Gr^i_{\lambda _i^k}(E\vert_{D_{i}}) = F^i_k/F^i_{k-1}. $$ \begin{proposition} $rank(Gr^{i}_{\lambda^{k}_{i}}) = 1$ for all $k$ $ \Longleftrightarrow$ the filtrations $ F^i_{\sigma ^1_i} \leq F^i_{\sigma ^2_i} \leq ...\leq F^i_{\sigma ^r_i}$ are complete flags for $i = 1,2,...,n$. \end{proposition} Since we are on a surface, $D_i\cap D_j$ is a finite collection of points. At each point $P\in D_i\cap D_j$ we have two filtrations of $E_P$ coming from the parabolic filtrations along $D_i$ and $D_j$. We are now assuming that they are both complete flags. The incidence relationship between these filtrations is therefore encoded by a permutation. \begin{lemma} For every $k$ there exists a unique $k' \in \lbrace 1, ..., r \rbrace$ such that $\hspace{0.1cm}$ $rank \left(Gr^i_{k} Gr^j_{k'}(E_{P})\right)= \dim \left( \dfrac{F_{k}^{i} \cap F_{k'}^{j}}{F_{k-1}^{i} \cap F_{k'}^{j} + F_{k}^{i} \cap F_{k'-1}^{j}}\right) = 1$. \end{lemma} \begin{definition} For every $P \in D_{i} \cap D_{j}$ define the permutation $\sigma( P, i, j) : \lbrace 1, ..., r \rbrace= \Sigma'_{i} \longrightarrow \lbrace 1, ..., r \rbrace $ which sends $k \in \lbrace 1, ..., r \rbrace $ to $ \sigma( P, i, j)(k) = k'$ where $k'$ is the unique index given in the previous lemma. \end{definition} \begin{lemma} For every $k$, if $k'' \neq \sigma(P, i, j)(k)$ then $ \hspace{0.1cm}$ $rank \left(Gr^{i}_{k} Gr^{j}_{k''}(E_{P})\right) = 0$. \end{lemma} Since the filtrations are full flags, there are $r$ different indices $\lambda _i^1,\ldots , \lambda _i^r$ for each divisor $D_i$. We introduce the notation $\alpha (D_i,k):=\alpha _i(\lambda _i^k)$. With this notation we obtain the following expression for the term involving $Gr^{i_{1}, i_{2}}_{\lambda_{i_{1}}, \lambda_{i_{2}}}$: $$ $$ $ \hspace{2.2cm} - \ \frac{1}{2}\sum_{i_{1} \neq i_{2}} \sum_{\aatop{\lambda_{i_{1}} }{ \lambda_{i_{2}}}} \sum_{y \in Irr(D_{i_{1}} \cap D_{i_{2}})} \alpha_{i_{1}}(\lambda_{i_{1}}).\alpha_{i_{2}}(\lambda_{i_{2}}).rank_{y}(Gr^{i_{1}, i_{2}}_{\lambda_{i_{1}}, \lambda_{i_{2}}}).[y].$ $$ $$ $= \hspace{2.2cm} - \ \frac{1}{2}\sum_{i \neq j} \sum_{k=1}^r \sum_{y \in Irr(D_{i} \cap D_{j})} \alpha (D_i,k).\alpha (D_j,\sigma (y,i,j)(k)).[y].$ $$ $$ On the other hand, all ranks of the graded pieces $Gr(D_i,k):=Gr^i_{\lambda ^k_i}$ are equal to $1$. They are line bundles on $D_i$. $\\$ \begin{definition} Suppose $i \neq 0$, so that $Gr(D_i, k)$ is a line bundle over $D_i$. Then we define the degree of $Gr(D_i, k)$ to be: $$ \emph{deg}\left(Gr(D_i, k)\right) = (\xi_{i})_{\star}\left(c_{1}^{D_{i}}(Gr(D_i,k))\right). $$ \end{definition} We can now rewrite the statement of the previous proposition.
\begin{proposition} $\Delta^{Par}(E) = \Delta^{Vb}(E) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{r} ch^{Vb}_{1}(E) \cdot \sum_{i \in \mathcal{S}}\sum_{k=1}^r\alpha (D_i,k).[D_i]$ $$ $$ $ \hspace{2.2cm} + \ \frac{1}{2r}\sum_{i\neq j} \sum_{k,l\in [1,r]} \sum_{y \in Irr(D_{i} \cap D_{j})} \alpha (D_i,k).\alpha (D_j,l)[y].$ $$ $$ $\hspace{2.2cm} + \ \frac{1}{2r}\sum_{i \in \mathcal{S}}\sum_{k,l\in [1,r]} \alpha (D_i,k).\alpha (D_i,l) .[D_{i}]^{2}$ $$ $$ $\hspace{2.2cm} + \ \sum_{i \in \mathcal{S}}\sum_{k=1}^r \alpha (D_i,k).\emph{deg}\left(Gr(D_i, k)\right) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2} \ \sum_{i\in \mathcal{S}}\sum_{k=1}^r \alpha (D_i,k)^{2}.[D_{i}]^{2}$ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2}\sum_{i \neq j} \sum_{k=1}^r \sum_{y \in Irr(D_{i} \cap D_{j})} \alpha (D_i,k).\alpha (D_j,\sigma (y,i,j)(k)).[y].$ \end{proposition} We have $\alpha(D_i, k) \in[-1, 0]$; define $\alpha^{tot}(D_i) := \sum_{k=1}^{r} \alpha(D_i, k)$. With this notation, $\Delta^{Par}(E) = \Delta^{Vb}(E) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{r} ch^{Vb}_{1}(E) \cdot \sum_{i \in \mathcal{S}}\alpha^{tot}(D_i)[D_i]$ $$ $$ $ \hspace{2.2cm} + \ \frac{1}{2r}\sum_{i\neq j} \alpha^{tot}(D_i)\alpha^{tot}(D_j)[D_i\cap D_j]$ $$ $$ $\hspace{2.2cm} + \ \frac{1}{2r}\sum_{i \in \mathcal{S}} \alpha^{tot}(D_i)^2 .[D_{i}]^{2}$ $$ $$ $\hspace{2.2cm} + \ \sum_{i \in \mathcal{S}}\sum_{k=1}^r \alpha (D_i,k).\emph{deg}\left(Gr(D_i, k)\right) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2} \ \sum_{i\in \mathcal{S}}\sum_{k=1}^r \alpha (D_i,k)^{2}.[D_{i}]^{2}$ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2}\sum_{i \neq j} \sum_{k=1}^r \sum_{y \in Irr(D_{i} \cap D_{j})} \alpha (D_i,k).\alpha (D_j,\sigma (y,i,j)(k)).[y].$ $$ $$ To simplify $\Delta^{Par}(E)$ further, define $\beta$ by $$ $$ $\beta(D_i, k) := \alpha(D_i, k) - \frac{\alpha^{tot}(D_i)}{r} \hspace{0.2cm} \Longrightarrow \hspace{0.2cm} \alpha(D_i, k) = \beta(D_i, k) + \frac{\alpha^{tot}(D_i)}{r}$. $$ $$ \begin{remark} We remark that $\sum_{k=1}^{r}\beta(D_i, k) = 0$. Hence $$ \sum_{k=1}^{r}\alpha (D_i, k)^2 = \sum_{k=1}^{r}\beta(D_i, k)^2 + \frac{\alpha ^{tot}(D_i)^2}{r}, $$ and for $i\neq j$ and $y \in Irr(D_{i} \cap D_{j})$, $$ \sum_{k=1}^r \alpha (D_i,k).\alpha (D_j,\sigma (y,i,j)(k)) = \sum_{k=1}^r \beta (D_i,k).\beta (D_j,\sigma (y,i,j)(k)) + \frac{ \alpha ^{tot}(D_i).\alpha ^{tot}(D_j)}{r}. $$ Furthermore note that $$ \sum_{k=1}^r (\xi_{i})_{\star}\left( c_{1}^{D_{i}}(Gr(D_i,k))\right) = (\xi_{i})_{\star}\left( c_{1}^{D_{i}}(E|_{D_i})\right) = c_1^{Vb}(E).[D_i] $$ so $$ \sum_{k=1}^r \alpha (D_i,k).\emph{deg}\left(Gr(D_i, k)\right) = \sum_{k=1}^r \beta (D_i,k).\emph{deg}\left(Gr(D_i, k)\right) + \frac{\alpha ^{tot} (D_i) c_1^{Vb}(E).[D_i]}{r} . $$ \end{remark} Using these remarks and the previous formula we get $\\$ $\Delta^{Par}(E) = \Delta^{Vb}(E) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{r} ch^{Vb}_{1}(E) \cdot \sum_{i \in \mathcal{S}}\alpha^{tot}(D_i)[D_i]$ $$ $$ $ \hspace{2.2cm} + \ \frac{1}{2r}\sum_{i\neq j} \alpha^{tot}(D_i)\alpha^{tot}(D_j)[D_i\cap D_j]$ $$ $$ $\hspace{2.2cm} + \ \frac{1}{2r}\sum_{i \in \mathcal{S}} \alpha^{tot}(D_i)^2 .[D_{i}]^{2}$ $$ $$ $\hspace{2.2cm} + \ \sum_{i \in \mathcal{S}}\sum_{k=1}^r \beta (D_i,k).
\emph{deg}\left(Gr(D_i, k)\right)$ $$ $$ $\hspace{2.2cm} + \ \frac{1}{r} \sum_{i \in \mathcal{S}}\alpha ^{tot} (D_i).c_1^{Vb}(E).[D_i] $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2} \ \sum_{i\in \mathcal{S}}\sum_{k=1}^r \beta (D_i,k)^{2}.[D_{i}]^{2}$ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2r} \ \sum_{i\in \mathcal{S}} \alpha ^{tot}(D_i)^{2}.[D_{i}]^{2}$ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2}\sum_{i \neq j} \sum_{k=1}^r \sum_{y \in Irr(D_{i} \cap D_{j})} \beta (D_i,k).\beta (D_j,\sigma (y,i,j)(k)).[y].$ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2r}\sum_{i \neq j} \sum_{y \in Irr(D_{i} \cap D_{j})} \alpha ^{tot}(D_i).\alpha ^{tot}(D_j).[y].$ $$ $$ The terms containing $\alpha ^{tot}(D_i)$ all cancel out, giving the following formula. \begin{proposition} \mylabel{propepar} $\Delta^{Par}(E) = \Delta^{Vb}(E) $ $$ $$ $\hspace{2.2cm} + \ \sum_{i \in \mathcal{S}}\sum_{k=1}^r \beta (D_i,k).\emph{deg}\left(Gr(D_i, k)\right) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2} \ \sum_{i\in \mathcal{S}}\sum_{k=1}^r \beta (D_i,k)^{2}.[D_{i}]^{2}$ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2}\sum_{i \neq j} \sum_{k=1}^r \sum_{y \in Irr(D_{i} \cap D_{j})} \beta (D_i,k).\beta (D_j,\sigma (y,i,j)(k)).[y].$ \end{proposition} The fact that $\Delta ^{Par}(E)$ is independent of $\alpha ^{tot}$ is the parabolic version of the invariance of $\Delta$ under tensoring with line bundles. Even though this is the theoretical explanation, for the proof it was more convenient to calculate the formula explicitly and notice that the terms containing $\alpha ^{tot}$ cancel out, than to try to compute the tensor product with a parabolic line bundle. \section{Resolution of singular divisors} Now we can consider a more general situation, where $\check{X}$ is a smooth projective surface but $\check{D}=\bigcup _{i=1}^n \check{D}_i$ is a divisor which may have singularities worse than normal crossings. Let $\check{P} = \lbrace \check{P}_{1}, ..., \check{P}_{r} \rbrace$ be a set of points. Assume that the points $\check{P}_{j}$ are crossing points of the $\check{D}_{i}$, and that they are general multiple points, that is, through a crossing point $\check{P}_{j}$ we have divisors $\check{D}_{i_1}, ..., \check{D}_{i_m}$ which are pairwise transverse. Assume that $\check{D}$ has normal crossings outside of the set of points $\check{P}$. We choose an embedded resolution given by the blowing-up $\varphi:X\rightarrow \check{X}$ of the $r$ points $\check{P}_{1}, ..., \check{P}_{r}$, and let $P$ denote the exceptional divisor on $X$; note that $P$ is a sum of disjoint exceptional components $P_{i} = \varphi^{-1}(\check{P}_{i})$ over the points $\check{P}_{i}$ respectively. The pullback divisor may be written as $D=D_1+\cdots +D_a +P_1+\cdots + P_b$ where $D_i$ is the strict transform of a component $\check{D}_i$ of the original divisor, and the $P_j$ are the exceptional divisors. \begin{definition} Let $E$ be a bundle over $X$, and consider the inclusion $i : U \hookrightarrow X$ where $U = X - \bigcup_{i=1}^{k}{P_{i}}$ is a smooth connected quasi-projective surface; recall that $P_{i} \cong \mathbb{P}^{1}$ and that $\varphi : X \longrightarrow \check{X} $ is the blowing-up. Define $\check{E}$ to be the unique bundle over $\check{X}$ such that $$ \check{E}|_{\check{U}} \cong E|_{U} \qquad \text{and} \qquad \check{E} \ \text{ is locally free}, $$ where $\check{U} := \varphi(U)$. \end{definition} This construction allows us to localize the contributions of the Chern classes of $E$ along the exceptional divisors, by comparison with $\varphi ^{\ast} (\check{E})$. \begin{definition} Let $E$ be a bundle over $X$.
Consider the inclusions $\varphi^{\star}\check{E} \hookrightarrow i_{\star}(E|_{U})$ and $E \hookrightarrow i_{\star}(E|_{U})$, where $i : U \hookrightarrow X$ is the inclusion and $i_{\star}(E|_{U})$ is a quasi-coherent sheaf over $X$. Define $E''$ to be the intersection of the subsheaves $\varphi^{\star}\check{E}$ and $E$ of $i_{\star}(E|_{U})$. \end{definition} \begin{lemma} $E''$ is a locally free coherent sheaf. \end{lemma} \begin{definition} Consider the two exact sequences $$ 0 \longrightarrow E'' \longrightarrow \varphi^{\star}\check{E} \longrightarrow Q' \longrightarrow 0 $$ $$ 0 \longrightarrow E'' \longrightarrow E \longrightarrow Q \longrightarrow 0 $$ Let $E/E'' = Q = \bigoplus_{i=1}^{k} Q_{i}$ and $\varphi^{\star}\check{E}/E'' = Q' = \bigoplus_{i=1}^{k} Q'_{i}$, where $Q_{i}$ and $Q'_{i}$ are supported on $P_{i}$. Define the local contribution to be $$ ch^{Vb}(E,P_{i})_{loc} := ch^{Vb}(Q_{i}) - ch^{Vb}(Q'_{i}). $$ \end{definition} \begin{proposition} Let $\varphi : X \longrightarrow \check{X}$ be the blowing-up of the points $\check{P}_{i}$, with exceptional divisors $P_{i}$ for $i = 1, 2, ...,k$. Then $$ ch^{Vb}(E) = ch^{Vb}(\varphi^{\star}(\check{E})) + \sum_{i=1}^{k}ch^{Vb}(E, P_{i})_{loc} $$ \end{proposition} We have $ch^{Vb}_{1}(E) \in A^{1}(X)$. Via the pullback $\varphi^{\star} : A^{1}(\check{X}) \longrightarrow A^{1}(X)$ we may write $A^{1}(X) = \varphi^{\star}A^{1}(\check{X}) \oplus \bigoplus_{i=1}^{k} \mathbb{Z}.[P_{i}].$ We have $P_{i}.\varphi^{\star}(\check{D}) = 0$ if $\check{D} \in A^{1}(\check{X})$ and $P_{i}.P_{j} = 0$ if $i \neq j$. Then $ch^{Vb}_{1}(E) = \varphi^{\star}ch^{Vb}_{1}(\check{E}) + \sum_{i = 1}^{k}a_{i}[P_{i}] = \varphi^{\star}ch^{Vb}_{1}(\check{E}) + \sum_{i = 1}^{k}ch^{Vb}_{1}(E, P_{i})_{loc}.$ When we take the square, the cross-terms are zero: indeed $ch^{Vb}_{1}(E, P_{i})_{loc}$ is a multiple of the divisor class $[P_i]$ but $[P_i].[P_j] = 0 $ for $i\neq j$, and $[P_i].\varphi ^{\ast}[C]=0$ for any divisor $C$ on $\check{X}$. Therefore, $ch^{Vb}_{1}(E)^{2} = \varphi^{\star}ch^{Vb}_{1}(\check{E})^{2} + \sum_{i = 1}^{k}a_{i}^{2}[P_{i}]^{2} = \varphi^{\star}ch^{Vb}_{1}(\check{E})^{2} + \sum_{i = 1}^{k}ch^{Vb}_{1}(E, P_{i})^{2}_{loc}$. \begin{lemma} If $L $ is a line bundle over $X$, then $$ \Delta(E \otimes L) = \Delta(E). $$ \end{lemma} \section{Local Bogomolov-Gieseker inequality} The classical {\em Bogomolov-Gieseker inequality} states that if $X$ is projective and $E$ is a semistable vector bundle then $\Delta (E)\geq 0$. We will see that a local version holds; the first observation is that the invariant $\Delta$ can be localized, even though it involves a quadratic term in $ch_1$. \begin{definition} $$ \Delta^{Vb}(E, P_{i})_{loc} := \frac{1}{2r}ch_{1}^{Vb}(E, P_{i})^{2}_{loc}- ch_{2}^{Vb}(E, P_{i})_{loc} $$ \end{definition} \begin{lemma} If $L = \varphi^{\star} \check{L}(\sum_{j} b_{j}P_{j})$ is a line bundle over $X$, then $$ \Delta^{Vb}(E \otimes L; P_{i})_{loc} = \Delta^{Vb}(E, P_{i})_{loc} $$ \end{lemma} \begin{proposition} $$ \Delta^{Vb}(E) = \varphi^{\star}\Delta^{Vb}(\check{E}) + \sum_{i=1}^{k}\Delta^{Vb}(E, P_{i})_{loc} $$ \end{proposition} In order to get a bound, the technique is to apply the Grothendieck decomposition to analyse more closely the structure of $E$ near the exceptional divisors $P_i$, following Ballico \cite{Ballico} and Ballico-Gasparim \cite{BallicoGasparim1} \cite{BallicoGasparim2} and others.
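Before turning to this, we record the short computation behind the global invariance lemma above. Writing $r = rank (E)$, we have $ch_{1}(E\otimes L) = ch_{1}(E) + r.c_{1}(L)$ and $ch_{2}(E\otimes L) = ch_{2}(E) + c_{1}(L).ch_{1}(E) + \frac{r}{2}c_{1}(L)^{2}$, so $$ \Delta(E \otimes L) = \frac{1}{2r}\left(ch_{1}(E) + r.c_{1}(L)\right)^{2} - ch_{2}(E) - c_{1}(L).ch_{1}(E) - \frac{r}{2}c_{1}(L)^{2} = \frac{1}{2r}ch_{1}(E)^{2} - ch_{2}(E) = \Delta(E), $$ since the terms involving $c_{1}(L)$ cancel out. A similar cancellation underlies the local statement for $L = \varphi^{\star} \check{L}(\sum_{j} b_{j}P_{j})$.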
\begin{theorem} Every vector bundle $E$ on $\mathbf{P}^{1}$ is of the form $\mathcal{O}(m_{{1}})^{r_1} \oplus \cdots \oplus \mathcal{O}(m_{{a}})^{r_a} = \bigoplus_{j =1}^{a} \mathcal{O}(m_{{j}})^{r_j}$ with $m_{{1}} < \ldots <m_{{a}}$, where $m_{{j}} \in \mathbb{Z}$, and the $r_j$ are positive integers with $r_1+\ldots + r_a=r$. This is called the Grothendieck decomposition and it is unique. \end{theorem} Apply this decomposition to the restriction of the bundle $E$ to each exceptional divisor $P_i\cong \mathbf{P}^{1}$. Thus $$ E|_{P_{i}} = \mathcal{O}(m_{i,{1}})^{r_{i,1}}\oplus ... \oplus \mathcal{O}(m_{i,{a_i}})^{r_{i,a_i}} = \bigoplus_{j =1}^{a_i} \mathcal{O}(m_{i,{j}})^{r_{i,j}} $$ with $m_{i,{1}}< \ldots < m_{i,{a_i}}$. \begin{proposition} Let $E$ be a bundle over $X$. We have $$ m_{i,{j}} = 0 \ \text{ for all } i,j \Longleftrightarrow E \cong \varphi^{\star}\check{E} , $$ and if $E' = E(\sum_{i} k_i.P_{i})$ then $m'_{i,{j}} = m_{i,{j}} -k_i$; therefore $$ m_{i,{j}} = k_i \ \text{ for all } j \Longleftrightarrow E \cong (\varphi^{\star}\check{E})(-\sum_{i} k_i.P_{i}). $$ In this case we say that $E$ is {\em pure}; it is equivalent to saying that $a_i=1$ for each $i$. \end{proposition} See Ballico-Gasparim \cite{BallicoGasparim1}. \begin{definition} Let $E$ be a non-trivial bundle, and let $E|_{P} = \mathcal{O}(m_1)^{r_1} \oplus ... \oplus \mathcal{O}(m_a)^{r_a}$ be the restriction of the bundle $E$, with $m_1 < m_2 < ... < m_a$. We define $$ min(E|_{P}) := m_1 , \hspace{0.2cm} max(E|_{P}) := m_a, \hspace{0.2cm} \text{and} \hspace{0.2cm} \mu(E) := max(E|_{P}) - min(E|_{P}). $$ \end{definition} \begin{remark} If $\mu(E) = m_a - m_1 = 0$, then $E|_{P} = \mathcal{O}_{\mathbb{P}^1}(m_1)^r$ and, by the previous proposition, $E \cong (\varphi^{\star}\check{E})(-m_1.P)$. \end{remark} \begin{lemma} \mylabel{minmax} If we have an exact sequence of bundles over $\mathbb{P}^1$ $$ 0 \longrightarrow U \longrightarrow V \longrightarrow W \longrightarrow 0 $$ then $$ min(V) \geq min(min(U), min(W)), $$ $$ max(V) \leq max(max(U), max(W)). $$ \begin{proof} Define $$ max(U) = max\{n \ \text{ s.t. there exists a non-trivial } \ \mathcal{O}_{\mathbb{P}^1}(n) \rightarrow U \} = max \{ n \ \text{ s.t. } \ H^0(U(-n)) \neq 0\} $$ $$ min(U) = min\{n \ \text{ s.t. there exists a non-trivial } \ U \rightarrow \mathcal{O}_{\mathbb{P}^1}(n) \} = min \{ n \ \text{ s.t. } \ H^0(U^\ast(n)) \neq 0\}. $$ A non-trivial map $\mathcal{O}_{\mathbb{P}^1}(n) \rightarrow V$ either factors through $U$ or composes to a non-trivial map $\mathcal{O}_{\mathbb{P}^1}(n) \rightarrow W$, and a non-trivial map $V \rightarrow \mathcal{O}_{\mathbb{P}^1}(n)$ either restricts non-trivially to $U$ or factors through $W$; hence $$ max(V) \leq max(max(U), max(W)) $$ $$ min(V) \geq min(min(U), min(W)) $$ \end{proof} \end{lemma} Now we concentrate on one of the exceptional divisors $P_i$ and suppress the index $i$ from the notation. Suppose that $E|_P$ is not pure, and consider the exact sequence $$ \begin{array}{ccl} 0 & & \\ \uparrow & & \\ Q & := & \mathcal{O}(m_1)^{r_1} \\ \uparrow & & \\ E|_{P_{i}} & := & \mathcal{O}(m_1)^{r_1} \oplus \mathcal{O}(m_2)^{r_2} \oplus \cdots \oplus \mathcal{O}(m_a)^{r_a}\\ \uparrow & & \\ K &:= & \mathcal{O}(m_{2})^{r_2} \oplus \cdots \oplus \mathcal{O}(m_{a})^{r_a}\\ \uparrow & & \\ 0& & \end{array} $$ \begin{definition} Suppose $X$ and $D$ are smooth with $D \stackrel{i}{\hookrightarrow} X$. Let $E$ be a locally free bundle over $X$. Suppose we have an exact sequence $$ 0 \longrightarrow K \longrightarrow E|_D \longrightarrow Q \longrightarrow 0 $$ where $Q = \mathcal{O}(m_{1})^{r_{1}}$ is a constant quotient, that is, a direct sum of copies of a single line bundle. Define $E'$, the elementary transformation of $E$, by $$ E':= Ker(E \rightarrow i_{\ast} Q).
$$ Then the sequence $$ 0 \longrightarrow E' \longrightarrow E \longrightarrow i_{\ast}Q \longrightarrow 0 $$ is exact. \end{definition} \begin{lemma} \mylabel{elemexact} We have an exact sequence $$ 0 \longrightarrow Q(-D) \longrightarrow E'\mid_D \longrightarrow K \longrightarrow 0. $$ Moreover, for any open set $U \subseteq X$, $$ E'(U) = \{ s \hspace{0.05cm} \in \hspace{0.05cm} E(U) \ \text{ s.t. } \ s|_{D \cap U} \in K(D \cap U)\}. $$ \end{lemma} \begin{lemma} $\mu(E') \leq \mu(E) -1$ (if $\mu(E) \geq 1$). \begin{proof} We have $$ \mathcal{O}_P(-P) = \mathcal{O}_P(1); $$ applying Lemma \ref{elemexact} we get $$ 0 \longrightarrow \mathcal{O}(m_1 + 1)^{r_1} \longrightarrow E'|_P \longrightarrow \bigoplus_{i = 2}^{a} \mathcal{O}(m_i)^{r_i} \longrightarrow 0. $$ Applying Lemma \ref{minmax} gives $min(E'\mid_P) \geq min(m_1 + 1, m_2) = m_1 + 1$ and $max(E'\mid_P) \leq max(m_1 + 1, m_a) = m_a$. Therefore $$ \mu(E') \leq m_a - m_1 - 1 = \mu(E) - 1. $$ \end{proof} \end{lemma} \begin{lemma} $$ ch\left(\mathcal{O}_P(m_1)\right) =(P + (m_{1} + \frac{1}{2})). $$ \end{lemma} \begin{proof} We have $$ \mathcal{O}_P(m_{1}) = \mathcal{O}_X(-m_{1}P)|_P; $$ consider the exact sequence $$ 0\longrightarrow \mathcal{O}_{X}(-(m_{1} + 1)P) \longrightarrow \mathcal{O}_{X}(-m_{1}P) \longrightarrow \mathcal{O}_{P}(-m_{1}P) \longrightarrow 0 $$ then $ch(\mathcal{O}_{P}(-m_{1} P)) = e^{-m_{1}P} - e^{-(m_{1} + 1)P} = (1 - m_{1}P + \frac{m_{1}^{2}}{2}P^{2}) - (1 - (m_{1}+1)P + \frac{(m_{1}+1)^{2}}{2}P^{2})$ $ = (P - (m_{1} + \frac{1}{2})P^2)$, but $P^{2} = -1$, then $ch(\mathcal{O}_{P}(m_{1})) = (P + (m_{1} + \frac{1}{2}))$. \end{proof} \begin{proposition} We have $$ ch^{Vb}_{1}(E') = ch^{Vb}_{1}(E) - r_{1}P \hspace{0.3cm} \text{and} \hspace{0.3cm} ch^{Vb}_{2}(E') = ch^{Vb}_{2}(E) - (m_{1} + \frac{1}{2})r_{1}, $$ then $$ \Delta^{Vb}(E', P) = \frac{1}{2r}(ch^{Vb}_{1}(E) - r_{1}P)^{2} - ch^{Vb}_{2}(E) + (m_{1} + \frac{1}{2})r_{1}, $$ therefore $$ \Delta^{Vb}(E', P)= \Delta^{Vb}(E, P) - \frac{r_{1}}{r}ch^{Vb}_{1}(E).P - \frac{r_{1}^{2}}{2r} + (m_{1} + \frac{1}{2})r_{1}. $$ \end{proposition} We can now calculate using the previous lemma. $$ $$ $E|_{P} = \mathcal{O}(m_{1})^{r_{1}} \oplus \mathcal{O}(m_{2})^{r_{2}} \oplus ... \oplus \mathcal{O}(m_{k})^{r_{k}}$, where $ \sum_{i=1}^{k} r_{i} = r$; we have $ch_{1}(E).P = \xi_{P,\star} \left(ch_{1}(E\mid_{P}) \right) = ch_{1} \left(\bigoplus_{i = 1}^{k}\mathcal{O}(m_{i})^{r_{i}} \right) = \sum_{i=1}^{k} m_{i}r_{i}$, then $\Delta^{Vb}(E', P) = \Delta^{Vb}(E, P) - \frac{r_{1}}{r}\sum_{i=1}^{k} m_{i}r_{i} - \frac{r_{1}^{2}}{2r} + m_{1}r_{1} + \frac{r_{1}}{2} = \Delta^{Vb}(E, P) - \frac{1}{r} \mathcal{A}$, where $\mathcal{A} = \sum_{i=2}^{k} m_{i} r_{i} r_{1} + m_{1}r_{1}^{2} + \frac{1}{2} r_{1}^{2} - (m_{1} + \frac{1}{2}) r_{1}r \hspace{0.4cm} \text{for} \hspace{0.4cm} r = r_{1} + .... + r_{k}$ $ = \sum_{i=2}^{k}m_{i} r_{i} r_{1} + m_{1}r_{1}^{2} + \frac{1}{2} r_{1}^{2} - (m_{1} + \frac{1}{2}) r_{1}^{2} - \sum_{i=2}^{k}(m_{1} + \frac{1}{2})r_{1}r_{i}$, then $\mathcal{A} = \sum_{i=2}^{k}(m_{i} - m_{1} -\frac{1}{2}) r_{1} r_{i}$ $\hspace{0.4cm}$ where $m_{i} \geq m_{1} + 1$. $$ $$ Note that, with our hypothesis that $E\mid_{P}$ is not pure, we have $m_i\geq m_1+1$ so $\mathcal{A} > 0$. \begin{proposition} \mylabel{propdeltalocprime} If $E|_P$ is not pure, then let $E'$ be the elementary transformation considered above. The local invariant satisfies $$ \Delta^{Vb}_{loc}(E', P) = \Delta^{Vb}_{loc}(E,P) - \frac{1}{r} \sum_{i=2}^{k}(m_{i} - m_{1} -\frac{1}{2}) r_{1} r_{i} \hspace{0.3cm} \text{where} \hspace{0.3cm} m_{i} \geq m_{1} + 1.
$$ In particular, $\hspace{0.3cm}$ $\Delta^{Vb}_{loc}(E', P) < \Delta^{Vb}_{loc}(E,P)$. \end{proposition} If $E'$ is pure then $\Delta ^{Vb}_{loc}(E',P)=0$; if not, we can continue by applying the elementary transformation process to $E'$ and so on, until the result is pure. The resulting theorem can be viewed as a local analogue of the Bogomolov-Gieseker inequality. \begin{theorem} If $E$ is a vector bundle on $X$ and $P\cong \mathbf{P}^1\subset X$ is the exceptional divisor of blowing up a smooth point $\check{P}\in \check{X}$, then $\Delta ^{Vb}_{loc}(E,P)\geq 0$, and $\Delta ^{Vb}_{loc}(E,P)=0$ if and only if $E$ is pure, that is, up to a twist by a multiple of $P$, $E\cong \varphi ^{\ast}(\check{E})$ is the pullback of a bundle from $\check{X}$. \end{theorem} The invariant $\Delta ^{Vb}_{loc}(E,P)$ also provides a bound for $m_i-m_1$. \begin{corollary} If $E\mid_{P} = \bigoplus_{i=1}^{k}\mathcal{O}(m_{i})^{r_{i}}$, where $m_{1} < m_{2} < ... <m_{k}$, then, since $\Delta^{Vb}_{loc}(E', P) \geq 0$, $$ \Delta^{Vb}_{loc}(E, P) \geq \frac{1}{r} \sum_{i=2}^{k}(m_{i} - m_{1} - \frac{1}{2})r_1 r_{i}. $$ \end{corollary} \section{Modification of filtrations due to elementary transformations} Given two bundles $E$ and $F$ such that $E|_U\cong F|_U$, the bundle $F$ may be obtained from $E$ by a sequence of elementary transformations. We therefore analyse what happens to the filtrations along the divisor components $D_i$ different from the exceptional divisors $P_u$, in the case of an elementary transformation. Suppose $E'$ is obtained from $E$ by an elementary transformation. We have bundles $Gr (D_i, k; E)$ and $Gr (D_i, k; E')$ over $D_i$. In order to follow the modification of the formula for $\Delta$ we need to consider this change. For the bundle $E$ we have a filtration by full flags $F^i_k\subset E|_{D_i}$. Suppose $E|_{D_0}\rightarrow Q$ is a quotient (locally free on $D_0$) and let $E'$ be the elementary transformation fitting into the exact sequence $$ 0\rightarrow E' \rightarrow E \rightarrow Q \rightarrow 0. $$ \begin{lemma} Suppose $i\neq 0$ so $D_i\cap D_0$ is transverse. Tensoring this exact sequence with $\mathcal{O} _{D_i}$ yields an exact sequence $$ 0\rightarrow E'|_{D_i} \rightarrow E|_{D_i} \rightarrow Q |_{D_i}\rightarrow 0. $$ \end{lemma} \begin{proof} In fact we get a long exact sequence $$ Tor^1_{\mathcal{O} _X}(\mathcal{O} _{D_i},Q)\rightarrow E'\otimes \mathcal{O} _{D_i} \rightarrow E \otimes \mathcal{O} _{D_i}\rightarrow Q \otimes \mathcal{O} _{D_i}\rightarrow 0, $$ but the facts that $Q$ is locally free on $D_0$ and $D_i$ is transverse to $D_0$ imply that $Tor^1_{\mathcal{O} _X}(\mathcal{O} _{D_i},Q)=0$. \end{proof} This lemma says that $E'|_{D_i}$ is an elementary transformation of $E|_{D_i}$. Notice that since $D_i$ is a curve, $Q|_{D_i}$ is a skyscraper sheaf. Define $F'^i_k:= F^i_k\cap (E'|_{D_i})$. It is a subsheaf of $(E'|_{D_i})$. Furthermore it is saturated, that is to say the quotient is torsion-free. To show this, suppose $s$ is a section of $(E'|_{D_i})$ which is contained in $F'^i_k$ over an open set. Then it may be seen as a section of $E|_{D_i}$ which is contained in $F^i_k$ over an open set, but $F^i_k$ is saturated so the section is contained in $F^i_k$. Hence by definition the section is contained in $F'^i_k$. Thus, we have defined a filtration $F'^i_k$ by sub-vector bundles. The same argument says that $F'^i_k$ is saturated in $F'^i_{k+1}$, so the quotients $Gr(D_i,k;E')= F'^i_{k+1}/F'^i_k$ are locally free; since they are line bundles over the open set, they are line bundles on $D_i$.
Consider the morphism, induced by $(E'|_{D_i})\rightarrow (E|_{D_i})$: $$ F'^i_{k+1}/F'^i_k \rightarrow F^i_{k+1}/F^i_k. $$ By the definition of $F'^i_k$ it is seen that this morphism is an injection of sheaves. Consider the cokernel. If $s$ is a section of $F^i_{k+1}/F^i_k$ and if $z$ is a local coordinate on $D_i$ such that $z=0$ defines the intersection point $D_0\cap D_i$, then we claim that $zs$ must be in the image of $F'^i_{k+1}/F'^i_k$. Lift $s$ to a section also denoted $s$ of $F^i_{k+1}$. Then, thought of as a section of $E|_{D_i}$, notice that $zs$ projects to $0$ in $Q|_{D_i}$. This may be seen by further extending to a section of $E$ and extending $z$ to a coordinate function defining $D_0$; noting that $Q$ is supported scheme-theoretically on $D_0$, so $zs$ projects to $0$ in $Q$. From the exact sequence, we conclude that $zs$ is in the image of $F'^i_{k+1}/F'^i_k$. Hence, there are two cases: \newline (1) the map $F'^i_{k+1}/F'^i_k \rightarrow F^i_{k+1}/F^i_k$ is an isomorphism; or \newline (2) we have $F'^i_{k+1}/F'^i_k = F^i_{k+1}/F^i_k \otimes _{\mathcal{O} _{D_i}} \mathcal{O} _{D_i}(-D_0\cap D_i)$. In the first case (1), $$ c_1^{D_i}(Gr (D_i,k; E'))= c_1^{D_i}(Gr (D_i,k; E)). $$ In the second case (2), $$ c_1^{D_i}(Gr (D_i,k; E'))= c_1^{D_i}(Gr (D_i,k; E)) - [D_0\cap D_i]. $$ Applying $(\xi _i)_{\ast}$ gives the following proposition. \begin{proposition} Suppose $E'$ is an elementary transformation of $E$ along the exceptional divisor $P_i$. Then there exists a unique invariant $\emph{deg}_{loc}$ satisfying the following properties: $$ \emph{deg}_{loc}\left(D_j, k; \varphi^{\star}\check{E}\right) = 0, $$ $$ \emph{deg}_{loc}\left(D_j, k; (\varphi^{\star}\check{E})(m.P_i)\right) = m, $$ and for divisor components $D_j$ intersecting $D_0 = P_i$ transversally, the change in Chern class of the associated-graded pieces is $$ \emph{deg}_{loc}\left(Gr(D_j, k; E'), P_i \right) = \emph{deg}_{loc}\left(Gr(D_j, k; E), P_i\right) - \tau (E,E'; k) $$ where $\tau (E,E'; k) = 0$ or $1$ in cases (1) or (2) respectively. \end{proposition} \begin{definition} \label{deglocdef} Suppose we have bundles $Gr (D_j, k, E)$ and $Gr (D_j, k, \check{E})$ over $D_j$ and $\check{D}_j$ respectively. Let $F$ be the intersection of the subsheaves $Gr (D_j, k, E)$ and $Gr (D_j, k, \check{E})$, and let $S$ and $\check{S}$ be the two sheaves of finite length, supported at the points $P_i$, defined by the exact sequences $$ 0 \longrightarrow F \longrightarrow Gr(D_j, k, E) \longrightarrow S \longrightarrow 0 $$ $$ 0 \longrightarrow F \longrightarrow Gr(D_j, k, \check{E}) \longrightarrow \check{S} \longrightarrow 0. $$ Let $lg$ denote the length, and let $lg(S,P_i)$ be the length of the part supported set-theoretically at $P_i$. Thus $$ lg(S) = \sum _{i}lg (S, P_i) $$ and similarly for $\check{S}$. Define $$ \emph{deg}_{loc} \left(Gr(D_j, k;E), P_i\right) := lg(S,P_i) - lg(\check{S},P_i). $$ \end{definition} Then $$ \emph{deg} \left(Gr(D_j, k, E)\right) = \emph{deg}(F) + lg(S) $$ $$ \emph{deg} \left(Gr(D_j, k, \check{E})\right) = \emph{deg}(F) + lg(\check{S}), $$ therefore $$ lg(S) - lg(\check{S}) = \sum_{P_i}\left[lg(S, P_i) - lg(\check{S}, P_i)\right] = \sum_{P_i} \emph{deg}_{loc} \left(Gr(D_j, k;E), P_i\right). $$ Thus, if $D_j$ is a non-exceptional divisor component, then $$ \emph{deg}\left(Gr(D_j, k; E)\right) = \emph{deg}\left(Gr(\check{D}_j, k; \check{E}) \right) + \sum_{P_i} \emph{deg}_{loc}\left(Gr(D_j, k; E), P_i\right). $$ This completes the proof of the proposition. \section{The local parabolic invariant} Let $E$ be a bundle, with $\beta(D_0, k) = 0$ for all $k$.
Then we would like to define the terms in the following equation: $$ \Delta^{Par}(E) = \Delta^{Par}(\check{E}) +\sum_{P_{i}}\Delta^{Par}_{loc}(E, P_i). $$ Assume that $\check{D}$ is a union of smooth divisors meeting in some multiple points. The divisor $D$ is obtained by blowing up the points $\check{P}_u$ of multiplicity $\geq 3$. Let $$ \varphi : X\rightarrow \check{X} $$ be the birational transformation. We use the previous formula to break down $\Delta ^{Par}(E)$ into a global contribution which depends only on $\check{E}$, plus a sum of local contributions depending on the choice of extension of the parabolic structure across $P_u$. Let $\check{\mathcal{S}}$ denote the set of divisor components in $\check{D}$ (before blowing-up) and define the global term $\Delta ^{Par}(\check{E})$ by the formula $$ $$ $\Delta^{Par}(\check{E}) := \Delta^{Vb}(\check{E}) $ $$ $$ $\hspace{2.2cm} + \ \sum_{i \in \check{\mathcal{S}}}\sum_{k=1}^r \beta (\check{D}_i,k).\emph{deg}\left(Gr(\check{D}_i, k)\right) $ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2} \ \sum_{i\in \mathcal{S}}\sum_{k=1}^r \beta (\check{D}_i,k)^{2}.[\check{D}_{i}]^{2}$ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2}\sum_{i \neq j} \sum_{k=1}^r \sum_{y \in Irr(\check{D}_{i} \cap \check{D}_{j})} \beta (\check{D}_i,k).\beta (\check{D}_j,\sigma (y,i,j)(k)).[y].$ $$ $$ This formula imitates the formula for $\Delta ^{Par}$ by considering only pairwise intersections of divisor components even though several different pairwise intersections could occur at the same point. Recall that $[D_i]^2 = [\check{D}_i]^2 - m$ where $m$ is the number of points on $\check{D}_i$ which are blown up to pass to $D_i$. To define the local terms, suppose at least one of the divisors, say $D_0=P$, is the exceptional locus for a birational transformation blowing up the point $\check{P}$. We define a local contribution $\Delta ^{Par}_{loc}(E,P)$ to $\Delta ^{Par}$ by isolating the local contributions in the previous formula. Notice first of all that for any $D_i$ meeting $P$ transversally, we have defined above $\emph{deg}_{loc}\left(Gr(E;D_i, k), P\right)$, the local contribution at $P$, in such a way that $$ \emph{deg}\left(Gr(E;D_i, k)\right) = \emph{deg}\left(Gr(\check{E}; D_i, k)\right) + \sum _{P_u} \emph{deg}_{loc}\left(Gr(E;D_i, k), P_u\right) $$ where the sum is over the exceptional divisors $P_u$ meeting $D_i$, which correspond to the points $\check{P}_u\in \check{D}_i$ which are blown up. Let $\mathcal{S}(P)$ denote the set of divisor components which meet $P$ but not including $P=D_0$ itself. Define $$ $$ $\Delta^{Par}_{loc}(E,P) := \Delta^{Vb}_{loc}(E,P) $ $$ $$ $\hspace{2.2cm} + \ \sum_{k=1}^r \beta (P,k). \emph{deg}\left(Gr(E;P, k)\right) $ $$ $$ $\hspace{2.2cm} + \ \sum_{i \in \mathcal{S}(P)}\sum_{k=1}^r \beta (D_i,k). \emph{deg}_{loc}\left(Gr(E;D_i, k), P\right) $ $$ $$ $\hspace{2.2cm} + \ \frac{1}{2} \ \sum_{k=1}^r \beta (P,k)^{2}$ $$ $$ $\hspace{2.2cm} + \ \frac{1}{2} \ \sum_{i\in \mathcal{S}(P)}\sum_{k=1}^r \beta (D_i,k)^{2}$ $$ $$ $\hspace{2.2cm} - \ \sum_{i \in \mathcal{S}(P)} \sum_{k=1}^r \beta (D_i,k).\beta (P,\sigma (i,P)(k)).[y]$ $$ $$ $\hspace{2.2cm} +\ \frac{1}{2}\sum_{i\neq j, \check{P}\in \check{D}_i\cap \check{D}_j} \sum_{k=1}^r \beta (\check{D}_i,k).\beta (\check{D}_j,\sigma (\check{P},i,j)(k)).[\check{P}]$ $$ $$ In the next to last term, $\sigma (i,P):=\sigma (y,i,v)$ where $P=D_v$ and $y$ is the unique intersection point of $P=D_v$ and $D_i$. 
The factor of $1/2$ disappears because we are implicitly choosing an ordering of the two indices $i$ and $j=0$ which occur here. The last term is put in to cancel with the corresponding term in the global expression for $\check{E}$ above, and $[\check{P}]$ designates any lifting of the point $\check{P}$ to a point on $P$. \begin{theorem} With the above definitions, we have $$ \Delta ^{Par}(E) = \Delta ^{Par}(\check{E}) + \sum _{P_u} \Delta ^{Par}_{loc}(E, P_u), $$ where the sum is over the exceptional divisors. \end{theorem} \begin{proof} This follows by comparing the above definitions with the formula of Proposition \ref{propepar}. \end{proof} Let $\varphi ^{\ast}\check{E}$ denote the parabolic bundle on $X$ given by using the trivial extension $\varphi^{\ast}\check{E}$ as underlying vector bundle, and setting $\beta (P_u,k):= 0$ for all exceptional divisor components $P_u$. Note that $\Delta^{Vb}_{loc}(\varphi^{\ast}\check{E},P)=0$. Then \\ $\Delta^{Par}_{loc}(\varphi ^{\ast}\check{E},P) = \ \frac{1}{2} \ \sum_{i\in \mathcal{S}(P)}\sum_{k=1}^r \beta (\check{D}_i,k)^{2}$ $$ $$ $\hspace{2.2cm} +\ \frac{1}{2}\sum_{i\neq j, \check{P}\in \check{D}_i\cap \check{D}_j} \sum_{k=1}^r \beta (\check{D}_i,k).\beta (\check{D}_j,\sigma (\check{P},i,j)(k)).[\check{P}]$, \\ and $$ $$ $\Delta ^{Par}_{loc}(E,P)- \Delta ^{Par}_{loc}(\varphi ^{\ast}\check{E},P) = \Delta^{Vb}_{loc}(E,P)$ $$ $$ $\hspace{2.2cm} + \ \sum_{k=1}^r \beta (P,k). \emph{deg}\left(Gr(E;P, k)\right) $ $$ $$ $\hspace{2.2cm} + \ \sum_{i \in \mathcal{S}(P)}\sum_{k=1}^r \beta (D_i,k). \emph{deg}_{loc}\left(Gr(E;D_i, k), P\right) $ $$ $$ $\hspace{2.2cm} + \ \frac{1}{2} \ \sum_{k=1}^r \beta (P,k)^{2}$ $$ $$ $\hspace{2.2cm} - \ \sum_{i \in \mathcal{S}(P)} \sum_{k=1}^r \beta (D_i,k).\beta (P,\sigma (i,P)(k)).[y]$. A different local-global decomposition may be obtained by noting that $$ \Delta ^{Par}(E)=\Delta^{Par}(\varphi ^{\ast}\check{E}) + \sum _u \left( \Delta ^{Par}_{loc}(E,P_u)- \Delta ^{Par}_{loc}(\varphi ^{\ast}\check{E},P_u) \right) $$ with the local terms $(\Delta ^{Par}_{loc}(E,P_u)- \Delta ^{Par}_{loc}(\varphi ^{\ast}\check{E},P_u))$ being given by the previous formula. \section{Normalization via standard elementary transformations} There is another modification of parabolic structures due to elementary transformations. This may also be viewed as a shift of the parabolic structures in the viewpoint of a collection of sheaves. If $E_{\cdot}=\{ E_{\alpha _1,\ldots , \alpha _n}\}$ is a parabolic sheaf, then we can shift the filtration at the $i$-th place, defining $$ (C^i_{\theta}E)_{\alpha _1,\ldots , \alpha _n}:= E_{\alpha _1,\ldots , \alpha _i-\theta , \ldots , \alpha _n}. $$ This may also be viewed as tensoring with a parabolic line bundle $$ C^i_{\theta}E = E\otimes \mathcal{O} (\theta D_i). $$ The weights of the parabolic structure $C^i_{\theta}E$ along $D_i$ are of the form $\alpha _i+\theta$ for $\alpha _i$ weights of $E$. From the point of view of a vector bundle with filtration, it may correspond to doing an elementary transformation. Suppose $0 <\theta < 1$. Then $$ (C^i_{\theta}E)_0 = E_{0,\ldots , 0,-\theta , 0,\ldots ,0} $$ and we have an exact sequence $$ 0\rightarrow (C^i_{\theta}E)_0 \rightarrow E_0 \rightarrow (E_0 / F^i_{-\theta}E_0)\rightarrow 0. $$ Therefore $(C^i_{\theta}E)_0$ is obtained by elementary transformation of $E_0$ along one of the elements of the parabolic filtration on the divisor $D_i$. This is especially useful when the rank is $2$. Suppose $rk(E)=2$; then there is a single choice for the elementary transformation.
If the weights of $E$ at $D_i$ are $\alpha^{tot} _i-\beta _i$ and $\alpha^{tot}_i + \beta _i$ (in this rank two discussion we write $\alpha^{tot}_i$ for the average of the two weights), then the weights of the elementary transformation will be $\theta +\alpha^{tot} _i+\beta _i-1$ and $\theta +\alpha^{tot} _i-\beta _i$. The shift $\theta $ should be chosen so that these lie in $(-1,0]$. The new weights may be written as $$ \left(\tilde{\alpha}_i^{tot} -\tilde{\beta}_i,\quad \tilde{\alpha}_i^{tot} + \tilde{\beta}_i\right) $$ with $$ \tilde{\alpha}_i^{tot}:= (\theta +\alpha^{tot} _i-\frac{1}{2}) $$ which is the new average value, and $$ \tilde{\beta}_i:=\frac{1}{2} -\beta _i. $$ \begin{corollary} \label{quarter} In the case $rk(E)=2$, by replacing $E$ with its shift $C^i_{\theta}E$ if necessary, we may assume that $$ 0\leq \beta _i \leq \frac{1}{4}. $$ \end{corollary} \begin{proof} If $\beta _i> \frac{1}{4}$ then do the shift which corresponds to an elementary transformation; for the new parabolic structure $\tilde{\beta}_i=\frac{1}{2}-\beta _i$ and $0\leq \tilde{\beta}_i \leq \frac{1}{4}$. \end{proof} \section{The rank two case} In order to simplify the further constructions and computations, we now restrict to the case when $E$ has rank 2. The parabolic structures along $D_i$ are rank one subbundles $F^i\subset E|_{D_i}$. The associated graded pieces are $Gr (D_i,1)=F^i$ and $Gr(D_i, 2)=E|_{D_i}/F^i$. The normalized weights may be written as $$ \beta (D_i,1)=-\beta _i, \quad \beta (D_i, 2) =\beta _i $$ with $0\leq \beta _i<\frac{1}{2}$, and by Corollary \ref{quarter} we may furthermore suppose $0\leq \beta _i\leq\frac{1}{4}$. Define $\deg ^{\delta}(E_{D_i},F^i) := \left( \deg (E_{D_i}/F^i)-\deg (F^i)\right)$. This has a local version as discussed in Definition \ref{deglocdef}, $$ \deg^{\delta}_{loc}(E_{D_i},F^i,P) := \left( \deg _{loc}(E_{D_i}/F^i,P)-\deg _{loc}(F^i,P)\right) $$ whenever $D_i$ meets $P$ transversally. The main formula may now be rewritten: \begin{proposition} $\Delta^{Par}(E) = \Delta^{Vb}(E) $ $$ $$ $\hspace{2.2cm} + \ \sum_{i \in \mathcal{S}} \beta _i\deg^{\delta}(E_{D_i},F^i) $ $$ $$ $\hspace{2.2cm} - \ \ \sum_{i\in \mathcal{S}} \beta _i^{2}.[D_{i}]^{2}$ $$ $$ $\hspace{2.2cm} - \ \sum_{i \neq j} \sum_{y \in Irr(D_{i} \cap D_{j})} \tau (y,i,j)\beta _i\beta _j.[y]$ \\ where $\tau (y,i,j)=1$ if $F^i(y)=F^j(y)$ and $\tau (y,i,j)=-1$ if $F^i(y)\neq F^j(y)$ as subspaces of $E(y)$. Similarly for the local parabolic invariants, denoting $P=D_0$ we have $$ $$ $\Delta ^{Par}_{loc}(E,P)- \Delta ^{Par}_{loc}(\varphi ^{\ast}\check{E},P) = \Delta^{Vb}_{loc}(E,P)$ $$ $$ $\hspace{2.2cm} + \ \beta _0. \deg ^{\delta}(E_{D_0},F^0) $ $$ $$ $\hspace{2.2cm} + \ \sum_{i \in \mathcal{S}(P)} \beta _i. \deg ^{\delta}_{loc}(E_{D_i},F^i,P) $ $$ $$ $\hspace{2.2cm} + \ \beta _{0}^{2}$ $$ $$ $\hspace{2.2cm} - \ 2 \sum_{i \in \mathcal{S}(P)} \tau (i,P) \beta _i\beta _0 .[y]$. \end{proposition} \begin{example} Let $E$ be a non-pure rank two bundle with $E\mid_{P} = \mathcal{O}(m_{1}) \oplus \mathcal{O}(m_{2})$, $m_{1} < m_{2}$; then $\hspace{0.4cm}$ $\Delta^{Vb}_{loc}(E,P) \geq \frac{1}{2}(m_{2} - m_{1} - \frac{1}{2})$. \end{example} \begin{example} Let $E$ be a rank 2 bundle with $E|_{P} = \mathcal{O} \oplus \mathcal{O}(1)$. The reduction by elementary transformation is $E|_{P} = \mathcal{O} \oplus \mathcal{O}(1) \rightsquigarrow E'$. We get an exact sequence $$ 0 \longrightarrow \mathcal{O}(1) \longrightarrow E'|_{P} \longrightarrow \mathcal{O}(1) \longrightarrow 0 $$ then $E'$ is pure, $E' = (\varphi^{\star}\check{E})(-P)$.
\end{example} Suppose we start with the bundle $E$; by performing the sequence of elementary transformations we arrive at a pure bundle $(\varphi^{\ast}\check{E})(m.P_i)$. Numbering the sequence in the opposite direction, we get a sequence of bundles of the form: $$ (\varphi^{\ast}\check{E})(m.P_i) = E(0),\hspace{0.1cm} E(1),\hspace{0.1cm} E(2),\hspace{0.1cm} ...,\hspace{0.1cm} E(g) = E $$ where $g$ is the number of steps, and $E(j - 1) = (E(j))'$ for $j = 1, ...,g$. We recall that if $E|_P \cong \mathcal{O}(m_1) \oplus \mathcal{O}(m_2)$ with $m_1 \leq m_2$ then $\mu(E) = m_2 - m_1$. Also $\mu(E) = 0 \Longrightarrow E \cong (\varphi^{\ast}\check{E})(m_iP_i)$. Furthermore if $m_1 < m_2$ then $\mu(E') < \mu(E)$. We see that $$ 0 = \mu(E(0)) < \mu(E(1)) < \mu(E(2)) < ... < \mu(E(g - 1)) < \mu(E(g)). $$ To calculate $\Delta^{Vb}_{loc}(E, P)$ we use Proposition \ref{propdeltalocprime} applied to each $E(j)$: $$ \Delta^{Vb}_{loc}(E(j)', P) = \Delta^{Vb}_{loc}(E(j), P) - \frac{1}{2} \mathcal{A}, $$ where, in the rank two case ($k = 2$, $r_1 = r_2 = 1$), $$ \mathcal{A} = \sum_{i = 2}^{k}(m_i - m_1 - \frac{1}{2})r_1r_i = m_2(E(j)) - m_1(E(j)) - \frac{1}{2} = \mu(E(j)) - \frac{1}{2}. $$ Therefore $$ \Delta^{Vb}_{loc}(E(j - 1)) = \Delta^{Vb}_{loc}(E(j)) - \frac{1}{2}(\mu(E(j)) - \frac{1}{2}), $$ and putting them all together, $$ \Delta^{Vb}_{loc}(E(0)) = 0; \hspace{0.2cm} \Delta^{Vb}_{loc}(E(g)) = \frac{1}{2}\sum_{j = 1}^{g}(\mu(E(j)) - \frac{1}{2}). $$ We have \begin{equation} \mylabel{varphi} \mu(E(j-1)) < \mu(E(j)), \end{equation} so $\mu(E(j)) \geq j$. Now we divide the sum into two parts: the last term $\mu(E(g)) - \frac{1}{2}$, and the sum of the others. Since each $\mu(E(j))$ is at least one greater than the previous one, the sum of the other terms is at least equal to $ (1 + 2 + 3 + ... + (g-1)) - \frac{(g-1)}{2}$. Then $$ \Delta^{Vb}_{loc}(E(g)) \geq \frac{1}{2}(1 + 2 + 3 + ... + (g-1)) - \frac{g}{4} + \frac{1}{2}\mu(E(g)) $$ $$ = \frac{g(g-1)}{4} - \frac{g}{4} + \frac{1}{2}\mu(E). $$ We have therefore proven the following: \begin{proposition} \mylabel{deltavbloc} If $E$ is a bundle which is brought to pure form in $g \geq 1$ steps of elementary transformation, and $\mu(E) = m_2(E|_P) - m_1(E|_P)$, then we have the lower bound $$ \Delta^{Vb}_{loc}(E) = \Delta^{Vb}_{loc}(E(g)) \geq \frac{g^2 - 2g}{4} + \frac{1}{2}\mu (E). $$ \end{proposition} We have for each $1\leq k \leq g$, $$ \left| \deg ^{\delta}_{loc}(E(k), F^i,P) - \deg ^{\delta}_{loc}(E(k-1), F^i,P) \right| \leq 1, $$ but also $E(0)=(\varphi ^{\ast}\check{E})(m.P)$ and $\deg ^{\delta}_{loc}(E(0), F^i,P) =0$, so $$ \left| \deg ^{\delta}_{loc}(E(g), F^i,P) \right| \leq g. $$ Also along $P$ we have $E_P=\mathcal{O}(m_1)\oplus \mathcal{O}(m_2)$ with $m_1\leq m_2$. For any subbundle $F^0\subset E_P$ we have $\deg (F^0) \leq m_2$ and $\deg (E_P/F^0)\geq m_1$ so $$ \deg ^{\delta}(E_P, F^0)\geq m_1-m_2 = -\mu (E). $$ Then $$ $$ $\Delta ^{Par}_{loc}(E,P)- \Delta ^{Par}_{loc}(\varphi ^{\ast}\check{E},P) = \Delta^{Vb}_{loc}(E,P)$ $$ $$ $\hspace{2.2cm} + \ \beta _0. \deg ^{\delta}(E_{D_0},F^0) $ $$ $$ $\hspace{2.2cm} + \ \sum_{i \in \mathcal{S}(P)} \beta _i.
\deg ^{\delta}_{loc}(E_{D_i},F^i,P) $ $$ $$ $\hspace{2.2cm} + \ \beta_{0}^{2}$ $$ $$ $\hspace{2.2cm} - \ 2 \sum_{i \in \mathcal{S}(P)} \tau (i,P) \beta _i\beta _0 .[y]$ $$ $$ $\hspace{2.2cm} \geq\frac{g^{2} - 2g}{4} \hspace{0.2cm} + \hspace{0.2cm} \frac{1}{2}\mu (E)$ $$ $$ $\hspace{2.2cm} - \ \beta _0\mu (E)$ $$ $$ $\hspace{2.2cm} - \sum _{i\in \mathcal{S}(P)}\beta _i g$ $$ $$ $\hspace{2.2cm} + \ \beta _{0}^{2}$ $$ $$ $\hspace{2.2cm} - \ 2 \sum_{i \in \mathcal{S}(P)} \beta _i\beta _0 .$ But we know that $|\beta_i| \leq \frac{1}{2}$. Then we get the following theorem. \begin{theorem} $$ \Delta^{Par}_{loc}(E, P) \geq \Delta ^{Par}_{loc}(\varphi ^{\ast}\check{E},P) + \frac{g^{2} - 2g}{4} - \frac{g +1}{2}\kappa , $$ where $\kappa = \# \mathcal{S}(P)$ is the number of divisor components $D_i$ meeting $P$. \end{theorem} \begin{theorem} If $\check{E}$ is a vector bundle of rank $2$ on $\check{X}$ with parabolic structures on the components $\check{D}_i$, then on $X$ obtained by blowing up the multiple points of $\check{D}$, the parabolic invariant $\Delta ^{Par}_{loc}(E, P)$ attains a minimum for some extension of the bundle $E$ and some parabolic structures on the exceptional loci. \end{theorem} \begin{proof} From the above theorem, the number of elementary transformations $g$ needed to get to any $E$ with $\Delta^{Par}_{loc}(E, P) \leq \Delta ^{Par}_{loc}(\varphi ^{\ast}\check{E},P)$ is bounded. Furthermore the number of numerical possibilities for the degrees $\deg ^{\delta}_{loc}(E_{D_i},F^i,P) $ and $\deg ^{\delta}(E_{P},F^0) $ leading to such a minimum is finite. The parabolic weight $\beta _0$ may be chosen to lie in the closed interval $[0,\frac{1}{4}]$, so the set of possible numerical values lies in a compact subset; hence a minimum is attained. \end{proof} Denote the parabolic extension which achieves the minimum by $E^{min}$. There might be several possibilities, although we conjecture that usually it is unique. Thus $$ \Delta ^{Par}_{loc}(E^{min}, P) = {\rm min}_E \left( \Delta ^{Par}_{loc}(E, P)\right) , $$ with the minimum taken over all parabolic extensions $E$ of $\check{E}|_{\check{U}}$ across the exceptional divisor $P$. The minimal $E^{min}$ exists at each exceptional divisor and they fit together to give a global parabolic bundle. Define $$ $$ $\Delta ^{Par}_{min}(\check{E}):= \Delta ^{Par}(E^{min})$ $$ $$ $\hspace{2.2cm} = \Delta ^{Par}(\check{E}) + \sum _{P_u} \Delta ^{Par}_{loc}(E^{min},P_u).$ \subsection{Panov differentiation} D. Panov in his thesis \cite{Panov} used the idea of differentiation with respect to the parabolic weight. A version of this technique allows us to gain more precise information on the minimum. \begin{lemma} Let $E=E^{min}$ be the parabolic bundle extending $\check{E}|_{\check{U}}$ which achieves the minimum value $\Delta ^{Par}_{loc}(E^{min},P)$. By making an elementary transformation we may assume $0\leq \beta _0\leq \frac{1}{4}$. Denote also by $E$ the underlying vector bundle. Then for any subbundle $F'\subset E|_P$ we have $$ \deg (E|_P/F')- \deg (F') \geq -\kappa . $$ Thus if $E|_P = \mathcal{O}_P(m_1)\oplus \mathcal{O}_P(m_2)$ then $$ |m_2-m_1| \leq \kappa , $$ where $\kappa = \# \mathcal{S}(P)$ is the number of divisor components $D_i$ meeting $P$. \end{lemma} \begin{proof} We show that $\deg (E|_P/F')- \deg (F') \geq -\kappa -\frac{1}{4}$ which implies the stated inequality since the left side and $\kappa$ are integers. Let $F=F^0\subset E|_P$ be the subbundle corresponding to the parabolic structure $E^{min}$.
Consider two cases: \\ (i) if $F^0$ is the destabilizing bundle of $E|_P$ and $\beta _0 >0$; or \\ (ii) if $F^0$ is not the destabilizing bundle of $E|_P$, or else $\beta _0=0$. In case (i) note that $\beta_0$ may be allowed to range in the full interval $[0,\frac{1}{2})$, so the invariant $\Delta ^{Par}_{loc}(E,P)$ attains a local minimum at the given value of $\beta_0$, considered as a function of $\beta_0\in (0,\frac{1}{2})$. Then $$ \frac{d}{d\beta _0} \Delta ^{Par}_{loc}(E,P) = 0. $$ This gives the formula $$ $$ $ \deg ^{\delta}(E_{D_0},F^0) = 2 \sum_{i \in \mathcal{S}(P)} \tau (F^i,F^0) \beta _i - 2\ \beta _0 $, so, using $0\leq \beta_i\leq \frac{1}{4}$ and $0\leq \beta_0\leq \frac{1}{4}$, $$ $$ $\deg ^{\delta}(E_{D_0},F^0) \geq - \frac{\kappa + 1}{2}$. Since $F^0$ is the destabilizing bundle it implies that $$ $$ $\deg ^{\delta}(E_{D_0},F') \geq -\frac{\kappa + 1}{2}$ \\ for any other subbundle $F'$ also, which is stronger than the desired inequality in this case. In case (ii) we have $\beta _0.\deg ^{\delta}(E_{D_0},F^0) \geq 0$, because in the contrary case $F^0$ would be the destabilizing subbundle with $\beta_0 > 0$. Suppose $F'\subset E|_P$ is a possibly different subbundle such that $$ \deg (E|_P/F')- \deg (F') < -\frac{1}{4}(1+4\kappa) . $$ Then make a new parabolic structure $E'$ using $F'$ instead of $F$, with parabolic weight $\beta '_0 = \frac{1}{4}$. We have $$ $$ $\Delta ^{Par}_{loc}(E',P)- \Delta ^{Par}_{loc}(E,P) = $ $$ $$ $\hspace{2.2cm} \ \frac{1}{4} \deg ^{\delta}(E_{D_0},F') $ $$ $$ $\hspace{2.2cm} - \ \beta _0. \deg ^{\delta}(E_{D_0},F^0) $ $$ $$ $\hspace{2.2cm} + \frac{1}{16} $ $$ $$ $\hspace{2.2cm} - \ \beta _0^{2}$ $$ $$ $\hspace{2.2cm} + \ 2 \sum_{i \in \mathcal{S}(P)} \tau (F,F^i) \beta _i\beta _0 .[y]$ $$ $$ $\hspace{2.2cm} - \ \frac{1}{2}\sum_{i \in \mathcal{S}(P)} \tau (F',F^i) \beta _i .[y]$ $$ $$ $\leq \frac{1}{4} \deg ^{\delta}(E_{D_0},F')+ \frac{1}{16}(1+4\kappa) $ $$ $$ $<0.$ This contradicts minimality of $E^{min}$, which shows the desired inequality. \end{proof} \begin{corollary} In the case of $3$ divisor components $\kappa = 3$ and the minimal extension $E^{min}$ satisfies $|m_2-m_1| \leq 3$. It is connected to $\varphi ^{\ast}(\check{E})$ by at most three elementary transformations. \end{corollary} This should permit an explicit description of all possible cases for $\kappa = 3$; we start on this below. \subsection{The Bogomolov-Gieseker inequality} Suppose $C\subset \check{X}$ is an ample curve meeting $\check{D}$ transversally. Then $\check{E}|_C$ is a parabolic bundle on $C$. \begin{proposition} Suppose $\check{E}|_C$ is a stable parabolic bundle. Then for any extension $E$ to a parabolic bundle over $X$, there exists an ample divisor $H$ on $X$ such that $E$ is $H$-stable. Hence $\Delta ^{Par}(E)\geq 0$. In particular $\Delta ^{Par}(E^{min}) \geq 0$. If $\check{E}$ comes from an irreducible unitary representation of $\pi _1(\check{X}-\check{D})$ then the parabolic extension on $X$ corresponding to the same unitary representation must be some choice of $E^{min}$. \end{proposition} \begin{proof} Fix an ample divisor $H'$. Then any divisor of the form $H=n\varphi^{\ast}(C)+H'$ is ample on $X$, and for $n$ sufficiently large $E$ will be $H$-stable. The Bogomolov-Gieseker inequality for parabolic bundles says that $\Delta ^{Par}(E)\geq 0$ with equality if and only if $E$ comes from a unitary representation. However, $\Delta ^{Par}(E)\geq \Delta ^{Par}(E^{min})\geq 0$ and if $E$ comes from a unitary representation then $\Delta ^{Par}(E)= \Delta ^{Par}(E^{min})= 0$. It follows in this case that $E$ is one of the choices of $E^{min}$. \end{proof}
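To get a concrete feel for the size of these local invariants, consider for instance the simplest non-pure case already met above: $rk(E) = 2$ and $E|_{P} = \mathcal{O} \oplus \mathcal{O}(1)$, so that $\mu(E) = 1$ and a single elementary transformation brings $E$ to pure form, i.e.~$g = 1$. The recursion for the elementary transformations gives $$ \Delta^{Vb}_{loc}(E, P) = \Delta^{Vb}_{loc}(E(0), P) + \frac{1}{2}\Big(\mu(E) - \frac{1}{2}\Big) = \frac{1}{4}, $$ which agrees with the lower bound of Proposition \ref{deltavbloc}, namely $\frac{g^{2} - 2g}{4} + \frac{1}{2}\mu(E) = -\frac{1}{4} + \frac{1}{2} = \frac{1}{4}$; in this case the bound is attained.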
\section*{Acknowledgment} This work was partially supported by NSF grants DMS 2147546/2015447 and the NSF CAREER award DMS-2143215. Part of this work was done while G.~Li and Y.~Wei were visiting the Simons Institute for the Theory of Computing. \bibliographystyle{apalike} \section{Preliminaries: useful concentration results} \label{sec:Gaussian-concentration} This section gathers a few useful concentration results concerning functions of random vectors that will be applied multiple times throughout this paper. \subsection{List of concentration lemmas} The first result is concerned with Gaussian concentration for Lipschitz-continuous functions, whose proof can be found in Section~\ref{sec:pf-Gaussian}. Here and below, we remind the reader that $\mathbb{B}^d(r)$ indicates the $d$-dimensional Euclidean ball with radius $r$ centered at 0. \begin{lems} \label{lem:Gauss} Consider an $n$-dimensional Gaussian vector $X \sim \mathcal{N}(0, I_n)$, and a set of functions $f_{\theta} : \mathbb{R}^n \to \mathbb{R}$ as parameterized by $\theta \in \Theta \subseteq \mathbb{B}^d(r)$. Let $\mathcal{E}$ be some convex set obeying $\mathbb{P}(X \in \mathcal{E}) \geq 1 - O(n^{-11})$. Assume that for any fixed $\theta,\widetilde{\theta} \in \Theta$ and any given $Z_1, Z_2 \in \mathcal{E}$, we have \begin{align} \label{eqn:gauss-lipschitz} |f_{\theta}(Z_1) - f_{\theta}(Z_2)| \le \sigma \ltwo{Z_1 - Z_2} \qquad\text{and}\qquad \left\|f_{\theta}(Z) - f_{\widetilde{\theta}}(Z)\right\|_2 \le L \ltwo{\theta - \widetilde{\theta}}. \end{align} In addition, suppose that for any fixed $\theta \in \Theta$, we have \begin{align} \label{eqn:gauss-lipschitz-B-proj} \big|\mathbb{E}\left[f_{\theta}(\mathcal{P}_{\mathcal{E}}(X)) - f_{\theta}(X)\right]\big| \le B, \end{align} where $\mathcal{P}_{\mathcal{E}}(\cdot)$ denotes the Euclidean projection onto the set $\mathcal{E}$. Then for any $\epsilon < r$, \begin{align} \label{eqn:Gauss-target} \sup_{\theta\in \Theta} \big|f_{\theta}(X) - \mathbb{E}\left[f_{\theta}(X)\right] \big| \lesssim \sigma\sqrt{d\log\left(\frac{nr}{\epsilon}\right)} + L\epsilon + B \end{align} holds with probability at least $1-O(n^{-11})$. \end{lems} As an immediate consequence of Lemma~\ref{lem:Gauss}, we can take $\epsilon\asymp n^{-200}$ to yield the following result: \begin{cors} \label{cor:Gauss} Under the assumptions of Lemma~\ref{lem:Gauss}, suppose the convex set $\mathcal{E}$ obeys \begin{subequations} \label{eq:f-grad-bound-poly} \begin{align} \left\|f_{\theta}(Z) - f_{\widetilde{\theta}}(Z)\right\|_2 &\lesssim n^{100} \ltwo{\theta - \widetilde{\theta}} \qquad\text{for all }Z\in\mathcal{E}\text{ and all }\theta,\widetilde{\theta}\in \Theta;\\ \big|\mathbb{E}\left[f_{\theta}(\mathcal{P}_{\mathcal{E}}(X))-f_{\theta}(X)\right]\big| & \lesssim n^{-100}. \end{align} \end{subequations} Then with probability at least $1-O(n^{-11})$ one has \[ \sup_{\theta\in\Theta}\big|f_{\theta}(X)-\mathbb{E}\left[f_{\theta}(X)\right]\big|\lesssim\sigma\sqrt{d\log\left(nr\right)}+n^{-100}. \] \end{cors} Next, we develop concentration results for a family of functions that include indicator functions. Consider a set of independent random vectors $X_1,\ldots, X_{m} \in \ensuremath{\mathbb{R}}^{n}$ with $m\leq n$, and for each $1\leq i\leq m$, consider a collection of functions $f_{i, \theta}, h_{i, \theta} : \mathbb{R}^n \to \mathbb{R}$ indexed by $\theta \in \mathbb{B}^d(r)$. The following concentration bound --- whose proof is deferred to Section~\ref{sec:scriabin} --- proves useful when establishing our main results. 
\begin{lems}\label{lem:Gauss-jump} Suppose that for any given $\theta \in \Theta \subseteq \mathbb{B}^d(r)$, the random variable $f_{i, \theta}(X_i) \ge 0$ is $\sigma_i$-subexponential. Assume that there exist a set of events $\mathcal{E}_i$ ($1\leq i\leq m\leq n)$ obeying $\mathbb{P}(\bigcap_{i}\mathcal{E}_i) > 1 - O(n^{-11})$ such that: for any $i \in [m]$ and any $\theta,\widetilde{\theta}\in \Theta$, \begin{subequations} \label{eqn:lips-jump} \begin{align} \big\| f_{i, \theta}(Z_i) - f_{i, \widetilde{\theta}}(Z_i) \big\|_2 + \big\| h_{i, \theta}(Z_i) - h_{i, \widetilde{\theta}}(Z_i) \big\|_2 &\leq L\big\| \theta - \widetilde{\theta} \big\|_2 \qquad \text{for all }Z_i \text{ with }\ind_{\mathcal{E}_i}(Z_i)=1 , \\ % \mathbb{E}\big[ f_{i, \theta}(X_i) \ind\left(\mathcal{E}_i^{\mathrm{c}}\right)\big] &\le B. \end{align} \end{subequations} Also, for any $i\in [m]$ and any $\theta\in \Theta$, define \begin{equation} \varrho_{i,\theta} \coloneqq \mathbb{E}\big[f_{i, \theta}(X_i)\big] + L\epsilon+\sigma_i\log n , \qquad 1\leq i\leq m. \end{equation} Then for any $0<\epsilon < r$, with probability at least $1-O(n^{-11})$ one has \begin{align} & \left|\sum_{i=1}^{m}\Big(f_{i,\theta}(X_{i})\ind\big(h_{i,\theta}(X_{i})>\tau\big)-\mathbb{E}\Big[f_{i,\theta}(X_{i})\ind\big(h_{i,\theta}(X_{i})>\tau\big)\Big]\Big)\right|\nonumber\\ & \notag\qquad\lesssim\sqrt{\sum_{i=1}^{m}\varrho_{i,\theta}^{2}\Big(\mathbb{P}\left(h_{i,\theta}(X_{i})>\tau\right)+\frac{1}{n}\Big)d\log\frac{rn}{\epsilon}}+\Big(\max_{1\leq i\leq m}\varrho_{i,\theta}\Big) d\log\frac{rn}{\epsilon} \\ & \qquad\qquad\qquad+mL\epsilon + mB+\sum_{i=1}^{m}\varrho_{i,\theta}\mathbb{P}\Big(\tau-(3L+1)\epsilon\le h_{i,\theta}(X_{i})\le\tau+(3L+1)\epsilon\Big) \label{eqn:scriabin} \end{align} simultaneously for all $\theta \in \Theta$ and all $\tau \in [-r,r]$. \end{lems} Similar to Corollary~\ref{cor:Gauss}, we can take $\epsilon = n^{-200}$ to derive the following immediate consequence. \begin{cors} \label{cor:Gauss-jump} Under the assumptions of Lemma~\ref{lem:Gauss-jump}, suppose that \begin{subequations} \label{eq:f-grad-bound-poly-jump} \begin{align} \big\| f_{i, \theta}(Z_i) - f_{i, \widetilde{\theta}}(Z_i) \big\|_2 + \big\| h_{i, \theta}(Z_i) - h_{i, \widetilde{\theta}}(Z_i) \big\|_2 &\leq n^{100} \big\| \theta - \widetilde{\theta} \big\|_2 \qquad \text{for all }Z_i \text{ with }\ind_{\mathcal{E}_i}(Z_i)=1 \\ % \mathbb{E}\big[ f_{i, \theta}(X_i) \ind\left(\mathcal{E}_i^{\mathrm{c}}\right)\big] &\le n^{-100} \end{align} \end{subequations} for any $i\in [m]$ and any $\theta,\widetilde{\theta}\in \Theta$. Also, suppose that \[ \mathbb{P}\Big(\tau-400n^{-100}\le h_{i,\theta}(X_{i})\le\tau+400n^{-100}\Big) \lesssim m^{-1} , \qquad 1\leq i\leq m \] for any $\theta\in \Theta$ and any $\tau \in [-r,r]$. If we redefine \begin{equation} \varrho_{i,\theta} \coloneqq \mathbb{E}\big[f_{i, \theta}(X_i)\big] +\sigma_i\log n , \qquad 1\leq i\leq m, \end{equation} then with probability at least $1-O(n^{-11})$ one has \begin{align*} & \left|\sum_{i=1}^{m}\Big(f_{i,\theta}(X_{i})\ind\big(h_{i,\theta}(X_{i})>\tau\big)-\mathbb{E}\Big[f_{i,\theta}(X_{i})\ind\big(h_{i,\theta}(X_{i})>\tau\big)\Big]\Big)\right|\nonumber\\ & \notag\qquad\lesssim\sqrt{\sum_{i=1}^{m}\varrho_{i,\theta}^{2}\Big(\mathbb{P}\left(h_{i,\theta}(X_{i})>\tau\right)+\frac{1}{n}\Big)d\log(rn)}+\Big(\max_{1\leq i\leq m}\varrho_{i,\theta}\Big)d\log(rn) + \frac{n+d\log(rn)}{n^{100}} \end{align*} simultaneously for all $\theta \in \Theta$ and all $\tau \in [-r,r]$. 
\end{cors} The third result is concerned with norms of (linear combinations of) independent Gaussian vectors; the proof can be found in Section~\ref{sec:pf-brahms}. Here and throughout, for every vector $x\in \ensuremath{\mathbb{R}}^{n}$, we adopt the convention that $|x|_{(i)}$ denotes its $i$-th largest entry in magnitude. \begin{lems} \label{lem:brahms-lemma} Consider a collection of independent Gaussian vectors $\{\phi_k\}_{1\leq k\leq n}$ with $\phi_k \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0,\frac{1}{n}I_n)$. With probability at least $1-\delta$, it holds that \begin{subequations} \label{eqn:vive-brahms} \begin{align} \label{eqn:brahms} \Big| \max_{1\leq k\leq t-1} \|\phi_k\|_2 -1 \Big| & \lesssim \sqrt{\frac{\log \frac{n}{\delta}}{n}}, \\ % \label{eqn:long} \sup_{a = [a_k]_{1\leq k< t} \in \mathcal{S}^{t-2}} \bigg| \Big\|\sum_{k = 1}^{t-1} a_k\phi_k\Big\|_2 - 1 \bigg| & \lesssim \sqrt{\frac{t\log \frac{n}{\delta}}{n}}, \\ % \label{eqn:vive} \sup_{a=[a_{k}]_{1\leq k<t}\in\mathcal{S}^{t-2}}\sum_{i=1}^{s}\Big|\sum_{k=1}^{t-1}a_{k}\phi_{k}\Big|_{(i)}^{2} &\lesssim\frac{(t+s)\log \frac{n}{\delta}}{n},\qquad \forall 1 \leq s\leq n. \end{align} \end{subequations} \end{lems} Finally, we state a lemma that quantifies the 1-Wasserstein distance between a weighted combination of independent Gaussian vectors (with the weights being possibly dependent on the Gaussian vectors) and an i.i.d.~Gaussian vector. The proof of this lemma can be found in Section~\ref{sec:proof-wasserstein}. \begin{lems} \label{lem:wasserstein} Consider a set of i.i.d.~random vectors $\phi_k \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0, \frac{1}{n}I_n)$, as well as any unit vector $\beta =[\beta_k]_{1\leq k\leq t} \in \mathcal{S}^{t-1}$ that might be statistically dependent on $\{\phi_k\}$. Then the 1-Wasserstein distance (cf.~\eqref{eqn:wasserstein-p}) between the distribution of $\sum_{k=1}^{t}\beta_{k}\phi_{k}$ --- denoted by $\mu\big(\sum_{k=1}^{t}\beta_{k}\phi_{k}\big)$ --- and $\mathcal{N}\big(0,\frac{1}{n}I_{n}\big)$ obeys \begin{align} \label{eqn:gaussian-approx} W_{1}\bigg(\mu\Big(\sum_{k=1}^{t}\beta_{k}\phi_{k}\Big),\mathcal{N}\Big(0,\frac{1}{n}I_{n}\Big)\bigg) \lesssim \sqrt{\frac{t\log n}{n}}. \end{align} \end{lems} \subsection{Proof of Lemma~\ref{lem:Gauss}} \label{sec:pf-Gaussian} Let us define \begin{align*} g_{\theta}(X) \coloneqq f_{\theta}(\mathcal{P}_{\mathcal{E}}(X)). \end{align*} By the Lipschitz property of $f_{\theta}$ (cf.~\eqref{eqn:gauss-lipschitz}), we can obtain the Lipschitz property for $g_{\theta}$ as follows: \begin{align*} \left|g_{\theta}(X) - g_{\theta}(Y)\right| = \left|f_{\theta}(\mathcal{P}_{\mathcal{E}}(X)) - f_{\theta}(\mathcal{P}_{\mathcal{E}}(Y))\right| \le \sigma\|\mathcal{P}_{\mathcal{E}}(X) - \mathcal{P}_{\mathcal{E}}(Y)\|_2 \le \sigma\|X - Y\|_2, \end{align*} where the last step uses the non-expansiveness of Euclidean projection onto convex sets. Gaussian isoperimetric inequalities (e.g., \citet[Theorem 3.8]{massart2007concentration}) then tell us that, for any fixed $\theta \in \Theta$, \begin{align} \label{eqn:schubert} \big|g_{\theta}(X) - \mathbb{E}\left[g_{\theta}(X)\right]\big| \leq \sigma\sqrt{2\log\frac{1}{\delta}} \end{align} holds with probability at least $1 - \delta$. Next, we need to establish uniform concentration over all $\theta\in \Theta$. 
Towards this, let us construct an $\epsilon$-net $\mathcal{N}_{\epsilon}$ for $\Theta$ of the smallest size such that: for any $\theta \in \Theta$, there exists some $\ensuremath{\widehat{\theta}} \in \mathcal{N}_{\epsilon}$ obeying $\ltwo{\theta - \widehat{\theta}} \le \epsilon$. Given that $\Theta \subseteq \ensuremath{\mathbb{R}}^d$, it is easily seen that the cardinality of the $\epsilon$-net can be chosen such that $|\mathcal{N}_{\epsilon}| \le (\frac{2r}{\epsilon})^{d}$ \citep{vershynin2018high}. Taking \eqref{eqn:schubert} with the union bound over the set $\mathcal{N}_{\epsilon}$ reveals that: with probability at least $1 - \delta$, \begin{align} \label{eqn:schubert-impromptu} \sup_{\theta \in \mathcal{N}_{\epsilon}} \big|g_{\theta}(X) - \mathbb{E}\left[g_{\theta}(X)\right] \big| \leq \sigma \sqrt{2d \log\Big(\frac{2 r}{\delta\epsilon}\Big)}. \end{align} With the above concentration result in place, we are ready to prove the advertised inequality~\eqref{eqn:Gauss-target}. First, recalling that $\left|\mathbb{E}\left[g_{\theta}(X) - f_{\theta}(X)\right]\right| \le B$ and $f_{\theta}(X) = g_{\theta}(X)$ with probability at least $1-O(n^{-11})$, one has \begin{align*} \big|f_{\theta}(X) - \mathbb{E}\left[f_{\theta}(X)\right]\big| & = \big|g_{\theta}(X) - \mathbb{E}\left[f_{\theta}(X)\right]\big| \le \big|g_{\theta}(X) - \mathbb{E}\left[g_{\theta}(X)\right]\big| + \big|\mathbb{E} [f_{\theta}(X)] - \mathbb{E}[g_{\theta}(X)]\big|\\ &\le \big|g_{\theta}(X) - \mathbb{E}\left[g_{\theta}(X)\right]\big| + B \end{align*} with probability at least $1-O(n^{-11})$. In addition, our assumption \eqref{eqn:gauss-lipschitz} also indicates that \begin{align*} &\big|g_{\theta_1}(X) - g_{\theta_2}(X)\big| = \big|f_{\theta_1}(\mathcal{P}_{\mathcal{E}}(X)) - f_{\theta_2}(\mathcal{P}_{\mathcal{E}}(X))\big| \le L\|\theta_1 - \theta_2\|_2, \qquad \text{and} \\ \big|\mathbb{E} [g_{\theta_1}(X)] - \mathbb{E}[g_{\theta_2}(X)]\big| &= \big|\mathbb{E}[f_{\theta_1}(\mathcal{P}_{\mathcal{E}}(X)) - f_{\theta_2}(\mathcal{P}_{\mathcal{E}}(X))]\big| \le \mathbb{E}\big[|f_{\theta_1}(\mathcal{P}_{\mathcal{E}}(X)) - f_{\theta_2}(\mathcal{P}_{\mathcal{E}}(X))|\big] \leq L\|\theta_1 - \theta_2\|_2. \end{align*} Consequently, for every $\theta\in \Theta$, we obtain \begin{align*} \left|f_{\theta}(X) - \mathbb{E}\left[f_{\theta}(X)\right]\right| &\le \left|g_{\widehat{\theta}}(X) - \mathbb{E}\left[g_{\widehat{\theta}}(X)\right]\right| + 2L\epsilon + B \\ &\lesssim \sigma\sqrt{d\log\left(\frac{nr}{\epsilon}\right)} + L\epsilon + B, \end{align*} with probability exceeding $1-O(n^{-11})$, where we invoke the Lipschitz property for $g_{\theta}$ and $\mathbb{E} g_{\theta}$ with respect to $\theta$, and the last inequality follows from relation~\eqref{eqn:schubert-impromptu} with $\delta = n^{-11}$. We have thus established Lemma~\ref{lem:Gauss}. \subsection{Proof of Lemma~\ref{lem:Gauss-jump}} \label{sec:scriabin} \paragraph{Step 1: establishing concentration for any fixed $\theta$ and $\tau$.} For notational simplicity, let us introduce \begin{align*} \mu_i &\coloneqq \mathbb{E}\big[f_{i, \theta}(X_i)\ind(\mathcal{E}_i)\big] \\ % Z_i &\coloneqq f_{i, \theta}(X_i)\ind\left(h_{i, \theta}(X_i) > \tau\right)\ind(\mathcal{E}_i) \end{align*} for any $i\in [m]$, and any fixed $\theta\in \Theta$ and $\tau \in [-r,r]$. 
The goal of this step is to show the following Bernstein-type inequality: for any given $\theta\in \Theta$ and $\tau \in [-r,r]$, \begin{align} \label{eqn:sum-zi} \Big|\sum_{i = 1}^m Z_i - \mathbb{E}[Z_i]\Big| \lesssim \sqrt{\sum_{i = 1}^m \left(\mu_i+\sigma_i\log n\right)^2\left(\mathbb{P}\big(h_{i, \theta}(X_i) > \tau\big) + \frac{1}{n}\right)\log\frac{1}{\delta}} + \max_{i} \left(\mu_i+\sigma_i\log n\right)\log\frac{1}{\delta} \end{align} holds with probability at least $1 - \delta$, where $\delta$ can be any value in $(0,1)$. The remainder of this step is devoted to establishing \eqref{eqn:sum-zi}. We find it useful to first single out several preliminary facts. Recognizing that $f_{i, \theta}(X_i)$ is assumed to be non-negative, one has $\mu_{i} \le \mathbb{E}\big[f_{i, \theta}(X_i)\big]$. Given that $f_{i, \theta}(X_i)$ is assumed to be $\sigma_i$-subexponential, we see that $f_{i, \theta}(X_i)\ind(\mathcal{E}_i)$ is also $\sigma_i$-subexponential, which further implies that the centered version $f_{i, \theta}(X_i)\ind(\mathcal{E}_i)-\mu_i$ is $O(\sigma_i)$-subexponential (see \citet[Exercise 2.7.10]{vershynin2018high}); this means that there exists some universal constant $c_5\geq 1$ such that \begin{align} \mathbb{P}\Big( f_{i, \theta}(X_i)\ind(\mathcal{E}_i) \geq \mu_i + \tau \Big) \leq \mathbb{P}\left( \big|f_{i, \theta}(X_i)\ind(\mathcal{E}_i) - \mu_i \big| \geq \tau \right) \leq 2\exp\Big(- \frac{\tau}{c_5 \sigma_i} \Big) \label{eq:sub-exponential-simgai-135} \end{align} for any $\tau \geq 0$ and any $i \in [m]$. In what follows, we shall follow similar ideas for proving Bernstein's inequality (e.g., \cite{wainwright2019high}). For every integer $k\geq 1$, let us first look at the $k$-moment of $Z_{i} - \mathbb{E}[Z_i]$. Note that for any two non-negative numbers $a,b\geq 0$, one has $|(a-b)^k|\leq a^k + b^k$. This fact taken together with Jensen's inequality gives \begin{align} \notag\mathbb{E}\left[\big|Z_{i}-\mathbb{E}[Z_{i}]\big|^{k}\right] & \leq\mathbb{E}\left[Z_{i}^{k}\right]+\big(\mathbb{E}[Z_{i}]\big)^{k}\leq2\mathbb{E}\left[Z_{i}^{k}\right]\\ \notag & =2\mathbb{E}\left[\big(f_{i,\theta}(X_{i})\big)^{k}\ind\left(h_{i,\theta}(X_{i})>\tau\right)\ind(\mathcal{E}_{i})\right]\\ \notag & \leq2\mathbb{E}\left[\big(f_{i,\theta}(X_{i})\big)^{k}\ind(\mathcal{E}_{i})\ind\left(h_{i,\theta}(X_{i})>\tau\right)\ind\big(f_{i,\theta}(X_{i})\leq \mu_{i}+c_5\sigma_{i}k\log n\big)\right]\\ \notag & \qquad+2\mathbb{E}\left[\big(f_{i,\theta}(X_{i})\big)^{k}\ind(\mathcal{E}_{i})\ind\big(f_{i,\theta}> \mu_{i}+ c_5\sigma_{i}k\log n\big)\right]\\ & \stackrel{(\text{i})}{\leq} 2\left(\mu_{i}+c_5\sigma_{i}k\log n\right)^{k}\mathbb{E}\big[\ind\left(h_{i,\theta}(X_{i})>\tau\right)\big]+16(\mu_{i}+c_{5}\sigma_{i}k\log n)^{k}\exp(-k\log n) \notag\\ & \leq 16 \left(\mu_{i}+ c_5\sigma_{i}k\log n\right)^{k}\Big(\mathbb{P}\big(h_{i,\theta}(X_{i})>\tau\big)+\frac{1}{n}\Big). 
\label{eqn:bernstein-condition} \end{align} To justify why (i) is valid, we note that \begin{align} \mathbb{E}\left[\big(f_{i,\theta}(X_{i})\big)^{k}\ind(\mathcal{E}_{i})\ind\big(f_{i,\theta}(X_{i})>\mu_{i}+c_5\sigma_{i}k\log n\big)\right] & \leq\int_{(\mu_{i}+c_5\sigma_{i}k\log n)^{k}}^{\infty}\mathbb{P}\left\{ \big(f_{i,\theta}(X_{i})\ind(\mathcal{E}_{i})\big)^{k}\geq\tau\right\} \mathrm{d}\tau\notag\\ & =\int_{(\mu_{i}+c_5\sigma_{i}k\log n)^{k}}^{\infty}\mathbb{P}\left\{ f_{i,\theta}(X_{i})\ind(\mathcal{E}_{i})\geq\tau^{1/k}\right\} \mathrm{d}\tau\notag\\ & =\int_{\mu_{i}+c_5\sigma_{i}k\log n}^{\infty}kx^{k-1}\mathbb{P}\Big\{ f_{i,\theta}(X_{i})\ind(\mathcal{E}_{i})\geq x\Big\}\mathrm{d}x\notag\\ & =\int_{c_5\sigma_{i}k\log n}^{\infty}k(x+\mu_{i})^{k-1}\mathbb{P}\Big\{ f_{i,\theta}(X_{i})\ind(\mathcal{E}_{i})\geq x+\mu_{i}\Big\}\mathrm{d}x\notag\\ & \overset{(\mathrm{ii})}{\leq}2k\int_{c_5\sigma_{i}k\log n}^{\infty}(x+\mu_{i})^{k-1}\exp\Big(-\frac{x}{c_{5}\sigma_{i}}\Big)\mathrm{d}x, \label{eq:RHS-135} \end{align} where (ii) follows from inequality~\eqref{eq:sub-exponential-simgai-135}. Now the right-hand side of the above inequality can be further controlled as \begin{align} \eqref{eq:RHS-135} & =2c_{5}\sigma_{i}k\int_{k\log n}^{\infty}(c_{5}\sigma_{i}x+\mu_{i})^{k-1}\exp(-x)\mathrm{d}x\notag\\ & \overset{(\mathrm{iii})}{\leq}2c_{5}\sigma_{i}k\sum_{l=k\log n}^{\infty}(c_{5}\sigma_{i}l+\mu_{i})^{k-1}\exp(-l)\notag\\ & \overset{(\mathrm{iv})}{\leq}2c_{5}\sigma_{i}k\cdot(c_{5}\sigma_{i}k\log n+\mu_{i})^{k-1}\exp(-k\log n)\sum_{i=0}^{\infty}\Big(\frac{2}{e}\Big)^{i} \notag\\ & \leq8(c_{5}\sigma_{i}k\log n+\mu_{i})^{k}\exp(-k\log n), \label{eq:exp-tail} \end{align} where (iii) is valid since the function $(ax+b)^{k}e^{-x}$ with $a,b>0$ is decreasing in $x$ for any $x>k$, and (iv) holds since for any $l\geq k\log n$, one has \[ \frac{\big((l+1)c_{5}\sigma_{i}+\mu_{i} \big)^{k}\exp(-(l+1))}{\big(lc_{5}\sigma_{i}+\mu_{i}\big)^{k}\exp(-l)}\le e^{-1}\Big(1+\frac{1}{l}\Big)^{k}\le \frac{2}{e}, \] namely, $\big(lc_{5}\sigma_{i}+\mu_{i}\big)^{k}\exp(-l)$ decreases geometrically in $l$ with a contraction factor $2/e$. This validates Step (i) in \eqref{eqn:bernstein-condition}. 
In view of \eqref{eqn:bernstein-condition}, letting $\widetilde{Z}_i \coloneqq Z_{i}-\mathbb{E}[Z_{i}]$ (so that $\mathbb{E}[\widetilde{Z}_i]=0$) and using the power series expansion, we obtain \begin{align*} \mathbb{E}\left[\sum_{k=0}^{\infty}\frac{\lambda^{k}\widetilde{Z}_{i}^{k}}{k!}\right] & =1+\mathbb{E}\left[\sum_{k=2}^{\infty}\frac{\lambda^{k}\widetilde{Z}_{i}^{k}}{k!}\right]\le\exp\left(\sum_{k=2}^{\infty}\frac{\mathbb{E}\left[\lambda^{k}\widetilde{Z}_{i}^{k}\right]}{k!}\right)\\ & \le\exp\left(\sum_{k=2}^{\infty}\frac{8\lambda^{k}\left(\mu_{i}+c_{5}\sigma_{i}k\log n\right)^{k}}{k!}\left(\mathbb{P}\big(h_{i,\theta}(X_{i})>\tau\big)+\frac{1}{n}\right)\right)\\ & \le\exp\left(16e^{2}c_5^2\lambda^{2} \left(\mu_{i}+\sigma_{i}\log n\right)^{2}\left(\mathbb{P}\big(h_{i,\theta}(X_{i})>\tau\big)+\frac{1}{n}\right)\right) \end{align*} for any $\lambda>0$ obeying $c_5\lambda(\mu_i+\sigma_i\log n) \le (2e)^{-1}$, where the first line applies the elementary inequality $1+x\leq\exp(x)$ for any $x\in \ensuremath{\mathbb{R}}$, and the last line holds since, by taking $z = c_5\lambda(\mu_i+\sigma_i\log n) \le (2e)^{-1}$, one has \[ \sum_{k=2}^{\infty}\frac{\big[\lambda(\mu_{i}+c_{5}\sigma_{i}k\log n)\big]^{k}}{k!}\stackrel{(\text{v})}{\le}\sum_{k=2}^{\infty}\frac{\big[\lambda c_{5}(\mu_{i}+\sigma_{i}\log n)k\big]^{k}}{\sqrt{2\pi k}\,k^{k}e^{-k}}\leq\sum_{k=2}^{\infty}(ez)^{k}\leq e^{2}z^{2}\sum_{i=0}^{\infty}\frac{1}{2^{i}}=2e^{2}z^{2}, \] where (v) follows from the fact $c_5\geq 1$ and the well-known Stirling inequality $\sqrt{2\pi}k^{k+\frac{1}{2}}e^{-k}\leq k!$. Given the above convergence of the power series, we conclude that for any $\lambda>0$ obeying $c_5\lambda(\mu_i+\sigma_i\log n) \le (2e)^{-1}$, \begin{align} \mathbb{E}\big[\exp\big(\lambda \widetilde{Z}_i\big)\big] = \mathbb{E}\left[\sum_{k=0}^\infty \frac{\lambda^k \widetilde{Z}_i^k}{k!}\right] \le\exp\left(16e^{2}c_5^2\lambda^{2} \left(\mu_{i}+\sigma_{i}\log n\right)^{2}\left(\mathbb{P}\big(h_{i,\theta}(X_{i})>\tau\big)+\frac{1}{n}\right)\right) \eqqcolon \exp(\lambda^2 \nu_{0,i}^2 ). \end{align} To finish up, set $\nu_{0}^{2}\coloneqq\sum_{i=1}^{m}\nu_{0,i}^{2}$ and $L_{0}\coloneqq\max_{1\leq i\leq m}2ec_{5}(\mu_{i}+\sigma_{i}\log n)$. Given that the $\widetilde{Z}_{i}$'s are independent and zero-mean, we can apply Markov's inequality to obtain, for any $u>0$, \begin{align*} \mathbb{P}\bigg(\sum_{i=1}^{m}\widetilde{Z}_{i}>u\bigg) & \le\min_{0<\lambda<\frac{1}{L_{0}}}\Bigg\{\exp(-\lambda u)\cdot\mathbb{E}\Big[\exp\Big(\lambda\sum_{i=1}^{m}\widetilde{Z}_{i}\Big)\Big]\Bigg\}\le\min_{0<\lambda<\frac{1}{L_{0}}}\Big\{\exp(-\lambda u)\exp\big(\lambda^{2}\nu_{0}^{2}\big)\Big\}. \end{align*} Repeating standard arguments for establishing Bernstein's inequality (see, e.g., \citet[Step 4 in Pages 118-119]{vershynin2018high}), one can immediately conclude that \begin{align*} \sum_{i = 1}^m \widetilde{Z}_i &\lesssim \nu_0 \sqrt{ \log \frac{1}{\delta}} + L_0 \log \frac{1}{\delta} \\ &\asymp \sqrt{\sum_{i = 1}^m \left(\mu_i+\sigma_i\log n\right)^2\left(\mathbb{P}\left(h_{i, \theta}(X_i) > \tau\right) + \frac{1}{n}\right)\log\frac{1}{\delta}} + \max_{i}\big\{ \mu_i+\sigma_i\log n \big\} \log\frac{1}{\delta} \end{align*} with probability exceeding $1-\delta$. Repeating the same argument reveals that the above inequality continues to hold if $\sum_{i = 1}^m \widetilde{Z}_i$ is replaced with $-\sum_{i = 1}^m \widetilde{Z}_i$. This in turn establishes \eqref{eqn:sum-zi} for any fixed $\theta\in \Theta$ and $\tau \in [-r,r]$. 
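As a quick numerical sanity check of the fixed-parameter bound \eqref{eqn:sum-zi} (purely illustrative, and not used anywhere in the analysis), one may simulate sums of non-negative sub-exponential variables truncated by an indicator. The specific choices below, namely $f_i(x)=x^2$, $h_i(x)=x$, $X_i\sim\mathcal{N}(0,1)$, threshold $\tau=1.5$, and $\delta=1/n$, are ours; for this toy family one has $\sigma_i\asymp 1$ and $\mu_i+\sigma_i\log n\asymp 1+\log n$, and the empirical deviations are expected to fall below the Bernstein-type scale on the right-hand side of \eqref{eqn:sum-zi}.
\begin{verbatim}
# Toy illustration of the Bernstein-type bound for truncated sub-exponential
# sums: f_i(X_i) = X_i^2, h_i(X_i) = X_i, X_i ~ N(0,1), threshold tau = 1.5.
import numpy as np

rng = np.random.default_rng(1)
n = m = 5_000
tau, trials = 1.5, 300

# Monte-Carlo reference values for E[f 1(h > tau)] and P(h > tau).
ref = rng.standard_normal(2_000_000)
mean_trunc = float(np.mean(ref**2 * (ref > tau)))
p_exceed = float(np.mean(ref > tau))

devs = []
for _ in range(trials):
    X = rng.standard_normal(m)
    S = float(np.sum(X**2 * (X > tau)))
    devs.append(abs(S - m * mean_trunc))

# Predicted scale with rho_i ~ E[f_i(X_i)] + sigma_i * log n and delta = 1/n.
rho, log_term = 1.0 + np.log(n), np.log(n)
pred = np.sqrt(m * rho**2 * (p_exceed + 1.0 / n) * log_term) + rho * log_term
print("empirical 99th percentile of |deviation|:",
      round(float(np.quantile(devs, 0.99)), 1))
print("predicted Bernstein-type scale          :", round(float(pred), 1))
\end{verbatim}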
\paragraph{Step 2: controlling the difference between the $\epsilon$-net and the remaining parameters.} To show uniform concentration over all $\theta$ and $\tau$, we intend to invoke an $\epsilon$-net-based argument. Towards this, let us construct an $\epsilon$-net $\mathcal{N}_{\epsilon}^{\Theta} \subseteq \Theta \subseteq \mathbb{B}^d(r)$ for the $d$-dimensional ball $\mathbb{B}^d(r)$ of radius $r$ --- which can be chosen to have cardinality $|\mathcal{N}_{\epsilon}^{\Theta}| \le (\frac{3r}{\epsilon})^{d}$ \citep[Chapter 4.2]{vershynin2018high} --- such that for any $\theta \in \Theta$, there exists some $\ensuremath{\widehat{\theta}} \in \mathcal{N}_{\epsilon}^{\Theta}$ satisfying $\ltwo{\ensuremath{\widehat{\theta}} - \theta} \le \epsilon<r$. In addition, we construct another $\epsilon$-net $\mathcal{N}_{\epsilon}^{[-r,r]} \subseteq [-r, r]$ obeying $|\mathcal{N}_{\epsilon}^{[-r,r]}| \le \frac{2r}{\epsilon}$ for the interval $[-r,r]$, such that for any $\tau\in [-r,r]$, there exists $\widetilde{\tau}\in \mathcal{N}_{\epsilon}^{[-r,r]}$ obeying $|\tau - \widetilde{\tau}|\leq \epsilon$. Let us now look at an arbitrary $\theta \in \Theta$ and its nearest neighbor $\ensuremath{\widehat{\theta}}$ in $\mathcal{N}_{\epsilon}^{\Theta}$ (so that $\|\ensuremath{\widehat{\theta}} - \theta\|_2 \le \epsilon$). In view of the Lipschitz property~\eqref{eqn:lips-jump}, we can deduce that \begin{align} \notag\sum_{i=1}^{m}f_{i,\theta}(X_{i})\ind\big(h_{i,\theta}(X_{i})>\tau\big)\ind(\mathcal{E}_{i}) & =\sum_{i=1}^{m}f_{i,\theta}(X_{i})\ind\left(h_{i,\ensuremath{\widehat{\theta}}}(X_{i})>\tau+h_{i,\ensuremath{\widehat{\theta}}}(X_{i})-h_{i,\theta}(X_{i})\right)\ind\left(\mathcal{E}_{i}\right)\\ & \leq\sum_{i=1}^{m}f_{i,\ensuremath{\widehat{\theta}}}(X_{i})\ind\left(h_{i,\ensuremath{\widehat{\theta}}}(X_{i})>\tau+h_{i,\ensuremath{\widehat{\theta}}}(X_{i})-h_{i,\theta}(X_{i})\right)\ind\left(\mathcal{E}_{i}\right)+O(mL\epsilon) \notag\\ & \le\sum_{i=1}^{m}f_{i,\ensuremath{\widehat{\theta}}}(X_{i})\ind\left(h_{i,\ensuremath{\widehat{\theta}}}(X_{i})>\widehat{\tau}_{-}\right)\ind(\mathcal{E}_{i}) +O(mL\epsilon) \label{eqn:jump-lips-up} \end{align} for some point $\widehat{\tau}_- \in \mathcal{N}_{\epsilon}^{[-r,r]}$ satisfying \begin{align*} \tau - (L+1)\epsilon \le \widehat{\tau}_- \le \tau + h_{i, \ensuremath{\widehat{\theta}}}(X_i) - h_{i, \theta}(X_i). \end{align*} Here, the second line in \eqref{eqn:jump-lips-up} applies the Lipschitz continuity of $f_{i,\theta}$ w.r.t.~$\theta$, while the last line relies on the Lipschitz condition that $|h_{i, \ensuremath{\widehat{\theta}}}(X_i) - h_{i, \theta}(X_i)| \leq L\epsilon$. Similarly, we have the following lower bound: \begin{align} \label{eqn:jump-lips-down} \sum_{i = 1}^m f_{i, \theta}(X_i)\ind\big(h_{i, \theta}(X_i) > \tau\big)\ind\left(\mathcal{E}_i\right) \ge \sum_{i = 1}^m f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind\left(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > \widehat{\tau}_+\right) \ind(\mathcal{E}_{i}) - O(mL\epsilon) \end{align} for some point $\widehat{\tau}_+ \in \mathcal{N}_{\epsilon}^{[-r,r]}$ obeying \begin{align*} \tau + h_{i, \ensuremath{\widehat{\theta}}}(X_i) - h_{i, \theta}(X_i) \le \widehat{\tau}_+ \le \tau + (L+1)\epsilon. \end{align*} Next, we turn attention to the mean term $\mathbb{E}\big[f_{i, \theta}(X_i)\ind\left(h_{i, \theta}(X_i) > \tau\right)\big].$ Consider $\widehat{\tau} \in \mathcal{N}_{\epsilon}^{[-r,r]}$ such that \begin{align*} |\widehat{\tau} - \tau| \le (L+1)\epsilon. 
\end{align*} Clearly, there might be more than one point in the $\epsilon$-net within distance $(L+1)\epsilon$ of $\tau$, and we shall specify the choice of $\widehat{\tau}$ momentarily. Recall the assumption $\mathbb{E}\big[ f_{i, \theta}(X_i) \ind\left(\mathcal{E}_i^{\mathrm{c}}\right)\big] \le B$ (cf.~\eqref{eqn:lips-jump}) and the non-negativity of $f_{i,\theta}$ to arrive at \begin{align} &\mathbb{E}\big[f_{i, \theta}(X_i)\ind\left(h_{i, \theta}(X_i) > \tau\right)\big] \leq \mathbb{E}\big[f_{i, \theta}(X_i)\ind\left(h_{i, \theta}(X_i) > \tau\right)\ind\left(\mathcal{E}_i\right)\big] + O(B) \notag\\ &\qquad \leq \mathbb{E}\left[f_{i, \theta}(X_i)\ind\left(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > h_{i, \ensuremath{\widehat{\theta}}}(X_i) - h_{i, \theta}(X_i) + \tau\right)\ind\left(\mathcal{E}_i\right)\right] + O\left( B\right) \notag\\ &\qquad\leq \mathbb{E}\left[f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind\left(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > h_{i, \ensuremath{\widehat{\theta}}}(X_i) - h_{i, \theta}(X_i) + \tau\right)\ind\left(\mathcal{E}_i\right)\right] + O\left(L\epsilon + B\right) \notag\\ &\qquad\leq \mathbb{E}\left[f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind\left(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > \widehat{\tau}\right)\ind\left(\mathcal{E}_i\right)\right] \notag\\ &\qquad \quad + O\left(\mathbb{E}\left[f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind\left(\tau-(L+1)\epsilon \le h_{i, \ensuremath{\widehat{\theta}}}(X_i) \le \tau+(L+1)\epsilon\right)\ind\left(\mathcal{E}_i\right)\right] + L\epsilon + B\right). \label{eq:E-f-indicator-1729} \end{align} Here, the second line simply rewrites the indicator function (note that $\ind\left(h_{i, \theta}(X_i) > \tau\right) = \ind\left(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > h_{i, \ensuremath{\widehat{\theta}}}(X_i) - h_{i, \theta}(X_i) + \tau\right)$), the third inequality follows from the Lipschitz continuity of $f_{i,\theta}$ w.r.t.~$\theta$ (cf.~\eqref{eqn:lips-jump}), and the last inequality holds since \begin{align} \notag &\left\{Z_i : \ind\left(h_{i, \ensuremath{\widehat{\theta}}}(Z_i) > h_{i, \ensuremath{\widehat{\theta}}}(Z_i) - h_{i, \theta}(Z_i) + \tau\right) \ne \ind\left(h_{i, \ensuremath{\widehat{\theta}}}(Z_i) > \widehat{\tau}\right)\right\} \\ \notag &\subseteq \Big\{Z_i : \widehat{\tau} \le h_{i, \ensuremath{\widehat{\theta}}}(Z_i) \le h_{i, \ensuremath{\widehat{\theta}}}(Z_i) - h_{i, \theta}(Z_i) + \tau\Big\} \cup \Big\{Z_i : h_{i, \ensuremath{\widehat{\theta}}}(Z_i) - h_{i, \theta}(Z_i) + \tau \le h_{i, \ensuremath{\widehat{\theta}}}(Z_i) \le \widehat{\tau}\Big\} \\ &\subseteq \Big\{Z_i : \tau-(L+1)\epsilon \le h_{i, \ensuremath{\widehat{\theta}}}(Z_i) \le \tau+(L+1)\epsilon\Big\}, \label{eqn:set-relation} \end{align} where we invoke again $|h_{i, \ensuremath{\widehat{\theta}}}(Z_i) - h_{i, \theta}(Z_i)| \leq L\epsilon$ and $|\widehat{\tau} - \tau| \le (L+1)\epsilon$. Let us augment the notation $\mu_i$ to make explicit the dependency on $\theta$ as follows \begin{equation} \mu_{i,\theta} \coloneqq \mathbb{E}\big[f_{i, \theta}(X_i)\ind(\mathcal{E}_i)\big] . 
\label{eq:defn-mu-i-theta} \end{equation} Then an application of the bound~\eqref{eq:exp-tail} with $k=1$ leads directly to \begin{align*} & \mathbb{E}\left[f_{i,\ensuremath{\widehat{\theta}}}(X_{i})\ind\left(\tau-(2L+1)\epsilon\le h_{i,\widehat{\theta}}(X_{i})\le\tau+(2L+1)\epsilon)\right)\ind\left(\mathcal{E}_{i}\right)\right]\\ & \qquad=\mathbb{E}\left[f_{i,\ensuremath{\widehat{\theta}}}(X_{i})\ind\left(\tau-(2L+1)\epsilon\le h_{i,\widehat{\theta}}(X_{i})\le\tau+(2L+1)\epsilon)\right)\ind\left(\mathcal{E}_{i}\right)\ind\left(f_{i,\ensuremath{\widehat{\theta}}}(X_{i})\leq\mu_{i,\ensuremath{\widehat{\theta}}}+c_5\sigma_{i}\log n\right)\right]\\ & \qquad\qquad+\mathbb{E}\left[f_{i,\ensuremath{\widehat{\theta}}}(X_{i})\ind\left(\tau-(2L+1)\epsilon\le h_{i,\widehat{\theta}}(X_{i})\le\tau+(2L+1)\epsilon)\right)\ind\left(\mathcal{E}_{i}\right)\ind\left(f_{i,\ensuremath{\widehat{\theta}}}(X_{i})>\mu_{i,\ensuremath{\widehat{\theta}}}+c_5\sigma_{i}\log n\right)\right]\\ & \qquad\lesssim(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_{i}\log n)\mathbb{P}\Big(\tau-(2L+1)\epsilon\le h_{i,\widehat{\theta}}(X_{i})\le\tau+(2L+1)\epsilon)\Big)+\frac{\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_{i}\log n}{n}. \end{align*} Substituting it into \eqref{eq:E-f-indicator-1729} yields \begin{align} \notag &\mathbb{E}\Big[f_{i, \theta}(X_i)\ind\big(h_{i, \theta}(X_i) > \tau\big)\Big] - \mathbb{E}\left[f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind\big(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > \widehat{\tau}\big)\ind\left(\mathcal{E}_i\right)\right]\\ % &\qquad \lesssim (\mu_{i,\ensuremath{\widehat{\theta}}} + \sigma_i\log n)\mathbb{P}\Big(\tau-(2L+1)\epsilon \le h_{i, \ensuremath{\widehat{\theta}}}(X_i) \le \tau+(2L+1)\epsilon\Big) + \frac{\mu_{i,\ensuremath{\widehat{\theta}}} + \sigma_i\log n}{n} + L\epsilon + B. \label{eqn:expectation-jump-123} \end{align} Clearly, repeating the above argument shows that \eqref{eqn:expectation-jump-123} continues to hold if the left-hand side of \eqref{eqn:expectation-jump-123} is replaced by $\mathbb{E}\big[f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind\big(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > \widehat{\tau}\big)\ind\left(\mathcal{E}_i\right)\big] - \mathbb{E}\big[f_{i, \theta}(X_i)\ind\big(h_{i, \theta}(X_i) > \tau\big)\big]$. As a result, we conclude that \begin{align} \notag &\Big| \mathbb{E}\Big[f_{i, \theta}(X_i)\ind\big(h_{i, \theta}(X_i) > \tau\big)\Big] - \mathbb{E}\left[f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind\big(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > \widehat{\tau}\big)\ind\left(\mathcal{E}_i\right)\right] \Big|\\ % &\qquad \lesssim (\mu_{i,\ensuremath{\widehat{\theta}}} + \sigma_i\log n)\mathbb{P}\Big(\tau-(2L+1)\epsilon \le h_{i, \ensuremath{\widehat{\theta}}}(X_i) \le \tau+(2L+1)\epsilon\Big) + \frac{\mu_{i,\ensuremath{\widehat{\theta}}} + \sigma_i\log n}{n} + L\epsilon + B. \label{eqn:expectation-jump} \end{align} \paragraph{Step 3: establishing uniform convergence.} We are now ready to establish the advertised concentration result~\eqref{eqn:scriabin}. Recall that the concentration result~\eqref{eqn:sum-zi} holds for every fixed $(\ensuremath{\widehat{\theta}},\widehat{\tau})$ pair. 
By taking the union bound over all points in $\mathcal{N}_{\epsilon}^{\Theta}\times \mathcal{N}_{\epsilon}^{[-r,r]}$ and setting $\delta = n^{-11}(\frac{\epsilon}{3r})^{d+1}$, we can see that with probability at least $1 - \delta (\frac{3r}{\epsilon})^{d+1} = 1 - O(n^{-11})$, \begin{align} \label{eqn:sum-zi-eps-net} &\bigg|\sum_{i = 1}^m f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind\left(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > \widehat{\tau}\right) \ind(\mathcal{E}_i) - \mathbb{E}\Big[f_{i, \ensuremath{\widehat{\theta}}}(X_i) \ind\left(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > \widehat{\tau}\right) \ind(\mathcal{E}_i) \Big]\bigg| \notag\\ &\qquad \lesssim\sqrt{\sum_{i = 1}^m \big(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_i\log n\big)^2\left(\mathbb{P}\big(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > \widehat{\tau} \big) + \frac{1}{n}\right) d\log\frac{nr}{\epsilon}} + \max_{i} \big(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_i\log n\big) d\log\frac{nr}{\epsilon} \end{align} holds simultaneously for all $(\ensuremath{\widehat{\theta}},\widehat{\tau})\in \mathcal{N}_{\epsilon}^{\Theta}\times \mathcal{N}_{\epsilon}^{[-r,r]}$. Consider an arbitrary point $\theta \in \Theta$ and $\tau\in [-r,r]$; let $\ensuremath{\widehat{\theta}}$ be its closest point in $\mathcal{N}_{\epsilon}^{\theta}$, and take $\widehat{\tau}$ to be either $\widehat{\tau}_-$ or $\widehat{\tau}_+$. Combining \eqref{eqn:jump-lips-up}, \eqref{eqn:jump-lips-down} and \eqref{eqn:expectation-jump} and using the assumption $\mathbb{P}(\cap_{i}\mathcal{E}_i)\geq 1-O(n^{-11})$ lead to: with probability at least $1-O(n^{-11})$, \begin{align*} &\left|\sum_{i = 1}^m \Big(f_{i, \theta}(X_i)\ind\left(h_{i, \theta}(X_i) > \tau\right) - \mathbb{E}\big[f_{i, \theta}(X_i)\ind\big(h_{i, \theta}(X_i) > \tau\big)\big]\Big)\right| \\ &\qquad =\left|\sum_{i = 1}^m \Big(f_{i, \theta}(X_i)\ind\left(h_{i, \theta}(X_i) > \tau\right)\ind(\mathcal{E}_i) - \mathbb{E}\big[f_{i, \theta}(X_i)\ind\big(h_{i, \theta}(X_i) > \tau\big)\big]\Big)\right| \\ &\qquad \le \left|\sum_{i = 1}^m \left(f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind\big(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > \widehat{\tau}\big)\ind(\mathcal{E}_i) - \mathbb{E}\left[f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind\big(h_{i, \ensuremath{\widehat{\theta}}}(X_i) > \widehat{\tau}\big)\ind\left(\mathcal{E}_i\right)\right]\right)\right| \\ &\qquad\qquad+ O\bigg(mL\epsilon + mB + \sum_i \big(\mu_{i,\ensuremath{\widehat{\theta}}} + \sigma_i\log n\big)\Big[\mathbb{P}\Big(\tau-(2L+1)\epsilon \le h_{i, \ensuremath{\widehat{\theta}}}(X_i) \le \tau+(2L+1)\epsilon \Big) + \frac{1}{n}\Big]\bigg)\\ &\qquad\lesssim \sqrt{\sum_{i = 1}^m \big(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_i\log n\big)^2\Big(\mathbb{P}\left(h_{i, \widehat{\theta}}(X_i) > \widehat{\tau}\right)+\frac{1}{n} \Big)d\log\frac{rn}{\epsilon}} + \max_i \big(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_i\log n\big)d\log\frac{rn}{\epsilon} \\ &\qquad\qquad+ mL\epsilon + mB + \sum_i \big(\mu_{i,\ensuremath{\widehat{\theta}}} + \sigma_i\log n\big) \underbrace{\mathbb{P}\Big(\tau-(2L+1)\epsilon \le h_{i, \ensuremath{\widehat{\theta}}}(X_i) \le \tau+(2L+1)\epsilon \Big)}_{\eqqcolon\, \widehat{p}_i}, \end{align*} where the last inequality follows from \eqref{eqn:sum-zi-eps-net}. 
Additionally, recognizing that $|h_{i, \ensuremath{\widehat{\theta}}}(X_i) - h_{i, \theta}(X_i)| \leq L\epsilon$ (see \eqref{eqn:lips-jump}), we see that \[ \widehat{p}_{i}\leq\mathbb{P}\Big(\tau-(3L+1)\epsilon\le h_{i,\theta}(X_{i})\le\tau+(3L+1)\epsilon\Big)\eqqcolon p_{i}. \] Finally, we remind the readers of the set relation~\eqref{eqn:set-relation}. Repeating the argument in \eqref{eq:E-f-indicator-1729}, we obtain \begin{align*} \mathbb{P}\left(h_{i, \widehat{\theta}}(X_i) > \widehat{\tau}\right) &\le \mathbb{P}\big(h_{i, \theta}(X_i) > \tau\big) + \mathbb{P}\Big(\tau-(2L+1)\epsilon \le h_{i, \theta}(X_i) \le \tau+(2L+1)\epsilon \Big) \\ &\leq \mathbb{P}\big(h_{i, \theta}(X_i) > \tau\big) + p_i. \end{align*} We also make note of the following elementary relation \begin{align*} \sqrt{\sum_{i = 1}^m \big(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_i\log n\big)^2 p_i d\log\frac{rn}{\epsilon}} &\leq \sqrt{\left\{ \max_i \big( \mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_i\log n\big)d\log\frac{rn}{\epsilon} \right\} \sum_{i = 1}^m \big(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_i\log n\big) p_i } \\ &\leq \max_i \big(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_i\log n\big)d\log\frac{rn}{\epsilon} + \sum_i \big(\mu_{i,\ensuremath{\widehat{\theta}}} + \sigma_i\log n \big)p_i. \end{align*} Putting the above pieces together then yields \begin{align} & \left|\sum_{i=1}^{m}\Big(f_{i,\theta}(X_{i})\ind\left(h_{i,\theta}(X_{i})>\tau\right)-\mathbb{E}\big[f_{i,\theta}(X_{i})\ind\big(h_{i,\theta}(X_{i})>\tau\big)\big]\Big)\right| \notag\\ & \qquad\lesssim\sqrt{\sum_{i=1}^{m}\left(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_{i}\log n\right)^{2}\Big(\mathbb{P}\left(h_{i,\theta}(X_{i})>\tau\right)+p_{i}+\frac{1}{n}\Big)d\log\frac{rn}{\epsilon}}+\max_i\big(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_{i}\log n\big)d\log\frac{rn}{\epsilon} \notag\\ & \qquad\qquad\qquad+mL\epsilon+mB+\sum_{i}(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_{i}\log n)p_{i} \notag\\ & \qquad\lesssim\sqrt{\sum_{i=1}^{m}\left(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_{i}\log n\right)^{2}\Big(\mathbb{P}\left(h_{i,\theta}(X_{i})>\tau\right)+\frac{1}{n}\Big)d\log\frac{rn}{\epsilon}}+\max\big(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_{i}\log n\big)d\log\frac{rn}{\epsilon} \notag\\ & \qquad\qquad\qquad+mL\epsilon+mB+\sum_{i}(\mu_{i,\ensuremath{\widehat{\theta}}}+\sigma_{i}\log n)p_{i}. \label{eq:sum-f-dc-abded} \end{align} Finally, recall that the Lipschitz continuity of $f_{i,\theta}$ w.r.t.~$\theta$ gives \[ \mu_{i,\ensuremath{\widehat{\theta}}} = \mathbb{E}\big[f_{i, \ensuremath{\widehat{\theta}}}(X_i)\ind(\mathcal{E}_i)\big] \leq \mathbb{E}\big[f_{i, \theta}(X_i)\ind(\mathcal{E}_i)\big] + L\epsilon \le \mathbb{E}\big[f_{i, \theta}(X_i)\big] + L\epsilon . \] Substitution into \eqref{eq:sum-f-dc-abded} thus completes the proof. \subsection{Proof of Lemma~\ref{lem:brahms-lemma}} \label{sec:pf-brahms} For a set of random vectors $\{\phi_k\}_{k=1}^{t-1}$ independently drawn from $\mathcal{N}(0,\frac{1}{n}I_n)$, standard concentration results for Wishart matrices (e.g., \citet[Example 6.2]{wainwright2019high}) together with the union bound tell us that \begin{align} \label{eqn:simple-rm} \left\|(\phi_1, \ldots, \phi_{t-1})^{\top}(\phi_1, \ldots, \phi_{t-1}) - I_{t-1}\right\| \lesssim \sqrt{\frac{t\log \frac{n}{\delta}}{n}},\qquad\text{for any } 1 < t \leq n \end{align} with probability at least $1 - \delta$. Two immediate consequences of \eqref{eqn:simple-rm} are in order. 
\begin{itemize} \item First, taking $t=2$ in \eqref{eqn:simple-rm} reveals that with probability at least $1 - \delta$, % \[ \Big|\|\phi_{1}\|_{2}^{2}-1\Big|\lesssim\sqrt{\frac{\log \frac{n}{\delta}}{n}}\qquad\Longrightarrow\qquad\Big|\|\phi_{1}\|_{2}-1\Big|=\frac{\big|\|\phi_{1}\|_{2}^{2}-1\big|}{\big|\|\phi_{1}\|_{2}+1\big|} \leq \Big|\|\phi_{1}\|_{2}^{2}-1\Big| \lesssim\sqrt{\frac{\log \frac{n}{\delta}}{n}}. \] % Clearly, this inequality holds if $\phi_1$ is replaced by any other $\phi_k$. Taking the union bound over all $1\leq k \leq n$ establishes inequality \eqref{eqn:brahms}. \item As another direct consequence of \eqref{eqn:simple-rm}, we have, with probability at least $1 - \delta$, \begin{align*} \sup_{a = [a_k]_{1\leq k< t} \in \mathcal{S}^{t-2}} \bigg| \Big\|\sum_{k = 1}^{t-1} a_k\phi_k \Big\|_2^2 - 1 \bigg| \lesssim \sqrt{\frac{t\log \frac{n}{\delta}}{n}} , \end{align*} which allows us to establish the claim \eqref{eqn:long} as follows: \begin{align} \label{eqn:spect-brahms} \sup_{a = [a_k]_{1\leq k< t} \in \mathcal{S}^{t-2}} \bigg| \Big\|\sum_{k = 1}^{t-1} a_k\phi_k \Big\|_2 - 1 \bigg| = \sup_{a = [a_k]_{1\leq k< t} \in \mathcal{S}^{t-2}} \frac{\Big| \big\|\sum_{k = 1}^{t-1} a_k\phi_k \big\|_2^2 - 1 \Big|}{ \big\|\sum_{k = 1}^{t-1} a_k\phi_k \big\|_2 + 1 } \lesssim \sqrt{\frac{t\log \frac{n}{\delta}}{n}}. \end{align} \end{itemize} Next, we turn attention to the claim~\eqref{eqn:vive}. Here and throughout, for any vector $x\in \ensuremath{\mathbb{R}}^n$ and any index set $S\subseteq [n]$, we let $x_S$ denote the subvector of $x$ formed by the entries of $x$ at indices from $S$. Following the discretization argument \citep[Chapter 5]{wainwright2019high}, we can construct an $\epsilon$-net $\mathcal{N}_{\epsilon}$ on $\mathcal{S}^{t-2}$ --- which can be chosen such that its cardinality does not exceed $(3/\epsilon)^t$ \citep[Eq.~(4.10)]{vershynin2018high} --- such that for any $a\in \mathcal{S}^{t-2}$, one can find a point $\widetilde{a}\in \mathcal{N}_{\epsilon}$ obeying $\|a-\widetilde{a}\|_2\leq \epsilon<1$. \begin{itemize} \item We first bound the supremum over $\mathcal{N}_{\epsilon}$. Note that for any fixed $a \in \mathcal{N}_\epsilon \subseteq \mathcal{S}^{t-2}$ and any subset $S \subseteq [n]$ with $|S| = s$, the vector $(\sum_{k = 1}^{t-1} a_k\phi_k)_S$ is a Gaussian vector drawn from $\mathcal{N}(0, \frac{1}{n}I_s)$. Applying \citet[Proposition 1]{hsu2012tail} then implies that \[ \mathbb{P}\left\{ \bigg\|\Big(\sum_{k=1}^{t-1}a_{k}\phi_{k}\Big)_{S}\bigg\|_{2}>\frac{2}{\sqrt{n}}(\sqrt{s}+\sqrt{\tau})\right\} \leq\mathbb{P}\left\{ \bigg\|\Big(\sum_{k=1}^{t-1}a_{k}\phi_{k}\Big)_{S}\bigg\|_{2}^{2}>\frac{s}{n}+\frac{2\sqrt{s\tau}}{n}+\frac{2\tau}{n}\right\} \leq e^{-\tau} \] for any $\tau>0$. 
Setting $\tau= \log \big( \frac{1}{\delta} (\frac{3}{\epsilon})^t {n \choose s} \big)$ and combining this inequality with the union bound over all $a\in \mathcal{N}_{\epsilon}$ and all $S\subseteq [n]$ with $|S|=s$ lead to \begin{align*} & \mathbb{P}\Bigg\{\sup_{\substack{a\in\mathcal{N}_{\epsilon}\\ S\subset[n],|S|=s } }\bigg\|\Big(\sum_{k=1}^{t-1}a_{k}\phi_{k}\Big)_{S}\bigg\|_{2}>\frac{2\sqrt{s}}{\sqrt{n}}+\frac{2}{\sqrt{n}}\sqrt{\log\left(\frac{1}{\delta}\Big(\frac{3}{\epsilon}\Big)^{t}{n \choose s}\right)}\Bigg\}\\ & \qquad\leq\sum_{\substack{a\in\mathcal{N}_{\epsilon}\\ S\subset[n],|S|=s } }\mathbb{P}\Bigg\{\bigg\|\Big(\sum_{k=1}^{t-1}a_{k}\phi_{k}\Big)_{S}\bigg\|_{2}>\frac{2\sqrt{s}}{\sqrt{n}}+\frac{2}{\sqrt{n}}\sqrt{\log\bigg(\frac{1}{\delta}\Big(\frac{3}{\epsilon}\Big)^{t}{n \choose s}\bigg)}\Bigg\}\\ & \qquad\leq\Big(\frac{3}{\epsilon}\Big)^{t}{n \choose s}\exp\left(-\log\bigg(\frac{1}{\delta}\Big(\frac{3}{\epsilon}\Big)^{t}{n \choose s}\bigg)\right) \leq \delta. \end{align*} Taking $\epsilon = (\delta/n)^{10}$ and using ${n \choose s} \leq n^s$ imply that: with probability exceeding $1-\delta$, \begin{equation} \sup_{\substack{a\in\mathcal{N}_{\epsilon}\\ S\subset[n],|S|=s } }\bigg\|\Big(\sum_{k=1}^{t-1}a_{k}\phi_{k}\Big)_{S}\bigg\|_{2}\lesssim\frac{\sqrt{s \log\frac{n}{\delta}}}{\sqrt{n}}+\frac{\sqrt{t\log \frac{n}{\delta}}}{\sqrt{n}}\label{eq:sup-Neps-a-S-sum} \end{equation} \item Next, consider an arbitrary vector $a\in \mathcal{S}^{t-2}$ and let $\widetilde{a}\in \mathcal{N}_{\epsilon}$ obey $\|a-\widetilde{a}\|_2\leq \epsilon= (\delta/n)^{10}$. Then \eqref{eq:sup-Neps-a-S-sum} together with the triangle inequality tells us that with probability exceeding $1-\delta$, % \begin{align*} \bigg\|\Big(\sum_{k=1}^{t-1}a_{k}\phi_{k}\Big)_{S}\bigg\|_{2} & \leq\bigg\|\Big(\sum_{k=1}^{t-1}\widetilde{a}_{k}\phi_{k}\Big)_{S}\bigg\|_{2}+\bigg\|\Big(\sum_{k=1}^{t-1}(a_{k}-\widetilde{a}_{k})\phi_{k}\Big)_{S}\bigg\|_{2}\\ & \lesssim\frac{\sqrt{s\log \frac{n}{\delta}}}{\sqrt{n}}+\frac{\sqrt{t\log \frac{n}{\delta}}}{\sqrt{n}}+\|a-\widetilde{a}\|_{2}\Big\|\Big[\phi_{1},\cdots,\phi_{t-1}\Big]\Big\|\\ & \asymp\frac{\sqrt{s\log \frac{n}{\delta}}}{\sqrt{n}}+\frac{\sqrt{t\log \frac{n}{\delta}}}{\sqrt{n}}, \end{align*} where the last line holds since $\|a-\widetilde{a}\|_2\leq (\delta/n)^{10}$ and, with probability exceeding $1-\delta$, $\big\|\big[\phi_{1},\cdots,\phi_{t-1}\big]\big\|\leq \sqrt{t/\delta}$ \citep{vershynin2018high}. Given that $a$ can be an arbitrary vector lying within $\mathcal{S}^{t-2}$, we have concluded the proof of the claim~\eqref{eqn:vive}. \end{itemize} \subsection{Proof of Lemma~\ref{lem:wasserstein}} \label{sec:proof-wasserstein} Recall the definition \eqref{eqn:wasserstein-p} of the Wasserstein metric between to probability measures. In view of the celebrated Kantorovich-Rubinstein duality, the 1-Wasserstein distance admits the following dual representation: \begin{align} W_1(\mu, \nu) = \sup \Big\{\mathbb{E}_{\mu}[f] - \mathbb{E}_{\nu}[f] : f \text{ is } 1\text{-Lipschitz}\Big\}, \label{eq:K-R-duality} \end{align} which is the key to establishing this lemma. Let us start by considering any given $1$-Lipschitz function $f$. It is assumed without loss of generality that $f(0) = 0$ (as the expression \eqref{eq:K-R-duality} only involves the difference of $f$), which together with the 1-Lipschitz property gives \begin{align} |f(x)| = |f(x) - f(0)| \le \|x\|_2 . 
\label{eq:fx-size-norm2} \end{align} For any fixed unit vector $\widetilde{\beta}=[\widetilde{\beta}_k]_{1\leq k\leq t}\in \mathcal{S}^{t-1}$, the vector $\sum_{i = 1}^t \widetilde{\beta}_k \phi_k$ clearly follows a Gaussian distribution $\mathcal{N}(0, \frac{1}{n}I_n)$. Applying Gaussian isoperimetric inequalities (e.g., \citet[Theorem 3.8]{massart2007concentration}) yields \begin{align} \label{eqn:blues} f\Big(\sum_{i = 1}^t \widetilde{\beta}_k \phi_k\Big) - \mathop{\mathbb{E}}\limits_{g \sim \mathcal{N}(0, \frac{1}{n}I_n)}\big[f(g)\big] \le \sqrt{\frac{2\log\frac{1}{\delta}}{n}} \end{align} with probability at least $1 - \delta$. Next, let us construct an $\epsilon$-net $\mathcal{N}_{\epsilon}$ of $\mathcal{S}^{t-1}$ with cardinality not exceeding $(2/\epsilon )^t$, such that for any $\widehat{\beta} \in \mathcal{S}^{t-1}$, one can find a point $\widetilde{\beta}\in \mathcal{N}_{\epsilon}$ obeying $\|\widehat{\beta} - \widetilde{\beta}\|_2\leq \epsilon$. Taking the above inequality with the union bound over $\mathcal{N}_{\epsilon}$ then leads to \begin{align*} \sup_{\widetilde{\beta}\in\mathcal{N}_{\epsilon}}\bigg\{ f\Big(\sum_{i=1}^{t}\widetilde{\beta}_{k}\phi_{k}\Big)-\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]\bigg\} & \lesssim\sqrt{\frac{t\log n}{n}} \end{align*} with probability at least $1-O(n^{-11})$. Armed with this result, for an arbitrary $\widehat{\beta} \in \mathcal{S}^{t-1}$ one can show that \begin{align*} f\Big(\sum_{i=1}^{t}\widehat{\beta}_{k}\phi_{k}\Big)-\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big] & \leq\Bigg\{ f\Big(\sum_{i=1}^{t}\widetilde{\beta}_{k}\phi_{k}\Big)-\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]\Bigg\}+f\Big(\sum_{i=1}^{t}\widehat{\beta}_{k}\phi_{k}\Big)-f\Big(\sum_{i=1}^{t}\widetilde{\beta}_{k}\phi_{k}\Big)\\ & \lesssim\sqrt{\frac{t\log n}{n}}+\bigg\|\sum_{i=1}^{t}\widehat{\beta}_{k}\phi_{k}-\sum_{i=1}^{t}\widetilde{\beta}_{k}\phi_{k}\bigg\|_{2}\asymp\sqrt{\frac{t\log n}{n}}+\bigg\|\sum_{i=1}^{t}\big(\widetilde{\beta}_{k}-\widehat{\beta}_{k}\big)\phi_{k}\bigg\|_{2}\\ & \lesssim\sqrt{\frac{t\log n}{n}}+\big\|\widetilde{\beta}-\widehat{\beta}\big\|_{2}\big\|\left[\phi_{1},\cdots,\phi_{t}\right]\big\|\lesssim\sqrt{\frac{t\log n}{n}}+\epsilon\,\big\|\left[\phi_{1},\cdots,\phi_{t}\right]\big\|\\ & \lesssim\sqrt{\frac{t\log n}{n}}+\frac{\sqrt{t}}{n^{5}}\asymp\sqrt{\frac{t\log n}{n}} \end{align*} with probability at least $1-O(n^{-11})$, where the second line results from the 1-Lipschitz property of $f$, and the last line takes $\epsilon = 1 / n^5$ and invokes standard random matrix theory \citep{vershynin2018high} that asserts \begin{equation} \mathbb{P} \Big\{ \big\|\left[\phi_{1},\cdots,\phi_{t}\right]\big\| \leq C_8 \sqrt{t} \Big\} \geq 1-O(n^{-11}) \end{equation} for some constant $C_8>0$. Given that the above inequality holds simultaneously for all $\widehat{\beta} \in \mathcal{S}^{t-1}$, we have \begin{align} \sup_{\widehat{\beta}=[\widehat{\beta}_{k}]_{1\leq k\leq t}\in\mathcal{S}^{t-1}}\bigg\{ f\Big(\sum_{i=1}^{t}\widehat{\beta}_{k}\phi_{k}\Big)-\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]\bigg\} & \leq C_7\sqrt{\frac{t\log n}{n}}\label{eq:sup-f-beta-phi-Ef} \end{align} with probability exceeding $1-O(n^{-11})$, where $C_7>0$ is some universal constant. Next, we would like to use \eqref{eq:sup-f-beta-phi-Ef} to bound $\mathbb{E}\big[f\big(\sum_{i=1}^{t}\beta_{k}\phi_{k}\big)\big]$. 
Let us define the following event: \begin{align*} \mathcal{E}_{1} & \coloneqq\left\{ f\Big(\sum_{i=1}^{t}\beta_{k}\phi_{k}\Big)\leq\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]+C_{7}\sqrt{\frac{t\log n}{n}}\right\} , \end{align*} which clearly obeys $ \mathbb{P}(\mathcal{E}_1)\geq 1-O(n^{-11}) . $ One can then decompose \begin{align} \mathbb{E}\bigg[f\Big(\sum_{i=1}^{t}\beta_{k}\phi_{k}\Big)\bigg] & =\mathbb{E}\bigg[f\Big(\sum_{i=1}^{t}\beta_{k}\phi_{k}\Big)\ind(\mathcal{E}_{1})\bigg]+\mathbb{E}\bigg[f\Big(\sum_{i=1}^{t}\beta_{k}\phi_{k}\Big)\ind(\mathcal{E}_{1}^{\mathrm{c}})\bigg] \label{eq:Exp-f-beta-phi-decompose} \end{align} The first term on the right-hand side of \eqref{eq:Exp-f-beta-phi-decompose} can be controlled as follows: \begin{align*} \mathbb{E}\bigg[f\Big(\sum_{i=1}^{t}\beta_{k}\phi_{k}\Big)\ind(\mathcal{E}_{1})\bigg] & \leq\mathbb{E}\bigg[\bigg\{\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]+C_{7}\sqrt{\frac{t\log n}{n}}\bigg\}\ind(\mathcal{E}_{1})\bigg]\\ & \leq\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]+C_{7}\sqrt{\frac{t\log n}{n}}+\Bigg|\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]+C_{7}\sqrt{\frac{t\log n}{n}}\,\Bigg|\,\mathbb{P}(\mathcal{E}_{1}^{\mathrm{c}})\\ & \leq\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]+O\left(\sqrt{\frac{t\log n}{n}}\right). \end{align*} Here, the last line holds since $\mathbb{P}(\mathcal{E}_{1}^{\mathrm{c}})\leq O(n^{-11})$ and \[ \bigg|\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]\bigg|\leq\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\left[\|g\|_{2}\right]\leq1+\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\left[\|g\|_{2}^{2}\right]=2, \] where the first inequality arises from \eqref{eq:fx-size-norm2}. When it comes to the second term on the right-hand side of \eqref{eq:Exp-f-beta-phi-decompose}, we make the observation that \begin{align*} \mathbb{E}\bigg[f\Big(\sum_{i=1}^{t}\beta_{k}\phi_{k}\Big)\ind(\mathcal{E}_{1}^{\mathrm{c}})\bigg] & \leq\mathbb{E}\bigg[\bigg\|\sum_{i=1}^{t}\beta_{k}\phi_{k}\bigg\|_{2}\ind(\mathcal{E}_{1}^{\mathrm{c}})\bigg]\leq\mathbb{E}\bigg[\|\beta\|_{2} \cdot \big\|\big[\phi_{1},\cdots,\phi_{t}\big]\big\|\ind(\mathcal{E}_{1}^{\mathrm{c}})\bigg]\\ & =\mathbb{E}\bigg[\big\|\big[\phi_{1},\cdots,\phi_{t}\big]\big\|\ind(\mathcal{E}_{1}^{\mathrm{c}})\bigg]\leq\mathbb{E}\Big[\big\|\big[\phi_{1},\cdots,\phi_{t}\big]\big\|_{\mathrm{F}}\ind(\mathcal{E}_{1}^{\mathrm{c}})\Big]\\ & \leq\sqrt{\mathbb{E}\bigg[\big\|\big[\phi_{1},\cdots,\phi_{t}\big]\big\|_{\mathrm{F}}^{2}\bigg]}\sqrt{\mathbb{E}\big[\ind(\mathcal{E}_{1}^{\mathrm{c}})\big]}\\ & \leq\sqrt{t}\cdot O(n^{-11})\leq O(n^{-10}), \end{align*} where the first inequality comes from \eqref{eq:fx-size-norm2}, the second line is valid since $\|\beta\|_2=1$, and the third line invokes the Cauchy-Schwarz inequality. Substituting the above two inequalities into \eqref{eq:Exp-f-beta-phi-decompose}, we obtain \begin{align} \mathbb{E}\bigg[f\Big(\sum_{i=1}^{t}\beta_{k}\phi_{k}\Big)\bigg] & \leq\mathop{\mathbb{E}}\limits _{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]+O\left(\sqrt{\frac{t\log n}{n}}\right). 
\label{eqn:w1-arbitrary-f} \end{align} To finish up, combine \eqref{eqn:w1-arbitrary-f} with \eqref{eq:K-R-duality} to arrive at \[ W_{1}\bigg(\mu\Big(\sum_{i=1}^{t}\beta_{k}\phi_{k}\Big),\mathcal{N}\Big(0,\frac{1}{n}I_{n}\Big)\bigg)=\sup\bigg\{\mathbb{E}\bigg[f\Big(\sum_{i=1}^{t}\beta_{k}\phi_{k}\Big)\bigg]-\mathbb{E}_{g\sim\mathcal{N}(0,\frac{1}{n}I_{n})}\big[f(g)\big]:f\text{ is }1\text{-Lipschitz}\bigg\}\lesssim \sqrt{\frac{t\log n}{n}}. \] \section{Proof of auxiliary lemmas for master theorems (Theorems~\ref{thm:recursion}-\ref{thm:main})} \subsection{Proof of Lemma~\ref{lem:distribution}} \label{sec:pf-distribution} Before embarking on the proof, let us introduce some notation and basic properties. Recall that $\{z_k\}_{k\leq t}$ are orthonormal (see Lemma~\ref{lemma:zk-orthonormal}) and $U_{t-1}=[z_1,\cdots,z_{t-1}] \in \ensuremath{\mathbb{R}}^{n\times (t-1)}$. For any $1\leq k< n$, we let $U_k^{\perp}\in \ensuremath{\mathbb{R}}^{n\times (n-k)}$ represent the orthogonal complement of $U_k$ (such that $U_k^{\top} U_k^{\perp} = 0$ and $U_k^{\perp\top}U_k^{\perp}=I_{n-k}$). We also define the projection of $W_{k+1}$ onto $U_k^{\perp}$ as follows \begin{align} \widetilde{W}_{k+1} &\coloneqq U_k^{\perp \top} W_{k+1} U_k^{\perp} \end{align} which together with the construction \eqref{eqn:Wt} clearly satisfies \begin{align} \widetilde{W}_{k+1} = U_k^{\perp \top} (I_n - z_{k}z_{k}^{\top}) W_{k}(I_n - z_{k}z_{k}^{\top}) U_k^{\perp} = U_k^{\perp \top} W_{k} U_k^{\perp} = \cdots = U_k^{\perp \top} W U_k^{\perp} \in \ensuremath{\mathbb{R}}^{(n-k)\times (n-k)}. \label{eq:tilde-W-k-W-relation} \end{align} In view of the construction, we also have \begin{align} W_{k+1} &= \left(I_n - z_{k}z_{k}^{\top}\right)W_{k}\left(I_n - z_{k}z_{k}^{\top}\right) = \cdots = \left( I_n - U_k U_k^{\top} \right) W \left( I_n - U_k U_k^{\top} \right) \notag\\ &= U_k^{\perp} U_k^{\perp \top} W U_k^{\perp} U_k^{\perp \top} = U_k^{\perp} \widetilde{W}_{k+1} U_k^{\perp\top}. \label{eq:W-k-tilde-W-relation} \end{align} To establish Lemma~\ref{lem:distribution}, the first step lies in proving the following claim. In the sequel, let us prove this crucial claim first before moving on to the next step. \begin{claim} \label{claim:independence} Consider any $2\leq k\leq n$. Conditional on $\{z_i\}_{i < k}$ and $x_1$, the following hold: % \begin{itemize} \item $\widetilde{W}_k$ is a (rescaled) Wigner matrix in the sense that its entries $\big\{(\widetilde{W}_k)_{ij} \mid i\geq j \big\}$ are independent obeying % \begin{align} (\widetilde{W}_k)_{ii} \sim \mathcal{N}\Big(0,\frac{2}{n}\Big) \qquad \text{and} \qquad (\widetilde{W}_k)_{ij} = (\widetilde{W}_k)_{ji} \sim \mathcal{N}\Big(0,\frac{1}{n}\Big) \quad \text{for any }i>j ; \label{eq:proj-Wk-Wigner} \end{align} % \item $W_k$ is conditionally independent of $\{W_iz_i\}_{i < k}$; \item the randomness of $x_k$ and $z_{k}$ comes purely from that of $\{W_iz_i\}_{i < k}$ and $x_1$, and hence $x_k$ and $z_{k}$ are conditionally independent of $W_k$. \end{itemize} \end{claim} \begin{proof}[Proof of Claim~\ref{claim:independence}] The proof of this claim proceeds via an inductive argument. \paragraph{The base case with $k=2$.} Consider first the case when $k = 2$. In view of the definition~\eqref{eqn:Wt}, we have \begin{align*} W_2 &= \big(I - z_{1}z_{1}^{\top}\big)W \big(I - z_{1}z_{1}^{\top}\big) \end{align*} where $z_1$ is independent from $W$. 
Let $z_1^{\perp}\in\ensuremath{\mathbb{R}}^{n\times (n-1)}$ denote the orthogonal complement of $z_1$ (so that $z_1^{\top}z_1^{\perp}=0$ and $z_1^{\perp\top}z_1^{\perp}=I_{n-1}$), and note that the projection of $W_2$ onto $z_1^{\perp}$ (see \eqref{eq:tilde-W-k-W-relation}) obeys \begin{equation} \widetilde{W}_2 = z_1^{\perp \top} W_2 z_1^{\perp} = z_1^{\perp \top} W z_1^{\perp} \overset{\mathrm{d}}{=} e_1^{\perp \top} W e_1^{\perp}, \end{equation} where the last relation arises from the rotational invariance of the Wigner matrix (with $e_1$ denoting the first standard basis vector). Therefore, it is readily seen that: conditioned on $z_{1}$, \begin{itemize} \item $\widetilde{W}_{2}$ is a (rescaled) Wigner matrix in $\ensuremath{\mathbb{R}}^{(n-1)\times (n-1)}$ obeying \eqref{eq:proj-Wk-Wigner}; \item $\widetilde{W}_{2}$ --- and hence $W_{2}$ --- is statistically independent from $Wz_1$. \end{itemize} In addition, recalling the update rule \eqref{eqn:AMP-updates}, the definition \eqref{eqn:z-w-init} of $z_1$ and the assumption $\eta_0(x_0)=0$, we have \begin{align*} x_2 &= \lambdav^\star v^{\star \top}\eta_1(x_1) + W\eta_1(x_1) = \big( \lambda v^{\star \top}z_1 \ltwo{\eta_1(x_1)} \big) \cdot v^\star+ \ltwo{\eta_1(x_1)} \cdot Wz_1, \end{align*} where the last step relies on the definition \eqref{eqn:z-w-init} of $z_{1}$. Given that $z_1$ is fully determined by $x_1$, we see that the randomness of $x_{2}$ --- and hence that of $z_2$ --- comes entirely from $Wz_{1}$ and $x_1$. We have thus established the advertised claim for the case with $k=2.$ \paragraph{The induction step.} Next, assuming that the claim holds for all steps $i$ with $i\leq k$, let us extend it to the $(k+1)$-th step. To begin with, the inductive assumption tells us that: conditional on $\{z_i\}_{i < k}$ and $x_1$, \begin{itemize} \item[(i)] $W_k$ is independent of $\{W_iz_i\}_{i < k}$; \item[(ii)] the randomness of $z_k$ purely comes from $\{W_iz_i\}_{i < k}$, and hence $W_k$ is also independent of $z_k$. \end{itemize} Taking these two conditions together reveals that: if we condition on $\{z_i\}_{i \leq k}$ and $x_1$ (namely, we condition on an additional variable $z_k$ compared to the above induction hypothesis), then clearly $W_k$ is still independent of $\{W_iz_i\}_{i < k}$. Recalling that \begin{equation} W_{k+1} = \left(I_n - z_{k}z_{k}^{\top}\right)W_{k}\left(I_n - z_{k}z_{k}^{\top}\right), \label{eq:expression-Wk+1-Wk} \end{equation} we can readily conclude that, conditioned on $\{z_i\}_{i \leq k}$ and $x_1$, the matrix $W_{k+1}$ is also independent of $\{W_iz_i\}_{i < k}$, given the conditional independence between $W_k$ and $\{W_iz_i\}_{i < k}$ and the fact that $z_k$ is now being conditioned upon. As a result, in order to show that $W_{k+1}$ is conditionally independent from $\{W_iz_i\}_{i \leq k}$, it suffices to justify that it is conditionally independent from $W_kz_k$, which we shall accomplish next. Recall that $\widetilde{W}_k$ is a rescaled Wigner matrix independent of $z_k$ (see Property (ii) above) when conditioned on $\{z_i\}_{i < k}$ and $x_1$. 
Akin to the argument for the base case, the rotational invariance of the Wigner matrix together with expression \eqref{eq:expression-Wk+1-Wk} tells us that: conditional on $\{z_i\}_{i \leq k}$ and $x_1$, \begin{itemize} \item $\widetilde{W}_{k+1}$ is a (rescaled) Wigner matrix in $\ensuremath{\mathbb{R}}^{(n-k)\times (n-k)}$ obeying \eqref{eq:proj-Wk-Wigner}; \item $\widetilde{W}_{k+1}$ --- and hence $W_{k+1}$ --- is statistically independent from $W_kz_k$. \end{itemize} We can thus conclude that: conditional on $\{z_i\}_{i \leq k}$ and $x_1$, both $W_{k+1}$ and $\widetilde{W}_{k+1}$ are independent from $\big\{ W_iz_i\big\}_{1\leq i\leq k}$. In addition, given the AMP update rule \eqref{eqn:AMP-updates}, it is legitimate to write \begin{align*} x_{k+1} &= (\lambdav^\star v^{\star \top} + W)\eta_k(x_{k}) - \big\langle\eta_k^{\prime}(x_{k})\big\rangle \cdot \eta_{k-1}(x_{k-1})\\ &= \lambdav^\star v^{\star \top}\eta_k(x_{k}) + \sum_{i = 1}^{k} \beta_{k}^i W_iz_i + \sum_{i = 1}^{k-1} z_i\Big[\langle W_iz_i, \eta_{k}(x_{k})\rangle - \langle\eta_k^{\prime}(x_{k})\rangle \beta_{k-1}^i - \beta_{k}^iz_i^{\top}W_iz_i\Big], \end{align*} where the last equality follows from expression~\eqref{eqn:xt-by-Wkzk}. Clearly, $x_{k+1}$ is determined by $x_k$, $\big\{W_iz_i\big\}_{i\leq k}$, and $\{z_i\}_{i\leq k}$ (given that $\beta_k^i$ is also determined by $z_i$ and $x_k$), in addition to other deterministic objects. Moreover, our induction hypothesis asserts that the randomness of $x_k$ and $z_k$ all comes from $\big\{W_iz_i\big\}_{i< k}$ and $x_1$. Consequently, these taken collectively imply that all randomness of $x_{k+1}$ (and hence $z_{k+1}$) comes from $\{W_iz_i\}_{i \leq k}$ and $x_1$. We have thus established the claim for step $k+1.$ To finish up, applying the inductive argument concludes the proof of Claim~\ref{claim:independence}. \end{proof} Armed with the results in Claim~\ref{claim:independence}, we can characterize the conditional distribution of $W_kz_k$. Given that the $z_i$'s are orthonormal (cf.~Lemma~\ref{lemma:zk-orthonormal}), we can apply Claim~\ref{claim:independence} to show that: conditional on $\{z_i\}_{1\leq i\leq k}$ and $x_1$, \begin{subequations} \label{defn:zWz} \begin{align} z_{i}^{\top}W_{k}z_{k} & =z_{i}^{\top}U_{k-1}^{\perp}U_{k-1}^{\perp\top}WU_{k-1}^{\perp}U_{k-1}^{\perp\top}z_{k}=0\qquad\qquad\text{for }i< k, \label{defn:zWz-i-less-k}\\ z_{k}^{\top}W_{k}z_{k} & =z_{k}^{\top}U_{k-1}^{\perp}U_{k-1}^{\perp\top}WU_{k-1}^{\perp}U_{k-1}^{\perp\top}z_{k}=\left(U_{k-1}^{\perp\top}z_{k}\right)^{\top}\widetilde{W}_{k}\left(U_{k-1}^{\perp\top}z_{k}\right)\overset{\mathrm{d}}{=}e_{1}^{\top}\widetilde{W}_{k}e_{1}\sim\mathcal{N}\Big(0,\frac{2}{n}\Big), \\ U_{k}^{\perp\top}W_{k}z_{k} & =U_{k}^{\perp\top}U_{k-1}^{\perp}U_{k-1}^{\perp\top}WU_{k-1}^{\perp}U_{k-1}^{\perp\top}z_{k}=\left(U_{k-1}^{\perp\top}U_{k}^{\perp}\right)^{\top}\widetilde{W}_{k}\left(U_{k-1}^{\perp\top}z_{k}\right)\sim\mathcal{N}\Big(0,\frac{1}{n}I_{n-k}\Big), \end{align} \end{subequations} where we have made use of the fact in Claim~\ref{claim:independence} that, conditional on $\{z_i\}_{1\leq i< k}$ and $x_1$, $\widetilde{W}_{k}$ is a rescaled Wigner matrix independent of $z_k$. 
Therefore, if we generate i.i.d.~Gaussian random variables $g_i^k \sim \mathcal{N}(0,\frac{1}{n})$ for all $i < k$, then conditional on $\{z_i\}_{i \leq k}$ and $x_1$, it follows that \begin{align} \phi_{k} & \coloneqq W_{k}z_{k}+\Big(\frac{\sqrt{2}}{2}-1\Big)z_{k}^{\top}W_{k}z_{k}\cdot z_{k}+\sum_{i=1}^{k-1}g_{i}^{k}z_{i} \label{eq:defn-phi-k-proof}\\ & =\bigg(\sum_{i=1}^{k}z_{i}z_{i}^{\top}\bigg)W_{k}z_{k}+\big(U_{k}^{\perp}U_{k}^{\perp\top}\big)W_{k}z_{k}+\Big(\frac{\sqrt{2}}{2}-1\Big)z_{k}^{\top}W_{k}z_{k}\cdot z_{k}+\sum_{i=1}^{k-1}g_{i}^{k}z_{i} \notag\\ & =\frac{\sqrt{2}}{2}\left(z_{k}^{\top}W_{k}z_{k}\right)z_{k}+\sum_{i=1}^{k-1}g_{i}^{k}z_{i}+U_{k}^{\perp}\big(U_{k}^{\perp\top}W_{k}z_{k}\big) \notag\\ & \sim\mathcal{N}\Big(0,\frac{1}{n}I_{n}\Big). \label{eq:phi-k-distribution} \end{align} Here, the penultimate line makes use of \eqref{defn:zWz-i-less-k} and a little algebra, whereas the last line is valid since, along each basis direction (i.e., $z_1,\cdots,z_k$ and each column of $U_k^{\perp}$), the projection of $\phi_k$ is independent $\mathcal{N}(0,1/n)$. In fact, \eqref{eq:phi-k-distribution} tells us that the conditional distribution of $\phi_{k}$ is always $\mathcal{N}(0, \frac{1}{n}I_n)$ no matter what value the sequence $\{z_i\}_{i \leq k}$ takes, thus indicating the (unconditional) distribution of $\phi_{k}$ as follows: % \begin{align} \phi_{k} \sim\mathcal{N}\Big(0,\frac{1}{n}I_{n}\Big). \label{eq:phi-k-distribution-marginal} \end{align} Finally, we demonstrate that $\{\phi_i\}_{1 \le i \leq k}$ are independent. To this end, we first observe that $\phi_{k}$ is independent of $\{z_i\}_{i < k}$ and $x_1$, which is an immediate consequence of the conditional distribution derivation \eqref{eq:phi-k-distribution}. Further, combining Claim~\ref{claim:independence} with the definition \eqref{eq:defn-phi-k-proof} of $\phi_k$ (which depends only on $W_kz_k$ and $\{g_i^k\}$ conditional on $\{z_i\}_{1\leq i\leq k}$) reveals that: conditional on $\{z_i\}_{i < k}$ and $x_1$, $\phi_k$ is statistically independent from $\phi_{k-1},\cdots,\phi_1$. Letting us abuse the notation and use $f$ to represent the pdf of the random vectors of interest, we obtain \begin{align*} & f(\phi_{k},\phi_{k-1},\cdots,\phi_{1})={\displaystyle \int}f\big(\phi_{k},\phi_{k-1},\cdots,\phi_{1}\mid z_{k-1},\cdots,z_{1},x_{1}\big)\mu\left(\mathrm{d}z_{k-1},\cdots,\mathrm{d}z_{1},\mathrm{d}x_{1}\right)\\ & ={\displaystyle \int}f\big(\phi_{k}\mid z_{k-1},\cdots,z_{1},x_{1}\big)f\big(\phi_{k-1},\cdots,\phi_{1}\mid z_{k-1},\cdots,z_{1},x_{1}\big)\mu\left(\mathrm{d}z_{k-1},\cdots,\mathrm{d}z_{1},\mathrm{d}x_{1}\right)\\ & =f(\phi_{k}){\displaystyle \int}f\big(\phi_{k-1},\cdots,\phi_{1}\mid z_{k-1},\cdots,z_{1},x_{1}\big)\mu\left(\mathrm{d}z_{k-1},\cdots,\mathrm{d}z_{1},\mathrm{d}x_{1}\right)\\ & =f\big(\phi_{k}\big)f\big(\phi_{k-1},\cdots,\phi_{1}\big), \end{align*} where the second line holds since, as shown above, $\phi_k$ is independent of $\phi_{k-1},\cdots,\phi_1$ when conditioned on $z_1,\cdots,z_{k-1}$ and $x_1$, and the third line makes use of the statistical independence between $\phi_k$ and $z_{k-1},\cdots,z_1,x_1$. Repeating the above derivation gives \begin{align*} f(\phi_{k},\phi_{k-1},\cdots,\phi_{1}) & =f\big(\phi_{k}\big)f\big(\phi_{k-1},\cdots,\phi_{1}\big)=\cdots=f\big(\phi_{k}\big)f\big(\phi_{k-1}\big)\cdots f\big(\phi_{1}\big), \end{align*} thereby justifying that $\{\phi_i\}_{1 \le i \leq k}$ are statistically independent. 
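As a quick numerical sanity check of the construction \eqref{eq:defn-phi-k-proof} (not used anywhere in the proof), the following Python snippet instantiates the projection recursion $W_{k+1}=(I_n-z_kz_k^{\top})W_k(I_n-z_kz_k^{\top})$ under the simplifying, hypothetical choice $z_i=e_i$ (any fixed orthonormal collection would serve the same purpose), and verifies empirically that the coordinates of each $\phi_k$ have variance close to $1/n$ and that distinct $\phi_k$'s are essentially uncorrelated, in line with \eqref{eq:phi-k-distribution-marginal} and the independence claim above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, K, trials = 40, 4, 5000
c = np.sqrt(2) / 2 - 1            # coefficient (sqrt(2)/2 - 1) in the definition of phi_k

phis = np.zeros((trials, K, n))
for t in range(trials):
    # Wigner matrix W: off-diagonal variance 1/n, diagonal variance 2/n
    A = rng.standard_normal((n, n)) / np.sqrt(n)
    W = (A + A.T) / np.sqrt(2)
    Wk = W.copy()
    for k in range(K):
        z = np.zeros(n); z[k] = 1.0            # hypothetical orthonormal choice z_k = e_k
        phi = Wk @ z + c * (z @ Wk @ z) * z
        phi[:k] += rng.standard_normal(k) / np.sqrt(n)   # auxiliary g_i^k ~ N(0, 1/n), i < k
        phis[t, k] = phi
        P = np.eye(n) - np.outer(z, z)
        Wk = P @ Wk @ P                        # W_{k+1} = (I - z_k z_k^T) W_k (I - z_k z_k^T)

print("target variance 1/n =", 1 / n)
print("empirical variances of phi_K (first 5 coords):", phis[:, K - 1].var(axis=0)[:5])
print("empirical corr(phi_1[3], phi_K[3]):",
      np.corrcoef(phis[:, 0, 3], phis[:, K - 1, 3])[0, 1])
\end{verbatim}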
\subsection{Proof of Lemma~\ref{lem:concentration}} \label{sec:pf-concentration} To begin with, it is seen from property~\eqref{defn:zWz} that $z_k^{\top}W_kz_k$ follows a normal distribution with variance $2/n$ (given that this distribution is independent of $\{z_i\}_{1\leq i\leq k}$ and $x_1$). Standard Gaussian concentration inequalities \citep{vershynin2018high} together with the union bound tell us that \begin{equation} \max_{1\leq k\leq n} \big| z_k^{\top}W_kz_k \big| \lesssim \sqrt{\frac{\log n}{n}} \end{equation} with probability at least $1-n^{-11}$. Consequently, we have \begin{align} \Big|\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t}^{k}z_{k}^{\top}W_{k}z_{k}\Big| & \leq\Big|\max_{1\leq k\leq n}z_{k}^{\top}W_{k}z_{k}\Big|\cdot\sum_{k=1}^{t-1}\big|\mu_{t}^{k}\beta_{t}^{k}\big|\leq\Big|\max_{1\leq k\leq n}z_{k}^{\top}W_{k}z_{k}\Big|\cdot\|\mu_{t}\|_{2}\|\beta_{t}\|_{2}\nonumber \\ & \lesssim\sqrt{\frac{\log n}{n}}\|\beta_{t}\|_{2},\label{eq:sum-mu-beta-zWz} \end{align} with probability at least $1-n^{-11}$, where we remind the reader of the notation $\mu_t=[\mu_t^k]_{1\leq k\leq t}$ and $\beta_t=[\beta_t^k]_{1\leq k\leq t}$ and the fact that $\|\mu_t\|_2=1$. Next, we turn to the following term \begin{align*} \mathcal{I}_{1} & \coloneqq\sum_{k=1}^{t-1}\mu_{t}^{k}\Big(\sum_{i=1}^{k-1}\beta_{t}^{i}g_{i}^{k}+\sum_{i=k+1}^{t}\beta_{t}^{i}g_{k}^{i}\Big) \eqqcolon \sum_{k=1}^{t-1}\mu_{t}^{k} \varrho_k, \end{align*} where we recall that each random variable $g_i^{k}$ with $i\neq k$ is independently generated from $\mathcal{N}(0,1/n)$, which is also independent from $\beta_{t}$ (but not $\mu_t$). Conditional on $\beta_t$, one has \begin{align*} \mathsf{Var}\left(\varrho_{k}\mid\beta_{t}\right) & \coloneqq\frac{1}{n}\sum_{i=1}^{k-1}\big(\beta_{t}^{i}\big)^{2}+\frac{1}{n}\sum_{i=k+1}^{t}\big(\beta_{t}^{i}\big)^{2}\leq\frac{\|\beta_{t}\|_{2}^{2}}{n}, \end{align*} which combined with Gaussian concentration inequalities \citep{vershynin2018high} and the union bound yields \begin{align} \max_{1\leq k\leq n}|\varrho_k| \lesssim \frac{\|\beta_{t}\|_{2}\sqrt{\log n}}{\sqrt{n}} \label{eq:var-rho-k-UB} \end{align} with probability at least $1-O(n^{-11})$. As a result, the Cauchy-Schwarz inequality gives \begin{align} |\mathcal{I}_{1}| & =\Big|\sum_{k=1}^{t-1}\mu_{t}^{k}\varrho_{k}\Big|\leq\|\mu_{t}\|_{2}\sqrt{\sum_{k=1}^{t-1}\varrho_{k}^{2}} \lesssim \sqrt{\frac{t\log n}{n}} \|\beta_{t}\|_{2} \label{eq:UB-I1-135} \end{align} with probability at least $1-O(n^{-11})$, where we have used \eqref{eq:var-rho-k-UB} and the fact $\|\mu_t\|_2=1$. Combining \eqref{eq:sum-mu-beta-zWz} and \eqref{eq:UB-I1-135} immediately finishes the proof. \subsection{Proof of Lemma~\ref{lem:recursion}} \label{sec:pf-lem-recursion} Lemma~\ref{lem:recursion} involves bounds concerning the continuous part of the function and that of the discontinuous part, which we shall prove separately. 
\paragraph{The continuous part: proof of inequalities \eqref{eq:lem-recursion-smooth-part-1} and \eqref{eq:lem-recursion-smooth-part-2}.} First, some basic algebra leads to \begin{align} \notag\Big\langle\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k},\eta_{t}^{\prime}(v_{t})\circ\xi_{t-1}\Big\rangle-\langle\eta_{t}^{\prime\prime}(v_{t})\circ\xi_{t-1}\rangle\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k} & =\bigg\langle\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\circ\eta_{t}^{\prime}(v_{t}),\xi_{t-1}\bigg\rangle-\bigg\langle\frac{1}{n}\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\eta_{t}^{\prime\prime}(v_{t}),\xi_{t-1}\bigg\rangle\\ & \le\bigg\|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\circ\eta_{t}^{\prime}(v_{t})-\frac{1}{n}\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\eta_{t}^{\prime\prime}(v_{t})\bigg\|_{2}\|\xi_{t-1}\|_{2}.\label{eqn:shostakovich-simple} \end{align} The condition \eqref{defi:D} imposed in Assumption~\ref{assump:A-H-eta} tells us that \begin{align*} \Big\|\sum_{k = 1}^{t-1} \mu_t^k\phi_k \circ \eta_{t}^{\prime} - \frac{1}{n}\sum_{k = 1}^{t-1} \mu_t^k\beta_{t-1}^k\eta_{t}^{\prime\prime}\Big\|_2^2 & \le \kappa_t^2 + D_t, \end{align*} which taken collectively with inequality~\eqref{eqn:shostakovich-simple} concludes the proof of inequality \eqref{eq:lem-recursion-smooth-part-1}. When it comes to the second claim \eqref{eq:lem-recursion-smooth-part-2}, we observe that for any $t\leq n$, \begin{align*} \rho_{1}\bigg\langle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|,\big|\xi_{t-1}\big|^{2}\Big\rangle+\rho_{2}\Big\langle\big|\xi_{t-1}\big|^{2}\Big\rangle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\bigg| & \leq\rho_{1}\bigg\{\max_{1\leq i\leq n}\Big|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\Big|_{i}\bigg\}\cdot\|\xi_{t-1}\|_{2}^{2}+\frac{\rho_{2}}{n}\|\xi_{t-1}\|_{2}^{2}\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\bigg|\\ & \lesssim\left(\rho_{1}\sqrt{\frac{t\log n}{n}}+\frac{\rho_{2}\|\beta_{t-1}\|_{2}}{n}\right)\|\xi_{t-1}\|_{2}^{2}. \end{align*} Here, the last line makes use of two properties: (i) $\big| \sum_{k = 1}^{t-1} \mu_t^k\beta_{t-1}^k \big| \le \ltwo{\mu_t^k} \|\beta_{t-1}\|_2 = \ltwo{\beta_{t-1}}$ (given that $\mu_t$ is constructed as a unit vector); (ii) the standard Gaussian concentration inequalities \citep{vershynin2018high} indicating that, with probability at least $1 - O(n^{-11})$, \[ \max_{1\leq i\leq n}\Big|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\Big|_{i}\leq\max_{1\leq i\leq n}\|\mu_{t}\|_{2}\big\|[\phi_{1,i},\cdots,\phi_{t-1,i}]\big\|_{2}=\max_{1\leq i\leq n}\big\|[\phi_{1,i},\cdots,\phi_{t-1,i}]\big\|_{2}\lesssim \frac{\sqrt{t}+\sqrt{\log n}}{\sqrt{n}} . \] This establishes inequality \eqref{eq:lem-recursion-smooth-part-2}. \paragraph{The discontinuous part: proof of inequality \eqref{eqn:new-version}.} We first make the observation that: the quantity $\theta(m)$ defined in expression~\eqref{defi:theta} obeys \begin{align} \label{eqn:zero-norm-comparison} \sum_{j=1}^{n}\ind\Big(\Big|\alpha_{t}v^\star_{j}+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}-m\Big|\le|\xi_{t-1,j}|\Big) \le\sum_{j=1}^{n}\ind\Big(\Big|\alpha_{t}v^\star_{j}+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}-m\Big|\le\theta(m)\Big), \end{align} which can be proved using the definition \eqref{defi:theta} as follows. 
\begin{proof}[Proof of \eqref{eqn:zero-norm-comparison}] By defining the set \begin{align*} \mathcal{J} \coloneqq \bigg\{j\in [n] : \Big|\alpha_tv_j^{\star} + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_{k, j} - m\Big| \le |\xi_{t-1, j}|\bigg\}, \end{align*} we can easily see that \begin{align} \sum_{j \in \mathcal{J}}\Big|\alpha_tv_j^{\star} + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_{k, j} - m_i\Big|^2 \le \sum_{j \in \mathcal{J}} |\xi_{t-1, j}|^2 \le \|\xi_{t-1}\|_2^2 . \label{eq:Jtrue-condition} \end{align} Additionally, if we define another set $\mathcal{J}^{\prime}$ as follows \begin{align} \mathcal{J}^{\prime} \coloneqq \bigg\{j\in [n] : \Big|\alpha_tv_j^{\star} + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_{k, j} - m\Big| \le \theta(m) \bigg\}, \label{eq:Jprime-condition} \end{align} then in view of definition of $\theta(m)$, $\mathcal{J}^{\prime}$ is clearly the index set with the largest cardinality obeying $$\sum_{j \in \mathcal{J}^{\prime}}\Big|\alpha_tv_j^{\star} + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_{k, j} - m\Big|^2 \le \|\xi_{t-1}\|_2^2.$$ Since $\mathcal{J}$ also satisfies this relation (cf.~\eqref{eq:Jtrue-condition}), we arrive at $\left|\mathcal{J}^{\prime}\right| \ge \left|\mathcal{J}\right|$, thus validating inequality~\eqref{eqn:zero-norm-comparison}. \end{proof} Next, for any $m\in \mathcal{M}_{\mathsf{dc}}$, define $\Gamma(m) \coloneqq \big[\Gamma_j(m) \big]_{1\leq j\leq n}\in \ensuremath{\mathbb{R}}^n$ (see \eqref{eq:Gamma-ub-discontinuous}). Equipped with the above relation \eqref{eqn:zero-norm-comparison}, one can show that \begin{align} 2\rho\bigg\langle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|,\,\Gamma\circ\big|\xi_{t-1}\big|\bigg\rangle & =2\rho\bigg\langle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|\circ\Gamma,\,\big|\xi_{t-1}\big|\bigg\rangle=\sum_{m\in\mathcal{M}_{\mathsf{dc}}}2\rho\bigg\langle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|\circ\Gamma(m),\,\big|\xi_{t-1}\big|\bigg\rangle \notag\\ & \leq\sum_{m\in\mathcal{M}_{\mathsf{dc}}}2\rho\Big\|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\circ\Gamma(m)\Big\|_{2}\cdot\big\|\xi_{t-1}\big\|_{2}. \label{eqn:sonnet} \end{align} To control the right-hand side of \eqref{eqn:sonnet}, we first apply inequality~\eqref{eqn:vive} in Lemma~\ref{lem:brahms-lemma} with $s = t$ to obtain \begin{align*} \sum_{i = 1}^t \Big|\sum_{k = 1}^{t-1} \mu_t^k\phi_k\Big|_{(i)}^2 \lesssim \frac{t\log{n}}{n} \end{align*} with probability at least $1-O(n^{-11})$. This relation in turn implies that for every $j \geq t$, \begin{align*} \Big|\sum_{k = 1}^{t-1} \mu_t^k\phi_k\Big|_{(j)}^2 \leq \frac{1}{t} \sum_{i = 1}^t \Big|\sum_{k = 1}^{t-1} \mu_t^k\phi_k\Big|_{(i)}^2 \lesssim \frac{\log{n}}{n}. \end{align*} With these two inequalities in mind, we can deduce that \begin{align} \Big\|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\circ\Gamma(m)\Big\|_{2}^{2} & \leq \sum_{i=1}^{t}\Big|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\Big|_{(i)}^{2}+\Big|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\Big|_{(t+1)}^{2}\cdot\big\|\Gamma(m)\big\|_{1} \notag\\ & \lesssim\frac{t\log n}{n}+\frac{\log n}{n}\sum_{j=1}^{n}\ind\Big(\Big|\alpha_{t}v^\star_{j}+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}-m\Big|\le|\xi_{t-1,j}|\Big). 
\label{eqn:l2-decomposition} \end{align} This taken collectively with inequality~\eqref{eqn:sonnet} leads to \begin{align*} 2\rho\bigg\langle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|,\,\Gamma\circ\big|\xi_{t-1}\big|\bigg\rangle & \lesssim\rho\sum_{m\in\mathcal{M}_{\mathsf{dc}}}\left(\sqrt{\frac{t\log n}{n}}+\sqrt{\frac{\log n}{n}}\sqrt{\sum_{j=1}^{n}\ind\Big(\Big|\alpha_{t}v^\star_{j}+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}-m\Big|\le|\xi_{t-1,j}|\Big)}\right)\big\|\xi_{t-1}\big\|_{2}\\ & \lesssim\rho\sum_{m\in\mathcal{M}_{\mathsf{dc}}}\left(\sqrt{\frac{t\log n}{n}}+\sqrt{\frac{\log n}{n}}\sqrt{\sum_{j=1}^{n}\ind\Big(\Big|\alpha_{t}v^\star_{j}+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}-m\Big|\le \theta(m) \Big)}\right) \big\|\xi_{t-1}\big\|_{2}\\ & \lesssim\rho\sqrt{\frac{(E_{t}+t)\log n}{n}}\big\|\xi_{t-1}\big\|_{2}, \end{align*} where the second inequality comes from \eqref{eqn:zero-norm-comparison}, and the last inequality makes use of the definition \eqref{defi:E} of $E_t$. Similar calculations lead to \begin{align*} & \Big\{2\rho\langle\Gamma\rangle+2\rho_{1}\big\langle\Gamma\circ\big|\xi_{t-1}\big|\big\rangle\Big\}\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\bigg|\leq2\Big(\rho+\rho_{1}\big\|\xi_{t-1}\big\|_{\infty}\Big)\langle\Gamma\rangle\big\|\mu_{t}\big\|_{2}\big\|\beta_{t-1}\big\|_{2}\\ & \qquad=\frac{2(\rho+\rho_{1}\big\|\xi_{t-1}\big\|_{\infty})}{n}\Big\{\sum_{m\in\mathcal{M}_{\mathsf{dc}}}\sum_{j=1}^{n}\Gamma_{j}(m)\Big\}\big\|\beta_{t-1}\big\|_{2}\\ & \qquad\leq\frac{2(\rho+\rho_{1}\big\|\xi_{t-1}\big\|_{\infty})\big\|\beta_{t-1}\big\|_{2}}{n}\sum_{m\in\mathcal{M}_{\mathsf{dc}}}\sum_{j=1}^{n}\ind\Big(\Big|\alpha_{t}v^\star_{j}+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}-m\Big|\le|\xi_{t-1,j}|\Big)\\ & \qquad\leq\frac{2(\rho+\rho_{1}\big\|\xi_{t-1}\big\|_{\infty})\big\|\beta_{t-1}\big\|_{2}}{n}\sum_{m\in\mathcal{M}_{\mathsf{dc}}}\sum_{j=1}^{n}\ind\Big(\Big|\alpha_{t}v^\star_{j}+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}-m\Big|\le\theta(m)\Big)\\ & \qquad\leq\frac{2(\rho+\rho_{1}\big\|\xi_{t-1}\big\|_{\infty})E_{t}\big\|\beta_{t-1}\big\|_{2}}{n} \end{align*} Taking the above pieces collectively, we demonstrate that \begin{align*} & 2\rho\bigg\langle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|,\,\Gamma\circ\big|\xi_{t-1}\big|\bigg\rangle+\Big\{2\rho\langle\Gamma\rangle+2\rho_{1}\big\langle\Gamma\circ\big|\xi_{t-1}\big|\big\rangle\Big\}\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\bigg|\\ & \qquad\lesssim\rho\sqrt{\frac{(E_{t}+t)\log n}{n}}\big\|\xi_{t-1}\big\|_{2}+\frac{(\rho+\rho_{1}\big\|\xi_{t-1}\big\|_{\infty})E_{t}\big\|\beta_{t-1}\big\|_{2}}{n} \end{align*} as claimed. \subsection{Proof of Lemma~\ref{lem:recursion2}} \label{sec:pf-lem-recursion2} Recall from \eqref{eqn:shostakovich-delta-t} that $\delta_{t}$ obeys \begin{align} \Big|\delta_{t} & -\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}\Big| \leq \rho_{1}\big|\xi_{t-1}\big|^{2}+ 2\rho\Gamma \circ \big|\xi_{t-1}\big|. \label{eqn:shostakovich-delta-t-135} \end{align} To bound $\inprod{v^\star}{\delta_t}$, we control the inner product of $v^\star$ with each part of \eqref{eqn:shostakovich-delta-t-135} separately. 
First, observe that \begin{align*} \Big|\Big\langle v^\star, \eta_{t}^{\prime}\Big(\alpha_tv^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k\Big) \circ \xi_{t-1}\Big\rangle\Big| &= \Big|\Big\langle v^\star \circ \eta_{t}^{\prime}\Big(\alpha_tv^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k\Big), \xi_{t-1} \Big\rangle\Big| \\ &\le \Big\|v^\star \circ \eta_{t}^{\prime}\Big(\alpha_tv^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k\Big)\Big\|_2\cdot \|\xi_{t-1}\|_2 \leq \rho\|\xi_{t-1}\|_2, \end{align*} where the last step follows from $\ltwo{v^\star} = 1$ and $\|\eta'_t\|_{\infty} \leq \rho.$ Also, a little algebra yields \begin{align*} \big|\big\langle v^\star, \rho_1 \big|\xi_{t-1}\big|^2\big\rangle\big| \le \rho_1\|v^\star\|_{\infty} \big\| |\xi_{t-1} |^2 \big\|_1 = \rho_1\|v^\star\|_{\infty} \big\|\xi_{t-1} \big\|_2^2. \end{align*} Also, reusing our earlier notation $\Gamma(m)=[\Gamma_j(m)]_{1\leq j\leq n}\in \ensuremath{\mathbb{R}}^n$ (see \eqref{eq:Gamma-ub-discontinuous}), \begin{align*} \big|\big\langle v^\star,2\rho\Gamma\circ\big|\xi_{t-1}\big|\big\rangle\big| & =2\rho\big|\big\langle v^\star\circ\Gamma,\big|\xi_{t-1}\big|\big\rangle\big|=2\rho\bigg|\sum_{m\in\mathcal{M}_{\mathsf{dc}}}\big\langle v^\star\circ\Gamma(m),\big|\xi_{t-1}\big|\big\rangle\bigg|\\ & \leq2\rho\sum_{m\in\mathcal{M}_{\mathsf{dc}}}\big\|v^\star\circ\widetilde{\Gamma}(m)\big\|_{2}\big\|\xi_{t-1}\big\|_{2}\\ & \leq\big|\mathcal{M}_{\mathsf{dc}}\big|\cdot2\rho\max_{m\in\mathcal{M}_{\mathsf{dc}}}\big\|v^\star\circ\widetilde{\Gamma}(m)\big\|_{2}\big\|\xi_{t-1}\big\|_{2}\\ & \lesssim\rho\bigg(\sum_{i=1}^{E_{t}}|v^\star|_{(i)}^{2}\bigg)^{1/2}\|\xi_{t-1}\|_{2}, \end{align*} where $\widetilde{\Gamma}(m)\coloneqq\big[\widetilde{\Gamma}_{j}(m)\big]_{1\leq j\leq n}\in\ensuremath{\mathbb{R}}^{n}$ with $$\widetilde{\Gamma}_{j}(m)=\ind\Big(\Big|\alpha_{t}v^\star_{j}+\sum\nolimits_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}-m\Big|\le\theta(m)\Big).$$ Here, the second line arises from the relation \eqref{eqn:zero-norm-comparison}, whereas the last line is valid since $\big|\mathcal{M}_{\mathsf{dc}}\big|=O(1)$, $\sum_{m\in\mathcal{M}_{\mathsf{dc}}}\sum_{j=1}^{n}\widetilde{\Gamma}_{j}(m) \leq E_{t}$ (see \eqref{defi:E}), and the fact that each $\widetilde{\Gamma}_{j}(m)$ is an indicator variable. Putting the above bounds together and making use of \eqref{eqn:shostakovich-delta-t-135}, we immediately establish \eqref{eqn:cello}. We then move on to proving the other two inequalities \eqref{eqn:viola} and \eqref{eqn:violin}. 
In view of \eqref{eqn:shostakovich-delta-t-135}, we have \begin{align*} \Big|\big\langle\eta_{t}(v_{t}),\delta_{t}\big\rangle\Big| & \leq\Big|\big\langle\eta_{t}(v_{t}),\eta_{t}^{\prime}(v_{t})\circ\xi_{t-1}\big\rangle\Big|+\rho_{1}\Big|\big\langle\eta_{t}(v_{t}),\big|\xi_{t-1}\big|^{2}\big\rangle\Big|+2\rho\Big|\big\langle\eta_{t}(v_{t}),\Gamma\circ\big|\xi_{t-1}\big|\big\rangle\Big|\\ & =\Big|\big\langle\eta_{t}(v_{t})\circ\eta_{t}^{\prime}(v_{t}),\xi_{t-1}\big\rangle\Big|+\rho_{1}\big\|\eta_{t}(v_{t})\big\|_{\infty}\Big\|\big|\xi_{t-1}\big|^{2}\Big\|_{1}+2\rho\Big|\big\langle\eta_{t}(v_{t})\circ\Gamma,\big|\xi_{t-1}\big|\big\rangle\Big|\\ & \leq\big\|\eta_{t}(v_{t})\circ\eta_{t}^{\prime}(v_{t})\big\|_{2}\big\|\xi_{t-1}\big\|_{2}+\rho_{1}\big\|\eta_{t}(v_{t})\big\|_{\infty}\big\|\xi_{t-1}\big\|_{2}^{2}+2\rho\big\|\eta_{t}(v_{t})\big\|_{\infty}\big\|\Gamma\big\|_{2}\big\|\xi_{t-1}\big\|_{2}\\ & \lesssim F_{t}\big\|\xi_{t-1}\big\|_{2}+\rho_{1}G_{t}\big\|\xi_{t-1}\big\|_{2}^{2}+ \rho G_{t}\sqrt{E_{t}}\big\|\xi_{t-1}\big\|_{2}. \end{align*} Here, the last line follows from Assumptions~\eqref{defi:F} and \eqref{defi:G}, as well as the fact that \[ \big\|\Gamma\big\|_{2} \leq \big\|\widetilde{\Gamma}\big\|_{2} \leq \sum_{m\in\mathcal{M}_{\mathsf{dc}}} \big\|\widetilde{\Gamma}(m)\big\|_{2} = \sum_{m\in\mathcal{M}_{\mathsf{dc}}} \bigg( \sum_{j=1}^n \widetilde{\Gamma}_j(m) \bigg)^{1/2} \leq \sum_{m\in\mathcal{M}_{\mathsf{dc}}} \sqrt{E_t} \asymp \sqrt{E_t}, \] where we have invoked the assumption that $|\mathcal{M}_{\mathsf{dc}}|=O(1)$. This concludes the proof of the claim \eqref{eqn:viola}. Additionally, the relation \eqref{eqn:shostakovich-delta-t-135} also tells us that \begin{align*} \big\|\delta_{t}\big\|_{2}^{2} & \leq3\Big\|\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}\Big\|_{2}^{2}+3\rho_{1}^{2}\Big\|\big|\xi_{t-1}\big|^{2}\Big\|_{2}^{2}+12\rho^{2}\Big\|\Gamma\circ\big|\xi_{t-1}\big|\Big\|_{2}^{2}\\ & \lesssim\Big\|\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\Big\|_{\infty}^{2}\big\|\xi_{t-1}\big\|_{2}^{2}+\rho_{1}^{2}\big\|\xi_{t-1}\big\|_{2}^{4}+\rho^{2}\|\Gamma\|_{\infty}\big\|\xi_{t-1}\big\|_{2}^{2}\\ & \lesssim \rho^{2}\big\|\xi_{t-1}\big\|_{2}^{2}+\rho_{1}^{2}\big\|\xi_{t-1}\big\|_{2}^{4}+\rho^{2}E_{t}\big\|\xi_{t-1}\big\|_{2}^{2} \end{align*} where the last inequality comes from the fact that $\|\Gamma\|_{\infty}\leq\|\widetilde{\Gamma}\|_{\infty}\leq E_{t}$. This establishes \eqref{eqn:violin}. \section{Auxiliary lemmas and details} \subsection{Proof of Proposition~\ref{thm:sparse-init}} \label{sec:pf-sparse-ini} Let us begin by considering the magnitude of $\langle v^{\star}, \eta_2(x_2)\rangle$, which is the focus of the claim \eqref{eqn:vstar-s}. From the AMP iteration~\eqref{eqn:AMP-updates}, it is seen that \begin{align} \label{eqn:intermezzi} x_2 = M\eta_1(x_1) = \Big(\lambdav^\star\big(v^{\star}\big)^{\top} + W\Big) e_s = \lambdav^\star_sv^\star + We_s, \end{align} where we note that $\eta_{1}(x_1) = e_{s}$ as a consequence of the denoising function \eqref{eqn:eta-sparse}. Recognizing that each $W_{ii}$ (resp.~$W_{ij}$ with $i\neq j$) is an independent Gaussian random variable with variance $2/n$ (resp.~$1/n$), one can invoke standard Gaussian concentration results to obtain \citep{vershynin2018high} \begin{align} \label{eqn:simple-diagonal} \max_{1\leq i, j\leq n} \left|\big(We_j\big)_{i}\right| \leq 6 \sqrt{\frac{\log n}{n}} \end{align} with probability at least $1 - O(n^{-11})$. 
Therefore, if $\lambda|v^\star_s| \geq 12 \sqrt{\frac{k \log n}{n}}$, then it follows from \eqref{eqn:simple-diagonal} that \begin{align*} |x_{2, i}| \geq |\lambda v^\star_s|\cdot |v_i^{\star} | - \big|\big(We_s\big)_{i} \big| &> 6 \sqrt{\frac{\log n}{n}},\qquad\text{if }|v^\star_i| \ge \frac{1}{2\sqrt{k}}; \\ |x_{2, i}| \leq \big|\big(We_s\big)_{i}\big| &\leq 6 \sqrt{\frac{\log n}{n}},\qquad\text{if }v^\star_i = 0. \end{align*} In the meantime, the above argument also reveals that \begin{align*} \mathrm{sign}(x_{2, i}) &= \mathrm{sign}(v^\star_i),\qquad&&\text{if }|x_{2, i}| > 6 \sqrt{\frac{\log n}{n}} ; \\ \big(|x_{2,i}|-\tau_2\big)_+ &= 0, \qquad &&\text{if }|x_{2, i}| \leq 6 \sqrt{\frac{\log n}{n}}. \end{align*} With the preceding observations in place, we can demonstrate that \begin{align*} \notag \Big\|\mathrm{sign}(x_2)\circ (|x_2| - \tau_2 1)_{+}\Big\|_2^2 &\le \sum_{i = 1}^n x_{2, i}^2\ind\Big(|x_{2, i}| \gtrsim \sqrt{\frac{\log n}{n}}\Big) \lesssim \sum_{i : v^\star_i \ne 0} \Big(|\lambda v^\star_s v^\star_i| + \sqrt{\frac{\log n}{n}}\Big)^2 \\ &\lesssim \left(\lambda v^\star_s\right)^2 + \frac{k\log n}{n} \asymp \left(\lambda v^\star_s\right)^2 \end{align*} under the assumption that $\lambda|v^\star_s| \geq 12 \sqrt{\frac{k \log n}{n}}$. In addition, the above properties also reveal that: \begin{align} \label{eqn:qianqian} \notag \big|\langle v^\star, \mathrm{sign}(x_2)\circ (|x_2| - \tau_2 1)_{+}\rangle\big| & = \sum_{ i : |x_{2, i}| > 6 \sqrt{\frac{\log n}{n}}} \left(|x_{2, i}| - \tau_2\right)|v^\star_i| \geq \sum_{i : |v^\star_i| \ge \frac{1}{2\sqrt{k}}} \left(|x_{2, i}| - \tau_2\right)|v^\star_i| \\ &\geq \frac{1}{2} \sum_{i : |v^\star_i| \ge \frac{1}{2\sqrt{k}}} |x_{2, i}|\cdot |v^\star_i| \gtrsim \sum_{i : |v^\star_i| \ge \frac{1}{2\sqrt{k}}} |\lambda v^\star_s|\, v_i^{\star 2} \gtrsim \lambda |v^\star_s|, \end{align} where the last line follows from the assumption $\lambda|v^\star_s| \geq C_5 \sqrt{\frac{k \log n}{n}}$ for some large enough constant $C_5>0$ (so that $|x_{2, i}| \geq 2 \tau_2$), as well as the following condition (using the $k$-sparse property of $v^\star$) \begin{align} \label{eqn:vstar-basic-1} \sum_{i : |v^\star_i| \geq \frac{1}{2\sqrt{k}}} v_i^{\star 2} = 1 - \sum_{i : |v^\star_i| < \frac{1}{2\sqrt{k}}} v_i^{\star 2} \geq 1 - k\cdot \bigg( \frac{1}{2\sqrt{k}} \bigg)^2 = \frac{3}{4}. \end{align} Putting the above pieces together and recalling that $\|\eta_2(x_2)\|_2=1$, we conclude that \begin{align*} \big|\inprod{v^\star}{\eta_2(x_2)}\big| = \frac{\big|\langle v^\star, \mathrm{sign}(x_2)\circ (|x_2| - \tau_2 1)_{+}\rangle\big|}{\left\|\mathrm{sign}(x_2)\circ (|x_2| - \tau_2 1)_{+}\right\|_2} \asymp 1. \end{align*} Next, we turn to proving the second claim of Proposition~\ref{thm:sparse-init}, towards which we would like to show that with probability at least $1 - O(n^{-11})$, \begin{align} \label{eqn:vs-is-large} |\langle x_{1}, v^\star\rangle| = |v^\star_{\hat{s}}| \ge \frac{1}{2}\|v^{\star}\|_{\infty}, \end{align} where $\hat{s} \coloneqq \arg\max_i \left|M_{ii}\right|$ and $x_{1} = e_{\hat{s}}$. Indeed, it follows from \eqref{eqn:simple-diagonal} that \begin{align*} \lambda\big(v^{\star}_{i}\big)^2 - 6\sqrt{\frac{\log n}{n}} \leq \Big| \big(\lambda v^{\star}v^{\star\top} + W\big)_{ii} \Big| \leq \lambda\big(v^{\star}_{i}\big)^2 + 6\sqrt{\frac{\log n}{n}} \end{align*} with probability at least $1 - O(n^{-11})$. 
Given that $\hat{s}$ maximizes $|(\lambda v^{\star}v^{\star\top} + W)_{ii}|$ over all $i\in [n]$, we have \begin{align*} \lambda\big(v^{\star}_{\hat{s}}\big)^2 + 6\sqrt{\frac{\log n}{n}} \ge \Big| \left(\lambda v^{\star}v^{\star\top} + W\right)_{\hat{s} \hat{s}} \Big| \geq \Big| \big(\lambda v^{\star}v^{\star\top} + W\big)_{jj} \Big| \geq \lambda\big(v^{\star}_{\max}\big)^2 -6 \sqrt{\frac{\log n}{n}}, \quad \text{with } j \coloneqq \arg\!\max_i |v^\star_i|. \end{align*} In the case when $\lambda\|v^{\star}\|_{\infty} \gtrsim \sqrt{\frac{k\log n}{n}}$, the above inequality immediately establishes~\eqref{eqn:vs-is-large} by observing that $ \|v^{\star}\|_{\infty} \geq \sqrt{\frac{\ltwo{v^\star}^{2}}{k}} = \frac{1}{\sqrt{k}}. $ We have thus completed the proof of Proposition~\ref{thm:sparse-init}. \subsection{Proof of Proposition~\ref{prop:sparse-split}} \label{sec:pf-prop-sparse-split} For notational convenience, we omit the index of $j$ in $\mathcal{I}_{j}$ and write $\mathcal{I}$ instead throughout this proof, as long as it is clear from the context. \paragraph{Step 1: analysis for a single round.} Let us first state some concentration properties regarding $v^{\star}_{\mathcal{I}}$. By construction, $\|v^{\star}_{\mathcal{I}}\|_0$ can be viewed as the sum of $k$ independent Bernoulli random variables, each of which has mean $p$. The Bernstein inequality tells us that \begin{align*} \Big|\big\|v^{\star}_{\mathcal{I}}\big\|_0 - kp\Big| \leq \sqrt{2kp(1-p)\log \frac{2}{\delta}} + 2\log \frac{2}{\delta} \end{align*} holds with probability at least $1 - \delta$. It thus implies that $$ \big\|v^{\star}_{\mathcal{I}}\big\|_0 - pk = o(pk) $$ under the assumption that $kp \gtrsim \log n \gg \log\frac{2}{\delta}$ and $\delta \asymp 1$ (e.g., $\delta = \frac{1}{10}$). In addition, $\|v^{\star}_{\mathcal{I}}\big\|_2^2 = \sum_{i\in \text{supp}(v^\star)} v^{\star 2}_{i} \ind(i \in \mathcal{I})$ is the sum of $k$ independent bounded random variables, with total variance bounded above by \[ \mathsf{Var}\big( \|v^{\star}_{\mathcal{I}}\big\|_2^2 \big) \leq p \big\|v^{\star}_{\mathcal{I}}\big\|_{\infty}^2 \sum_{i\in \text{supp}(v^\star)} v^{\star 2}_{i} = p \big\|v^{\star}_{\mathcal{I}}\big\|_{\infty}^2. \] Invoking Bernstein's inequality again gives \begin{align*} \left|\big\|v^{\star}_{\mathcal{I}}\big\|_2^2 - p\right| \leq \big\|v^{\star}_{\mathcal{I}}\big\|_{\infty}\sqrt{2 p\log \frac{2}{\delta}} + 2\big\|v^{\star}_{\mathcal{I}}\big\|_{\infty}^2\log \frac{2}{\delta}, \end{align*} with probability at least $1 - \delta$. Consequently, it guarantees that with probability $1 - \delta$, \begin{align} \label{eqn:sparse-vc-norm-tmp} \|v^{\star}_{\mathcal{I}}\big\|_2^2 - p = o(p) ~\text{ and }~ \|v^{\star}_{\mathcal{I}^c}\big\|_2^2 = 1 - p + o(p), \end{align} under the assumption that $\|v^{\star}\|_{\infty} = o\big(\sqrt{\frac{\log n}{k}}\big)$ and $p \gtrsim \frac{\log n}{k}$. 
Combining the above two relations gives \begin{align} \label{eqn:vstar-i} \lambda \big\|v^{\star}_{\mathcal{I}}\big\|_2^2 ~\gtrsim~ \frac{\big\|v^{\star}_{\mathcal{I}}\big\|_0}{\sqrt{n}}, \end{align} with the proviso that $\lambda \gtrsim k/\sqrt{n}.$ Suppose now that there exists an oracle algorithm (as in \eqref{eq:correlation-oracle}) whose returned solution $v_{\mathcal{I}} = \mathsf{Oracle}(M_{\mathcal{I}, \mathcal{I}})$ (computed solely based on $M_{\mathcal{I}, \mathcal{I}}$) satisfies \begin{align*} \langle v^{\star}_{\mathcal{I}}, v_{\mathcal{I}}\rangle \asymp \big\|v^{\star}_{\mathcal{I}}\big\|_2 \asymp \sqrt{p} \end{align*} with probability at least $1-\delta$ for some small constant $\delta$ (note that the randomness comes from the sampling process). Taking $v \coloneqq M_{\mathcal{I}^{c}, \mathcal{I}} v_{\mathcal{I}}$ yields \begin{align} \label{eqn:x1-sparse-split} v = M_{\mathcal{I}^{c}, \mathcal{I}} v_{\mathcal{I}} = (\lambda v^\star_{\mathcal{I}^{c}} v_{\mathcal{I}}^{\star\top} + W_{\mathcal{I}^{c}, \mathcal{I}}) v_{\mathcal{I}} = \alpha_1 v_{\mathcal{I}^{c}}^{\star} + \phi_0, \end{align} where $\alpha_1 = \lambda \langle v^{\star}_{\mathcal{I}}, v_{\mathcal{I}}\rangle \asymp \lambda\sqrt{p}$ and $\phi_0 \sim \mathcal{N}(0, \frac{\ltwo{ v_{\mathcal{I}}}^2}{n}I)$. Importantly, both $\alpha_1$ and $\phi_0$ are independent of $W_{\mathcal{I}^{c}, \mathcal{I}^{c}}$. Based on the construction \eqref{eqn:sparse-choose-j} of $x_{1}$, it suffices to verify \begin{align} \label{eqn:initial-const-corr} \frac{\big\langle v_{\mathcal{I}^{c}}^{\star}, \mathsf{ST}_{\tau_1}(v) \big\rangle}{\big\|v_{\mathcal{I}^{c}}^{\star}\big\|_2\big\|\mathsf{ST}_{\tau_1}(v)\big\|_2} \asymp 1. \end{align} In order to validate~\eqref{eqn:initial-const-corr}, we look at the distribution of~\eqref{eqn:x1-sparse-split}. First, given the fact that $\|\phi_0\|_{\infty} \leq 6 \sqrt{\frac{\log n}{n}}$ with probability at least $1 - O(n^{-11})$ and $\alpha_1 \asymp \lambda \sqrt{p} \gtrsim \sqrt{\frac{k\log n \log \frac{1}{\delta}}{n}},$ we can use \eqref{eqn:x1-sparse-split} to get \begin{align*} \mathrm{sign}\big(\mathsf{ST}_{\tau_1}(v_i) \big) = \mathrm{sign} \big(v^{\star}_i \big) \qquad \text{and} \qquad % \begin{cases} \Big| \mathsf{ST}_{\tau_1}(v_i)\Big|\geq \frac{1}{2}\alpha_1 |v_{i}^{\star}| \mathds{1} \Big(\lambda\sqrt{p} |v_{i}^{\star}| \geq C_8 \sqrt{\frac{\log n}{n}} \Big) \\ \Big| \mathsf{ST}_{\tau_1}(v_i)\Big|\leq 2\alpha_1 |v_{i}^{\star}| \mathds{1} \Big(\lambda\sqrt{p} |v_{i}^{\star}| \geq C_7 \sqrt{\frac{\log n}{n}} \Big) \end{cases} \qquad i\in \mathcal{I}^{c} \end{align*} for some suitable constants $C_7,C_8>0$. As a result, we can see that \begin{align} \Big\langle v_{\mathcal{I}^{c}}^{\star}, \frac{\mathsf{ST}_{\tau_1}(v)}{\ltwo{\mathsf{ST}_{\tau_1}(v)}} \Big\rangle \gtrsim \frac{\alpha_1\sum_{i\in \mathcal{B}} v_{i}^{\star 2}}{\alpha_1 \|v^{\star}\|_2} \qquad \text{with }~\mathcal{B} \coloneqq \Big\{i \mid i \in \mathcal{I}^{c}; |v_{i}^{\star}| \geq C_8 \frac{\sqrt{\log n/n}}{\lambda\sqrt{p}} \Big\}. 
\label{eq:ST-inner-product-v-LB} \end{align} Moreover, recalling the concentration results $\|v^{\star}_{\mathcal{I}}\big\|_2^2 - p = o(p)$ and $\big\|v^{\star}_{\mathcal{I}}\big\|_0 - pk = o(pk)$, we find that \begin{align*} \sum_{i\in\mathcal{B}^{c}\cap\mathcal{I}^{c}}v_{i}^{\star 2} & =\Big\| v^{\star}_{\mathcal{I}^{c}}\circ\ind\Big(\big|v^{\star}_{\mathcal{I}^{c}}\big|<\frac{C_{8}\sqrt{\log n/n}}{\lambda\sqrt{p}}1\Big)\Big\|_{2}^{2}\leq\big(k-kp+o(kp)\big)\cdot\frac{C_{8}^{2}\log n}{\lambda^{2}pn}\\ & \leq\big(k-kp+o(kp)\big)\cdot\frac{C_{8}^{2}\log n}{10C_{8}^{2}k\log n\log\frac{1}{\delta}}\leq\frac{1-p}{10\log(1/\delta)}(1+o(1))\leq\frac{1}{10}\ltwo{v_{\mathcal{I}^{c}}^{\star}}^{2}, \end{align*} provided that $\lambda^2 p \geq 10C_8^2 \frac{k\log n \log \frac{1}{\delta}}{n}$. Substitution into \eqref{eq:ST-inner-product-v-LB} gives \[ \Big\langle v_{\mathcal{I}^{c}}^{\star}, \frac{\mathsf{ST}_{\tau_1}(v)}{\ltwo{\mathsf{ST}_{\tau_1}(v)}} \Big\rangle \gtrsim 1, \] which together with $\|v^{\star}_{\mathcal{I}^{c}}\|_2\asymp 1$ establishes the required inequality~\eqref{eqn:initial-const-corr}. \paragraph{Step 2: repeating the procedure $N$ times.} Thus far, we have proved that inequalities~\eqref{eqn:sparse-vc-norm-tmp} and \eqref{eqn:initial-const-corr} are satisfied with probability $1 - \delta$, for $\delta$ being some small constant (e.g., $\delta = 1/10$); here, the uncertainty comes from the random sampling process. In order to boost the success probability, we --- as detailed in Step 1 --- repeat the sampling procedure $N \coloneqq 10\log n/\log (1/\delta)$ times, outputting $N$ independent subsets $\mathcal{I}_1, \ldots, \mathcal{I}_{N} \subseteq [n]$ and $N$ corresponding estimators (denoted by $v^{j}$ for $1\leq j\leq N$). Then there exists at least one subset $\mathcal{I}_j$ such that, with probability at least $1 - \delta^{N} = 1 - O(n^{-10})$, \begin{align} \label{eqn:tzigane} \|v^{\star}_{\mathcal{I}^c}\big\|_2^2 = 1 - p + o(p) \qquad \text{and} \qquad \frac{\big\langle v_{\mathcal{I}^{c}}^{\star}, \mathsf{ST}_{\tau_1}(v^{j}) \big\rangle}{\big\|v_{\mathcal{I}^{c}}^{\star}\big\|_2\big\|\mathsf{ST}_{\tau_1}(v^{j})\big\|_2} \asymp 1 \end{align} for $v^j \coloneqq M_{\mathcal{I}_j^{c}, \mathcal{I}_j} v_{\mathcal{I}_j}$. 
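For intuition, the repeated-splitting scheme described above admits a compact implementation; the Python sketch below is purely schematic, with the correlation oracle treated as an abstract callable standing in for \eqref{eq:correlation-oracle}, and with the sampling ratio, the number of rounds, and the threshold all being illustrative placeholders rather than the exact choices used in the analysis.
\begin{verbatim}
import numpy as np

def soft_threshold(x, tau):
    # entrywise soft thresholding: ST_tau(x) = sign(x) * (|x| - tau)_+
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def split_and_select(M, oracle, p=0.5, num_rounds=10, tau=None, rng=None):
    """Schematic repeated-splitting initialization; `oracle` plays the role of the
    assumed correlation oracle applied to the sampled principal submatrix."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = M.shape[0]
    tau = tau if tau is not None else np.sqrt(np.log(n) / n)   # illustrative threshold
    best_score, best = -np.inf, None
    for _ in range(num_rounds):
        mask = rng.random(n) < p                     # random index subset I_j
        I, Ic = np.where(mask)[0], np.where(~mask)[0]
        v_I = oracle(M[np.ix_(I, I)])                # estimate correlated with v*_I
        v_j = M[np.ix_(Ic, I)] @ v_I                 # v^j = M_{I^c, I} v_I
        cand = soft_threshold(v_j, tau)
        norm = np.linalg.norm(cand)
        if norm == 0:
            continue
        # selection: normalized quadratic form on the held-out block
        score = cand @ M[np.ix_(Ic, Ic)] @ cand / norm**2
        if score > best_score:
            best_score, best = score, (Ic, cand / norm)
    return best
\end{verbatim}
The selection step in the sketch mirrors the rule \eqref{eqn:sparse-choose-j}: among the candidates, it retains the one whose normalized quadratic form on the held-out block is largest.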
Based on these two relations, we arrive at \begin{align} \notag \frac{\mathsf{ST}_{\tau_1}(v^j)^\top M_{\mathcal{I}_j^{c}, \mathcal{I}_j^c} \mathsf{ST}_{\tau_1}(v^j)}{\|\mathsf{ST}_{\tau_1}(v^j)\|_2^2} &= \lambda \frac{\big\langle v_{\mathcal{I}_j^{c}}^{\star}, \mathsf{ST}_{\tau_1}(v^{j}) \big\rangle^2}{\big\|\mathsf{ST}_{\tau_1}(v^{j})\big\|^2_2} + \underbrace{\frac{\mathsf{ST}_{\tau_1}(v^j)^\top W_{\mathcal{I}_j^{c}\mathcal{I}_j^{c}} \mathsf{ST}_{\tau_1}(v^j)}{\big\|\mathsf{ST}_{\tau_1}(v^j)\big\|^2_2}}_{=: \varepsilon_j}\\ % \notag &= \lambda \frac{\big\langle v_{\mathcal{I}_j^{c}}^{\star}, \mathsf{ST}_{\tau_1}(v^j) \big\rangle^2}{\ltwo{v_{\mathcal{I}_j^{c}}^{\star}}^2 \big\|\mathsf{ST}_{\tau_1}(v^j)\big\|^2_2} \cdot \ltwo{v_{\mathcal{I}_j^{c}}^{\star}}^2 + \varepsilon_j \\ % \notag &\gtrsim \lambda (1 - p) + \varepsilon_j, \qquad \text{where } \varepsilon_j \sim \mathcal{N}\Big(0,\frac{1}{n}\Big)\\ % &\asymp \lambda. \end{align} Here, we remind the readers that, since $v^{j}$ is independent of $W_{\mathcal{I}_j^{c}\mathcal{I}_j^{c}}$, the quantity $\varepsilon_{j}$ follows a Gaussian distribution $\mathcal{N}(0,\frac{1}{n}).$ We also make note of the relations $\lambda(1-p) \gtrsim \frac{k(1-p)}{\sqrt{n}} \gtrsim \frac{\log n}{\sqrt{n}}$ and $\max_{1\leq j\leq N} |\varepsilon_{j}| \leq 5\sqrt{\frac{\log N}{n}}$, where the latter holds with probability $1-O(n^{-11})$. Therefore, if we select $\widehat{j}$ according to \eqref{eqn:sparse-choose-j} as follows: \begin{align*} \widehat{j} \coloneqq \arg\max_j ~\Bigg\{\frac{\mathsf{ST}_{\tau_1}(v^j)^\top M_{\mathcal{I}_j^{c}, \mathcal{I}_j^c} \mathsf{ST}_{\tau_1}(v^j)}{\|\mathsf{ST}_{\tau_1}(v^j)\|_2^2}\Bigg\}, \end{align*} then we necessarily have \begin{align} \label{eqn:sparse-concl} \frac{\big\langle v_{\mathcal{I}_{\widehat{j}}^{c}}^{\star}, \mathsf{ST}_{\tau_1}(v^{\widehat{j}}) \big\rangle}{\big\|v_{\mathcal{I}_{\widehat{j}}^{c}}^{\star}\big\|_2\big\|\mathsf{ST}_{\tau_1}(v^{\widehat{j}})\big\|_2} \asymp 1. \end{align} Otherwise, we would end up with \begin{align*} \frac{\mathsf{ST}_{\tau_1}(v^{\widehat{j}})^\top M_{\mathcal{I}_{\widehat{j}}^{c}, \mathcal{I}_{\widehat{j}}^c} \mathsf{ST}_{\tau_1}(v^{\widehat{j}})}{\|\mathsf{ST}_{\tau_1}(v^{\widehat{j}})\|_2^2} &= \lambda\frac{\big\langle v_{\mathcal{I}_{\widehat{j}}^{c}}^{\star}, \mathsf{ST}_{\tau_1}(v^{\widehat{j}}) \big\rangle^2}{\big\|\mathsf{ST}_{\tau_1}(v^{\widehat{j}})\big\|^2_2} + \frac{\mathsf{ST}_{\tau_1}(v^{\widehat{j}})^\top W_{\mathcal{I}_{\widehat{j}}^{c}\mathcal{I}_{\widehat{j}}^{c}} \mathsf{ST}_{\tau_1}(v^{\widehat{j}})}{\big\|\mathsf{ST}_{\tau_1}(v^{\widehat{j}})\big\|^2_2}\\ % &\ll \lambda + \sqrt{\frac{\log n}{n}} \\ % &\lesssim \frac{\mathsf{ST}_{\tau_1}(v^j)^\top M_{\mathcal{I}_j^{c}, \mathcal{I}_j^c} \mathsf{ST}_{\tau_1}(v^j)}{\|\mathsf{ST}_{\tau_1}(v^j)\|_2^2}, \end{align*} where $j$ corresponds to the one obeying \eqref{eqn:tzigane}. This, however, contradicts the definition of $\widehat{j}.$ We have therefore concluded the proof of Proposition~\ref{prop:sparse-split}. \subsection{Proof of Lemma~\ref{lem:sparse-ini-1-ini}} \label{sec:pf-ken-sparse-ini} In this subsection, we analyze the first three AMP iterates when initialized at $\eta_0(x_{0}) = 0$ and $x_1 = e_{s}$ for any given $s \in \mathcal{S}_0$, where $\mathcal{S}_0 \coloneqq \{s \in [n] \mid |v^\star_s| \geq \frac{1}{2} \|v^\star\|_\infty \}$ (defined in expression \eqref{eqn:sparse-set-s0}). Without loss of generality, we shall assume $v_{s}^{\star} >0$ throughout this subsection. 
\paragraph{The 2nd iterate.} It follows from the AMP update rule that \begin{align*} x_2 = M\eta_1(x_1) = \Big(\lambdav^\star\big(v^{\star}\big)^{\top} + W\Big) e_{s} = \lambdav^\star_sv^\star + We_{s}, \end{align*} where we use $\eta_{1}(x_1) = e_{s}$ in view of the definition of the denoising function in \eqref{eqn:eta-sparse}. Recalling that in the proof of Theorem~\ref{thm:main}, we establish the decomposition~\eqref{def:dynamics} with $\phi_{j}$ and $\beta_t^j$ defined in \eqref{def:phi_k} and \eqref{eqn:eta-decomposition} respectively. Instantiating this to the current case, we have $$ \beta_1^1 = 1 \qquad \text{and}\qquad \phi_{1} = We_{s} + \Big(\frac{\sqrt{2}}{2}-1 \Big)e_{s}^{\top}We_{s} \sim \mathcal{N}\Big(0, \frac{1}{n}I_n\Big). $$ Hence, $x_{2}$ can be expressed as \begin{align} x_2 &= \alpha_2v^\star + \phi_1 + \xi_1, \end{align} where $\alpha_2 = \lambdav^\star_s \gtrsim \lambda$ obeying $|\alpha_2| \geq \|v^\star\|_{\infty}$, and $\xi_1 = (1 - \frac{\sqrt{2}}{2}) (e_{s}^{\top}We_{s}) e_{s}$. To proceed, we find it helpful to make note of the following two properties. First, from expression~\eqref{eqn:qianqian}, we know that \begin{align} \label{eqn:cadenza} \gamma_2^{-1} = \big\|\mathrm{sign}(x_2)\circ (|x_2| - \tau_2 1)_{+} \big\|_2 \geq \big|\langlev^\star, \mathrm{sign}(x_2)\circ (|x_2| - \tau_2 1)_{+} \rangle\big| \gtrsim \lambda |v^\star_s|. \end{align} In addition, with probability at least $1 - O(n^{-11})$, one can derive from \eqref{eqn:simple-diagonal} that \begin{align} \label{eqn:finale} |\xi_{1,s}| = \Big|\Big(1 - \frac{\sqrt{2}}{2} \Big) e_{s}^{\top}We_{s} \Big| \leq \max_{1\leq i\leq n} |W_{ii}| \lesssim \sqrt{\frac{\log n}{n}}. \end{align} \paragraph{The 3rd iterate.} In view of decomposition~\eqref{def:dynamics}, we write \begin{align*} x_3 &= M\eta_2(x_2) - \langle\eta_2^{\prime}(x_2)\rangle\eta_1(x_1) = \alpha_3 v^\star + \beta_2^1 \phi_1 + \beta_2^2 \phi_2 + \xi_2, \end{align*} where (see\eqref{eq:xi-expression}) $$ \alpha_3 = \lambda v^{\star\top}\eta_2(x_2) \qquad \text{and} \qquad \xi_{2} \in \mathsf{span}\{e_{s}, \eta_2(x_2)\}. $$ Proposition~\ref{thm:sparse-init} tells us that $\alpha_3 \asymp \lambda.$ Next, we look at the size of $\ltwo{\xi_2}.$ To begin with, by definition, we have $\mu_2^1 = \frac{\xi_1^\top e_{s}}{\ltwo{\xi_1}} = 1$ for $\xi_1 = (1 - \frac{\sqrt{2}}{2}) (e_{s}^{\top}We_{s}) e_{s}$. Combining this with relation~\eqref{eq:xi_bound} for $t=2$, we arrive at \begin{align} \label{eqn:xi-2-vincent} \|\xi_2\|_2 = \langle \phi_1, \delta_{2}\rangle - \langle\delta_{2}^{\prime}\rangle + \Delta_2 + O\Big(\sqrt{\frac{\log n}{n}}\Big), \end{align} where $\delta_{2}$ takes the following form \begin{align*} \delta_{2} = \eta_2(\alpha_2v^\star + \phi_1 + \xi_1) - \eta_2(\alpha_2v^\star + \phi_1). \end{align*} To control $\|\xi_2\|_2$, it then suffices to upper bound $\Delta_{2}$ as well as $\langle \phi_1, \delta_{2}\rangle - \langle\delta_{2}^{\prime}\rangle$. Let us start with the quantity $\Delta_{2}$. Recall that $|\Delta_{2}| \leq A_2 \lesssim \sqrt{\frac{2\log n}{n}}$ (in view of \eqref{eqn:sparse-At}), where we remind the readers that the proof of \eqref{eqn:sparse-At} is built upon the assumptions $\alpha_2 \lesssim \lambda$ and $\tau_2 \asymp \sqrt{\frac{\log n}{n}}.$ We then move on to consider the quantity $\langle \phi_1, \delta_{2}\rangle - \langle\delta_{2}^{\prime}\rangle$. 
First, given that $\xi_{1}$ is along the direction of $e_{s}$, the entries of $\delta_{2}$ are all zero except for $\delta_{2,s}.$ With this observation in mind, the term of interest can be written as \begin{align} \label{eqn:opera} \big| \langle \phi_1, \delta_{2}\rangle - \langle\delta_{2}^{\prime}\rangle \big| &= \Big| \phi_{1,s}\delta_{2, s} - \frac{1}{n}\delta_{2,s}^{\prime} \Big| \lesssim \sqrt{\frac{\log n}{n}} \cdot \frac{|\xi_{1,s}|}{\lambda v_{s}^{\star}} + \frac{1}{n\lambda v_{s}^{\star}}. \end{align} To see why the last inequality is valid, we use inequality~\eqref{eqn:cadenza} to obtain \begin{align*} |\delta_{2,s}| = \big|\eta_2(\alpha_2v^\star_s + \phi_{1,s} + \xi_{1,s}) - \eta_2(\alpha_2v^\star_{s} + \phi_{1,s})| \leq \gamma_2 |\xi_{1,s} \big| \lesssim \frac{|\xi_{1,s}|}{\lambdav^\star_s}, \end{align*} as a result of the Lipschitz property of $\eta_t$, and in addition, \begin{align*} |\delta_{2,s}'| \leq 2 \gamma_2 \lesssim \frac{1}{\lambdav^\star_{s}}. \end{align*} Taking inequality~\eqref{eqn:opera} together with \eqref{eqn:finale} and recalling $v^\star_{s} \gtrsim \frac{1}{\sqrt{k}}$ (cf.~\eqref{eqn:vstar-s}) lead to \begin{align*} \big| \langle \phi_1, \delta_{2}\rangle - \langle\delta_{2}^{\prime}\rangle \big| & \lesssim \sqrt{\frac{\log n}{n}}\sqrt{\frac{k\log n}{n\lambda^2}} + \frac{\sqrt{k}}{n\lambda} \lesssim \sqrt{\frac{\log n}{n}}. \end{align*} Substitution back into \eqref{eqn:xi-2-vincent} gives \begin{align*} \|\xi_2\|_2 \lesssim \sqrt{\frac{\log n}{n}}. \end{align*} We have thus established Lemma~\ref{lem:sparse-ini-1-ini}. \section{Discussion} \label{sec:discussion} In this paper, we have proposed a general recipe towards analyzing the finite-sample performance of the AMP algorithm when applied to spiked Wigner models. Our analysis framework makes explicit a crucial decomposition of each AMP iterate (as a superposition of a signal term and a Gaussian-type stochastic component), with a residual term that can be tracked recursively without exploding rapidly. Further, this analysis framework can be seamlessly integrated with spectral initialization. The power of our analysis strategy has been demonstrated via two concrete applications: $\mathbb{Z}_{2}$ synchronization and sparse PCA. In both cases, explicit non-asymptotic behaviors of AMP have been derived up to a polynomial number of iterations, thereby revealing new insights about the finite-sample convergence properties of AMP. Our work leaves open a variety of questions; we conclude the paper by highlighting a few of them. \begin{itemize} \item Firstly, while we have illustrated the effectiveness of our master theorems with two examples of different flavor, there is no shortage of other signal structures that are of practical interest. For instance, one might wonder how AMP behaves non-asymptotically when the signal $v^\star$ is known to satisfy certain shape constraints (e.g., having non-negative entries, residing in a monotone or convex cone \citep{bandeira2019computational,wei2019geometry}). In some of these cases, the natural denoising functions might not be separable, therefore while the decomposition in Theorem~\ref{thm:recursion} still holds true, controlling those residual terms is significantly more complicated. \item Secondly, our analysis is tailored to the spiked Wigner model where the noise takes the form of an independent Gaussian matrix. 
It remains unclear whether our non-asymptotic characterizations can be generalized to accommodate non-Gaussian noise matrices \citep{bayati2015universality,chen2021universality,dudeja2022universality}. Developing universality results in a non-asymptotic manner is an important yet highly challenging task worthy of future investigation. \item Moving beyond spiked models, it would be of great interest to see whether similar frameworks can be developed for other models for which AMP is known to be powerful. Examples include sparse linear regression \citep{bayati2011lasso}, generalized linear models \citep{sur2019likelihood}, stochastic block models \citep{deshpande2017asymptotic}, among others. \end{itemize} \subsection{Sparse PCA (sparse spiked Wigner matrix)} \label{sec:sparse-main} Another specific model of interest is concerned with sparse PCA. In the statistics literature, spiked models with sparsity constraints have been a mainstay for studying sparse PCA \citep{johnstone2009consistency}, inspiring various algorithms including regression-type methods \citep{zou2006sparse}, convex relaxation \citep{amini2008high,d2004direct,vu2013fantope}, iterative thresholding \citep{ma2013sparse,krauthgamer2015semidefinite,deshpande2014sparse}, sum of squares hierarchy \citep{hopkins2017power}, among many others. This paper contributes to this growing literature by studying the effectiveness of AMP for sparse PCA (see also~\cite{deshpande2014information,montanari2021estimation}). More specifically, this subsection considers sparse estimation in the spiked Wigner model\footnote{Note that another popular model for sparse PCA is the sparse spiked Wishart model \cite{johnstone2009consistency}. We choose the spiked Wigner model as it is closer to the context studied in this paper.}, where we seek to estimate a $k$-sparse eigenvector $v^\star \in \mathcal{S}^{n-1}$ from the following data matrix: \begin{align} \label{eqn:wigner-sparse} M = \lambda v^\star v^{\star\top} + W \in \mathbb{R}^{n\times n}, \qquad \text{where } \|v^\star\|_0=k. \end{align} We would like to leverage our analysis framework to track the non-asymptotic performance of AMP in the face of the sparsity structure. \paragraph{AMP for sparse spiked Wigner models.} For each $t\geq 1$, the AMP update rule takes the following form: \begin{subequations} \label{eq:AMP-sparse} \begin{align} \label{eqn:AMP-updates-sparse} x_{t+1} = M\eta_t(x_{t}) - \big\langle\eta_t^{\prime}(x_{t}) \big\rangle \cdot \eta_{t-1}(x_{t-1}) \qquad \text{with } \eta_t(x) = \gamma_t\mathrm{sign}(x) \circ (|x| - \tau_t 1 )_{+}, \end{align} where the denoising function $\eta_t(\cdot)$ is taken to be the soft thresholding function (applied entry-by-entry) with a threshold $\tau_t$ and a rescaling pre-factor to ensure $\|\eta_t(x_t)\|_2 = 1$: \begin{align} \label{eqn:eta-sparse} \gamma_t \coloneqq \big\|\mathrm{sign}(x_t) \circ (|x_t| - \tau_t 1)_{+} \big\|_2^{-1}. \end{align} \end{subequations} It is worth noting that $\eta_{t}$ is differentiable everywhere except at two points (i.e., $\pm \tau_t$), with $\eta_{t}^{\prime}(x) = \gamma_t\mathds{1}(|x| > \tau_t).$ In addition, the threshold $\tau_t$ shall be selected such that $\tau_t\asymp\sqrt{\frac{\log n}{n}}$, with the precise choice specified shortly. \subsubsection{Non-asymptotic AMP theory with an independent initialization} To begin with, we characterize the performance of AMP when an informative yet independent initialization is available. 
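For concreteness, the update rule \eqref{eq:AMP-sparse} can be implemented in a few lines; the following Python sketch is purely illustrative (the function and argument names, as well as the constant multiple in the threshold $\tau_t\asymp\sqrt{\log n/n}$, are our own placeholder choices rather than prescriptions of the theory).
\begin{verbatim}
import numpy as np

def soft_threshold(x, tau):
    # entrywise soft thresholding: sign(x) * (|x| - tau)_+
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def amp_sparse(M, x1, num_iters, c_tau=1.0):
    """Run the AMP updates with the soft-thresholding denoiser for num_iters steps,
    starting from an informative x1 and the convention eta_0(x_0) = 0."""
    n = M.shape[0]
    tau = c_tau * np.sqrt(np.log(n) / n)      # tau_t of order sqrt(log n / n); c_tau is illustrative
    x_t = np.asarray(x1, dtype=float)
    eta_prev = np.zeros(n)                    # eta_0(x_0) = 0
    for _ in range(num_iters):
        s = soft_threshold(x_t, tau)
        gamma = 1.0 / np.linalg.norm(s)       # rescaling so that ||eta_t(x_t)||_2 = 1
        eta_t = gamma * s
        onsager = gamma * np.mean(np.abs(x_t) > tau)   # <eta_t'(x_t)> = gamma * #{i : |x_{t,i}| > tau} / n
        x_t, eta_prev = M @ eta_t - onsager * eta_prev, eta_t
    return x_t, eta_prev                      # eta_prev is the final unit-norm estimate eta_T(x_T)
\end{verbatim}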
For notational simplicity, we define the following function: \begin{align} \label{eqn:franck} f(\alpha) \coloneqq \frac{\lambda v^{\star \top} \displaystyle \int \mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x)}{\sqrt{\displaystyle \int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)}}, \end{align} where $\mathsf{ST}_{\tau_t}(x) \coloneqq \mathsf{sign}(x)(|x| - \tau_t)_{+}$ for any $x\in \ensuremath{\mathbb{R}}$ and $\varphi_n(\cdot)$ is the pdf of $\mathcal{N}\big(0, I_n\big)$. Let us introduce the state evolution recursion as follows (which depends only on $\lambda$ and $v^\star$): \begin{align} \label{eqn:alpha-star-sparse} \alpha_{t+1}^{\star} = f(\alpha_t^\star), \end{align} with the initial condition obeying $\alpha^{\star}_2 \asymp \lambda.$ Our non-asymptotic theory for sparse PCA is stated below, with its proof deferred to Section~\ref{sec:pf-sparse}. \begin{theos} \label{thm:sparse} Consider the model \eqref{eqn:wigner-sparse} where $0<\lambda \lesssim 1$. Given an independent initial point $x_{1}$ obeying $\inprod{v^\star}{\eta_1(x_1)} \asymp 1$ and $\eta_0(x_0) = 0$, the AMP algorithm~\eqref{eq:AMP-sparse} satisfies the following decomposition: \begin{align} \label{eqn:sparse-decomp-dvorak} &x_{t+1} = \alpha_{t+1} v^\star + \sum_{k = 1}^{t} \beta_{t}^k\phi_k + \xi_{t}, \qquad\text{for }t\geq 1, \\ \label{eqn:sparse-SE} \text{with }~ \alpha_{t+1} = & \lambda v^{\star \top} \int{\eta}_{t}\left(\alpha_{t} v^\star + \frac{x}{\sqrt{n}} \right)\varphi(\mathrm{d} x) + \lambda\Delta_{\alpha,t}, \qquad \|\beta_{t-1}\|_2 = 1, \end{align} where it holds with probability at least $1-O(n^{-11})$ that \begin{subequations} \label{eqn:soccer} \begin{align} \lambda |\Delta_{\alpha, t}| &\lesssim \sqrt{\frac{k+t\log^3 n}{n}}, \qquad \|\xi_{t}\|_2 \lesssim \sqrt{\frac{k+t\log^3 n}{n}}, \label{eq:residual-sparse} \\ \label{eqn:se-alpha-sparse} &\big|\alpha_{t+1} - \alpha_{t+1}^{\star}\big| \lesssim \sqrt{\frac{k \log n + t\log^3 n}{n}}, \end{align} \end{subequations} provided that \begin{align} t \lesssim \frac{n\lambda^2}{\log^3 n}\qquad \text{and} \qquad \frac{k\log n}{n\lambda^2} \lesssim 1. \label{cond:t-k} \end{align} \end{theos} In a nutshell, each AMP iterate behaves almost like a signal component superimposed with a Gaussian-type component (see the decomposition \eqref{eqn:sparse-decomp-dvorak}), with a residual term that is well controlled up until the number of iterations reaches $$O\left(\frac{n\lambda^2}{\log^3 n}\right).$$ If $\lambda \asymp 1$, then the validity of the above non-asymptotic theory is guaranteed for $O(n/\log^3 n)$ iterations, which is far beyond what existing theory can cover. It is also worth pointing out that the non-asymptotic state-evolution~\eqref{eqn:se-alpha-sparse} matches the one derived in existing literature (cf.~\eqref{eq:SE-Montanari}) when $\frac{k\log n + t\log^3 n}{n} \to 0$. In the sequel, we single out a few additional remarks about this result in order. \begin{itemize} \item In comparison to several prior works (e.g., \citet{amini2008high,montanari2021estimation,ding2019subexponential}), our results impose absolutely no assumption on either the empirical distribution of $v^\star$, or the values of the non-zero entries of $v^\star$. For instance, we do allow some non-zero entries of $v^\star$ to be either extremely large or exceedingly small. 
\item Different from \cite{montanari2021estimation}, we permit $\lambda$ to enter the regime where $\lambda < 1$. In this regime, the leading eigenvector of the observed matrix $M$ becomes uninformative \citep{deshpande2017asymptotic}, and therefore, spectral initialization fails to provide a warm start as required in \cite{montanari2021estimation}. \item In fact, assuming access to an informative initialization independent of $W$, Theorem~\ref{thm:sparse} only requires $\lambda \gtrsim \sqrt{\frac{k\log n}{n}}$. This threshold matches the known information-theoretical lower bound in order to enable consistent estimation; see also \citet{vu2012minimax,berthet2013optimal} for relevant messages derived for the spiked covariance model. \end{itemize} Finally, an informative starting point is not always available, particularly when it is close to the information-theoretic threshold. Noteworthily, a growing body of sparse PCA literature provided evidence concerning the existence of computational barriers that prevent one from finding polynomial-time algorithms to approach the information-theoretic limits \citep{berthet2013computational,lesieur2015phase,krzakala2016mutual,hopkins2017power,macris2020all}. In light of this, the next two subsections focus on the scenario where the SNR rises above the computational limit, and study AMP with two data-driven initialization schemes that achieve non-trivial correlation with the truth. These two initialization schemes are designed to tackle different SNR regimes. \section{Consequences for specific models} \label{sec:examples} Focusing on two important models (i.e., $\mathbb{Z}_{2}$ synchronization and sparse spiked Wigner models), this section develops concrete consequences of our general recipe presented in Section~\ref{sec:main}, aimed at illustrating the effectiveness of our non-asymptotic theory. \subsection{$\mathbb{Z}_{2}$ synchronization} \label{sec:z2-main} The first concrete model considered here is $\mathbb{Z}_{2}$ synchronization, which augments \eqref{eqn:wigner} with some binary-valued signal structure as follows: \begin{align} \label{eqn:wigner-Z2} M = \lambda v^\star v^{\star\top} + W \in \mathbb{R}^{n\times n}, \qquad \text{where }v_i^{\star}\in \Big\{\frac{1}{\sqrt{n}}, -\frac{1}{\sqrt{n}} \Big\}, ~ 1\leq i\leq n. \end{align} It can be viewed as a special example of synchronization over compact groups \citep{singer2011angular,perry2018message,zhong2018near,gao2022sdp}. Given this observation matrix and a signal prior (e.g., $ v^\star_i \stackrel{\textrm{i.i.d.}}{\sim} \textsf{Unif}\big\{\frac{1}{\sqrt{n}}, -\frac{1}{\sqrt{n}}\big\}$), the Bayes-optimal estimate for the rank-one matrix $v^\star v^{\star\top}$ takes the following form: \begin{align} \widehat{X}^{\textrm{bayes}} \coloneqq \mathbb{E}[v^\star v^{\star\top} \mid M]. \end{align} Computing the Bayes-optimal solution is, however, computationally infeasible due to the combinatorial nature of the underlying optimization problem. A recent line of research searched for nearly tight yet tractable approximation to the Bayes-optimal estimator \citep{peche2006largest,baik2005phase,javanmard2016phase,fan2021tap,montanari2016semidefinite}, with AMP being one natural choice \citep{deshpande2017asymptotic,celentano2021local,lelarge2019fundamental}. Recall that the majority of AMP analysis for $\mathbb{Z}_{2}$ synchronization operates under the assumption that $n\rightarrow \infty$ and $t$ stays fixed. 
In order to obtain an optimal estimator with finite-sample guarantees in the most challenging regime $\lambda >1$, the recent work \cite{celentano2021local} proposed a three-stage hybrid algorithm: (i) starting with a spectral initialization, (ii) running AMP updates for a constant number of steps, (iii) refining by running, say, a natural gradient descent method until convergence. This procedure yields a polynomial-time algorithm that converges to a local minimizer $m_{\star}$ of the so-called TAP free energy (which obeys $\|m_{\star}m_{\star}^\top - \widehat{X}^{\textrm{bayes}}\|_{\mathrm{F}}\to 0$ in probability)\footnote{Note here, to be consistent with other parts of the paper, we adopt a different scaling by taking $\ltwo{m_{\star}} = 1$.}. \cite{celentano2021local} further conjectured based on numerical experiments that a spectrally initialized AMP might actually be sufficient (in the absence of a third refinement stage). This raises a natural theoretical question: \begin{center} \emph{How does spectrally initialized AMP perform when $t$ far exceeds a constant or even $o\big(\frac{\log n}{\log \log n}\big)$? } \end{center} As discussed in \cite{celentano2021local}, existing state-evolution-based arguments fall short of answering this question due to their asymptotic nature. In the following, we aim to answer the question positively, with the aid of our non-asymptotic framework developed in this paper. \paragraph{Spectrally initialized AMP for $\mathbb{Z}_{2}$ synchronization.} Let us begin by formalizing the AMP procedure to be studied herein. Specifically, the AMP updates take the following form for each $t\geq 1$: \begin{subequations} \label{eq:AMP-z2} \begin{align} \label{eqn:AMP-updates-Z2} x_{t+1} = M\eta_t(x_{t}) - \big\langle\eta_t^{\prime}(x_{t}) \big\rangle \cdot \eta_{t-1}(x_{t-1}) \qquad \text{with } \eta_t(x) = \gamma_t\tanh\left(\pi_tx\right), \end{align} where \begin{align} \label{eqn:eta-z2-new} \pi_t \coloneqq \sqrt{n(\|x_t\|_2^2-1)}\qquad\text{and} \qquad\gamma_t \coloneqq \left\|\tanh\left(\pi_tx_t\right)\right\|_2^{-1}. \end{align} Here, the pre-factor $\gamma_{t}$ is chosen to ensure $\ltwo{\eta_{t}(x_t)} = 1$ for normalization purposes (note that this differs from the pre-factor adopted in \cite{celentano2021local}). As already recognized in prior work, a properly rescaled $\tanh(\cdot)$ function is capable of approaching the Bayes-optimal estimator. The first iterate $x_1$ is obtained via the spectral method, or more precisely, the power method, that is, \begin{align} \label{eqn:Z2-initialization} x_1 \coloneqq \lambda a_s M^s \widetilde{v}\qquad\text{with }~ s \asymp \frac{\lambda^2 \log n}{(\lambda - 1)^2} ~~\text{and}~~ a_s = \frac{1}{\ltwo{M^s \widetilde{v}}}. \end{align} \end{subequations} We shall also choose $x_0$ such that $\eta_0(x_0) = x_1/\lambda$ to be consistent with Theorem~\ref{thm:recursion-spectral}. Since it is infeasible to distinguish $v^\star$ from $-v^\star$ given only the observation $M$, we shall assume --- without loss of generality --- $x_1^\top v^\star \geq 0$ throughout the rest of the paper. \paragraph{Non-asymptotic theoretical guarantees.} We now invoke our general recipe to analyze the non-asymptotic performance of \eqref{eq:AMP-z2}. In order to do so, we find it helpful to first introduce the (limiting version of) state evolution (SE) tailored to the denoising function $\eta_t(\cdot)\propto \tanh(\cdot)$. 
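Before turning to the state evolution, we remark that the updates \eqref{eq:AMP-z2} together with the initialization \eqref{eqn:Z2-initialization} are straightforward to implement; the Python sketch below is purely illustrative (in particular, the random choice of the starting vector $\widetilde{v}$ for the power iterations and the rounding of $s$ are placeholder choices, not prescriptions of the theory).
\begin{verbatim}
import numpy as np

def amp_z2(M, lam, num_iters, num_power_iters=None, rng=None):
    """Schematic spectrally initialized AMP for Z2 synchronization."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = M.shape[0]
    if num_power_iters is None:            # s of order lam^2 log n / (lam - 1)^2
        num_power_iters = int(np.ceil(lam**2 * np.log(n) / (lam - 1) ** 2))
    # power-method initialization: x_1 proportional to lam * M^s v_tilde / ||M^s v_tilde||
    v = rng.standard_normal(n)             # placeholder starting vector v_tilde
    for _ in range(num_power_iters):
        v = M @ v
        v /= np.linalg.norm(v)
    x_t = lam * v
    eta_prev = x_t / lam                   # consistent with eta_0(x_0) = x_1 / lam
    for _ in range(num_iters):
        pi_t = np.sqrt(max(n * (x_t @ x_t - 1.0), 0.0))   # pi_t = sqrt(n (||x_t||_2^2 - 1))
        h = np.tanh(pi_t * x_t)
        gamma = 1.0 / np.linalg.norm(h)                   # ensures ||eta_t(x_t)||_2 = 1
        eta_t = gamma * h
        onsager = gamma * pi_t * np.mean(1.0 - h**2)      # <eta_t'(x_t)> with eta_t'(x) = gamma*pi_t*(1-tanh^2(pi_t x))
        x_t, eta_prev = M @ eta_t - onsager * eta_prev, eta_t
    return eta_prev                        # unit-norm estimate of +/- v_star
\end{verbatim}
With this implementation sketch in place, we now return to the state evolution.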
Specifically, let us produce a scalar sequence $\{\tau_t\}$ recursively as follows: \begin{align} \label{eqn:tau-t-z2} \tau_{1} \coloneqq \lambda^{2}-1 \qquad \text{and} \qquad \tau_{t+1} \coloneqq \lambda^2 \int \tanh(\tau_t + \sqrt{\tau_t}x)\varphi(\mathrm{d} x) , \quad t\geq 1, \end{align} where $\varphi(\cdot)$ represents the pdf of $\mathcal{N}(0,1)$. Note that this SE recurrence is consistent with what has been derived in the prior work \cite{celentano2021local}. With this in mind, we state in Theorem~\ref{thm:Z2} our non-asymptotic characterization for the AMP algorithm, whose proof can be found in Section~\ref{sec:pf-thm-Z2}. \begin{theos} \label{thm:Z2} Consider the model \eqref{eqn:wigner-Z2} with $1+ \frac{\log n}{n^{1/16}} < \lambda \leq 1.2$, and recall the scalar sequence $\{\tau_t\}$ in \eqref{eqn:tau-t-z2}. With probability at least $1 - O(n^{-11})$, the spectrally initialized AMP \eqref{eq:AMP-z2} admits the following decomposition: \begin{align} \label{eqn:z2-decomposition} x_{t+1} = \alpha_{t+1} v^\star + \sum_{k = -2s}^{t} \beta_{t}^k\phi_k + \xi_{t} \qquad \text{for all } 0\leq t = o\left( \frac{n(\lambda - 1)^{10}}{\log^{7} n} \right), \end{align} where the $\phi_k$'s are i.i.d.~random vectors drawn from $\mathcal{N}\big(0, \frac{1}{n}I_n\big)$, and the parameters satisfy \begin{subequations} \label{eqn:z2-final} \begin{align} % \alpha_1^2 &= \lambda^2 - 1, \\ \alpha_{t+1}^2 &= \lambda^2 \big( v^{\star\top} \eta_{t}(x_{t}) \big)^2= \left(1+O\bigg(\sqrt{\frac{t\log n}{(\lambda-1)^{8}n}} + \frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{14}n}}\bigg)\right)\tau_{t+1}, \qquad t \geq 1, \label{eqn:z2-delta-alpha-final}\\ \|\beta_{t}\|_2 &= \big\| \big[ \beta_{t}^{-2s},\cdots, \beta_{t}^{t} \big] \big\|_2 = 1,\\ \|\xi_{t}\|_2 &\lesssim \sqrt{\frac{t\log n}{(\lambda-1)^{3}n}}+\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{9}n}} . \end{align} \end{subequations} \end{theos} \begin{remark} Note that the assumption $\lambda \leq 1.2$ is not necessary and can be safely eliminated. We assume $\lambda \leq 1.2$ for two reasons: (i) it represents the most challenging regime for $\mathbb{Z}_2$ synchronization; (ii) assuming $\lambda \leq 1.2$ allows us to streamline some (non-critical) part of the proof. \end{remark} In words, Theorem~\ref{thm:Z2} captures the finite-sample dynamics of the AMP \eqref{eq:AMP-z2} up to $o\big( \frac{n(\lambda - 1)^{10}}{\log^7 n} \big)$ iterations. Each iterate is very well approximated by a superposition of a signal component and a Gaussian component, up to a small error at most on the order of $\sqrt{\frac{t\log n}{(\lambda-1)^{3}n}}+\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{9}n}}$. Recognizing that $\|\beta_{t-1}\|_2=1$, one arrives at the following heuristic approximation: \begin{equation} x_t \approx \alpha_t v^\star + \mathcal{N}\Big( 0, \frac{1}{n}I_n\Big), \end{equation} which can be rigorized under 1-Wasserstein using standard Gaussian concentration results (see, e.g., Lemma~\ref{lem:wasserstein}); this is consistent with the prediction of prior works (e.g., \citet{deshpande2017asymptotic}) under high-dimensional asymptotics (up to proper rescaling). To the best of our knowledge, Theorem~\ref{thm:Z2} delivers the first finite-sample characterization of AMP in the $\mathbb{Z}_{2}$ synchronization setting beyond $O_n(1)$ iterations. 
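To complement Theorem~\ref{thm:Z2}, the following minimal simulation sketch (assuming NumPy; all variable names are ours and purely illustrative, and the vector $\widetilde{v}$ in \eqref{eqn:Z2-initialization} is taken, as an assumption, to be a random direction) runs the spectrally initialized AMP \eqref{eq:AMP-z2} on a synthetic instance of \eqref{eqn:wigner-Z2} and compares the empirical squared overlap $\lambda^{2}\big(v^{\star\top}\eta_{t}(x_{t})\big)^{2}$ with the state-evolution recursion \eqref{eqn:tau-t-z2}, the latter evaluated by Monte Carlo over $\mathcal{N}(0,1)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, lam, T = 2000, 1.15, 50

# Synthetic instance of (eqn:wigner-Z2): M = lam * v* v*^T + W
v_star = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
G = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
W = (G + G.T) / np.sqrt(2)              # off-diagonal variance 1/n, diagonal variance 2/n
M = lam * np.outer(v_star, v_star) + W

# Power-method (spectral) initialization, cf. (eqn:Z2-initialization)
s = int(np.ceil(lam ** 2 * np.log(n) / (lam - 1.0) ** 2))
x1 = rng.normal(size=n)                 # plays the role of v~ (assumed random direction)
for _ in range(s):
    x1 = M @ x1
x1 = lam * x1 / np.linalg.norm(x1)      # x_1 = lam * a_s * M^s v~, so ||x_1||_2 = lam
if x1 @ v_star < 0:                     # resolve the global sign (for monitoring only)
    x1 = -x1

def denoise(x):
    """eta_t(x) = gamma_t * tanh(pi_t * x), cf. (eqn:eta-z2-new); returns (eta, eta')."""
    pi = np.sqrt(max(n * (x @ x - 1.0), 1e-12))
    th = np.tanh(pi * x)
    gamma = 1.0 / np.linalg.norm(th)
    return gamma * th, gamma * pi * (1.0 - th ** 2)

# AMP updates (eqn:AMP-updates-Z2), with eta_0(x_0) = x_1 / lam
x_t, eta_prev = x1, x1 / lam
for _ in range(T):
    eta, eta_prime = denoise(x_t)
    x_t, eta_prev = M @ eta - np.mean(eta_prime) * eta_prev, eta

# Limiting state evolution (eqn:tau-t-z2), evaluated by Monte Carlo
g = rng.normal(size=200_000)
tau = lam ** 2 - 1.0                    # tau_1
for _ in range(T):
    tau = lam ** 2 * np.mean(np.tanh(tau + np.sqrt(tau) * g))

print("empirical  lam^2 (v*^T eta_T(x_T))^2 :", lam ** 2 * (v_star @ eta_prev) ** 2)
print("state-evolution prediction tau_{T+1} :", tau)
\end{verbatim}
For moderate $n$ and $\lambda$ close to 1, the two printed numbers agree only up to the error terms quantified in \eqref{eqn:z2-final}; the agreement tightens as $n$ grows, consistent with the non-asymptotic rates above.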
As asserted by the result \eqref{eqn:z2-delta-alpha-final} in Theorem~\ref{thm:Z2}, the strength of the signal component in $x_t$ remains fairly close to the prediction of the state evolution \eqref{eqn:tau-t-z2}, that is,
\begin{equation} \frac{\big( \big\langle v^\star, \, \eta_t(x_t) \big\rangle \big)^2}{ \|v^\star\|_2^2 \| \eta_t(x_t) \|_2^2 } = \frac{\tau_{t+1}}{\lambda^2} \left(1+O\bigg(\sqrt{\frac{t\log n}{(\lambda-1)^{8}n}} + \frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{14}n}}\bigg)\right) \end{equation}
up to $o\big( \frac{n(\lambda - 1)^{10}}{\log^7 n} \big)$ iterations, where we recall that $\| \eta_t(x_t) \|_2=\|v^\star\|_2=1$. We also make note of a phase transition phenomenon established in \citet{deshpande2017asymptotic}: when $\lambda < 1$, the Bayes-optimal estimate converges to the zero estimator, meaning that no estimator whatsoever can achieve non-trivial correlation with the underlying signal; in contrast, when $\lambda > 1$, non-trivial correlation with the underlying signal becomes attainable. Therefore, it suffices to focus on the scenario where $\lambda>1$. It is worth emphasizing that our result is fully non-asymptotic in terms of the spectral gap $\lambda - 1$ as well. In fact, our theory allows $\lambda$ to be exceedingly close to 1 (i.e., $\lambda - 1 = o_n(1)$), which is in sharp contrast to prior works that all required $\lambda\geq 1+\epsilon$ for some strictly positive constant $\epsilon$. Note that we have made no effort to obtain the sharpest constant in the assumption $\lambda > 1+ \frac{\log n}{n^{1/16}}$; the exponent $1/16$ herein can likely be improved with more careful bookkeeping.
\begin{remark} As pointed out by \citet[Lemma A.7]{celentano2021local}, in the large-$n$ limit, the AMP algorithm matches the asymptotic performance of the Bayes-optimal estimator, in the sense that
\begin{align*} \lim_{t\to \infty}\lim_{n \to \infty} \big\|v^\star v^{\star\top} - x_tx_t^\top \big\|^2_{\mathrm{F}} =\lim_{t\to \infty}\lim_{n \to \infty} \big\|v^\star v^{\star\top} - \widehat{X}^{\mathrm{bayes}} \big\|^2_{\mathrm{F}}. \end{align*}
This further implies that the minimum mean square estimation error is dictated by the (unique) fixed point of the state evolution recursion \eqref{eqn:tau-t-z2}. In addition, our proof of Theorem~\ref{thm:Z2} also makes explicit the convergence rate of $\tau_t$ to $\tau^{\star}$. To be more precise, as we shall demonstrate in Section~\ref{sec:main-recursion-z2} (see, e.g., discussions around \eqref{eq:SE-induction} and \eqref{eqn:middle}), we have
\begin{align*} \big|\tau_{t+1} - {\tau^\star}\big| \leq \big(1 - (\lambda-1) \big)\big|\tau_t - {\tau^\star}\big|. \end{align*}
This taken collectively with Theorem~\ref{thm:Z2} leads to
\begin{align} \alpha_t^2 - \tau^{\star} = (\lambda^2-1)\big(1 - (\lambda-1) \big)^t + O\bigg(\sqrt{\frac{t\log n}{(\lambda-1)^{8}n}} + \frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{14}n}}\bigg), \end{align}
which captures how far $\alpha_t^2$ deviates from the asymptotic limit as the iteration number $t$ increases. This helps answer a natural question regarding the finite-sample convergence property of spectrally initialized AMP. \end{remark}
\section{A general recipe for non-asymptotic analysis of AMP}
\label{sec:main}
In this section, we develop a general recipe that leads to a non-asymptotic analysis framework for the AMP algorithm \eqref{eqn:AMP-updates}.
This framework comprises two master theorems (i.e., Theorems~\ref{thm:recursion} and \ref{thm:main}) that uncover a key decomposition of the AMP iterates and single out several key quantities to be controlled in order to bound the deviation between the true AMP behavior and the state evolution recurrence. Our analysis framework is further extended in Section~\ref{sec:spectral} to accommodate spectrally initialized AMP. Here and throughout, it is assumed that the denoising functions $\{\eta_t(\cdot)\}$ are differentiable almost everywhere. In addition, we allow the denoising function $\eta_t(\cdot)$ in the $t$-th iteration to be chosen based on the current iterate $x_t$.
\subsection{A crucial decomposition of AMP iterates}
\label{sec:decomposition-thm-1}
We begin by presenting a key decomposition of the AMP iterates in the following theorem, which lies at the core of the non-asymptotic theory developed in this paper. The proof of this result is postponed to Section~\ref{sec:pf-thm-recursion}.
\begin{theos} \label{thm:recursion} Suppose that the AMP algorithm~\eqref{eqn:AMP-updates} is initialized with some vector $x_0 $ obeying $\eta_0(x_0)=0$ and some vector $x_1$ independent of $W$. Then for every $1\le t< n$, the AMP iterates admit the following decomposition:
\begin{align} \label{eqn:xt-decomposition} x_{t+1} = \alpha_{t+1} v^\star + \sum_{k = 1}^{t} \beta_{t}^k\phi_k + \xi_{t}, \end{align}
where \begin{itemize} \item[(i)] the coefficient $\alpha_{t+1} \in \ensuremath{\mathbb{R}}$ obeys $\alpha_{t+1} = \lambda v^{\star\top} \eta_{t}(x_{t})$; \item[(ii)] $\{\phi_k\}_{1\leq k\leq t}$ are independently generated obeying $\phi_k \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0, \frac{1}{n}I_n)$; \item[(iii)] the coefficient vector $\beta_t\coloneqq (\beta_t^1,\beta_t^2,\ldots,\beta_t^t) \in \ensuremath{\mathbb{R}}^t$ obeys $\|\beta_t\|_2 = \left\|\eta_{t}(x_t)\right\|_2$; \item[(iv)] $\xi_t \in \mathbb{R}^{n}$ is some residual vector such that, with probability at least $1-O(n^{-11})$,
\begin{align} \label{eqn:xi-norm-main} \|\xi_{t}\|_2 = \Big\langle \sum_{k = 1}^{t-1} \mu_t^k\phi_k, \delta_{t}\Big\rangle - \langle\delta_{t}^{\prime}\rangle \sum_{k = 1}^{t-1} \mu_t^k\beta_{t-1}^k + \Delta_t + O\Big(\sqrt{\frac{t\log n}{n}}\|\beta_{t}\|_2 \Big) \end{align}
holds for some unit vector $\mu_t = [\mu_t^k]_{1\leq k\leq t-1} \in \mathbb{R}^{t-1}$, where we define
\begin{subequations} \label{eqn:delta-chorus} \begin{align} v_t &\coloneqq \alpha_t v^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k , \label{defn:v-t-thm1}\\ \label{defn:delta-t} \delta_{t} &\coloneqq \eta_{t}(x_t) - \eta_{t}(v_t), \\ \label{defn:delta-prime-t} \delta_{t}^{\prime} &\coloneqq \eta_{t}^{\prime}(x_t) - \eta_{t}^{\prime}(v_t),\\ \label{defn:Delta-t} \Delta_t &\coloneqq \sum_{k = 1}^{t-1} \mu_t^k \Big[\big\langle \phi_k, \eta_{t}(v_t)\big\rangle - \big\langle\eta_t^{\prime}(v_t)\big\rangle \beta_{t-1}^k\Big]. \end{align} \end{subequations}
\end{itemize} \end{theos}
\begin{remark} The auxiliary vector $v_t$ defined in \eqref{defn:v-t-thm1}, which is a linear combination of $v^\star$ and the Gaussian vectors $\{\phi_{k}\}$, can be viewed as $x_t$ with the residual term $\xi_{t-1}$ dropped (see \eqref{eqn:xt-decomposition}). As we shall see momentarily, $v_t$ often serves as a fairly tight and informative approximation of $x_t$.
\end{remark}
In a nutshell, Theorem~\ref{thm:recursion} decomposes the $(t+1)$-th iterate of the AMP algorithm $x_{t+1}$ into three components: {\em $(1)$ a signal component $\alpha_{t+1}v^\star$:} which is perfectly aligned with $v^\star$, whose strength is captured by $\alpha_{t+1}$; $(2)$ {\em a random noise component $\sum_{k = 1}^{t} \beta_{t}^k\phi_k$:} which behaves as a weighted superposition of $t$ i.i.d.~Gaussian vectors, although the weights $\beta_t$ might be statistically dependent on the $\phi_k$'s; $(3)$ {\em a residual term $\xi_{t}$:} which hopefully can be well controlled. This decomposition, which holds all the way up to the $n$-th iteration, is fairly general and plays a crucial role in obtaining non-asymptotic characterizations of $x_{t+1}$. In particular, it imposes little assumption (resp.~no assumption) on the denoising function (resp.~the underlying signal $v^\star$). In what follows, we single out several important remarks about the three components in \eqref{eqn:xt-decomposition}.
\begin{itemize}
\item Let us first look at the random noise component $\sum_{k = 1}^{t} \beta_{t}^k\phi_k$. Clearly, if $\beta_{t}$ were statistically independent from the i.i.d.~Gaussian vectors $\{\phi_k\}$, then $\frac{1}{\|\beta_{t}\|_2}\sum_{k = 1}^{t} \beta_{t}^k\phi_k$ would exhibit an ideal Gaussian distribution $\mathcal{N}\big(0, \frac{1}{n}I_n\big)$. In general, however, $\beta_{t}$ exhibits delicate dependency on $\{\phi_k\}$, thus complicating matters. Fortunately, the 1-Wasserstein distance between $\frac{1}{\|\beta_{t}\|_2}\sum_{k = 1}^{t} \beta_{t}^k\phi_k$ and the desired $\mathcal{N}\big(0, \frac{1}{n}I_n\big)$ remains small as long as $t$ is not too large; that is,
\begin{align} W_{1}\Bigg(\mu\bigg(\frac{1}{\ltwo{\beta_t}}\sum_{k=1}^{t}\beta_{t}^{k}\phi_{k}\bigg), \,\mathcal{N}\bigg(0,\frac{1}{n}I_{n}\bigg)\Bigg) \lesssim \sqrt{\frac{t\log n}{n}}, \end{align}
as asserted by Lemma~\ref{lem:wasserstein}, where $\mu(X)$ denotes the law of the random variable $X$. This reveals that this random noise component closely resembles an ideal Gaussian vector $\mathcal{N}\big(0, \frac{1}{n}I_n\big)$ for a wide range of $t$.
\item Next, as shall be made clear momentarily, the quantities $\alpha_{t+1}$ and $\ltwo{\beta_{t}}$ in \eqref{eqn:xt-decomposition} are intimately related to the primary quantities in the state evolution formula~\eqref{eq:SE-Montanari}, although their evolutions are now described in a non-asymptotic fashion. This paves the way for a non-asymptotic characterization of the convergence behavior of AMP towards a stationary point.
\item The residual term $\xi_{t}$ is fairly complicated, depending heavily on the previous iterations of AMP as well as the specific choices of the denoising functions $\eta_{t}$. Indeed, with different choices of $\eta_{t}$, the residual term $\xi_t$ might exhibit very different dependence on the salient parameters. In Theorem~\ref{thm:recursion} and its analysis, we provide a recursive characterization of $\|\xi_t\|_2$ using several quantities in the preceding iteration, and unveil certain low-dimensional structure of the residual term $\xi_t$. These important observations pave the way to a more systematic control of these residual terms.
\end{itemize}
Finally, it is worth noting that prior AMP theory often hinges upon a Gaussian conditioning technique (e.g., \cite{bayati2011dynamics,bolthausen2009high,rush2018finite}).
When this technique is applied to quantify the behavior of AMP, it falls short of delineating --- in a simple yet tractable way --- how the residual terms accumulate over time. Instead, basic union bounds are applied to the branches of certain tree-like error relations, which explode exponentially fast in $t$ and lose control if the number of iterations exceeds $o\big( \frac{\log n}{\log \log n}\big)$. Addressing this issue calls for a more refined and parsimonious way to track error accumulation, which inspires the development of Theorem~\ref{thm:recursion} and the ensuing theory. \subsection{Non-asymptotic error characterizations} \label{sec:decomposition-thm-2} Thus far, we have identified a general decomposition of the AMP iterates in Theorem~\ref{thm:recursion}, accompanied by a recursive formula \eqref{eqn:xi-norm-main} to describe how the size $\|\xi_{t}\|_2$ of the residual term evolves. Nevertheless, the formula \eqref{eqn:xi-norm-main} might remain elusive at first glance, as it is built upon multiple different objects in the previous iteration. In order to better understand the advantages of the recursive relations in Theorem~\ref{thm:recursion}, we single out several additional quantities, which --- if easily controllable --- help further simplify the recurrence. These taken collectively constitute our general recipe for non-asymptotic analysis of AMP, whose utility will be brought to light via two concrete applications in Section~\ref{sec:examples}. \paragraph{Assumptions and key quantities.} Let us first impose the following basic assumptions on the denoising function $\eta_{t}: \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}$. Here and throughout, we let $\eta_t^{\prime}(\cdot)$, $\eta_t^{\prime\prime}(\cdot)$ and $\eta_t^{\prime\prime\prime}(\cdot)$ denote respectively the first-order, second-order, and third-order derivatives of $\eta_{t}$; when we apply $\eta_t^{\prime}(\cdot)$, $\eta_t^{\prime\prime}(\cdot)$ and $\eta_t^{\prime\prime\prime}(\cdot)$ to vectors, it is understood that they are applied entry-by-entry. \begin{assumption} \label{assump:eta} For every $1\leq t\leq n$, it is assumed that: \begin{itemize} \item $\eta_t(\cdot)$ is continuous everywhere, and is differentiable up to the 3rd order everywhere except for a finite set $\mathcal{M}_{\mathsf{dc}}$ of points with $\big|\mathcal{M}_{\mathsf{dc}}\big|=O(1)$; \item $|\eta_{t}^{\prime}(w)|\leq \rho$ for any differentiable point $w$ of $\eta_t(\cdot)$; \item $|\eta_{t}^{\prime\prime}(w)|\leq \rho_1$ for any differentiable point $w$ of $\eta_t^{\prime}(\cdot)$; \item $|\eta_{t}^{\prime\prime\prime}(w)|\leq \rho_2$ for any differentiable point $w$ of $\eta_t^{\prime\prime}(\cdot)$. \end{itemize} \end{assumption} \noindent For notational simplicity, we shall --- unless otherwise noted --- take $\eta_t^{\prime}(w)=\eta_t^{\prime\prime}(w)=\eta_t^{\prime\prime\prime}(w)=0$ for any non-differentiable point, with the impact of these singular points explicitly taken into account in the quantity $E_t$ to be defined in Assumption~\ref{assump:A-H-eta}. In the next assumption, we would like to isolate a few additional quantities that can often be bounded separately. 
We shall formally state this assumption after defining the following additional quantity: \begin{align} \label{defi:kappa} \notag \kappa_t^2 \coloneqq \max\Bigg\{ \Bigg\langle\int\Big[x\eta_{t}^{\prime}\Big(\alpha_tv^\star + \frac{\|\beta_{t-1}\|_2}{\sqrt{n}}x\Big) - \frac{\ltwo{\beta_{t-1}}}{\sqrt{n}}&\eta_{t}^{\prime\prime}\Big(\alpha_tv^\star + \frac{\|\beta_{t-1}\|_2}{\sqrt{n}}x\Big)\Big]^2 \varphi_n(\mathrm{d} x)\Bigg\rangle, ~\\ &\bigg\langle \int\Big[\eta_{t}^{\prime}\Big(\alpha_tv^\star + \frac{\|\beta_{t-1}\|_2}{\sqrt{n}}x\Big)\Big]^2\varphi_n(\mathrm{d} x) \bigg\rangle\Bigg\}, \end{align} where we recall that $\varphi_n(\cdot)$ is the pdf~of $\mathcal{N}(0,I_n)$ and $\langle x\rangle \coloneqq \frac{1}{n} \sum_{i=1}^{n} x_{i}$. \begin{assumption} \label{assump:A-H-eta} For any $1\leq t \leq n$, consider arbitrary vectors $\mu_t \in \mathcal{S}^{t-1}$, $\xi_{t-1}\in \ensuremath{\mathbb{R}}^n$, and coefficients $(\alpha_t, \beta_{t-1}) \in \ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{R}}^{t-1}$ that might all be statistically dependent on $\phi_{k}$, and define $v_t$ as in \eqref{defn:v-t-thm1} accordingly. In addition to imposing Assumption~\ref{assump:eta}, we assume the existence of (possibly random) quantities $A_{t}, \cdots, G_{t}$ such that with probability at least $1-O(n^{-11})$, the following inequalities hold \begin{subequations} \begin{align} \label{defi:A} \Big|\sum_{k = 1}^{t-1} \mu_t^k\Big[\big\langle \phi_k, \eta_{t}(v_t)\big\rangle - \big\langle\eta_t^{\prime}(v_t)\big\rangle \beta_{t-1}^k\Big]\Big| &\,\le\, A_t, \\ \Big|v^{\star\top}\eta_{t}(v_t) - v^{\star\top}\int\eta_t\Big(\alpha_t v^\star + \frac{\|\beta_{t-1}\|_2}{\sqrt{n}}x\Big)\varphi_n(\mathrm{d} x)\Big| &\,\le\, B_t, \label{defi:B}\\ \Big|\big\|\eta_{t}(v_t)\big\|_2^2 - \int\Big\|\eta_t\Big(\alpha_t v^\star + \frac{\|\beta_{t-1}\|_2}{\sqrt{n}}x\Big)\Big\|_2^2\varphi_n(\mathrm{d} x)\Big| &\,\le\, C_t, \label{defi:C}\\ \Big\|\sum_{k = 1}^{t-1} \mu_t^k\phi_k \circ \eta_{t}^{\prime}(v_t) - \frac{1}{n}\sum_{k = 1}^{t-1} \mu_t^k\beta_{t-1}^k\eta_{t}^{\prime\prime}(v_t)\Big\|_2^2 - \kappa_t^2 &\,\le\, D_t, \label{defi:D}\\ \big\|\eta_{t}(v_t) \circ \eta_{t}^{\prime}(v_t)\big\|_2 &\,\le\, F_t, \label{defi:F}\\ \big\|\eta_{t}(v_t)\big\|_{\infty} &\,\le\, G_t. \label{defi:G} \end{align} In addition, for any non-differentiable point $m\in \mathcal{M}_{\mathsf{dc}}$, define $\theta(m)\in \ensuremath{\mathbb{R}}$ as \begin{align} \label{defi:theta} \theta(m)\coloneqq\sup\left\{ \theta: \,\sum_{j=1}^{n}\bigg|m-\alpha_{t}v^\star_{j}-\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}\bigg|^{2}\ind\bigg(\bigg|m-\alpha_{t}v^\star_{j}-\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}\bigg|\le\theta\bigg) \le \|\xi_{t-1}\|_{2}^{2}\right\} , \end{align} and we assume the existence of some quantity $E_t$ such that, with probability at least $1-O(n^{-11})$, \begin{align} \label{defi:E} \sum_{m\in\mathcal{M}_{\mathsf{dc}}}\sum_{j=1}^{n}\ind\Big(\Big|m-\alpha_{t}v^\star_{j}-\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}\Big|\le\theta(m)\Big)\,\le\,E_{t} . \end{align} \end{subequations} \end{assumption} \paragraph{Error control and state evolution.} Armed with the above two assumptions, we are positioned to control the magnitude of the residual term $\xi_{t}$ as well as quantities $\alpha_{t+1}$ and $\beta_{t}$. Our result is summarized in the following theorem, with the proof deferred to Section~\ref{sec:pf-thm-main}. 
% \begin{theos} \label{thm:main} Consider the settings of Theorem~\ref{thm:recursion}, and impose Assumptions~\ref{assump:eta}-\ref{assump:A-H-eta}. Then with probability at least $1-O(n^{-11})$, the AMP iterates \eqref{eqn:AMP-updates} satisfy the decomposition~\eqref{eqn:xt-decomposition} with \begin{subequations} \label{eqn:state-evolution-finite} \begin{align} \label{eqn:alpha-t-genearl} \alpha_{t+1} &= \lambda v^{\star \top} \int{\eta}_{t}\left(\alpha_t v^\star + \frac{\ltwo{\beta_{t-1}}}{\sqrt{n}}x\right)\varphi_n(\mathrm{d} x) + \lambda\Delta_{\alpha,t} \\ \|\beta_t\|_2^2 &= n\bigg\langle\int{\eta}_{t}^2\left(\alpha_t v^\star + \frac{\ltwo{\beta_{t-1}}}{\sqrt{n}}x\right)\varphi_n(\mathrm{d} x)\bigg\rangle + \Delta_{\beta,t} \label{eqn:beta-t-genearl} \end{align} \end{subequations} for any $t\leq n$, where the residual terms obey \begin{subequations} \label{eqn:para-general} \begin{align} \label{eqn:delta-alpha-general} |\Delta_{\alpha,t}| &\,\lesssim\, B_t + \left(\rho + \rho_1\|v^\star\|_{\infty} \|\xi_{t-1}\|_2 + \rho\bigg(\sum_{i=1}^{E_{t}}|v^\star|_{(i)}^{2}\bigg)^{1/2}\right) \|\xi_{t-1}\|_2, \\ \label{eqn:delta-beta-general} |\Delta_{\beta,t}| &\lesssim C_t + \left(F_t + \rho_1G_t\|\xi_{t-1}\|_2 + \rho\sqrt{E_t}G_t + \rho^2\|\xi_{t-1}\|_2 + \rho_1^2\|\xi_{t-1}\|_2^3 + \rho^2E_t\|\xi_{t-1}\|_2\right)\|\xi_{t-1}\|_2, \\ \label{eqn:xi-t-general} \|\xi_{t}\|_{2}&\le\sqrt{\kappa_{t}^{2}+D_{t}}\,\|\xi_{t-1}\|_{2}+O\Bigg(\sqrt{\frac{t\log n}{n}}\|\beta_{t}\|_{2}+A_{t}+\left[\sqrt{\frac{t+\log n}{n}}\rho_{1}+\frac{\rho_{2}\|\beta_{t-1}\|_{2}}{n}\right]\|\xi_{t-1}\|_{2}^{2}\nonumber\\ & \qquad\qquad\qquad\qquad\qquad\qquad+\rho\sqrt{\frac{{E_{t}+t\log n}}{n}}\|\xi_{t-1}\|_{2}+\frac{(\rho+\rho_{1}\big\|\xi_{t-1}\big\|_{\infty})E_{t}\|\beta_{t-1}\|_{2}}{n}\Bigg) . \end{align} \end{subequations} \end{theos} Theorem~\ref{thm:main} offers an explicit and recursive way to control the quantities $\alpha_{t+1}, \ltwo{\beta_t}, \xi_{t}$, assuming that the quantities $A_t,\ldots,G_t$ isolated in Assumption~\ref{assump:A-H-eta} can be bounded effectively. This framework is fully non-asymptotic, provided that $A_t,\ldots,G_t$ admits some non-asymptotic bounds as well. Crucially, the results in \eqref{eqn:state-evolution-finite} can be viewed as the non-asymptotic analog of the asymptotic state evolution recurrence~\eqref{eq:SE-Montanari}. To be more precise, note that if we assume the empirical distribution of $\{\sqrt{n}v^\star_{i}\}_{1\leq i\leq n}$ converges to some distribution $\mu_{V}$ and generate $V\sim \mu_V, G\sim \mathcal{N}(0,1)$ independently, then \eqref{eqn:state-evolution-finite} can be alternatively interpreted as \begin{subequations} \begin{align} \frac{\alpha_{t+1}}{\sqrt{n}} &\approx \lambda \ensuremath{\mathbb{E}} \bigg[V \eta_t\Big( \frac{\alpha_t}{\sqrt{n}} V + \frac{\ltwo{\beta_{t-1}}}{\sqrt{n}} G \Big)\bigg] , \\ \frac{\ltwo{\beta_{t}}^2}{n} &\approx \ensuremath{\mathbb{E}} \bigg[\eta_t^2 \Big(\frac{\alpha_t}{\sqrt{n}} V + \frac{\ltwo{\beta_{t-1}}}{\sqrt{n}} G \Big)\bigg] , \end{align} \end{subequations} which --- upon proper rescaling --- is consistent with \eqref{eq:SE-Montanari} as long as $\Delta_{\alpha}$ and $\Delta_{\beta}$ are negligible. The basic idea of Theorem~\ref{thm:main} is to divide the ultimate goal into multiple sub-tasks, motivating us to bound the derivatives stated in Assumption~\ref{assump:eta} and each of the quantities $A_t,\ldots,G_t$ separately. It remains unclear, however, whether it is feasible to control $A_t,\ldots,G_t$ to the desired order. 
In the sequel, let us illustrate this point by looking at some examples; more details can be seen when we move on to the two concrete applications in Section~\ref{sec:examples}.
\begin{itemize}
\item Let us take a closer look at the left-hand side of \eqref{defi:A} concerning $A_t$. Heuristically, consider the idealistic case where $\mu_{t}$, $\alpha_t$ and $\beta_{t-1}$ are independent of $\{\phi_k\}_{1\leq k\leq t-1}$. By Stein's lemma, we can easily show that the quantity $\sum_{k = 1}^{t-1} \mu_t^k\big[\big\langle \phi_k, \eta_{t}(v_t)\big\rangle - \big\langle\eta_t^{\prime}(v_t)\big\rangle \beta_{t-1}^k\big]$ has zero mean. In addition, this quantity can be viewed as a Lipschitz function of an i.i.d.~Gaussian vector, which is expected to concentrate sharply around its mean \citep{massart2007concentration}. Such concentration results can then be extended to accommodate statistically dependent $\mu_{t}$ and $\beta_{t-1}$ via standard covering arguments (see, e.g., the uniform concentration results in Section~\ref{sec:Gaussian-concentration}). Similar ideas can be applied to control $D_t$ (cf.~\eqref{defi:D}), although the expression of $D_t$ is more complicated and it has a non-zero mean value.
\item Similarly, the target quantities (excluding the absolute value symbols) that define $B_t$ (cf.~\eqref{defi:B}) and $C_t$ (cf.~\eqref{defi:C}) are also zero-mean Lipschitz functions of i.i.d.~Gaussian vectors, if we take $\alpha_t$ and $\beta_{t-1}$ to be independent of $\{\phi_k\}_{1\leq k\leq t-1}$. As a result, we expect $B_t$ and $C_t$ to be controllable again using uniform Gaussian concentration results.
\item The quantity $E_t$ captures the influence of the non-differentiable points of the denoising function $\eta_{t}(\cdot)$. In those problems with smooth $\eta_t(\cdot)$ (e.g., $\mathbb{Z}_2$ synchronization to be explored in Section~\ref{sec:z2-main}), we have $E_{t} = 0$, which allows for significant simplification of \eqref{eqn:para-general}. Nonetheless, it plays a crucial role in problems with non-differentiable denoising functions, as shall be seen in the example of sparse PCA (in Section~\ref{sec:sparse-main}).
\end{itemize}
Finally, the signal-to-noise ratio in the decomposition~\eqref{eqn:xt-decomposition} is captured by $\frac{\alpha_{t+1}}{\ltwo{\beta_t}}.$ Clearly, if throughout the execution of AMP, each $\eta_t$ is properly normalized such that $\ltwo{\eta_t(x_t)} = \|\beta_{t}\|_2 = 1$, then $\Delta_{\beta,t} = 0$ for every $1 \leq t < n$. Therefore it is sufficient to focus on the dynamics of $\{\alpha_{t}\}$. In this case, the application of Theorem~\ref{thm:main} is further simplified to controlling the quantities $A_t, B_t$, $D_t$ and $E_t$.
\section{Introduction}
Approximate Message Passing (AMP) refers to a class of iterative algorithms that has received considerable attention over the past two decades, partly due to its versatility in solving a diverse array of science and engineering problems (\cite{schniter2011message,fletcher2014scalable,rush2017capacity,borgerding2016onsager}) as well as its capability of approaching the theoretical limits of many of these problems. Originally introduced in the context of compressed sensing as a family of low-complexity iterative algorithms \citep{donoho2009message}, AMP lends itself well to a wide spectrum of high-dimensional statistical problems, both as a class of efficient estimation algorithms and as a powerful theoretical machinery.
Examples of this kind abound, including robust M-estimators \citep{donoho2016high,donoho2015variance}, sparse linear regression \citep{bayati2011lasso,donoho2013information,bu2020algorithmic,li2021minimum}, generalized linear models \citep{sur2019likelihood,sur2019modern,venkataramanan2021estimation,barbier2019optimal}, phase retrieval \citep{ma2018optimization,schniter2014compressive,aubin2020exact}, community detection \citep{deshpande2017asymptotic,ma2021community}, structured matrix estimation and principal component analysis (PCA) \citep{rangan2012iterative,montanari2021estimation,deshpande2014information,mondelli2021pca}, mean-field spin glass models \citep{sellke2021optimizing,fan2022tap,fan2021replica}, to name just a few. The interested reader is referred to \cite{feng2021unifying} for a recent overview of AMP and its wide applicability.
\subsection{Asymptotic vs.~non-asymptotic AMP theory}
\paragraph{High-dimensional asymptotics and state evolution.} A key appealing feature of AMP lies in its effectiveness in tackling high-dimensional asymptotics or large-system limits (for instance, in robust M-estimation, this might refer to the regime where the number of observations scales proportionally with the number of unknowns \citep{bayati2011dynamics,javanmard2013state}). In such challenging regimes, the limiting behavior of AMP (as the problem dimension diverges) can often be accurately predicted by the so-called \emph{state evolution (SE)}, a recurrence formula that tracks how a small number of key parameters evolve from one iteration to the next. For various estimation problems, an algorithmic design paradigm is to construct a general class of AMP instances, and then identify the optimal choice by inspecting their state-evolution characterizations (which can often be done given that state evolution might only involve very few (e.g., two) key parameters).
\paragraph{Non-asymptotic theory for AMP?} Despite the predictive power of state evolution in high-dimensional asymptotics, most existing AMP theory exhibits an asymptotic flavor (often stated in a weak-convergence sense as the problem dimension tends to infinity), and falls short of validity when the number of iterations grows with the problem dimension. In light of this, two main limitations are pronounced in the current understanding of AMP:
\begin{itemize}
\item[(i)] When AMP is deployed as an analysis device, the theoretical guarantees obtained based on existing state-evolution predictions are asymptotic in nature. For this reason, it might sometimes lose advantages over alternative machinery such as the convex Gaussian min-max theorem \citep{thrampoulidis2018precise,celentano2020lasso} and the leave-one-out analysis framework \citep{el2018impact} when the goal is to understand non-asymptotic fine-grained statistical behavior of the estimators;
\item[(ii)] When AMP is employed as an optimization algorithm of its own, most prior AMP theory could only accommodate a non-growing number of iterations, thereby significantly limiting the optimization accuracy AMP can achieve (e.g., such asymptotic AMP theory cannot yield an optimization error that is $o_n(1)$). This stands in stark contrast to other non-asymptotic analyses of optimization-based algorithms (e.g., gradient descent), which deliver characterizations of the iteration complexity for arbitrary optimization accuracy levels (e.g., \citet{keshavan2010matrix,candes2015phase,ma2020implicit}).
\end{itemize} \noindent In order to address the aforementioned limitations of asymptotic theory, \citet{rush2018finite} developed a finite-sample analysis of AMP (for noisy linear models) that permits the number of iterations to reach $o\big( \frac{\log n }{ \log\log n}\big)$. However, $o\big( \frac{\log n }{ \log\log n}\big)$ iterations of AMP are, for the most part, unable to yield a (relative) convergence error of $O(n^{-\varepsilon})$ for even an arbitrarily small constant $\varepsilon>0$. Another recent work \cite{celentano2021local} considered the use of spectrally initialized AMP for $\mathbb{Z}_2$ synchronization, and appended it with another gradient-type algorithm in order to allow for a growing number of iterations; this, however, did not reveal non-asymptotic behavior of AMP either. All this motivates the following question that we would like to study in this paper: \begin{center} \emph{Is it possible to develop non-asymptotic analysis of AMP beyond $o\big( \frac{\log n}{ \log \log n} \big)$ iterations?} \end{center} On a technical level, the challenge lies in understanding the complicated dependence structures of AMP iterates across iterations. In prior analysis, the bounds on certain residual terms (e.g., the difference between the behavior of the AMP and what state evolution predicts) blow up dramatically fast in the iteration number, thus calling for new analysis ideas to enable tighter controls of such residual terms. \subsection{AMP for spiked Wigner models} In this paper, we attempt to answer the question posed above in the affirmative, focusing on the context of estimation in spiked matrix models as detailed below. To facilitate concrete discussions, let us first set the stage by introducing the model and algorithm studied herein, before moving on to describe our main results in the next subsection. \paragraph{Spiked Wigner models.} The spiked matrix model refers to a class of data matrices that can be decomposed into a rank-one signal and a random noise matrix, which was proposed by \cite{johnstone2001distribution} as a way to study PCA in high dimension and has inspired substantial subsequent works in both statistics and random matrix theory \citep{peche2006largest,baik2005phase,bai2008central,johnstone2009consistency,johnstone2018pca}. This paper assumes access to a rank-one deformation of a Wigner matrix $W=[W_{ij}]_{1\leq i,j\leq n}$ as follows: \begin{align} \label{eqn:wigner} M = \lambda v^\star v^{\star\top} + W \in \mathbb{R}^{n\times n}, \end{align} where the spiked vector $v^{\star}=[v^{\star}_i]_{1\leq i\leq n} \in \ensuremath{\mathbb{R}}^n$ obeys $\|v^\star\|_2=1$ and represents the signal to be estimated, $\lambda>0$ determines the signal-to-noise ratio (SNR), and the $W_{ij}$'s ($i\geq j$) are independently generated such that \begin{align} W_{ij} = W_{ji} \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}\Big(0, \frac{1}{n} \Big) \qquad \text{and} \qquad W_{ii} \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}\Big(0,\frac{2}{n} \Big) . \end{align} As has been shown in prior literature \citep{peche2006largest,feral2007largest,capitaine2009largest}, the leading eigenvalue of $M$ stands out from the semicircular bulk under the condition $\lambda > 1$; in contrast, it is information-theoretically infeasible to detect the planted signal if $\lambda < 1$, unless additional structural information about $v^\star$ is available. 
Prominent examples of such structural information include sparsity \citep{johnstone2009consistency,berthet2013optimal}, non-negativity \citep{montanari2015non}, cone constraints \citep{deshpande2014cone,lesieur2017constrained}, synchronization over finite groups \citep{perry2018message,javanmard2016phase}, among others. Nevertheless, finding the maximum likelihood estimates or Bayes-optimal estimates is often computationally intractable (due to nonconvexity), thus complicating the computational/statistical analyses of the iterative estimators in use. \paragraph{AMP for spiked Wigner models.} The AMP algorithm tailored to estimating the spiked Wigner model adopts the following update rule: \begin{align} \label{eqn:AMP-updates} x_{t+1} = M\eta_t(x_{t}) - \big\langle\eta_t^{\prime}(x_{t}) \big\rangle \cdot \eta_{t-1}(x_{t-1}), \qquad \text{ for } t\geq 1. \end{align} where $\langle z \rangle \coloneqq \frac{1}{n} \sum_{i=1}^n z_i$ for any vector $z=[z_i]_{1\leq i \leq n}\in \ensuremath{\mathbb{R}}^n$. Here, the key elements are described as follows: \begin{itemize} \item $x_t\in \ensuremath{\mathbb{R}}^n$ denotes the AMP iterate in the $t$-th iteration, where the initialization $x_0$ and $x_1$ can sometimes be selected in a problem-specific manner. \item The scalar function $\eta_t: \ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}$ stands for the denoising function adopted in the $t$-th iteration, with $\eta_t'(\cdot)$ denoting the derivative of $\eta_t(\cdot)$; when applied to a vector $x$, it is understood that $\eta_t(\cdot)$ (resp.~$\eta'(\cdot)$) is applied entry-by-entry. \item The first term $M\eta_t(x_{t})$ on the right-hand side of \eqref{eqn:AMP-updates} performs a power iteration to the denoised iterate $\eta_t(x_{t})$, while the second term $\big\langle\eta_t^{\prime}(x_{t}) \big\rangle \cdot \eta_{t-1}(x_{t-1})$ --- often referred to as the ``Onsager term'' --- plays a crucial role in cancelling out certain correlation across iterations. \end{itemize} \paragraph{State evolution.} As alluded to previously, the limiting behavior of the AMP sequence can be pinned down through a small-dimensional recurrence termed the {\em state evolution (SE)}. More precisely, assuming that the empirical distribution of\footnote{Here, we adopt the factor $\sqrt{n}$ to be consistent with the scaling of this paper, given that $\ltwo{v^\star} = 1$.} $\{\sqrt{n} v_i^{\star}\}_{i=1}^{n}$ converges weakly to a distribution $\mu_{V}$ on $\mathbb{R}$ with unit second moment, the SE associated with \eqref{eqn:AMP-updates} is the following recurrence involving two scalar sequences $\{\alpha_t\}$ and $\{\beta_t\}$: \begin{subequations} \label{eq:SE-Montanari} \begin{align} \alpha_{t+1} &= \lambda \mathbb{E}\big[V \eta_t(\alpha_t V + \beta_t G)\big] \\ \beta_{t+1}^2 &= \mathbb{E}\big[\eta_t^2(\alpha_t V + \beta_t G)\big] \end{align} \end{subequations} for any $t\geq 1$, where ${V \sim \mu_{V}}$ and ${G\sim \mathcal{N}(0,1)}$ are independent random variables. The SE \eqref{eq:SE-Montanari} has been studied by \cite{fletcher2018iterative} in the presence of an independent initialization, and by \cite{montanari2021estimation} under spectral initialization. 
As shown in \citet{montanari2021estimation}, for any fixed $t$ and any pseudo-Lipschitz function $\Psi: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$, it holds almost surely that
\begin{align} \lim_{n\to \infty} \frac{1}{n}\sum_{i=1}^n \Psi(\sqrt{n}v^\star_i, \sqrt{n} x_{t,i}) = \mathbb{E} \Big[\Psi(V, \alpha_tV + \beta_t G)\Big] \label{eq:AMP-prediction-SE} \end{align}
when the AMP sequence $\{x_t\}$ is initialized by spectral methods. Informally, this result \eqref{eq:AMP-prediction-SE} reveals that each coordinate of the AMP iterate behaves like $\alpha_tV + \beta_t G$ (after proper rescaling), containing an extra source of Gaussian-type randomness that is crucial in explaining the AMP dynamics under high-dimensional asymptotics. Moreover, property~\eqref{eq:AMP-prediction-SE} also suggests that the denoising functions $\{\eta_t\}$ can be optimally selected \citep{bayati2011dynamics,montanari2021estimation} as the minimum mean square error (MMSE) estimator (or the Bayes-optimal estimator given $\mu_V$), namely,
\begin{align} \label{eqn:eta-principle} \eta_t(x) \,=\, \ensuremath{\mathbb{E}}[V \mid \alpha_t V + \beta_t G = x]. \end{align}
Note, however, that the validity of this SE-based prediction has only been verified when $t$ is fixed and $n\rightarrow \infty$. It remains to be seen whether the SE can track the AMP behavior in a non-asymptotic manner in the presence of a possibly large number of iterations.
\subsection{A glimpse of main contributions}
The main contribution of this paper is the development of a non-asymptotic analysis framework that helps understand the AMP behavior when the number of iterations scales polynomially in $n$. Our main findings are summarized as follows.
\begin{itemize}
\item {\bf A key decomposition of AMP iterates with tractable residual terms.} We develop in Theorem~\ref{thm:recursion} a general decomposition of the $(t+1)$-th iterate of AMP as follows:
\begin{align} \label{eqn:general-decomp} x_{t+1} = \alpha_{t+1} v^\star + \sum_{k = 1}^{t} \beta_{t}^k\phi_k + \xi_{t}, \qquad \text{for } t\geq 1. \end{align}
Here, $v^\star$ is the underlying signal, $\{\phi_k\}_{k=1}^{t}$ stands for a collection of independent Gaussian vectors, $\alpha_{t+1}$ and $\beta_t=[\beta_t^1,\ldots,\beta_t^{t}]\in \ensuremath{\mathbb{R}}^{t}$ are a set of weights, and $\xi_{t} \in \ensuremath{\mathbb{R}}^{n}$ is a residual term that lies in a $t$-dimensional subspace determined by the previous iterates. This decomposition is fairly general, with little assumption imposed on the denoising function, the number of iterations, or $v^\star$. Our analysis reveals that the residual terms $\{\xi_{t}\}$ can often be bounded in a recursive yet tractable manner without blowing up rapidly.
\item {\bf Finite-sample analysis beyond $o\big(\log n/\log \log n\big)$ iterations.} Leveraging the decomposition in~\eqref{eqn:general-decomp}, we develop in Theorem~\ref{thm:main} an analysis framework to track $\alpha_{t+1}$ and $\beta_{t}$ in a non-asymptotic fashion, which intimately connects with the state evolution recurrence \eqref{eq:SE-Montanari}. In fact, our analysis yields non-asymptotic characterizations of the AMP iterates for a certain polynomial number of iterations, going far beyond the $o\big(\frac{\log n}{\log \log n}\big)$ iterations covered in prior art. All this is largely enabled by our ability to control the residual size $\|\xi_{t}\|_2$ --- often to the order of $O\big(\sqrt{\frac{t\mathrm{poly}\log(n)}{n}} \big)$.
\item {\bf Non-asymptotic theory for AMP with spectral initialization.} A widely used scheme to initialize AMP for spiked models is the spectral method, which often provides an informative initial estimate with non-vanishing correlation with the truth. Motivated by its widespread adoption in practice, we extend the above analysis framework to study the non-asymptotic behavior of spectrally initialized AMP. As it turns out, our AMP analysis recipe can be tightly integrated with the analysis of spectral initialization, with the aid of two auxiliary AMP sequences for which a decomposition similar to \eqref{eqn:general-decomp} is established. Details can be found in Section~\ref{sec:spectral}.
\item {\bf Concrete consequences: non-asymptotic theory for $\mathbb{Z}_2$ synchronization and sparse PCA.} In Section~\ref{sec:examples}, we apply our general recipe to two widely studied models that are very different in nature: the problem of $\mathbb{Z}_{2}$ synchronization and that of sparse PCA (in the context of the sparse spiked Wigner model). For $\mathbb{Z}_{2}$ synchronization, we focus on the most challenging scenario where the spectral gap $\lambda - 1$ approaches 0, and characterize the non-asymptotic behavior of spectrally initialized AMP all the way up to $O\big(\frac{n}{\mathrm{poly}\log (n)}\big)$ iterations (in addition to other dependency on $\lambda-1$). This helps address a conjecture in \cite{celentano2021local} regarding the finite-sample behavior of spectrally initialized AMP. When it comes to the sparse spiked Wigner model, our general recipe leads to non-asymptotic characterizations of the AMP iterates as well. If an independent yet informative initialization is provided, then our theory allows the SNR to approach the order of the information-theoretic limit; otherwise, our AMP theory can be combined with two initialization schemes in order to accommodate the regime above the computational limit.
\end{itemize}
\subsection{Other related works}
The studies of the spiked Wigner model --- also known as the deformed Wigner model or matrix denoising --- have received much attention from multiple domains, including but not limited to statistics, random matrix theory, and information theory (e.g., \citet{knowles2013isotropic,cheng2021tackling,el2020fundamental,bao2021singular,yan2021inference,fan2022asymptotic,lee2016bulk,perry2018optimality,peng2012eigenvalues,simchowitz2018tight}). Subsuming multiple problems as special cases (e.g., phase synchronization, sparse estimation in Wigner models), the spiked Wigner model serves as a stylized model that helps uncover various phenomena in high dimensions, such as universality, computational-to-statistical gaps, phase transitions, and the unreasonable effectiveness of nonconvex optimization. We briefly highlight some of these aspects below. While a large fraction of AMP theory, including the current paper, focuses on the case with i.i.d.~Gaussian noise and/or i.i.d.~Gaussian designs, certain \emph{universality} phenomena beyond i.i.d.~Gaussian noise have been empirically observed and theoretically established in the context of AMP \citep{bayati2015universality,chen2021universality,wang2022universality,dudeja2022universality,fan2022approximate} and in broader scenarios \citep{lee2016bulk,hu2020universality,oymak2018universality}.
For instance, \citet{bayati2015universality} and \cite{chen2021universality} studied a random design matrix with i.i.d.~sub-Gaussian entries, and \citet{fan2022approximate} was able to accommodate the family of rotationally invariant designs, thus allowing for a spectral distribution that differs from the semicircle or Marcenko-Pastur law. Additionally, for many structured estimation problems, empirical evidence suggests the potential existence of a gap between the fundamental statistical limit and what can be done computationally efficiently. This has inspired considerable theoretical interest towards solidifying such computational-to-statistical gaps; see \cite{bandeira2018notes} for a tutorial and also \cite{zdeborova2016statistical} for a connection to statistical physics. The spiked Wigner model forms an idealized model to study such gaps, for multiple structured problems like sparse PCA and non-negative PCA. It is also worth noting that AMP, in various settings, is able to achieve the optimal performance among polynomial-time estimators \citep{donoho2009message,celentano2022fundamental}. It has also been employed as a machinery to characterize the information-theoretic limits of several high-dimensional problems \citep{deshpande2014information,barbier2016mutual,reeves2019replica}. Further, estimating the underlying signal from a spiked Wigner model is, for the most part, concerned with solving a highly nonconvex problem, particularly in the presence of additional structural constraints. In such cases, the initialization schemes exert considerable influences on the subsequent AMP dynamics. In fact, a large body of existing AMP theory assumes availability of an informative initialization. For instance, in a special case where each entry of $v^\star$ has positive mean, it might be sufficient to initialize AMP with an all-one vector \citep{deshpande2014information,montanari2015non}; when the SNR is large enough such that $\lambda > 1$, an estimate returned by the spectral method is known to achieve strictly positive correlation with the ground-truth spike, which therefore serves as a common initialization scheme for AMP as well \citep{montanari2021estimation,fan2021tap}. \subsection{Organization and notation} \paragraph{Paper organization.} The remainder of this paper is organized as follows. Sections~\ref{sec:decomposition-thm-1}-\ref{sec:decomposition-thm-2} develop a general recipe that enables non-asymptotic characterizations of the AMP in spiked models, assuming independent initialization. This framework is further extended in Section~\ref{sec:spectral} for the case when AMP is used along with spectral initialization. Sections~\ref{sec:z2-main} and \ref{sec:sparse-main} instantiate our analysis framework to $\mathbb{Z}_{2}$ synchronization and sparse PCA, respectively, confirming the utility of our non-asymptotic theory. The proof ideas of two master theorems are presented in Section~\ref{sec:main-analysis}, with other technical details deferred to the appendices. Section~\ref{sec:discussion} concludes the paper by pointing out several future directions. \paragraph{Notation.} We often use 0 (resp.~1) to denote the all-zero (resp.~all-one) vector, and let $I_n$ (or simply $I$) denote the $n\times n$ identity matrix. For any $w\in \ensuremath{\mathbb{R}}$, we denote $w_+ \coloneqq \max\{w, 0\}$. We denote by $\varphi(\cdot)$ (resp.~$\varphi_n(\cdot)$) the probability density function (p.d.f.) of a standard Gaussian random variable (resp.~a Gaussian random vector $\mathcal{N}(0,I_n)$). 
For any positive integer $k$, we say a function $f: \ensuremath{\mathbb{R}}^k \to \ensuremath{\mathbb{R}}$ is $L$-Lipschitz continuous for some quantity $L>0$ if, for every $z_{1}$ and $z_{2}$, one has $|f(z_1) - f(z_2)| \leq L \cdot \ltwo{z_1 - z_2}$. When a function is applied to a vector, it should be understood as being applied in a component-wise manner; for instance, for any vector $x=[x_i]_{1\leq i\leq n}$, we let $|x|\coloneqq [|x_i|]_{1\leq i\leq n}$ and $x_+ \coloneqq [ \max\{x_i, 0\} ]_{1\leq i\leq n}$. For any two vectors $x,y\in \ensuremath{\mathbb{R}}^n$, we write $x \circ y $ for their Hadamard (entrywise) product, namely, $x \circ y = (x_1y_1,\ldots, x_ny_n)^{\top} \in \ensuremath{\mathbb{R}}^{n}.$ For two functions $f(n)$ and $g(n)$, we write $f(n)\lesssim g(n)$ to indicate that $f(n)\leq c_1 g(n)$ for some constant $c_1>0$ that does not depend on $n$, and similarly, $f(n)\gtrsim g(n)$ means that $f(n)\geq c_2 g(n)$ for some constant $c_2>0$ independent of $n$. We also adopt the notation $f(n)\asymp g(n)$ to indicate that both $f(n)\lesssim g(n)$ and $f(n)\gtrsim g(n)$ hold simultaneously. In addition, we write $f(n) \ll g(n)$ or $f(n)=o(g(n))$ if $f(n)/g(n)\to 0$ as $n\to \infty$ and $f(n) \gg g(n)$ if $g(n)/f(n)\to 0$. For any matrix $M$, we let $\|M\|$ and $\|M\|_{\mathrm{F}}$ denote the spectral norm and the Frobenius norm of $M$, respectively. For any integer $n>0$, we let $[n]\coloneqq \{1,\cdots, n\}$. Also, for any vector $x = [x_i]_{1\leq i\leq n} \in \ensuremath{\mathbb{R}}^n$, we denote by $|x|_{(i)}$ the $i$-th largest element within $\{|x_i|\}_{1\leq i\leq n}$. In addition, given two probability measures $\mu$ and $\nu$ on $\ensuremath{\mathbb{R}}^{n}$, the Wasserstein distance of order $p$ between them is defined and denoted by
\begin{align} \label{eqn:wasserstein-p} W_p(\mu, \nu) \coloneqq \bigg(\inf_{\gamma \in \mathcal{C}(\mu,\nu)}\int \|x - y\|_2^p \, \mathrm{d}\gamma (x,y)\bigg)^{1/p}, \end{align}
where $\mathcal{C}(\mu,\nu)$ is the set comprising all couplings of $\mu$ and $\nu$ (i.e., all joint distributions $\gamma(x,y)$ whose marginal distributions are $\mu$ and $\nu$, respectively). We let $\mathcal{S}^{d-1}=\{x\in \ensuremath{\mathbb{R}}^d \mid \|x\|_2=1\}$ represent the unit sphere in $\ensuremath{\mathbb{R}}^d$, and denote by $\mathbb{B}^d(r)=\{\theta \in \ensuremath{\mathbb{R}}^d \mid \|\theta\|_2\leq r\}$ the $d$-dimensional ball of radius $r$ centered at 0.
\section{Main analysis}
\label{sec:main-analysis}
We present the proofs of Theorems~\ref{thm:recursion} and \ref{thm:main} in this section and defer other technical details and lemmas to the appendices.
\subsection{Proof of Theorem~\ref{thm:recursion}}
\label{sec:pf-thm-recursion}
We carry out the main analysis for Theorem~\ref{thm:recursion} in the following four steps.
\paragraph{Step 1: constructing a key set of auxiliary sequences.} Let us first introduce a sequence of auxiliary vectors/matrices $\{z_k, W_k, \zeta_k\}_{1 \le k \le n}$ in a recursive manner as follows.
\begin{itemize}
\item[(i)] With the Wigner matrix $W$ and the initialization $x_{1}$ (pre-selected independent of $W$) in place, we define
\begin{subequations} \label{eqn:z-w-recursion} \begin{align} \label{eqn:z-w-init} z_1 \coloneqq \frac{\eta_1(x_1)}{\left\|\eta_1(x_1)\right\|_2} \in \ensuremath{\mathbb{R}}^n \qquad\text{and}\qquad W_1 \coloneqq W \in \ensuremath{\mathbb{R}}^{n\times n}, \end{align}
which are statistically independent of each other.
\item[(ii)] For any $2 \leq t \leq n$, concatenate the $z_{k}$'s into a matrix $U_{t-1} \coloneqq [z_k]_{1 \le k \leq t-1} \in \ensuremath{\mathbb{R}}^{n\times (t-1)}$ and set
\begin{align} z_t &\coloneqq \frac{\left(I_n - U_{t-1}U_{t-1}^{\top}\right)\eta_{t}(x_{t})}{\left\|\left(I_n - U_{t-1}U_{t-1}^{\top}\right)\eta_{t}(x_{t})\right\|_2}, \label{eqn:zt}\\ W_t &\coloneqq \left(I_n - z_{t-1}z_{t-1}^{\top}\right)W_{t-1}\left(I_n - z_{t-1}z_{t-1}^{\top}\right), \label{eqn:Wt} \end{align}
where $\{x_t\}$ is the sequence generated by the AMP updates~\eqref{eqn:AMP-updates}. \end{subequations}
\end{itemize}
In view of these definitions, we immediately single out the following basic fact.
\begin{lems} \label{lemma:zk-orthonormal} The set of vectors $\{z_k\}_{1\leq k\leq n}$ forms an orthonormal basis. \end{lems}
\begin{proof} First, it is clear that $U_1=z_1$ consists of orthonormal columns. Next, suppose that $U_{t-1}$ contains orthonormal columns for some $t$; then $I_n-U_{t-1}U_{t-1}^{\top}$ forms a projection matrix onto the subspace perpendicular to $U_{t-1}=[z_1,\cdots,z_{t-1}]$, and hence $\langle z_t, z_k\rangle = 0$ for all $1\leq k\leq t-1$ (cf.~\eqref{eqn:zt}). This implies that $U_t$ also consists of orthonormal columns. An induction argument thus concludes the proof. \end{proof}
As it turns out, the vectors $\{z_i\}_{1\leq i\leq t}$ assist in obtaining a useful decomposition of $\eta_t(x_t)$. By construction, for each $t$ we have $\eta_t(x_t) \in \mathsf{span}\big\{ z_t, U_{t-1} \big\} = \mathsf{span}\big\{ z_t, \cdots, z_{1} \big\}$. This together with Lemma~\ref{lemma:zk-orthonormal} allows us to decompose
\begin{align} \label{eqn:eta-decomposition} \eta_{t}(x_{t}) = \sum_{k = 1}^{t} \beta_{t}^kz_k, \qquad\text{with }\beta_{t}^k \coloneqq \big\langle\eta_{t}(x_{t}), z_k \big\rangle ~~~(1\leq k\leq t), \end{align}
which satisfies
\begin{align} \left\|\eta_{t}(x_{t})\right\|_2 = \left\|\beta_{t}\right\|_2 \qquad \text{with }~ \beta_{t} \coloneqq \big(\beta_t^1,\beta_t^2,\ldots,\beta_t^t \big)^{\top} \in \ensuremath{\mathbb{R}}^{t}. \end{align}
\paragraph{Step 2: deriving distributional properties of $W_kz_k$.} Next, we look at some useful distributional properties of $W_kz_k$. Towards this end, let us generate another set of auxiliary vectors
\begin{align} \label{eqn:zeta-k} \zeta_k \coloneqq \Big(\frac{\sqrt{2}}{2} - 1\Big) z_kz_k^{\top}W_kz_k + \sum_{i = 1}^{k - 1} g_i^kz_i, \qquad 1\leq k\leq n, \end{align}
where the $g_i^k$'s are independently drawn from $\mathcal{N}(0, \frac{1}{n})$. As it turns out, we can characterize the distribution of the superposition of $W_kz_k$ and $\zeta_k$, as stated in the following lemma.
\begin{lems} \label{lem:distribution} With $\{z_k, W_k, \zeta_k\}_{1 \le k \le n}$ defined as above, one has
\begin{align} \label{def:phi_k} \phi_k \coloneqq W_kz_k + \zeta_k \sim \mathcal{N}\left(0, \frac{1}{n}I_n\right),\qquad\text{for all }1 \le k \le n. \end{align}
Further, $\{\phi_k\}_{1\leq k\leq n}$ are statistically independent. \end{lems}
In words, after properly augmenting $W_kz_k$ with i.i.d.~Gaussians in the directions $\{z_i\}_{1\leq i < k}$ and adjusting its size along the direction $z_k$, we arrive at an i.i.d.~Gaussian vector. The proof is postponed to Section~\ref{sec:pf-distribution}.
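To make Steps 1 and 2 concrete, here is a minimal sketch (assuming NumPy; the function name and signature are ours, purely for illustration) of how the auxiliary sequences $\{z_k, W_k, \zeta_k\}$ and the resulting vectors $\phi_k = W_kz_k + \zeta_k$ can be formed from the Wigner matrix, the AMP iterates, and the denoising functions.
\begin{verbatim}
import numpy as np

def build_auxiliary_sequences(W, x_iters, etas, rng):
    """Form z_k, W_k, zeta_k and phi_k = W_k z_k + zeta_k, following Steps 1-2.

    W       : n x n Wigner matrix (independent of x_iters[0]);
    x_iters : list [x_1, ..., x_T] of AMP iterates;
    etas    : list of denoisers, etas[t-1] applied entrywise to x_iters[t-1].
    """
    n = W.shape[0]
    z_list, phi_list = [], []
    W_t = W.copy()                           # W_1 = W, cf. (eqn:z-w-init)
    for x_t, eta_t in zip(x_iters, etas):
        u = eta_t(x_t)
        if z_list:                           # project onto span{z_1,...,z_{t-1}}^perp, cf. (eqn:zt)
            U = np.column_stack(z_list)
            u = u - U @ (U.T @ u)
        z_t = u / np.linalg.norm(u)
        Wz = W_t @ z_t
        # zeta_t rescales the z_t-component of W_t z_t and adds fresh N(0, 1/n)
        # coefficients along the earlier directions z_1, ..., z_{t-1}, cf. (eqn:zeta-k)
        zeta_t = (np.sqrt(2.0) / 2.0 - 1.0) * (z_t @ Wz) * z_t
        for z_i in z_list:
            zeta_t = zeta_t + rng.normal(0.0, 1.0 / np.sqrt(n)) * z_i
        phi_list.append(Wz + zeta_t)         # phi_t ~ N(0, I_n / n) by Lemma (lem:distribution)
        P = np.eye(n) - np.outer(z_t, z_t)   # W_{t+1} = (I - z_t z_t^T) W_t (I - z_t z_t^T), cf. (eqn:Wt)
        W_t = P @ W_t @ P
        z_list.append(z_t)
    return z_list, phi_list
\end{verbatim}
In exact arithmetic, the vectors collected in \texttt{z\_list} are orthonormal (Lemma~\ref{lemma:zk-orthonormal}); in floating-point arithmetic one may wish to re-orthogonalize them for numerical stability.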
\paragraph{Step 3: establishing a key decomposition of $\{x_t\}$.} Equipped with the definitions above, we claim that the AMP updates satisfy the following decomposition:
\begin{align} x_t \coloneqq \alpha_t v^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k + \xi_{t-1}, \qquad\text{for }t \ge 2, \label{def:dynamics} \end{align}
where $\alpha_{t} = \lambda v^{\star\top} \eta_{t-1}(x_{t-1})$ and $\xi_{t-1}$ denotes some residual term obeying
\[ \xi_{t-1} \in U_{t-1}. \]
Here and below, we abuse the notation $U_{t-1}$ to denote the subspace spanned by the columns of $[z_1,\cdots,z_{t-1}]$.
\begin{proof}[Proof of decomposition \eqref{def:dynamics}] The proof proceeds in an inductive manner. First, recalling the update rule of AMP, the definition \eqref{eqn:z-w-init}, and the assumption $\eta_0(x_0)=0$ yields
\begin{align*} x_{2} & =(\lambda v^\star v^{\star\top}+W)\eta_{1}(x_{1})\\ & =\lambda v^{\star\top}\eta_{1}(x_{1})\cdot v^\star+W\eta_{1}(x_{1})=\lambda v^{\star\top}\eta_{1}(x_{1})\cdot v^\star+\ltwo{\eta_{1}(x_{1})}\cdot W_{1}z_{1}\\ & =\alpha_{2}v^\star+\beta_{1}^{1}W_{1}z_{1}=\alpha_{2}v^\star+\beta_{1}^{1}\phi_{1}+\underset{\eqqcolon\,\xi_{1}}{\underbrace{\left(-\beta_{1}^{1}\zeta_{1}\right)}}, \end{align*}
where the penultimate identity comes from the definition of $\alpha_t$ and $\beta_{t}^k$, and the last relation arises from \eqref{def:phi_k}. Clearly, $\xi_{1}\in U_1$ according to \eqref{eqn:zeta-k}. This establishes the claim \eqref{def:dynamics} for the base case with $t=2$. Next, suppose that the decomposition \eqref{def:dynamics} is valid for step $t$, and we aim to justify it for step $t+1$ as well. Towards this, let us begin by expressing $W_1$ as
\begin{align} W_1 = W_t + \sum_{k = 1}^{t-1}(W_k - W_{k+1}) = W_t + \sum_{k = 1}^{t-1} \left[W_kz_kz_k^{\top} + z_kz_k^{\top}W_k - z_kz_k^{\top}W_kz_kz_k^{\top}\right], \label{eq:W1-recursive-expand} \end{align}
which comes from the definition~\eqref{eqn:Wt}. Based on this decomposition and the relation \eqref{eqn:eta-decomposition}, we can express the AMP iteration as:
\begin{align} \notag x_{t+1} & =\alpha_{t+1}v^\star+W_{1}\eta_{t}(x_{t})-\langle\eta_{t}^{\prime}(x_{t})\rangle\eta_{t-1}(x_{t-1})=\alpha_{t+1}v^\star+W_{1}\eta_{t}(x_{t})-\langle\eta_{t}^{\prime}(x_{t})\rangle\sum_{k=1}^{t-1}\beta_{t-1}^{k}z_{k}\notag\\ & =\alpha_{t+1}v^\star+W_{t}\eta_{t}(x_{t})+\sum_{k=1}^{t-1}\left[W_{k}z_{k}z_{k}^{\top}+z_{k}z_{k}^{\top}W_{k}-z_{k}z_{k}^{\top}W_{k}z_{k}z_{k}^{\top}\right]\eta_{t}(x_{t})-\langle\eta_{t}^{\prime}(x_{t})\rangle\sum_{k=1}^{t-1}\beta_{t-1}^{k}z_{k}\notag\\ & =\alpha_{t+1}v^\star+W_{t}\eta_{t}(x_{t})+\sum_{k=1}^{t-1}\beta_{t}^{k}W_{k}z_{k}+\sum_{k=1}^{t-1}z_{k}\big\langle W_{k}z_{k},\eta_{t}(x_{t})\big\rangle-\sum_{k=1}^{t-1}z_{k}\big(\beta_{t}^{k}z_{k}^{\top}W_{k}z_{k}\big)-\langle\eta_{t}^{\prime}(x_{t})\rangle\sum_{k=1}^{t-1}\beta_{t-1}^{k}z_{k}\notag\\ & =\alpha_{t+1}v^\star+\sum_{k=1}^{t}\beta_{t}^{k}W_{k}z_{k}+\sum_{k=1}^{t-1}z_{k}\left[\langle W_{k}z_{k},\eta_{t}(x_{t})\rangle-\langle\eta_{t}^{\prime}(x_{t})\rangle\beta_{t-1}^{k}-\beta_{t}^{k}z_{k}^{\top}W_{k}z_{k}\right]\label{eqn:xt-by-Wkzk}\\ & =\alpha_{t+1}v^\star+\sum_{k=1}^{t}\beta_{t}^{k}\phi_{k}+ \underset{\eqqcolon\, \xi_t}{\underbrace{ \sum_{k=1}^{t-1}z_{k}\left[\langle W_{k}z_{k},\eta_{t}(x_{t})\rangle-\langle\eta_{t}^{\prime}(x_{t})\rangle\beta_{t-1}^{k}-\beta_{t}^{k}z_{k}^{\top}W_{k}z_{k}\right]-\sum_{k=1}^{t}\beta_{t}^{k}\zeta_{k}}}.
\label{eqn:xt-by-phik} \end{align} where the second line invokes \eqref{eq:W1-recursive-expand}, the fourth line makes use of the fact that \[ W_t\eta_{t}(x_{t})= W_t \big( I-U_{t-1}U_{t-1}^{\top} \big) \eta_{t}(x_{t}) = W_t (\beta_t^t z_t), \] and the last line in \eqref{eqn:xt-by-phik} follows from \eqref{def:phi_k}. By construction, $\zeta_k\in U_k$, and hence the expression of $\xi_t$ in \eqref{eqn:xt-by-phik} immediately reveals that $\xi_t\in U_t$. \end{proof} Before moving on, we further take a moment to derive an alternative expression of $\xi_t$. Let us first make the following observation arising from the definition \eqref{eqn:zeta-k} and the decomposition~\eqref{eqn:eta-decomposition}: \begin{align*} \sum_{k = 1}^{t} \beta_{t}^k\zeta_k = \sum_{k = 1}^{t} \beta_{t}^k\left[\bigg(\frac{\sqrt{2}}{2} - 1\bigg)z_kz_k^{\top}W_kz_k + \sum_{i = 1}^{k - 1} g_i^kz_i\right] = \sum_{k = 1}^{t} z_k\left[\beta_{t}^k\bigg(\frac{\sqrt{2}}{2} - 1\bigg)z_k^{\top}W_kz_k + \sum_{i = k+1}^{t} \beta_{t}^ig_k^i\right], \end{align*} where the last line holds since \begin{align*} \sum_{k=1}^{t}\beta_{t}^{k}\sum_{i=1}^{k-1}g_{i}^{k}z_{i} & =\sum_{i=1}^{t-1}z_{i}\sum_{k=i+1}^{t}\beta_{t}^{k}g_{i}^{k}=\sum_{k=1}^{t-1}z_{k}\sum_{i=k+1}^{t}\beta_{t}^{i}g_{k}^{i}=\sum_{k=1}^{t}z_{k}\sum_{i=k+1}^{t}\beta_{t}^{i}g_{k}^{i} . \end{align*} Additionally, apply the decomposition~\eqref{eqn:eta-decomposition} and the \eqref{eqn:zeta-k} once again to reach \begin{align*} \big\langle \zeta_k, \eta_{t}(x_t)\big\rangle = \left\langle\Big(\frac{\sqrt{2}}{2} - 1\Big) z_kz_k^{\top}W_kz_k + \sum_{i = 1}^{k - 1} g_i^kz_i, \sum_{k = 1}^{t} \beta_{t}^kz_k\right\rangle = \bigg(\frac{\sqrt{2}}{2} - 1\bigg)\beta_t^kz_k^{\top}W_kz_k + \sum_{i = 1}^{k - 1} \beta_t^ig_i^k \end{align*} for any $k\leq t$. Substituting the above two inequalities into \eqref{eqn:xt-by-phik}, we arrive at \begin{align} \xi_{t} & =\sum_{k=1}^{t-1}z_{k}\left[\langle W_{k}z_{k},\eta_{t}(x_{t})\rangle-\langle\eta_{t}^{\prime}(x_{t})\rangle\beta_{t-1}^{k}-\beta_{t}^{k}z_{k}^{\top}W_{k}z_{k}\right]-\sum_{k=1}^{t}\beta_{t}^{k}\zeta_{k}\nonumber\\ & =\sum_{k=1}^{t-1}z_{k}\left[\langle\phi_{k},\eta_{t}(x_{t})\rangle-\langle\zeta_{k},\eta_{t}(x_{t})\rangle-\langle\eta_{t}^{\prime}(x_{t})\rangle\beta_{t-1}^{k}-\beta_{t}^{k}z_{k}^{\top}W_{k}z_{k}\right]-\sum_{k=1}^{t}\beta_{t}^{k}\zeta_{k}\nonumber\\ & =\sum_{k=1}^{t-1}z_{k}\Bigg[\Big\langle\phi_{k},\eta_{t}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\xi_{t-1}\Big)\Big\rangle-\langle\eta_{t}^{\prime}(x_{t})\rangle\beta_{t-1}^{k} - \sum_{i=1}^{k-1}\beta_{t}^{i}g_{i}^{k}-\sum_{i=k+1}^{t}\beta_{t}^{i}g_{k}^{i} \notag\\ & \qquad\qquad-\big(\sqrt{2}-1\big)\beta_{t}^{k}z_{k}^{\top}W_{k}z_{k}\bigg] - \bigg(\frac{\sqrt{2}}{2} - 1\bigg)\beta_t^tz_tz_t^{\top}W_tz_t,\label{eq:xi-expression} \end{align} where the last line invokes the decomposition \eqref{def:dynamics}. \paragraph{Step 4: bounding the residual term $\ltwo{\xi_t}$.} Everything then boils down to controlling $\ltwo{\xi_t}$. Let us define a unit vector $\mu_t =[\mu_t^k]_{1\leq k\leq t} \in \mathbb{R}^{t}$ with coordinates \[ \mu_t^k \coloneqq \frac{\xi_t^{\top}z_k}{\|\xi_t\|_2}, \qquad 1\leq k\leq t. 
\] Given that $\{z_k\}_{k\leq t}$ forms an orthonormal basis and that $\xi_t\in U_t$, it follows that $\ltwo{\mu_t} = 1.$ A little algebra leads to \begin{align} \|\xi_{t}\|_{2} & =\sum_{k=1}^{t-1}\mu_t^{k}\Bigg[\Big\langle\phi_{k},\eta_{t}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\xi_{t-1}\Big)\Big\rangle-\langle\eta_{t}^{\prime}(x_{t})\rangle\beta_{t-1}^{k} - \sum_{i=1}^{k-1}\beta_{t}^{i}g_{i}^{k}-\sum_{i=k+1}^{t}\beta_{t}^{i}g_{k}^{i} \notag\\ & \qquad\qquad-\big(\sqrt{2}-1\big)\beta_{t}^{k}z_{k}^{\top}W_{k}z_{k}\bigg] - \bigg(\frac{\sqrt{2}}{2} - 1\bigg)\beta_t^t\mu_t^tz_t^{\top}W_tz_t \nonumber\\ \notag & =\bigg\langle\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k},\delta_{t}\bigg\rangle-\langle\delta_{t}^{\prime}\rangle\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k} - \bigg(\frac{\sqrt{2}}{2} - 1\bigg)\beta_t^t\mu_t^tz_t^{\top}W_tz_t\\ \notag & \qquad\qquad-\sum_{k=1}^{t-1}\mu_{t}^{k}\left[-\big\langle\phi_{k},\eta_{t}(v_{t})\big\rangle+\big\langle\eta_{t}^{\prime}(v_{t})\big\rangle\beta_{t-1}^{k}+(\sqrt{2}-1)\beta_{t}^{k}z_{k}^{\top}W_{k}z_{k}+\sum_{i=1}^{k-1}\beta_{t}^{i}g_{i}^{k}+\sum_{i=k+1}^{t}\beta_{t}^{i}g_{k}^{i}\right]\\ & =\Big\langle\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k},\delta_{t}\Big\rangle-\langle\delta_{t}^{\prime}\rangle\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}+\Delta_{t} - \bigg(\frac{\sqrt{2}}{2} - 1\bigg)\beta_t^t\mu_t^tz_t^{\top}W_tz_t \notag \\ & \qquad\qquad- \sum_{k=1}^{t-1}\mu_{t}^{k}\left[(\sqrt{2}-1)\beta_{t}^{k}z_{k}^{\top}W_{k}z_{k}+\sum_{i=1}^{k-1}\beta_{t}^{i}g_{i}^{k}+\sum_{i=k+1}^{t}\beta_{t}^{i}g_{k}^{i}\right],\label{eq:xi_bound} \end{align} where we remind the reader of the definitions in~\eqref{eqn:delta-chorus} as follows: \begin{align*} v_{t} & \coloneqq\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k},\\ \Delta_{t} & \coloneqq\sum_{k=1}^{t-1}\mu_{t}^{k}\Big[\big\langle\phi_{k},\eta_{t}(v_{t})\big\rangle-\big\langle\eta_{t}^{\prime}(v_{t})\big\rangle\beta_{t-1}^{k}\Big],\\ \delta_{t} & \coloneqq\eta_{t}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\xi_{t-1}\Big)-\eta_{t}(v_{t}),\\ \delta_{t}^{\prime} & \coloneqq\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\xi_{t-1}\Big)-\eta_{t}^{\prime}(v_{t}). \end{align*} To establish Theorem~\ref{thm:recursion}, it then suffices to control the last term on the right-hand side of \eqref{eq:xi_bound}. This is accomplished in the following lemma, whose proof is deferred to Section~\ref{sec:pf-concentration}. \begin{lems} \label{lem:concentration} With probability at least $1-O(n^{-11})$, for any $t\leq n$ we have \begin{align*} \bigg|\sum_{k = 1}^{t-1} \mu_t^k\Big((\sqrt{2}-1)\beta_{t}^kz_k^{\top}W_kz_k + \sum_{i = 1}^{k - 1} \beta_t^ig_i^k + \sum_{i = k+1}^{t} \beta_{t}^ig_k^i\Big)\bigg| &\lesssim \sqrt{\frac{t\log n}{n}} \|\beta_{t}\|_2. \end{align*} \end{lems} Taking this lemma collectively with inequality~\eqref{eq:xi_bound} and the trivial bound $|z_t^{\top}W_tz_t| \lesssim \sqrt{\frac{\log n}{n}}$ leads to \begin{align} \label{eqn:sonata} \|\xi_t\|_2 = \Big\langle \sum_{k = 1}^{t-1} \mu_t^k\phi_k, \delta_{t}\Big\rangle - \langle\delta_{t}^{\prime}\rangle \sum_{k = 1}^{t-1} \mu_t^k\beta_{t-1}^k + \Delta_t + O\bigg(\sqrt{\frac{t\log n}{n}}\|\beta_{t}\|_2 \bigg), \end{align} thus completing the proof of Theorem~\ref{thm:recursion}. \subsection{Proof of Theorem~\ref{thm:main}} \label{sec:pf-thm-main} Before embarking on the proof, we remind the reader of several results that have already proven for $\alpha_t$ and $\beta_t$. 
Recall that in the proof of Theorem~\ref{thm:recursion}, we decompose the AMP iterate $x_{t+1}$ as follows \begin{align*} x_{t+1} = \alpha_{t+1} v^\star + \sum_{k = 1}^{t} \beta_{t}^k\phi_k + \xi_{t}, \qquad 1\leq t\leq n, \end{align*} where $\xi_{t}\in U_{t}$ (some linear subspace of dimension $t$) represents some residual term, and \begin{subequations} \label{eq:alpha-beta-t-expansion-thm2} \begin{align} \alpha_{t+1} &= \lambda v^{\star\top}\eta_t(x_t) = \lambda v^{\star\top}\eta_t\Big(\alpha_t v^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k + \xi_{t-1}\Big), \label{eq:alpha-t-expansion-thm2}\\ % \|\beta_t\|_2 &= \|\eta_t(x_t)\|_2 = \Big\|\eta_t\Big(\alpha_t v^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k + \xi_{t-1} \Big)\Big\|_2. \label{eq:beta-t-expansion-thm2} \end{align} \end{subequations} We have also shown in Theorem~\ref{thm:recursion} that with probability at least $1-O(n^{-11})$, the residual term satisfies \begin{align} \notag \|\xi_{t}\|_2 = \Big\langle \sum_{k = 1}^{t-1} \mu^k_t \phi_k, \delta_{t}\Big\rangle - \langle\delta_{t}^{\prime}\rangle \sum_{k = 1}^{t-1} \mu^k_t \beta_{t-1}^k + \Delta_t + O\Big(\sqrt{\frac{t\log n}{n}}\|\beta_{t}\|_2 \Big)\\ % \label{eqn:xi-t-tmp} \leq \Big\langle \sum_{k = 1}^{t-1} \mu^k_t \phi_k, \delta_{t}\Big\rangle - \langle\delta_{t}^{\prime}\rangle \sum_{k = 1}^{t-1} \mu^k_t \beta_{t-1}^k + A_t + O\Big(\sqrt{\frac{t\log n}{n}}\|\beta_{t}\|_2 \Big), \end{align} where the last step invokes property~\eqref{defi:A} in Assumption~\ref{assump:A-H-eta} as well as the definition \eqref{defn:Delta-t} of $\Delta_t$. \paragraph{Step 1: bounding $\Delta_{\alpha,t} $ and $\Delta_{\beta,t} $ in terms of $\delta_t$ and $\delta_t^{\prime}$.} We begin by controlling the size of the term $\Delta_{\alpha,t} $. In view of its definition in \eqref{eqn:alpha-t-genearl}, we have \begin{align*} \Delta_{\alpha,t} &\coloneqq \frac{\alpha_{t+1}}{\lambda} - v^{\star\top}\int\eta_t\Big(\alpha_t v^\star + \frac{\|\beta_{t-1}\|_2}{\sqrt{n}}x\Big)\varphi_n(\mathrm{d} x), \\ % & = v^{\star\top}\delta_t + v^{\star\top}\eta_{t}\Big(\alpha_t v^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k\Big) - v^{\star\top}\int\eta_t\Big(\alpha_tv^\star + \frac{\|\beta_{t-1}\|_2}{\sqrt{n}}x\Big)\varphi_n(\mathrm{d} x), \end{align*} where the second line follows from \eqref{eq:alpha-t-expansion-thm2} and the definition \eqref{defn:delta-t} of $\delta_t$. As a direct consequence of the assumption~\eqref{defi:B}, we obtain \begin{align} \label{eqn:tmp-Delta-alpha} |\Delta_{\alpha,t}| &\leq \left|\inprod{v^\star}{\delta_t}\right| + B_t. \end{align} We then move on to the term $\Delta_{\beta,t}$. Recognizing that \[ \| \beta_t \|_2^2 = \|\eta_t(x_t)\|_2^2 = \Big\| \eta_{t}\Big(\alpha_tv^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k \Big) + \delta_t \Big\|_2^2, \] we can combine it with the definition \eqref{eqn:beta-t-genearl} to obtain \begin{align*} \Delta_{\beta,t} &\coloneqq \|\beta_t\|_2^2 - \int\Big\|\eta_t\Big(\alpha_tv^\star + \frac{\|\beta_{t-1}\|_2}{\sqrt{n}}x\Big)\Big\|_2^2\varphi_n(\mathrm{d} x)\\ % &= \Big\langle2\eta_{t}\Big(\alpha_tv^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k\Big), \delta_t\Big\rangle + \|\delta_t\|_2^2 + \Big\|\eta_{t}\Big(\alpha_tv^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k\Big)\Big\|_2^2 - \int\Big\|\eta_t\Big(\alpha_tv^\star + \frac{\|\beta_{t-1}\|_2}{\sqrt{n}}x\Big)\Big\|_2^2\varphi_n(\mathrm{d} x). 
\end{align*} By virtue of the assumption~\eqref{defi:C}, we obtain \begin{align} \label{eqn:tmp-Delta-beta} |\Delta_{\beta,t}| &\leq \Big|\Big\langle2\eta_{t}\Big(\alpha_tv^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k\Big), \delta_t\Big\rangle\Big| + \|\delta_t\|_2^2 + C_t. \end{align} \paragraph{Step 2: bounding $\delta_t$ and $\delta_t^{\prime}$.} To further control the right-hand side of \eqref{eqn:tmp-Delta-alpha} and \eqref{eqn:tmp-Delta-beta}, we proceed by bounding terms associated with $\delta_{t}$. Given that $\eta_t(\cdot)$ is assumed to be continuous, one can derive \begin{align} \notag \delta_{t} & =\eta_{t}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\xi_{t-1}\Big)-\eta_{t}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\\ \notag & ={\displaystyle \int}_{0}^{1}\bigg\{\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\tau\xi_{t-1}\Big)\circ\xi_{t-1}\bigg\}\mathrm{d}\tau\\ & =\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}+{\displaystyle \int}_{0}^{1}\bigg\{\bigg[\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\tau\xi_{t-1}\Big)-\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\bigg]\circ\xi_{t-1}\bigg\}\mathrm{d}\tau, \label{eqn:shostakovich-delta-t-123} \end{align} where the second line invokes the fundamental theorem of calculus. Note that $\eta_t^{\prime}(\cdot)$ has a finite number of discontinuous points. Recalling that $|\eta_t^{\prime}(w)|\leq \rho$ and $|\eta_t^{\prime\prime}(w)|\leq \rho_1$ for any continuous point $w\in \ensuremath{\mathbb{R}}$ (see Assumption~\ref{assump:eta}), we have \begin{align} &\left|\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\tau\xi_{t-1}\Big)-\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\right| \notag\\ &\qquad \leq\left|{\displaystyle \int}_{0}^{1}\bigg\{\eta_{t}^{\prime\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\tau_{1}\tau\xi_{t-1}\Big)\circ\big(\tau\xi_{t-1}\big)\bigg\}\mathrm{d}\tau_{1}\right|+ 2\rho\Gamma \notag\\ &\qquad \leq\rho_{1}\big|\xi_{t-1}\big|+ 2\rho\Gamma, \label{eq:diff-eta-t-prime-UB135} \end{align} where $\Gamma= [\Gamma_j]_{1\leq j\leq n} \in \ensuremath{\mathbb{R}}^n$ is a term reflecting the influence of discontinuous points. More precisely, $\Gamma_j$ denotes the number of discontinuities of $\eta_t^{\prime}(\cdot)$ encountered between $\big[\alpha_{t}v^\star_j+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}, \alpha_{t}v^\star_j+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j}+\xi_{t-1,j}\big]$. Note that if a point $m$ is contained in an interval $[a,b]$, then one must have $a+\tau(b-a)=m$ for some $\tau\in[0,1]$, which requires that $|b-a|\geq|\tau(b-a)|=|a-m|$. This basic fact allows us to take \begin{align} \Gamma_j = \sum_{m\in\mathcal{M}_{\mathsf{dc}}}\ind\bigg\{ \big|\xi_{t-1,j}\big|\geq\Big|\alpha_{t}v_{j}^{\star}+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k,j} -m\Big|\bigg\} \eqqcolon \sum_{m\in\mathcal{M}_{\mathsf{dc}}} \Gamma_{j}(m). \label{eq:Gamma-ub-discontinuous} \end{align} Substitution into \eqref{eqn:shostakovich-delta-t-123} yields \begin{align} \Big|\delta_{t} & -\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}\Big| \leq \rho_{1}\big|\xi_{t-1}\big|^{2}+ 2\rho\Gamma \circ \big|\xi_{t-1}\big| . 
\label{eqn:shostakovich-delta-t} \end{align} Similarly, we can repeat the same argument (particularly \eqref{eq:diff-eta-t-prime-UB135} and \eqref{eqn:shostakovich-delta-t}) to bound $\delta_{t}^{\prime}$ as follows: \begin{align} & \bigg|\delta_{t}^{\prime}-\eta_{t}^{\prime\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}\bigg| \notag\\ & =\bigg|\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\xi_{t-1}\Big)-\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)-\eta_{t}^{\prime\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}\bigg| \notag\\ & \leq\bigg|{\displaystyle \int}_{0}^{1}\bigg\{\eta_{t}^{\prime\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}+\tau\xi_{t-1}\Big)\circ\xi_{t-1}\bigg\}\mathrm{d}\tau-\eta_{t}^{\prime\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}\bigg|+2\rho\Gamma \notag\\ & \leq\rho_{2}\big|\xi_{t-1}\big|^{2}+ 2\rho\Gamma +2\rho_{1}\Gamma \circ \big|\xi_{t-1}\big|. \label{eqn:shostakovich-delta-t-prime} \end{align} With the above bounds on $\delta_t$ and $\delta_t^{\prime}$ in place, we are ready to establish the advertised results \eqref{eqn:delta-alpha-general}, \eqref{eqn:delta-beta-general} and \eqref{eqn:xi-t-general}, which we will look at one by one in the sequel. \paragraph{Step 3: establishing inequality~\eqref{eqn:xi-t-general}.} With these relations in place, let us start with controlling quantity $\|\xi_{t}\|_2$. In view of expression~\eqref{eqn:xi-t-tmp}, it requires us to bound $\langle \mu_t^k\phi_k, \delta_{t}\rangle - \langle\delta_{t}^{\prime}\rangle \sum_{k = 1}^{t-1} \mu_t^k\beta_{t-1}^k$. Taking the bounds~\eqref{eqn:shostakovich-delta-t} and \eqref{eqn:shostakovich-delta-t-prime} collectively with \eqref{eqn:xi-t-tmp}, and recalling the definition \eqref{defn:v-t-thm1} of $v_t$, we arrive at \begin{align} \|\xi_{t}\|_{2} & \leq\Big\langle\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k},\delta_{t}\Big\rangle-\langle\delta_{t}^{\prime}\rangle\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}+A_{t}+O\Big(\sqrt{\frac{t\log n}{n}}\|\beta_{t}\|_{2}\Big)\notag\\ & =\bigg\langle\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k},\,\eta_{t}^{\prime}(v_{t})\circ\xi_{t-1}\bigg\rangle-\bigg\langle\eta_{t}^{\prime\prime}(v_{t})\circ\xi_{t-1}\bigg\rangle\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\notag\\ & \qquad+\rho_{1}\bigg\langle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|,\big|\xi_{t-1}\big|^{2}\bigg\rangle+\rho_{2}\Big\langle\big|\xi_{t-1}\big|^{2}\Big\rangle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\bigg|\notag\\ & \qquad+2\rho\bigg\langle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|,\,\Gamma\circ \big|\xi_{t-1}\big|\bigg\rangle+\Big\{2\rho\langle \Gamma \rangle + 2\rho_{1}\big\langle\Gamma\circ \big|\xi_{t-1}\big|\big\rangle\Big\}\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\bigg| \notag\\ & \qquad +A_{t}+O\Big(\sqrt{\frac{t\log n}{n}}\|\beta_{t}\|_{2}\Big). \label{eq:xi-t-UB-13579} \end{align} This leaves us with several terms to control, which is the content of the lemma below; the proof is deferred to Section~\ref{sec:pf-lem-recursion}. \begin{lems} \label{lem:recursion} Consider any $t\leq n$. 
Given $\kappa_t$ defined in~\eqref{defi:kappa}, it holds that \begin{subequations} \begin{align} \bigg\langle\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k},\eta_{t}^{\prime}(v_{t})\circ\xi_{t-1}\bigg\rangle-\Big\langle\eta_{t}^{\prime\prime}(v_{t})\circ\xi_{t-1}\Big\rangle\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k} &\le\sqrt{\kappa_{t}^{2}+D_{t}}\,\|\xi_{t-1}\|_{2} \label{eq:lem-recursion-smooth-part-1} \\ \rho_{1}\bigg\langle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|,\big|\xi_{t-1}\big|^{2}\Big\rangle+\rho_{2}\Big\langle\big|\xi_{t-1}\big|^{2}\Big\rangle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\bigg| & \lesssim\left( \rho_{1} \frac{\sqrt{t}+\sqrt{\log n}}{\sqrt{n}} +\frac{\rho_{2}\|\beta_{t-1}\|_{2}}{n}\right)\|\xi_{t-1}\|_{2}^{2} \label{eq:lem-recursion-smooth-part-2} \end{align} hold with probability at least $1-O(n^{-11})$. In addition, one has \begin{align} & 2\rho\bigg\langle\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|,\,\Gamma\circ\big|\xi_{t-1}\big|\bigg\rangle+\Big\{2\rho\langle\Gamma\rangle+2\rho_{1}\big\langle\Gamma\circ\big|\xi_{t-1}\big|\big\rangle\Big\}\bigg|\sum_{k=1}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\bigg| \notag\\ & \qquad\lesssim\rho\sqrt{\frac{(E_{t}+t)\log n}{n}}\big\|\xi_{t-1}\big\|_{2}+\frac{(\rho+\rho_{1}\big\|\xi_{t-1}\big\|_{\infty})E_{t}\big\|\beta_{t-1}\big\|_{2}}{n}. \label{eqn:new-version} \end{align} \end{subequations} \end{lems} Combining Lemma~\ref{lem:recursion} with \eqref{eq:xi-t-UB-13579} immediately completes the proof of inequality~\eqref{eqn:xi-t-general}. \paragraph{Step 4: establishing inequalities \eqref{eqn:delta-alpha-general} and \eqref{eqn:delta-beta-general}.} Finally, we return to establish the advertised bounds on $|\Delta_{\alpha,t}|$ and $|\Delta_{\beta,t}|$. Towards this, we are in need of the following lemma, whose proof is provided in Section~\ref{sec:pf-lem-recursion2}. \begin{lems} \label{lem:recursion2} The following inequalities hold true: \begin{subequations} \begin{align} \label{eqn:cello} |\langle v^\star, \delta_t\rangle| &\lesssim \rho\|\xi_{t-1}\|_2 + \rho_1\|v^\star\|_{\infty}\|\xi_{t-1}\|_2^2 + \rho\bigg(\sum_{i=1}^{E_{t}}|v^\star|_{(i)}^{2}\bigg)^{1/2}\|\xi_{t-1}\|_{2}, \\ \label{eqn:viola} \Big|\Big\langle\eta_{t}\big(\alpha_tv^\star + \sum_{k = 1}^{t-1} \beta_{t-1}^k\phi_k\big), \delta_t \Big\rangle\Big| &\lesssim F_t\|\xi_{t-1}\|_2 + \rho_1G_t\|\xi_{t-1}\|_2^2 + \rho \sqrt{E_t}G_t \|\xi_{t-1}\|_2, \\ \label{eqn:violin} \|\delta_t\|_2^2 &\lesssim \rho^2\|\xi_{t-1}\|_2^2 + \rho_1^2\|\xi_{t-1}\|_2^4 + \rho^2E_t\|\xi_{t-1}\|_2^2. \end{align} \end{subequations} \end{lems} Substituting the results in Lemma~\ref{lem:recursion2} into inequalities~\eqref{eqn:tmp-Delta-alpha} and \eqref{eqn:tmp-Delta-beta} immediately establishes \eqref{eqn:delta-alpha-general} and \eqref{eqn:delta-beta-general}. We have thus completed the proof of Theorem~\ref{thm:main}. \subsection{Establishing the induction hypotheses for the next iteration} \label{sec:xi-z2} In this subsection, we move on to establish the induction hypotheses \eqref{Z2-induction} for the $(t+1)$-th iteration, in addition to controlling several intermediate quantities. For this purpose, Theorem~\ref{thm:main} offers a general recipe to control the residual terms $\ltwo{\xi_{t}}$ and $|\Delta_{\alpha,t}|$ by means of the key quantities $A_t,B_t,D_t$ that have been analyzed in Section~\ref{sec:z2-key}.
Direct application of Theorem~\ref{thm:main} or Corollary~\ref{cor:recursion-spectral} already leads to non-asymptotic performance bounds. It turns out that for the problem of $\mathbb{Z}_2$ synchronization, we might be able to obtain tighter error bounds (i.e., $\sqrt{t/n}$ vs.~$\sqrt{t^2/n}$) if we slightly refine the analysis of Theorem~\ref{thm:main} by exploiting the problem-specific structure, which we shall detail as follows. \subsubsection{Induction step for bounding $\ltwo{\xi_{t}}$} \label{sec:z2-xi-t} In this subsection, we aim to establish the induction hypothesis \eqref{eqn:Z2-induction-st} for the next iteration (namely, showing that $\|\xi_t\|_2\leq S_{t+1}$). In view of Theorem~\ref{thm:recursion-spectral} and \eqref{defi:A}, the residual term $\xi_{t}$ obeys \begin{align} \label{eqn:xi-z2-again} \|\xi_{t}\|_2 \leq \Big\langle \sum_{k = -2s}^{t-1} \mu_t^{k}\phi_k, \delta_{t}\Big\rangle - \langle\delta_{t}^{\prime}\rangle \sum_{k = -2s}^{t-1} \mu_t^{k}\beta_{t-1}^k + A_t + O\Big(\sqrt{\frac{(t+s)\log n}{n}} \Big), \end{align} where $\delta_{t}$ and $\delta_{t}^{\prime}$ are defined as \begin{align*} \delta_{t} &\coloneqq \eta_{t}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k + \xi_{t-1}\Big) - \eta_{t}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big), \\ \delta_{t}^{\prime} &\coloneqq \eta_{t}^{\prime}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k + \xi_{t-1}\Big) - \eta_{t}^{\prime}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big). \end{align*} We have already bounded $A_t$ in Section~\ref{sec:control-A-Z2}. As a result, it comes down to bounding $\delta_{t}$ and $\delta_{t}^{\prime}$. As alluded to previously, we can obtain slightly tighter bounds than directly invoking Theorem~\ref{thm:main} or Corollary~\ref{cor:recursion-spectral}, by improving the proof of Theorem~\ref{thm:main} a little with the aid of the special structure of $\mathbb{Z}_2$ synchronization. Specifically, recall from \eqref{eqn:chocolate} that $$ |\eta_t(w)| \lesssim \frac{1}{\alpha_t\sqrt{n}} \qquad \text{and} \qquad |\eta_t^{\prime}(w)| \lesssim 1 \qquad \text{for any }w\in \ensuremath{\mathbb{R}}. $$ These two basic bounds allow us to strengthen the bounds \eqref{eqn:shostakovich-delta-t} and \eqref{eqn:shostakovich-delta-t-prime} in the proof of Theorem~\ref{thm:main} as follows: \begin{subequations} \label{eqn:delta-z2} \begin{align} \Big|\delta_{t}-\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}\Big| & \leq\rho_{1}\big|\xi_{t-1}\big|^{2},\\ \|\delta_{t}\|_{\infty} & \lesssim\frac{1}{\alpha_{t}\sqrt{n}},\\ \bigg|\delta_{t}^{\prime}-\eta_{t}^{\prime\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}\bigg| & \leq\rho_{2}\big|\xi_{t-1}\big|^{2},\\ \|\delta_{t}^{\prime}\|_{\infty} & \lesssim1, \end{align} \end{subequations} where we recall that $\Gamma = 0$ in $\mathbb{Z}_2$ synchronization (as there is no point of discontinuity in $\tanh(\cdot)$). To help further bound~\eqref{eqn:delta-z2}, we make note of some preliminary facts below. Let us introduce the following index set: \begin{align*} \mathcal{I} \coloneqq \left\{i : \bigg|\sum_{k = -2s}^{t-1} \mu_t^k\phi_{k, i}\bigg| > C_7 \sqrt{\frac{\log n}{n}}\right\}, \end{align*} where $C_7>0$ is a large enough constant employed in \eqref{eqn:muphi-rank}. By virtue of \eqref{eqn:muphi-rank}, one has \begin{align*} |\mathcal{I}|\leq t+s.
\end{align*} For notational simplicity, we overload the notation by introducing two vectors: \[ \ind_{\mathcal{I}} \coloneqq \big[ \ind_{\mathcal{I}}(i) \big]_{1\leq i\leq n} \in \ensuremath{\mathbb{R}}^n \qquad \text{and} \qquad \ind_{\mathcal{I}^{\mathrm{c}}} \coloneqq \big[ \ind_{\mathcal{I}^{\mathrm{c}}}(i) \big]_{1\leq i\leq n} \in \ensuremath{\mathbb{R}}^n. \] In addition, let us define $$ \widehat{\xi}_{t-1} \coloneqq \xi_{t-1} \circ \ind_{\mathcal{I}^{\mathrm{c}}}. $$ Based on this set of notation, we can readily derive from \eqref{eqn:delta-z2} that \begin{subequations} \label{eqn:z2-delta-t} \begin{align} \Big|\delta_{t}-\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\widehat{\xi}_{t-1}\Big| & \lesssim\rho_{1}\big|\widehat{\xi}_{t-1}\big|^{2}+\frac{1}{\alpha_{t}\sqrt{n}}\ind_{\mathcal{I}},\\ \bigg|\delta_{t}^{\prime}-\eta_{t}^{\prime\prime}\Big(\alpha_{t}v^\star+\sum_{k=1}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}\bigg| & \leq\rho_{2}\big|\xi_{t-1}\big|^{2}. \end{align} \end{subequations} We aim to control the right-hand side of expression~\eqref{eqn:xi-z2-again}, which boils down to bounding $\Big\langle \sum_{k = -2s}^{t-1} \mu_t^k\phi_k, \delta_{t}\Big\rangle - \langle\delta_{t}^{\prime}\rangle \sum_{k = -2s}^{t-1} \mu_t^k\beta_{t-1}^k$. Substitution of \eqref{eqn:z2-delta-t} into \eqref{eqn:xi-z2-again} leads to \begin{align} \notag\|\xi_{t}\|_{2} & \leq\bigg\langle\bigg|\sum_{k=-2s}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|,\rho_{1}\widehat{\xi}_{t-1}^{2}+\frac{1}{\alpha_{t}\sqrt{n}}\ind_{\mathcal{I}}\bigg\rangle+\bigg\langle\sum_{k=-2s}^{t-1}\mu_{t}^{k}\phi_{k},\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\sum_{k=-2s}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\widehat{\xi}_{t-1}\bigg\rangle\\ & -\bigg\langle\eta_{t}^{\prime\prime}\Big(\alpha_{t}v^\star+\sum_{k=-2s}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big)\circ\xi_{t-1}\bigg\rangle\sum_{k=-2s}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}+\left\langle\rho_{2}\xi_{t-1}^{2}\right\rangle\bigg|\sum_{k=-2s}^{t-1}\mu_{t}^{k}\beta_{t-1}^{k}\bigg|+A_{t}+O\Big(\sqrt{\frac{(t+s)\log n}{n}}\Big). \label{eqn:papageno} \end{align} Next, we shall control each term in \eqref{eqn:papageno} separately. \begin{itemize} \item We begin with the first term in \eqref{eqn:papageno}. Recalling the definition of set $\mathcal{I}$ and using $\rho_1\lesssim \sqrt{n}$, we have \begin{align*} \Bigg\langle\bigg|\sum_{k=-2s}^{t-1}\mu_{t}^{k}\phi_{k}\bigg|,\rho_{1}\widehat{\xi}_{t-1}^{2}+\frac{1}{\alpha_{t}\sqrt{n}}\ind_{\mathcal{I}}\Bigg\rangle & \lesssim\frac{1}{\alpha_{t}\sqrt{n}}\sum_{i\in\mathcal{I}}\Big|\sum_{k=-2s}^{t-1}\mu_{t}^{k}\phi_{k,i}\Big|+ \rho_1 \sum_{i\notin\mathcal{I}}\Big|\sum_{k=-2s}^{t-1}\mu_{t}^{k}\phi_{k,i}\xi_{t-1,i}^{2}\Big|\\ & \leq\frac{1}{\alpha_{t}\sqrt{n}}\sqrt{\big|\mathcal{I}\big|\sum_{i\in\mathcal{I}}\Big|\sum_{k=-2s}^{t-1}\mu_{t}^{k}\phi_{k,i}\Big|^{2}}+(\sqrt{n})C_{7}\sqrt{\frac{\log n}{n}}\sum_{i\notin\mathcal{I}}\xi_{t-1,i}^{2}\\ & \lesssim\frac{1}{\alpha_{t}\sqrt{n}}\sqrt{\big|\mathcal{I}\big|\sum_{i\in\mathcal{I}}\Big|\sum_{k=-2s}^{t-1}\mu_{t}^{k}\phi_{k,i}\Big|^{2}}+ \sqrt{\log n} \,\|\xi_{t-1}\|_{2}^{2}, \end{align*} where the second line follows from Cauchy-Schwarz and the definition of $\mathcal{I}$. 
Recalling that $\{\phi_k\}$ fall within the set $\mathcal{E}$ (cf.~\eqref{eq:eps-set}) with high probability and using $|\mathcal{I}|\lesssim t+s$, we can further derive \begin{align} \Bigg\langle \bigg| \sum_{k = -2s}^{t-1} \mu_t^k\phi_k \bigg|, \rho_1\widehat{\xi}_{t-1}^2 + \frac{1}{\alpha_t\sqrt{n}} \ind_{\mathcal{I}}\Bigg\rangle &\lesssim \frac{(t+s)\sqrt{\log n}}{\alpha_tn} + \sqrt{\log n} \,\|\xi_{t-1}\|_2^2 \label{eqn:z2-part-1} \end{align} with probability at least $1 - O(n^{-11})$. \item Regarding the fourth term in \eqref{eqn:papageno}, one can use $|\mu_t^{\top}\beta_{t-1}|\leq \|\mu_t\|_2\|\beta_{t-1}\|_2=1$ and $\rho_2\lesssim n $ to get % \begin{align} \left\langle\rho_2\xi_{t-1}^2\right \rangle \bigg| \sum_{k = -2s}^{t-1} \mu_t^k\beta_{t-1}^k \bigg| \lesssim \|\xi_{t-1}\|_2^2. \label{eqn:z2-part-2} \end{align} \item With regards to the second and third term in \eqref{eqn:papageno}, direct calculations yield % \begin{align} \notag &\bigg| \Big\langle \sum_{k = -2s}^{t-1} \mu_t^k\phi_k, \eta_{t}^{\prime}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big) \circ \widehat{\xi}_{t-1}\Big\rangle - \Big\langle\eta_{t}^{\prime\prime}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big) \circ \xi_{t-1}\Big\rangle \sum_{k = -2s}^{t-1} \mu_t^k\beta_{t-1}^k \bigg| \\ \notag &= \bigg| \Big\langle \sum_{k = -2s}^{t-1} \mu_t^k\phi_k \circ \eta_{t}^{\prime}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big) \circ \ind_{\mathcal{I}^{\mathrm{c}}} - \frac{1}{n}\sum_{k = -2s}^{t-1} \mu_t^k\beta_{t-1}^k\eta_{t}^{\prime\prime}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big), \xi_{t-1}\Big\rangle \bigg| \\ \notag &\le \Big\| \sum_{k = -2s}^{t-1} \mu_t^k\phi_k \circ \eta_{t}^{\prime}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big) - \frac{1}{n}\sum_{k = -2s}^{t-1} \mu_t^k\beta_{t-1}^k\eta_{t}^{\prime\prime}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big)\Big\|_2 \|\xi_{t-1}\|_2 \\ \notag &\qquad+ \Big\| \sum_{k = -2s}^{t-1} \mu_t^k\phi_k \circ \eta_{t}^{\prime}\Big(\alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big) \circ \ind_{\mathcal{I}}\Big\|_2 \|\xi_{t-1}\|_2 \\ &\leq \sqrt{\kappa_t^2 + D_t}\,\|\xi_{t-1}\|_2 + O\bigg(\sqrt{\frac{(t+s)\log n}{n}}\bigg)\|\xi_{t-1}\|_2 , \label{eqn:z2-part-2} \end{align} where we invoke the assumption~\eqref{defi:D}, the inequality \eqref{eqn:spect-brahms} and the bound~\eqref{eqn:chocolate}. 
\end{itemize} Substituting the above bounds into inequality~\eqref{eqn:papageno} gives \begin{align} \|\xi_{t}\|_2 &\le \Bigg(\sqrt{\kappa_t^2 + D_t} + O\bigg(\sqrt{\frac{(t+s)\log n}{n}}\bigg)\Bigg)\|\xi_{t-1}\|_2 + O\Bigg(\sqrt{\frac{(t+s)\log n}{n}} + A_t + \frac{(t+s)\sqrt{\log n}}{\alpha_tn} + \sqrt{\log n}\,\|\xi_{t-1}\|_2^2\Bigg) \notag\\ &\le \left(\sqrt{\kappa_t^2 + O\bigg(\sqrt{\frac{(t+s)\log^2 n}{n}} \bigg)} + O\left(\sqrt{\frac{(t+s)\log n}{n}} + \sqrt{\log n} \, S_t\right)\right)\|\xi_{t-1}\|_2 + O\left(\sqrt{\frac{(t+s)\log n}{(\lambda- 1)n}}\right) \notag\\ &\le \left( 1- \frac{1}{40}(\lambda-1) \right)\|\xi_{t-1}\|_2 + O\left(\sqrt{\frac{(t+s)\log n}{(\lambda - 1) n}}\right) ; \label{eq:xit-UB-recurrence-Z2} \end{align} here, the penultimate step follows from the inequalities \eqref{eqn:Z2-At}, \eqref{eqn:Z2-Dt} and induction hypothesis \eqref{Z2-induction} for $\|\xi_{t-1}\|_2$, while the last line makes use of the induction hypothesis and is valid if $\sqrt{\frac{(t+s)\log^2 n}{n}} \ll \lambda-1$ and if \begin{equation} \kappa_t \leq 1 - \frac{1}{15}(\lambda - 1). \label{eq:kappat-UB-claim-Z2} \end{equation} The proof of this inequality \eqref{eq:kappat-UB-claim-Z2} is postponed to Section~\ref{sec:bound-kappat-Z2}. Invoking the induction the hypothesis (\ref{eqn:Z2-induction-st}) for $\|\xi_{t-1}\|_2$ in the above inequality \eqref{eq:xit-UB-recurrence-Z2}, we arrive at \begin{align*} \|\xi_{t}\|_{2} & \le\left(1-\frac{1}{40}(\lambda-1)\right)\|\xi_{t-1}\|_{2}+C_{3}\sqrt{\frac{(t+s)\log n}{(\lambda-1)n}}\\ & \leq\left(1-\frac{1}{40}(\lambda-1)\right)\left\{ C_{1}\sqrt{\frac{(t+s)\log n}{(\lambda-1)^{3}n}}+C_{1}\left(1-\frac{1}{40}(\lambda-1)\right)^{t-1}\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{9}n}}\right\} +C_{3}\sqrt{\frac{(t+s)\log n}{(\lambda-1)n}}\\ & =C_{1}\left(1-\frac{1}{15}(\lambda-1)\right)^{t}\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{9}n}}+\left\{ C_{1}\left(1-\frac{1}{40}(\lambda-1)\right)\sqrt{\frac{(t+s)\log n}{(\lambda-1)^{3}n}}+C_{3}(\lambda-1)\sqrt{\frac{(t+s)\log n}{(\lambda-1)^{3}n}}\right\} \\ & \leq C_{1}\left(1-\frac{1}{15}(\lambda-1)\right)^{t}\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{9}n}}+C_{1}\sqrt{\frac{(t+s)\log n}{(\lambda-1)^{3}n}}, \end{align*} provided that the ratio $C_{1}/C_{3}$ is sufficiently large. This validates the induction hypothesis (\ref{eqn:Z2-induction-st}) for $\|\xi_{t}\|_2$, thereby completing the induction step for $\|\xi_{t}\|_2.$ \subsubsection{Bounding the residual term $\Delta_{\alpha,t}$} \label{sec:z2-delta-alpha} Note that the denoising function in $\mathbb{Z}_2$ synchronization is smooth everywhere, and hence $E_t = 0$ (see \eqref{defi:E}). The bound \eqref{eqn:delta-alpha-general} then yields \begin{align*} |\Delta_{\alpha,t}| &\lesssim B_t + \left(\rho + \rho_1\|v^\star\|_{\infty}\|\xi_{t-1}\|_2 \right)\cdot \|\xi_{t-1}\|_2 \\ &\lesssim B_t + \ltwo{\xi_{t-1}} + \|\xi_{t-1}\|_2^2, \end{align*} where the last line follows by relation~\eqref{eqn:chocolate}. Recall that our induction hypothesis says $\|\xi_{t-1}\|_2 \leq S_t$, and that we have bounded $B_{t}$ in \eqref{eqn:Z2-Bt}. These taken together imply that \begin{align} \label{eqn:beethoven-sonata} |\Delta_{\alpha,t}| &\lesssim \|\xi_{t-1}\|_2 + \sqrt{\frac{(t+s)\log n}{n}} \lesssim S_t, \end{align} where the last inequality comes from \eqref{eqn:Z2-induction-st}. 
\subsubsection{Bounding $\alpha_{t}$ and understanding state evolution} \label{sec:main-recursion-z2} Next, we turn to the induction step for establishing \eqref{eq:Z2-induction-alphat} and \eqref{eqn:z2-delta-alpha-final} regarding $\alpha_{t}$. More precisely, under the induction hypothesis~\eqref{Z2-induction} for the $t$-th iteration, we would like to show that \eqref{eq:Z2-induction-alphat} and \eqref{eqn:z2-delta-alpha-final} hold for the $(t+1)$-th iteration w.r.t.~$\alpha_{t+1}$. \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{numerics.pdf} \caption{ Numerical illustrations for quantities regarding $\kappa_t$ (the left panel) and $\alpha_t$ (the middle and the right panel). Left panel: quantity associated with \eqref{eqn:kappa-ze-left} as a function of $\lambda$ within the range $(1,1.2]$; middle panel: the quantity \eqref{eqn:middle-figure} as a function of $\lambda$ within the range $(1,1.2]$; right panel: the derivative of \eqref{eq:g-lambda-tau-defi} as a function of $\tau$ within the range of $[0,1.44]$. } \label{fig:numerics} \end{figure} From the definition of $\alpha_{t+1}$ (see \eqref{eq:alpha-beta-recursion-spect}), we have \begin{align} \alpha_{t+1}^2 &= \lambda^2 \langle v^\star, \eta_t(x_t)\rangle^2 = \frac{\lambda^2 \langle v^\star, \tanh(\pi_tx_t)\rangle^2}{\|\tanh(\pi_tx_t)\|_2^2}. \label{eq:alphat-defn-ind} \end{align} To understand the dynamics of $\alpha_{t}$, let us look at the state evolution recursion --- namely, a sequence of scalars $\{\tau_t\}$ defined recursively as follows: \begin{subequations} \label{eq:SE} \begin{align} \tau_{1} &= \lambda^{2}-1 \\ \tau_{t+1} &\coloneqq \frac{\lambda^2 \left[\int \tanh(\tau_t + \sqrt{\tau_t}x)\varphi(\mathrm{d} x)\right]^2}{\int \tanh^2(\tau_t + \sqrt{\tau_t}x)\varphi(\mathrm{d} x)} = \lambda^2 \int \tanh(\tau_t + \sqrt{\tau_t}x)\varphi(\mathrm{d} x). \label{eq:SE-t-t} \end{align} \end{subequations} Here, the last line comes from \citet[Appendix B.2]{deshpande2017asymptotic}. As it turns out, this scalar sequence \eqref{eq:SE} converges monotonically to a fixed point $\tau^{\star}$ of the recursion \eqref{eq:SE}, namely, \begin{align} \tau_t \nearrow \tau^{\star}, \qquad \tau_t,\tau^{\star} \in (\lambda^2 - 1 , \lambda^2), \qquad\text{ where } \tau^{\star} \text{ obeys } \tau^{\star} = \lambda^2 \int \tanh(\tau^{\star} + \sqrt{\tau^{\star}}x)\varphi(\mathrm{d} x). \label{eq:tau-t-monotone} \end{align} This claim can be established as follows by studying the property of the function \begin{equation} g_{\lambda}(\tau) = \frac{\lambda^2 \left[\int \tanh(\tau + \sqrt{\tau}x)\varphi(\mathrm{d} x)\right]^2}{\int \tanh^2(\tau + \sqrt{\tau}x)\varphi(\mathrm{d} x)} = \lambda^2 \int \tanh^2(\tau + \sqrt{\tau}x)\varphi(\mathrm{d} x). 
\label{eq:definition:glambda-tau} \end{equation} \begin{itemize} \item[(i)] We first observe that, for any $1<\lambda \leq 1.2$, the following derivative \begin{align} \label{eq:g-lambda-tau-defi} \frac{\mathrm{d} g_{\lambda}(\tau)}{\mathrm{d}\tau} = \lambda^2\frac{\mathrm{d}}{\mathrm{d} \tau} \int \tanh(\tau + \sqrt{\tau}x)\varphi(\mathrm{d} x) = \lambda^2\int \left(1 + \frac{x}{2\sqrt{\tau}}\right)\left(1 - \tanh^2(\tau + \sqrt{\tau}x)\right)\varphi(\mathrm{d} x) \end{align} always obeys \begin{equation} \frac{\mathrm{d} g_{\lambda}(\tau)}{\mathrm{d}\tau} \in (0,1) \label{eq:g-lambda-tau-01} \end{equation} and is decreasing in $\tau$ within the interval $\tau \in [\lambda^2-1, \lambda^2] \subseteq [0, 1.44]$ (given our assumption that $1<\lambda \leq 1.2$); this is numerically validated in the right panel of Figure~\ref{fig:numerics}. \item[(ii)] Secondly, we observe that $g_{\lambda}(\tau_1)>\tau_1$, where $\tau_1 = \lambda^2 - 1$. To prove this, consider the problem of estimating a symmetric random sign $X \in \{1,-1\}$ with $\mathbb{P}(X=1)=\mathbb{P}(X=-1)=1/2$, based on the observation $Y=\sqrt{\tau} X + Z$, where $Z\sim \mathcal{N}(0,1)$ is independent of $X$. It is well known that $\tanh(\sqrt{\tau}Y)=\tanh(\tau X+ \sqrt{\tau}Z) = \mathbb{E}[X\mid Y]$ is the minimum mean square error (MMSE) estimator \citep[Appendix B.2]{deshpande2017asymptotic}. In addition, the MMSE estimator $\mathbb{E}[X\mid Y]$ is known to be the projection of $X$ onto the space of square-integrable functions of $Y$, and as a result, it achieves the largest normalized correlation with $X$ among all estimators based on $Y$. This implies that the estimator $\tanh(\tau X+ \sqrt{\tau}Z)$ enjoys a higher normalized correlation with $X$ than the linear estimator $\sqrt{\tau}Y=\tau X+ \sqrt{\tau}Z$, thus leading to \begin{align*} \frac{\left[\int \tanh(\tau_1 + \sqrt{\tau_1}x)\varphi(\mathrm{d} x)\right]^2}{\int \tanh^2(\tau_1 + \sqrt{\tau_1}x)\varphi(\mathrm{d} x)} \geq \frac{\left[\int (\tau_1 + \sqrt{\tau_1}x)\varphi(\mathrm{d} x)\right]^2}{\int (\tau_1 + \sqrt{\tau_1}x)^2\varphi(\mathrm{d} x)} = \frac{\tau_1^2}{\tau_1^2 + \tau_1} = \frac{\lambda^2 - 1}{\lambda^2}. \end{align*} Given that the left-hand side of the above relation is given by $\frac{g_{\lambda}(\tau_1)}{\lambda^2}$ (cf.~\eqref{eq:SE-t-t}), we conclude that \[ \frac{g_{\lambda}(\tau_1)}{\lambda^2} \geq \frac{\lambda^2 - 1}{\lambda^2} \qquad \Longrightarrow \qquad g_{\lambda}(\tau_1) \geq \lambda^2 - 1 = \tau_1. \] \item[(iii)] Thirdly, it is seen that $g_{\lambda}(\lambda^2) < \lambda^2$, which follows from \eqref{eq:definition:glambda-tau} and the fact that $|\tanh(w)|<1$ $(w\in \ensuremath{\mathbb{R}})$. \item[(iv)] The above three properties immediately reveal that: \begin{itemize} \item[(a)] There exists a unique fixed point $\tau^{\star}$ of $g_{\lambda}(\cdot)$ within $(\lambda^2 - 1, \lambda^2)$; \item[(b)] Starting from $\tau_1=\lambda^2 - 1$, $\tau_t$ is monotonically increasing in $t$ and keeps moving closer to (but remains below) $\tau^{\star}$. To see this, note that for any $\tau_t< \tau^{\star}$, one has $\tau_{t+1}=g_{\lambda}(\tau_t) \leq g_{\lambda}(\tau^{\star}) = \tau^{\star}$ and $\tau_{t+1}=g_{\lambda}(\tau_t) > \tau_t$ (as $\tau_t<\tau^\star$ and $\tau_1<g_{\lambda}(\tau_1)$).
\end{itemize} \end{itemize} With the state evolution sequence $\{\tau_t\}$ in place, we claim that for every $t$, it satisfies \begin{align} \alpha_{t+1}^2 = \left(1 + O\Big(\frac{\widetilde{S}_{t+1}}{(\lambda-1)^{2.5}} \Big)\right)\tau_{t+1} = \big( 1 + o(1) \big) \tau_{t+1}, \label{eq:SE-induction} \end{align} where $\widetilde{S}_t$ is defined in \eqref{eqn:crude-st}. If the claim \eqref{eq:SE-induction} were valid, then one could readily conclude that \begin{align*} (1 + o(1))\lambda^2 &\geq \alpha_{t+1}^2 = \left(1 + o(1)\right)\tau_{t+1} \geq (1 + o(1)) \tau_{1} = (1 + o(1)) (\lambda^2 - 1), \\ \alpha_{t+1}^{2}&=\left(1+O\bigg(\sqrt{\frac{t\log n}{(\lambda-1)^{8}n}}+\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{14}n}}\bigg)\right)\tau_{t+1}, \end{align*} where we have used \eqref{eq:tau-t-monotone} and the definition \eqref{eqn:crude-st} of $\widetilde{S}_t$. Consequently, if we can establish inequality~\eqref{eq:SE-induction}, we can finish the inductive step with respect to $\alpha_{t}$. \paragraph{Proof of claim \eqref{eq:SE-induction}.} We intend to accomplish this via an induction argument. Assuming that \eqref{eq:SE-induction} is valid for the $t$-th iteration, we would like to establish \eqref{eq:SE-induction} for the $(t+1)$-th iteration as well. Let \begin{align} \varsigma_{t} &\coloneqq \alpha_{t}^2 - \tau_{t}, \qquad t\geq 1, \end{align} then it is equivalent to proving that \begin{equation} |\varsigma_{t+1}| \leq \left(\frac{C_{6}\widetilde{S}_{t+1}}{(\lambda-1)^{2.5}}\right)\tau_{t+1} \label{eq:varsigma-t1-Z2} \end{equation} for some constant $C_6>0$ large enough, provided that \begin{equation} |\varsigma_{t}| \leq \left(\frac{C_{6}\widetilde{S}_{t}}{(\lambda-1)^{2.5}}\right)\tau_{t}. \label{eq:varsigma-t-Z2} \end{equation} Towards this end, let us define the following quantity: \begin{align*} \mathcal{T}_1 \coloneqq \frac{\lambda^2 \langle v^\star, \tanh(\pi_tx_t)\rangle^2}{\|\tanh(\pi_tx_t)\|_2^2} - \frac{\lambda^2 \left[\int \tanh(\alpha_t^2 + \alpha_tx)\varphi(\mathrm{d} x)\right]^2}{\int \tanh^2(\alpha_t^2 + \alpha_tx)\varphi(\mathrm{d} x)}. \end{align*} We can then employ \eqref{eq:alphat-defn-ind} and \eqref{eq:SE} to derive \begin{align} \varsigma_{t+1} \coloneqq \alpha_{t+1}^2 - \tau_{t+1} &= \lambda^2 \int \tanh(\alpha_t^2 + \alpha_tx)\varphi(\mathrm{d} x) - \lambda^2 \int \tanh(\tau_t + \sqrt{\tau_t}x)\varphi(\mathrm{d} x) + \mathcal{T}_1 \notag\\ &= \varsigma_t\, \underbrace{\lambda^2 \int \left(1 - \tanh^2(\tau_t + \sqrt{\tau_t}x)\right)\left(1 + \frac{1}{2\sqrt{\tau_t}}x\right)\varphi(\mathrm{d} x)}_{=:\mathcal{T}_2} + O\left(\frac{\varsigma_t^2}{\tau_t^{3/2}}\right) + \mathcal{T}_1, \label{eq:vartau-identity} \end{align} where the last identity shall be established towards the end of this subsection. In what follows, we shall look at $\mathcal{T}_1$ and $\mathcal{T}_2$ separately. \begin{itemize} \item To control $\mathcal{T}_{2}$, we first observe that $\mathcal{T}_2 \geq 0$, a direct consequence of \eqref{eq:g-lambda-tau-01} and \eqref{eq:g-lambda-tau-defi}. In addition, we claim that, for any $\tau \ge \lambda^2 - 1$ and any $\lambda \in [1,1.2]$, \begin{align} \label{eqn:middle} 0\leq \mathcal{T}_2 (\lambda,\tau) \coloneqq \lambda^2 \int \left(1 - \tanh^2(\tau + \sqrt{\tau}x)\right)\left(1 + \frac{1}{2\sqrt{\tau}}x\right)\varphi(\mathrm{d} x) \le 1 - (\lambda-1). \end{align} To see this, we resort to the numerical verification. 
To be specific, the middle panel of Figure~\ref{fig:numerics} plots the following quantity \begin{align} \label{eqn:middle-figure} \frac{1 - \sup_{\tau: \lambda^2 -1 \leq \tau \leq \lambda^2} \mathcal{T}_2(\lambda, \tau)}{\lambda -1} \end{align} as a function of $\lambda$; it is clearly seen from Figure~\ref{fig:numerics} that this ratio is strictly above 1 for any $\lambda \in [1,1.2].$ All this indicates that \[ 0 \leq \mathcal{T}_2\leq 1 - (\lambda - 1). \] \item Next, we turn attention to $\mathcal{T}_1$. Repeating the same argument as in \eqref{eq:tanh2-diff-123} and \eqref{eq:tanh2-diff-456} and recognizing that $|\tanh^{\prime}(w)|\leq 1$, we have \[ \bigg|{\displaystyle \int}\tanh\Big(\frac{\pi_{t}}{\sqrt{n}}(\alpha_{t}+x)\Big)\varphi(\mathrm{d}x)-{\displaystyle \int}\tanh\big(\alpha_{t}(\alpha_{t}+x)\big)\varphi(\mathrm{d}x)\bigg|\lesssim \frac{S_t}{\alpha_t} . \] This taken together with the definition of $\Delta_{\alpha, t}$ (cf.~\eqref{eqn:alpha-t-genearl}), the definition of $\alpha_{t+1}$ (cf.~\eqref{eq:alpha-beta-recursion-spect}) and the fact that $\sqrt{n}v^{\star}_i\in \{1,-1\}$ ($1\leq i\leq n$) gives \begin{align*} \langle v^\star, \tanh(\pi_tx_t)\rangle &= \sqrt{n}\int \tanh\left(\frac{\pi_t}{\sqrt{n}}(\alpha_t + x)\right)\varphi(\mathrm{d} x) + O\left(\gamma_t^{-1} \big|\Delta_{\alpha, t}\big| \right) \\ &= \sqrt{n}\int \tanh(\alpha_t^2 + \alpha_tx)\varphi(\mathrm{d} x) + O\left( \frac{S_t}{\alpha_t} \sqrt{n} + \alpha_t S_t \sqrt{n} \right) \\ &= \sqrt{n}\int \tanh(\alpha_t^2 + \alpha_tx)\varphi(\mathrm{d} x) + O\left( \frac{S_t}{\alpha_t} \sqrt{n} \right), \end{align*} where the second line is due to \eqref{eqn:beethoven-sonata} and \eqref{eqn:giovanni}, and the last line is valid since $\alpha_t\lesssim \lambda \leq 1$ (cf.~\eqref{eq:Z2-induction-alphat}). Additionally, \eqref{eqn:giovanni-2} tells us that \begin{align*} \|\tanh(\pi_tx_t)\|_2^2 = n\int \tanh^2(\alpha_t^2 + \alpha_tx)\varphi(\mathrm{d} x) + O\left(\alpha_{t}^{2}n\bigg(\frac{S_{t}}{\alpha_{t}^{3}}\bigg)\right). \end{align*} Taking these two relations collectively with \eqref{eqn:tanh-basic-alpha} and \eqref{eq:int-alpha2-tanh-123} ensures that \begin{align*} |\mathcal{T}_1| \lesssim \lambda^2 \alpha_t^2 \left( \frac{S_{t}}{\alpha_{t}^{3}} \right) \asymp \lambda^2 \frac{S_{t}}{\alpha_{t}} . \end{align*} \end{itemize} Putting the above bounds together, we arrive at \[ |\varsigma_{t+1}| \leq\big(1-(\lambda-1)\big) |\varsigma_{t}| + O\bigg(\frac{\varsigma_{t}^{2}}{\tau_{t}^{3/2}}\bigg) + O\left( \lambda^2 \frac{S_{t}}{\alpha_{t}} \right). 
\] Given that $\tau_t$ is increasing in $t$ (see \eqref{eq:tau-t-monotone}), there exists some large enough constant $C_8>0$ such that \begin{align*} \frac{|\varsigma_{t+1}|}{\tau_{t+1}} & \leq\big(1-(\lambda-1)\big)\frac{|\varsigma_{t}|}{\tau_{t}}+O\bigg(\frac{\varsigma_{t}^{2}}{\tau_{t}^{5/2}}\bigg)+O\left(\lambda^{2}\frac{S_{t}}{\alpha_{t}\tau_{t}}\right)\\ & \leq\big(1-(\lambda-1)\big)\frac{|\varsigma_{t}|}{\tau_{t}}+\frac{C_{8}}{\sqrt{\lambda-1}}\frac{\varsigma_{t}^{2}}{\tau_{t}^{2}}+\frac{C_{8}S_{t}}{(\lambda-1)^{1.5}}\\ & \leq\big(1-(\lambda-1)\big)\left\{ \frac{C_{6}\widetilde{S}_{t}}{(\lambda-1)^{2.5}}\right\} +\left(C_{8}\frac{C_{6}\widetilde{S}_{t}}{(\lambda-1)^{3}}\right)\frac{C_{6}\widetilde{S}_{t}}{(\lambda-1)^{2.5}}+\frac{C_{8}\widetilde{S}_{t}}{(\lambda-1)^{1.5}}\\ & \leq\frac{C_{6}\widetilde{S}_{t}}{(\lambda-1)^{2.5}}\leq\frac{C_{6}\widetilde{S}_{t+1}}{(\lambda-1)^{2.5}}, \end{align*} where the second line holds since $\alpha_t^2=(1+o(1))\tau_t$ and $\tau_t\geq \lambda^2 - 1 \asymp \lambda - 1$ (cf.~\eqref{eq:tau-t-monotone}), the third line relies on \eqref{eq:varsigma-t-Z2} and $\tau_t\gtrsim \lambda - 1$, and the last line is valid provided that $\frac{\widetilde{S}_{t}}{(\lambda-1)^{4}}\ll1$. This in turn establishes \eqref{eq:varsigma-t1-Z2} for the $(t+1)$-th iteration. \paragraph{Proof of relation \eqref{eq:vartau-identity}.} We first make the observation that \begin{align} & \Big|\tanh(\alpha_{t}^{2}+\alpha_{t}x)-\tanh(\tau_{t}+\sqrt{\tau_{t}}x)-\big(1-\tanh^{2}(\tau_{t}+\sqrt{\tau_{t}}x)\big)\Big(\alpha_{t}^{2}+\alpha_{t}x-\tau_{t}-\sqrt{\tau_{t}}x\Big)\Big|\nonumber \\ & \qquad\leq\frac{1}{2}(\alpha_{t}^{2}+\alpha_{t}x-\tau_{t}-\sqrt{\tau_{t}}x)^{2},\label{eq:tanh-diff-135} \end{align} which follows due to Taylor expansion and the fact that $\tanh^{\prime}(w)=1-\tanh^{2}(w)\in[0,1]$ for any $w\in\mathbb{R}$. In addition, one has \begin{align*} \alpha_{t}^{2}+\alpha_{t}x-\tau_{t}-\sqrt{\tau_{t}}x & =(\alpha_{t}^{2}-\tau_{t})+\frac{\alpha_{t}^{2}-\tau_{t}}{\alpha_{t}+\sqrt{\tau_{t}}}x=\varsigma_{t}\Big(1+\frac{1}{2\sqrt{\tau_{t}}}x\Big)+\varsigma_{t}\Big(\frac{1}{\alpha_{t}+\sqrt{\tau_{t}}}-\frac{1}{2\sqrt{\tau_{t}}}\Big)x\\ & =\varsigma_{t}\Big(1+\frac{1}{2\sqrt{\tau_{t}}}x\Big)-\Big(\frac{\varsigma_{t}^{2}}{2(\alpha_{t}+\sqrt{\tau_{t}})^{2}\sqrt{\tau_{t}}}\Big)x, \end{align*} which further implies that \begin{align*} &\qquad \Big|\big(\alpha_{t}^{2}+\alpha_{t}x-\tau_{t}-\sqrt{\tau_{t}}x\big)-\varsigma_{t}\Big(1+\frac{1}{2\sqrt{\tau_{t}}}x\Big)\Big| \leq\frac{\varsigma_{t}^{2}}{2\tau_{t}^{3/2}}|x| \\ &\Longrightarrow \qquad \big(\alpha_{t}^{2}+\alpha_{t}x-\tau_{t}-\sqrt{\tau_{t}}x\big)^{2} \leq2\varsigma_{t}^{2}\Big(1+\frac{1}{2\sqrt{\tau_{t}}}x\Big)^{2}+\frac{\varsigma_{t}^{4}}{2\tau_{t}^{3}}x^{2}. \end{align*} Substituting the preceding two inequalities into (\ref{eq:tanh-diff-135}) yields \begin{align*} & \Big|\tanh(\alpha_{t}^{2}+\alpha_{t}x)-\tanh(\tau_{t}+\sqrt{\tau_{t}}x)-\big(1-\tanh^{2}(\tau_{t}+\sqrt{\tau_{t}}x)\big)\varsigma_{t}\Big(1+\frac{1}{2\sqrt{\tau_{t}}}x\Big)\Big|\\ & \qquad\leq\frac{\varsigma_{t}^{2}}{2\tau_{t}^{3/2}}|x|+\varsigma_{t}^{2}\Big(1+\frac{1}{2\sqrt{\tau_{t}}}x\Big)^{2}+\frac{\varsigma_{t}^{4}}{4\tau_{t}^{3}}x^{2}.
\end{align*} Taking the integral, and using the fact that $\tau_t\leq \lambda^2\lesssim 1$ (cf.~\eqref{eq:tau-t-monotone}) together with the induction hypothesis $|\varsigma_t|\lesssim \tau_t$, we obtain \begin{align*} & \left|{\displaystyle \int}\tanh(\alpha_{t}^{2}+\alpha_{t}x)\varphi(\mathrm{d}x)-{\displaystyle \int}\tanh(\tau_{t}+\sqrt{\tau_{t}}x)\varphi(\mathrm{d}x)-\varsigma_{t}{\displaystyle \int}\big(1-\tanh^{2}(\tau_{t}+\sqrt{\tau_{t}}x)\big)\Big(1+\frac{1}{2\sqrt{\tau_{t}}}x\Big)\varphi(\mathrm{d}x)\right|\\ & \quad\leq{\displaystyle \int}\left\{ \frac{\varsigma_{t}^{2}}{2\tau_{t}^{3/2}}|x|+\varsigma_{t}^{2}\Big(1+\frac{1}{2\sqrt{\tau_{t}}}x\Big)^{2}+\frac{\varsigma_{t}^{4}}{4\tau_{t}^{3}}x^{2}\right\} \varphi(\mathrm{d}x) \lesssim\frac{\varsigma_{t}^{2}}{\tau_{t}^{3/2}}+\frac{\varsigma_{t}^{2}}{\tau_{t}}+\frac{\varsigma_{t}^{4}}{\tau_{t}^{3}} \asymp \frac{\varsigma_{t}^{2}}{\tau_{t}^{3/2}}. \end{align*} \subsubsection{Bounding the quantity $\kappa_t$} \label{sec:bound-kappat-Z2} Recall that the analysis in Section~\ref{sec:z2-xi-t} requires bounding $\kappa_t$, which shall be done in this subsection with the assistance of expression \eqref{eq:SE-induction}. First, combining~\eqref{eqn:don} and~\eqref{eqn:giovanni} leads to \begin{align} \gamma_t^2\pi_t^2 = \frac{(1 + o(\lambda - 1))\alpha_t^2}{\int \tanh(\alpha_t^2 + \alpha_tx)\varphi(\mathrm{d} x)} = \frac{(1 + o(\lambda -1)) \alpha_t^2}{(1 + o(\lambda -1)) \tau_{t+1} / \lambda^2} \leq \frac{(1 + o(\lambda -1)) \lambda^2 \alpha_t^2}{(1 + o(\lambda -1)) \tau_{t} } = (1 + o(\lambda -1))\lambda^2, \label{eq:prod-gamma-pi-lambda} \end{align} provided that $\frac{\widetilde{S}_t}{(\lambda-1)^{3.5}}\ll 1$. Here, the second identity holds due to~\eqref{eq:SE}, \eqref{eq:varsigma-t1-Z2} and \eqref{eq:vartau-identity}, the inequality is valid since $\tau_t$ is increasing in $t$ (see \eqref{eq:tau-t-monotone}), and the last identity comes from \eqref{eq:SE-induction}. Recalling the definition of $\kappa_{t}$ (cf.~\eqref{defi:kappa}) and the fact that $\ltwo{\beta_{t-1}} = 1$ gives \begin{align} \kappa_t^2 = \max\Bigg\{ \Bigg\langle\int\Bigg[x\eta_{t}^{\prime}\Big(\alpha_tv^\star + \frac{1}{\sqrt{n}}x\Big) - \frac{1}{\sqrt{n}}\eta_{t}^{\prime\prime}\Big(\alpha_tv^\star &+ \frac{1}{\sqrt{n}}x\Big)\Bigg]^2 \varphi_n(\mathrm{d} x)\Bigg\rangle, ~ \bigg\langle \int\Big[\eta_{t}^{\prime}\Big(\alpha_tv^\star + \frac{1}{\sqrt{n}}x\Big)\Big]^2\varphi_n(\mathrm{d} x) \bigg\rangle\Bigg\}. \label{eq:kappat-square-135} \end{align} In what follows, let us control each term in \eqref{eq:kappat-square-135} separately.
\begin{itemize} \item To begin with, in view of the relations~\eqref{eqn:super-basic}, we obtain \begin{align} \notag & \Bigg\langle\int\Bigg[x\eta_{t}^{\prime}\Big(\alpha_{t}v^\star+\frac{1}{\sqrt{n}}x\Big)-\frac{1}{\sqrt{n}}\eta_{t}^{\prime\prime}\Big(\alpha_{t}v^\star+\frac{1}{\sqrt{n}}x\Big)\Bigg]^{2}\varphi_{n}(\mathrm{d} x)\Bigg\rangle\\ \notag & =\frac{1}{n}\int\left[\left(\gamma_{t}\pi_{t}x+\frac{2}{\sqrt{n}}\gamma_{t}\pi_{t}^{2}\tanh\Big(\pi_{t}\Big(\alpha_{t}v^\star+\frac{1}{\sqrt{n}}x\Big)\Big)\right)\cdot\Big(1-\tanh^{2}\Big(\pi_{t}\Big(\alpha_{t}v^\star+\frac{1}{\sqrt{n}}x\Big)\Big)\Big)\right]^{2}\varphi_{n}(\mathrm{d} x)\\ & =\int\left[\left(\gamma_{t}\pi_{t}x+\frac{2}{\sqrt{n}}\gamma_{t}\pi_{t}^{2}\tanh\Big(\frac{\pi_{t}}{\sqrt{n}}\big(\alpha_{t}+x\big)\Big)\right)\cdot\Big(1-\tanh^{2}\Big(\frac{\pi_{t}}{\sqrt{n}}\big(\alpha_{t}+x\big)\Big)\Big)\right]^{2}\varphi(\mathrm{d} x), \label{eq:w-inner-prod-identity} \end{align} where the last step follows from $\sqrt{n}v^\star_{i} \in \{+1, -1\}$ and the symmetry of $\varphi(\cdot).$ Reorganizing terms and recalling that $\pi_t = (1 + o(\lambda -1))\alpha_t\sqrt{n}$ (cf.~\eqref{eqn:don}) and $\gamma_t^2\pi_t^2 \le (1 + o(\lambda -1))\lambda^2$ (cf.~\eqref{eq:prod-gamma-pi-lambda}), we arrive at \begin{align*} \eqref{eq:w-inner-prod-identity} &= (1+o(\lambda -1))\gamma^2_t \pi^2_t \int \left[\left(x + 2\alpha_t\tanh(\alpha_t^2+\alpha_t x)\right)\left(1-\tanh^2(\alpha_t^2+\alpha_t x)\right)\right]^2 \varphi(\mathrm{d} x) \\ % &\le (1+o(\lambda -1))\lambda^2\int \left[\left(x + 2\sqrt{\tau_t}\tanh(\tau_t+\sqrt{\tau_t} x)\right)\left(1-\tanh^2(\tau_t+\sqrt{\tau_t} x)\right)\right]^2 \varphi(\mathrm{d} x), \end{align*} where the last line also relies on the relation \eqref{eq:SE-induction}. As a result, we reach \begin{align} \notag &\Bigg\langle\int\Bigg[x\eta_{t}^{\prime}\big(\alpha_tv^\star + \frac{1}{\sqrt{n}}x\big) - \frac{1}{\sqrt{n}}\eta_{t}^{\prime\prime}\big(\alpha_tv^\star + \frac{1}{\sqrt{n}}x\big)\Bigg]^2 \varphi_n(\mathrm{d} x)\Bigg\rangle \\ % &\qquad \leq (1+o(\lambda -1))\lambda^2\int \left[\left(x + 2\sqrt{\tau_t}\tanh(\tau_t+\sqrt{\tau_t} x)\right)\left(1-\tanh^2(\tau_t+\sqrt{\tau_t} x)\right)\right]^2 \varphi(\mathrm{d} x). \label{eqn:vienna} \end{align} \item Through similar calculations (for which we omit the details here), one can deduce that \begin{align} \label{eqn:Musikverein} &\Big\langle \int\Big[\eta_{t}^{\prime}\Big(\alpha_tv^\star + \frac{1}{\sqrt{n}}x\big)\Big]^2\varphi_n(\mathrm{d} x) \Big\rangle \le (1+o(\lambda -1)) \lambda^2\int \left[1-\tanh^2(\tau_t+\sqrt{\tau_t} x)\right]^2 \varphi(\mathrm{d} x). \end{align} \end{itemize} Finally, let us look at the following function: \begin{align} &{\kappa}^2(\lambda, \tau) \coloneqq \lambda^2 \max \bigg\{\int \Big[\left(x + 2\sqrt{\tau}\tanh(\tau+\sqrt{\tau} x)\right)\left(1-\tanh^2(\tau+\sqrt{\tau} x)\right)\Big]^2 \varphi(\mathrm{d} x), \int \left[1-\tanh^2(\tau+\sqrt{\tau} x)\right]^2 \varphi(\mathrm{d} x)\bigg\}. \label{eq:defn-kappa2-lambda-tau} \end{align} For any $1 < \lambda < 1.2$, we observe that \begin{align} \label{eqn:kappa-ze-left} &\sup_{\tau: \lambda^2-1 \le \tau \le \lambda^2}\kappa(\lambda, \tau) \le 1 - \frac{\lambda-1}{12} , \end{align} which has been numerically validated in the left panel of Figure~\ref{fig:numerics}. 
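For completeness, we sketch how the numerical validation summarized in Figure~\ref{fig:numerics} can be reproduced for \eqref{eqn:kappa-ze-left} and \eqref{eqn:middle}: the Gaussian integrals in \eqref{eq:defn-kappa2-lambda-tau} and \eqref{eqn:middle} are evaluated on a fine grid, and the supremum over $\tau\in[\lambda^2-1,\lambda^2]$ is approximated over a discretization of that interval. This is only a sketch of the computation (grid sizes, ranges, and function names are our own choices), not the exact script used to generate the figure.
\begin{verbatim}
import numpy as np

# Quadrature grid for integrals against the standard normal density phi.
xs = np.linspace(-10.0, 10.0, 4001)
dx = xs[1] - xs[0]
phi = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)

def gauss_int(f):
    """Approximate the integral of f against phi by a Riemann sum."""
    return float(np.sum(f(xs) * phi) * dx)

def kappa(lam, tau):
    """kappa(lambda, tau) as defined in (eq:defn-kappa2-lambda-tau)."""
    th = lambda x: np.tanh(tau + np.sqrt(tau) * x)
    term1 = gauss_int(lambda x: ((x + 2*np.sqrt(tau)*th(x)) * (1 - th(x)**2))**2)
    term2 = gauss_int(lambda x: (1 - th(x)**2)**2)
    return lam * np.sqrt(max(term1, term2))

def T2(lam, tau):
    """T_2(lambda, tau) as defined in (eqn:middle)."""
    th = lambda x: np.tanh(tau + np.sqrt(tau) * x)
    return lam**2 * gauss_int(lambda x: (1 - th(x)**2) * (1 + x/(2*np.sqrt(tau))))

for lam in np.linspace(1.01, 1.2, 20):
    taus = np.linspace(lam**2 - 1, lam**2, 100)
    worst_kappa = max(kappa(lam, t) for t in taus)
    worst_T2 = max(T2(lam, t) for t in taus)
    # Per (eqn:kappa-ze-left) and (eqn:middle), both printed ratios should be >= 1.
    print(lam, 12*(1 - worst_kappa)/(lam - 1), (1 - worst_T2)/(lam - 1))
\end{verbatim}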
Thus, putting the above results together, we have demonstrated the advertised bound for $\kappa_{t}$: \begin{align} \label{eqn:kappa-Z2} \kappa_t^2 \leq (1+o(1))\sup_{\tau: \lambda^2-1 \le \tau \le \lambda^2}\kappa(\lambda, \tau) \le 1 - \frac{\lambda-1}{15}. \end{align} \section{$\mathbb{Z}_2$ synchronization: Proof of Theorem~\ref{thm:Z2}} \label{sec:pf-thm-Z2} With the denoising functions selected as in \eqref{eqn:eta-z2-new}, we first point out that \begin{align} \|\beta_{t}\|_2 = \ltwo{\eta_{t}(x_{t})} = 1 , \qquad t\geq 1 \end{align} throughout the execution of AMP. This basic fact helps simplify the analysis, as there is no need to control the its related quantity $\Delta_{\beta, t}$ (see \eqref{eqn:delta-beta-general}) given that $\|\beta_{t-1}\|_2$ is fixed. As a result, this section focuses attention on characterizing the dynamics of $\alpha_{t}.$ \paragraph{Induction hypotheses.} The proof of Theorem~\ref{thm:Z2} is built upon Theorem~\ref{thm:recursion-spectral} as well as the analysis framework laid out in Theorem~\ref{thm:main} (or Corollary~\ref{cor:recursion-spectral}). The proof is inductive in nature; more specifically, we aim to show, by induction, that for every $t$ obeying~\eqref{eqn:z2-decomposition}, the AMP iterates $\{x_t\}$ satisfy the desired decomposition \eqref{eqn:z2-decomposition} in Theorem~\ref{thm:Z2} while satisfying the following properties: \begin{subequations} \label{Z2-induction} \begin{align} \big(1+o(1)\big)\lambda &\geq \alpha_t \ge \big(1+o(1)\big)\sqrt{\lambda^2 - 1} \label{eq:Z2-induction-alphat}\\ \| \xi_{t-1}\|_2 & \leq C_1 \sqrt{\frac{(t+s)\log n}{(\lambda - 1)^3 n}} + C_1 \left( 1 - \frac{1}{40}(\lambda - 1) \right) ^{t-1} \frac{\log^{3.5} n}{\sqrt{(\lambda - 1)^9 n}} \eqqcolon S_t \label{eqn:Z2-induction-st} \end{align} for some large enough constant $C_1>0$. Given that $s\asymp \frac{\log n}{(\lambda-1)^2}$, it is straightforward to see that \begin{align} \label{eqn:crude-st} S_t \leq \underset{\eqqcolon \,\widetilde{S}_t}{\underbrace{ C_1 \sqrt{\frac{(t+s)\log n}{(\lambda - 1)^3 n}} + C_1 \frac{\log^{3.5} n}{\sqrt{(\lambda - 1)^9 n}} }} \lesssim \sqrt{\frac{t\log n}{(\lambda-1)^3n}} + \sqrt{\frac{\log^7 n}{(\lambda-1)^9 n}} . \end{align} \end{subequations} We first verify these hypotheses for the base case. In view of Theorem~\ref{thm:recursion-spectral}, the spectral initialization $x_1$ (defined in \eqref{eqn:Z2-initialization}) admits the decomposition~\eqref{eqn:z2-decomposition} and satisfies \begin{align} \label{eq:init-alpha-beta-Z2} \alpha_1 = \sqrt{\lambda^2 - 1} , \qquad \ltwo{\beta_{0}} = 1, \qquad \|\xi_{0}\|_2 \lesssim \frac{\log^{3.5} n}{\sqrt{(\lambda - 1)^9n}}. \end{align} This validates the induction hypotheses \eqref{Z2-induction} for the base case with $t = 1$. In order to carry out the induction argument, we shall --- throughout the rest of the proof --- assume that the induction hypotheses \eqref{Z2-induction} hold true for every iteration $k\leq t$, and attempt to show their validity for the $(t+1)$-th iteration. \paragraph{Organization of the proof. } The proof is organized as follows. Section~\ref{sec:prelim-z2} collects a couple of preliminary facts (e.g., basic concentration inequalities, derivatives of the denoising function, and tight estimates of $\pi_t$ and $\gamma_t$) that will be used throughout the induction argument. 
Section~\ref{sec:z2-key} develops upper bounds on several key quantities (e.g., $A_t, B_t, D_t$) that underlie our analysis framework in Theorem~\ref{thm:main} and Corollary~\ref{cor:recursion-spectral}. The main recursion is established in Section~\ref{sec:xi-z2}; specifically, Section~\ref{sec:z2-xi-t} is devoted to establishing the bound for $\ltwo{\xi_t}$, Section~\ref{sec:z2-delta-alpha} studies the size of $\Delta_{\alpha, t}$, while Section~\ref{sec:main-recursion-z2} is dedicated to the analysis of $\alpha_{t}$. \subsection{Preliminary facts} \label{sec:prelim-z2} Before embarking on the main proof of Theorem~\ref{thm:Z2}, let us gather some preliminary facts that shall be used multiple times throughout the proof. \subsubsection{Basic concentration results} \label{sec:basic-concentration-buble} We begin by stating some concentration results that follow directly from the results in Section~\ref{sec:Gaussian-concentration}. Recall that the $\phi_k$'s are i.i.d.~drawn from $\phi_{k} \stackrel{\text{i.i.d}}{\sim} \mathcal{N}(0, \frac{1}{n}I_n)$, and for every $x\in \ensuremath{\mathbb{R}}^n$ we denote by $|x|_{(i)}$ its $i$-th largest entry in magnitude. In the statement of Lemma~\ref{lem:Gauss}, we mention some convex set $\mathcal{E}$, which we shall select as follows. For any fixed $1\leq t < n-2s$ and $1\leq \tau\leq n$, let us define the following set: \begin{align} &\notag \mathcal{E}_\tau \coloneqq \left\{ \{\phi_k\}: \max_{-2s\leq k\leq t-1} \|\phi_k\|_2 < 1+ C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}\right\} \bigcap \left\{\{\phi_k\}: \sup_{a \in \mathcal{S}^{2s+t-1}} \Big\|\sum_{k = -2s}^{t-1} a_k\phi_k\Big\|_2 < 1 + C_5\sqrt{\frac{(t+s)\log \frac{n}{\delta} }{n}} \right\} \\ &\hspace{2cm} \bigcap \left\{\{\phi_k\}: \sup_{a = [a_k]_{-2s\leq k< t} \in \mathcal{S}^{2s+t-1}} \sum_{i = 1}^{\tau} \Big|\sum_{k = -2s}^{t-1} a_k\phi_k\Big|_{(i)}^2 < \frac{C_5(t + s+ \tau)\log \frac{n}{\delta}}{n} \right\} \label{eq:eps-set} \end{align} for some large enough constant $C_5>0$. It is easily seen that $\mathcal{E}_{\tau}$ is a convex set with respect to $(\phi_{-2s},\ldots,\phi_{t-1})$. Additionally, Lemma~\ref{lem:brahms-lemma} together with the union bound reveals that $\{\mathcal{E}_\tau\}$ is a set of high-probability events: \begin{align} \label{eqn:eps-interset} \ensuremath{\mathbb{P}}(\{\phi_k\} \in \mathcal{E} ) \geq 1 - \delta, \qquad \text{with } \mathcal{E} \coloneqq \bigcap_{\tau = 1}^n \mathcal{E}_{\tau} \end{align} In addition, Lemma~\ref{lem:Gauss} and Corollary~\ref{cor:Gauss} entail bounding the expected difference between a function $f$ and its projection onto $\mathcal{E}$ (see \eqref{eqn:gauss-lipschitz-B-proj}). Here, we state a simple result that leads to a useful bound in this regard. Specifically, denote $\Phi \coloneqq \sqrt{n}(\phi_{-2s},\ldots,\phi_{t-1})$, and consider any given function $f: \ensuremath{\mathbb{R}}^{n\times (2s+t)}\to \ensuremath{\mathbb{R}}$ obeying \begin{equation} |f(\Phi)| \lesssim n^{100} \Big(\max_k \|\phi_k\|_2\Big)^{100} . \label{eq:f-Phi-poly} \end{equation} Denoting by $\mathcal{P}_{\mathcal{E}}(\cdot)$ the Euclidean projection onto the set $\mathcal{E}$ and taking $\delta \asymp n^{-300}$, we assert that \begin{align} \label{eqn:brahms-conc} \mathbb{E}\big[\big|f(\Phi) - f(\mathcal{P}_{\mathcal{E}}(\Phi)) \big|\big] \lesssim n^{-100}. \end{align} In light of this result, we shall choose the set $\mathcal{E}$ with $\delta \asymp n^{-300}$ throughout the rest of this section. 
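Before proceeding to the proof of \eqref{eqn:brahms-conc}, we note that the scales entering the definition \eqref{eq:eps-set} are easy to illustrate numerically: for i.i.d.~$\phi_k \sim \mathcal{N}(0, \frac{1}{n}I_n)$, each $\|\phi_k\|_2$ concentrates around $1$, and the supremum over $a\in\mathcal{S}^{2s+t-1}$ in the second constraint is simply the largest singular value of the matrix whose columns are the $\phi_k$'s. The following Monte Carlo sketch (purely illustrative; the dimensions are arbitrary choices and play no role in the analysis) reports these two quantities against the stated scales.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m = 4000, 60                      # m plays the role of t + 2s (arbitrary here)
Phi = rng.normal(scale=1.0 / np.sqrt(n), size=(n, m))

max_norm = np.linalg.norm(Phi, axis=0).max()   # max_k ||phi_k||_2
sup_comb = np.linalg.norm(Phi, 2)              # sup_{||a||_2=1} ||sum_k a_k phi_k||_2

print(f"max_k ||phi_k||_2           = {max_norm:.3f}"
      f"   (1 + sqrt(log(n)/n)   = {1 + np.sqrt(np.log(n) / n):.3f})")
print(f"sup_a ||sum_k a_k phi_k||_2 = {sup_comb:.3f}"
      f"   (1 + sqrt(m*log(n)/n) = {1 + np.sqrt(m * np.log(n) / n):.3f})")
\end{verbatim}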
\begin{proof}[Proof of inequality~\eqref{eqn:brahms-conc}] We divide into two cases depending on the value of $\max_k \|\phi_k\|_2$, namely, \begin{align*} \mathbb{E}\big[ \big|f(\Phi) - f(\mathcal{P}_{\mathcal{E}}(\Phi)) \big|\big] & = \mathbb{E}\Big[\big|f(\Phi) - f(\mathcal{P}_{\mathcal{E}}(\Phi)) \big| \ind\left(\mathcal{E}^{\mathrm{c}}\right)\Big] \lesssim \mathbb{E}\left[n^{100} \Big(\max_k \|\phi_k\|_2\Big)^{100} \ind\left(\mathcal{E}^{\mathrm{c}}\right)\right] \\ &\lesssim \mathbb{E}\left[n^{100} \Big(\max_k \|\phi_k\|_2\Big)^{100} \ind\left(\mathcal{E}^{\mathrm{c}}\right)\ind\bigg(\max_k \|\phi_k\|_2 \le 1+C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}\bigg)\right] \\ &\qquad+ \mathbb{E}\left[n^{100} \Big(\max_k \|\phi_k\|_2\Big)^{100} \ind\left(\mathcal{E}^{\mathrm{c}}\right) \ind\bigg(\max_k \|\phi_k\|_2 > 1+C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}\bigg)\right]. \end{align*} First, it is easily seen from \eqref{eqn:eps-interset} that \begin{align*} \mathbb{E}\left[n^{100} \Big(\max_k \|\phi_k\|_2\Big)^{100} \ind\left(\mathcal{E}^{\mathrm{c}}\right)\ind\Big(\max_k \|\phi_k\|_2 \le 1+C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}\Big)\right] \lesssim n^{200}\delta, \end{align*} provided that $\log\frac{1}{\delta} \lesssim \log n$. In addition, one can deduce that \begin{align*} & n^{100} \sum_k \mathbb{E}\left[\|\phi_k\|_2^{100} \ind\left(\mathcal{E}^{\mathrm{c}}\right) \ind\Big(\max_k \|\phi_k\|_2 > 1+C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}\Big)\right] \\ &\qquad \lesssim n^{100} \sum_k \mathbb{E}\left[\|\phi_k\|_2^{100} \ind\left(\mathcal{E}^{\mathrm{c}}\right) \ind\Big(\|\phi_k\|_2 \le 1+C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}\Big)\ind\Big(\max_k \|\phi_k\|_2 > 1+C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}\Big)\right] \\ &\qquad \qquad+ n^{100} \sum_k \mathbb{E}\left[\|\phi_k\|_2^{100} \ind\Big(\|\phi_k\|_2 > 1+C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}\Big)\right] \\ &\qquad \lesssim n^{100} \sum_k \left(1+ C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}\right)^{100} \delta + n^{100} \sum_k \int_{C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}}^{\infty} (1+x)^{100} \exp(-\frac{nx^2}{2})\mathrm{d} x \le n^{200} \delta + n^{-100}, \end{align*} provided that $\log\frac{1}{\delta} \lesssim \log n$ and that $C_5$ is large enough. Putting these two cases together and choosing $\delta \asymp n^{-300}$ finish the proof. \end{proof} \subsubsection{Properties about the denoising function $\eta_t$} \label{sec:property-eta-z2} Recall that the denoising function is \[ \eta_t(x) = \gamma_t \tanh(\pi_t x) \qquad \text{with } \pi_t = \sqrt{n(\|x_t\|_2^2-1)} \text{ and } \gamma_t = \left\|\tanh\left(\pi_tx_t\right)\right\|_2^{-1}. \] In this subsection, we single out several useful properties related to $\eta_t(\cdot)$. \paragraph{Tight estimates of $\pi_t$ and $\gamma_t$. } Given that $\eta_t(\cdot)$ involves two quantities $\pi_t$ and $\gamma_t$, we first develop tight bounds on the sizes of them in the following, which are legitimate under the induction hypotheses \eqref{Z2-induction}. The proof is deferred to Section~\ref{sec:pf-z2-pi}. 
\begin{lems} \label{lem:pit-gammat} Under the induction hypotheses \eqref{Z2-induction}, we have \begin{subequations} \label{eq:parameters} \begin{align} \label{eqn:don} \pi_t &= \bigg(1 + O\bigg( \frac{S_t}{\alpha^2_t} \bigg) \bigg)\alpha_t\sqrt{n} = \big(1+o(1)\big) \alpha_t \sqrt{n} \\ \label{eqn:giovanni} \gamma_t^{-2} &= \left(1+O\bigg(\frac{S_{t}}{\alpha_{t}}+\frac{S_{t}}{\alpha_{t}^{3}}\bigg)\right)n\int\tanh\big(\alpha_{t}(\alpha_{t}+x)\big)\varphi(\mathrm{d}x)\asymp\alpha_{t}^{2}n \end{align} \end{subequations} with probability exceeding $1-O(n^{-11})$. \end{lems} \paragraph{Bounds on derivatives and gradients.} Next, we look at the derivatives and gradients of the denoising function. As can be straightforwardly seen, the function $\eta_t(x) \coloneqq \gamma_t\tanh(\pi_tx)$ is smooth everywhere, whose first three derivatives are given by \begin{gather} \begin{aligned} \label{eqn:super-basic} \eta_{t}^{\prime}(x) &= \gamma_t\pi_t\big(1-\tanh^2(\pi_tx)\big) \\ \eta_{t}^{\prime\prime}(x) &= -2\gamma_t\pi_t^2\tanh(\pi_tx)\big(1-\tanh^2(\pi_tx)\big)\\ \eta_{t}^{(\prime\prime\prime)}(x) &= -2\gamma_t\pi_t^3\big(1-\tanh^2(\pi_tx)\big)\big(1-3\tanh^2(\pi_tx)\big) \end{aligned} \end{gather} for any $x\in \ensuremath{\mathbb{R}}$. Combining the identities with \eqref{eq:parameters} and the fact $|\tanh(x)| \leq 1$, we can easily validate that \begin{gather} \label{eqn:chocolate} \begin{aligned} &|\eta_t(x)| \lesssim \frac{1}{\alpha_t\sqrt{n}} , \qquad\qquad &&|\eta_t^{\prime} (x) | \lesssim 1 \eqqcolon \rho, \\ & |\eta_t^{\prime\prime} (x)| \lesssim \alpha_t\sqrt{n} \lesssim \sqrt{n} \eqqcolon \rho_1, \qquad && |\eta_{t}^{(\prime\prime\prime)} (x)| \lesssim \alpha_t^2n \lesssim n \eqqcolon \rho_2. \end{aligned} \end{gather} Next, let us consider any given vectors $\mu =[\mu^k]_{-2s\leq k \leq t-1} \in \mathcal{S}^{t+2s-1}$, $\beta =[\beta^k]_{-2s\leq k \leq t-1} \in \mathcal{S}^{t+2s-1}$, and any given $\alpha \in \ensuremath{\mathbb{R}}$ obeying $\lambda \geq \alpha \geq \sqrt{\lambda^2 - 1}$ (note that, for the moment, we shall treat them as fixed parameters independent of $\{\phi_{k}\}$). We shall also define \begin{align*} \eta_{t}^{(i)}\big(v(\alpha,\beta)\big) \coloneqq \eta_{t}^{(i)}\Big(\alpha v^\star + \sum_{k = -2s}^{t-1} \beta^k\phi_k\Big) \in \ensuremath{\mathbb{R}}^n, \quad \text{with }v(\alpha,\beta) \coloneqq \alpha v^\star + \sum_{k = -2s}^{t-1} \beta^k\phi_k, \end{align*} where the superscript $i$ denotes the $i$-th derivative (computed in an entrywise manner). In what follows, we collect several elementary results that are useful for our main proof. 
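Before listing them, we record a quick symbolic sanity check of the derivative formulas \eqref{eqn:super-basic}; this is merely an aside and is not used anywhere in the proof. In the sketch below, $\gamma_t$ and $\pi_t$ are treated as free positive symbols.
\begin{verbatim}
import sympy as sp

x, gamma_t, pi_t = sp.symbols('x gamma_t pi_t', positive=True)
eta = gamma_t * sp.tanh(pi_t * x)
t = sp.tanh(pi_t * x)

claims = [
    sp.diff(eta, x, 1) - gamma_t * pi_t * (1 - t ** 2),
    sp.diff(eta, x, 2) + 2 * gamma_t * pi_t ** 2 * t * (1 - t ** 2),
    sp.diff(eta, x, 3) + 2 * gamma_t * pi_t ** 3 * (1 - t ** 2) * (1 - 3 * t ** 2),
]
# Rewriting in exponentials turns each difference into a rational function,
# which simplify() cancels to zero.
print([sp.simplify(c.rewrite(sp.exp)) for c in claims])   # expect [0, 0, 0]
\end{verbatim}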
\begin{subequations} \label{eqn:-z2-basic-derivatives} \begin{align} \Big\|\nabla_{\phi_j} \Big\langle \sum_{k = -2s}^{t-1} \mu^k\phi_k, a\Big\rangle\Big\|_2 &\le |\mu^j| \cdot \|a\|_2, \qquad &&\text{for any given } a\in\ensuremath{\mathbb{R}}^n \label{mu-phi}\\ % \Big\|\nabla_{\phi_j} \big\langle \eta_t^{(s)}\big(v(\alpha,\beta)\big), a\big\rangle\Big\|_2 &\le |\beta^{j}|\cdot \big\|\eta_t^{(s+1)}\big(v(\alpha,\beta)\big) \circ a \big\|_2, \qquad &&\text{for any given } a\in\ensuremath{\mathbb{R}}^n \label{eta-phi}\\ % \Big\|\nabla_{\mu} \Big\langle \sum_{k = -2s}^{t-1} \mu^k\phi_k, a\Big\rangle\Big\|_2 &\le \|a\|_2\sum_{k = -2s}^{t-1} \|\phi_k\|_2, \qquad &&\text{for any given } a\in\ensuremath{\mathbb{R}}^n \label{mu-mu}\\ % \Big\|\nabla_{\mu, \beta} \Big( \sum_{k = -2s}^{t-1} \mu^k\beta^k \Big) \Big\|_2 &\le \|\mu\|_2 + \|\beta\|_2 = 2, \label{mu-beta}\\ % \Big\|\nabla_{\beta} \Big\langle \eta_t^{(s)}\big(v(\alpha,\beta)\big), a\Big\rangle\Big\|_2 &\le \|a\|_2\cdot \big\|\eta_t^{(s+1)}\big(v(\alpha,\beta)\big) \big\|_2\cdot\sum_{k=-2s}^{t-1}\|\phi_k\|_2, \qquad &&\text{for any given } a\in\ensuremath{\mathbb{R}}^n.\label{eta-beta} \end{align} \end{subequations} The proofs of these results are fairly elementary and are hence omitted for the sake of brevity. \subsubsection{Proof of tight estimates of $\pi_t$ and $\gamma_t$ (Lemma~\ref{lem:pit-gammat})} \label{sec:pf-z2-pi} \paragraph{Bounding quantity $\pi_{t}$.} In view of Lemma~\ref{lem:brahms-lemma} and the fact that $v^{\star\top}\big[\phi_{-2s},\cdots,\phi_{t-1}\big]\sim \mathcal{N}(0,\frac{1}{n}I_{t+2s})$, we have \begin{align*} \left\|\sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\right\|_2 &= 1 + O\left(\sqrt{\frac{(t+s)\log n}{n}}\right), \\ \Big|\Big\langle v^\star,\sum_{k=-2s}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big\rangle\Big| &=\Big|\Big\langle v^{\star\top}\big[\phi_{-2s},\cdots,\phi_{t-1}\big],\big[\beta_{t-1}^{-2s},\cdots,\beta_{t-1}^{t-1}\big]\Big\rangle\Big|\leq\Big\|v^{\star \top}\big[\phi_{-2s},\cdots,\phi_{t-1}\big]\Big\|_{2}\|\beta_{t-1}\|_{2} \\ &\lesssim\sqrt{\frac{(t+s)\log n}{n}} \end{align*} with probability exceeding $1-O(n^{-11})$, where we recall that $\|\beta_{t-1}\|_{2}=1$. As a result, recalling the induction hypothesis that $|\alpha_t|\leq \lambda \lesssim 1$ (see \eqref{eq:Z2-induction-alphat}), we arrive at \[ \Big\|\alpha_{t}v^\star+\sum_{k=-2s}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big\|_{2}^{2}=\alpha_{t}^{2}+2\alpha_{t}\Big\langle v^\star,\sum_{k=-2s}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big\rangle+\Big\|\sum_{k=-2s}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big\|_{2}^{2}=\alpha_{t}^{2}+1+O\Big(\sqrt{\frac{(t+s)\log n}{n}}\Big) \lesssim 1. \] Invoke the other induction hypothesis \eqref{eqn:Z2-induction-st} and the condition $s\asymp \frac{\log n}{(\lambda-1)^2}$ to obtain \begin{align*} \|x_t\|_2^2 &= \Big\|\alpha_t v^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k + \xi_{t-1}\Big\|_2^2 = \Big\|\alpha_{t}v^\star+\sum_{k=-2s}^{t-1}\beta_{t-1}^{k}\phi_{k}\Big\|_{2}^{2} + 2 \Big\langle \xi_{t-1}, \alpha_t v^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big\rangle + \ltwo{\xi_{t-1}}^2\\ &= \alpha_t^2 + 1 + O\Big(\sqrt{\frac{(t+s)\log n}{n}}\Big) + O\bigg( \big\| \xi_{t-1} \big\|_2 \Big\| \alpha_t v^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k\Big\|_2 \bigg) + \ltwo{\xi_{t-1}}^2\\ &= \alpha_t^2 + 1 + O\Big(\sqrt{\frac{(t+s)\log n}{n}}\Big) + O\left(\ltwo{\xi_{t-1}}\right) = \alpha_t^2 + 1 + O(S_t). 
\end{align*} Therefore, we can conclude that \begin{align} \pi_t = \sqrt{n(\|x_t\|_2^2-1)} &= \alpha_t\sqrt{n} \left(1 + O\left(\frac{S_t}{\alpha^2_t}\right)\right), \label{eq:gamma-part1} \end{align} where we use the induction hypothesis \eqref{Z2-induction}. This establishes the advertised relation~\eqref{eqn:don} about $\pi_{t}$. \paragraph{Bounding quantity $\gamma_t$.} Before proceeding, we find it helpful to first establish a connection between $\|\tanh\left(\pi_tx_t\right)\|_2$ and $\|\tanh(\pi_tv_t)\|_2$, where we recall that $x_{t} = v_{t} + \xi_{t-1}$. Recognizing that $|\tanh(x)| \leq 1$ and $|\tanh'(x)| \leq 1$, we can guarantee that, for each $1\leq i\leq n$, \begin{align*} \big|\tanh\left(\pi_tx_{t,i}\right) - \tanh\left(\pi_tv_{t,i}\right)\big| \leq \pi_t |{\xi}_{t-1,i}| \lesssim \alpha_t\sqrt{n} \cdot |{\xi}_{t-1,i}|, \end{align*} where the last inequality follows from~\eqref{eq:gamma-part1}. By virtue of the induction hypothesis \eqref{eqn:Z2-induction-st}, we can obtain \begin{align} \notag \left| \big\|\tanh\left(\pi_tx_t\right)\big\|_2^2 - \big\|\tanh\left(\pi_tv_t\right)\big\|_2^2\right| &\leq \big\|\tanh\left(\pi_tx_t\right) - \tanh\left(\pi_tv_t\right)\big\|_2 \cdot \big\|\tanh\left(\pi_tx_t\right)+\tanh\left(\pi_tv_t\right)\big\|_2\\ &\lesssim \alpha_tn\|\xi_{t-1}\|_2 \lesssim \alpha_tnS_t . \label{eq:gamma-part2} \end{align} The above relation \eqref{eq:gamma-part2} allows us to turn attention to the quantity $\left\|\tanh\left(\pi_tv_t\right)\right\|_2^2$, towards which we would like to invoke Lemma~\ref{lem:Gauss} to control the following quantity \begin{align*} \big\|\tanh\left(\pi_tv_t\right)\big\|_2^2 - n\int \tanh^2\left(\frac{\pi_t}{\sqrt{n}}\left(\alpha_t+ x\right)\right) \varphi(\mathrm{d} x). \end{align*} Given that for any coordinate $1\leq i\leq n$, one has $\sqrt{n}v^\star_{i} \in \{+1, -1\}$ and hence (due to symmetry) \begin{align*} \int \tanh^2 \left(\pi_t\left(\alpha_t v^\star_i + \frac{x}{\sqrt{n}}\right)\right) \varphi(\mathrm{d} x) = \int \tanh^2 \left(\frac{\pi_t}{\sqrt{n}}\left(\alpha_t + x\right)\right) \varphi(\mathrm{d} x), \end{align*} we are motivated to look at the following function \begin{align*} f_{\theta}(\Phi) \coloneqq \big\| \tanh(\pi v) \big\|_2^2 - \int\bigg\|\tanh\left(\pi \left(\alpha v^\star + \frac{1}{\sqrt{n}}x\right) \right)\bigg\|_2^2\varphi_n(\mathrm{d} x) \qquad \text{where } v \coloneqq \alpha v^\star + \sum_{k = -2s}^{t-1} \beta^k\phi_k. \end{align*} Here, we define $$ \Phi= \sqrt{n} \big[ \phi_{-2s},\ldots, \phi_{t-1} \big] , \qquad \theta = [\alpha, \beta, \pi] \in \ensuremath{\mathbb{R}}^{t+2s+2} \qquad \text{and} \qquad \beta = [\beta^{-2s},\cdots, \beta^{t-1}]. $$ Clearly, in order to bound $\big\| \tanh(\pi_t v_t) \big\|_2^2 - \int\big\|\tanh\big(\pi_t \big(\alpha_t v^\star + \frac{1}{\sqrt{n}}x\big) \big)\big\|_2^2\varphi_n(\mathrm{d} x)$, it suffices to develop a bound on $f_{\theta}(\Phi)$ uniformly over all $\theta$ within the following set: \begin{equation} \Theta \coloneqq \Big\{ \theta = (\alpha,\beta,\pi) \mid \|\beta \|_2 = 1, \sqrt{\lambda - 1} \lesssim \alpha \lesssim 1, \pi \asymp \alpha \sqrt{n} \Big\}. \end{equation} Towards this end, observe that \begin{align*} \left\|\nabla_{\Phi} f_\theta(\Phi)\right\|_2 \le \frac{2\pi\|\beta\|_2}{\sqrt{n}}\left\|\tanh(\pi v) \circ \tanh^{\prime}(\pi v) \right\|_2 \leq 2\pi\|\beta\|_2 &\lesssim \alpha \sqrt{n}, \end{align*} where we have used the facts that $\|\beta\|_2=1$, and $\pi \asymp \alpha \sqrt{n}$. 
Additionally, it is straightforward to check that $f_{\theta}(\Phi)$ obeys $\|\nabla_{\theta} f_{\theta}(Z) \|_2 \lesssim n^{100} $ for all $Z\in \mathcal{E}$ and $|f_{\theta}(\Phi)| \lesssim n^{100} \big( \max_k \|\phi_k\|_2 \big)^{100}$. For any fixed $\theta$, it is readily seen that $\mathbb{E}[f_{\theta}(\Phi)]=0$. Applying Corollary~\ref{cor:Gauss} in conjunction with \eqref{eqn:brahms-conc} yields \[ \sup_{\theta\in \Theta} \bigg| \frac{1}{\alpha} f_{\theta}(\Phi) \bigg| \lesssim \sqrt{n(t+s)\log n} \] with probability at least $1-O(n^{-11})$. This in turn leads to \begin{align} \left|\big\| \tanh(\pi_t v_t) \big\|_2^2 - \int\Big\|\tanh\left(\pi_t\left(\alpha_t v^{\star} + \frac{1}{\sqrt{n}}x\right)\right)\Big\|_2^2\varphi_n(\mathrm{d} x)\right| \leq \alpha_t \sup_{\theta\in \Theta} \frac{1}{\alpha} \left| f_{\theta}(\Phi) \right| \lesssim \alpha_t\sqrt{(t+s)n\log n}. \label{eq:gamma-part3} \end{align} Putting~\eqref{eq:gamma-part1},~\eqref{eq:gamma-part2}, and~\eqref{eq:gamma-part3} together leads to \begin{align} \|\tanh(\pi_tx_t)\|_2^2 &= n\int \tanh^2\left(\frac{\pi_t}{\sqrt{n}}\left(\alpha_t+ x\right)\right) \varphi(\mathrm{d} x) + O\Big(\alpha_tnS_t + \alpha_t\sqrt{(t+s)n\log n}\Big) \notag\\ & = n\int \tanh^2\left(\frac{\pi_t}{\sqrt{n}}\left(\alpha_t+ x\right)\right) \varphi(\mathrm{d} x) + O\big(\alpha_tnS_t \big). \label{eq:identity-tanh-pixt} \end{align} In view of the mean value theorem and the fact that $|(\tanh^{2})^{\prime}(w)|=|2\tanh(w)\tanh^{\prime}(w)|\leq 2$ (and hence $\tanh^{2}$ is 2-Lipschitz continuous), we have \begin{align} \bigg|\tanh^{2}\Big(\frac{\pi_{t}}{\sqrt{n}}(\alpha_{t}+x)\Big)-\tanh^{2}\Big(\alpha_{t}(\alpha_{t}+x)\Big)\bigg| & \leq2\left|\Big(\frac{\pi_{t}}{\sqrt{n}}-\alpha_{t}\Big)(\alpha_{t}+x)\right|\leq2\left|\frac{\pi_{t}}{\sqrt{n}}-\alpha_{t}\right|\big(\alpha_{t}+|x|\big), \label{eq:tanh2-diff-123} \end{align} which together with \eqref{eq:gamma-part1} yields \begin{align} & \bigg|{\displaystyle \int}\tanh^{2}\Big(\frac{\pi_{t}}{\sqrt{n}}(\alpha_{t}+x)\Big)\varphi(\mathrm{d}x)-{\displaystyle \int}\tanh^{2}\Big(\alpha_{t}(\alpha_{t}+x)\Big)\varphi(\mathrm{d}x)\bigg| \notag\\ &\qquad \leq2\left|\frac{\pi_{t}}{\sqrt{n}}-\alpha_{t}\right|\left({\displaystyle \int}\alpha_{t}\varphi(\mathrm{d}x)+{\displaystyle \int}|x|\varphi(\mathrm{d}x)\right) \lesssim\left|\frac{\pi_{t}}{\sqrt{n}}-\alpha_{t}\right| \lesssim \frac{S_t}{\alpha_t}. \label{eq:tanh2-diff-456} \end{align} Substitution into \eqref{eq:identity-tanh-pixt} gives \begin{align} \big\|\tanh(\pi_{t}x_{t}) \big\|_{2}^{2} &=n\int\tanh^{2}\big(\alpha_{t} (\alpha_{t}+x )\big)\varphi(\mathrm{d}x)+O\bigg(\alpha_{t}nS_{t} + \frac{nS_t}{\alpha_t} \bigg) \notag\\ & =n\int\tanh \big(\alpha_{t} (\alpha_{t}+x)\big)\varphi(\mathrm{d}x) + O\left(\alpha_{t}^{2}n\bigg(\frac{S_{t}}{\alpha_{t}}+\frac{S_{t}}{\alpha_{t}^{3}}\bigg)\right) , \label{eqn:giovanni-2} \end{align} where the last line follows from \citet[Eq.~(B.4) in Appendix B.2]{deshpande2017asymptotic}. Finally, we justify that $\int\tanh \left(\alpha_{t}\left(\alpha_{t}+x\right)\right)\varphi(\mathrm{d}x) \asymp \alpha_t^2$. 
Towards this, we make the observation that \begin{align*} & \int\tanh(\alpha_{t}^{2}+\alpha_{t}x)\varphi(\mathrm{d}x)\\ & \qquad=\int_{0}^{2\alpha_{t}}\tanh(\alpha_{t}^{2}+\alpha_{t}x)\varphi(\mathrm{d}x)+\int_{\alpha_{t}}^{\infty}\left\{ \tanh\big(\alpha_{t}^{2}+\alpha_{t}(\alpha_{t}+z)\big)+\tanh\big(\alpha_{t}^{2}+\alpha_{t}(\alpha_{t}-z)\big)\right\} \varphi(\mathrm{d}x)\\ & \qquad\geq\int_{0}^{2\alpha_{t}}\tanh(\alpha_{t}^{2}+\alpha_{t}x)\varphi(\mathrm{d}x)\geq\int_{0}^{2\alpha_{t}}\tanh'(3\alpha_{t}^{2})(\alpha_{t}^{2}+\alpha_{t}x)\varphi(\mathrm{d}x)\\ & \qquad\geq\Big(1-\tanh^{2}(3\alpha_{t}^{2})\Big)\Big(2\alpha_{t}^{3}+\varphi(2\alpha_{t})2\alpha_{t}^{2}\Big)\asymp\alpha_{t}^{2}, \end{align*} where the first inequality follows since $\tanh\big(\alpha_{t}^{2}+\alpha_{t}(\alpha_{t}+z)\big)+\tanh\big(\alpha_{t}^{2}+\alpha_{t}(\alpha_{t}-z)\big)\geq 0$ for any $z\geq 0$, the second inequality holds since $\tanh(0)=0$ and $\tanh^{\prime}(w)$ is decreasing in $w$ for $w\geq 0$, and the last line uses $\tanh^{\prime}(w)=1-\tanh^2(w)$ and the induction hypothesis that $\alpha_t\lesssim 1$ (cf.~\eqref{eq:Z2-induction-alphat}). Additionally, it results from the Taylor expansion as well as the facts $\tanh(0)=0$ and $|\tanh^{\prime\prime}(w)|\leq1$ that \begin{align*} \int\tanh(\alpha_{t}^{2}+\alpha_{t}x)\varphi(\mathrm{d}x) & \leq\int(\alpha_{t}^{2}+\alpha_{t}x)\varphi(\mathrm{d}x)+\frac{1}{2}\int|\alpha_{t}^{2}+\alpha_{t}x|^{2}\varphi(\mathrm{d}x)\\ & \leq\int(\alpha_{t}^{2}+\alpha_{t}x)\varphi(\mathrm{d}x)+\int\alpha_{t}^{4}\varphi(\mathrm{d}x)+\int\alpha_{t}^{2}x^{2}\varphi(\mathrm{d}x) =2\alpha_{t}^{2} + \alpha_t^4. \end{align*} Consequently, we have justified that \begin{equation} \int\tanh(\alpha_{t}^{2}+\alpha_{t}x)\varphi(\mathrm{d}x)\asymp \alpha_t^2 \label{eq:int-alpha2-tanh-123} \end{equation} given that $\alpha_t\lesssim 1$ (cf.~\eqref{eq:Z2-induction-alphat}). Combining this with~\eqref{eqn:giovanni-2} and the induction hypothesis that $\sqrt{\lambda - 1} \lesssim \alpha_t \lesssim 1$ (cf.~\eqref{eq:Z2-induction-alphat}), we reach \begin{align} \label{eqn:tanh-basic-alpha} \big\|\tanh(\pi_{t}x_{t}) \big\|_{2}^{2} & = \left(1+ O\bigg(\frac{S_{t}}{\alpha_{t}}+\frac{S_{t}}{\alpha_{t}^{3}}\bigg)\right) n\int\tanh\big(\alpha_{t}(\alpha_{t}+x)\big)\varphi(\mathrm{d}x)\asymp n\alpha_{t}^{2} , \end{align} provided that $\frac{S_{t}}{\alpha_{t}}+\frac{S_{t}}{\alpha_{t}^{3}} \ll 1$. This concludes the proof of Lemma~\ref{lem:pit-gammat}. \subsection{Controlling several key quantities $A_t,B_t,D_t$} \label{sec:z2-key} By virtue of Theorem~\ref{thm:main} or Corollary~\ref{cor:recursion-spectral}, the behavior of $\alpha_{t+1}$ is governed by a couple of key quantities as defined in Assumptions~\ref{assump:A-H-eta} (except that those sums w.r.t.~$\sum_{k=1}^{t-1}$ there should be replaced with $\sum_{k=-2s}^{t-1}$ to account for spectral initialization). Several immediate remarks are in order. \begin{itemize} \item As alluded to previously, there is no need to bound $\Delta_{\beta,t}$ given that $\|\beta_t\|_2$ is fixed. As a result, there is no need to control $C_t, F_{t}$ and $G_t$. \item Given that the denoising function is smooth everywhere, we clearly have $E_t=0$. \end{itemize} With these remarks in mind, the proof of Theorem~\ref{thm:Z2} largely consists of identifying sufficiently small quantities $A_{t}, B_t, D_{t}$ such that \eqref{defi:A}, \eqref{defi:B} and \eqref{defi:D} are satisfied with high probability, which forms the main content of this subsection. 
The analysis in this subsection operates under the induction hypotheses \eqref{Z2-induction}. \subsubsection{Quantity $A_t$ in \eqref{defi:A}} \label{sec:control-A-Z2} Recall that this part is concerned with bounding the following quantity \begin{align} \Big\langle \sum_{k = -2s}^{t-1} \mu^k_t \phi_k, \eta_{t}(v_t)\Big\rangle - \left\langle\eta_t^{\prime}(v_t) \right\rangle \sum_{k = -2s}^{t-1} \mu^k_t\beta_{t-1}^k ; \label{eq:target-An-expression} \end{align} note that the summation starts from $k=-2s$ in order to take into account spectral initialization. In order to analyze this quantity, we introduce \begin{equation} \Phi \coloneqq \sqrt{n}(\phi_{-2s},\ldots,\phi_{t-1}),\qquad \theta \coloneqq (\mu,\alpha,\beta, \pi, \gamma) \in \mathcal{S}^{2s+t-1} \times \ensuremath{\mathbb{R}} \times \mathcal{S}^{2s+t-1} \times \ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{R}}, \label{eq:defn-Phi-theta-Z2-A} \end{equation} and define the following function \begin{align*} f_\theta(\Phi) \coloneqq \Big\langle \sum_{k = -2s}^{t-1} \mu^{k} \phi_k, \eta \Big(\alpha v^\star + \sum_{k=-2s}^{t-1} \beta^{k} \phi_k\Big)\Big\rangle - \Big\langle\eta^{\prime}\Big(\alpha v^\star + \sum_{k=-2s}^{t-1} \beta^{k} \phi_k\Big) \Big\rangle \sum_{k = -2s}^{t-1} \mu^k\beta^{k}, \end{align*} where \begin{equation} \eta (w) \coloneqq \gamma \tanh(\pi w) ; \label{eq:defn-eta-Z2-A} \end{equation} here, we suppress the dependency on $\alpha$, $\pi$ and $\gamma$ in the notation $\eta(\cdot)$ for simplicity. Clearly, the quantity \eqref{eq:target-An-expression} can be expressed as $f_\theta(\Phi)$ with $\alpha=\alpha_t$, $\mu = \mu_t, \beta = \beta_{t-1}, \pi = \pi_t$ and $\gamma = \gamma_t$; these parameters, however, are statistically dependent on $\Phi$. As a result, we resort to Lemma~\ref{lem:Gauss} in order to obtain a uniform control over all parameters within a suitable region \begin{equation} \Theta \coloneqq \Big\{ \theta = (\mu,\alpha,\beta,\pi,\gamma) \mid \|\mu \|_2 = \|\beta \|_2 = 1, \sqrt{\lambda - 1} \lesssim \alpha \lesssim 1, \pi \asymp \gamma^{-1} \asymp \alpha \sqrt{n} \Big\}. \label{eq:parameter-set-Z2-A} \end{equation} It follows immediately from the calculation in Section~\ref{sec:property-eta-z2} that, for any $\theta \in \Theta$ and any $x\in \ensuremath{\mathbb{R}}$, \begin{gather} \label{eqn:chocolate-At-Z2} \begin{aligned} &|\eta(x)| \lesssim \frac{1}{\alpha \sqrt{n}} , \qquad |\eta^{\prime} (x) | \lesssim 1 , \qquad |\eta^{\prime\prime} (x)| \lesssim \alpha \sqrt{n} , \qquad && |\eta^{\prime\prime\prime} (x)| \lesssim \alpha ^2n . \end{aligned} \end{gather} Clearly, we can see that (i) $\|\theta\|_2\lesssim \sqrt{n}$ for any $\theta \in \Theta$, (ii) $\|\nabla_{\theta} f_{\theta} (Z) \|_2 \lesssim n^{100}$ for any $Z\in \mathcal{E}$ (see \eqref{eqn:eps-interset}), and (iii) $|f_{\theta}(\Phi)|\lesssim n^{100}\big(\max_{k}\|\phi_{k}\|_{2}\big)^{100}$. Then according to \eqref{eqn:brahms-conc}, it would be natural to invoke Corollary~\ref{cor:Gauss} to obtain uniform control of $f_{\theta}(\Phi)$. The main step then boils down to bounding $\nabla_{\Phi} f_{\theta}(\Phi)$, which we accomplish in what follows. 
Letting $v \coloneqq \alpha v^\star + \sum_{k=-2s}^{t-1} \beta^{k} \phi_k$ for notational simplicity, we can directly bound the derivative of $f$ w.r.t.~$\Phi$ as follows: \begin{align*} \big\|\nabla_{\Phi}f_{\theta}(\Phi)\big\|_{2} & \le\frac{\|\mu\|_{2}}{\sqrt{n}}\left\|\eta_{t}(v)\right\|_{2}+\frac{\|\beta\|_{2}}{\sqrt{n}}\Big\|\sum_{k=-2s}^{t-1}\mu^{k}\phi_{k}\circ\eta^{\prime}(v)\Big\|_{2}+\bigg(\frac{\|\beta\|_{2}}{n\sqrt{n}}\left\|\eta^{\prime\prime}(v)\right\|_{2}\bigg)\big(\|\mu\|_{2}\|\beta\|_{2}\big)\\ & = \frac{1}{\sqrt{n}}\left\|\eta_{t}(v)\right\|_{2}+\frac{1}{\sqrt{n}}\Big\|\sum_{k=-2s}^{t-1}\mu^{k}\phi_{k}\circ\eta^{\prime}(v)\Big\|_{2}+\frac{1}{n\sqrt{n}}\left\|\eta^{\prime\prime}(v)\right\|_{2}\\ & \lesssim\frac{1}{\alpha\sqrt{n}} , \end{align*} where the first inequality follows from \eqref{mu-phi} and~\eqref{eta-phi}, the second line relies on the condition $\|\mu\|_2 = \|\beta\|_2=1$, and the last inequality makes use of~\eqref{eqn:chocolate}, the condition $\sqrt{\lambda - 1}\lesssim \alpha\lesssim 1$, and the fact that $\big\| \sum_{k=-2s}^{t-1}\mu^{k}\phi_{k} \big\|_2 \lesssim 1$ (see the second pair of curly brackets in \eqref{eq:eps-set}). Applying Corollary~\ref{cor:Gauss} then gives \begin{align} \label{eqn:moonlight} \sup_{\theta \in \Theta} \big| \alpha f_\theta(\Phi) - \alpha \mathbb{E} [f_\theta(\Phi)] \big| \lesssim \sqrt{\frac{(t+s)\log n}{n}} \end{align} with probability at least $1-O(n^{-11})$. In addition, we observe that: for any fixed parameter $\theta$, Stein's lemma reveals that \begin{align*} \mathbb{E} \big[f_\theta(\Phi)\big] = \mathbb{E} \left[\Big\langle \sum_{k = -2s}^{t-1} \mu^{k} \phi_k, \eta_{t}\Big(\alpha v^\star + \sum_{k=-2s}^{t-1} \beta^{k} \phi_k\Big)\Big\rangle - \Big\langle\eta_t^{\prime}\Big(\alpha v^\star + \sum_{k=-2s}^{t-1} \beta^{k} \phi_k\Big) \Big\rangle \sum_{k = -2s}^{t-1} \mu^k\beta^{k}\right] = 0, \end{align*} which together with \eqref{eqn:moonlight} gives \begin{align} \label{eqn:moonlight-final} \sup_{\theta \in \Theta} \Big\{ \alpha\, \big|f_\theta(\Phi) \big| \Big\} \lesssim \sqrt{\frac{(t+s)\log n}{n}} \end{align} with probability at least $1-O(n^{-11})$. Consequently, it is sufficient to take \begin{align} \label{eqn:Z2-At} A_t \asymp \frac{1}{\alpha_t}\sqrt{\frac{(t+s)\log n}{n}}. \end{align} \subsubsection{Quantity $B_t$ in \eqref{defi:B}} Regarding the quantity $B_t$, we need to examine the following function \begin{align*} f_{\theta}(\Phi) \coloneqq v^{\star \top}\eta(v), \qquad \text{with } v \coloneqq \alpha v^\star + \sum_{k = -2s}^{t-1} \beta^k\phi_k, \end{align*} where $\Phi$ and $\theta$ are defined in \eqref{eq:defn-Phi-theta-Z2-A}, and $\eta(\cdot)$ is defined in \eqref{eq:defn-eta-Z2-A}. Clearly, the target quantity on the left-hand side of \eqref{defi:B} can be viewed as $f_{\theta}(\Phi)$ with $\alpha = \alpha_t, \beta = \beta_{t-1}, \mu = \mu_t, \pi = \pi_t$ and $\gamma = \gamma_t$. When it comes to the convex set $\mathcal{E}$ (cf.~\eqref{eqn:eps-interset}) and the parameter set $\Theta$ (cf.~\eqref{eq:parameter-set-Z2-A}), it is straightforward to verify that $\|\nabla_{\theta} f_{\theta}(Z) \|_2\lesssim n^{100}$ for any $Z\in \mathcal{E}$ and $| f_{\theta}(Z) | \lesssim n^{100} \big(\|Z \|_{\mathrm{F}} \big)^{100}$. Therefore, in view of \eqref{eqn:brahms-conc}, we shall resort to Corollary~\ref{cor:Gauss} to obtain uniform control of $f_{\theta}(\Phi)$ over all $\theta \in \Theta$. 
Invoking inequality \eqref{eta-phi} yields \begin{align*} \left\|\nabla_{\Phi} f_{\theta}(\Phi)\right\|_2 &\le \frac{\|\beta\|_2}{\sqrt{n}}\left\|v^\star \circ \eta^{\prime}(v)\right\|_2 \lesssim \frac{1}{\sqrt{n}}, \end{align*} where the last inequality holds since, according to~\eqref{eqn:chocolate-At-Z2}, \begin{align} \|\eta^{\prime}(v) \circ v^\star\|_2 &\lesssim \|v^\star\|_2 = 1. \end{align} Apply Corollary~\ref{cor:Gauss} to arrive at \begin{align*} \sup_{\theta \in \Theta} \left|v^{\star \top}\eta(v) - \mathbb{E}[v^{\star \top}\eta(v)]\right| &\lesssim \sqrt{\frac{(t+s)\log n}{n}} \end{align*} with probability at least $1-O(n^{-11})$. In addition, for every given $\theta$ we have \begin{align*} \mathbb{E}[v^{\star \top}\eta(v)] &= \mathbb{E} \left[v^{\star \top} \eta\Big(\alpha v^\star + \sum_{k = -2s}^{t-1} \beta^k\phi_k\Big)\right] % = v^{\star \top}\int\eta \left(\alpha v^\star + \frac{1}{\sqrt{n}}x\right)\varphi_n(\mathrm{d} x), % \end{align*} where $\varphi_n(\cdot)$ is the CDF of $\mathcal{N}(0,I_n)$. Therefore, in view of the definition \eqref{defi:B}, it is sufficient to set \begin{align} \label{eqn:Z2-Bt} B_t \asymp \sqrt{\frac{(t+s)\log n}{n}}. \end{align} \subsubsection{Quantity $D_t$ in \eqref{defi:D}} With regards to quantity $D_t$, we aim to justify that \begin{align} \Big\|\sum_{k = -2s}^{t-1} \mu_t^k\phi_k \circ \eta_{t}^{\prime}(v_t) - \frac{1}{n}\sum_{k = -2s}^{t-1} \mu_t^k\beta_{t-1}^k\eta_{t}^{\prime\prime}(v_t)\Big\|_2^2 - \kappa_t^2 \lesssim D_t \asymp \sqrt{\frac{(t+s)\log^2 n}{n}} . \label{eqn:Z2-Dt} \end{align} In order to prove this, let us introduce the following function \begin{align*} &f_\theta(\Phi) \coloneqq \Big\|\sum_{k = -2s}^{t-1} \mu^k\phi_k \circ \eta^{\prime}(v) - \frac{1}{n}\sum_{k = -2s}^{t-1} \mu^k\beta^k\eta^{\prime\prime}(v)\Big\|_2^2 - \kappa^2,\\ % &\qquad\qquad \text{with } v \coloneqq \alpha v^\star + \sum_{k = -2s}^{t-1} \beta^k\phi_k; \end{align*} here, $\Phi$ and $\theta$ are defined in \eqref{eq:defn-Phi-theta-Z2-A}, $\eta(\cdot)$ is defined in \eqref{eq:defn-eta-Z2-A}, whereas $\kappa$ is defined such that \begin{align} & \notag\kappa^{2}\coloneqq\max\Bigg\{\Bigg\langle\int\Big[x\eta^{\prime}\Big(\alpha v^\star+\frac{1}{\sqrt{n}}x\Big)-\frac{\ltwo{\beta}}{\sqrt{n}}\eta^{\prime\prime}\Big(\alpha v^\star+\frac{1}{\sqrt{n}}x\Big)\Big]^{2}\varphi_{n}(\mathrm{d} x)\Bigg\rangle,~\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad \bigg\langle\int\Big[\eta^{\prime}\Big(\alpha v^\star+\frac{1}{\sqrt{n}}x\Big)\Big]^{2}\varphi_{n}(\mathrm{d} x)\bigg\rangle\Bigg\} . \label{defi:kappa-Z2-D} \end{align} We shall also introduce the set $\mathcal{E}$ (resp.~$\Theta$) as in \eqref{eqn:eps-interset} (resp.~\eqref{eq:parameter-set-Z2-A}). Once again, it is easily seen that $\|\nabla_{\theta} f_{\theta}(Z) \|_2\lesssim n^{100}$ holds for any $Z\in \mathcal{E}$ and $| f_{\theta}(Z) | \lesssim n^{100} \big(\|Z \|_{\mathrm{F}} \big)^{100}$. In light of \eqref{eqn:brahms-conc}, it is natural to apply Corollary~\ref{cor:Gauss} to obtain uniform control of $f_\theta(\Phi)$ over all $\theta \in \Theta$, which we detail as follows. 
To begin with, we can take the derivative and use $\|\mu\|_2=\|\beta\|_2=1$ to obtain \begin{align} \label{eqn:dt-mighty-five} \notag &\left\|\nabla_{\Phi} f_\theta(\Phi)\right\|_2 \\ \notag &\lesssim \frac{2}{\sqrt{n}}\left\|\sum_{k = -2s}^{t-1} \mu^k\phi_k \circ \eta^{\prime}(v) \circ \eta^{\prime}(v)\right\|_2 + \frac{2}{\sqrt{n}}\left\|\sum_{k = -2s}^{t-1} \mu^k\phi_k \circ \sum_{k = -2s}^{t-1} \mu^k\phi_k \circ \eta^{\prime}(v) \circ \eta^{\prime\prime}(v)\right\|_2 + \frac{2}{n^2\sqrt{n}}\left\|\eta^{\prime\prime}(v) \circ \eta^{\prime\prime\prime}(v) \right\|_2 \\ &\qquad + \frac{2}{n\sqrt{n}}\left\|\eta^{\prime}(v) \circ \eta^{\prime\prime}(v)\right\|_2 + \frac{2}{n\sqrt{n}}\left\|\sum_{k = -2s}^{t-1} \mu^k\phi_k \circ \eta^{\prime\prime}(v) \circ \eta^{\prime\prime}(v) \right\|_2 + \frac{2}{n\sqrt{n}}\left\|\sum_{k = -2s}^{t-1} \mu^k\phi_k \circ \eta^{\prime}(v) \circ \eta^{\prime\prime\prime}(v) \right\|_2, \end{align} as a consequence of \eqref{mu-phi} and~\eqref{eta-phi}. With this in place, we can further deduce that \begin{align} \label{eqn:scriabin-mys} \left\|\nabla_{\Phi} f_\theta(\Phi)\right\|_2 &\lesssim \sqrt{\frac{\log n}{n}}, \end{align} whose proof is deferred to the end of this subsection. Applying Corollary~\ref{cor:Gauss} then reveals that: with probability at least $1-O(n^{-11})$, \begin{align} &\sup_{\theta\in \Theta} \Big\{f_\theta(\Phi) - \ensuremath{\mathbb{E}}[f_\theta(\Phi)]\Big\} \lesssim \sqrt{\frac{(t+s)\log^2 n}{n}}. \label{eqn:Dt-concentration} \end{align} Taking $\theta = (\mu_t, \alpha_t,\beta_{t-1},\pi_t,\gamma_t)$ in the above inequality~\eqref{eqn:Dt-concentration} and making use of the following observation \begin{align*} \Big\|\sum_{k = -2s}^{t-1} \mu_t^k\phi_k \circ \eta_{t}^{\prime}(v_t) - \frac{1}{n}\sum_{k = -2s}^{t-1} \mu_t^k\beta_{t-1}^k\eta_{t}^{\prime\prime}(v_t)\Big\|_2^2 - \kappa_t^2 - \sup_{\theta\in \Theta} \ensuremath{\mathbb{E}}[f_\theta(\Phi)] \leq \sup_{\theta\in \Theta} \Big\{f_\theta(\Phi) - \ensuremath{\mathbb{E}}[f_\theta(\Phi)]\Big\} , \end{align*} we arrive at \begin{align*} \Big\|\sum_{k = -2s}^{t-1} \mu_t^k\phi_k \circ \eta_{t}^{\prime}(v_t) - \frac{1}{n}\sum_{k = -2s}^{t-1} \mu_t^k\beta_{t-1}^k\eta_{t}^{\prime\prime}(v_t)\Big\|_2^2 - \kappa_t^2 - \sup_{\theta\in \Theta} \ensuremath{\mathbb{E}}[f_\theta(\Phi)] \lesssim \sqrt{\frac{(t+s)\log^2 n}{n}}. \end{align*} In order to conclude the proof of \eqref{eqn:Z2-Dt}, it suffices to show that for every $\theta\in \Theta$, one has $\ensuremath{\mathbb{E}} \left[f_{\theta}(\Phi)\right] \le 0$. To see this, consider any fixed $\theta$, and use $\varrho$ to denote the angle between the two unit vectors $\mu$ and $\beta$ (so that $\cos\varrho = \inprod{\mu}{\beta}$). 
Hence, one can write \begin{align} \notag &\ensuremath{\mathbb{E}} \left[ \left\|\sum_{k = -2s}^{t-1} \mu^k\phi_k \circ \eta^{\prime}(v) - \frac{1}{n}\sum_{k = -2s}^{t-1} \mu^k\beta^k\eta^{\prime\prime}(v)\right\|_2^2 \right] \\ \notag &= \ensuremath{\mathbb{E}} _{X, Y \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_n)}\left[\left\|\frac{1}{\sqrt{n}}(X\cos\varrho + Y\sin\varrho) \circ \eta^{\prime}\left(\alpha v^{\star} + \frac{1}{\sqrt{n}}X\right) - \frac{\cos\varrho}{n}\eta^{\prime\prime}\left(\alpha v^{\star} + \frac{1}{\sqrt{n}}X\right)\right\|_2^2 \right] \\ \notag &= \cos^2\varrho\cdot\ensuremath{\mathbb{E}} _{X \sim \mathcal{N}(0, I_n)}\left[\left\|\frac{1}{\sqrt{n}}X \circ \eta^{\prime}\left(\alpha v^{\star} + \frac{1}{\sqrt{n}}X\right) - \frac{1}{n}\eta^{\prime\prime}\left(\alpha v^{\star} + \frac{1}{\sqrt{n}}X\right)\right\|_2^2 \right] \\ \notag &\qquad+ \sin^2\varrho\cdot\ensuremath{\mathbb{E}} _{X \sim \mathcal{N}(0, I_n)}\left[\left\|\frac{1}{\sqrt{n}}\eta^{\prime}\left(\alpha v^{\star} + \frac{1}{\sqrt{n}}X\right)\right\|_2^2 \right] \\ &\le \kappa^2, \label{eqn:beethoven-vive} \end{align} where the last line follows directly from the definition of $\kappa.$ This in turn implies that $\ensuremath{\mathbb{E}} \left[f_{\theta}(\Phi)\right] \le 0$. Putting the above pieces together justifies the desired inequality~\eqref{eqn:Z2-Dt}, provided that \eqref{eqn:scriabin-mys} is valid. \paragraph{Proof of inequality~\eqref{eqn:scriabin-mys}.} In the sequel, let us first carry out the calculations for the dominant term --- namely, the second term of expression~\eqref{eqn:dt-mighty-five}. Note that the $(t+s)$-th largest entry (in magnitude) of $\sum_{k = -2s}^{t-1} \mu^k\phi_k$ obeys \begin{subequations} \label{eqn:muphi-rank} \begin{align} (t+s) \left| \sum_{k = -2s}^{t-1} \mu^k\phi_k\right|_{(t+1)}^2 \leq \sum_{i=1}^{t+s} \left| \sum_{k = -2s}^{t-1} \mu^k\phi_k\right| _{(i)}^2 \lesssim \frac{(t+s)\log n}{n}, \end{align} which follows from the definition of the event $\mathcal{E}$ (cf.~\eqref{eqn:eps-interset}). This implies that \begin{align} \left| \sum_{k = -2s}^{t-1} \mu^k\phi_k\right|_{(t+s)} \leq C_7\sqrt{ \frac{\log n}{n} } \end{align} \end{subequations} for some large enough constant $C_7>0$. By virtue of \eqref{eqn:chocolate-At-Z2}, it holds that \begin{align*} &\bigg\|\sum_{k = -2s}^{t-1} \mu^k\phi_k \circ \sum_{k = -2s}^{t-1} \mu^k\phi_k \circ \eta^{\prime}(v) \circ \eta^{\prime\prime}(v)\bigg\|_2 \lesssim \alpha\sqrt{n}\,\bigg\|\sum_{k = -2s}^{t-1} \mu^k\phi_k \circ \sum_{k = -2s}^{t-1} \mu^k\phi_k\bigg\|_2 \\ &\qquad \lesssim \sqrt{n}\,\bigg\|\sum_{k = -2s}^{t-1} \mu^k\phi_k\bigg\|_{\infty}\bigg( \sum_{i=1}^t\Big|\sum_{k = -2s}^{t-1} \mu^k\phi_k\Big|_{(i)}^2 \bigg)^{1/2} +\sqrt{n} \left|\sum_{k = -2s}^{t-1} \mu^k\phi_k\right|_{(t+1)}\bigg\|\sum_{k = -2s}^{t-1} \mu^k\phi_k\bigg\|_2 \\ &\qquad \lesssim \sqrt{n} \sqrt{\frac{(t+s)\log n}{n}} \cdot \sqrt{\frac{(t+s)\log n}{n}} + \sqrt{n} \sqrt{\frac{\log n}{n}} \bigg( 1 + \sqrt{\frac{(t+s)\log n}{n}} \bigg)\\ &\qquad \lesssim \sqrt{\log n}. \end{align*} This leads to the desired bound for the second term of \eqref{eqn:dt-mighty-five}. The other terms can be bounded in a similar manner, which we omit here for brevity. \subsubsection{Quantity $E_t$ in \eqref{defi:E}} We now turn attention to quantity $E_t$, which requires us to work with non-differentiable points. Note that the denoising function $\eta_{t}$ is non-differentiable at only two points: $-\tau_{t}$ and $\tau_{t}$. 
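For concreteness, recall that the denoising functions employed in this section are built from the soft-thresholding map $\mathsf{ST}_{\tau_t}$ (see the expressions involving $\mathsf{ST}_{\tau_t}$ later in this section), whose kinks at $\pm\tau_t$ are exactly the two problematic points. A minimal numerical sketch of this map and its almost-everywhere derivative is included below for reference; the scaling factor $\gamma_t$ is omitted and the code is illustrative only.
\begin{verbatim}
import numpy as np

def soft_threshold(x, tau):
    """ST_tau(x) = sign(x) * max(|x| - tau, 0), applied entrywise."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def soft_threshold_derivative(x, tau):
    """Almost-everywhere derivative of ST_tau: 1 on {|x| > tau}, 0 on {|x| < tau}."""
    return (np.abs(x) > tau).astype(float)

grid = np.linspace(-2.0, 2.0, 9)
print(soft_threshold(grid, 1.0))             # kinks at x = -1 and x = +1
print(soft_threshold_derivative(grid, 1.0))  # jumps exactly at the two kinks
\end{verbatim}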
The goal of this subsection is to prove that: with probability at least $1-O(n^{-11})$, \begin{align} \label{eqn:sparse-Et} \sum_{m\in\{\tau_{t},-\tau_{t}\}}\sum_{i=1}^{n}\ind\bigg(\bigg|\alpha_{t}v_{i}^{\star}+\sum_{j=1}^{t-1}\beta_{t-1}^{j}\phi_{j,i}-m\bigg|\le\theta(m)\bigg)\lesssim k+t\log^3 n+n\|\xi_{t-1}\|_{2}^{2}\eqqcolon E_{t} \end{align} holds for any choice $\theta(m)$ satisfying \begin{align*} \sum_{m\in\{\tau_{t},-\tau_{t}\}}\sum_{i=1}^{n}\bigg|\alpha_{t}v_{i}^{\star}+\sum_{j=1}^{t-1}\beta_{t-1}^{j}\phi_{j,i}-m\bigg|^{2}\ind\bigg(\bigg|\alpha_{t}v_{i}^{\star}+\sum_{j=1}^{t-1}\beta_{t-1}^{j}\phi_{j,i}-m\bigg|\le\theta(m)\bigg)\le\|\xi_{t-1}\|_{2}^{2}. \end{align*} Towards this, let us adopt the definitions of $\Phi, \theta, \Theta, v$ in \eqref{eq:defn-Phi-Theta-theta-sparse} as before, and generate a Gaussian random variable $z \sim \mathcal{N}(0, 1/n)$. As shall be seen momentarily, the following two relations hold true uniformly over all $\theta \in \Theta$ and all $\omega\in \ensuremath{\mathbb{R}}$ obeying $\omega\lesssim n$: \begin{subequations} \begin{align} \label{eqn:simon} \notag &\Bigg\|\bigg|\alpha v^{\star} + \sum_{j = 1}^{t-1} \beta^j\phi_j - \tau 1 \bigg|\circ \ind\bigg(\Big|\alpha v^{\star} + \sum_{j = 1}^{t-1} \beta^j\phi_j - \tau 1\Big| \le \omega 1\bigg)\Bigg\|_2^2 \\ &\quad \ge (n-k)\mathbb{E}\left[\left|z - \tau\right|^2\ind\left(\left|z - \tau\right| \le \omega\right)\right] - \frac{\log n}{n} \cdot O\left(\sqrt{(n-k)\mathbb{P}\left(\left|z - \tau\right| \le \omega\right)t\log n} + t\log n\right), \\ \label{eqn:garfunkel} & \bigg\|\ind\Big(\Big|\alpha v^{\star} + \sum_{j = 1}^{t-1} \beta^j\phi_j - \tau 1\Big| \le \omega 1\Big)\bigg\|_0 \lesssim k + (n-k) \mathbb{P}\left(\left|z - \tau\right| \le \omega\right) + t\log n. \end{align} \end{subequations} Taking these two inequalities~\eqref{eqn:simon} and \eqref{eqn:garfunkel} as given for the moment (which we shall return to prove shortly), we proceed to justify the following claim: for any point $\omega\in \ensuremath{\mathbb{R}}$ obeying $\omega \lesssim n$ and \begin{align} \Bigg\|\bigg|\alpha_t v^{\star} + \sum_{j = 1}^{t-1} \beta^j_{t-1} \phi_j - \tau_t 1\bigg| \circ \ind\bigg(\Big|\alpha_t v^{\star} + \sum_{j = 1}^{t-1} \beta^j_{t-1} \phi_j - \tau_t 1 \Big| \le \omega 1 \bigg)\Bigg\|_2^2 \leq \|\xi_{t-1}\|_2^2, \label{eqn:sparse-Et-condition-1} \end{align} one necessarily satisfies \begin{align} \label{eqn:sparse-Et-temp} \sum_{m\in\{\tau_{t},-\tau_{t}\}}\sum_{i=1}^{n}\ind\bigg(\bigg|\alpha_{t}v_{i}^{\star}+\sum_{j=1}^{t-1}\beta_{t-1}^{j}\phi_{j,i}-m\bigg|\le \omega \bigg)\lesssim k+t\log^3 n+n\|\xi_{t-1}\|_{2}^{2}. \end{align} If this were valid, then one could immediately establish~\eqref{eqn:sparse-Et}, thus completing the control of $E_{t}$. In what follows, let us prove this claim \eqref{eqn:sparse-Et-temp}. \begin{itemize} \item Suppose the point $\omega$ satisfies $\mathbb{P}\left(\left|z - \tau\right| \le \omega\right) \lesssim \frac{k+t\log^3 n}{n}$. Then in view of \eqref{eqn:garfunkel}, one has \begin{align*} \bigg\|\ind\Big(\Big|\alpha v^{\star} + \sum_{j = 1}^{t-1} \beta^j\phi_j - \tau\Big| \le \omega\Big)\bigg\|_0 \lesssim k + t\log^3 n, \end{align*} which holds uniformly over all $\theta \in \Theta$. If this is the case for our choice $(\alpha, \beta,\tau) = (\alpha_t, \beta_{t-1}, \pm \tau_t)$, then we have established inequality~\eqref{eqn:sparse-Et}. 
\item Consider now the complement case where $\omega$ satisfies \begin{equation} \mathbb{P}\left(\left|z - \tau\right| \leq \omega\right) \gg \frac{k+t\log^3 n}{n}. \label{eq:complement-z-omega-tau-sparse} \end{equation} We first make note of the fact that $\omega$ needs to satisfy $\omega \geq \sqrt{8/n}$ in this case; otherwise one must have $$ \mathbb{P}\left(\left|z - \tau\right| \le \omega\right) =\mathbb{P}\left( \tau - \omega \le z \le \tau + \omega\right) \le \mathbb{P}\left(z \ge \tau - \omega\right) \le \mathbb{P}\left(z \geq \sqrt{\frac{2\log n}{n}}\right) \le \frac{k+t\log^3 n}{n}, $$ which belongs to the previous case. Based on this simple observation, direct calculations lead to \begin{align*} & \mathbb{E}\left[\left|z-\tau\right|^{2}\ind\left(\left|z-\tau\right|\le\omega\right)\right] =\mathbb{E}\left[\left|z-\tau\right|^{2}\,\big|\,\left|z-\tau\right|\le\omega\right]\mathbb{P}\left(\left|z-\tau\right|\le\omega\right)\\ & \qquad \geq\mathbb{E}\left[\left|z-\tau\right|^{2}\ind\{z\in[\tau-\omega,\tau-\omega/2]\}\,\big|\,\left|z-\tau\right|\le\omega\right]\mathbb{P}\left(\left|z-\tau\right|\le\omega\right)\\ & \qquad \geq \bigg(\frac{\omega}{2}\bigg)^{2}\mathbb{P}\Big( z\in[\tau-\omega,\tau-\omega/2]\,\big|\,\left|z-\tau\right|\le\omega\Big)\mathbb{P}\left(\left|z-\tau\right|\le\omega\right) \\ & \qquad \geq \frac{\omega^2}{8} \mathbb{P}\left(\left|z-\tau\right|\le\omega\right) \geq \frac{1}{n}\mathbb{P}\left(\left|z-\tau\right|\le\omega\right), \end{align*} which combined with \eqref{eqn:simon} and \eqref{eq:complement-z-omega-tau-sparse} gives \begin{align*} & \bigg\|\Big|\alpha v^{\star}+\sum_{j=1}^{t-1}\beta^{j}\phi_{j}-\tau 1\Big| \circ \ind\Big(\Big|\alpha v^{\star}+\sum_{j=1}^{t-1}\beta^{j}\phi_{j}-\tau1\Big|\le\omega1\Big)\bigg\|_{2}^{2}\\ & \qquad \geq(n-k)\mathbb{E}\left[\left|z-\tau\right|^{2}\ind\left(\left|z-\tau1\right|\le\omega\right)\right]-\frac{\log n}{n}\cdot O\left(\sqrt{(n-k)\mathbb{P}\left(\left|z-\tau\right|\le\omega\right)t\log n}+t\log n\right)\\ & \qquad \geq\mathbb{P}\left(\left|z-\tau\right|\le\omega\right)-\frac{\log n}{n}\cdot O\left(\sqrt{(n-k)\mathbb{P}\left(\left|z-\tau\right|\le\omega\right)t\log n}+t\log n\right)\gtrsim\mathbb{P}\left(\left|z-\tau\right|\le\omega\right). \end{align*} Setting $\alpha=\alpha_t$, $\beta=\beta_{t-1}$ and $\tau= \pm \tau_t$ and utilizing \eqref{eqn:sparse-Et-condition-1}, we arrive at \[ \big\| \xi_{t-1}\big\|_2^2 \gtrsim \mathbb{P}\left(\left|z-\tau_t\right|\le\omega\right). \] Taking this and expression~\eqref{eqn:garfunkel} collectively yields our advertised bound \eqref{eqn:sparse-Et}, given that $\theta(m)$ is trivially below $n$ with high probability. \end{itemize} With the above arguments in mind, everything comes down to establishing the inequalities~\eqref{eqn:simon} and \eqref{eqn:garfunkel}, which shall be done in the following. 
\paragraph{Proof of inequality~\eqref{eqn:simon}.} First, given any $\theta\in \Theta$ and any $\omega \lesssim n$, we find it useful to develop the following lower bound: \begin{align} \label{eqn:simple-lb} \notag &\Bigg\|\Big|\alpha_tv^{\star} + \sum_{j = 1}^{t-1} \beta^j\phi_j - \tau 1\Big|\ind\bigg(\Big|\alpha_tv^{\star} + \sum_{j = 1}^{t-1} \beta^j\phi_j - \tau 1\Big| \le \omega 1\bigg)\Bigg\|_2^2 \\ \notag &\qquad \qquad = \sum_{i = 1}^n \Big|\alpha v_i^{\star} + \sum_{j = 1}^{t-1} \beta^j\phi_{j, i} - \tau \Big|^2 \ind\Big(\Big|\alpha v_i^{\star} + \sum_{j = 1}^{t-1} \beta^j\phi_{j, i} - \tau \Big| \le \omega\Big) \\ &\qquad \qquad \ge \sum_{i :\, v_i^{\star} = 0} \Big|\sum_{j = 1}^{t-1} \beta^j\phi_{j, i} - \tau \Big|^2\ind\Big(\Big|\sum_{j = 1}^{t-1} \beta^j\phi_{j, i} - \tau \Big| \le \omega\Big). \end{align} Next, we aim to further bound the right-hand side of \eqref{eqn:simple-lb} by means of Corollary~\ref{cor:Gauss-jump}. Towards this, let us define the following functions: \begin{align*} f_{i, \theta}(\Phi_{i, :}) &\coloneqq \Big(\sum_{j = 1}^{t-1} \beta^j\phi_{j, i} - \tau\Big)^2, \qquad \text{ and } \qquad h_{i,\theta}(\Phi_{i, :}) \coloneqq \bigg| \sum_{j = 1}^{t-1} \beta^j\phi_{j, i} - \tau \bigg|. \end{align*} Note that for any fixed $\beta \in \mathcal{S}^{t-2}$, $\sum_{j = 1}^{t-1} \beta^j\phi_{j, i}$ follows a Gaussian distribution with variance $1/n$; therefore, $f_{i, \theta}(\Phi_{i, :})$ is a $\big(\tau^2 + \frac{1}{n}\big)$-subexponential random variable with mean $\mathbb{E}[f_{i, \theta}] = \tau^2 + \frac{1}{n}$. We make the observation that (i) $|f_{i, \theta}(\Phi_{i, :})|\lesssim n^{100} \max_j \|\phi_j\|_2^{100}$; (ii) $\|\nabla_{\theta} f_{i, \theta} (\Phi_{i, :})\|_2 \lesssim n^{100}$ for any $\Phi\in \mathcal{E}$; (iii) $\mathbb{P}\big( \tau - 400n^{-100} \leq h_{i,\theta}(\Phi_{i, :}) \leq \tau + 400n^{-100} \big) \lesssim 1/n$ for any $\tau\in \ensuremath{\mathbb{R}}$. Therefore, applying Corollary~\ref{cor:Gauss-jump} together with \eqref{eqn:brahms-conc} yields: with probability exceeding $1-O(n^{-11})$, \begin{align} \label{eqn:hawaii} \notag &\sum_{i : v_i^{\star} = 0} \Big|\sum_{j = 1}^{t-1} \beta^j\phi_{j, i} - \tau\Big|^2\ind\Big(\Big|\sum_{j = 1}^{t-1} \beta^j\phi_{j, i} - \tau\Big| \le \omega\Big) - (n-k)\mathbb{E}\left[\left|z - \tau\right|^2\ind\left(\left|z - \tau\right| \le \omega\right)\right] \\ &\qquad \qquad \ge - \frac{\log n}{n} \cdot O\left(\sqrt{(n-k)\mathbb{P}\left(\left|z - \tau\right| \le \omega\right)t\log n} + t\log n\right), \end{align} holds simultaneously for all $\theta \in \Theta$ and all $\omega \lesssim n$, where we denote $z\sim \mathcal{N}(0,1/n)$. Careful readers might already notice that: when applying Corollary~\ref{cor:Gauss-jump}, instead of considering the indicator function $\ind\left(h_{i, \theta}(X_i) > \tau\right)$ as in the original form, our result above is concerned with a different kind of indicator function $\ind\left( h_{i, \theta}(X_i) < \tau\right)$; fortunately, the proof of this version of indicator functions follows verbatim as that of Lemma~\ref{lem:Gauss-jump} and Corollary~\ref{cor:Gauss-jump}, and hence we omit the details here. Putting \eqref{eqn:simple-lb} and \eqref{eqn:hawaii} together, we have validated relation~\eqref{eqn:simon}. 
\paragraph{Proof of inequality~\eqref{eqn:garfunkel}.} Using exactly the same analysis as above and taking $f_{i, \theta}(\Phi_{i, :}) = 1$ and $h_{i,\theta}(\Phi_{i, :}) = \big| \sum_{j = 1}^{t-1} \beta^j\phi_j - \tau \big|$, Corollary~\ref{cor:Gauss-jump} ensures that \begin{align*} \bigg\|\ind\bigg(\Big|\alpha v^{\star} + \sum_{j = 1}^{t-1} \beta^j\phi_j - \tau\Big| \le \omega\bigg)\bigg\|_0 &\le k + \sum_{i :\, v_i^{\star} = 0} \ind\Big(\Big|\sum_{j = 1}^{t-1} \beta^j\phi_{j, i} - \tau\Big| \le \omega\Big) \\ &\lesssim k + \sqrt{(n-k) \mathbb{P}\left(\left|z - \tau\right| \le \omega\right)t\log n} + t\log n \\ &\lesssim k + (n-k) \mathbb{P}\left(\left|z - \tau\right| \le \omega\right) + t\log n \end{align*} holds simultaneously for all $\theta \in \Theta$ and all $\omega \lesssim n$. This completes the proof of \eqref{eqn:garfunkel}. \subsection{Establishing the induction hypotheses via recursion} \label{sec:recursion} The goal of this subsection is to finish the induction-based proof of \eqref{eqn:sparse-induction}. We shall start by establishing \eqref{eqn:sparse-induction} for the $(t+1)$-th iteration, assuming that it holds for the $t$-th iteration. We will then return to verify the base case for the two types of initialization methods. Before proceeding, we remind the readers of our assumptions: \begin{align} \label{eqn:don-giovani} \frac{t\log^3 n}{n\lambda^2} \ll 1,\qquad \frac{k\log n}{n\lambda^2} \ll 1, \end{align} and we shall always take $\tau_{t}$ to be on the order of $\sqrt{\frac{\log n}{n}}$ with some sufficiently large preconstant. In addition, \begin{equation} \label{eq:rho-sparse} |\eta_t^{\prime}(w)| \lesssim \frac{1}{\lambda} \eqqcolon \rho , \quad |\eta_t^{\prime\prime}(w)| = 0 \eqqcolon \rho_1, \quad |\eta_t^{\prime\prime\prime}(w)| = 0 \eqqcolon \rho_2 \qquad \text{for any differentiable point }w\in \ensuremath{\mathbb{R}}, \end{equation} where the calculation of $\rho$ has made use of \eqref{eqn:gamma-t-evolution}. \subsubsection{Inductive step for \eqref{eqn:sparse-induction} regarding $\xi_{t}$, $\Delta_{\alpha,t}$ and $\alpha_t$ } Assuming the induction hypotheses~\eqref{eqn:sparse-induction} hold at $t$, we intend to prove their validity for $t+1.$ \paragraph{Bounding $\xi_{t}$.} In terms of $\|\xi_{t}\|_2$, the result~\eqref{eqn:xi-t-general} of Theorem~\ref{thm:main} together with \eqref{eq:rho-sparse} and $\|\beta_t\|_2=1$ gives \begin{align*} \|\xi_{t}\|_2 &\le \sqrt{\kappa_t^2 + D_t} \,\|\xi_{t-1}\|_2 \\ &\qquad+ O\left(\sqrt{\frac{t\log n}{n}}\|\beta_{t}\|_2 + A_t + \left(\sqrt{\frac{t}{n}}\rho_1 + \frac{\rho_2\|\beta_{t-1}\|_2}{n}\right)\|\xi_{t-1}\|_2^2 + \rho\sqrt{\frac{E_t + t\log n}{n}}\|\xi_{t-1}\|_2 + \frac{\rho E_t\|\beta_{t-1}\|_2}{n}\right) \\ &\le \sqrt{\kappa_t^2 + D_t} \, \|\xi_{t-1}\|_2 + O\left(\sqrt{\frac{t\log n}{n}} + A_t + \sqrt{\frac{E_t + t\log n}{\lambda^2 n}}\|\xi_{t-1}\|_2 + \frac{ E_t}{\lambda n}\right). 
\end{align*} Making use of the bounds \eqref{eqn:sparse-kappa}, \eqref{eqn:sparse-At}, \eqref{eqn:sparse-dt} and \eqref{eqn:sparse-Et}, we can further derive \begin{align} \|\xi_{t}\|_2 &\le c \sqrt{\frac{k}{n\lambda^2} + \frac{\sqrt{t(t+k)}\log^2 n}{n\lambda^2}} \|\xi_{t-1}\|_2 + O\left(\sqrt{\frac{t\log n}{n}} + \frac{1}{\lambda}\sqrt{\frac{E_t + t\log n}{n}}\|\xi_{t-1}\|_2 + \frac{E_t}{n\lambda}\right) \notag\\ % &\le c^{\prime}\|\xi_{t-1}\|_2 + C_7\bigg(\sqrt{\frac{t\log n}{n}} + \sqrt{\frac{k+t \log^3 n}{n \lambda^2}}\|\xi_{t-1}\|_2 + \frac{1}{\lambda}\|\xi_{t-1}\|_2^2 + \frac{k+t\log^3 n}{n\lambda}\bigg) \label{eq:xi-recurse-sparse-1} \end{align} for some large enough constant $C_7>0$, where we take $$ c' \coloneqq c \sqrt{\frac{k}{n\lambda^2} + \frac{\sqrt{t(t+k)}\log^2 n}{n\lambda^2}} \ll 1 $$ under the assumptions \eqref{eqn:don-giovani}. Supposing that $$\ltwo{\xi_{t-1}} \leq C_8 \sqrt{\frac{(t-1)\log^{3}n+k}{n}}$$ for some constant $C_8>0$ large enough, we can invoke \eqref{eq:xi-recurse-sparse-1} to reach \begin{align} \|\xi_{t}\|_{2} & \le c^{\prime}C_{8}\sqrt{\frac{(t-1)\log^{3}n+k}{n}}\\ & \quad+C_{7}\bigg(\sqrt{\frac{t\log n}{n}}+C_{8}\sqrt{\frac{k+t\log^{3}n}{n\lambda^{2}}}\sqrt{\frac{(t-1)\log^{3}n+k}{n}}+\frac{C_{8}^{2}\big((t-1)\log^{3}n+k\big)}{\lambda n}+\frac{k+t\log^{3}n}{n\lambda}\bigg)\\ & \leq C_{8}\sqrt{\frac{t\log^{3}n+k}{n}} \end{align} where we have made use of the relation~\eqref{eqn:don-giovani} and the condition $c'\ll 1$. This in turn finishes the inductive step for bounding $\|\xi_{t}\|_2$. \paragraph{Bounding $\Delta_{\alpha,t}$.} A direct application of inequality~\eqref{eqn:delta-alpha-general} in Theorem~\ref{thm:main} together with \eqref{eq:rho-sparse} gives \begin{align*} |\Delta_{\alpha,t}| &\,\lesssim\, B_t + \left(\rho + \rho_1\|v^\star\|_{\infty}\|\xi_{t-1}\|_2 + \rho\bigg(\sum_{i=1}^{E_{t}}|v^\star|_{(i)}^{2}\bigg)^{1/2} \right)\|\xi_{t-1}\|_2 \\ & \lesssim\, B_t + \left(\rho + \frac{1}{\lambda}\bigg(\sum_{i=1}^{E_{t}}|v^\star|_{(i)}^{2}\bigg)^{1/2} \right)\|\xi_{t-1}\|_2. \end{align*} Replacing $B_{t}$ and $\rho$ with their corresponding bounds in \eqref{eqn:sparse-Bt} and \eqref{eq:rho-sparse}, and invoking \eqref{eqn:sparse-induction}, we arrive at \begin{align} |\Delta_{\alpha, t}| &\lesssim \sqrt{\frac{t\log n}{n\lambda^2}} + \Big(\frac{1}{\lambda} + \frac{1}{\lambda}\Big)\|\xi_{t-1}\|_2 \lesssim \sqrt{\frac{k + t\log^3 n}{n\lambda^2}}. \label{eq:Delta-t-UB-sparse} \end{align} \paragraph{Controlling $\alpha_t$.} Equipped with the control of $|\Delta_{\alpha, t}|$ in \eqref{eq:Delta-t-UB-sparse}, we can now prove that $\alpha_{t+1} \asymp \lambda$ for all $t \geq 1$; in fact, we intend to prove a stronger result, namely, if $\alpha_{t} \asymp \lambda$, \begin{align} \label{eqn:stable-alphat-sparse} \alpha_{t+1} = \lambda + O\bigg(\sqrt{\frac{k\log n + (t+1)\log^3 n}{n}}\bigg). 
\end{align} Taking the definition \eqref{eqn:alpha-t-genearl} of $\alpha_{t+1}$ collectively with expressions~\eqref{eqn:beethoven} and \eqref{eq:residual-sparse} as well as the property $\lambda \asymp \alpha_t \gtrsim \sqrt{\frac{(k+t)\log n}{n}}$ gives \begin{align} \alpha_{t+1} &= \lambda v^{\star \top} \int \gamma_t \mathsf{ST}_{\tau_t}\left(\alpha_{t} v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x) + \lambda\Delta_{\alpha,t} \notag\\ &= \frac{\lambda v^{\star \top} \int \mathsf{ST}_{\tau_t}\left(\alpha_{t} v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x)}{\sqrt{\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha_{t} v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)} + O\Big(\sqrt{\frac{t\log n}{n\lambda^{2}}}+ \|\xi_{t-1}\|_2 \Big)} + \lambda\Delta_{\alpha,t} \notag\\ &= \left( 1+ O\Bigg(\frac{1}{\lambda}\sqrt{\frac{t\log n}{n}}+\frac{\|\xi_{t-1}\|_{2}}{\lambda}\Bigg) \right) \frac{\lambda v^{\star \top} \int \mathsf{ST}_{\tau_t}\left(\alpha_{t} v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x)}{\sqrt{\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha_{t} v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)} } + O\Big(\sqrt{\frac{k + t\log^3 n}{n}}\Big). \label{eqn:sparse-alpha-recursion} \end{align} In order to bound \eqref{eqn:sparse-alpha-recursion}, we first observe that for every $\alpha$ obeying $ \sqrt{\frac{(k+t) \log n}{n}}\lesssim \alpha \lesssim 1$ and $\tau_{t}\asymp \sqrt{\frac{\log n}{n}}$, it holds that \begin{align*} \left\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right) - \alpha v^\star\right\|_2 &\le \left\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star \right) - \alpha v^\star\right\|_2 + \left\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right) - \mathsf{ST}_{\tau_t}\left(\alpha v^\star \right)\right\|_2 \\ % &\le \left\|\tau_t 1 \circ \ind\left(\left|v^\star\right| > 0 \right)\right\|_2 + \left\|\frac{x}{\sqrt{n}} \circ \left(\ind\left(\left|\alpha v^\star\right| > \tau_t 1 \right) + \ind\left(\left|\alpha v^\star + \frac{x}{\sqrt{n}}\right| > \tau_t 1 \right)\right)\right\|_2, % \end{align*} where the last step invokes relation $\big||\alpha v^\star + \frac{x}{\sqrt{n}}| - |\alpha v^\star|\big| \leq \frac{|x|}{\sqrt{n}}.$ In addition, note that \begin{align*} \ind\left(\left|\alpha v^\star\right| > \tau_t 1 \right) + \ind\left(\left|\alpha v^\star + \frac{x}{\sqrt{n}}\right| > \tau_t 1 \right) &\leq \ind\left(\left|\alpha v^\star\right| > \tau_t 1 \right) + \ind\left(\left|\alpha v^\star \right| > \frac{\tau_t}{2} 1 \right) + \ind\left(\left|\frac{x}{\sqrt{n}}\right| > \frac{\tau_t}{2} 1 \right) \\ &\leq 2\ind(|v^\star| > 0) + \ind\left(\frac{|x|}{\sqrt{n}} > \frac{\tau_t}{2} 1 \right). 
\end{align*} Taking the above two relations together yields \begin{align*} \left\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right) - \alpha v^\star\right\|_2 &\lesssim \left\|\left(\tau_t + \frac{|x|}{\sqrt{n}}\right) \circ \ind\left(\left|v^\star\right| > 0 \right)\right\|_2 + \left\|\frac{x}{\sqrt{n}} \circ \ind\left(\frac{|x|}{\sqrt{n}} > \frac{\tau_t}{2} 1 \right)\right\|_2, \end{align*} which further implies \begin{align} \label{eqn:cello-sonata} \notag \displaystyle \int \left\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right) - \alpha v^\star\right\|_2^2 \varphi_n(\mathrm{d} x) &\lesssim \displaystyle\int \sum_{i \in\{v^\star_i\neq 0\}} \left(\tau^2_t + \frac{x_i^2}{n}\right) \varphi(\mathrm{d} x) + \frac{1}{n}\sum_{i=1}^n 2\displaystyle\int^{\infty}_{2\sqrt{\log n}} x_i^2 \varphi(\mathrm{d} x)\\ % &\lesssim \frac{k\log n}{n}. \end{align} In words, relation~\eqref{eqn:cello-sonata} ensures that $\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)$ lies close to $ \alpha v^\star$. Next, we would like to employ the above relation to show that \begin{align} \label{eqn:stable-inner-sparse} \frac{ v^{\star \top} \displaystyle\int \mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x)}{\sqrt{\displaystyle\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)}} = 1 + O\Big(\frac{1}{\alpha}\sqrt{\frac{k\log n}{n}}\Big); \end{align} if this were true, then combining it with~\eqref{eqn:sparse-alpha-recursion}, \eqref{eq:residual-sparse} and $\alpha_t\asymp \lambda$ would justify the bound stated in \eqref{eqn:stable-alphat-sparse}. To prove~\eqref{eqn:stable-inner-sparse}, we find it helpful to consider the inner product between $\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)$ and $ \alpha v^\star$ as follows \begin{align} \label{eqn:inner-product-11} \notag &\alpha v^{\star \top} \int \mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x)\\ \notag &\quad = \displaystyle \int \Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|^2\varphi_n(\mathrm{d} x) + \displaystyle\int \Big(\alpha v^{\star} - \mathsf{ST}_{\tau_t}\Big(\alpha v^\star + \frac{x}{\sqrt{n}} \Big)\Big)^\top \mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x) \\ \notag &\quad = \displaystyle\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x) + O\left(\sqrt{\displaystyle\int \Big\|\alpha v^{\star} - \mathsf{ST}_{\tau_t}\Big(\alpha v^\star + \frac{x}{\sqrt{n}}\Big) \Big\|_2^2\varphi_n(x)} \sqrt{\displaystyle\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)}\right)\\ \notag &\quad = \displaystyle\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x) + O\Big( \alpha \sqrt{\frac{k\log n}{n}} + \frac{k\log n}{n} \Big) \\ &\quad = \displaystyle\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x) + O\Big( \alpha \sqrt{\frac{k\log n}{n}} \Big), \end{align} where the last line is valid since $\alpha \gtrsim \sqrt{\frac{k \log n}{n}}$, and the penultimate step uses inequality~\eqref{eqn:cello-sonata} and the following crude bound: \begin{align*} \displaystyle\int 
\Big\|\mathsf{ST}_{\tau_t}\Big(\alpha v^\star + \frac{x}{\sqrt{n}}\Big) \Big\|_2^2\varphi_n(\mathrm{d} x) \lesssim \displaystyle\int \Big\|\alpha v^{\star} - \mathsf{ST}_{\tau_t}\Big(\alpha v^\star + \frac{x}{\sqrt{n}}\Big) \Big\|_2^2\varphi_n(\mathrm{d} x) + \alpha^2 \leq \frac{k\log n}{n} + \alpha^2. \end{align*} Similarly, we can also write \begin{align} \label{eqn:big-ben} \notag &\alpha v^{\star \top} \int \mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x)\\ \notag &\quad = \displaystyle \int \Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_2^2\varphi_n(\mathrm{d} x) + \displaystyle\int \Big(\alpha v^{\star} - \mathsf{ST}_{\tau_t}\Big(\alpha v^\star + \frac{x}{\sqrt{n}} \Big)\Big)^\top \mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x) \\ \notag &\quad = \displaystyle\int \ltwo{\alpha v^\star}^2\varphi_{n}(\mathrm{d} x) + O\left(\sqrt{\displaystyle\int \Big\|\alpha v^{\star} - \mathsf{ST}_{\tau_t}\Big(\alpha v^\star + \frac{x}{\sqrt{n}}\Big) \Big\|_2^2\varphi_n(\mathrm{d} x)} \sqrt{\displaystyle\int \ltwo{\alpha v^\star}^2\varphi_{n}(\mathrm{d} x)}\right)\\ &\quad = \alpha^2 + O\Big(\alpha \sqrt{\frac{k\log n}{n}}\Big). \end{align} Putting the above two relations together and recalling that $\alpha\gtrsim \sqrt{\frac{k\log n}{n}}$ yields \begin{align} \label{eqn:st-is-bounded} \displaystyle\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x) = \alpha^2 + O\Big(\alpha \sqrt{\frac{k\log n}{n}}\Big) \asymp \alpha^2, \end{align} and hence \begin{align*} \displaystyle\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x) &= \alpha^2 \bigg(1+ O\bigg( \frac{1}{\alpha} \sqrt{\frac{k\log n}{n}}\bigg) \bigg);\\ \alpha v^{\star \top} \int \mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x) &= \alpha^2 \bigg(1+ O\bigg( \frac{1}{\alpha} \sqrt{\frac{k\log n}{n}}\bigg) \bigg). \end{align*} This implies \eqref{eqn:stable-inner-sparse}, thus completing the proof of \eqref{eqn:stable-alphat-sparse} and hence the inductive step. \bigskip\noindent It remains to verify the base case for \eqref{eqn:sparse-induction}, which is postponed to Section~\ref{sec:pf-initialization-sparse}. \subsubsection{Bounding $\alpha_{t+1}-\alpha_{t+1}^{\star}$} Another condition claimed in Theorem~\ref{thm:sparse} is the bound \eqref{eqn:se-alpha-sparse} on the difference between $\alpha_{t+1}$ and $\alpha_{t+1}^{\star}$, which we study in this subsection. In order to understand the dynamics of $\alpha_t$, it remains to understand the property of the function $f(\cdot)$ defined in \eqref{eqn:franck}.
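\medskip\noindent Before analyzing $f$ formally, it may help to visualize its behavior numerically. The following Python sketch estimates $f(\alpha) = \lambda\, v^{\star\top}\,\mathbb{E}\big[\mathsf{ST}_{\tau}(\alpha v^\star + g)\big]\,/\,\big(\mathbb{E}\big\|\mathsf{ST}_{\tau}(\alpha v^\star + g)\big\|_2^2\big)^{1/2}$ with $g\sim\mathcal{N}(0,\frac{1}{n}I_n)$ by Monte Carlo and iterates it to locate the fixed point $\alpha^\star$. All parameter values below ($n$, $k$, $\lambda$, the threshold constant, the number of Monte Carlo samples) are chosen purely for illustration and are not dictated by the theory.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, tau):
    # entrywise soft-thresholding ST_tau(x) = sign(x) * (|x| - tau)_+
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def f_map(alpha, v_star, lam, tau, n_mc=500):
    # Monte-Carlo estimate of
    #   f(alpha) = lam * <v*, E ST(alpha v* + g)> / sqrt(E ||ST(alpha v* + g)||^2)
    n = v_star.size
    num, den = 0.0, 0.0
    for _ in range(n_mc):
        g = rng.standard_normal(n) / np.sqrt(n)    # g ~ N(0, I_n / n)
        st = soft_threshold(alpha * v_star + g, tau)
        num += v_star @ st
        den += st @ st
    return lam * (num / n_mc) / np.sqrt(den / n_mc)

# illustrative (assumed) problem sizes
n, k, lam = 4000, 20, 1.0
tau = 2.0 * np.sqrt(np.log(n) / n)
v_star = np.zeros(n)
v_star[:k] = 1.0 / np.sqrt(k)                      # k-sparse unit vector

alpha = 0.5 * lam                                  # start inside [lambda/10, lambda]
for _ in range(8):
    alpha = f_map(alpha, v_star, lam, tau)
print(alpha, lam)        # alpha settles near a fixed point close to lam
\end{verbatim}
In this toy instance the iterates settle at a value close to $\lambda$, consistent with \eqref{eq:f-alpha-property} below; the formal argument, starting from the derivative of $f$, is given next.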
A little algebra yields \begin{align*} \frac{\mathrm{d} f(\alpha)}{\partial {\alpha}} = \frac{\lambda v^{\star \top} \displaystyle \int v^\star \circ \ind\left(\Big|\alpha v^\star + \frac{x}{\sqrt{n}} \Big| > \tau_t 1 \right)\varphi_n(\mathrm{d} x)}{\left(\displaystyle \int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)\right)^{1/2}} - \frac{\lambda \left(v^{\star \top} \displaystyle \int \mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x)\right)^2}{\left(\displaystyle \int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)\right)^{3/2}} \ge 0, \end{align*} where the last inequality follows from the elementary relation $\mathbb{E}[X^2]\cdot\mathbb{E}[Y^2] \ge \big(\mathbb{E}[XY]\big)^2$. In addition, we make the observation that every $\alpha = \lambda + O\Big(\sqrt{\frac{t\log^3 n +k\log n}{n}}\Big)$ obeys \begin{align*} \frac{\mathrm{d} f(\alpha)}{\partial {\alpha}} &\leq \frac{\lambda\left(\displaystyle \int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)\right)-\lambda \left(v^{\star \top} \displaystyle \int \mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x)\right)^2}{\left(\displaystyle \int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)\right)^{3/2}}\\ &\leq \frac{\lambda \left(\lambda^2 + O\Big(\lambda \sqrt{\frac{k\log n + t\log^3 n}{n}}\Big)\right) - \lambda \left(\lambda - O\Big(\lambda \sqrt{\frac{k\log n + t\log^3 n}{n}}\Big)\right)^2}{\left(\lambda^2 - O\Big(\lambda \sqrt{\frac{k\log n + t\log^3 n}{n}}\Big)\right)^{3/2}} \\ &\lesssim \sqrt{\frac{k \log n + t\log^3 n}{n\lambda^2}} \le \frac{1}{2}, \end{align*} where the second line follows from \eqref{eqn:inner-product-11} and \eqref{eqn:big-ben}, and the last line is valid under the assumptions \eqref{eqn:don-giovani}. Based on the above properties, we further claim that $f(\alpha) = \alpha$ has one solution --- denoted by $\alpha^{\star}$ --- within the range $[\lambda/10, \lambda]$. In order to see this, recall that in \eqref{eqn:stable-inner-sparse}, we have shown that for any given $\alpha \asymp \lambda$, \begin{align} f(\alpha) = \frac{\lambda v^{\star \top} \displaystyle\int \mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\varphi_n(\mathrm{d} x)}{\sqrt{\displaystyle\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)}} = \bigg(1 + O\bigg(\sqrt{\frac{k \log n+t\log^3 n}{n\lambda^2}}\bigg) \bigg) \lambda, \label{eq:f-alpha-property} \end{align} where we invoke the assumption that $\frac{k \log n}{n\lambda^2} \ll 1$. In particular, by taking $\alpha = \frac{1}{10}\lambda$, we can deduce that \begin{align*} f\Big(\frac{1}{10}\lambda\Big) = \big(1 + o(1) \big) \lambda > \frac{1}{10} \lambda. \end{align*} In addition, it is easily seen that \begin{align*} f(\lambda) \leq \frac{\lambda \| v^{\star }\|_2 \sqrt{\displaystyle\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)}}{\sqrt{\displaystyle\int\Big\|\mathsf{ST}_{\tau_t}\left(\alpha v^\star + \frac{x}{\sqrt{n}} \right)\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)}} = \lambda. 
\end{align*} Given that $\frac{\mathrm{d} f(\alpha)}{\partial {\alpha}} \in [0,1/2]$ for any $\alpha\in [\lambda/10, \lambda]$, we conclude that there exists a unique point within $[\lambda/10, \lambda]$ obeying $f({\alpha})=\alpha$. Hence, for any $t$ such that $\alpha_t = \lambda + O\Big(\sqrt{\frac{t\log^3 n +k\log n}{n}}\Big)$ and \begin{align*} \big|\alpha_{t} - \alpha_{t}^{\star}\big| \leq C_6 \sqrt{\frac{k \log n + t\log^3 n}{n}} \end{align*} for some large enough constant $C_6>0$, one can invoke $\alpha^\star_{t+1} = f(\alpha^\star_{t})$ to deduce that \begin{align*} \big|\alpha_{t+1} - \alpha_{t+1}^{\star}\big| &\le \big|f(\alpha_{t}) - f(\alpha_{t}^{\star})\big| + C_7 \sqrt{\frac{k \log n + t\log^3 n}{n}} \\ &\le \frac{1}{2}\big|\alpha_{t} - \alpha_{t}^{\star}\big| + C_7 \sqrt{\frac{k \log n + t\log^3 n}{n}} \leq C_6 \sqrt{\frac{k \log n + t\log^3 n}{n}} \end{align*} provided that $C_6>2C_7$, where we recall that each $\alpha_{t}$ satisfies~\eqref{eqn:stable-alphat-sparse} shortly after initialization. Invoking the above relation recursively leads to \begin{align*} \big|\alpha_{t+1} - \alpha_{t+1}^{\star}\big| \lesssim \sqrt{\frac{k \log n + t\log^3 n}{n}}, \end{align*} thus establishing the advertised bound~\eqref{eqn:se-alpha-sparse}. \subsubsection{Initial condition for \eqref{eqn:sparse-induction} with two initialization paradigms} \label{sec:pf-initialization-sparse} In order to conclude the proof, we still need to verify whether the induction hypotheses \eqref{eqn:sparse-induction} hold at the initial stage of the algorithm. In what follows, we shall look at two types of initialization schemes separately. \begin{itemize} \item Suppose now that we have access to an initialization point $x_{1}$ independent of $W$ such that $\inprod{v^\star}{\eta_1(x_1)}\asymp 1$, as assumed in Theorem~\ref{thm:sparse}. In this case, one can see that \begin{align*} x_2 = M\eta_1(x_1) &= \big(\lambda v^\star v^{\star\top} + W \big) \eta_1(x_1) = \lambda\inprod{v^\star}{\eta_1(x_1)} \cdot v^\star + W\eta_1(x_1)\\ &= \alpha_2 v^\star + \phi_1 + \xi_1, \end{align*} where \begin{align*} \alpha_{2} & \coloneqq \lambda\inprod{v^\star}{\eta_1(x_1)} \asymp \lambda, \\ \phi_1 &\coloneqq W\eta_1(x_{1}) + \zeta_1, \\ -\xi_1 = \zeta_1 &\coloneqq \Big(\frac{\sqrt{2}}{2}-1\Big) \Big( \eta_1(x_{1}) ^\top W \eta_1(x_{1}) \Big) \eta_1(x_{1}). \end{align*} As discussed in Lemma~\ref{lem:distribution} and in \eqref{defn:zWz}, we have $\phi_1\sim \mathcal{N}(0,\frac{1}{n}I_n)$ and $\eta_1(x_{1})^\top W \eta_1(x_{1}) \sim \mathcal{N}(0, \frac{2}{n})$, where we have used the fact that $\|\eta_1(x_1)\|_2=1$. Therefore, it holds that $\ltwo{\xi_1} \lesssim \sqrt{\frac{\log n}{n}}$ with probability at least $1 - O(n^{-11})$. As a result, we have established the induction hypotheses \eqref{eqn:sparse-induction} for $t=2$, as required by Theorem~\ref{thm:sparse}. \item Another type of initialization scheme considered in this paper is~\eqref{defi:v1}, which concerns Corollary~\ref{cor:sparse-init-1}. By definition, index $\hat{s}$ is selected by maximizing the diagonal entries of $M$, resulting in statistical dependence between $x_{1}$ and $W$. As a consequence, Theorem~\ref{thm:main} is not directly applicable. To cope with the statistical dependency, let us generate an AMP sequence starting from $e_s$ for each given $s\in \mathcal{S}_{0}$. For each of these AMP sequences, it turns out that the initial condition for \eqref{eqn:sparse-induction} when $t=3$ is satisfied, which is formulated in the result below.
The proof of the following lemma can be found in Section~\ref{sec:pf-ken-sparse-ini}. \begin{lems} \label{lem:sparse-ini-1-ini} Consider the AMP procedure initialized with $\eta_0(x_{0}) = 0$ and $x_{1} = e_{s}$ for any given $s\in \mathcal{S}_{0}$ (cf.~\eqref{eqn:sparse-set-s0}). With probability at least $1 - O(n^{-11})$, the iterate $x_3$ admits the following decomposition: \begin{align*} x_3 = \alpha_3 v^\star + \beta_2^1 \phi_1 + \beta_2^2 \phi_2 + \xi_2, \end{align*} where $\phi_1$ and $\phi_2$ are i.i.d.~drawn from $\mathcal{N}\big(0,\frac{1}{n}I_n\big)$, and \begin{align} \alpha_3 = \lambda v^{\star\top} \eta_2(x_2) \asymp \lambda ~~\text{ and }~~ \|\xi_2\|_2 \lesssim \sqrt{\frac{\log n}{n}}. \end{align} \end{lems} Taking a simple union bound over all $s \in \mathcal{S}_{0}$, we conclude that with probability at least $1 - O(n^{-10})$, the initial condition \eqref{eqn:sparse-induction} is satisfied for $t=3$ if AMP is initialized at $e_{s}$ for any $s \in \mathcal{S}_{0}$, thus making Theorem~\ref{thm:sparse} applicable. Further, Proposition~\ref{thm:sparse-init} guarantees that $\ensuremath{\mathbb{P}}(\hat{s} \in \mathcal{S}_0) \geq 1 - O(n^{-11}).$ Putting these together, we can guarantee that the AMP initialized at $e_{\hat{s}}$ yields the required decomposition with probability at least $1 - O(n^{-10})$, as claimed in Corollary~\ref{cor:sparse-init-1}. \end{itemize} \noindent In summary, putting the above two initial conditions together with the previous inductive steps finishes the proof of Theorem~\ref{thm:sparse} and Corollary~\ref{cor:sparse-init-1}. \subsubsection{Implications for $\ell_2$ estimation accuracy} Our theory reveals the $\ell_2$ estimation accuracy of AMP, which we briefly discuss in this subsection. Consider the estimator $\frac{1}{\lambda}\mathsf{ST}_{\tau_t}\left(x_t \right)$ and look at its $\ell_2$ estimation error. The triangle inequality directly implies \begin{align*} \left\|\mathsf{ST}_{\tau_t}\left(x_t \right) - \lambda v^\star\right\|_2 % &\leq \left\|\mathsf{ST}_{\tau_t}\left(x_t \right) - \mathsf{ST}_{\tau_t}\left(v_t \right)\right\|_2 + \left\|\mathsf{ST}_{\tau_t}\left(v_t \right) - \alpha_t v^\star\right\|_2 + \left\|\alpha_t v^\star - \lambda v^\star\right\|_2 \\ % &\leq \ltwo{\xi_{t-1}} + \left\|\mathsf{ST}_{\tau_t}\left(v_t \right) - \alpha_t v^\star\right\|_2 + |\alpha_t - \lambda|\\ % &\leq \left\|\mathsf{ST}_{\tau_t}\left(v_t \right) - \alpha_t v^\star\right\|_2 + O\left(\sqrt{\frac{k\log n + t\log^3 n}{n}}\right) \end{align*} with probability at least $1-O(n^{-11})$, where the last step follows from Theorem~\ref{thm:sparse} and inequality~\eqref{eqn:stable-alphat-sparse}. Now consider the Lipschitz function $f(x) \coloneqq \ltwo{\mathsf{ST}_{\tau_t}(\alpha_tv^\star + x) - \alpha_tv^\star}, ~x \in \ensuremath{\mathbb{R}}^{n}$. Inequality~\eqref{eq:sup-f-beta-phi-Ef} implies that \begin{align*} \Big\|\mathsf{ST}_{\tau_t}\Big(\alpha_t v^\star + \sum_{j=1}^{t-1} \beta_{t-1}^j \phi_j \Big) - \alpha_t v^\star\Big\|_2 - \ensuremath{\mathbb{E}}_{g\sim \mathcal{N}(0,\frac{1}{n}I_n)}\Big[ \left\|\mathsf{ST}_{\tau_t}\big(\alpha_t v^\star + g \big) - \alpha_t v^\star\right\|_2 \Big] \lesssim \sqrt{\frac{t\log n}{n}} % \end{align*} holds with probability at least $1 - O(n^{-11})$.
In addition, in view of \eqref{eqn:cello-sonata} (taking $\alpha = \alpha_t$), we can deduce \begin{align*} \ensuremath{\mathbb{E}}_{g\sim \mathcal{N}(0,\frac{1}{n}I_n)}\Big[ \left\|\mathsf{ST}_{\tau_t}\big(\alpha_t v^\star + g \big) - \alpha_t v^\star\right\|_2 \Big] \lesssim \sqrt{\frac{k\log n}{n}}. \end{align*} Putting these pieces together ensures that, with probability at least $1 - O(n^{-11})$, \begin{align} \label{eqn:l2-error-hp} \Big\|\frac{1}{\lambda}\mathsf{ST}_{\tau_t}\left(x_t \right) - v^\star\Big\|_2 \lesssim \sqrt{\frac{k\log n + t\log^3 n}{n\lambda^2}}. \end{align} \subsection{Proof of Proposition~\ref{eqn:hard-regime-sparse}} \label{sec:pf-decomp-2-stage} For every fixed $j \in \mathcal{S}_{0}$, consider the AMP initialized at $x^{j}$ (independent of $M_{\mathcal{I}^c_j, \mathcal{I}_j^c}$). Since $\langle v_{\mathcal{I}_{j}^{c}}^{\star}, x^j \rangle/\|v_{\mathcal{I}_{j}^{c}}^{\star}\|_2 \asymp 1$, Theorem~\ref{thm:sparse} justifies the decomposition for AMP iterates with corresponding error control in \eqref{eqn:soccer} holding with probability at least $1 - O(n^{-11}).$ Taking a union bound over all possible choices of $j \in \mathcal{S}_{0}$ (of which there are $N$ in total), the corresponding inequalities~\eqref{eqn:soccer} hold simultaneously with probability at least $1 - O(n^{-10}).$ Now since Proposition~\ref{prop:sparse-split} ensures that $\widehat{j} \in \mathcal{S}_{0}$ with probability at least $1 - O(n^{-10})$, we can conclude: with probability at least $1 - O(n^{-10})$, the execution of AMP on $M_{\mathcal{I}_{\widehat{j}}^c,\mathcal{I}_{\widehat{j}}^c}$ leads to decomposition \begin{subequations} \label{eqn:finale-prop3} \begin{align} \label{eqn:decomp-1-sparse} x_{t}^\ell = \alpha_{t} v^\star_{\ell} + \sum_{j = 1}^{t-1} \beta_{t-1}^j\phi_{j, \ell} + \xi_{t-1, \ell}, \qquad \ell \in \mathcal{I}^c_{\widehat{j}} \end{align} for \begin{align} &\|v^\star_{\mathcal{I}_{\widehat{j}}^{c}}\|^2_2 = 1 - p + o(p), \quad \|\beta_{t-1}\|_2 = 1, \quad \alpha_{t} = (1 - p + o(p)) \cdot \lambda + O\left(\sqrt{\frac{k + t\log^3 n}{n}}\right),\\ \text{and}\qquad &\ltwo{\xi_{t-1}}^2 \lesssim (1 - p) \sqrt{\frac{k+t\log^3 n}{n}}, \end{align} where each $\phi_j \stackrel{\text{i.i.d.}}{\sim} \mathcal{N}(0, \frac{1}{n} I_{|\mathcal{I}^c_{\widehat{j}}|}).$ We now move on to the second AMP sequence. Here, we remind the readers that by construction $\overline{\mathcal{I}}_{\widehat{i}} \subset \mathcal{I}_{\widehat{j}}^{c}$, which implies $\mathcal{I}_{\widehat{j}} \subset \overline{\mathcal{I}}_{\widehat{i}}^{c}.$ Similarly, execution of AMP on $M_{\overline{\mathcal{I}}_{\widehat{i}}^c,\overline{\mathcal{I}}_{\widehat{i}}^c}$ yields the decomposition \begin{align} \label{eqn:decomp-2-sparse} \overline{x}_{t}^\ell = \overline{\alpha}_{t} v^\star_\ell + \sum_{j = 1}^{t-1} \overline{\beta}_{t-1}^j \overline{\phi}_{j, \ell} + \overline{\xi}_{t-1, \ell}, \qquad \ell \in \overline{\mathcal{I}}_{\widehat{i}}^{c}, \end{align} \end{subequations} where the parameters satisfy expressions~\eqref{eqn:finale-prop3}. Note that here, we take the union bound over $N \times N$ choices of index $\widehat{j} \in [N]$ and index $\widehat{i} \in [N]$ (where for each set $\mathcal{I}_j$, we construct $N$ such $\overline{\mathcal{I}}_i \subset \mathcal{I}_{j}^c$). Taking these two decompositions collectively, we can write the output $u_{t}$ (defined in \eqref{eqn:estimate-u}) as \begin{align*} u_t = \alpha_{t} v^\star + \sum_{j = 1}^{t-1}\beta_{t-1}^j \phi_{j} + \widetilde{\xi}_{t-1}.
\end{align*} Here, $\{\phi_{j}\}$ are elongated to an $n$-dimensional Gaussian with $\phi_j\stackrel{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0,\frac{1}{n}I_n)$ and \begin{align*} \widetilde{\xi}_{t-1, \mathcal{I}_{\widehat{j}}^{c}} = \xi_{t-1, \mathcal{I}_{\widehat{j}}^{c}}, \qquad \widetilde{\xi}_{t-1, \mathcal{I}_{\widehat{j}}} = (\overline{\alpha}_{t} - \alpha_t) v^\star_{\mathcal{I}_{\widehat{j}}} + \sum_{j = 1}^{t-1} \overline{\beta}_{t-1}^j\overline{\phi}_{j, \mathcal{I}_{\widehat{j}}} - \sum_{j = 1}^{t-1} \beta_{t-1}^j\phi_{j, \mathcal{I}_{\widehat{j}}} + \overline{\xi}_{t-1, \mathcal{I}_{\widehat{j}}}. \end{align*} Comparing this with~\eqref{eqn:decomp-1-sparse} and \eqref{eqn:decomp-2-sparse} and according to Theorem~\ref{thm:sparse}, one has \begin{align*} \|\widetilde{\xi}_{t-1}\|_2 &\le \|\xi_{t-1, \mathcal{I}_{\widehat{j}}^c}\|_2 + \|\widetilde{\xi}_{t-1, \mathcal{I}_{\widehat{j}}}\|_2 \\ &\leq (1-p)\sqrt{\frac{k+t\log^3 n}{n}} + |\overline{\alpha}_{t} - \alpha_t| p + \red{\sqrt{\frac{t\log n}{n}} \cdot \sqrt{np}} + \|\overline{\xi}_{t-1, \overline{\mathcal{I}_{\widehat{i}}}^{c}}\|_2 \lesssim \frac{\lambda \log^2 n}{k^2} + \sqrt{\frac{k+t\log^3 n}{n}}. \end{align*} where the penultimate step uses $\ltwo{\beta_{t-1}} = \ltwo{\overline{\beta}_{t-1}} = 1$ and relation \eqref{eqn:simple-rm}, and the last step uses $p \asymp \frac{\log n}{k}$. Thus, we complete the proof of inequality~\eqref{eqn:decomp-u}. \section{Sparse PCA: Proofs of Theorem~\ref{thm:sparse} and Corollary~\ref{cor:sparse-init-1}} \label{sec:pf-sparse} Akin to the problem of $\mathbb{Z}_{2}$ synchronization, we always have $\|\beta_{t-1}\|_2 = \ltwo{\eta_{t-1}(x_{t-1})} = 1$ given our choices of the denoising functions~\eqref{eqn:eta-sparse}. As a result, we shall focus attention on tracking $\alpha_{t}.$ The proofs of Theorem~\ref{thm:sparse} and Corollary~\ref{cor:sparse-init-1} mainly follow from Theorem~\ref{thm:main}, with the assistance of an induction argument. Specifically, our induction hypotheses for the $t$-th iteration are \begin{align} \label{eqn:sparse-induction} \alpha_{t} \asymp \lambda \qquad \text{and} \qquad \ltwo{\xi_{t-1}} \lesssim \sqrt{\frac{(t-1)\log^3 n+ k}{n}}. \end{align} In what follows, we shall assume the induction hypotheses \eqref{eqn:sparse-induction} are valid for the $t$-th iteration, and demonstrate their validity for the $(t+1)$-th iteration; the base case will be validated in Section~\ref{sec:recursion}. The only difference between Theorem~\ref{thm:sparse} and Corollary~\ref{cor:sparse-init-1} lies in the initialization step which is detailed in Section~\ref{sec:pf-initialization-sparse}. \subsection{Preliminary facts} \label{sec:prelim-sparse} Before delving into the details of the main proof, we collect several preliminary results that shall be used repeatedly throughout this section. \subsubsection{Properties about the denoising functions} Recall that we adopt the following denoising functions: for any $x\in \ensuremath{\mathbb{R}}^n$, \begin{align} \eta_{t}(x) \coloneqq \gamma_t \cdot \mathrm{sign}(x) \circ (|x| - \tau_t 1)_{+} \qquad \text{where }\gamma_t \coloneqq \big\|\mathrm{sign}(x_t) \circ (|x_t| - \tau_t 1)_{+}\big\|_2^{-1} ~\text{ and }~\tau_t \asymp \sqrt{\frac{\log n}{n}}, \end{align} Here and throughout, we abuse the notation to use it in an entrywise manner when applied to vectors, i.e., \begin{equation} \mathrm{sign}(x)=\big[\mathrm{sign}(x_i) \big]_{1\leq i\leq n} \qquad \text{and} \qquad (|x| - \tau 1)_{+} = \big[ (|x_i|-\tau )_+ \big]_{1\leq i\leq n}. 
\end{equation} for any $x=[x_i]_{1\leq i\leq n}\in \ensuremath{\mathbb{R}}^n$. The entrywise derivative of $\eta_{t}(x)$ w.r.t.~$x$ is given by \begin{equation} \eta_{t}^{\prime}(x) = \gamma_t \mathds{1}(|x| > \tau_t 1) \coloneqq \gamma_t \big[ \mathds{1}(|x_i| > \tau_t) \big]_{1\leq i\leq n}. \end{equation} Here, $\eta_{t}^{\prime}(w)$ is well-defined for all differentiable points $w\in \ensuremath{\mathbb{R}}$, with its value for the non-differentiable points (i.e., $w=\pm\tau_t$) taken to be 0; this works for our purpose given that the non-differentiable part are accounted for separately in Theorem~\ref{thm:main}. Next, consider a set of parameters $\mu = [\mu^j]_{1\leq j\leq t-1} \in \mathcal{S}^{t-2}$, $\alpha \in \ensuremath{\mathbb{R}}$ and $\beta = [\beta^j]_{1\leq j\leq t-1} \in \ensuremath{\mathbb{R}}^{t-1}$ independent of $\{\phi_j\}$, and define the following vector (which is a function of $\alpha$ and $\beta$): \[ v = \alpha v^\star + \sum_{j = 1}^{t-1} \beta^j\phi_j \in \ensuremath{\mathbb{R}}^n. \] We also define, for any positive numbers $\tau,\gamma \in \ensuremath{\mathbb{R}}$, \begin{equation} \eta(x; \tau) \coloneqq \gamma \cdot \mathrm{sign}(x) \circ (|x| - \tau 1)_{+}. \end{equation} Elementary calculations together with $\|\mu\|_2=\|\beta\|_2=1$ yield \begin{subequations} \label{eqn:sparse-derivative} \begin{align} \Big\|\nabla_{\phi_j} \Big\langle \sum_{j = 1}^{t-1} \mu^j\phi_j, a\Big\rangle\Big\|_2 &\le |\mu^j|\|a\|_2,\qquad &&\text{for any given } a \in \mathbb{R}^n \\ \Big\|\nabla_{\phi_j} \big\langle \eta(v; \tau), a\big\rangle\big\|_2 &\le |\beta^{j}|\|\eta^{\prime}(v; \tau) \circ a\|_2\qquad &&\text{for any given } a \in \mathbb{R}^n \\ \Big\|\nabla_{\mu} \Big\langle \sum_{j = 1}^{t-1} \mu^j\phi_j, a\Big\rangle\Big\|_2 &\le \|a\|_2\sum_{j = 1}^{t-1} \|\phi_j\|_2\qquad &&\text{for any given } a \in \mathbb{R}^n \\ \Big\|\nabla_{\mu, \beta} \Big( \sum_{j = 1}^{t-1} \mu^j\beta^j \Big)\Big\|_2 &\le 2 \\ \Big\|\nabla_{\alpha, \beta, \tau} \big\langle \eta(v; \tau), a\big\rangle\Big\|_2 &\le \|a\|_2\|\eta^{\prime}(v; \tau)\|_2 \Big( 1+ \sum_j\|\phi_j\|_2 \Big) \qquad &&\text{for any given } a \in \mathbb{R}^n. \end{align} \end{subequations} \subsubsection{Basic concentration results} Next, we collect some basic concentration results. Similar to \eqref{eq:eps-set} in Section~\ref{sec:basic-concentration-buble}, we define \begin{align} &\notag \mathcal{E}_s \coloneqq \left\{ \{\phi_j\} : \max_{1\leq j\leq t-1} \|\phi_j\|_2 < 1+ C_5\sqrt{\frac{\log \frac{n}{\delta}}{n}}\right\} \bigcap \left\{ \{\phi_j\} : \sup_{a \in \mathcal{S}^{t-2}} \Big\|\sum_{j = 1}^{t-1} a_k\phi_j\Big\|_2 < 1 + C_5 \sqrt{\frac{t\log \frac{n}{\delta}}{n}} \right\} \\ &\hspace{2cm} \bigcap \left\{\{\phi_j\} : \sup_{a \in \mathcal{S}^{t-2}}\sum_{i = 1}^s \Big|\sum_{j = 1}^{t-1} a_k\phi_j\Big|_{(i)}^2 < \frac{C_5(t + s)\log \frac{n}{\delta}}{n}\right\} \label{eq:eps-set-2} \end{align} for some sufficiently large constant $C_5>0$. As discussed in~\eqref{eqn:eps-interset}, the convex set $\mathcal{E}_{s}$ satisfies \begin{align*} \ensuremath{\mathbb{P}}\left(\{\phi_j\} \in\bigcap_{s = 1}^n \mathcal{E}_s \right) \geq 1 - \delta. 
\end{align*} In addition, let us introduce an additional collection of convex sets: for any $1\leq s\leq n$, \begin{align} \label{eqn:set-E-tilde} \widetilde{\mathcal{E}}_s \coloneqq \left\{\{\phi_j\} : \|\Phi_{s, :}\|_2 \leq C_5 \sqrt{(t-1)\log \frac{n}{\delta}}\right\}, \end{align} where $\Phi_{s, :}$ denotes the $s$-th row of matrix $\Phi = \sqrt{n}[\phi_1,\cdots,\phi_{t-1}] \in \ensuremath{\mathbb{R}}^{n \times (t-1)}$. Standard Gaussian concentration results \citep{vershynin2018high} reveal that $\{\phi_j\}$ falls within $\bigcap_{s = 1}^n \widetilde{\mathcal{E}}_s$ with probability at least $1-\delta$, provided that $C_5$ is large enough. As a result, it is readily seen that \begin{align} \label{eqn:eps-interset-sparse} \ensuremath{\mathbb{P}}( \{\phi_j\} \in \mathcal{E} ) \geq 1 - 2 \delta, \qquad \text{for }\mathcal{E} \coloneqq \bigcap_{s = 1}^n \big( \mathcal{E}_s \cap \widetilde{\mathcal{E}}_s \big). \end{align} Throughout the rest of the proof, we shall take $\delta$ to be sufficiently small, say, $\delta \asymp n^{-300}$ (similar to Section~\ref{sec:basic-concentration-buble}). \subsubsection{Bounding the size of $\eta^{\prime}_t$} As studied in the case of $\mathbb{Z}_{2}$ synchronization (see \eqref{eqn:muphi-rank}), we know that conditional on the event $\mathcal{E}$, the $t$-th largest entry of $\sum_{j = 1}^{t-1} \beta^j \phi_j$ for an arbitrary unit vector $\beta=[\beta^1,\cdots,\beta^{t-1}]\in \mathcal{S}^{t-2}$ obeys \begin{align*} t \bigg| \sum_{j = 1}^{t-1} \beta^j\phi_j\bigg|_{(t)}^2 \leq \sum_{i=1}^t \bigg|\sum_{j = 1}^{t-1} \beta^j\phi_j\bigg|_{(i)}^2 \lesssim \frac{(t + t)\log n}{n} \asymp \frac{t\log n}{n}, \end{align*} where the last inequality uses the definition of $\mathcal{E}$. It therefore implies that for every $l \geq t$, one has \begin{align*} \Big|\sum_{j = 1}^{t-1} \beta^j\phi_j\Big|_{(l)} \leq \bigg| \sum_{j = 1}^{t-1} \beta^j\phi_j\bigg|_{(t)}\lesssim \sqrt{\frac{\log n}{n}} \end{align*} with probability at least $1-O(n^{-11})$. Now consider the vector $v \coloneqq \alpha_tv^\star + \sum_{j = 1}^{t-1} \beta^j\phi_j$. Since $|v^{\star}|_{(k +1)} = 0$ (given that $v^{\star}$ is $k$-sparse) and that $\tau_{t}\geq C_3\sqrt{\frac{\log n}{n}}$ for some constant $C_3>0$ large enough, we can show that % \begin{align} \label{eqn:gradient-sparse} \big|\eta_t^{\prime}(v)\big|_{(i)} = \big| |v|-\tau 1 \big|_{(i)} = 0,\qquad\text{for }i > k + t \end{align} with probability exceeding $1-O(n^{-11})$. As a direct consequence of \eqref{eqn:gradient-sparse}, for any vector $a \in \ensuremath{\mathbb{R}}^n$ one has \begin{align} \label{eqn:sparse-eta-a-prod} \left\|\eta_t^{\prime}(v) \circ a\right\|_2 = \gamma_t \sqrt{ \sum_{i=1}^{k+t} |a|_{(i)}^2 } \lesssim \lambda^{-1}\sqrt{ \sum_{i=1}^{k+t} |a|_{(i)}^2 }, \end{align} where the last relation comes from~\eqref{eqn:gamma-t-evolution}. \subsection{Tight estimate of $\gamma_t$} \label{sec:tight-estimate-gamma-sparse} In this subsection, our goal is to show that under the induction hypotheses \eqref{eqn:sparse-induction}, we have \begin{align} \label{eqn:gamma-t-evolution} \gamma_t \coloneqq \big\|\mathrm{sign}(x_t)(|x_t| - \tau_t 1)_{+} \big\|_2^{-1} = \big\|(|x_t| - \tau_t 1)_{+} \big\|_2^{-1} \asymp \lambda^{-1}, \end{align} which would then imply that (see Assumption~\ref{assump:eta}) \begin{align} \label{eqn:rho-sparse} \rho = \lambda^{-1} \qquad\text{and}\qquad \rho_1 = 0. \end{align} In order to show this, we resort to Corollary~\ref{cor:Gauss}. 
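\medskip\noindent As a quick empirical illustration of \eqref{eqn:gamma-t-evolution} (not part of the formal argument), the following Python sketch runs a few AMP iterations on a synthetic sparse spiked Wigner instance and prints $\lambda\gamma_t$ along the way. The Onsager correction is taken in the standard symmetric-AMP form $\langle\eta_t'(x_t)\rangle\,\eta_{t-1}(x_{t-1})$, and the initialization, problem sizes, and threshold constant are illustrative assumptions rather than the exact choices of the theory.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, k, lam = 2000, 10, 1.0
tau = 2.0 * np.sqrt(np.log(n) / n)

v_star = np.zeros(n)
v_star[:k] = 1.0 / np.sqrt(k)

G = rng.standard_normal((n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2)                 # Wigner-type noise, off-diagonal variance 1/n
M = lam * np.outer(v_star, v_star) + W

def eta(x, tau):
    # normalized soft-thresholding denoiser; returns (eta_t(x), gamma_t)
    st = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
    nrm = np.linalg.norm(st)
    return st / nrm, 1.0 / nrm

eta_prev = np.zeros(n)
x = v_star + rng.standard_normal(n) / np.sqrt(n)   # informative start, <v*, eta_1(x_1)> ~ 1
for t in range(1, 9):
    eta_x, gamma = eta(x, tau)
    onsager = gamma * np.mean(np.abs(x) > tau) * eta_prev
    x = M @ eta_x - onsager
    print(t, lam * gamma, v_star @ eta_x)          # lam * gamma_t stays of constant order
    eta_prev = eta_x
\end{verbatim}
In such toy runs, $\lambda\gamma_t$ remains of constant order across iterations and the alignment $v^{\star\top}\eta_t(x_t)$ stays close to $1$; the rigorous, uniform-in-$\theta$ justification via Corollary~\ref{cor:Gauss} occupies the rest of this subsection.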
Let us define \begin{align*} \Phi = \sqrt{n} \big[ \phi_1,\cdots,\phi_{t-1} \big] \qquad \text{and}\qquad \theta = (\alpha, \beta, \tau) \in \mathbb{R} \times \mathbb{R}^{t-1} \times \mathbb{R} \quad \text{with }\beta = [\beta^1,\cdots,\beta^{t-1}], \end{align*} and consider the following function: \begin{align} f_{\theta}(\Phi) \coloneqq \big\|\mathrm{sign}(v) \circ (|v| - \tau 1)_{+}\big\|_2^2, \qquad \text{with } v \coloneqq \alpha v^\star + \sum_{j = 1}^{t-1} \beta^j\phi_j. \label{eq:f-Theta-v-sparse-1} \end{align} Let us also introduce the following set of parameters: \begin{align*} \Theta \coloneqq \left\{ \theta = (\alpha, \beta, \tau) \in \mathbb{R} \times \mathbb{R}^{t-1} \times \mathbb{R} \,\Big| \, \alpha \asymp \lambda, \|\beta\|_2=1, C_5 \sqrt{\frac{\log n}{n}}\leq \tau \asymp \sqrt{\frac{\log n}{n}}\right \} \end{align*} for some large enough constant $C_5>0$. Consequently, $\gamma_t$ (cf.~\eqref{eqn:gamma-t-evolution}) can be viewed as $f_{\theta}(\Phi)$ with $\theta = (\alpha_t, \beta_{t-1}, \tau_t)$, and hence it suffices to develop a uniform bound on $f_{\theta}(\Phi)$ over all $\theta \in \Theta$. It is easily seen that $|f_{\theta}(\Phi)|\lesssim n^{100} \big(\max_j \|\phi_j\|_2\big)^{100}$, and that $\|\nabla_{\theta} f(Z) \|_2 \lesssim n^{100}$ for all $Z\in \mathcal{E}$ (cf.~\eqref{eqn:eps-interset-sparse}). In addition, given that $\log\frac{1}{\delta} \asymp \log n$, it follows from \eqref{eq:f-Theta-v-sparse-1} and \eqref{eq:eps-set-2} that \begin{align} \label{eqn:vbound-sparse} \|v\|_2 \leq |\alpha|\ltwo{v^\star} + 1 + C_5\sqrt{\frac{t\log \frac{n}{\delta}}{n}} \asymp \lambda + 1 \asymp 1 \end{align} over the set $\mathcal{E}$, where the penultimate step is valid as long as $\frac{t\log n}{n} \lesssim 1 $. Moreover, we observe that \begin{align*} \big\|(|v|-\tau1)_{+}\big\|_{2}^{2} & \leq\sum_{i:\,v_{i}^{\star}\neq0}\Big|\alpha v_{i}^{\star}+\sum_{j=1}^{t-1}\beta^{j}\phi_{j,i}\Big|^{2}+\sum_{i:\,v_{i}^{\star}=0}\bigg(\Big|\sum_{j=1}^{t-1}\beta^{j}\phi_{j,i}\Big|-\tau\bigg)^{2}\\ & \lesssim\alpha^{2}\|v^{\star}\|_{2}^{2}+\sum_{i:\,v_{i}^{\star}\neq0}\Big|\sum_{j=1}^{t-1}\beta^{j}\phi_{j,i}\Big|^{2}+\sum_{i=1}^{n}\bigg(\Big|\sum_{j=1}^{t-1}\beta^{j}\phi_{j,i}\Big|-\tau\bigg)^{2}\\ & \lesssim\alpha^{2}+\sum_{i=1}^{2k+t}\Big|\sum_{j=1}^{t-1}\beta^{j}\phi_{j}\Big|_{(i)}^{2}\lesssim\alpha^{2}+\frac{(k+t)\log n}{n}, \end{align*} where the last line comes from (\ref{eqn:gradient-sparse}) and the definition (\ref{eq:eps-set-2}) of $\mathcal{E}$. This in turn allows us to calculate \begin{align*} \left\|\nabla_{\Phi} f_{\theta}(\Phi)\right\|_2 &\le 2\frac{\|\beta\|_2}{\sqrt{n}} \big\|\mathrm{sign}(v) \circ (|v| - \tau 1)_{+} \circ \ind\left(|v| > \tau 1\right)\big\|_2 \le 2\frac{\big\|(|v|-\tau1)_{+}\big\|_{2}}{\sqrt{n}} \lesssim \frac{ \alpha + \sqrt{\frac{(k+t)\log n}{n}} }{\sqrt{n}} . \end{align*} Therefore, Corollary~\ref{cor:Gauss} and \eqref{eqn:brahms-conc} tell us that, with probability at least $1-O(n^{-11})$, \begin{align*} \left| \big\| (|v| - \tau 1)_{+} \big\|_2^2 - \int\Big\|\Big(\Big|\alpha v^\star + \frac{1}{\sqrt{n}}x\Big| - \tau 1\Big)_{+}\Big\|_2^2\varphi_n(\mathrm{d} x)\right| &\lesssim \bigg( \alpha + \sqrt{\frac{(k+t)\log n}{n}} \bigg) \sqrt{\frac{t\log n}{n}} \end{align*} holds simultaneously for all $\theta\in\Theta$. 
This in turn implies that \begin{align} \label{eqn:summer} \bigg| \big\| (|v_t| - \tau_t 1)_{+} \big\|_2^2 - \int\Big\|\left(\Big|\alpha_tv^\star + \frac{1}{\sqrt{n}}x \Big| - \tau_t 1\right)_{+}\Big\|_2^2\varphi_n(\mathrm{d} x)\bigg| &\lesssim \bigg( \alpha_t + \sqrt{\frac{(k+t)\log n}{n}} \bigg) \sqrt{\frac{t\log n}{n}}. \end{align} Next, let us assess the size of the quantity $\int\|(|\alpha_tv^{\star} + \frac{1}{\sqrt{n}}x| - \tau_t 1)_{+}\|_2^2\varphi_n(\mathrm{d} x)$. For those indices $i$ obeying $|\alpha_tv_i^{\star}| \geq 2 \tau_t$, it is easily seen from basic Gaussian properties that \begin{align} \label{eqn:tmp-integral} \int\Big(\Big|\alpha_tv_i^{\star} + \frac{1}{\sqrt{n}}x\Big| - \tau_t\Big)_{+}^2\varphi(\mathrm{d} x) \asymp \int_{-\sqrt{\log n}}^{\sqrt{\log n}} \big(\alpha_tv_i^{\star}\big)^2\varphi(\mathrm{d} x) \asymp \left(\alpha_tv_i^{\star}\right)^2, \end{align} which together with the induction hypothesis $\alpha_t\asymp \lambda$ gives \begin{align} \label{eqn:quartet1579-large} \sum_{i:\, |\alpha_t v_i^{\star}| \geq 2\tau_t \asymp \sqrt{\frac{\log n}{n}} }\int\Big(\Big|\alpha_tv_i^{\star} + \frac{1}{\sqrt{n}}x\Big| - \tau_t\Big)_{+}^2\varphi(\mathrm{d} x) \asymp \lambda^2 \sum_{i:\, |\alpha_t v_i^{\star}| \geq 2\tau_t \asymp \sqrt{\frac{\log n}{n}} }\left(v_i^{\star}\right)^2. \end{align} Additionally, it is observed that \begin{align} 1\ge\sum_{i}\big(v_{i}^{\star}\big)^{2}\ind\Big(|\alpha_t v_{i}^{\star}|\geq 2\tau_t \Big) & \overset{(\mathrm{i})}{\geq}\sum_{i}\big(v_{i}^{\star}\big)^{2}\ind\Big(|v_{i}^{\star}|\geq\sqrt{\frac{1}{2k}}\Big)=1-\sum_{i}\big(v_{i}^{\star}\big)^{2}\ind\Big(0<|v_{i}^{\star}|<\sqrt{\frac{1}{2k}}\Big) \notag\\ & \geq1-k\cdot\left(\sqrt{\frac{1}{2k}}\right)^{2}1=\frac{1}{2} , \label{eqn:basics-vstar-2} \end{align} where (i) holds since for any $i$ with $|v_{i}^{\star}|\geq\sqrt{\frac{1}{2k}}$, one necessarily has $|\alpha_t v_{i}^{\star}| \asymp |\lambda v_{i}^{\star}|\geq 2\tau_t \asymp \sqrt{\frac{\log n}{n}}$ as long as $\lambda \geq C_2 \sqrt{\frac{k\log n}{n}}$ for some large enough constant $C_2>0$. Substitution into \eqref{eqn:quartet1579-large} yields \begin{align} \label{eqn:quartet1579-large-135} \sum_{i:\, |\alpha_t v_i^{\star}| \geq 2\tau_t \asymp \sqrt{\frac{\log n}{n}} }\int\Big(\Big|\alpha_tv_i^{\star} + \frac{1}{\sqrt{n}}x\Big| - \tau_t\Big)_{+}^2\varphi(\mathrm{d} x) \asymp \lambda^2 . \end{align} Moreover, when it comes to those indices $i$ obeying $|\alpha_tv_i^{\star}|<2\tau_t \asymp \sqrt{\frac{\log n}{n}}$, one has \[ \sum_{i:\,|\alpha_{t}v_{i}^{\star}|<2\tau_{t}\asymp\sqrt{\frac{\log n}{n}}}\int\Big(\Big|\alpha_{t}v_{i}^{\star}+\frac{1}{\sqrt{n}}x\Big|-\tau_{t}\Big)_{+}^{2}\varphi(\mathrm{d}x)\leq\sum_{i:\,|\alpha_{t}v_{i}^{\star}|<2\tau_{t}\asymp\sqrt{\frac{\log n}{n}}}\big(\alpha_{t}v_{i}^{\star}\big)^{2}\lesssim k\cdot\frac{\log n}{n}\lesssim\lambda^{2}, \] provided that $\lambda^2 \gtrsim \frac{k \log n}{n}$. This combined with \eqref{eqn:quartet1579-large-135} leads to \begin{align} \label{eqn:quartet15} \int \Big\| \Big(\Big|\alpha_tv^{\star} + \frac{1}{\sqrt{n}}x \Big| - \tau_t 1 \Big)_{+} \Big\|_2^2\varphi_n(\mathrm{d} x) &\asymp \lambda^2. 
\end{align} Taking this collectively with \eqref{eqn:summer} gives \begin{align} & \bigg|\big\|(|v_{t}|-\tau_{t} 1)_{+}\big\|_{2}-\bigg(\int\Big\|\left(\Big|\alpha_{t}v^\star+\frac{1}{\sqrt{n}}x\Big|-\tau_{t} 1\right)_{+}\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)\bigg)^{\frac{1}{2}}\bigg|\notag\\ & \quad\quad\leq\frac{\Big|\big\|(|v_{t}|-\tau_{t} 1)_{+}\big\|_{2}^{2}-\int\big\|\left(\big|\alpha_{t}v^\star+\frac{1}{\sqrt{n}}x\big|-\tau_{t} 1\right)_{+}\big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)\Big|}{\Big(\int\big\|\big(\big|\alpha_{t}v^\star+\frac{1}{\sqrt{n}}x\big|-\tau_{t} 1\big)_{+}\big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)\Big)^{\frac{1}{2}}} \lesssim\frac{ \alpha_t + \sqrt{\frac{(k+t)\log n}{n}} }{\lambda}\sqrt{\frac{t\log n}{n}}. \label{eq:summer-twice} \end{align} Now in order to control $\gamma_t$, we still need to establish a connection between $\|\mathrm{sign}(x_t)\circ(|x_t| - \tau_t)_{+}\|_2$ and $\|\mathrm{sign}(v_t)\circ(|v_t| - \tau_t)_{+}\|_2.$ Recognizing that $x_t = v_t + \xi_{t-1}$, we can invoke the triangle inequality to obtain \begin{align} \notag\gamma_{t}^{-1} & =\big\|(|x_{t}|-\tau_{t} 1)_{+}\big\|_{2} =\big\|(|v_{t}|-\tau_{t} 1)_{+}\big\|_{2}+O(\|\xi_{t-1}\big\|_{2})\\ \notag & =\sqrt{\int\Big\|\Big(\Big|\alpha_{t}v^{\star}+\frac{1}{\sqrt{n}}x\Big|-\tau_{t} 1\Big)_{+}\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)}+O\bigg(\frac{1}{\lambda}\sqrt{\frac{t\log n}{n}}+\|\xi_{t-1}\|_{2}\bigg) \\ & =\left( 1+ O\Bigg(\frac{\alpha_t + \sqrt{\frac{(k+t)\log n}{n}}}{\lambda^2}\sqrt{\frac{t\log n}{n}}+\frac{\|\xi_{t-1}\|_{2}}{\lambda}\Bigg) \right) \sqrt{\int\Big\|\Big(\Big|\alpha_{t}v^{\star}+\frac{1}{\sqrt{n}}x\Big|-\tau_{t} 1\Big)_{+}\Big\|_{2}^{2}\varphi_{n}(\mathrm{d} x)} \asymp \lambda, \label{eqn:beethoven} \end{align} where the penultimate step follows from inequality~\eqref{eq:summer-twice}, and the last line makes use of \eqref{eqn:quartet15}. This establishes the claimed result in \eqref{eqn:gamma-t-evolution}. \subsection{Controlling key quantities $A_t,B_t,D_t, E_t$ and $\kappa_t$} \label{sec:control-sparse} In order to apply Theorem~\ref{thm:main} for the sparse spiked Wigner model, a key step lies in bounding the multiple key quantities $A_t,\ldots,G_t$ (see \eqref{defi:A}-\eqref{defi:G}) as specified in Assumption~\ref{assump:A-H-eta}, which we aim to accomplish in this subsection. Note that we do not need to bound $C_t, F_t$ and $G_t$ as they only appear in the bound on $\Delta_{\beta,t}$, which is irrelevant in this case. The rest of the section is dedicated to bounding $A_t,B_t,D_t, E_t$. Along the way, we shall also control $\kappa_t$, which is needed when calculating $D_t$. \subsubsection{Quantity $A_t$ in \eqref{defi:A}} Unlike the case of $\mathbb{Z}_{2}$ synchronization where the denoising functions are smooth everywhere, caution needs to be exercised when handling discontinuity points in sparse spiked Wigner models. 
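\medskip\noindent Before proceeding, it is worth keeping in mind the structural fact \eqref{eqn:gradient-sparse} that makes these discontinuities manageable: with $\tau_t\asymp\sqrt{\frac{\log n}{n}}$, the indicator $\eta_t'(v)$ is supported on at most about $k+t$ coordinates. The short Python sketch below (with purely illustrative parameter choices) checks this count on a synthetic draw of $v=\alpha v^\star+\sum_j\beta^j\phi_j$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, k, t, lam = 4000, 20, 10, 1.0
tau = 2.0 * np.sqrt(np.log(n) / n)

v_star = np.zeros(n)
v_star[:k] = 1.0 / np.sqrt(k)

beta = rng.standard_normal(t - 1)
beta /= np.linalg.norm(beta)                        # unit vector beta
phi = rng.standard_normal((t - 1, n)) / np.sqrt(n)  # phi_j ~ N(0, I_n / n)
v = lam * v_star + beta @ phi                       # alpha taken to be of order lambda

active = np.abs(v) > tau
print(active.sum(), k + t)                          # active set is no larger than about k + t
\end{verbatim}
With this picture in mind, we now turn to the formal control of $A_t$.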
Consider any given $\mu=[\mu^1,\cdots,\mu^{t-1}]$, $\alpha \in \ensuremath{\mathbb{R}}$, $\beta \in [\beta^1,\cdots,\beta^{t-1}]$ and $\tau,\gamma\in \ensuremath{\mathbb{R}}$, and let us take \begin{subequations} \label{eq:defn-Phi-Theta-theta-sparse} \begin{align} &\Phi \coloneqq \sqrt{n}(\phi_1,\ldots,\phi_{t-1}), \qquad \theta \coloneqq \big[ \mu, \alpha, \beta, \tau, \gamma \big] \in \mathcal{S}^{t-2} \times \ensuremath{\mathbb{R}} \times \mathcal{S}^{t-2} \times \ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{R}}, \\ v \coloneqq \alpha v^\star + &\sum_{j = 1}^{t-1} \beta^j\phi_j \in \ensuremath{\mathbb{R}}^n \quad \Theta \coloneqq \left\{ \theta = \big[ \mu, \alpha, \beta, \tau \big] \,\Big|\, \alpha \asymp \gamma^{-1}\asymp \lambda, \|\mu\|_2=\|\beta\|_2=1, \tau \asymp \sqrt{\frac{\log n}{n}}\right\}. \end{align} \end{subequations} Recall that $A_t$ consists of two parts: $\big\langle \sum_{j = 1}^{t-1} \mu^j\phi_j, \eta_{t}(v_t) \big\rangle$ and $\big\langle\eta_t^{\prime}\big\rangle \sum_{j = 1}^{t-1} \mu^j\beta_{t-1}^j$. In order to bound the first part of $A_t$, we intend to first derive a uniform control of the following function: \begin{align*} f_{\theta}(\Phi) \coloneqq \Big\langle \sum_{j = 1}^{t-1} \mu^j\phi_j, \eta(v)\Big\rangle \end{align*} over all $\theta \in \Theta$, where we define (with its dependency on $\theta$ suppressed in the notation) \begin{align} \eta(x) \coloneqq \gamma\, \mathrm{sign}(x) \circ (|x|-\tau 1)_+. \label{eq:eta-notation-suppressed-sparse} \end{align} Towards this end, we first repeat the analysis in Section~\ref{sec:tight-estimate-gamma-sparse} (in particular, \eqref{eqn:summer} and \eqref{eqn:quartet15}) to derive \begin{equation} \big\| (|v|-\tau 1)_+ \big\|_2 \asymp \lambda \qquad \text{and} \qquad \|\eta(v)\|_2 = \gamma\, \big\| (|v|-\tau 1)_+ \big\|_2 \asymp 1, \label{eq:eta-v-norm-sparse} \end{equation} for any $\theta \in \Theta$. We can then invoke the derivative calculation in~\eqref{eqn:sparse-derivative} to arrive at \begin{align*} \left\|\nabla_{\Phi} f_{\theta}(\Phi)\right\|_2 &\le \frac{\|\mu\|_2}{\sqrt{n}}\left\|\eta(v)\right\|_2 + \frac{\|\beta\|_2}{\sqrt{n}}\bigg\|\sum_{j = 1}^{t-1} \mu^j\phi_j \circ \eta^{\prime}(v)\bigg\|_2 \\ &\lesssim \frac{1}{\sqrt{n}} + \frac{1}{\lambda\sqrt{n}}\Bigg(\sum_{l=1}^{k+t}\bigg|\sum_{j=1}^{t-1}\mu^{j}\phi_{j}\bigg|_{(l)}^{2}\Bigg)^{1/2} \lesssim \frac{1}{\sqrt{n}}\bigg(1+\frac{1}{\lambda}\sqrt{\frac{(t+k)\log n}{n}}\bigg)\asymp\frac{1}{\sqrt{n}}, \end{align*} where the second inequality applies \eqref{eq:eta-v-norm-sparse} and \eqref{eqn:sparse-eta-a-prod}, the third inequality invokes the property of $\mathcal{E}$ in \eqref{eq:eps-set-2}, and the last relation is valid as long as $\lambda^2 \gtrsim \frac{k\log n}{n}$ and $t\lesssim \frac{\lambda^2 n}{\log n}$. Additionally, it is trivially seen that $ f_{\theta}(\Phi) $ as a function of $\theta$ is $n^{100}$-Lipschitz for any given $\Phi\in \mathcal{E}$ and $|f_{\theta}(\Phi) |\lesssim n^{100} \big( \max_j \|\phi_j\|_2 \big)^{100}$. As a result, invoke Corollary~\ref{cor:Gauss} in conjunction with \eqref{eqn:brahms-conc} to arrive at % \begin{align} \label{eqn:sparse-At-1} \sup_{\theta\in \Theta}\Big|f_{\theta}(\Phi) - \mathbb{E}\left[f_{\theta}(\Phi)\right]\Big| &\lesssim \sqrt{\frac{t\log n}{n}}, \end{align} with probability at least $1-O(n^{-11})$. 
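\medskip\noindent The expectation $\ensuremath{\mathbb{E}}[f_{\theta}(\Phi)]$ appearing above will be evaluated below via Stein's lemma (Gaussian integration by parts), which equates it with $(\mu^{\top}\beta)\,\ensuremath{\mathbb{E}}[\langle\eta'(v)\rangle]$. The following Monte-Carlo sketch (illustrative parameters only; $\gamma$ is simply set to $1$) verifies this identity numerically for the soft-thresholding denoiser.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, k, alpha, gamma = 2000, 10, 1.0, 1.0
tau = 2.0 * np.sqrt(np.log(n) / n)
tm1 = 3                                             # number of Gaussian directions, i.e. t - 1

v_star = np.zeros(n)
v_star[:k] = 1.0 / np.sqrt(k)
mu = np.array([0.6, 0.8, 0.0])                      # unit vectors with mu^T beta = 0.96
beta = np.array([0.8, 0.6, 0.0])

n_mc = 5000
lhs, rhs = 0.0, 0.0
for _ in range(n_mc):
    phi = rng.standard_normal((tm1, n)) / np.sqrt(n)
    v = alpha * v_star + beta @ phi
    eta_v = gamma * np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
    lhs += (mu @ phi) @ eta_v                       # < sum_j mu^j phi_j , eta(v) >
    rhs += (mu @ beta) * gamma * np.mean(np.abs(v) > tau)
print(lhs / n_mc, rhs / n_mc)                       # the two averages agree up to Monte-Carlo error
\end{verbatim}
This is precisely the identity invoked when the two parts of $A_t$ are combined below.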
Next, we move on to consider the second part of $A_t$, namely, \begin{align*} \big\langle\eta_t^{\prime}(v_t)\big\rangle \cdot \sum_{j = 1}^{t-1} \mu_t^j \beta_{t-1}^j \qquad \text{where } \left\langle\eta_t^{\prime}(v_t)\right\rangle &= \frac{\gamma_t}{n} \sum_{i = 1}^n \ind\Big(\big|\alpha_tv_i^{\star} + \sum_j\beta_{t-1}^j\phi_{j, i}\big| > \tau_t\Big). \end{align*} Given that the indicator function is not Lipschitz continuous, we resort to Corollary~\ref{cor:Gauss-jump} to control it. For any given $\theta \in \Theta$, define \begin{align} \label{eqn:hfunction} h_{i, \theta}(\Phi_{i, :}) \coloneqq \bigg| \alpha v_i^{\star} + \sum_{j=1}^{t-1}\beta^j\phi_{j, i} \bigg|, \qquad 1\leq i\leq n, \end{align} where $\Phi_{i,:}$ denotes the $i$-th row of $\Phi$. Clearly, for any $\theta,\widetilde{\theta}\in \Theta,$ one can easily verify that \[ \big| h_{i, \theta}(\Phi_{i, :}) - h_{i, \widetilde{\theta}}(\Phi_{i, :}) \big| \leq n^{100} \big\|\theta - \widetilde{\theta}\|_2 \] for any $\Phi\in \mathcal{E}$; and for any $\theta \in \Theta$ and $\tau \leq n$, one has \[ \mathbb{P}\Big( \tau - 400n^{-100} \leq h_{i,\theta}(\Phi_{i,:}) \leq \tau + 400 n^{-100} \Big) \lesssim n^{-1}. \] Therefore, Corollary~\ref{cor:Gauss-jump} together with \eqref{eqn:brahms-conc} reveals that with probability at least $1-O(n^{-11})$, \begin{align} \sup_{\theta\in \Theta} \left|\sum_{i = 1}^n \ind\left(h_{i, \theta} > \tau\right) - \sum_{i = 1}^n \mathbb{P}\left(h_{i, \theta} > \tau\right)\right| &\lesssim \sup_{\theta\in \Theta} \sqrt{\sum_{i = 1}^n \mathbb{P}\left(h_{i, \theta} > \tau\right)t\log n} + t\log n \notag\\ & \lesssim \sqrt{(k + n\cdot O(n^{-11})) t\log n} + t\log n \lesssim \sqrt{t(t+k)} \log n \label{eq:sum-h-tau-sparse-UB} \end{align} holds simultaneously for all $\theta\in \Theta$, where the last inequality comes from \eqref{eqn:gradient-sparse} given that $\tau \asymp \sqrt{\frac{\log n}{n}}$. Recognizing that $|\mu^{\top}\beta| \leq 1$, we further have \begin{align} \sup_{\theta\in \Theta} \left| \mu^{\top} \beta \sum_{i = 1}^n \ind\left(h_{i, \theta} > \tau\right) - \mu^{\top} \beta\sum_{i = 1}^n \mathbb{P}\left(h_{i, \theta} > \tau\right)\right| &\lesssim |\mu^{\top}\beta| \sqrt{t(t+k)\log n} \lesssim \sqrt{t(t+k)} \log n. 
\label{eqn:sparse-At-3} \end{align} To summarize, let us decompose the quantity of interest in \eqref{defi:A} as follows: \begin{align*} &\Bigg|\left\langle \sum_{j = 1}^{t-1} \mu^j\phi_j, \eta_{t}(v_t)\right\rangle - \big\langle\eta_t^{\prime}(v_t)\big\rangle \sum_{j = 1}^{t-1} \mu^j\beta_{t-1}^j\Bigg| \\ &\qquad \qquad \leq \sup_{\theta\in \Theta} \big|f_{\theta}(\Phi) - \ensuremath{\mathbb{E}} [f_{\theta}(\Phi)]\big| + \sup_{\theta\in \Theta} \Big|\ensuremath{\mathbb{E}} [f_{\theta}(\Phi)] - \big\langle\eta^{\prime}(v)\big\rangle \cdot \sum_{j = 1}^{t-1} \mu^j \beta^j\Big|\\ &\qquad \qquad = \sup_{\theta\in \Theta} \big|f_{\theta}(\Phi) - \ensuremath{\mathbb{E}} [f_{\theta}(\Phi)]\big| + \sup_{\theta\in \Theta} \Big|\frac{\gamma}{n}\sum_{i = 1}^n \mu^\top \beta \cdot\mathbb{P}\left(h_{i, \theta} > \tau\right) - \big\langle\eta^{\prime}(v)\big\rangle \mu^{\top} \beta \Big|, \end{align*} where the last equality follows from Stein's lemma, that is, \begin{align*} \ensuremath{\mathbb{E}} [f_{\theta}(\Phi)] = \mathbb{E}\left[\left\langle \sum_{j = 1}^{t-1} \mu^j\phi_j, \eta(v)\right\rangle\right] = \ensuremath{\mathbb{E}}\left[\left\langle\eta^{\prime}(v)\right\rangle \sum_{j = 1}^{t-1} \mu^j\beta^j\right] = \frac{\gamma}{n}\sum_{i = 1}^n \mu^\top \beta \cdot\mathbb{P}\left(h_{i, \theta} > \tau\right). \end{align*} Taking the decomposition above collectively with \eqref{eqn:sparse-At-1} and \eqref{eqn:sparse-At-3} yields \begin{align} \Bigg|\left\langle \sum_{j = 1}^{t-1} \mu^j\phi_j, \eta_{t}(v_t)\right\rangle - \left\langle\eta_t^{\prime}\right\rangle \sum_{j = 1}^{t-1} \mu^j\beta_{t-1}^j\Bigg| &\lesssim \sqrt{\frac{t\log n}{n}} + \frac{\sqrt{t(t+k)}\log n}{n} \asymp \sqrt{\frac{t\log n}{n}} =: A_t, \label{eqn:sparse-At} \end{align} where the last relation is valid under Assumption \eqref{cond:t-k}. \subsubsection{Quantity $B_t$ in \eqref{defi:B}} \label{sec:control-Bt-sparse} Recall that quantity $B_{t}$ is concerned with bounding $v^{\star\top}\eta_{t}(v_t)$. To do so, let us again adopt the definitions of $\Phi, \theta, \Theta, v$ as in \eqref{eq:defn-Phi-Theta-theta-sparse}, and define the following function parameterized by $\theta$: \begin{align*} f_{\theta}(\Phi) \coloneqq v^{\star\top}\eta(v), \end{align*} with the function $\eta$ defined in \eqref{eq:eta-notation-suppressed-sparse}. In order to bound $v^{\star\top}\eta_{t}(v_t)$, we first develop a bound on $f_{\theta}(\Phi)$ that is valid simultaneously for all $\theta\in \Theta$. Towards this end, consider any fixed parameter $\theta \in \Theta$, and apply \eqref{eqn:sparse-derivative} to reach \begin{align} \left\|\nabla_{\Phi} f_{\theta}(\Phi)\right\|_2 &\le \frac{\|\beta\|_2}{\sqrt{n}}\left\|v^{\star} \circ \eta^{\prime}(v)\right\|_2 \lesssim \frac{1}{\sqrt{n}} \cdot \frac{1}{\lambda } \|v^\star\|_2 \lesssim \sqrt{\frac{1}{n\lambda^2}}, \label{eq:grad-f-Bt-sparse} \end{align} where we have used property~\eqref{eqn:sparse-eta-a-prod} as well as the fact that $\|\beta\|_2=1$. Additionally, it is easily seen that $\left\|\nabla_{\theta} f_{\theta}(\Phi)\right\|_2 \lesssim n^{100}$ for any $\Phi\in \mathcal{E}$ and $|f_{\theta}(\Phi)|\lesssim n^{100} \max_j \|\phi_j\|_2^{100}$.
As a consequence, Corollary~\ref{cor:Gauss-jump} taken together with \eqref{eqn:brahms-conc} indicates that, with probability at least $1-O(n^{-11})$, \begin{align} \notag \left|v^{\star\top}\eta_{t}(v_t) - v^{\star\top}\int\eta_t\left(\alpha_tv^{\star} + \frac{1}{\sqrt{n}}x\right)\varphi_n(\mathrm{d} x)\right| &\le \sup_{\theta} \left|v^{\star\top}\eta - v^{\star\top}\int\eta\left(\alpha v^{\star} + \frac{1}{\sqrt{n}}x\right)\varphi_n(\mathrm{d} x)\right| \\ &\lesssim \sqrt{\frac{t\log n}{n\lambda^2}} =: B_t. \label{eqn:sparse-Bt} \end{align} \subsubsection{Bounding quantity $\kappa_t$} This subsection develops an upper bound on the quantity $\kappa_t^2$ defined in \eqref{defi:kappa}, which is crucial in controlling $D_t$. From the choices of the denoising functions, $\eta_{t}^{\prime\prime}$ is well-defined and equal to $0$ except at two non-differentiable points. To bound $\kappa_t^2$, it is thus sufficient to control quantities $\langle \int[\eta_{t}^{\prime}(\alpha_tv^\star + \frac{1}{\sqrt{n}}x\big)]^2\varphi_n(\mathrm{d} x) \rangle$ and $\langle\int[x\eta_{t}^{\prime}\big(\alpha_tv^\star + \frac{1}{\sqrt{n}}x\big)]^2 \varphi_n(\mathrm{d} x)\rangle$ separately, given that $\|\beta_{t-1}\|_2=1$. Let us first consider the term $\langle \int[\eta_{t}^{\prime}(\alpha_tv^\star + \frac{1}{\sqrt{n}}x\big)]^2\varphi(\mathrm{d} x) \rangle$. Recall our induction hypothesis $\alpha_t \asymp \lambda$ as well as our assumptions $\lambda \gtrsim \sqrt{\frac{k\log n}{n}}$ and $\tau_t \asymp \sqrt{\frac{\log n}{n}}$. We shall divide the index set $[n]$ into two parts and look at each part separately. For those indices $i$ obeying $v_i^{\star}\neq 0$, one has the trivial upper bound \begin{align} \int\ind\Big(\Big|\alpha_{t}v_{i}^{\star}+\frac{1}{\sqrt{n}}x\Big|>\tau_{t}\Big)\varphi(\mathrm{d} x) \leq 1. \label{eq:indicator-large-sparse-123} \end{align} Otherwise, for those entries with $v_i^{\star}=0$, we find that \begin{align*} \int\ind\Big(\Big|\alpha_{t}v_{i}^{\star}+\frac{1}{\sqrt{n}}x\Big|>\tau_{t}\Big)\varphi(\mathrm{d} x) = \int\ind\Big(\Big|\frac{1}{\sqrt{n}}x\Big|>\tau_{t}\Big)\varphi(\mathrm{d} x) \leq 2\int_{\sqrt{n}\tau_t}^{\infty} \varphi(\mathrm{d} x) \lesssim \frac{1}{n}, \end{align*} provided that $\tau_t \geq 2 \sqrt{\frac{\log n}{n}}$. Putting these two cases together gives \begin{align} \notag \left\langle\int\Big[\eta_{t}^{\prime}\Big(\alpha_tv + \frac{\|\beta_{t-1}\|_2}{\sqrt{n}}x\Big)\Big]^2\varphi_n(\mathrm{d} x)\right\rangle &= \gamma_t^2\left\langle\int \ind\Big(\Big|\alpha_tv^{\star} + \frac{1}{\sqrt{n}}x\Big| > \tau_t 1\Big)\varphi_n(\mathrm{d} x)\right\rangle \\ & \lesssim \frac{1}{n\lambda^2} \left( k\cdot 1 + (n-k)\cdot \frac{1}{n} \right) \asymp \frac{k}{n\lambda^2}. \label{eqn:kt-1} \end{align} Similarly, it can also be established that \begin{align} \label{eqn:kt-2} \left\langle\int\left[x\eta_{t}^{\prime}\left(\alpha_tv^{\star} + \frac{1}{\sqrt{n}}x\right)\right]^2\varphi_n(\mathrm{d} x)\right\rangle &= \gamma_t^2\left\langle\int x^2\ind\left(\left|\alpha_tv^{\star} + \frac{1}{\sqrt{n}}x\right| > \tau_t 1\right)\varphi_n(\mathrm{d} x)\right\rangle \lesssim \frac{k}{n\lambda^2}. \end{align} Consequently, putting the above two cases together with the definition \eqref{defi:kappa} yields \begin{align} \label{eqn:sparse-kappa} \kappa_t^2 \lesssim \frac{k}{n\lambda^2}. \end{align} \subsubsection{Quantity $D_t$ in \eqref{defi:D}} We now turn to the analysis of $D_{t}$. 
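\medskip\noindent Before doing so, we record a quick numerical sanity check (again with purely illustrative parameter choices) of the estimate \eqref{eqn:sparse-kappa} just obtained: both Gaussian integrals entering the definition \eqref{defi:kappa} of $\kappa_t$ are of order $k/(n\lambda^2)$ once $\gamma_t\asymp\lambda^{-1}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, k, lam = 2000, 10, 1.0
alpha, gamma = lam, 1.0 / lam                       # alpha ~ lambda and gamma_t ~ 1/lambda
tau = 2.0 * np.sqrt(np.log(n) / n)

v_star = np.zeros(n)
v_star[:k] = 1.0 / np.sqrt(k)

n_mc = 500
term1 = term2 = 0.0
for _ in range(n_mc):
    x = rng.standard_normal(n)                      # x ~ N(0, I_n)
    ind = (np.abs(alpha * v_star + x / np.sqrt(n)) > tau).astype(float)
    term1 += np.mean((gamma * ind) ** 2)            # <[eta_t'(alpha v* + x/sqrt(n))]^2>
    term2 += np.mean((x * gamma * ind) ** 2)        # <[x eta_t'(alpha v* + x/sqrt(n))]^2>
print(term1 / n_mc, term2 / n_mc, k / (n * lam**2)) # all three quantities are of the same order
\end{verbatim}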
Note that $\eta_{t}^{\prime\prime}$ is well-defined and equal to $0$ except at two non-differentiable points. Hence, to control $D_{t}$, it is sufficient to consider the following function: \begin{align} \Big\|\sum_{j = 1}^{t-1} \mu^j_t\phi_j \circ \eta_{t}^{\prime}(v_t) \Big\|_2^2 &= \gamma_t^2 \sum_{i = 1}^n \Big(\sum_{j = 1}^{t-1} \mu^j_t \phi_{j, i}\Big)^2 \ind\Big(\Big|\alpha_tv_i^{\star} + \sum_j\beta_{t-1}^j\phi_{j, i}\Big| > \tau_t\Big). \end{align} Setting the stage, let us define $\Phi, \theta, \Theta, v, \eta$ as in \eqref{eq:defn-Phi-Theta-theta-sparse} and \eqref{eq:eta-notation-suppressed-sparse}, and introduce the following functions: \begin{align*} f_{i, \theta}(\Phi_{i, :}) \coloneqq \bigg(\sum_{j = 1}^{t-1} \mu^j\phi_{j, i}\bigg)^2, \qquad \text{ and } \qquad h_{i, \theta}(\Phi_{i, :}) \coloneqq \bigg| \alpha v_i^{\star} + \sum_{j=1}^{t-1}\beta^j\phi_{j, i} \bigg| . \end{align*} For every fixed $\mu\in \mathcal{S}^{t-2}$, $\sum_{j = 1}^{t-1} \mu^j\phi_{j, i}$ is Gaussian with mean zero and variance $1/n$; therefore, $f_{i,\theta} \ge 0$ is $\frac{1}{n}$-subexponential with $\mathbb{E}[f_{i,\theta}] = 1/n$ (see \citet[Lemma 2.7]{vershynin2018high}). In addition, it can be straightforwardly checked that (i) $\|\nabla_{\theta} f_{i, \theta}(\Phi_{i, :})\|_2\lesssim n^{100}$ for any $\Phi\in \mathcal{E}$; (ii) $|f_{i, \theta}(\Phi_{i, :})|\lesssim n^{100} \|\Phi\|_{\mathrm{F}}^{100}$; and (iii) $\mathbb{P}\big(\tau - 400n^{-100} \leq h_{i, \theta}(\Phi_{i, :}) \leq \tau + 400n^{-100}\big)\lesssim n^{-1}$ for any $\tau\in \ensuremath{\mathbb{R}}$ and any $\theta \in \Theta$. By virtue of Corollary~\ref{cor:Gauss-jump} and \eqref{eqn:brahms-conc}, we can readily see that, with probability at least $1-O(n^{-11})$, \begin{align*} &\sup_{\theta \in \Theta}\left|\sum_{i = 1}^n \Big(\sum_{j = 1}^{t-1} \mu^j\phi_{j, i}\Big)^2\ind\Big( \Big| \alpha v_i^{\star} + \sum_{j=1}^{t-1}\beta^j\phi_{j, i} \Big| > \tau\Big) - \mathbb{E}\bigg[\Big\|\sum_{j = 1}^{t-1} \mu^j\phi_j \circ \ind\Big( \Big| \alpha v^{\star} + \sum_{j=1}^{t-1}\beta^j\phi_{j} \Big| > \tau 1\Big)\Big\|_2^2\bigg]\right| \\ &\qquad \lesssim \sup_{\theta\in \Theta}\frac{1}{n}\sqrt{\sum_{i = 1}^n \mathbb{P}\left( \Big| \alpha v_i^{\star} + \sum_{j = 1}^{t-1}\beta^j\phi_{j, i} \Big| > \tau\right)t\log^3 n} + \frac{t\log^2 n}{n} \lesssim \sqrt{\frac{t(t+k)\log^4 n}{n^2}} \end{align*} holds simultaneously for all $\theta\in \Theta$, where the last inequality follows from the same argument as in \eqref{eq:sum-h-tau-sparse-UB}. Additionally, recalling that for general denoising functions, we have established relation~\eqref{eqn:beethoven-vive}. When specialized to the current setting, it asserts that \begin{align*} \mathbb{E}\Bigg[\Big\|\sum_{j = 1}^{t-1} \mu^j\phi_j \circ \eta^{\prime}(v)\Big\|_2^2\Bigg] - \max\left\{ \Big\langle\int\Big[x\eta^{\prime}\Big(\alpha v^{\star} + \frac{1}{\sqrt{n}}x\Big)\Big]^2\varphi_n(\mathrm{d} x)\Big\rangle, \Big\langle\int\Big[\eta^{\prime}\Big(\alpha v^{\star} + \frac{1}{\sqrt{n}}x\Big)\Big]^2\varphi_n(\mathrm{d} x)\Big\rangle\right\} \leq 0. 
\end{align*} Putting the above bounds together, using the definition \eqref{defi:kappa} of $\kappa_t$, and recognizing that $(\mu_t, \alpha_t, \beta_{t-1}, \tau_t, \gamma_t)\in \Theta$, we can obtain \begin{align} & \bigg\|\sum_{j=1}^{t-1}\mu^{j}\phi_{j}\circ\eta_{t}^{\prime}(v_{t})\bigg\|_{2}^{2}-\kappa_{t}^{2} \notag\\ & \lesssim\gamma_{t}^{2}\sup_{\theta\in\Theta}\Bigg|\sum_{i=1}^{n}\Big(\sum_{j=1}^{t-1}\mu^{j}\phi_{j,i}\Big)^{2}\ind\Big(\Big|\alpha v_{i}^{\star}+\sum_{j=1}^{t-1}\beta^{j}\phi_{j,i}\Big|>\tau\Big)-\mathbb{E}\bigg[\Big\|\sum_{j=1}^{t-1}\mu^{j}\phi_{j}\circ\ind\Big(\Big|\alpha v^{\star}+\sum_{j=1}^{t-1}\beta^{j}\phi_{j}\Big|>\tau\Big)\Big\|_{2}^{2}\bigg]\Bigg|\notag\\ & \quad+\sup_{\theta\in\Theta}\left\{ \mathbb{E}\Bigg[\Big\|\sum_{j=1}^{t-1}\mu^{j}\phi_{j}\circ\eta^{\prime}(v)\Big\|_{2}^{2}\Bigg]-\max\left\{\Big\langle\int\Big[x\eta^{\prime}\Big(\alpha v^{\star}+\frac{1}{\sqrt{n}}x\Big)\Big]^{2}\varphi_{n}(\mathrm{d} x)\Big\rangle,\Big\langle\int\Big[\eta^{\prime}\Big(\alpha v^{\star}+\frac{1}{\sqrt{n}}x\Big)\Big]^{2}\varphi_{n}(\mathrm{d} x)\Big\rangle\right\}\right\} \notag\\ & \lesssim\gamma_{t}^{2}\sqrt{\frac{t(t+k)\log^{4}n}{n^{2}}} \asymp \frac{1}{\lambda^{2}}\sqrt{\frac{t(t+k)\log^{4}n}{n^{2}}}\eqqcolon D_{t}, \label{eqn:sparse-dt} \end{align} where we remind the reader that $\gamma_t\asymp \lambda^{-1}$ (see \eqref{eqn:gamma-t-evolution}). \section{Analysis for spectral initialization: Proof of Theorem~\ref{thm:recursion-spectral}} \label{sec:pf-thm-spectral} To establish Theorem~\ref{thm:recursion-spectral}, our strategy is to construct some auxiliary AMP sequences that are intimately connected to spectral initialization (obtained via a sequence of power iterations), thus allowing us to analyze spectrally initialized AMP by means of the theory developed in Theorems~\ref{thm:recursion} and \ref{thm:main}. Note that the auxiliary AMP sequence to be introduced below is designed only for analysis purposes, and is not implemented during the execution of the real algorithm. Throughout this section, we denote by $\lambda_{\max}$ (resp.~$\widehat{v}^{\star}$) the leading eigenvalue (resp.~eigenvector) of $M$, and let $\lambda_i(M)$ represent the $i$-th largest eigenvalue (in magnitude) of $M$. \subsection{Preliminaries: non-asymptotic eigenvalue and eigenvector analysis} \label{sec:preliminary-evalue-evector} Understanding the performance of spectral methods requires careful control of the eigenvalues and eigenvectors of the random matrices of interest. Before embarking on the proof, we gather several useful non-asymptotic eigenvalue/eigenvector perturbation bounds. \begin{itemize} \item \citet[Corollary~3.9]{bandeira2016sharp} asserts that (by taking $\varepsilon$ therein to be $(\log n/n)^{1/3}$) \begin{subequations} \label{eq:banderia-W-ub} \begin{align} \| W \| \leq 2 + O\bigg( \Big(\frac{\log n}{n}\Big)^{1/3} \bigg), \end{align} holds with probability at least $1-O(n^{-15})$. This combined with Weyl's inequality further leads to \begin{align} \big|\lambda_2(M)\big| \leq \| W \| \leq 2 + O\bigg( \Big(\frac{\log n}{n}\Big)^{1/3} \bigg). 
\label{eq:banderia-lambda-i-ub} \end{align} \end{subequations} \item \citet[Theorem 3.1]{peng2012eigenvalues} establishes that, with probability at least $1-O(n^{-11})$ one has % \begin{align} \lambda+\frac{1}{\lambda}- C_9\sqrt{\frac{ \log n}{n(\lambda-1)^{5}}} \leq \lambda_{\max} \leq \lambda+\frac{1}{\lambda}+C_9 \sqrt{\frac{ \log n}{n}} \label{eq:minyu-lambda-LB} \end{align} % for some large enough constant $C_9>0$, provided that $ 1 + \big(\frac{\log n}{n}\big)^{1/5} < \lambda = O(1)$. \item Applying Weyl's inequality (i.e., $|\lambda_{2}(M)| \leq \|W\|$) and \citet[Theorem 6.1]{simchowitz2018tight} (with $\epsilon=\frac{1}{8}\frac{\lambda + \frac{1}{\lambda}-2}{\lambda + \frac{1}{\lambda}}\min\big\{\frac{1}{2},\frac{1}{\lambda^2-1}\big\}$ and $\kappa=1/2$ taken therein) yield \begin{subequations} \begin{align} \lambda_{\max}-|\lambda_{2}(M)| &\geq \lambda_{\max} - \| W \| \geq \frac{\lambda + \frac{1}{\lambda}-2}{4} = \frac{(\lambda-1)^2}{4\lambda} \label{eq:lambda-1-gap-non-asymptotic-UCB} \\ \bigg|\lambda_{\max}-\lambda-\frac{1}{\lambda}\bigg| &\leq\min\left\{ \frac{(\lambda-1)^{2}}{16\lambda},\frac{1}{8\lambda}\cdot\frac{\lambda-1}{\lambda+1}\right\} \label{eq:lambda-1-non-asymptotic-UCB} \end{align} \end{subequations} with probability at least $1-O(n^{-11})$, provided that \begin{align} \lambda-1\geq C_{\lambda}\Big(\frac{\log (n\lambda) }{n}\Big)^{1/6} \label{eq:lambda-1-gap-condition-UCB} \end{align} for some sufficiently large constant $C_{\lambda}>0$. A direct consequence of \eqref{eq:lambda-1-non-asymptotic-UCB} is that \begin{subequations} \label{eq:lambda-max-two} \begin{align} \lambda_{\max}-2 &\geq\lambda+\frac{1}{\lambda}-2-\frac{(\lambda-1)^{2}}{16\lambda}=\frac{15(\lambda-1)^{2}}{16\lambda}>0, \\ \lambda_{\max} &\leq \lambda+\frac{1}{\lambda} + \frac{1}{8\lambda} \leq 3\lambda. \end{align} \end{subequations} \item In addition, we make note of an immediate consequence of \eqref{eq:lambda-1-non-asymptotic-UCB} as follows: \begin{equation} |\widetilde{\lambda} -\lambda|\leq\min\left\{ C_9\sqrt{\frac{ \log n}{n(\lambda-1)^{7}}}, \frac{\lambda-1}{4},\frac{1}{2(\lambda+1)}\right\}, \label{eq:lambda-max-correct-bound} \end{equation} where we recall $\widetilde{\lambda}\coloneqq\frac{\lambda_{\max}+\sqrt{\lambda_{\max}^{2}-4}}{2}.$ \begin{proof}[Proof of inequality \eqref{eq:lambda-max-correct-bound}] In view of \eqref{eq:lambda-1-non-asymptotic-UCB}, one can write $\lambda_{\max}=\lambda+\frac{1}{\lambda}+\Delta$ for some $\Delta$ with $|\Delta|\leq\min\big\{ C_9\sqrt{\frac{ \log n}{n(\lambda-1)^{5}}}, \frac{(\lambda-1)^{2}}{16\lambda},\frac{1}{8\lambda}\frac{\lambda-1}{\lambda+1}\big\}$. It is readily seen that \begin{align*} \Bigg|\frac{\lambda_{\max}+\sqrt{\lambda_{\max}^{2}-4}}{2}-\lambda\Bigg| & =\Bigg|\frac{\lambda+\frac{1}{\lambda}+\Delta+\sqrt{\left(\lambda+\frac{1}{\lambda}+\Delta\right)^{2}-4}}{2}-\lambda\Bigg|\\ & =\Bigg|\frac{\sqrt{\left(\lambda+\frac{1}{\lambda}+\Delta\right)^{2}-4}-\sqrt{\left(\lambda+\frac{1}{\lambda}\right)^{2}-4}}{2}+\frac{\Delta}{2}\Bigg|\\ & \leq\frac{1}{2}\frac{|2\Delta\left(\lambda+\frac{1}{\lambda}\right)+\Delta^{2}|}{\sqrt{\left(\lambda+\frac{1}{\lambda}+\Delta\right)^{2}-4}+\sqrt{\left(\lambda+\frac{1}{\lambda}\right)^{2}-4}}+\frac{|\Delta|}{2}\\ & \leq \frac{3|\Delta|\lambda}{\lambda-\frac{1}{\lambda}}+\frac{|\Delta|}{2} \leq\frac{4|\Delta|\lambda}{\lambda-1}, \end{align*} where the last line follows since $\lambda+\frac{1}{\lambda}\leq2\lambda$ and $|\Delta|\leq\lambda+\frac{1}{\lambda}$. This directly concludes the proof. 
\end{proof} \end{itemize} Furthermore, the following lemma develops a non-asymptotic bound on the correlation between the leading eigenvector $\widehat{v}^{\star}$ and the ground truth $v^\star$; the proof can be found in Section~\ref{sec:lem:cor-evector}. \begin{lems} \label{lem:cor-evector} Suppose that $1 + C_{\lambda}\big(\frac{\log n }{n}\big)^{1/9} \leq \lambda= O(1)$ for some large enough constant $C_{\lambda }>0$. The correlation between $v^\star$ and the leading eigenvector $\widehat{v}^{\star}$ of $M$ satisfies \begin{equation} \label{eq:cor-evector} \big|\big\langle\widehat{v}^{\star},v^{\star}\big\rangle\big|=\sqrt{1-\frac{1}{\lambda^{2}}}+O\Big(\sqrt{\frac{\log n}{(\lambda-1)^{9}n}}\Big) \end{equation} with probability at least $1-O(n^{-11})$. \end{lems} \subsection{Constructing an AMP-style basis that covers $\widehat{v}^{\star}$ approximately} \label{sec:spec-vhatstar} In this subsection, we design an auxiliary AMP sequence that allows us to construct a set of orthonormal vectors $\{y_k\}_{1\leq k\leq s}$, whose span {\em approximately} covers the leading eigenvector $\widehat{v}^{\star}$ of $M$. \paragraph{Construction of auxiliary AMP iterates.} Let us consider the following iterative procedure initialized at the truth $v^\star$: \begin{align} \label{eqn:amp-for-spectral} \omega_{t+1} = W \omega_t - \omega_{t-1}, \qquad (t\geq 1) \qquad\text{with }~ \omega_0 = 0 ~\text{ and }~ \omega_1 = v^\star. \end{align} Each step of this procedure involves a power iteration $W \omega_t$, while at the same time it takes the form of AMP updates (by subtracting $\omega_{t-1}$ and choosing the denoiser to be the identity function). Note, however, that the power iteration $W \omega_t$ in \eqref{eqn:amp-for-spectral} is concerned with only the noise matrix $W$, which stands in stark contrast to \eqref{eqn:AMP-updates} that consists of computing $(\lambda v^\star v^{\star\top} + W)\eta_t(x_t)$ and involves the signal component $\lambda v^\star v^{\star\top}$. In fact, the signal component comes into play in \eqref{eqn:amp-for-spectral} only through the initial vector $ \omega_1 = v^\star.$ \paragraph{Other auxiliary sequences derived from $\{\omega_k\}$.} Akin to our proof of Theorem~\ref{thm:recursion} (see Section~\ref{sec:pf-thm-recursion}), we find it useful to look at several auxiliary sequences $\{y_k\}_{k \ge 1}$, $\{\zeta_k^{\prime}\}$ and $\{\psi_k\}$ derived from $\{\omega_k\}$, which will assist in analyzing $\{\omega_t\}$. \begin{itemize} \item[(i)] Given that $\omega_1= v^{\star}$, we define \begin{align} y_1 \coloneqq \omega_1 = v^\star \in \mathcal{S}^{n-1},\qquad\text{and}\qquad W_1^{\prime} \coloneqq W. \label{eqn:y-auxiliary-init} \end{align} \item[(ii)] For each $2\leq t<n$, concatenate $\{y_{k}\}$ into a matrix $V_{t-1} \coloneqq [y_k]_{1 \le k \leq t-1} \in \ensuremath{\mathbb{R}}^{n\times (t-1)}$ and define \begin{equation} \label{eqn:spect-seq-1} \begin{aligned} y_t &\coloneqq \frac{\left(I - V_{t-1}V_{t-1}^{\top}\right)\omega_{t}}{\left\|\left(I - V_{t-1}V_{t-1}^{\top}\right)\omega_{t}\right\|_2}, \\ W_t^\prime &\coloneqq \left(I - y_{t-1}y_{t-1}^{\top}\right)W_{t-1}^{\prime}\left(I - y_{t-1}y_{t-1}^{\top}\right). \end{aligned} \end{equation} According to Lemma~\ref{lemma:zk-orthonormal}, the $y_k$'s constructed above are orthonormal, and $\omega_t \in \mathsf{span}\{y_1,\ldots,y_t\}$.
\item[(iii)] Additionally, if we generate $\{g_i^k\}_{1\leq i,k\leq n}$ as i.i.d.~$\mathcal{N}(0, \frac{1}{n})$ and define \begin{align} \zeta_k^{\prime} \coloneqq \Big(\frac{\sqrt{2}}{2} - 1\Big) y_ky_k^{\top}W_k^{\prime}y_k + \sum_{i = 1}^{k - 1} g_i^ky_i \in \ensuremath{\mathbb{R}}^n, \qquad 1\leq k\leq n , \end{align} then Lemma~\ref{lem:distribution} reveals that the $\psi_k$'s constructed below are i.i.d.~obeying \begin{align} \label{def:psi_k} \psi_k \coloneqq W_k^{\prime}y_k + \zeta_k^{\prime} \overset{\text{i.i.d.}}{\sim} \mathcal{N}\left(0, \frac{1}{n}I_n\right),\qquad 1 \le k \le n. \end{align} \end{itemize} Clearly, $\{y_k, W_k^\prime, \zeta_k^{\prime}, \psi_k\}$ plays the same role as $\{z_k, W_k, \zeta_k, \phi_k\}$ in expression \eqref{eqn:z-w-recursion} in the proof of Theorem~\ref{thm:recursion}. \paragraph{Connections between $\{\omega_k\}$ and spectral initialization.} We now discuss some important connections between $\{\omega_k\}$ and the leading eigenvector $\widehat{v}^{\star}$ of $M$. One basic fact to connect \eqref{eqn:amp-for-spectral} with the power method is that: $W^t v^{\star}$ can be linearly represented by the iterates $\{\omega_i\}_{1 \le i \le t+1}$, as stated in the following lemma. Intuitively, this fact makes sense as the update rule \eqref{eqn:amp-for-spectral} resembles that of the power method. \begin{lems} \label{lem:linear-combination-power-method-AMP} For every $t\geq 0$, $W^t v^{\star}$ is a linear combination of $\{\omega_i\}_{1 \le i \le t+1}$. \end{lems} \begin{proof}[Proof of Lemma~\ref{lem:linear-combination-power-method-AMP}] We shall establish this result by induction. First, the claim is trivially true for $t= 0$ since $\omega_1=v^{\star}$. Now, suppose the statement further holds true for $t-1$, i.e., \begin{align} \label{eqn:magic-flute} W^{t-1}v^{\star} = \sum_{i = 1}^{t} c_{t}^i \omega_i \qquad \text{for some coefficients } c_{t}^i, \end{align} and we would like to extend it to $t$. Towards this, observe that \begin{align*} W^tv^{\star} = W\big(W^{t-1}v^{\star}\big) = W\sum_{i = 1}^t c_t^i\omega_i = \sum_{i = 1}^t c_t^i(\omega_{i+1} + \omega_{i-1}), \end{align*} where the last step follows since $W\omega_i = \omega_{i+1}+\omega_{i-1}$ (cf.~\eqref{eqn:amp-for-spectral}). For notational simplicity, we shall also set \begin{align} c_t^i = 0 \qquad \text{for any }i > t \text{ or } i = 0. \label{eq:ct-i-zero-coefficient} \end{align} As a result, we can write \begin{subequations} \label{eqn:linear-induc} \begin{equation} W^t v^{\star} = \sum_{i = 1}^{t+1} c_{t+1}^i \omega_i \end{equation} with the coefficients \begin{align} c_{t+1}^{i}=c_{t}^{i+1}+c_{t}^{i-1} ; \label{eq:c-t-i-recursion} \end{align} here, we have invoked \eqref{eq:ct-i-zero-coefficient}. \end{subequations} This validates the claimed result. \end{proof} Moreover, it turns out that $\widehat{v}^{\star}$ can be approximately represented as (i) a linear combination of $\{y_k\}_{1 \le k \le s}$, and also (ii) a linear combination of the set of independent Gaussian vectors $\{\psi_k\}_{1 \le k \le s}$ (cf.~\eqref{def:psi_k}). This is asserted by the following lemma, whose proof can be found in Section~\ref{sec:pf-lem-spec}. \begin{lems} \label{lem:spec} Let $s = \frac{C_v\lambda^2 \log n}{(\lambda - 1)^2}$ for some sufficiently large constant $C_v>0$. Assume that $1+C_{\lambda} (\frac{\log^{7} n}{n})^{1/6} < \lambda =O(1)$ for some large enough constant $C_{\lambda}>0$. 
With probability at least $1-O(n^{-11})$, there exist coefficients $c_i $ $(1\leq i\leq s)$ such that \begin{align} \label{eqn:spectral-expansion} \bigg\| \widehat{v}^{\star} - \sum_{i = 1}^{s} c_iy_i \bigg\|_2 \lesssim \frac{\log^{3.5} n}{\sqrt{(\lambda - 1)^6n}} \qquad \text{and} \qquad \bigg\| \widehat{v}^{\star} - c_1v^{\star} + \frac{1}{\widetilde{\lambda}}\sum_{i = 1}^{s} c_i\psi_i \bigg\|_2 \lesssim \frac{\log^{3.5} n}{\sqrt{(\lambda - 1)^6n}}, \end{align} where $\widetilde{\lambda} \coloneqq \frac{2}{\lambda_{\max} - \sqrt{\lambda_{\max}^2 - 4}}$, and $c_{i+1} = \widetilde{\lambda}^{-1} c_i$ for all $i\geq 1$. \end{lems} This approximate linear representation of $\widehat{v}^{\star}$ plays a crucial role in explaining why spectrally initialized AMP yields a similar decomposition as another AMP with independent initialization. \subsection{Constructing another AMP-style basis that covers $x_1$ exactly} \label{sec:rep-spec} Next, we turn to our spectral estimate obtained through the power method: \[ x_{1} = a_s M^s \widetilde{v} \qquad \text {with }a_s = \frac{1}{\|M^s \widetilde{v}\|_2}, \] where $\widetilde{v} \sim \mathcal{N}(0,\frac{1}{n}I_n)$ is the initial vector of the power method chosen randomly. Based upon our results in Section~\ref{sec:spec-vhatstar}, we intend to further augment $\{y_k\}_{1\leq k\leq s}$ into a set of $2s+1$ orthonormal vectors $\{\widehat{y}_t\}_{1\leq t \leq 2s+1}$ --- again via a certain auxiliary AMP sequence --- such that $x_{1}$ falls {\em perfectly} within $\mathsf{span} \{\widehat{y}_1,\ldots, \widehat{y}_{2s+1}\}$. \paragraph{Preliminaries about the power method.} Standard convergence analysis for the power method tells us that: if we take $s \geq \widetilde{C}_v \frac{ \lambda_{\max} \log n}{\lambda_{\max}-|\lambda_2(M)|}$ for some constant $\widetilde{C}_v>0$ large enough and if $\widetilde{v}\sim \mathcal{N}(0,\frac{1}{n}I_n)$, then with probability exceeding $1-O(n^{-11})$ we can guarantee that \begin{subequations} \label{eqn:power-error} \begin{align} \|x_1 - \widehat{v}^{\star}\|_2 &\lesssim \frac{1}{n^{12}} \qquad \text{ and } \qquad \\ a_s = \frac{1}{\ltwo{M^s \widetilde{v}}} &\leq \frac{1}{\lambda_{\max}^s |\langle \widehat{v}^{\star}, \widetilde{v}\rangle | } \lesssim \frac{n^{11.5}}{\lambda_{\max}^s}, \end{align} \end{subequations} where the last inequality is valid since $\langle \widehat{v}^{\star}, \widetilde{v}\rangle \stackrel{\textrm{d}}{=} \frac{\inprod{g}{v}}{\ltwo{g}},~g\sim \mathcal{N}(0,\frac{1}{n}I_n)$ and hence $|\langle \widehat{v}^{\star}, \widetilde{v}\rangle| \gtrsim n^{-11.5}$ with probability at least $1-O(n^{-11})$. In addition, \eqref{eq:lambda-1-gap-non-asymptotic-UCB} and \eqref{eq:lambda-1-non-asymptotic-UCB} allow us to control $\frac{\lambda_{\max}}{\lambda_{\max} - |\lambda_2(M)|}$, thus indicating that \eqref{eqn:power-error} is guaranteed to hold as long as \[ s \geq C_v \frac{ \lambda \log n}{(\lambda - 1)^2} \] for some constant $C_v$ large enough. In addition, we remark that there exist a set of coefficients $a_{0},\ldots, a_{s-1} \in \ensuremath{\mathbb{R}}$ that allow us to express \begin{align} \label{eqn:x1-decmp-violin} x_1 = \sum_{i = 0}^{s-1} a_iW^{i}v^{\star} + a_sW^s\widetilde{v}, \qquad \text{with }a_s = \frac{1}{\ltwo{M^s \widetilde{v}}}. \end{align} \begin{proof}[Proof of \eqref{eqn:x1-decmp-violin}] Recall that $x_1$ is proportional to $ (\lambda v^{\star}v^{\star\top} + W)^s \widetilde{v}$. 
By expanding $(\lambda v^{\star}v^{\star\top} + W)^s$, we know that each term in the expansion takes one of the following forms: \[ \text{(i) }\lambda v^{\star}v^{\star\top}A_{1}\widetilde{v}\text{ for some matrix }A_{1};\quad \text{(ii) }W^{i}(\lambda v^{\star}v^{\star\top})A_{2}\widetilde{v}\text{ for some matrix }A_{2}\text{ and some }i; \quad \text{(iii) }W^{s}\widetilde{v}. \] Clearly, in each case the term falls within the span of $\{v^{\star}, W^i v^{\star}, W^s \widetilde{v} \}$, thus concluding the proof. \end{proof} \paragraph{Construction of a set of basis vectors using another auxiliary AMP.} Based on the decomposition \eqref{eqn:x1-decmp-violin}, we intend to show that $x_{1}$ can be linearly represented (in an exact manner) using a set of $2s+1$ orthonormal basis vectors, in a way similar to Lemma~\ref{lem:spec}. Towards this end, we design another AMP-type algorithm (with the denoising functions taken as the identity function): \begin{align} \label{eqn:amp-for-power} u_{t+1} = W u_t - u_{t-1} \quad (t>s), \qquad\text{with }~ u_{s} = 0 ~\text{ and }~ u_{s+1} = \widetilde{v}. \end{align} Despite the use of the same update rule, a key difference between \eqref{eqn:amp-for-power} and \eqref{eqn:amp-for-spectral} is that $u_t$ starts from $\widetilde{v}$ (i.e., the vector used to initialize the power method), while $\omega_t$ starts from the ground-truth vector $v^\star$. Akin to our analysis for Theorem~\ref{thm:recursion}, we generate a sequence of orthonormal vectors $\{\widehat{y}_k\}$ and auxiliary random matrices $\widehat{W}_k$ as follows: recalling the sequence $\{y_k, W_k^{\prime}\}$ defined in \eqref{eqn:spect-seq-1}, we take \begin{equation} \label{eqn:spect-seq-2} \begin{aligned} \widehat{y}_t &\coloneqq y_t, \qquad &&1\leq t\leq s, \\ \widehat{W}_t &\coloneqq W_t^{\prime}, \qquad &&1\leq t\leq s, \\ \widehat{y}_t &\coloneqq \frac{\big(I - \widehat{V}_{t-1}\widehat{V}_{t-1}^{\top}\big)u_{t}}{\big\|\big(I - \widehat{V}_{t-1}\widehat{V}_{t-1}^{\top}\big)u_{t}\big\|_2}, \qquad && s< t \leq 2s+1,\\ \widehat{W}_t &\coloneqq \big(I - \widehat{y}_{t-1}\widehat{y}_{t-1}^{\top}\big)\widehat{W}_{t-1}\big(I - \widehat{y}_{t-1}\widehat{y}_{t-1}^{\top}\big), \qquad && s< t \leq 2s+1, \end{aligned} \end{equation} where $\widehat{V}_{t-1} \coloneqq [\widehat{y}_k]_{1 \le k \leq t-1} \in \ensuremath{\mathbb{R}}^{n\times (t-1)}$. The orthonormality of the sequence $\{\widehat{y}_k\}_{1\leq k\leq 2s+1}$ can be seen by repeating the proof of Lemma~\ref{lemma:zk-orthonormal}. In addition, let us further generate the following vectors \begin{align} \widehat{\psi}_k \coloneqq \widehat{W}_k\widehat{y}_k + \widehat{\zeta}_k,\qquad\text{for all }1 \le k \le n, \end{align} where \begin{align} \widehat{\zeta}_k \coloneqq \Big(\frac{\sqrt{2}}{2} - 1\Big) \widehat{y}_k\widehat{y}_k^{\top}\widehat{W}_k\widehat{y}_k + \sum_{i = 1}^{k - 1} g_i^k\widehat{y}_i, \end{align} with the $g_i^k$'s independently drawn from $\mathcal{N}(0, \frac{1}{n})$.
Then Lemma~\ref{lem:distribution} and its analysis immediately tell us that the $\widehat{\psi}_k$'s are statistically independent obeying \begin{align} \widehat{\psi}_k \overset{\text{i.i.d.}}{\sim} \mathcal{N}\left(0, \frac{1}{n}I_n\right),\qquad\text{for all }1 \le k \le n. \end{align} \paragraph{Linear representation of $x_1.$} We are ready to represent $x_1$ over the set of basis vectors constructed above. It turns out that $x_1$ can be represented approximately as a linear combination of $\{\widehat{y}_k\}$, or of the set of independent Gaussian vectors $\{\widehat{\psi}_k\}$. Our result is formally stated as follows. \begin{lems} \label{lem:alt-x1} With probability exceeding $1-O(n^{-11})$, we have \begin{align} \label{eqn:alt-x1} x_1 = \sum_{i = 1}^{2s+1} b_i\widehat{y}_i \qquad \text{and} \qquad \Bigg\| x_{1}-c_{1}v^{\star}-\frac{1}{\lambda}\sum_{i=1}^{2s+1}b_{i}\widehat{\psi}_{i}\Bigg\|_{2} & \lesssim\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{7}n}} \end{align} with \begin{equation} \label{eq:defn-bi-x1} b_i \coloneqq \inprod{\widehat{y}_i}{x_1}\qquad \text{for any }1\leq i\leq 2s+1. \end{equation} \end{lems} \noindent The proof of this lemma is deferred to Section~\ref{sec:proof-eqn:alt-x1}. \subsection{Analysis for spectrally initialized AMP} We are now positioned to develop a non-asymptotic analysis for the spectrally initialized AMP, namely, the AMP sequence $\{x_t\}$ (cf.~\eqref{eqn:AMP-updates}) when initialized to $x_1$ (i.e., the output of the power method). \paragraph{Auxiliary sequences derived from $\{x_t\}$.} Akin to the proof of Theorem~\ref{thm:recursion}, we introduce a sequence of auxiliary vectors/matrices $\{z_k, W_k, \zeta_k\}_{-2s\le k \le n}$ in a recursive manner in order to help understand the dynamics of $x_{t}$: \begin{itemize} \item For any $k$ with $-2s \leq k \leq 0$, set \begin{align*} z_{k} \coloneqq \widehat{y}_{k+2s+1}, \quad W_{k} \coloneqq \widehat{W}_{k+2s+1}, \quad \zeta_{k} \coloneqq \widehat{\zeta}_{k+2s+1}, \quad \phi_{k} \coloneqq \widehat{\psi}_{k+2s+1}, \end{align*} where $\{\widehat{y}_{k}, \widehat{W}_k , \widehat{\psi}_k\}$ have been introduced in Section~\ref{sec:rep-spec}. \item For any $1\leq k \leq n$, define \begin{subequations} \begin{align} z_1 \coloneqq \frac{\big(I - \widehat{V}_{2s+1}\widehat{V}_{2s+1}^{\top}\big)\eta_1(x_1)}{\big\|\big(I - \widehat{V}_{2s+1}\widehat{V}_{2s+1}^{\top}\big)\eta_1(x_1)\big\|_2} \in \ensuremath{\mathbb{R}}^n, \end{align} where we remind the readers that $\widehat{V}_{2s+1}=[\widehat{y}_1,\ldots,\widehat{y}_{2s+1}]$. Further, we take \begin{equation} \begin{aligned} U_{k-1} &\coloneqq \big[\widehat{V}_{2s+1},~[z_i]_{1 \le i \leq k-1}\big] \in \ensuremath{\mathbb{R}}^{n\times (k+2s)},\\ z_{k} &\coloneqq \frac{\left(I - U_{k-1}U_{k-1}^{\top}\right)\eta_{k}(x_{k})}{\left\|\left(I - U_{k-1}U_{k-1}^{\top}\right)\eta_{k}(x_{k})\right\|_2}, \\ W_k &\coloneqq \left(I - z_{k-1}z_{k-1}^{\top}\right)W_{k-1}\left(I - z_{k-1}z_{k-1}^{\top}\right).
\end{aligned} \end{equation} \end{subequations} \end{itemize} \noindent With these definitions in place, we see that for each $t\geq 1$, the vectors $\{z_k\}_{k=-2s}^{t}$ are orthonormal and their span contains $\eta_t(x_t)$ (see Lemma~\ref{lemma:zk-orthonormal} and the text right after). This allows us to decompose \begin{align} \label{eqn:eta-decomposition-augment} \eta_{t}(x_{t}) = \sum_{k = -2s}^{t} \beta_{t}^kz_k, \qquad\text{with }\beta_{t}^k \coloneqq \langle\eta_{t}(x_{t}), z_k\rangle \end{align} and ensure that $\left\|\eta_{t}(x_{t})\right\|_2 = \left\|\beta_{t}\right\|_2$ with $\beta_{t} \coloneqq (\beta_t^{-2s},\ldots,\beta_t^{0},\beta_t^1,\beta_t^2,\ldots,\beta_t^t)^{\top} \in \ensuremath{\mathbb{R}}^{t+2s+1}.$ Additionally, we introduce the following vectors as in Lemma~\ref{lem:distribution}: \begin{align} \label{eqn:zeta-k-augment} \zeta_k \coloneqq \Big(\frac{\sqrt{2}}{2} - 1\Big) z_kz_k^{\top}W_kz_k + \sum_{i = -2s}^{k - 1} g_i^kz_i \qquad\text{for } k\geq 1, \end{align} with each $g_i^k$ independently generated from $\mathcal{N}(0, \frac{1}{n})$, and we set \begin{align} \phi_k \coloneqq W_kz_k + \zeta_k ,\qquad\text{for all } -2s \le k < n-2s. \end{align} Consequently, repeating exactly the same argument as in the proof of Lemma~\ref{lem:distribution} reveals that \begin{align} \phi_k \overset{\text{i.i.d.}}{\sim} \mathcal{N}\left(0, \frac{1}{n}I_n\right),\qquad\text{for all } -2s \le k < n-2s. \end{align} \paragraph{Analyzing spectrally initialized AMP via our general recipe.} Recall that Theorem~\ref{thm:recursion} offers a general recipe for deriving the decomposition of $x_{t}$. As it turns out, the same induction-based proof idea developed for Theorem~\ref{thm:recursion} continues to work for analyzing spectrally initialized AMP. In fact, assuming validity for the initialization (which we shall justify momentarily), such proof arguments lead to: \begin{align} x_t \coloneqq \alpha_t v^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k + \xi_{t-1}, \qquad\text{for }1 \leq t < n-2s; \label{def:dynamics-augment} \end{align} here, $\alpha_{t+1} = \lambda v^{\star\top} \eta_{t}(x_t)$, and $\xi_{t-1}$ is the residual term obeying \begin{align*} \xi_t &= \sum_{k = -2s}^{t-1} z_k\bigg[\Big\langle \phi_k, \eta_{t}\big(\alpha_tv^\star + \sum_{j = -2s}^{t-1} \beta_{t-1}^j\phi_j + \xi_{t-1}\big)\Big\rangle - \langle\eta_t^{\prime}(x_{t})\rangle \beta_{t-1}^k - (\sqrt{2}-1) \beta_{t}^kz_k^{\top}W_kz_k - \sum_{i = -2s}^{k - 1} \beta_t^ig_i^k - \sum_{i = k+1}^{t} \beta_{t}^ig_k^i\bigg]. \end{align*} Following Steps 2 and 3 verbatim in Section~\ref{sec:pf-thm-recursion}, we see that \eqref{def:dynamics-augment} holds true for $t+1$ and satisfies \begin{align*} \|\xi_t\|_2 &= \Big\langle \sum_{k = -2s}^{t-1} \mu_t^k\phi_k, \delta_{t}\Big\rangle - \langle\delta_{t}^{\prime}\rangle \sum_{k = -2s}^{t-1} \mu_t^k\beta_{t-1}^k + \Delta_t - \sum_{k = -2s}^{t-1} \mu_t^k\left[ (\sqrt{2}-1) \beta_{t}^kz_k^{\top}W_kz_k + \sum_{i = -2s}^{k - 1} \beta_t^ig_i^k + \sum_{i = k+1}^{t} \beta_{t}^ig_k^i\right], \end{align*} where $\Delta_t$, $\delta_{t}$ and $\delta_{t}^{\prime}$ are defined in \eqref{eqn:delta-chorus-spectral}. Further, taking this collectively with Lemma~\ref{lem:concentration} establishes \eqref{eqn:xi-norm-main-spectral}. We still need to verify that the spectral estimate $x_1$ also satisfies the desired decomposition \eqref{def:dynamics-augment}.
Towards this, note that if we choose $x_1 = \lambda \eta_{0}(x_0)$ (e.g., taking $x_1= \lambda x_0$ and choosing $\eta_0$ to be the identity), then \begin{align} \beta_0^k \coloneqq \frac{1}{\lambda} b_{k+2s+1} = \frac{1}{\lambda} \inprod{x_1}{\widehat{y}_{k+2s+1}} = \frac{1}{\lambda} \inprod{x_1}{z_{k}} = \big\langle \eta_{0}(x_0), z_k \big\rangle , \label{eq:defn-beta0-spect} \end{align} where we have used \eqref{eq:defn-bi-x1}. This combined with inequality~\eqref{eqn:alt-x1} gives \begin{align*} \bigg\| x_1 - c_1v^{\star} - \sum_{k = -2s}^{0} \beta_0^k\phi_k \bigg\|_2 = \bigg\| x_1 - c_1v^{\star} - \frac{1}{\lambda} \sum_{k = -2s}^{0} b_{k+2s+1}\widehat{\psi}_{k+2s+1} \bigg\|_2 \lesssim \frac{\log^{3.5} n}{\sqrt{(\lambda - 1)^7n}}. \end{align*} In addition, Lemma~\ref{lem:spec} tells us that \[ \big|c_{1}-\big\langle y_{1},\widehat{v}^{\star}\big\rangle\big|=\bigg|\Big\langle y_{1},\sum_{i=1}^{s}c_{i}y_{i}\Big\rangle-\big\langle y_{1},\widehat{v}^{\star}\big\rangle\bigg|\leq\bigg\|\sum_{i=1}^{s}c_{i}y_{i}-\widehat{v}^{\star}\bigg\|_{2}\lesssim\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{6}n}}, \] where we have used the fact that $y_{1}=v^{\star}$ and the orthonormality of $\{y_i\}$. Combining this with Lemma~\ref{lem:cor-evector} implies that \[ \Big|c_{1}- \sqrt{1-\frac{1}{\lambda^{2}}} \Big| \leq \big|c_{1}-\big\langle y_{1},\widehat{v}^{\star}\big\rangle\big| + \Big|\big\langle y_{1},\widehat{v}^{\star}\big\rangle - \sqrt{1-\frac{1}{\lambda^{2}}} \Big| \lesssim\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{9}n}}. \] Putting the preceding results together, we can express \begin{align} \label{eqn:ronaldinho} x_1 = \sqrt{1-\frac{1}{\lambda^{2}}} \,v^{\star} + \sum_{k = -2s}^{0} \beta_0^k\phi_k + \xi_0, \end{align} where $\|\xi_0\|_2\lesssim \frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{9}n}}$. This justifies the validity of \eqref{def:dynamics-augment} for the base case with $t=1$, thus concluding the proof of Theorem~\ref{thm:recursion-spectral}. \subsection{Proof of Lemma~\ref{lem:cor-evector}} \label{sec:lem:cor-evector} Given the rotational invariance of the Wigner matrix $W$, we shall assume without loss of generality that $v^{\star}=e_{1}$ throughout this proof. We also introduce the convenient notation $W=\left[\begin{array}{cc} W_{1,1} & w_{n-1}^{\top}\\ w_{n-1} & W_{n-1} \end{array}\right]$, where $W_{n-1}\in\mathbb{R}^{(n-1)\times(n-1)}$ and $w_{n-1}\in\mathbb{R}^{n-1}$ are statistically independent. To begin with, it is readily seen from (\ref{eq:lambda-1-gap-non-asymptotic-UCB}) that $\lambda_{\max}I_{n-1}-W_{n-1}$ is invertible with probability at least $1-O(n^{-11})$. Apply \citet[Theorem~5]{li2021minimax} to show that \[ \big|\big\langle\widehat{v}^{\star},v^{\star}\big\rangle\big|^{2}=\frac{1}{1+\big\|\big(\lambda_{\max}I_{n-1}-W_{n-1}\big)^{-1}w_{n-1}\big\|_{2}^{2}}\eqqcolon\frac{1}{1+\big\|\sqrt{n}R(\lambda_{\max})w_{n-1}\big\|_{2}^{2}}, \] where for notational simplicity we define, for any $\lambda_{0}>\|W\|$, \[ R(\lambda_{0})\coloneqq\frac{1}{\sqrt{n}}\big(\lambda_{0}I_{n-1}-W_{n-1}\big)^{-1}. \] In what follows, let us first analyze the target quantity for any fixed $\lambda_{0}\in\big[\frac{7}{8}\big(\lambda+\frac{1}{\lambda}\big)+\frac{1}{4},3\lambda\big]$.
\begin{itemize} \item First of all, \citet[Lemmas~5.6 and 5.7]{peng2012eigenvalues} combined with a little algebra implies that \[ \Bigg|\big\| R(\lambda_{0})\big\|_{\mathrm{F}}^{2}-\frac{1}{\big(\frac{\lambda_{0}+\sqrt{\lambda_{0}^{2}-4}}{2}\big)^{2}-1}\Bigg|=\Bigg|\mathsf{Tr}\Big[\big(\lambda_{0}I_{n-1}-W_{n-1}\big)^{-2}\Big]-\frac{1}{\big(\frac{\lambda_{0}+\sqrt{\lambda_{0}^{2}-4}}{2}\big)^{2}-1}\Bigg|\lesssim\sqrt{\frac{\log n}{(\lambda-1)^{8}n}} \] holds with probability at least $1-O(n^{-15})$, provided that $1<\lambda=O(1)$. As a result, \begin{equation} \big\| R(\lambda_{0})\big\|_{\mathrm{F}}^{2}=\frac{1}{\big(\frac{\lambda_{0}+\sqrt{\lambda_{0}^{2}-4}}{2}\big)^{2}-1}+O\bigg(\sqrt{\frac{\log n}{(\lambda-1)^{8}n}}\bigg)\lesssim \frac{1}{\lambda-1},\label{eq:Rlambda-F} \end{equation} where the last relation holds since $\lambda_{0} \geq \lambda+\frac{1}{\lambda}-\frac{1}{4}\big(\lambda+\frac{1}{\lambda}-2\big)$ and hence $\frac{\lambda_{0}+\sqrt{\lambda_{0}^{2}-4}}{2}-1 \gtrsim \lambda - 1$ (by repeating the proof of inequality \eqref{eq:lambda-max-correct-bound} with $\Delta = \frac{1}{4}\big(\lambda+\frac{1}{\lambda}-2$). In addition, it follows from (\ref{eq:banderia-W-ub}) that \begin{equation} \|R(\lambda_{0})\|\leq\frac{1}{\sqrt{n}}\frac{1}{\lambda_{0}-\|W_{n-1}\|}\leq\frac{1}{\sqrt{n}}\cdot\frac{1}{\frac{7}{8}\big(\lambda+\frac{1}{\lambda}\big)+\frac{1}{4}-2-O\big((\frac{\log n}{n})^{1/3}\big)}\lesssim\frac{1}{(\lambda-1)^{2}\sqrt{n}}.\label{eq:Rlambda-op} \end{equation} \item Further, invoking \citet[Eq.~(2.1)]{rudelson2013hanson} (with $\varepsilon$ therein taken to be $C_{9}\frac{\|R(\lambda_{0})\|\sqrt{\log n}}{\|R(\lambda_{0})\|_{\mathrm{F}}}$ for some large enough constant $C_{9}>0$) reveals that, conditional on $W_{n-1}$, \[ \Big|\big\|\sqrt{n}R(\lambda_{0})w_{n-1}\big\|^{2}-\|R(\lambda_{0})\|_{\mathrm{F}}^{2}\Big|\leq C_{9}\|R(\lambda_{0})\|_{\mathrm{F}}\|R(\lambda_{0})\|\sqrt{\log n}\lesssim\sqrt{\frac{\log n}{(\lambda-1)^{5}n}} \] holds with probability at least $1-O(n^{-15})$, provided that $C_{9}\frac{\|R(\lambda_{0})\|\sqrt{\log n}}{\|R(\lambda_{0})\|_{\mathrm{F}}}<1$ (which is guaranteed to hold due to (\ref{eq:Rlambda-F}) and (\ref{eq:Rlambda-op})). \item Combine the above results to yield, with probability at least $1-O(n^{-15})$, \begin{align*} \Bigg|\frac{1}{1+\big\|\sqrt{n}R(\lambda_{0})w_{n-1}\big\|_{2}^{2}}-\frac{1}{1+\frac{1}{\big(\frac{\lambda_{0}+\sqrt{\lambda_{0}^{2}-4}}{2}\big)^{2}-1}}\Bigg| & \leq\Bigg|\big\|\sqrt{n}R(\lambda_{0})w_{n-1}\big\|_{2}^{2}-\frac{1}{\big(\frac{\lambda_{0}+\sqrt{\lambda_{0}^{2}-4}}{2}\big)^{2}-1}\Bigg|\lesssim\sqrt{\frac{\log n}{(\lambda-1)^{8}n}}. \end{align*} \end{itemize} Next, invoke standard epsilon-net argument \citep{vershynin2018high} to show that \begin{align*} \Bigg|\frac{1}{1+\big\|\sqrt{n}R(\lambda_{0})w_{n-1}\big\|_{2}^{2}}-\frac{1}{1+\frac{1}{\big(\frac{\lambda_{0}+\sqrt{\lambda_{0}^{2}-4}}{2}\big)^{2}-1}}\Bigg| & \lesssim\sqrt{\frac{\log n}{(\lambda-1)^{8}n}},\qquad\forall\lambda_{0}\in\Big[\frac{7}{8}\big(\lambda+\frac{1}{\lambda}\big)+\frac{1}{4},3\lambda\Big] \end{align*} with probability at least $1-O(n^{-11})$; we omit this standard argument here for the sake of brevity. 
Recognizing that $\lambda_{\max}\in\big[\frac{7}{8}\big(\lambda+\frac{1}{\lambda}\big)+\frac{1}{4},3\lambda]$ (see \eqref{eq:lambda-1-non-asymptotic-UCB} and \eqref{eq:lambda-max-two}) and defining $\widetilde{\lambda}\coloneqq\frac{\lambda_{\max}+\sqrt{\lambda_{\max}^{2}-4}}{2}$, we immediately obtain \begin{align*} \big|\big\langle\widehat{v}^{\star},v^{\star}\big\rangle\big|^{2} & =\frac{1}{1+\big\|\sqrt{n}R(\lambda_{\max})w_{n-1}\big\|_{2}^{2}}=\frac{1}{1+\frac{1}{\widetilde{\lambda}^{2}-1}}+O\Big(\sqrt{\frac{\log n}{(\lambda-1)^{8}n}}\Big)=1-\frac{1}{\widetilde{\lambda}^{2}}+O\Big(\sqrt{\frac{\log n}{(\lambda-1)^{8}n}}\Big)\\ & =1-\frac{1}{\lambda^{2}}+ O\Big( \frac{|\lambda - \widetilde{\lambda} |}{\lambda \widetilde{\lambda}} \Big) + O\Big(\sqrt{\frac{\log n}{(\lambda-1)^{8}n}}\Big) =1-\frac{1}{\lambda^{2}}+O\Big(\sqrt{\frac{\log n}{(\lambda-1)^{8}n}}\Big), \end{align*} where the last inequality comes from (\ref{eq:lambda-max-correct-bound}) and $\lambda\asymp1$. Consequently, we arrive at \[ \Bigg|\big|\big\langle\widehat{v}^{\star},v^{\star}\big\rangle\big|-\sqrt{1-\frac{1}{\lambda^{2}}}\Bigg|=\frac{\Big|\big|\big\langle\widehat{v}^{\star},v^{\star}\big\rangle\big|^{2}-\big(1-\frac{1}{\lambda^{2}}\big)\Big|}{\big|\big\langle\widehat{v}^{\star},v^{\star}\big\rangle\big|+\sqrt{1-\frac{1}{\lambda^{2}}}}\lesssim\frac{\sqrt{\frac{\log n}{(\lambda-1)^{8}n}}}{\sqrt{1-\frac{1}{\lambda^{2}}}}\asymp\sqrt{\frac{\log n}{(\lambda-1)^{9}n}}. \] \subsection{Proof of Lemma~\ref{lem:spec}} \label{sec:pf-lem-spec} Recall that the leading eigenvector $\widehat{v}^{\star}$ of $M$ satisfies $M \widehat{v}^{\star} = \lambda_{\max} \widehat{v}^{\star}$. In view of the Neumann expansion for eigenvectors (see, e.g.~\cite[Theorem 2]{chen2021asymmetry}), $\widehat{v}^{\star}$ admits the following expansion: \begin{align} \label{eqn:x0-neumann} \widehat{v}^{\star} = \widetilde{c}_0\sum_{t = 0}^{\infty} \frac{1}{\lambda_{\max}^{t}} W^tv^{\star}, \qquad \text{with }\widetilde{c}_0 = \frac{\lambda}{\lambda_{\max}} \langle v^{\star},\widehat{v}^{\star}\rangle , \end{align} with the proviso that $\|W\|<\lambda_{\max}$ --- a condition that has been guaranteed in \eqref{eq:lambda-1-gap-non-asymptotic-UCB}. Clearly, one has \begin{equation} |\widetilde{c}_0|=\frac{\lambda}{\lambda_{\max}}\big|\langle v^{\star},\widehat{v}^{\star}\rangle\big|\leq\big|\langle v^{\star},\widehat{v}^{\star}\rangle\big|\leq 1. \label{eq:coefficient-c0-small-than-1} \end{equation} Next, it follows from Lemma~\ref{lem:linear-combination-power-method-AMP} that $W^t v^{\star}$ can be written as a linear combination of $\{\omega_i\}_{1 \le i \le t+1}$ with $\omega_{i}$ defined in \eqref{eqn:amp-for-spectral}. Substituting \eqref{eqn:linear-induc} into expression~\eqref{eqn:x0-neumann} yields \begin{align} \widehat{v}^{\star} &= \widetilde{c}_0\sum_{t = 0}^{\infty} \frac{1}{\lambda_{\max}^{t}} W^tv^{\star} = \widetilde{c}_0\sum_{t = 0}^{\infty} \frac{1}{\lambda_{\max}^{t}} \Big(\sum_{i = 1}^{t+1} c_{t+1}^i \omega_i\Big) = \sum_{i=1}^{\infty} \Big(\widetilde{c}_0\sum_{t = i-1}^{\infty} \lambda_{\max}^{-t}c_{t+1}^i \Big) \omega_i \notag\\ &= \sum_{i=1}^{\infty} \Big(\widetilde{c}_0\sum_{t = 0}^{\infty} \lambda_{\max}^{-t}c_{t+1}^i \Big) \omega_i \eqqcolon \sum_{i=1}^{\infty} c_i \omega_i, \qquad \text{with } c_i \coloneqq \widetilde{c}_0\sum_{t = 0}^{\infty} \lambda_{\max}^{-t}c_{t+1}^i \text{ for }i\geq 1, \label{eq:vhat-ci-defn} \end{align} where the second line has made use of \eqref{eq:ct-i-zero-coefficient}. 
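As a quick aside, the exact identity \eqref{eqn:x0-neumann} is easy to verify numerically, which can serve as a sanity check on the normalization conventions adopted here. The snippet below is purely illustrative and plays no role in the formal argument; it assumes NumPy, and the dimension, signal strength, and truncation level are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, lam = 2000, 2.0

# Wigner noise: symmetric, off-diagonal entries ~ N(0, 1/n), so ||W|| ~ 2
G = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
W = (G + G.T) / np.sqrt(2)

v_star = rng.normal(size=n)
v_star /= np.linalg.norm(v_star)
M = lam * np.outer(v_star, v_star) + W

# leading eigenpair of M
eigvals, eigvecs = np.linalg.eigh(M)
lam_max, v_hat = eigvals[-1], eigvecs[:, -1]

# truncated Neumann series: v_hat ~ c0 * sum_t lam_max^{-t} W^t v_star
c0 = (lam / lam_max) * np.dot(v_star, v_hat)
term, series = v_star.copy(), np.zeros(n)
for t in range(200):
    series += term / lam_max ** t
    term = W @ term

print(np.linalg.norm(v_hat - c0 * series))  # small residual (truncation + float error)
\end{verbatim}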
To proceed, let us claim for the moment that the following relations hold true for all $t$ obeying $\frac{t^{5}\log n}{n}=o(1)$: \begin{subequations} \label{eqn:spec-finale} \begin{align} & c_i = \widetilde{\lambda}^{-1}\cdot c_{i-1}, \qquad \text{where }\widetilde{\lambda}^{-1} \coloneqq \frac{\lambda_{\max} - \sqrt{\lambda_{\max}^2 - 4}}{2}, \label{eqn:spec-finale-1}\\ & \|\omega_t\|_2 = 1+O\Big(\sqrt{\frac{t^5\log n}{n}}\Big),\label{eqn:spec-finale-3} \\ &\big\| \omega_t - \psi_{t-1} \big\|_2 \lesssim \sqrt{\frac{t^5\log n}{n}} \quad \text{and} \quad \big\| \omega_t - y_t \big\|_2 \lesssim \sqrt{\frac{t^5\log n}{n}}, \label{eqn:spec-finale-2} \\ &\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}(c_{i})^{2} \leq4.\label{eq:sum-of-square-ci} \end{align} \end{subequations} In particular, when $\lambda = O(1)$, it follows from \eqref{eq:lambda-max-correct-bound} that, with probability at least $1-O(n^{-11})$, \begin{subequations} \begin{equation} \big| \widetilde{\lambda} - \lambda \big| = \bigg| \frac{\lambda_{\max}+\sqrt{\lambda_{\max}^2-4}}{2} - \lambda \bigg| \leq \frac{\lambda - 1 }{4} , \label{eq:minyu-lambda-tilde-bound} \end{equation} and as a result, \begin{equation} \widetilde{\lambda} - 1 \geq \lambda - \frac{\lambda - 1 }{4} - 1 = \frac{3(\lambda - 1) }{4} . \label{eq:minyu-lambda-tilde-minus-1-bound} \end{equation} \end{subequations} The preceding claims in \eqref{eqn:spec-finale} allow us to complete the proof of Lemma~\ref{lem:spec}. To see this, note that by virtue of expression~\eqref{eqn:spec-finale-1} and \eqref{eqn:spec-finale-3}, we can truncate the infinite sum by keeping the first $\frac{C_v\log n}{\widetilde{\lambda}-1}$ terms for some large enough constant $C_v>0$, namely, \begin{align} \bigg\|\widehat{v}^{\star}-\sum_{i=1}^{\frac{C_{v} \log n}{\widetilde{\lambda}-1}}c_{i}\omega_{i}\bigg\|_{2} & =\bigg\| \widetilde{c}_0 \sum_{i=\frac{C_{v}\log n}{\widetilde{\lambda}-1}+1}^{\infty}c_{i}\omega_{i}\bigg\|_{2} \leq O\bigg(|\widetilde{c}_0|\sum_{i=\frac{C_{v}\log n}{\widetilde{\lambda}-1}}^{\infty}\widetilde{\lambda}^{-i}\bigg) \leq O\Big(\frac{|\widetilde{c}_0|\widetilde{\lambda}^{-\frac{C_{v}\log n}{\widetilde{\lambda}-1}}}{\widetilde{\lambda}-1}\Big) \notag\\ &=O\Big(\frac{|\widetilde{c}_0| n^{-\frac{C_{v}\log\widetilde{\lambda}}{\widetilde{\lambda}-1}}}{\widetilde{\lambda}-1}\Big)=O\Big(\frac{1}{(\widetilde{\lambda}-1)n}\Big) , \label{eq:v-truncated-sum-bound135} \end{align} where the last line holds when $C_v$ is large enough and uses the fact that $|\widetilde{c}_0|\leq 1$ (cf.~\eqref{eq:coefficient-c0-small-than-1}). Notice that here we truncate at the first $\frac{C_{v} \log n}{\widetilde{\lambda}-1}$ terms. If one decides to keep the first $s$ terms for $s \geq \frac{C_{v} \log n}{\widetilde{\lambda}-1}$, it only results in a smaller truncation error. 
Taking the relation~\eqref{eqn:spec-finale-2} and \eqref{eq:v-truncated-sum-bound135} together allows us to demonstrate that \begin{align*} \bigg\|\widehat{v}^{\star}-\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}c_{i}y_{i}\bigg\|_{2} & \leq\bigg\|\widehat{v}^{\star}-\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}c_{i}\omega_{i}\bigg\|_{2}+O\bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}|c_{i}|\sqrt{\frac{i^{5}\log n}{n}}\bigg)\\ & \lesssim\frac{1}{(\widetilde{\lambda}-1)n}+\bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}|c_{i}|^{2}\bigg)^{\frac{1}{2}}\bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}\frac{i^{5}\log n}{n}\bigg)^{\frac{1}{2}}\\ & \lesssim\bigg(\frac{\log^{7}n}{(\widetilde{\lambda}-1)^{6}n}\bigg)^{\frac{1}{2}}\lesssim \frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{6}n}}, \end{align*} where the penultimate relation comes from \eqref{eq:sum-of-square-ci}, and the last relation results from \eqref{eq:minyu-lambda-tilde-minus-1-bound}. Also, similar to the arguments in \eqref{eq:v-truncated-sum-bound135} we can obtain \[ \bigg\|\widetilde{c}_{0}\sum_{i=\frac{C_{v}\log n}{\widetilde{\lambda}-1}+1}^{s}c_{i}\omega_{i}\bigg\|_{2}\lesssim\frac{1}{(\widetilde{\lambda}-1)n}\qquad\text{and}\qquad\bigg\|\widetilde{c}_{0}\sum_{i=\frac{C_{v}\log n}{\widetilde{\lambda}-1}+1}^{s}c_{i}y_{i}\bigg\|_{2}\lesssim\frac{1}{(\widetilde{\lambda}-1)n}. \] As a result, we arrive at \[ \bigg\|\widehat{v}^{\star}-\sum_{i=1}^{s}c_{i}y_{i}\bigg\|_{2}\leq\bigg\|\widehat{v}^{\star}-\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}c_{i}\omega_{i}\bigg\|_{2}+\bigg\|\widetilde{c}_{0}\sum_{i=\frac{C_{v}\log n}{\widetilde{\lambda}-1}+1}^{\infty}c_{i}\omega_{i}\bigg\|_{2}\lesssim\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{6}n}}. \] Repeating the same argument and recognizing that $y_{1} = v^\star$ lead to \begin{equation} \bigg\|\widehat{v}^{\star}-c_{1}v^{\star}-\frac{1}{\widetilde{\lambda}}\sum_{i=1}^{s}c_{i}\psi_{i}\bigg\|_{2} =\bigg\|\widehat{v}^{\star}-c_{1}y_{1}-\sum_{i=2}^{s+1}c_{i}\psi_{i-1}\bigg\|\lesssim\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{6}n}}. \label{eq:vstar-truncated-13579} \end{equation} This concludes the proof of Lemma~\ref{lem:spec}, as long as the claims in \eqref{eqn:spec-finale} can be justified. As a consequence, the remainder of this section is dedicated to proving \eqref{eqn:spec-finale}. \subsubsection{Proof of claim~\eqref{eqn:spec-finale}} \paragraph{Proof of recurrence relation~\eqref{eqn:spec-finale-1}.} For any $i \ge 2$, it follows from the definition \eqref{eq:vhat-ci-defn} of $c_i$ and the relation \eqref{eq:c-t-i-recursion} that \begin{align} c_{i+1} + c_{i-1} = \widetilde{c}_0\sum_{t = 0}^{\infty} \lambda_{\max}^{-t}(c_{t+1}^{i+1} + c_{t+1}^{i-1}) = \widetilde{c}_0\sum_{t = 0}^{\infty} \lambda_{\max}^{-t}c_{t+2}^i = \lambda_{\max} \widetilde{c}_0\sum_{t = 1}^{\infty} \lambda_{\max}^{-t}c_{t+1}^i = \lambda_{\max}c_i, \label{eq:recursion-ci-ci} \end{align} where the last step is valid since $c_{t}^{i} = 0$ for $i > t.$ To analyze this recurrence relation \eqref{eq:recursion-ci-ci}, let us look at the two roots of the characteristic equation $r^2 - \lambda_{\max} r + 1=0$, namely, $r_1 = \frac{\lambda_{\max} - \sqrt{\lambda_{\max}^2 - 4}}{2} $, $r_2 = \frac{\lambda_{\max} + \sqrt{\lambda_{\max}^2 - 4}}{2} $. 
It is well known that the solution to \eqref{eq:recursion-ci-ci} can be expressed via these two roots as follows: \begin{align} c_i = a_1(r_1)^i + a_2(r_2)^i, \qquad i\geq 0 \label{eqn:ci-roots} \end{align} for some coefficients $a_1,a_2\in \ensuremath{\mathbb{R}}$. In view of \eqref{eq:lambda-max-correct-bound}, one has \[ r_2 \geq 1 + (\lambda - 1) - \bigg|\frac{\lambda_{\max}+\sqrt{\lambda_{\max}^{2}-4}}{2}-\lambda\bigg| \geq 1+ \frac{3(\lambda-1)}{4} > 1, \] which also indicates that $r_1 = 1/r_2 < 1$. In addition, we claim that \begin{align} 0\leq c_t^i \le 2^t \qquad t\geq 0,~ i\geq 0. \label{eq:ct-i-exponential} \end{align} This relation can be easily shown by induction: (i) we first learn from Lemma~\ref{lem:linear-combination-power-method-AMP} that $v^{\star}=c_1^1\omega_1 =c_1^1v^{\star}$ and hence $c_1^1=1$, which together with $c_1^i=0$ ($i=0$ or $i>1$) justifies \eqref{eq:ct-i-exponential} when $t=1$; (ii) if \eqref{eq:ct-i-exponential} is valid for $t$, then it follows from \eqref{eq:c-t-i-recursion} that $0\leq c_{t+1}^i = c_t^{i+1} + c_t^{i-1} \le 2^t+2^t\leq 2^{t+1}$, thus establishing \eqref{eq:ct-i-exponential} for $t+1$ --- and hence its validity for all $t\geq 0$. Combine \eqref{eq:ct-i-exponential} with~\eqref{eq:vhat-ci-defn} to show the boundedness of $c_i$ in the sense that: \begin{align} |c_{i}| & =|\widetilde{c}_{0}|\sum_{t=0}^{\infty}\lambda_{\max}^{-t}c_{t+1}^{i}\le2|\widetilde{c}_{0}|\sum_{t=0}^{\infty}(\lambda_{\max}/2)^{-t}=\frac{4|\widetilde{c}_{0}|\lambda_{\max}}{\lambda_{\max}-2}\leq\frac{64\lambda^2}{5(\lambda-1)^2}, \label{eq:ci-upper} \end{align} which relies on \eqref{eq:coefficient-c0-small-than-1} and \eqref{eq:lambda-max-two}. The boundedness of $c_i$ for any $i\geq 0$ necessarily implies that $a_2=0$ in \eqref{eqn:ci-roots} (otherwise $c_i$ will blow up as $i$ grows given that $r_2>1$). We can thus conclude that $c_i = a_1r_1^i$ $(i\geq 0)$ holds for some $a_1\neq 0$, thus implying that \begin{align*} \frac{c_i}{c_{i-1}} = r_1 = \frac{\lambda_{\max} - \sqrt{\lambda_{\max}^2 - 4}}{2}. \end{align*} \paragraph{Proof of inequality \eqref{eqn:spec-finale-3}.} As discussed previously, the iterates $\{\omega_{t}\}$ in \eqref{eqn:amp-for-spectral} form another sequence of AMP updates with the denoising functions taken to be the identity function. In view of Theorem~\ref{thm:recursion}, the iterates $\{\omega_{t}\}$ admit the decomposition \begin{align} \omega_t = \sum_{k = 1}^{t-1} \beta_{t-1}^k\psi_k + \xi_{t-1}; \label{eq:omegat-decomp-spec} \end{align} here, we abuse the notation by taking $\beta_{t}^k \coloneqq \langle \omega_{t}, y_k\rangle$ (cf.~\eqref{eqn:eta-decomposition}) (which satisfies $\ltwo{\beta_t} = \ltwo{\omega_t}$) and letting $\xi_{t-1}$ denote the residual term. In order to control $\omega_{t}$, we need to bound the size of $\xi_{t-1}$. Specializing the expression \eqref{eq:xi_bound} to the special choice of $\eta_t$ (i.e., the identity function), we obtain \begin{align} \label{eqn:spect-tenor} \ltwo{\xi_t} &= \Big\langle \sum_{k = 1}^{t-1} \mu_t^k\psi_k, \xi_{t-1} \Big\rangle + \sum_{k = 1}^{t-1} \mu_{t}^k\bigg[\Big\langle \psi_k, \sum_{j = 1}^{t-1} \beta_{t-1}^j\psi_j\Big\rangle - \beta_{t-1}^k - (\sqrt{2}-1)\beta_{t}^ky_k^{\top}W_k^{\prime}y_k - \sum_{i = 1}^{k - 1} \beta_t^ig_i^k - \sum_{i = k+1}^{t} \beta_{t}^ig_k^i\bigg]. 
\end{align} Here, $\{y_k, W_k'\}$ have been defined in expression~\eqref{eqn:spect-seq-1}, whereas $\mu_t$ is a unit vector in $\mathcal{S}^{t-2}.$ We then control each term in \eqref{eqn:spect-tenor} separately. First, observe that with probability at least $1 - O(n^{-11})$, \begin{align} \label{eqn:trio} \Big|\Big\langle \sum_{k = 1}^{t-1} \mu_t^k\psi_k, \xi_{t-1}\Big\rangle\Big| \leq \Big\|\sum_{k = 1}^{t-1} \mu_t^k\psi_k\Big\|_2 \cdot \ltwo{\xi_{t-1}} \leq \Big(1+O\Big(\sqrt{\frac{t\log n}{n}}\Big)\Big)\ltwo{\xi_{t-1}} \end{align} holds for every $t \in [n]$, where the last inequality follows from \eqref{eqn:spect-brahms}. In view of Lemma~\ref{lem:concentration}, with probability at least $1 - O(n^{-11})$ one has \begin{align} \label{eqn:spect-baritone} \bigg|\sum_{k = 1}^{t-1} (\sqrt{2}-1) \mu_{t}^k\Big[\beta_{t}^ky_k^{\top}W_k^{\prime}y_k + \sum_{i = 1}^{k - 1} \beta_t^ig_i^k + \sum_{i = k+1}^{t} \beta_{t}^ig_k^i\Big]\bigg| \lesssim \sqrt{\frac{t\log n}{n}}\ltwo{\beta_t}. \end{align} In addition, if we write matrix $\Psi \coloneqq [\psi_1,\ldots,\psi_{t-1}] \in \ensuremath{\mathbb{R}}^{n\times (t-1)}$, then property~\eqref{eqn:simple-rm} and $\|u_t\|_2=1$ give \begin{align} \notag \bigg|\Big\langle \sum_{k = 1}^{t-1} \mu_t^k\psi_k, \sum_{j = 1}^{t-1} \beta_t^j\psi_j\Big\rangle - \sum_{k = 1}^{t-1} \mu_t^k\beta_t^k \bigg| &= \Big|\mu_t^\top\Psi^\top \Psi \beta_t - \mu_t^\top I_{t-1} \beta_t \Big| \\ % &\leq \ltwo{\mu_t}\ltwo{\beta_t} \big\|\Psi^\top \Psi - I_{t-1} \big\| \lesssim \sqrt{\frac{t\log n}{n}}\ltwo{\beta_t} \label{eqn:spect-soprano} \end{align} holds with probability at least $1-O(n^{-11})$. Taking the decomposition~\eqref{eqn:spect-tenor} collectively with \eqref{eqn:trio}, \eqref{eqn:spect-baritone} and \eqref{eqn:spect-soprano} and using $\|\omega_{t-1}\|_2 = \|\beta_{t-1}\|_2$, we arrive at \begin{align} \label{eqn:spect-xi-to-beta} \ltwo{\xi_t} \leq \Big(1+ C_3 \sqrt{\frac{t\log n}{n}}\Big)\ltwo{\xi_{t-1}} + C_3 \sqrt{\frac{t\log n}{n}} \ltwo{\omega_t} \end{align} for some large enough constant $C_3>0$. Additionally, invoke \eqref{eq:omegat-decomp-spec} to obtain \begin{align} \label{eqn:spect-vt-ni} \|\omega_t\|_2 \notag \leq \Big\|\sum_{k = 1}^{t-1} \beta_{t-1}^k\psi_k\Big\|_2 + \|\xi_{t-1}\|_2 &\stackrel{(\textrm{i})}{\leq} \Big(1 + O\Big(\sqrt{\frac{t\log n}{n}}\Big)\Big) \|\beta_{t-1}\|_2 + \|\xi_{t-1}\|_2 \\ &\leq \Big(1 + C_3 \sqrt{\frac{t\log n}{n}}\Big) \|\omega_{t-1}\|_2 + \|\xi_{t-1}\|_2 , \end{align} provided that the constant $C_3>0$ is large enough. Here, (i) comes from \eqref{eqn:spect-brahms}, and we remind the readers that $\|\omega_{t-1}\|_2 = \|\beta_{t-1}\|_2$ and $\ltwo{\omega_1} = 1$. Clearly, the inequalities \eqref{eqn:spect-xi-to-beta} and \eqref{eqn:spect-vt-ni} taken together lead to a recurrence relation involving $\ltwo{\xi_t}$ and $\ltwo{\omega_t}$. Based on this, we claim that for all $t$ obeying $\frac{t^{5}\log n}{n}=o(1)$, one has \begin{equation} \|\xi_{t}\|_{2}\leq C_{5}\sqrt{\frac{t^{3}\log n}{n}}\qquad\text{and}\qquad\|\omega_{t}\|_{2}\leq1+C_{5}\sqrt{\frac{t^{5}\log n}{n}} \label{eqn:spec-xi} \end{equation} for some universal constant $C_{5}=2C_{3}$. Clearly, (\ref{eqn:spec-xi}) is satisfied when $t=1$, given that $\xi_1 = 0$ (as $\omega_{2} = Wv^\star$) and $\ltwo{\omega_1} = \ltwo{v^\star} = 1$. 
Suppose now that (\ref{eqn:spec-xi}) is valid for the $t$-th iteration, then we can deduce that \begin{align} \|\omega_{t+1}\|_{2} & \leq\Big(1+C_{3}\sqrt{\frac{(t+1)\log n}{n}}\Big)\|\omega_{t}\|_{2}+\|\xi_{t}\|_{2} \notag\\ & \leq\Big(1+C_{3}\sqrt{\frac{(t+1)\log n}{n}}\Big)\left(1+C_{5}\sqrt{\frac{t^{5}\log n}{n}}\right)+C_{5}\sqrt{\frac{t^{3}\log n}{n}} \notag\\ & =1+C_{5}\sqrt{\frac{(t+1)\log n}{n}}\left(\frac{C_{3}}{C_{5}}+t^{2}+C_{3}\sqrt{\frac{t^{5}\log n}{n}}+t\right) \notag\\ & \leq1+C_{5}\sqrt{\frac{(t+1)\log n}{n}}\left(t+1\right)^{2}=1+C_{5}\sqrt{\frac{(t+1)^{5}\log n}{n}}; \label{eq:omega-t-recursion}\\ \|\xi_{t+1}\|_{2} & \leq\Big(1+C_{3}\sqrt{\frac{(t+1)\log n}{n}}\Big)\|\xi_{t}\|_{2}+C_{3}\sqrt{\frac{(t+1)\log n}{n}}\,\|\omega_{t+1}\|_{2} \notag\\ & \leq C_{5}\Big(1+C_{3}\sqrt{\frac{(t+1)\log n}{n}}\Big)\sqrt{\frac{t^{3}\log n}{n}}+C_{3}\sqrt{\frac{(t+1)\log n}{n}}\left(1+C_{5}\sqrt{\frac{t^{5}\log n}{n}}\right) \notag\\ & \leq C_{5}\sqrt{\frac{(t+1)\log n}{n}}\left(t+\frac{C_{3}}{C_{5}}+2C_{3}\sqrt{\frac{t^{5}\log n}{n}}\right) \notag\\ & \leq C_{5}\sqrt{\frac{(t+1)\log n}{n}}(t+1)=C_{5}\sqrt{\frac{(t+1)^{3}\log n}{n}}; \end{align} here, the last lines in both of the above bounds hold true since $C_{3}/C_{5}=1/2$ and $\frac{t^{5}\log n}{n}=o(1)$. This in turn justifies the validity of the claim (\ref{eqn:spec-xi}) for the $(t+1)$-th iteration. Hence, by induction, we have established (\ref{eqn:spec-xi}) for all $t$ obeying $\frac{t^{5}\log n}{n}=o(1)$. Armed with \eqref{eqn:spec-xi}, we can invoke \eqref{eq:omegat-decomp-spec} again to derive \begin{align*} \|\omega_t\|_2 \notag \geq \Big\|\sum_{k = 1}^{t-1} \beta_{t-1}^k\psi_k\Big\|_2 - \|\xi_{t-1}\|_2 &\stackrel{(\textrm{i})}{\geq} \Big(1 - O\Big(\sqrt{\frac{t\log n}{n}}\Big)\Big) \|\beta_{t-1}\|_2 - \|\xi_{t-1}\|_2 \\ &\geq \Big(1 - C_3 \sqrt{\frac{t\log n}{n}}\Big) \|\omega_{t-1}\|_2 - C_{5}\sqrt{\frac{t^{3}\log n}{n}} \end{align*} for some large enough constant $C_3>0$. Repeat the argument in \eqref{eq:omega-t-recursion} to yield \[ \|\omega_{t}\|_{2}\geq 1 -C_{5}\sqrt{\frac{t^{5}\log n}{n}} . \] This taken collectively with \eqref{eqn:spec-xi} finishes the proof of inequality \eqref{eqn:spec-finale-3}. \paragraph{Proof of inequality~\eqref{eqn:spec-finale-2}.} To streamline the presentation of our proof, let us first make note of the following result, the proof of which is deferred to the end of this section: \begin{align} \label{eqn:spect-beta-ni} \big\| \big[\beta_t^{1},\beta_t^{2},\cdots, \beta_t^{t-1} \big] \big\|_2 &\lesssim \sqrt{\frac{t^5\log n}{n}}. \end{align} With this result, \eqref{eqn:spec-xi} and \eqref{eqn:spec-finale-3} in mind, we are ready to prove \eqref{eqn:spec-finale-2}. 
First, it follows from \eqref{eq:omegat-decomp-spec} that \begin{align*} \big\|\omega_{t}-\psi_{t-1}\big\|_{2} & \leq\big\|(\beta_{t-1}^{t-1}-1)\psi_{t-1}\big\|_{2}+\Big\|\sum_{k=1}^{t-2}\beta_{t-1}^{k}\psi_{k}\Big\|_{2}+\|\xi_{t-1}\|_{2}\\ & \leq\bigg(1+O\Big(\sqrt{\frac{t\log n}{n}}\Big)\bigg)\Big(\big|\beta_{t-1}^{t-1}-1\big|+\big\|\big[\beta_{t-1}^{1},\cdots,\beta_{t-1}^{t-2}\big]\big\|_{2}+\|\xi_{t-1}\|_{2}\Big),\\ & \leq\bigg(1+O\Big(\sqrt{\frac{t\log n}{n}}\Big)\bigg)\Big(\big|\|\omega_{t-1}\|_{2}-1\big|+2\big\|\big[\beta_{t-1}^{1},\cdots,\beta_{t-1}^{t-2}\big]\big\|_{2}+\|\xi_{t-1}\|_{2}\Big), \end{align*} where the second line results from the properties \eqref{eqn:brahms} and \eqref{eqn:simple-rm}, and the last line is valid since \[ \beta_{t-1}^{t-1}=\big\langle\omega_{t-1},y_{t-1}\big\rangle=\frac{\omega_{t-1}^{\top}(I-V_{t-2}V_{t-2}^{\top})\omega_{t-1}}{\|(I-V_{t-2}V_{t-2}^{\top})\omega_{t-1}\|_{2}}\geq0 \] \[ \Longrightarrow\quad\big|\beta_{t-1}^{t-1}-1\big|=\big||\beta_{t-1}^{t-1}|-1\big|\leq\big|\|\beta_{t-1}\|_{2}-1\big|+\big\|\big[\beta_{t-1}^{1},\cdots,\beta_{t-1}^{t-2}\big]\big\|_{2}=\big|\|\omega_{t-1}\|_{2}-1\big|+\big\|\big[\beta_{t-1}^{1},\cdots,\beta_{t-1}^{t-2}\big]\big\|_{2}. \] Taking this collectively with \eqref{eqn:spec-xi}, \eqref{eqn:spect-beta-ni} and \eqref{eqn:spec-finale-3} yields the first part of the advertised bound \eqref{eqn:spec-finale-2}. Regarding the second part of \eqref{eqn:spec-finale-2}, reorganizing the expression \eqref{eqn:spect-seq-1} of $y_{t}$ gives \begin{align*} y_t &= \frac{\omega_t - V_{t-1}V_{t-1}^{\top}\omega_t}{\big\|\omega_t - V_{t-1}V_{t-1}^{\top}\omega_t\big\|_2} % = \omega_t + \bigg(\frac{1-\big\|\omega_t - V_{t-1}V_{t-1}^{\top}\omega_t\big\|_2}{\big\|\omega_t - V_{t-1}V_{t-1}^{\top}\omega_t\big\|_2}\bigg)\omega_t - \frac{V_{t-1}V_{t-1}^{\top}\omega_t}{\big\|\omega_t - V_{t-1}V_{t-1}^{\top}\omega_t\big\|_2}. % \end{align*} In view of relations \eqref{eqn:spect-beta-ni} and \eqref{eqn:spec-finale-3}, we can deduce that \begin{align*} \big\|V_{t-1}V_{t-1}^{\top}\omega_t\big\|_2 &= \big\|V_{t-1}^{\top}\omega_t\big\|_2 = \big\|\big[\beta_{t}^{1},\cdots,\beta_{t}^{t-1}\big]\big\|_{2} \lesssim \sqrt{\frac{t^5\log n}{n}} \\ \big\|\omega_t - V_{t-1}V_{t-1}^{\top}\omega_t\big\|_2 &\leq \|\omega_t\|_2 + \big\|V_{t-1}^{\top}\omega_t \big\|_2 = 1+ O\Big(\sqrt{\frac{t^5\log n}{n}}\Big) ,\\ % \big\|\omega_t - V_{t-1}V_{t-1}^{\top}\omega_t\big\|_2 &\geq \|\omega_t\|_2 - \big\|V_{t-1}^{\top}\omega_t \big\|_2 \geq 1- O\Big(\sqrt{\frac{t^5\log n}{n}}\Big) . \end{align*} Taking the preceding bounds collectively and using \eqref{eqn:spec-finale-3} once again, we immediately reach \begin{align*} \big\| y_t - \omega_t \big\|_2 \leq O\Big(\sqrt{\frac{t^5\log n}{n}}\Big) , \end{align*} thus validating the second part of inequality~\eqref{eqn:spec-finale-2}. 
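As an aside, the orthonormalization \eqref{eqn:spect-seq-1} used throughout this subsection is straightforward to reproduce numerically. The following sketch is purely illustrative (it assumes NumPy and uses arbitrary parameter choices) and simply confirms the two structural properties invoked repeatedly above: the $y_k$'s are orthonormal, and $\omega_t$ lies in $\mathsf{span}\{y_1,\ldots,y_t\}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, T = 1000, 8

G = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
W = (G + G.T) / np.sqrt(2)
v_star = rng.normal(size=n)
v_star /= np.linalg.norm(v_star)

# auxiliary AMP sequence: omega_{t+1} = W omega_t - omega_{t-1}
omega = [np.zeros(n), v_star.copy()]
for t in range(1, T):
    omega.append(W @ omega[t] - omega[t - 1])

# orthonormal basis via successive projections of omega_t
Y = [omega[1]]                      # y_1 = v_star
for t in range(2, T + 1):
    V = np.column_stack(Y)
    r = omega[t] - V @ (V.T @ omega[t])
    Y.append(r / np.linalg.norm(r))
V = np.column_stack(Y)

print(np.linalg.norm(V.T @ V - np.eye(T)))              # ~ 0: orthonormality
print(np.linalg.norm(V @ (V.T @ omega[T]) - omega[T]))  # ~ 0: omega_T in the span
\end{verbatim}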
\paragraph{Proof of inequality~\eqref{eq:sum-of-square-ci}.} It follows from \eqref{eq:v-truncated-sum-bound135} and the triangle inequality that \begin{align*} \bigg\|\widehat{v}^{\star}-\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}c_{i}y_{i}\bigg\|_{2} & \leq\bigg\|\widehat{v}^{\star}-\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}c_{i}\omega_{i}\bigg\|_{2}+\bigg\|\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}c_{i}(\omega_{i}-y_{i})\bigg\|_{2}\\ & \lesssim\frac{1}{(\widetilde{\lambda}-1)n}+\bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}|c_{i}|^{2}\bigg)^{\frac{1}{2}}\bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}\big\|\omega_{i}-y_{i}\big\|_{2}^{2}\bigg)^{\frac{1}{2}}\\ & \lesssim\frac{1}{(\widetilde{\lambda}-1)n}+\bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}|c_{i}|^{2}\bigg)^{\frac{1}{2}}\bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}\frac{i^{5}\log n}{n}\bigg)^{\frac{1}{2}}\\ & \lesssim\frac{1}{(\lambda-1)n}+\bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}|c_{i}|^{2}\bigg)^{\frac{1}{2}} \sqrt{\frac{\log^{7}n}{(\lambda-1)^{6}n}}, \end{align*} where the penultimate inequality invokes \eqref{eqn:spec-finale-2}, and the last line results from \eqref{eq:minyu-lambda-tilde-minus-1-bound}. This combined with the orthonormality of $\{y_{i}\}$ implies that \begin{align*} \bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}(c_{i})^{2}\bigg)^{1/2} & =\bigg\|\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}c_{i}y_{i}\bigg\|_{2}\leq\big\|\widehat{v}^{\star}\big\|_{2}+\bigg\|\widehat{v}^{\star}-\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}c_{i}y_{i}\bigg\|_{2}\leq1+O\Bigg(\bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}|c_{i}|^{2}\bigg)^{\frac{1}{2}}\sqrt{\frac{\log^{7}n}{(\lambda-1)^{6}n}}\Bigg)\\ & \leq1+\frac{1}{2}\bigg(\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}|c_{i}|^{2}\bigg)^{\frac{1}{2}}, \end{align*} where the last line is valid as long as $n(\lambda-1)^{6}/\log^{7}n$ is sufficiently large. Rearranging terms, we are left with $\sum_{i=1}^{\frac{C_{v}\log n}{\widetilde{\lambda}-1}}(c_{i})^{2}\leq4$ as claimed. \paragraph{Proof of inequality \eqref{eqn:spect-beta-ni}.} Finally, we finish the proof by establishing inequality~\eqref{eqn:spect-beta-ni}. From the definition of $\beta_t$ and \eqref{eq:omegat-decomp-spec}, one can derive a recursive relation as follows: \begin{align} \notag\big\|\big[\beta_{t}^{1},\ldots,\beta_{t}^{t-1}\big]\big\|_{2} & =\|V_{t-1}^{\top}\omega_{t}\|_{2}=\bigg\| V_{t-1}^{\top}\Big(\sum_{k=1}^{t-1}\beta_{t-1}^{k}\psi_{k}+\xi_{t-1}\Big)\bigg\|_{2}\\ \notag & \le\Big\|\sum_{k=1}^{t-2}\beta_{t-1}^{k}\psi_{k}\Big\|_{2}+|\beta_{t-1}^{t-1}|\cdot\big\| V_{t-1}^{\top}\psi_{t-1}\big\|_{2}+\big\|\xi_{t-1}\big\|_{2}\\ \notag & \le\Big(1+C_{4}\sqrt{\frac{t\log n}{n}}\Big)\big\|\big[\beta_{t-1}^{1},\ldots,\beta_{t-1}^{t-2}\big]\big\|_{2}+\|\omega_{t-1}\|_{2} \cdot \big\| V_{t-1}^{\top}\psi_{t-1}\big\|_{2}+\big\|\xi_{t-1}\big\|_{2}\\ & \le\Big(1+C_{4}\sqrt{\frac{t\log n}{n}}\Big)\big\|\big[\beta_{t-1}^{1},\ldots,\beta_{t-1}^{t-2}\big]\big\|_{2}+C_4\|\omega_{t-1}\|_{2}\sqrt{\frac{t\log n}{n}}+\big\|\xi_{t-1}\big\|_{2} \label{eqn:intermission} \end{align} for some large enough constant $C_{4}>0$. 
Here, the penultimate inequality uses \eqref{eqn:spect-brahms} and the fact $|\beta_{t-1}^{t-1}|\leq \|\beta_{t-1}\|_2=\|\omega_{t-1}\|_2$, while the last inequality would be guaranteed if we could establish the following result: \begin{align} \label{eqn:second-simple-rm} \|V_{t-1}^{\top}\psi_{t-1}\|_2 \lesssim \sqrt{\frac{t\log n}{n}}. \end{align} We shall assume the validity of \eqref{eqn:second-simple-rm} for the moment, and return to prove it shortly. Taking \eqref{eqn:intermission} together with \eqref{eqn:spec-xi} yields \begin{align} \big\|\big[\beta_{t}^{1},\ldots,\beta_{t}^{t-1}\big]\big\|_{2} & \le\Big(1+C_{4}\sqrt{\frac{t\log n}{n}}\Big)\big\|\big[\beta_{t-1}^{1},\ldots,\beta_{t-1}^{t-2}\big]\big\|_{2}+C_{6}\sqrt{\frac{t^{3}\log n}{n}} \label{eqn:intermission-2} \end{align} for some sufficiently large constant $C_6>0$. We then claim that for all $t$ obeying $\frac{t^{3}\log n}{n}=o(1)$, \begin{equation} \big\|\big[\beta_{t}^{1},\ldots,\beta_{t}^{t-1}\big]\big\|_{2} \leq C_7\sqrt{\frac{t^5 \log n}{n}} \label{eq:claim-beta-ub} \end{equation} holds for some large enough constant $C_7>0$. Regarding the base case, we observe that \begin{align} \label{eqn:spect-inti-beta} |\beta_2^1| = \big| \inprod{\omega_2}{y_1} \big| = \big| v^{\star\top} W v^\star \big| \leq C_7 \sqrt{\frac{\log n}{n}} \end{align} with probability at least $1 - O(n^{-12})$, provided that $C_7>0$ is large enough. Assuming that \eqref{eq:claim-beta-ub} is valid for the $(t-1)$-th iteration, we further have \begin{align*} \notag\big\|\big[\beta_{t}^{1},\ldots,\beta_{t}^{t-1}\big]\big\|_{2} & \le C_{7}\Big(1+C_{4}\sqrt{\frac{t\log n}{n}}\Big)\sqrt{\frac{(t-1)^{5}\log n}{n}}+C_{6}\sqrt{\frac{t^{3}\log n}{n}}\\ & =C_{7}\sqrt{\frac{t^{3}\log n}{n}}\left\{ \Big(1+C_{4}\sqrt{\frac{t\log n}{n}}\Big)\left(t-1\right)+\frac{C_{6}}{C_{7}}\right\} \\ & \leq C_{7}\sqrt{\frac{t^{3}\log n}{n}}\cdot\left\{ t-1+C_{4}\sqrt{\frac{t^{3}\log n}{n}}+\frac{C_{6}}{C_{7}}\right\} \leq C_{7}\sqrt{\frac{t^{5}\log n}{n}}, \end{align*} where the last inequality holds true as long as $C_{7}\geq2C_{6}$ and $\frac{t^{3}\log n}{n}=o(1)$. This justifies the claim \eqref{eq:claim-beta-ub} for the $t$-th iteration. The standard induction argument then establishes \eqref{eq:claim-beta-ub} for all $t$ obeying $\frac{t^{3}\log n}{n}=o(1)$. We now come back to prove \eqref{eqn:second-simple-rm}. Towards this, we first note that: by construction, $\psi_{t-1}$ is independent of $V_{t-1}$. To justify this, recall that it has been established in the last paragraph of Section~\ref{sec:pf-distribution} that: $\psi_{t-1}$ follows a Gaussian distribution $\mathcal{N}(0,\frac{1}{n}I_n)$ no matter what value the sequence $\{y_k\}_{1\leq k \leq t-1}$ takes; therefore, in view of the definition of statistical independence, $\psi_{t-1}$ is independent of $\{y_k\}_{1\leq k \leq t-1}$ and hence $V_{t-1}$ (as $V_{t-1}$ is obtained by simply concatenating $y_1,\ldots,y_{t-1}$). Therefore, $V_{t-1}^{\top}\psi_{t-1}$ is essentially $\mathcal{N}(0,\frac{1}{n}I_{t-1})$, and hence standard Gaussian concentration results \citep{vershynin2018high} imply that \[ \mathbb{P}\Big( \big\| V_{t-1}^{\top}\psi_{t-1} \big\|_2 \geq 5\sqrt{\frac{t\log n}{n}} \Big) \leq O(n^{-11}) \] as claimed. This concludes the proof. 
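For completeness, the Gaussian concentration behind \eqref{eqn:second-simple-rm} can also be checked empirically: for any fixed matrix with orthonormal columns that is independent of $\psi\sim\mathcal{N}(0,\frac{1}{n}I_n)$, the projected norm concentrates around $\sqrt{(t-1)/n}$, comfortably below the stated bound. The sketch below is illustrative only; it assumes NumPy, and the dimensions and number of trials are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, t, trials = 2000, 20, 500

# a fixed n x (t-1) matrix with orthonormal columns, independent of psi
V, _ = np.linalg.qr(rng.normal(size=(n, t - 1)))

norms = [np.linalg.norm(V.T @ rng.normal(scale=1.0 / np.sqrt(n), size=n))
         for _ in range(trials)]
print(np.mean(norms), np.sqrt((t - 1) / n))            # concentrates near sqrt((t-1)/n)
print(np.max(norms), 5 * np.sqrt(t * np.log(n) / n))   # max stays below the bound
\end{verbatim}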
\subsection{Proof of Lemma~\ref{lem:alt-x1}} \label{sec:proof-eqn:alt-x1} Repeating the proof of Lemma~\ref{lem:linear-combination-power-method-AMP}, we can show that each $W^s\widetilde{v}$ is a linear combination of $\{u_{s+1}, \ldots, u_{2s+1}\}.$ Taking this together with the decomposition~\eqref{eqn:x1-decmp-violin} and Lemma~\ref{lem:linear-combination-power-method-AMP} reveals that $x_{1}$ can be expressed as \begin{align} \label{eqn:basic-exp-x1} x_1 = \sum_{i = 1}^{2s+1} b_i\widehat{y}_i, \qquad \text{with } b_i = \inprod{x_1}{\widehat{y}_i}, \end{align} given that $\{\widehat{y}_i\}_{i=1}^{2s+1}$ are orthonormal and span the subspace containing $\{\omega_i\}_{i=1}^{s}$ and $\{u_i\}_{i=s+1}^{2s+1}.$ Next, we move on to show that $\ltwo{[b_{s+1},\ldots, b_{2s+1}]}$ is small. More specifically, recall from Lemma~\ref{lem:linear-combination-power-method-AMP} that \[ \sum_{i=0}^{s-1}a_{i}W^{i}v^{\star}\in\mathsf{span}\{\omega_{1},\cdots,\omega_{s}\}=\mathsf{span}\big\{\widehat{y}_{1},\cdots,\widehat{y}_{s}\big\}, \] and hence by virtue of \eqref{eqn:x1-decmp-violin}, \begin{align} \notag \big\|[b_{s+1},\cdots,b_{2s+1}]\big\|_{2} & =\Big\| x_{1}-\sum_{i=1}^{s}b_{i}\widehat{y}_{i}\Big\|_{2}\overset{\mathrm{(i)}}{\leq}\Big\| x_{1}-\sum_{i=0}^{s-1}a_{i}W^{i}v^{\star}\Big\|_{2}=\big\| a_{s}W^{s}\widetilde{v}\big\|_{2}\overset{\mathrm{(ii)}}{\lesssim}\frac{\|W\|^{s}n^{11.5}}{\lambda_{\max}^{s}}\\ & \asymp\bigg(1-\frac{\lambda_{\max}-\|W\|}{\lambda_{\max}}\bigg)^{s}n^{11.5}\lesssim\frac{1}{n}, \end{align} where (i) follows since $\sum_{i=1}^{s}b_{i}\widehat{y}_{i}$ is the Euclidean projection of $x_{1}$ onto $\mathsf{span}\big\{\widehat{y}_{1},\cdots,\widehat{y}_{s}\big\}$ while $\sum_{i=0}^{s-1}a_{i}W^{i}v^{\star} \in \mathsf{span}\big\{\widehat{y}_{1},\cdots,\widehat{y}_{s}\big\}$; (ii) makes use of (\ref{eqn:power-error}), and the last inequality invokes (\ref{eq:lambda-1-gap-non-asymptotic-UCB}) and (\ref{eq:lambda-1-non-asymptotic-UCB}) and is valid if $s\geq\frac{C_{v}\lambda^2 \log n}{(\lambda-1)^{2}}$ for some sufficiently large constant $C_{v}>0$. Moreover, putting expressions~\eqref{eqn:basic-exp-x1} and \eqref{eqn:spectral-expansion} together yields \begin{align} \notag \big\|[b_{1},\cdots b_{s}]-[c_{1},\cdots,c_{s}]\big\|_{2} & =\Big\|\sum_{i=1}^{s}(b_{i}-c_{i})\widehat{y}_{i}\Big\|_{2}\leq\Big\|\sum_{i=1}^{s}(b_{i}-c_{i})\widehat{y}_{i}+\sum_{i=s+1}^{2s+1}b_{i}\widehat{y}_{i}\Big\|_{2}\\ \notag & \leq\big\| x_{1}-\widehat{v}^{\star}\big\|_{2}+\Big\|\widehat{v}^{\star}-\sum_{i=1}^{s}c_{i}\widehat{y}_{i}\Big\|_{2}\\ & \lesssim\frac{1}{n^{12}}+\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{6}n}}\asymp\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{6}n}} \end{align} with probability at least $1-O(n^{-11})$. 
In light of the above two relations, we can further derive \begin{align*} \Big\|\sum_{i = 1}^{2s+1} b_i\widehat{\psi}_i - \sum_{i = 1}^{s} c_i\psi_i\Big\|_2 & \leq \Big\|\sum_{i = 1}^{s} b_i\widehat{\psi}_i - \sum_{i = 1}^{s} c_i\psi_i\Big\|_2 + \Big\|\sum_{i = s+1}^{2s+1} b_i\widehat{\psi}_i \Big\|_2 \\ & = \Big\|\sum_{i = 1}^{s} b_i{\psi}_i - \sum_{i = 1}^{s} c_i\psi_i\Big\|_2 + \Big\|\sum_{i = s+1}^{2s+1} b_i\widehat{\psi}_i \Big\|_2 \\ &\leq \Big(1 + O\Big(\sqrt{\frac{t\log n}{n}}\Big)\Big) \Big(\big\|[b_{1},\cdots b_{s}]-[c_{1},\cdots,c_{s}]\big\|_{2} + \big\|[b_{s+1},\ldots,b_{2s+1}]\big\|_{2}\Big) \\ &\lesssim \frac{\log^{3.5} n}{\sqrt{(\lambda - 1)^6n}} \end{align*} with probability at least $1 - O(n^{-11})$, where the penultimate line applies the concentration result~\eqref{eqn:spect-brahms}. Hence, taking this collectively with \eqref{eqn:power-error} and Lemma~\ref{lem:spec}, we can demonstrate that \begin{align} \Bigg\| x_{1}-c_{1}v^{\star}-\frac{1}{\widetilde{\lambda}}\sum_{i=1}^{2s+1}b_{i}\widehat{\psi}_{i}\Bigg\|_{2} & \leq\Bigg\| x_{1}-c_{1}v^{\star}-\frac{1}{\widetilde{\lambda}}\sum_{i=1}^{s}c_{i}\psi_{i}\Bigg\|_{2}+\frac{1}{\widetilde{\lambda}}\Big\|\sum_{i=1}^{2s+1}b_{i}\widehat{\psi}_{i}-\sum_{i=1}^{s}c_{i}\psi_{i}\Big\|_{2} \notag\\ & \leq\Bigg\|\widehat{v}^{\star}-c_{1}v^{\star}-\frac{1}{\widetilde{\lambda}}\sum_{i=1}^{s}c_{i}\psi_{i}\Bigg\|_{2}+\big\| x_{1}-\widehat{v}^{\star}\big\|_{2}+O\bigg(\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{6}n}}\bigg) \notag\\ & \lesssim\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{6}n}}+\frac{1}{n^{12}}+\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{6}n}}\asymp\frac{\log^{3.5}n}{\sqrt{(\lambda-1)^{6}n}} . \label{eq:x1-approx-UB} \end{align} Finally, it results from \eqref{eq:lambda-max-correct-bound} and \eqref{eqn:spect-brahms} that \begin{align*} \bigg\|\frac{1}{\lambda}\sum_{i=1}^{2s+1}b_{i}\widehat{\psi}_{i}-\frac{1}{\widetilde{\lambda}}\sum_{i=1}^{2s+1}b_{i}\widehat{\psi}_{i}\bigg\|_{2} & \lesssim\frac{|\lambda-\widetilde{\lambda}|}{\lambda^{2}}\bigg\|\sum_{i=1}^{2s+1}b_{i}\widehat{\psi}_{i}\bigg\|\lesssim\bigg(1+O\Big(\sqrt{\frac{s}{n}}\Big)\bigg)|\lambda-\widetilde{\lambda}|\cdot\big\|[b_{1},\cdots,b_{2s+1}]\big\|_{2}\\ & \lesssim\sqrt{\frac{\log n}{n(\lambda-1)^{7}}}, \end{align*} where we also use the fact that $\big\|[b_{1},\cdots,b_{2s+1}]\big\|_{2}\asymp1$ (a direct consequence of \eqref{eqn:basic-exp-x1} and the orthonormality of $\{\widehat{y}_{i}\}$). This together with \eqref{eq:x1-approx-UB} and the triangle inequality immediately concludes the proof. \subsubsection{AMP with data-dependent initialization: strong SNR regime} \label{sec:result-sparse-init} Let us begin by considering the strong SNR regime where \begin{align} \label{eqn:init-1} \lambda\|v^{\star}\|_{\infty} \gtrsim \sqrt{\frac{k\log n}{n}}. \end{align} For instance, if $\|v^\star\|_\infty = O(\frac{1}{\sqrt{k}})$, then \eqref{eqn:init-1} imposes a constraint on $\lambda$ as $\lambda \gtrsim k\sqrt{\frac{\log n}{n}}$. \paragraph{Initialization scheme \#1: diagonal maximization.} Set $\eta_0(x_{0}) = 0$, and take \begin{align} x_1 = e_{\hat{s}},\qquad\text{with }~ {\hat{s}} \coloneqq \arg\max_i \left|M_{ii}\right|, \label{defi:v1} \end{align} where $e_i\in \ensuremath{\mathbb{R}}^n$ denotes the $i$-th standard basis vector. In words, this initialization simply identifies the largest diagonal entry of $M$, and forms a standard basis vector w.r.t.~this entry. 
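To make the diagonal-maximization rule \eqref{defi:v1} concrete, here is a minimal sketch (the problem sizes, the signal strength, and the way the sparse spike and the symmetric noise are generated are illustrative assumptions rather than the paper's exact setup) that builds a sparse spiked matrix and forms $x_1=e_{\hat{s}}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, k, lam = 1000, 10, 8.0

# sparse unit-norm spike v* supported on k coordinates
v_star = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
v_star[support] = 1.0 / np.sqrt(k)

# symmetric Gaussian noise with entries of variance about 1/n
G = rng.standard_normal((n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2)
M = lam * np.outer(v_star, v_star) + W

# initialization scheme #1: standard basis vector at the largest diagonal entry
s_hat = int(np.argmax(np.abs(np.diag(M))))
x1 = np.zeros(n)
x1[s_hat] = 1.0

print("selected index lies in the support of v*:", s_hat in support)
\end{verbatim}
In this strong-SNR configuration, $\lambda\|v^\star\|_\infty$ dominates the diagonal noise fluctuations, so the selected index typically falls on a large entry of $v^\star$, which is the behavior formalized in the proposition that follows.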
Given the ambiguity of the global sign (i.e., one can only hope to recover $v^\star$ up to global sign), we shall assume $v^\star_{\hat{s}} \ge 0$ without loss of generality. As it turns out, in the strong SNR regime \eqref{eqn:init-1}, the algorithm \eqref{defi:v1} is guaranteed to find an index within the following index subset: % \begin{align} \label{eqn:sparse-set-s0} \mathcal{S}_0 \coloneqq \Big\{s \in [n] ~\mid~ |v^\star_s| \geq \frac{1}{2} \|v^\star\|_\infty \Big\}. \end{align} Moreover, executing one iteration of AMP from $x_1=e_{\hat{s}}$ is able to yield a nontrivial correlation with the truth $v^\star$. These two facts are formally stated in the following proposition, with its proof deferred to Section~\ref{sec:pf-sparse-ini}. \begin{props} \label{thm:sparse-init} Suppose the signal strength satisfies~\eqref{eqn:init-1}. With probability at least $1-O(n^{-11})$, one has: \begin{enumerate} \item the index ${\hat{s}}$ (cf.~\eqref{defi:v1}) satisfies $\hat{s} \in \mathcal{S}_{0}$; \item for every $s \in \mathcal{S}_{0}$, the AMP updates~\eqref{eqn:AMP-updates} initialized with $x_1=e_{s}$ and $\eta_0(x_0)=0$ obey \begin{align} \label{eqn:vstar-s} \big|\big\langle v^{\star}, \eta_2(x_2) \big\rangle \big| \asymp 1. \end{align} \end{enumerate} \end{props} \paragraph{Non-asymptotic theory of AMP when initialized by \eqref{defi:v1}.} Despite the statistical dependency between $\hat{s}$ and $W$, Proposition~\ref{thm:sparse-init} guarantees that it always comes from a fixed and small index subset. Consequently, basic union bounding suffices in helping us analyze the subsequent AMP iterates. This is summarized in the result below; the proof can be found in Section~\ref{sec:pf-sparse}. \begin{cors} \label{cor:sparse-init-1} If the signal strength satisfies \eqref{eqn:init-1}, then with probability at least $1-O(n^{-10})$, the AMP iterates~\eqref{eqn:AMP-updates} with initialization \eqref{defi:v1} obey \eqref{eqn:sparse-decomp-dvorak} - \eqref{eqn:soccer} for $2\leq t\lesssim \frac{n\lambda^2}{\log^3 n}$, where $\alpha_3^{\star} \asymp \lambda$. \end{cors} \subsubsection{AMP with data-dependent initialization: weak signal regime} We now move on to the following regime that violates the condition~\eqref{eqn:init-1}: \begin{align} \label{eqn:init-2} \lambda \gtrsim \frac{k}{\sqrt{n}} ~~\text{ and }~~\|v^{\star}\|_{\infty} = o\Big(\sqrt{\frac{\log n}{k}}\Big). \end{align} It is noteworthy that $\lambda$ cannot be further reduced, as a computational barrier has been widely conjectured that asserts that no algorithm can achieve consistent estimation if $\lambda = o(k/{\sqrt{n}})$ \citep{berthet2013computational,cai2015optimal,wang2016statistical,hopkins2017power}. \paragraph{Initialization scheme \#2.} Before describing our next initialization scheme, we give two remarks below. \begin{itemize} \item As shown in the prior literature, there exists a computationally feasible algorithm that allows one to find an estimate $\widehat{v}_{\mathsf{oracle}}$ that obeys $\|\widehat{v}_{\mathsf{oracle}}\|_2=1$ and % \begin{equation} \big|\big\langle \widehat{v}_{\mathsf{oracle}}, v^\star \big\rangle \big| \asymp 1 \label{eq:correlation-oracle} \end{equation} % with probability exceeding $1-O(n^{-10})$, as long as $\lambda \gtrsim k / \sqrt{n}$ in the model \eqref{eqn:wigner-sparse}. 
An example of this kind is the one based on covariance thresholding studied in \citet{deshpande2014sparse,krauthgamer2015semidefinite}.\footnote{While \citet{deshpande2014sparse} focused primarily on the spiked Wishart model, it is fairly easy to transform the Wigner model \eqref{eqn:wigner-sparse} into the model therein, by using a simple Gaussian lifting trick to asymmetrize $M$.} In what follows, we shall refer to this algorithm as an oracle algorithm. \item The estimate returned by the above oracle algorithm, however, exhibits complicated statistical dependency on $W$, thus precluding us from directly invoking our AMP analysis framework. \end{itemize} \noindent In light of the above observations, we propose an initialization scheme based on sample splitting, which repeats the following steps for $N \asymp \log n$ rounds. In each round $j$: \begin{itemize} \item[1)] Randomly sample an index subset $\mathcal{I}_{j}$, independent of $M$, with mean size $\mathbb{E}|\mathcal{I}_{j}|=np$ (each $i \in [n]$ is included in $\mathcal{I}_{j}$ with probability $p$) and partition $M$ into four independent blocks, namely, $M_{\mathcal{I}_{j}, \mathcal{I}_{j}}$, $M_{\mathcal{I}_{j}^{c}, \mathcal{I}_{j}}$, $M_{\mathcal{I}_{j}, \mathcal{I}_{j}^{c}}$, $M_{\mathcal{I}_{j}^{c}, \mathcal{I}_{j}^{c}}$. Here and below, $M_{\mathcal{I}, \mathcal{J}}$ denotes the submatrix of $M$ with rows (resp.~columns) coming from those with indices in $\mathcal{I}$ (resp.~$\mathcal{J}$). \item[2)] Apply the oracle algorithm mentioned above, together with a small follow-up step, to obtain a unit-norm estimate $x^j \in \ensuremath{\mathbb{R}}^{|\mathcal{I}_j^{\mathrm{c}}|}$ (see Algorithm~\ref{alg:split} for details). \item[3)] Run AMP on a smaller-dimensional (but independent) submatrix $M_{\mathcal{I}_{j}^{c}, \mathcal{I}_{j}^c}$; the size of $\mathcal{I}_{j}$ is chosen to be $o(n)$, so that the efficiency of the AMP will not degrade much. \end{itemize} Finally, we select an index set $\mathcal{I}_{\widehat{j}}$ based on the following criterion: \begin{align*} \widehat{j} \coloneqq \arg\!\max_{1\leq j \leq N} ~\Big\{ x^{j\top} M_{\mathcal{I}_j^{c},\mathcal{I}_j^{c}} x^{j}\Big\}. \end{align*} In other words, we pick an index set such that its initial estimate has the largest correlation with the complement diagonal block. The fact that $x^j$ is statistically independent of $W_{\mathcal{I}_{\widehat{j}}^{c}, \mathcal{I}_{\widehat{j}}^{c}}$ plays a crucial role in the subsequent analysis. The whole initialization scheme is summarized in Algorithm~\ref{alg:split}. We are then in a position to derive some key properties of the above initialization scheme. For ease of exposition, let us define an index subset \begin{align} \mathcal{S}_1 \coloneqq \bigg\{j \in [N] : \frac{\big\langle v_{\mathcal{I}_{j}^{c}}^{\star}, x^j \big\rangle}{\big\|v_{\mathcal{I}_{j}^{c}}^{\star}\big\|_2} \asymp 1\bigg\}. \end{align} We immediately make note of the following property, whose proof is provided in Section~\ref{sec:pf-prop-sparse-split}. \begin{props} \label{prop:sparse-split} Consider the regime \eqref{eqn:init-2} with $k \gg \log n$, and set $p = C_p \frac{\log n}{k}$ for some large constant $C_p$.
With probability at least $1 - O(n^{-10})$, the vector $v^{\widehat{j}}$ computed in \eqref{eqn:vj} --- with $\widehat{j}$ chosen in \eqref{eqn:sparse-choose-j} --- satisfies \begin{align} \label{eq:spec-init} v^{\widehat{j}} \coloneqq M_{\mathcal{I}_{\widehat{j}}^{c}, \mathcal{I}_{\widehat{j}}}v_{\mathcal{I}_{\widehat{j}}} = \alpha_1 v_{\mathcal{I}_{\widehat{j}}^{c}}^{\star} + \phi_0 \qquad \text{with } \alpha_1 \asymp \lambda\sqrt{p}, \end{align} where $\phi_0 \sim \mathcal{N}(0, \frac{1}{n}I_{|\mathcal{I}^c_{\widehat{j}}|})$ is independent from $W_{\mathcal{I}_{\widehat{j}}^{c}, \mathcal{I}_{\widehat{j}}^{c}}$ conditional on $\mathcal{I}_{\widehat{j}}$. Moreover, one has $\widehat{j} \in \mathcal{S}_{1}$ with probability at least $1 - O(n^{-10}).$ \end{props} \paragraph{Non-asymptotic theory of AMP as initialized in Algorithm~\ref{alg:split}.} As revealed by Proposition~\ref{prop:sparse-split}, the aforementioned initialization scheme provides an almost independent estimate that enjoys non-vanishing correlation with the truth. We can then execute the AMP update rule \eqref{eqn:AMP-updates-sparse} on the submatrix $M_{\mathcal{I}_{\widehat{j}}^{c}, \mathcal{I}_{\widehat{j}}^{c}}$ in order to obtain an estimate $x_t$ for the subvector of $v^\star$ from the index subset $\mathcal{I}_{\widehat{j}}^{c}$; details are summarized in Algorithm~\ref{alg:split}. With this in mind, our theory developed so far readily leads to finite-sample characterizations of this estimate $x_t$. More specifically, Theorem~\ref{thm:sparse} together with some basic union bounds reveals that with probability at least $1 - O(n^{-10})$, the estimate $x_t$ returned by Algorithm~\ref{alg:split} satisfies \begin{align} \label{eqn:decomp-1-sparse45} x_{t+1,i} &= \alpha_{t+1} v^\star_{i} + \sum_{j = 1}^{t} \beta_{t}^j\phi_{j, i} + \xi_{t, i}, \qquad \text{for all } i \in \mathcal{I}^c_{\widehat{j}} , \end{align} where again the $\phi_j$'s are i.i.d.~drawn from $\mathcal{N}(0, \frac{1}{n} I_{|\mathcal{I}^c_{\widehat{j}}|})$, with the coefficients $\beta_{t}$, $\alpha_{t+1}$ and $\|\xi_{t}\|_2$ satisfying the predictions of Theorem~\ref{thm:sparse} (except that $\alpha_{t+1}$ should be rescaled by $\sqrt{1-p}$ to account for the reduced signal size). \begin{remark} The careful reader might remark that Algorithm~\ref{alg:split} only returns an estimate over the index subset $\mathcal{I}_{\widehat{j}}^{c}$. One still needs to estimate the remaining entries of $v^\star$. To do so, we can simply rerun the algorithm to generate different sampling sets, in the hope of producing another estimate that covers the remaining subvector (which is likely to happen given that $\mathcal{I}_{\widehat{j}}$ is vanishingly small). The two AMP outputs can then be merged easily to estimate the whole vector $v^\star$. Details are omitted here as they are not the focus of the current paper. \end{remark} \begin{algorithm}[t] \DontPrintSemicolon {\bf Input:} data matrix $M$; an oracle algorithm as described in \eqref{eq:correlation-oracle}; $p \asymp \frac{\log n}{k}$; $\tau_1 \asymp \sqrt{\frac{\log n}{n}}$.\\ {\bf Initialization:} \begin{enumerate} \item Set $N \asymp \log n$. For every $j \in [N]$, sample an index subset $\mathcal{I}_j \subset [n]$ such that each $i \in [n]$ belongs to $\mathcal{I}_{j}$ independently with probability $p$. 
\item For each $j\in [N]$, partition $M$ into four sub-matrices $M_{\mathcal{I}_j, \mathcal{I}_j}, M_{\mathcal{I}_j, \mathcal{I}_j^{c}}, M_{\mathcal{I}_j^{c}, \mathcal{I}_j}$ and $M_{\mathcal{I}_j^{c}, \mathcal{I}_j^{c}}$. Run the oracle algorithm to obtain a unit-norm estimate $v_{\mathcal{I}_j}$ for $v^{\star}_{\mathcal{I}_j}$ --- the subvector of $v^\star$ in the index set $\mathcal{I}_j$ --- based on $M_{\mathcal{I}_j, \mathcal{I}_j}$, which satisfies $\langle v^{\star}_{\mathcal{I}}, v_{\mathcal{I}}\rangle \asymp \big\|v^{\star}_{\mathcal{I}}\big\|_2$ with high probability. Compute \begin{align} \label{eqn:vj} v^j &\coloneqq M_{\mathcal{I}_j^{c}, \mathcal{I}_j}\cdot v_{\mathcal{I}_j}, \\ \label{eqn:sparse-init-2-j} x^j &\coloneqq \frac{\mathsf{ST}_{\tau_1}(v^{j})}{\ltwo{\mathsf{ST}_{\tau_1}(v^{j})}} . \end{align} \item Compute \begin{align} \label{eqn:sparse-choose-j} \widehat{j} \coloneqq \arg\!\max_{1\leq j \leq N} ~\Big\{ x^{j\top} M_{\mathcal{I}_j^{c},\mathcal{I}_j^{c}} x^{j}\Big\} \qquad \text{and} \qquad x_1 = x^{\widehat{j}}. \end{align} \end{enumerate} {\bf AMP:} run AMP \eqref{eqn:AMP-updates-sparse} on $M_{\mathcal{I}_{\widehat{j}}^{c}, \mathcal{I}_{\widehat{j}}^{c}}$ with initialization $x_1$ and $\eta_0(x_{0}) = 0$ to obtain $x_{t} \in \ensuremath{\mathbb{R}}^{|\mathcal{I}_{\widehat{j}}^c|}$. \caption{AMP with sample-split initialization.} \label{alg:split} \end{algorithm} \medskip \subsection{Non-asymptotic analysis of spectrally initialized AMP} \label{sec:spectral} Theorems~\ref{thm:recursion}-\ref{thm:main} are concerned with AMP iterates when initialized at a point independent of $W.$ Caution needs to be exercised, however, when these results are used to accommodate random initialization; in fact, when AMP is initialized randomly, while the decomposition still holds true, the error terms $\Delta_{\alpha,t}$ and $\|\xi_t\|_2$ might not be negligible compared to the signal component, thereby calling into question the validity of the asymptotic state evolution formula. Alternatively, one might consider AMP with a warm start --- that is, initializing AMP at some informative point. Along this line, a common approach to initialize a nonconvex iterative algorithm is the spectral method \citep{chen2021spectral,chi2019nonconvex,montanari2021estimation,keshavan2010matrix}, which attempts estimation by computing the leading eigenvector of the data matrix and has proved effective for various low-rank estimation problems. Spectrally initialized AMP has previously been analyzed when $t$ is fixed and $n$ approaches infinity \citep{montanari2021estimation,celentano2021local}. Motivated by the wide use of spectral initialization in practice, we pursue an extension of our non-asymptotic analysis framework to accommodate AMP with spectral initialization. Recognizing that the leading eigenvector of a large matrix is often computed by means of an iterative power method, we consider the following spectral estimate: \begin{itemize} \item[1)] generate an initial vector $\widetilde{v} \in \mathbb{R}^{n}$ uniformly at random on the $n$-dimensional sphere $\mathcal{S}^{n-1}$; \item[2)] run power iteration for $s$ steps (with $s$ to be specified shortly), and yield an estimate % \begin{align} \label{eqn:power-initialization} x_1 \coloneqq a_s M^s \widetilde{v} \qquad \text{with }a_s \coloneqq \frac{1}{\ltwo{M^s \widetilde{v}}} \end{align} with $a_s$ the normalization factor. 
\end{itemize} The reason we study this concrete power method is two fold: (i) it corresponds to the method widely implemented in practice to compute the leading eigenvector in an exceedingly accurate manner; (ii) it is iterative in nature, thus facilitating integration into the AMP analysis framework. When we employ $x_{1}$ (cf.~\eqref{eqn:power-initialization}) to initialize the AMP algorithm \eqref{eqn:AMP-updates}, Theorems~\ref{thm:recursion}-\ref{thm:main} remain valid after slight modification, with an initial signal strength $\alpha_1$ that can be characterized accurately using the property of spectral methods. Our result is formally stated below; its proof can be found in Section~\ref{sec:pf-thm-spectral}. \begin{theos} \label{thm:recursion-spectral} Suppose that the AMP algorithm \eqref{eqn:AMP-updates} is initialized with $x_0, x_1\in \ensuremath{\mathbb{R}}^n$, where $x_1$ is obtained via \eqref{eqn:power-initialization} with $s = \frac{C_v\log n}{(\lambda - 1)^2}$ for some large enough constant $C_v>0$, and $x_0$ obeys $\eta_0(x_0) = \frac{1}{\lambda}x_1$. Suppose that $1 + C_{\lambda}\big(\frac{\log n }{n}\big)^{1/9} \leq \lambda= O(1)$ for some large enough constant $C_{\lambda }>0$. Then for every $0\le t< n - 2s -1 $, the AMP iterates \eqref{eqn:AMP-updates} admit the following decomposition: \begin{align} \label{eqn:xt-decomposition-spectral} x_{t+1} = \alpha_{t+1} v^\star + \sum_{k = -2s}^{t} \beta_{t}^k\phi_k + \xi_{t}, \end{align} where the $\phi_k$'s are independent obeying $\phi_k \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0, \frac{1}{n}I_n)$, the $\xi_{k}$'s denote some residual vectors, and \begin{subequations} \label{eq:alpha-beta-recursion-spect} \begin{align} \alpha_{1} &=\sqrt{1-\frac{1}{\lambda^{2}}}, \qquad \alpha_{t+1} = \lambda v^{\star\top} \eta_{t}(x_t), \\ \|\beta_t\|_2 &\coloneqq \big\| \big(\beta_t^{-2s},\ldots,\beta_t^{0},\beta_t^1,\beta_t^2,\ldots,\beta_t^t \big) \big\|_2 = \left\|\eta_{t}(x_t)\right\|_2. \end{align} \end{subequations} In particular, there exist some unit vectors $\{\mu_t\}$ with $\mu_t = [\mu_t^{-2s},\ldots, \mu_t^{t}]\in \mathbb{R}^{t+2s+1}$ obeying \begin{align} \label{eqn:xi-norm-main-spectral} \|\xi_{0}\|_2 &\lesssim \frac{\log^{3.5} n}{\sqrt{(\lambda - 1)^9n}} \\ \|\xi_{t}\|_2 &= \Big\langle \sum_{k = -2s}^{t-1} \mu_t^k\phi_k, \delta_{t}\Big\rangle - \langle\delta_{t}^{\prime}\rangle \sum_{k = -2s}^{t-1} \mu_t^k\beta_{t-1}^k + \Delta_t + O\Big(\sqrt{\frac{t\log n}{n}}\|\beta_{t}\|_2\Big), \qquad 1 \leq t < n-2s-1 \end{align} with probability at least $1-O(n^{-11})$, where we define \begin{subequations} \label{eqn:delta-chorus-spectral} \begin{align} v_t &\coloneqq \alpha_tv^\star + \sum_{k = -2s}^{t-1} \beta_{t-1}^k\phi_k \\ \label{defn:Delta-t-spectral} \Delta_t &\coloneqq \sum_{k = -2s}^{t-1} \mu_t^k \Big[\big\langle \phi_k, \eta_{t}(v_t)\big\rangle - \big\langle\eta_t^{\prime}(v_t)\big\rangle \beta_{t-1}^k\Big], \\ \label{defn:delta-t-spectral} \delta_{t} &\coloneqq \eta_{t}(x_t) - \eta_{t}(v_t), \\ \label{defn:delta-prime-t-spectral} \delta_{t}^{\prime} &\coloneqq \eta_{t}^{\prime}(x_t) - \eta_{t}^{\prime}(v_t). \end{align} \end{subequations} \end{theos} Akin to Theorem~\ref{thm:recursion}, Theorem~\ref{thm:recursion-spectral} provides a non-asymptotic characterization for each AMP iterate in the presence of spectral initialization. 
Even though the power method does not resemble the AMP update rule, spectrally initialized AMP shares the same decomposition structure as in Theorem~\ref{thm:recursion}, except that many summations therein include $2s+1$ more vectors/coefficients in order to incorporate the influence of spectral methods. As can be anticipated, one can immediately derive a counterpart of Theorem~\ref{thm:main} in the presence of spectral initialization by properly modifying Assumption~\ref{assump:A-H-eta}. \begin{cors} \label{cor:recursion-spectral} Consider the setting of Theorem~\ref{thm:recursion-spectral}. Suppose Assumptions~\ref{assump:eta}-\ref{assump:A-H-eta} are valid, except that the summations $\sum_{k=1}^{t-1}$ are replaced by $\sum_{k=-2s}^{t-1}$. With probability at least $1-O(n^{-11})$, the AMP iterates admit the decomposition~\eqref{eqn:xt-decomposition-spectral} with $\alpha_{t+1}$ and $\ltwo{\beta_t}$ obeying \eqref{eqn:state-evolution-finite} and the error terms satisfying \eqref{eqn:para-general}. \end{cors} On a technical level, the main step towards proving Theorem~\ref{thm:recursion-spectral} consists of showing that the spectral initialization admits a similar decomposition \begin{align} \label{eqn:spectral-x1-gist} x_1 = \alpha_1 v^{\star} + \sum_{k = -2s}^{0} \beta_0^k\phi_k + O\Big(\frac{\log^{3.5} n}{\sqrt{(\lambda - 1)^7n}}\Big), \end{align} for a set of $2s+1$ i.i.d.~Gaussian vectors $\{\phi_k\}_{-2s\leq k\leq 0}$. The primary challenge in establishing this result stems from the fact that $x_{1}$ relies heavily on $W$ and cannot be easily decoupled from it, in contrast to the situation of Theorem~\ref{thm:recursion} with an independent initialization. Informally, a key observation that helps overcome this challenge (as shall be detailed in Section~\ref{sec:rep-spec}) is the following decomposition owing to power iterations: \begin{align} x_1 = \sum_{i = 0}^{s-1} a_iW^{i}v^{\star} + \frac{1}{\ltwo{M^s \widetilde{v}}}W^s\widetilde{v}, \end{align} for certain coefficients $a_{0},\ldots,a_{s-1}\in \ensuremath{\mathbb{R}}$, where $\widetilde{v}$ is the initial vector for power iterations. Inspired by this decomposition, we attempt to construct an orthonormal basis of dimension $2s+1$ that covers $x_1$ perfectly, which can be accomplished by some AMP-style algorithms. These auxiliary AMP sequences can then be merged with the subsequent AMP updates, providing a sensible way to invoke Theorem~\ref{thm:recursion}. \begin{remark} It is worth pointing out that in our non-asymptotic analysis, it is critical to ensure that $x_{1}$ lies perfectly within the constructed $(2s+1)$-dimensional subspace; otherwise, the leakage term --- albeit of tiny magnitude --- might ruin the key independence structures that underlie our theory (to be made precise in Lemma~\ref{lem:distribution}). This issue, however, does not manifest itself if one only aims for an asymptotic characterization, making our incorporation of spectral initialization more intricate than the asymptotic counterpart in \cite{montanari2021estimation}. \end{remark}
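To illustrate the spectral estimate \eqref{eqn:power-initialization} discussed in this subsection, the following minimal sketch (the dimension, the value of $\lambda$, and the constant multiplying $\log n/(\lambda-1)^2$ in the number of power steps are arbitrary illustrative choices) runs normalized power iterations from a random point on the sphere and reports the overlap with the planted vector, which should be close to $\alpha_1=\sqrt{1-1/\lambda^{2}}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, lam = 2000, 1.5

# rank-one spiked model M = lam * v* v*^T + W
v_star = rng.standard_normal(n)
v_star /= np.linalg.norm(v_star)
G = rng.standard_normal((n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2)
M = lam * np.outer(v_star, v_star) + W

# random initial vector on the unit sphere
x1 = rng.standard_normal(n)
x1 /= np.linalg.norm(x1)

# s power iterations; normalizing at every step gives the same direction
# as x_1 = M^s v_tilde / ||M^s v_tilde||
s = 5 * int(np.ceil(np.log(n) / (lam - 1.0) ** 2))
for _ in range(s):
    x1 = M @ x1
    x1 /= np.linalg.norm(x1)

print("overlap |<x1, v*>|:", abs(x1 @ v_star))
print("predicted alpha_1  :", np.sqrt(1.0 - 1.0 / lam**2))
\end{verbatim}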
\section{Introduction}\label{sec:intro} The supersymmetrization of the Standard Model (SM) requires enlarging the spectrum of the theory. The quarks and leptons, as well as the Higgs scalars, become components of supersymmetry representations, the chiral superfields. The notation is presented in table \ref{tab:notations}. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|} \hline \footnotesize Chiral & \footnotesize Left Weyl & \footnotesize Complex & $\scriptstyle SU(3)_c\times SU(2)_L\times U(1)_Y$ \\ \footnotesize Superfield & \footnotesize fermion & \footnotesize scalar & \footnotesize representation \\ \hline &&&\\ $L $ &$ l $ &$\tilde l$ &$(1,2,-1/2)$ \\ $E^c $ &$ e^c $ &$\tilde e^c$ &$(1,1,1)$ \\ $Q $ &$ q $ &$\tilde q$ &$(3,2,1/6)$ \\ $U^c $ &$ u^c $ &$\tilde u^c$ &$(3^*,1,-2/3)$ \\ $D^c $ &$ d^c $ &$\tilde d^c$ &$(3^*,1,1/3)$ \\ $H_1 $ &$ \tilde h_1$ &$ h_1$ &$(1,2,-1/2)$ \\ $H_2 $ &$ \tilde h_2$ &$ h_2$ &$(1,2,1/2)$ \\ &&&\\ \hline \end{tabular} \end{center} \caption{Chiral superfields and corresponding component fields.}\label{tab:notations} \end{table} According to the usual convention we denote the supersymmetric particles by a tilde. Notice that supersymmetry implies that two Higgs doublets, $h_1$ and $h_2,$ are present. Schemes of supersymmetry breaking allow us to generate superpartner mass patterns consistent with the present non-observation of superparticles. Theoretical arguments require these masses to be below the TeV range, making the supersymmetric extension of the Standard Model interesting for present and future searches. Each interaction of the Standard Model can be generalized in a supersymmetry-invariant form. For instance, the down Yukawa interactions read: \begin{equation} {\cal L}_{\stackrel{\rm\scriptscriptstyle Yukawa}{\rm\scriptscriptstyle down}}= -{ Y_D^*}_{ij} \ \left[\ h_1\ q_i\ d^c_j +\tilde h_1\ q_i\ \tilde d^c_j + \tilde h_1\ \tilde q_i\ d^c_j \right]\ + {\rm h.c.} \label{eq:DownYukawa} \end{equation} ($Y_D$ are the down Yukawa couplings; $i,j$ are family indices; $SU(2)_L$ doublets are contracted with $i\tau_2$). Due to supersymmetry, the SM interaction induces similar interactions between pairs of superpartners. It is also easy to write interactions that have no SM analogue. This happens when superpartners behave as a dilepton, $l_i l_j \tilde e^c_k;$ or as leptoquarks, $l_i \tilde q_j d^c_k$ and $l_i q_j \tilde d^c_k;$ or finally as diquarks, $\tilde d^c_i d^c_j u^c_k$ and $d^c_i d^c_j \tilde u^c_k$ (the supersymmetric form can be easily inferred in analogy with eq.\ \refe{eq:DownYukawa}). We conclude that SM gauge invariance does not assure lepton and/or baryon number conservation in the supersymmetric context. Notice that these interactions are not necessarily linked to the supersymmetry breaking mechanism, or to the structure of the Higgs sector, about which we do not yet have direct experimental information, but just depend on the spectrum of the model and on SM gauge invariance. Some or all of the interactions above can be forbidden by adding more symmetry to the model. Such a symmetry can be local or global, continuous or discrete. A widely used possibility is given by the $Z_2$ transformation under which only the superpartners change sign: the $R$-parity.
Since $R$-parity forbids the terms introduced above, this assumption amounts to baryon {\em and} lepton number conservation\footnote{Even if interactions which violate $R$-parity are forbidden, there is the interesting possibility of spontaneous breaking of the lepton number, due to the vacuum expectation value (VEV) of the sneutrino \cite{spont}. Nonzero VEVs of other scalars would instead break color and/or electric charge.}. One can even speculate about the origin of such a symmetry. But there are still no experimental clues as to which scheme Nature has chosen. Therefore, a phenomenological attitude toward the supersymmetric paradigm requires studying the consequences of relaxing the assumption of $R$-parity conservation. The plan of the exposition is as follows: First, we define the $R$-parity breaking interactions, and study their possible manifestations and some experimental bounds. We pay particular attention to the rare and exotic low-energy processes. Then, in an effort to obtain finer control on these interactions, we consider them in the context of Grand Unification (GU). We discuss an interesting scenario, in which sizable $R$-parity breaking interactions can be reconciled with the Grand Unification program. \section{$R$-parity breaking} In this section we define the $R$-parity breaking couplings, and study possible manifestations of their presence. We single out important processes and give a sense of the existing bounds on the couplings. The risk that $R$-parity breaking couplings render ordinary matter unstable is analyzed. \subsection{Definitions and fundamental facts.} The superpotential $W$ gives us a compact formalism to describe the supersymmetric interactions of matter fields: By definition it is an analytic function of the left chiral superfields present in the theory. Let us decompose $W=W_R+W_{R\!\!\!\!\!\:/}.$ Considering the renormalizable interactions, the $R$-parity conserving part reads: \begin{equation} W_R=\frac{m_{e_i}}{v_1} H_1 L_i E^c_i + \frac{m_{d_i}}{v_1} H_1 Q_j V_{ji}^* D^c_i - \frac{m_{u_i}}{v_2} H_2 Q_i U^c_i + \mu H_1 H_2 \label{eq:R-parity-conserving} \end{equation} whereas the $R$-parity violating part reads: \begin{equation} W_{R\!\!\!\!\!\:/}= \lambda_{ijk} L_i L_j E^c_k +\lambda_{ijk}' L_i Q_j D^c_k +\lambda_{ijk}'' D^c_i D^c_j U^c_k +\mu_i L_i H_2 \label{eq:R-parity-violating} \end{equation} The superpotential $W$ is written in terms of superfields whose fermionic components are mass eigenstates, so that the Cabibbo-Kobayashi-Maskawa matrix $V_{ij}$ appears in \refe{eq:R-parity-conserving} and \refe{eq:R-parity-violating} explicitly as well as in $Q_i=(U_i, V_{ij} D_j)$; $m_{e_i},$ $m_{d_i},$ $m_{u_i}$ are the fermion masses. Finally, $v_{1,2}$ are the vacuum expectation values of the scalar components of the superfields $H^0_{1,2}.$ Notice that with a proper redefinition, $\mu H_1 + \mu_i L_i \to \mu H_1,$ the last term can be eliminated from the superpotential. Therefore we will assume $\mu_i=0$ in the following. In passing, we remark that the presence of all but the third term in \refe{eq:R-parity-violating} is due to the fact that the Higgs $H_1$ and the three lepton doublets $L_i$ are identical from the point of view of gauge symmetry. $W_R$ is by definition the superpotential of the Minimal Supersymmetric extension of the SM (MSSM).
Notice that, in full analogy with the SM case, \refe{eq:R-parity-conserving} conserves the four $U(1)$ numbers related to $B$ (baryon charge), ${L_1},$ ${L_2}$ and ${L_3}$ (lepton charges), where the charges are defined on the superfields\footnote{This definition is forced by the gaugino interactions. Notice that the scalar masses can provide us with sources of violation of hadronic and leptonic flavours, if they are not diagonal in the same basis in which the fermion masses are diagonal.}. Let us therefore analyze the interactions in \refe{eq:R-parity-violating} from the point of view of the global symmetries. They violate either the total lepton number ($\lambda,\lambda',\mu_i$) or the baryon number ($\lambda''$). One can further divide the lepton violating terms in two classes: The terms in the first class, \begin{equation} L_i H_2,\ \ \ L_i Q_k D^c_l,\ \ \ L_i L_j E^c_j,\ \ \ \ \ \ \ i \neq j \label{eq:class1} \end{equation} carry charges ${L_i}$, whereas those in the second class, \begin{equation} L_1 L_2 E^c_3,\ \ \ L_3 L_1 E^c_2,\ \ \ L_2 L_3 E^c_1 \label{eq:class2} \end{equation} carry charges ${\cal L}_3=L_1+L_2-L_3,$ ${\cal L}_2=L_1-L_2+L_3$ and ${\cal L}_1=-L_1+L_2+L_3$ respectively. This classification has some importance for lepton violating phenomena. For instance, the neutrino mixing term $\nu_1\nu_2$ cannot be generated by the operators \refe{eq:class2} alone, since its charge is $1/2\ ({\cal L}_1+{\cal L}_2) + {\cal L}_3$ (it would require ``half'' vertices; similarly for the other mixings). For the same reason the terms \refe{eq:class1} cannot be induced by those of \refe{eq:class2} alone. A few remarks, in order to give a perspective on the present study.\vskip-\parskip \noindent (1) It is of course possible to ascribe $B$- and $L$-violating phenomena to $R$-parity conserving theories, for example in the case of the supersymmetric $SU(5)$ model; but, due to the different underlying mechanism, the resulting phenomenology is typically different.\vskip-\parskip \noindent (2) One may consider the situation in which $R$-parity is a symmetry of the tree level lagrangian, broken by effective terms. It has been remarked \cite{Hallreview} that if we want to reconcile the theory with a global invariance, we have to consider operators at least of dimension 7, for example $(L H_2)^3$ (which is invariant under a leptonic $Z_3$). The previous argument does not however disfavor this scenario, since what matters is the symmetry of the underlying fundamental theory \cite{K}.\vskip-\parskip \noindent (3) There is the possibility that the supersymmetric interactions are $R$-parity symmetric, whereas the interactions which break supersymmetry are not. To our knowledge, this possibility has not attracted a lot of attention. A case of special interest is studied in \cite{IHLee}. \subsection{Exotic interactions of ordinary matter} Let us consider the effective terms that the SM inherits from the $R$-parity breaking interactions when the slepton and squark fields are integrated away\footnote{There is an important consequence of $R$-parity breaking interactions regarding the supersymmetric particles: the lightest supersymmetric particle becomes unstable. See \cite{Dreiner} for searches at colliders.}. The topologies of Feynman diagrams that it is necessary to consider are listed in figure \refe{fig:topologies}. The operators of greatest interest are clearly those which violate SM conservation laws, the lepton and/or the baryon numbers, or give flavor-changing neutral currents.
In the case of the two fermion operators there are the Majorana neutrino masses $\nu\nu$ (fig.\ \ref{fig:topologies}a); for the six fermions operators, either those of the form $e u \bar d e u \bar d$ which trigger neutrinoless double beta decay (fig. \ref{fig:topologies}c, \ref{fig:topologies}d), or those of the form $u d d u d d,$ which give for instance $n$-$\bar n$ oscillations (fig.\ \ref{fig:topologies}c,\ref{fig:topologies}e) \footnote{For further informations see references \cite{neutrino-masses}, \cite{0nu-bb} and \cite{n-nbar}.}. We recall that the first two types of operators arise in pure lepton number violating framework, whereas the last just requires violation of baryon number; notice also that their flavor structure can be {\em a priori} generic. Now let us focus the attention on the four fermions operators, arising by diagrams of the topology of fig.\ \ref{fig:topologies}b. They are listed in table 2, together with the couplings involved, the particle exchanged and a typical process triggered. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|} \hline \footnotesize Effective & \footnotesize Particle & \footnotesize Couplings & \footnotesize Example \\ \footnotesize operator & \footnotesize exchanged & \footnotesize involved & \footnotesize process \\ \hline &&&\\ $e e \bar e\bar e $ & $\tilde \nu $ & $\lambda^2 $ & $\mu^- \to e^- e^- e^+ $ \\ $e \nu \bar e \nu $ & $\tilde e,\tilde e^c,\widetilde{ ee^c }$ & $\lambda^2 $ & $ \mu^- \to e^- \nu_e \bar \nu_\mu $ \\ $d d \bar d \bar d $ & $\tilde \nu,\tilde u^c $ &${\lambda'}^2, {\lambda''}^2$ & $ K^0-\bar K^0 $ oscill. \\ $u d \bar u \bar d $ & $\tilde e,\tilde d^c $ &${\lambda'}^2, {\lambda''}^2$ & $ B \to $ non charmed \\ $u e \bar u \bar e $ & $\tilde d^c $ &${\lambda'}^2$ & $ D^+ \to \pi^+ \mu e $ \\ $d \nu \bar d \nu $ & $\tilde d,\tilde d^c,\widetilde{ dd^c }$ & ${\lambda'}^2$ & $ B \to K \nu \bar \nu $ \\ $u e \bar d \nu $ & $\tilde e,\widetilde{ ee^c} ,\tilde d,\widetilde{ dd^c } $ & ${\lambda'}^2,\lambda{\lambda'}^2$ & $ B \to K l \bar \nu $ \\ $d e \bar d \bar e $ & $\tilde\nu,\tilde u $ & ${\lambda'}^2,\lambda{\lambda'}^2$ & $ K_L \to \mu e $ \\ $u u d e $ & $\tilde d^c $ & $\lambda' \lambda''$ & $ p \to \pi^0 e^+ $ \\ $u d d \nu $ & $\tilde d^c,\widetilde{ dd^c }$ & $\lambda' \lambda'' $ & $ p \to K^+\nu $ \\ $d d d \bar e $ & $\widetilde{ uu^c }$ & $\lambda' \lambda''$ & $ n \to K^+ e^- $ \\ &&&\\ \hline \end{tabular} \end{center} \caption{Four fermions operators resulting from $R$-parity breaking interactions. In first column $\nu$ denotes either the neutrino {\em or} the antineutrino field. The propagators like $\widetilde{ ee^c}$ in second column arise from the mixing of the scalar states $\tilde e$ and $\tilde e^c$ after $SU(2)_L$ breaking.} \end{table} The most important operators are clearly those of last three rows of table 2, since they lead to instability of nucleons. As it is well known, they arise due to violations of {\em both} the baryon and the lepton number. The fact that there are no four-fermion operators which violates only the baryon number is a general consequence of $SU(3)_c\times U(1)_{e.m.}$ symmetry. 
Violations of the lepton number are possible, but only in the interactions involving neutrinos: As an example, the exchange of $\widetilde {\tau\tau^c}$ induces the decay $\mu^-\to e^- \nu_e \bar \nu_\mu$ due to the $\lambda_{123}$ and $\lambda_{231}$ couplings\footnote{ Unfortunately, existing limits on the single couplings render this process not experimentally interesting in the model under consideration---I thank M.\ Cooper for a clarification about this point.}. Table 2 illustrates the need to proceed carefully in introducing the $R$-parity violating couplings, since all kinds of non-standard operators can be induced. According to the previous observation, a safe possibility of introducing the $R$-parity violating terms is to forbid the $B$-violating terms, but to retain the lepton violating ones (or vice versa). A more daring possibility is to have both types of operators at a sufficiently suppressed level. We discuss these possibilities in the following, with particular attention to the manifestations in the low-energy physics. \subsection{Lepton-violating scenario}\label{sub:lviol} Suppose for the moment that $B$-violating terms are absent. Consider the flavor structure of the $R$-parity breaking couplings. Are there couplings unconstrained by rare (or forbidden) processes? A partial answer is provided by table 3. \begin{table}[htb] \begin{center} \begin{tabular}{|c||c|c|c||} \cline{2-4} \multicolumn{1}{c||}{ } & $K^0-\bar K^0$ & $B^0-\bar B^0$ & $K_L\to \mu e$ \\ \hline \hline 111 & & & x \\ \hline 112 & x & & x \\ \hline 121 & x & & x \\ \hline 211 & & & x \\ \hline 122 & & & x \\ \hline 212 & x & & x \\ \hline 221 & x & & x \\ \hline 222 & & & x \\ \hline 113 & & x & \\ \hline 131 & & x & x \\ \hline 311 & & & \\ \hline 123 & & x & \\ \hline 132 & & x & x \\ \hline 213 & & x & \\ \hline 231 & & x & x \\ \hline 312 & x & & x \\ \hline 321 & x & & x \\ \hline 223 & & x & \\ \hline 232 & & x & x \\ \hline 322 & & & \\ \hline 133 & & & \\ \hline 313 & & x & \\ \hline 331 & & x & \\ \hline 233 & & & \\ \hline 323 & & x & \\ \hline 332 & & x & \\ \hline 333 & & & \\ \hline \hline \end{tabular} \end{center} \caption{Rare processes in which the various $\lambda_{ijk}'$ couplings are involved. A sneutrino or an up squark is exchanged.} \end{table} It shows whether three ``delicate'' observables can be affected or not by the $\lambda'$-type couplings (the precise meaning of the table is: whether or not the coupling enters a tree level diagram relevant for the processes). We deduce from table 3 that the couplings $\lambda'_{3jj}$ and $\lambda'_{j33}$ do not contribute to the processes. This means that large values of these couplings are not incompatible with present experimental information. As a common feature, these couplings do not violate hadronic flavours. We can push the above argument somewhat further. Let us suppose that one $\lambda'$ coupling, which is not in the class above, is large. Table 3 tells us that, in this case, some other $R$-parity breaking coupling is constrained by present experimental bounds. To be quantitative, the observation of any coupling $\lambda'_{\rm obs}$ at the level of $10^{-2}$ would imply a strong suppression ($\ \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$}\ 10^{-4} \times \lambda'_{\rm obs}$) of another $\lambda'$ coupling. In the absence of a theoretical explanation, this scenario is questionable on the basis of naturalness.
\subsection{Lepton- and baryon-violating scenario}\label{sub:lbviol} The simultaneous presence of the couplings $\lambda''$ ($B$-violating) and $\lambda'$ ($L$-violating) leads to the possibility of squark-mediated proton decay. This implies very strong bounds on the couplings which allow the decay at tree level: \begin{equation} |\lambda'\cdot \lambda''| \ \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$}\ 10^{-24} \label{proton-decay-bound-on-lambda} \end{equation} for squark masses around 1 TeV \cite{HK}. The bound does not affect certain couplings involving heavy generations. But, since the bounds are so stringent, it is important to check the one loop structure of the theory. It is possible to prove that, for any choice of the pair of couplings $\lambda'$ and $\lambda'',$ there is always at least one diagram relevant for the decay at one loop level \cite{Upper}. This happens due to the flavor-changing interactions, which are present even in the absence of $R$-parity breaking, namely: the interactions of the quarks with the $W$ boson and the charged Higgs, and their supersymmetric counterparts. The least suppressed pair of couplings is still subject to a (conservative) bound on its product, \begin{equation} |\lambda'\cdot \lambda''| \ \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$}\ 10^{-9}, \end{equation} according to \cite{Upper}\footnote{A different conclusion has been reached by \cite{CRS}.}. The simultaneous presence of suitably chosen couplings $\lambda$ and $\lambda''$ seems instead to be less dangerous for proton decay \cite{CRS}. Notice for instance that, due to the symmetry discussed above, the presence of operators of the class \refe{eq:class2} requires that there are three different leptons in the final states, calling for dimension 9 effective operators for nucleon decay. Coming to phenomenology, we remark that the squark-mediated nucleon decay may have a very neat experimental signature: the presence of the $(B+L)$-conserving channels\footnote{Another possible manifestation of these dynamics would be the presence of unexpected branching-ratios for the $(B-L)$-conserving channels of nucleon decay.} \cite{BplusL}. These channels are related to effective operators at least of dimension 7 \cite{dim7}. This calls for sources of $SU(2)_L$ breaking, which are provided by left-right squark mass mixing: for the top quark we have $m_{\tilde t \tilde t^c}^2 \sim m_t\ \tilde m$ where $m_t$ is the top mass, which is not expected to be very different from the typical supersymmetric mass $\tilde{m}.$ Regardless of the Lorentz structure, there is only one effective four-field operator at the quark level which mediates $(B+L)$ conserving nucleon decays: $d d s \bar l,$ where $l=e,\mu.$ It gives rise to the $n \to K^+ l^-$ and $p \to K^+ l^- \pi^+$ decay channels. The first decay, which proceeds with a faster rate, induces the decay of neutrons in otherwise stable nuclei. This provides us with a quite clear signal in water \v{C}erenkov detectors: \begin{equation} {}^{16}O\to {}^{15}O + \gamma(6.2\ {\rm MeV}) + \mu + l, \label{exp-sign} \end{equation} where $l$ is monochromatic, $\mu$ results from kaon decay and $\gamma$ from the transition of the excited nucleus to the ground state (an unobservable neutrino from $K$ decay is also present). A final remark. Even if one is allowed to speculate on the possibility of very small couplings, it would be much nicer to have a theoretical guideline to explain the size of the couplings.
In the context of horizontal symmetry \cite{HK,HOR}, the smallness of the couplings can be related to suitably large horizontal charges. In our opinion however a defect of these approaches is that they still suffer of considerable latitude in the specification of the models. \section{Supersymmetric Grand Unification and $R$-parity breaking} \label{sec:susygut} In previous sections we assumed that:\vskip-\parskip \noindent $(i)$ The Standard Model must be embedded into a supersymmetric theory;\vskip-\parskip \noindent $(ii)$ all the interactions compatible with the gauge symmetry should be a priori present. \vskip-\parskip \noindent Unfortunately, at present, hypothesis $(i)$ lacks of experimental support. This requires to convey special attentions to the theoretical motivations for supersymmetry. Among them, it is prominent the possibility to implement in the supersymmetric context the Grand Unification program (in its minimal form). Therefore we will further specify the theoretical context, and assume that: \vskip-\parskip \noindent $(iii)$ The interactions of the supersymmetric Standard Model are the low energy manifestations of a $SU(5)$ invariant dynamics.\vskip-\parskip \noindent This hypothesis of course implies a specification of the $R$-parity breaking couplings. In the $SU(5)$ model one can introduce the following $R$-parity violating interactions \cite{Ramond} \begin{equation} \Lambda_{ijk} \bar 5_i \bar 5_j 10_{k} + \bar 5_i(M_i + h_i \Phi) H, \label{eq:lambda-su5} \label{eq:R-viol-su5} \end{equation} where $i,j,k=1,2,3$ are generation indices, $\Lambda_{ijk}$ are the coupling constants and $\bar 5_i,$ $10_i$ are the matter superfields which can be written (restating the gauge indices) as: \begin{equation} \bar 5^a = \left( \begin{array}{c} D^{c\alpha}\\ \epsilon^{AB} L_B \end{array} \right) \ \ \ \ \ \ \ \ \ \ 10_{ab} = \left( \begin{array}{cc} \epsilon_{\alpha\beta\gamma}U^{c\gamma}& -Q_{B\alpha}\\ Q_{A\beta}& \epsilon_{AB} E^c \end{array} \right). \label{eq:schematic-notation} \end{equation} where $\epsilon^{12}=\epsilon_{21}=1.$ $M_i$ are mass parameters, $h_i$ are couplings, $\Phi$ and $H$ are the 24-plet and 5-plet of Higgs multiplets. Starting from \refe{eq:R-viol-su5}, we will study in the following two possible scenarios for the $R$-parity breaking couplings. \subsection{A model with small $R$-parity breaking couplings}\label{sub:lambdamodel} We first consider the effects of $\Lambda$ couplings, in a model in which the matter-Higgs mixing (the second term in \refe{eq:R-viol-su5}) is negligible. It is convenient to define $\Lambda_{ijk}$ in the basis where $SU(2)_L$-singlets $u^c$ and $d^c$ coincide with mass eigenstates. This always can be done since $u^c$ and $d^c$ enter different $SU(5)$-multiplets. Note that due to the antisymmetry of 10-plets the interactions \refe{eq:R-viol-su5} are antisymmetric in generation indices: $\Lambda_{ijk}=-\Lambda_{jik}.$ Substituting the multiplets \refe{eq:schematic-notation} in \refe{eq:R-viol-su5} and performing the redefinitions of the couplings which bring the $R$-parity conserving part of the superpotential with light fields in the form \refe{eq:R-parity-conserving}, we find the relations between original $\lambda_{ijk}$ and $\Lambda_{ijk}$ couplings at the GU scale: \begin{equation} \begin{array}{l} \lambda_{ijk}=-\Lambda_{i'j'k'}\ {\cal U}_{i'i}\ {\cal U}_{j'j}\ {\cal V}_{k'k}\\ \lambda'_{jki}=2\Lambda_{ij'k'}\ {\cal U}_{j'j}\ {\cal W}_{k'k} \\ \lambda''_{ijk}=\Lambda_{ijk}. 
\label{eq:lambda-unified} \end{array} \end{equation} where ${\cal U},{\cal W},{\cal V}$ are unitary matrices. The appearance of these matrices can be explained considering that our choice of flavor basis does not fix the flavor structure of the superfield $L$ (respectively $E^c$ and $Q$) which appears together with $D^c$ ($U^c$) in the $SU(5)$ $\bar 5$-plet (10-plet). They can be calculated fixing the mechanism of mass generation: which Higgs representation are present, which non-renormalizable operators, {\em etc.}. We will consider the case: \begin{equation} \begin{array}{c} {\cal U}={\cal W}=1,\\ {\cal V}=V \end{array} \label{eq:minimal} \end{equation} which corresponds to the assumption that only Higgs 5-plets contribute to the fermion mass matrices. As a consequence of quark and lepton unification in $SU(5),$ all types of $R$-parity violating couplings appear simultaneously. Moreover, different couplings $\lambda,\lambda'$ and $\lambda''$ are determined by unique GU coupling $\Lambda.$ As follows from \refe{eq:lambda-unified} and \refe{eq:minimal}, these couplings basically coincide at GU scale: \begin{equation} -\lambda_{ijl} V^{-1}_{lk}=\frac{1}{2} \lambda'_{jki}=\lambda''_{ijk}. \label{eq:lambda-unified2} \end{equation} Notice that Grand Unification implies that the $L$-violating couplings $\lambda'_{ijk}$ should be antisymmetric in the exchange of the first and third indices: $\lambda'_{ijk}=-\lambda'_{kji},$ similarly to other couplings; in the non-unified version \refe{eq:R-parity-violating} these couplings can have also a symmetric part. The considerations above apply to the low energy theory up to minor modifications. A not completely negligible effect is the evolution of the couplings due to gauge renormalization. It leads to modification of GU relations \refe{eq:lambda-unified2} at the electroweak scale: \begin{equation} \begin{array}{l} \lambda_{ijk}=-1.5\ \Lambda_{ijl} V_{lk}\\ \lambda'_{jki}=2\ (3.4 \pm 0.3)\ \Lambda_{ijk}\\ \lambda''_{ijk}=(4.4 \pm 0.4)\ \Lambda_{ijk} \label{eq:lambda-running} \end{array} \end{equation} (the errors correspond to the uncertainty in strong coupling constant:\ $\alpha_s(M_Z)=0.12\pm 0.01$). The inclusion of other uncertainties related {\it e.g.}\ to threshold SUSY and GU corrections may require the doubling of the errors quoted. The renormalization effects due to third family Yukawa couplings do not drastically change the relations \refe{eq:lambda-running}. With previous remarks in mind, it is easy to understand that the couplings are subject to quite strong constraints from the proton decay bounds in the case under consideration. To be concrete, let us consider the bound on the coupling $\Lambda_{233}$ (which may be argued to be the dominating one). The proton decay, induced at the one loop level, implies \cite{R-GUT}: \begin{equation} \Lambda \ \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$<$}\ 3\cdot 10^{-9} \end{equation} This can be thought as a conservative bound in this kind of GU models for the $R$-parity breaking couplings. We conclude that, whereas present model easily encompasses nucleon instability phenomena (in particular decays which conserve $B+L,$ or decays with exotic branching ratios), it cannot account for large $R$-parity breaking couplings. \subsection{A model with large $R$-parity breaking couplings}\label{sub:d-tmodel} Let us consider a model where the matter-Higgs mixing is the only source of $R$-parity violation. 
Suggesting that third generation coupling dominates, we can write the appropriate terms of the superpotential in the following way \begin{equation} \bar 5_3 \hat m H + \bar H \hat M H + y_{i}\ \bar 5_i 10_i \bar H , \label{sup} \end{equation} where $\bar 5_i$ and $10_i$ are defined in the diagonal basis for down quark Yukawa couplings $y_i$, $i=d,s,b$ so that $d^c_i$ and $d_i$ coincide, up to corrections $M_W/M_{\rm \scriptscriptstyle GU},$ with mass eigenstates. The mass matrices of \refe{sup} can be written in the doublet-triplet form as: \begin{equation} \hat m = {\rm diag}(m_{\it \scriptscriptstyle tripl}, m_{\it \scriptscriptstyle doubl}), \ \ \ \hat M = {\rm diag}(M_{\it \scriptscriptstyle tripl}, M_{\it \scriptscriptstyle doubl}), \label{matr} \end{equation} where $M_{\it \scriptscriptstyle tripl} \sim M_{\rm \scriptscriptstyle GU}$ and $M_{\it \scriptscriptstyle doubl}$, $m_{\it \scriptscriptstyle doubl}$ and $m_{\it \scriptscriptstyle tripl}$ are at the electroweak scale (large value of $m_{\it \scriptscriptstyle tripl}$ would result in the fast proton decay). The explanation of this mass pattern is clearly connected to the explanation of the doublet-triplet (DT) splitting\footnote{We will not specify any underlying mechanism for DT splitting, but simply observe that it is technically possible to implement it in the present context, carefully choosing $M_i,$ $h_i$ and $\langle \Phi\rangle $ in \refe{eq:R-viol-su5}.}. The first term in \refe{sup} can be eliminated by rotations of the doublet and the triplet components of the 5-plets: $\bar 5_3=(B^c,L_3)$ and $\bar H=(\bar {\cal T}, H_1)$. For triplet components we redefine: \begin{equation} \begin{array}{rcl} c_{\it \scriptscriptstyle tripl} \bar {\cal T} + s_{\it \scriptscriptstyle tripl} B^c & \to & \bar {\cal T} \\ c_{\it \scriptscriptstyle tripl} B^c - s_{\it \scriptscriptstyle tripl} \bar {\cal T} & \to & {B^c} , \end{array} \label{trip} \end{equation} so that ${B^c}$ and $\bar {\cal T}$ are the mass states, $c_{\it \scriptscriptstyle tripl} \equiv \cos \theta_{\it \scriptscriptstyle tripl}$, $s_{\it \scriptscriptstyle tripl} \equiv \sin \theta_{\it \scriptscriptstyle tripl},$ and \begin{equation} \frac{s_{\it \scriptscriptstyle tripl}}{c_{\it \scriptscriptstyle tripl}} = \frac {m_{\it \scriptscriptstyle tripl}}{M_{\it \scriptscriptstyle tripl}}. \label{trip-mix} \end{equation} For doublet components: \begin{equation} \begin{array}{rcl} c_{\it \scriptscriptstyle doubl} H_1 + s_{\it \scriptscriptstyle doubl} L_3 & \to & H_1 \\ c_{\it \scriptscriptstyle doubl} L_3 - s_{\it \scriptscriptstyle doubl} H_1 & \to & L_3 \end{array} \label{doub} \end{equation} and \begin{equation} \frac{s_{\it \scriptscriptstyle doubl}}{c_{\it \scriptscriptstyle doubl}} = \frac {m_{\it \scriptscriptstyle doubl}}{M_{\it \scriptscriptstyle doubl}}. \label{doub-mix} \end{equation} Since $m_{\it \scriptscriptstyle doubl}, m_{\it \scriptscriptstyle tripl}, M_{\it \scriptscriptstyle doubl}\sim M_W$ one gets from \refe{doub-mix} and \refe{trip-mix} that $s_{\it \scriptscriptstyle tripl}$ is strongly suppressed, $s_{\it \scriptscriptstyle tripl} \sim M_W/ M_{\rm \scriptscriptstyle GU} < 10^{-14}$, whereas $s_{\it \scriptscriptstyle doubl}$ can be of the order 1. Substituting the expressions \refe{trip} and \refe{doub} into \refe{sup} we obtain the effective $R$-parity violating couplings \refe{eq:R-parity-violating}. In particular the third generation Yukawa coupling gives \begin{equation} \lambda_{333}'{}^{\!\!\!\!\!\! 
\rm \ss eff} L_3 B^c Q_3', \label{lambda333} \end{equation} where \begin{equation} \lambda_{333}'{}^{\!\!\!\!\!\! \rm \ss eff} = s_{\it \scriptscriptstyle doubl}\cdot y_b, \end{equation} and $Q_3' \equiv V_{ib}^* Q_i$. Baryon violating interactions as well as pure leptonic terms are absent due to the antisymmetry. The Yukawa coupling of the second generation leads to \begin{equation} y_s\ [ s_{\it \scriptscriptstyle tripl} B^c S^c U^c_i + s_{\it \scriptscriptstyle doubl} L_3 S^c Q_i + s_{\it \scriptscriptstyle doubl} L_2 L_3 E^c_i ] \label{sec} \end{equation} (The first generation Yukawa coupling gives similar terms with the substitution $y_s V_{is}\to y_d V_{id},$ $S \to D$, $L_2 \to L_1$). The leading contribution to the proton decay is induced by $L$-violating interaction \refe{lambda333} and $B$-violating interaction \refe{sec}. The $\tilde{b}^c$ exchange dressed by $h^+$, $\tilde{h}^+$... results in the amplitude for proton decay \begin{equation} A \propto \lambda_{333}'{}^{\!\!\!\!\!\! \rm \ss eff} \cdot y_s\ s_{\it \scriptscriptstyle tripl}\cdot \xi = y_s y_b\ s_{\it \scriptscriptstyle doubl} s_{\it \scriptscriptstyle tripl}\ \xi , \end{equation} where $\xi$ is the loop suppression factor. Substituting values of parameters, we find that even for large $\tan\beta$ ($y_b \sim 1$) this amplitude is small enough to allow for $s_{\it \scriptscriptstyle doubl}$, and consequently, $\lambda_{333}'{}^{\!\!\!\!\!\! \rm \ss eff}$ to be of the order 1. All other diagrams give smaller contributions. (Note that in the considered example all the $B$-violating interactions contain $b^c$ quark, so that even lowest family couplings need a loop ``dressing"). \subsection{Neutrino masses and large $R$-parity breaking couplings}\label{sub:neud-t} There is another consequence of the matter-Higgs mixing \cite{R-GUT,banks,nir,H}: explicit $R$-parity violating terms in \refe{sup} induces in general VEV of sneutrino. Indeed, the relevant terms in the potential at the electroweak scale are: \begin{equation} \begin{array}{rl} V \ni & (m_{L_3}^2+\delta m^2)\ |h_1|^2 + m_{L_3}^2\ |\tilde l_3|^2\ - \\[1ex] \nonumber &[B\cdot M_{\it \scriptscriptstyle doubl}\ h_1 h_2 + (B+\delta B)\cdot m_{\it \scriptscriptstyle doubl}\ \tilde l_3 h_2 + {\rm h.c.}]. \label{scalar-potential} \end{array} \end{equation} To proceed in the discussion, we assume a definite scenario for supersymmetry breaking: the low-energy supergravity model. We suggest that soft breaking terms are universal at the scale $M_{\rm \scriptscriptstyle GU}$ suggested by gauge coupling unification. Then the parameters $\delta m^2$ and $\delta B$ \refe{scalar-potential} describe the renormalization effect due to the bottom Yukawa coupling from $M_X$ to the electroweak scale. 
The corresponding renormalization group equations are: \begin{equation} \begin{array}{cl}\displaystyle \frac{d}{dt} \delta B =& 3\ y_b^2\ A_b, \\[1.5ex] \nonumber \displaystyle \frac{d}{dt} \delta m^2 =& 3\ y_b^2\ (m_{Q_3}^2+m_{D^c_3}^2+m_{H_1}^2+A_b^2), \label{ren-group-for-breakers} \end{array} \end{equation} where $t=1/(4\pi)^2 \times \log(M_{\rm \scriptscriptstyle GU}^2/Q^2).$ The rotation \refe{doub} which eliminates matter-Higgs mixing term in the superpotential generates mixing terms for the sleptons: \begin{equation} V_{\rm \scriptscriptstyle L \!\!\!\!\!\;/} \approx \theta_{\it \scriptscriptstyle doubl} \times \left[ \delta m^2\ h_1^* + \delta B\cdot \mu\ h_2 \right] \tilde l_3 + {\rm h.c.} \label{lepton-violating-part} \end{equation} (for small $\theta_{\it \scriptscriptstyle doubl}$). After electroweak symmetry breaking these mixing terms, together with soft symmetry breaking masses, induce a VEV of tau sneutrino of the order: \begin{equation} \langle \tilde \nu_3 \rangle \sim v\ \theta_{\it \scriptscriptstyle doubl}\times \left(\frac{\delta m^2}{m_{L_3}^2}\ \cos\beta + \frac{\delta B\cdot \mu}{\; m_{L_3}^2} \sin\beta \right). \label{tau-sneutrino-vev} \end{equation} The factor in brackets can be estimated as $y_b^2\ (3\ \cos\beta + 0.5\ \mu/m_{L_3}\;\sin\beta),$ where the figures quoted arise from approximate integration of renormalization group equations \refe{ren-group-for-breakers}. Consequently the tau sneutrino VEV is\footnote{ Technically it is possible to implement a cancellation between the two terms in \refe{tau-sneutrino-vev} (see \cite{IHLee} for a phenomenological study of such a possibility). However we see no natural reason for this to happen in the supergravity context.} $\langle \tilde \nu_3 \rangle\sim v\ \theta_{\it \scriptscriptstyle doubl}\ y_b^2$. Due to this VEV the tau neutrino mixes with the zino, and consequently the mass of tau neutrino is generated via the see-saw mechanism: \begin{equation} \frac{g_1^2+ g_2^2}{2}\ \frac{\langle \tilde \nu_3 \rangle^2}{M_{\tilde Z}} \label{neutrino-mass} \end{equation} (see \cite{Hall-Suzuki,bhh}). In the model under consideration this contribution to tau neutrino mass is typically larger than the one produced by the loop-diagram stipulated by the interaction \refe{lambda333}. We can derive from \refe{neutrino-mass} the bound on $R$-parity violating couplings. Taking into account that $\lambda_{333}'{}^{\!\!\!\!\!\! \rm \ss eff}\sim \theta_{\it \scriptscriptstyle doubl}\ y_b,$ and $\langle \tilde \nu_3 \rangle \sim v\ \theta_{\it \scriptscriptstyle doubl}\ y_b^2$ we get the relation between $\lambda_{333}'{}^{\!\!\!\!\!\! \rm \ss eff}$ and neutrino mass \begin{equation} \lambda_{333}'{}^{\!\!\!\!\!\! \rm \ss eff}\sim 0.06\times \left[\frac{\theta_{\it \scriptscriptstyle doubl}}{0.1\ {\rm rad.}} \right]^{1/2} \!\!\!\times \left[\frac{m_{\nu_\tau}}{10\ {\rm MeV}} \right]^{1/4} \!\!\!\times \left[\frac{M_{\tilde Z}}{1\ {\rm TeV}} \right]^{1/4}. \end{equation} Therefore it is possible to obtain large $R$-parity violating couplings with tau neutrino masses close to the present experimental limit. For $m_{\nu_\tau}= {\cal O} (30$ eV), corresponding to the cosmological bound on stable $\nu_\tau$, the coupling $\lambda_{333}'{}^{\!\!\!\!\!\! \rm \ss eff}$ becomes of the order 0.002. For such values of $\lambda_{333}'{}^{\!\!\!\!\!\! 
\rm \ss eff}$ the detection of supersymmetric particle decays is still possible: the condition to be satisfied is in fact $\lambda_{\rm obs}'$ $\ \raisebox{-.4ex}{\rlap{$\sim$}} \raisebox{.4ex}{$>$}\ 2\cdot 10^{-5}$ $\sqrt{\gamma}$ $({\tilde m}/1\ {\rm TeV})^2$ $(150\ {\rm GeV}/ m_{\chi})^{5/2},$ where $\gamma$ is the Lorentz boost factor \cite{Dreiner}. \section{Discussion and conclusions} The $R$-parity breaking couplings offer great possibilities for phenomenological speculation, but, up to now, no effect that could be attributed to them has been found. This may be because the $R$-parity breaking couplings are small; in that case one could expect physical effects in rare or forbidden processes. On the other hand, on the basis of the observed phenomena alone, certain $R$-parity breaking couplings may be large. This unclear situation calls for further theoretical or experimental information. It is encouraging that rather clear patterns for $R$-parity breaking couplings emerge in the context of supersymmetric Grand Unification. Models in which both lepton- and baryon-violating couplings are small have been discussed. Large $R$-parity breaking couplings are present in another kind of model, based on the doublet-triplet splitting. In the context of the low-energy supergravity models for supersymmetry breaking, we pointed to an interesting signature of this second scenario: the correlation between the size of the $R$-parity breaking coupling and the mass of the tau neutrino. \section*{Acknowledgments} A large part of the material presented is based on my collaboration with A.Yu.\ Smirnov, whom I thank most warmly. I would also like to thank the Organizers, and in particular N.D.\ Tracas, for the pleasant and stimulating atmosphere they were able to create for the $5^{th}$ {\em Hellenic School and Workshops on Elementary Particle Physics} (I would say in accord with the spirit of \cite{ERWIN}). Finally I take the opportunity to acknowledge pleasant and useful conversations with M.\ Bastero-Gil, Z.\ Berezhiani, B.\ Brahmachari, H.\ Dreiner, G.\ Dvali, P.\ Fayet, G.\ Fiorentini, G.\ Leontaris, A.\ Melfo, N.\ Paver, E.\ Roulet, C.\ Savoy, G.\ Senjanovi\'c and J.\ Steinberger.
\section{Algorithm} \label{algorithm} We now operationalize the principles in the previous section, specifying (i) how we choose topics (Section~\ref{alg:topics}), (ii) how we estimate the baseline (Section~\ref{alg:entropy}), and (iii) how to efficiently optimize the robust objective \eqref{opt:marg} (Section~\ref{alg:opt}). \subsection{Identifying Topics} \label{alg:topics} The topic CVaR objective requires topic assignments $z$ for each sentence in order to define the uncertainty set $\mathcal{P}$. Since the topics determine the set of $p_{\text{x}}^{\text{test}}$ distribution for which the model performs well, we seek topics whose subpopulation shifts capture realistic potential test settings. We use latent Dirichlet allocation (LDA) \citep{blei03lda} to cluster the sentences into latent topics. LDA assigns each word in a sentence to a topic, and we assign each sentence to the topic with highest total posterior probability. \subsection{Estimating Baselined Losses} \label{alg:entropy} Recall that topic CVaR uses KL-divergence as the loss term (Eq.~\eqref{opt:drokl}), \begin{multline*} \dkl{\pwdist{z}}{p_\theta} := \E_{\pwdist{z}}[\log\pwprob{x}{z}] \\ - \E_{\pwdist{z}}[\logp_\theta(x)]. \end{multline*} While we can estimate the log loss term $ \E[\logp_\theta(x)]$ from samples, the entropy term $H(X \mid Z=z) := \E_{\pwdist{z}}[-\log\pwprob{x}{z}]$ is not something we can easily estimate. We thus propose to estimate the entropies $H(X\mid Z=z)$ by fitting a \emph{baseline model} $p_\beta$ for each topic, and computing $H_{\beta}(X\mid Z=z) := \E_{\pwdist{z}}[-\logp_\beta(x\mid z)]$.\footnote{$H_{\beta}$ yields accurate solutions to the topic CVaR problem as long as they capture the entropy up to a constant (i.e. $H_\beta(X \mid Z = z) \approx H(X \mid Z = z) + c$)} In practice, we use a bigram model, which was fast enough to scale and worked sufficiently well in experiments. \subsection{Online Optimization of topic CVaR} \label{alg:opt} No scalable, online algorithm exists for optimizing the topic CVaR objective. Many DRO problems admit efficient batch optimization procedures based on Lagrangian duality \cite{duchi2016}. However, this approach fails for topic CVaR, since the dual form requires exact computations rather than stochastic estimates of $\E_{\pwdist{z}}\left[- \logp_\theta(x)\right]$. Online algorithms for DRO exist \citep{namkoong2016stochastic}, but do not handle the nested maximization-expectation structure arising in topic CVaR (Eq.~\eqref{opt:marg}). Because of this, we develop an online optimization procedure for topic CVaR compatible with stochastic gradient descent methods. The topic CVaR problem is a two-player minimax game between the model parameter $\theta$ and the potential test distribution $p_{\text{z}}$. Intuitively, $p_{\text{z}}$ attempts to be the worst-case distribution and maximize the robust objective, while $\theta$ attempts to minimize the robust objective. The precise two-player minimax game is \begin{align} \label{opt:game} \inf_\theta \sup_{p_{\text{z}} \in \mathcal{P}_{\text{z}}^{\alpha}} \E_{z\simp_{\text{z}}}\left[L(z;\theta)\right], \end{align} where the expected loss for each $z$ (inner expectation) is $L(z;\theta):=\E_{x\sim\pwdist{z}}\left[\ell(x;\theta)\right]$. In the above two-player game, the game proceeds in multiple rounds $t=1,2,\dots$. At each round, the players select $p_{\text{z}}\iter{t}$ and $\theta\iter{t}$. 
It is standard to interleave parameter updates between the two players in minimax optimization, and we describe the precise update rules in subsequent paragraphs. To carry out these updates, we keep track of an empirical estimate of the probability $p_{\text{z}}^{\text{train}}(z)$ at each iteration $t$, which we refer to as $\hat{p}_{\text{z}}^{\text{train}}\iter{t}(z)$. We also keep track of the historical average of losses incurred for each topic so far, up to the current round $t$, which we call $\hat{L}\iter{t}(z;\theta\iter{1:t})$. Concretely, $\hat{L}\iter{t}(z;\theta\iter{1:t})$ is computed as an average of $\{\ell(x\iter{t^\prime};\theta\iter{t^\prime} ) : t^\prime\in[t], z\iter{t^\prime}=z\}$. At each iteration $t$, $p_{\text{z}}$ is updated by selecting an optimal value with respect to historical losses up to the current iteration, loosely inspired by the ``Be The Leader'' algorithm. This results in the following update rule for $p_{\text{z}}$, \begin{align} \label{btl} p_{\text{z}}\iter{t} = \argmax_{p_{\text{z}}\in\mathcal{P}_{\text{z}}^{\alpha}} \E_{z\simp_{\text{z}}}\left[\hat{L}\iter{t}(z;\theta\iter{1:t})\right]. \end{align} The above $\argmax$ can be computed efficiently by ordering topics in order of decreasing average loss, and assigning each topic either $\frac{\hat{p}_{\text{z}}^{\text{train}}(z)}{\alpha}$ or the probability left to be assigned, whichever is lower.\footnote{For example with $\alpha=0.2$, $\hat{L}\iter{t} = [40, 30, 60]$, and $\hat{p}_{\text{z}}^{\text{train}}= [0.2, 0.7, 0.1]$, then $p_{\text{z}}\iter{t}=[0.5, 0, 0.5]$.} We update $\theta$ with online gradient descent, \begin{align*} \label{ogd} \theta\iter{t} = \theta\iter{t-1} - \epsilon\frac{p_{\text{z}}\iter{t}(z\iter{t})}{\hat{p}_{\text{z}}^{\text{train}}\iter{t}(z\iter{t})}\nabla\ell(x\iter{t};\theta\iter{t-1}), \end{align*} where $\epsilon$ is the learning rate. To give intuition for the two updates, first note that $\frac{p_{\text{z}}\iter{t}(z)}{p_{\text{z}}^{\text{train}}\iter{t}(z)}=\frac{1}{\alpha}$ on approximately an $\alpha$ fraction of the data, and this ratio acts as an indicator function which determines whether an example is part of the worst-case set. If it is, we update the model; otherwise we ignore it. \section{Robust Language Modeling} \label{approach} We will begin by applying standard distributionally robust optimization approaches to the log loss (Section \ref{approach:joint}), and showing that this na\"ive approach suffers from two drawbacks: \begin{enumerate} \itemsep=0pt \item Existing DRO uncertainty sets $\mathcal{P}$ are too conservative. \item The log loss overemphasizes topics with inherently high entropy. \end{enumerate} These drawbacks will motivate our development of a new approach we call topic CVaR, which addresses these two problems (Sections \ref{approach:topics} and \ref{approach:loss}). \subsection{Robustness to arbitrary subpopulations} \label{approach:joint} Observing that MLE is not robust because it assigns low probabilities (i.e. incurs high losses) to rare sentences, we might initially try to define $\mathcal{P}$ as individual training examples to ensure low loss on \emph{all} data points. However, this is far too conservative, since the worst-case distribution would consist of exactly one data point. Therefore, we may want to optimize a slightly more realistic uncertainty set consisting of all sufficiently large subpopulations of the training distribution.
Minimizing losses over all subpopulations of the training distribution can be formulated as a type of distributionally robust optimization (DRO) problem \cite{duchi2018learning}, which has been used to regularize models \cite{duchi2016variance}, defend against adversarial examples \cite{sinha2018certifiable}, and improve the fairness of models \cite{hashimoto2018repeated}. One type of distributionally robust loss is known as conditional value at risk (CVaR), which guarantees low losses on all $\alpha$-fraction subpopulations of the training distribution \cite{rockafellar2000optimization}. This corresponds to defining the uncertainty set $\mathcal{P}$ as all sentence distributions that are \emph{$\alpha$-covered} by $p_{\text{x}}^{\text{train}}$, \begin{align} \label{def:psetx} \mathcal{P}_{\text{x}}^{\alpha} \defeq \{p_{\text{x}} : \alphap_{\text{x}}(x) \le p_{\text{x}}^{\text{train}}(x) ~~ \forall x\}. \end{align} This is equivalent to defining $\mathcal{P}_{\text{x}}^{\alpha}$ as the set of $p_{\text{x}}$ which fulfill $p_{\text{x}}^{\text{train}} = \alphap_{\text{x}} + (1-\alpha)p_{\text{x}}^{\text{other}}$ for some distribution $p_{\text{x}}^{\text{other}}$. To achieve low loss on all possible test distributions in $\mathcal{P}_{\text{x}}^{\alpha}$, we minimize the expected loss under the worst-case distribution, \begin{align} \label{opt:joint} \sup_{p_{\text{x}}\in\mathcal{P}_{\text{x}}^{\alpha}} \E_{x\simp_{\text{x}}}[\ell(x;\theta)]. \end{align} For the remainder of the paper, we will refer to this approach as \emph{sentence CVaR}, highlighting the fact that it considers robustness over arbitrary sets of sentences. It intuitively encourages uniform performance across all subpopulations of sentences by downweighting sentences with low loss, and upweighting sentences with high loss. Because sentence CVaR considers \emph{arbitrary} groups of examples, it can be too conservative in our problem setting. While sentence CVaR can prevent modeling common sentences at the cost of rare ones, it can also encourage modeling invalid sentences at the expense of valid ones. Returning to our example in Figure \ref{fig:example} with $\ell(x;\theta)=-\log p_\theta(x)$, sentence CVaR for sufficiently low $\alpha$ achieves perfectly uniform performance. It equalizes likelihoods across all sentences, which unfortunately also results in high probabilities assigned to the ungrammatical sentence F. \subsection{Robustness over Topics} \label{approach:topics} Sentence CVaR is too conservative since it allows for arbitrary groups --- including ones consisting of purely invalid sentences. To remedy this, we will optimize models for all \emph{meaningful} subpopulations instead of \emph{arbitrary} ones. One way to achieve this is through robustness over topics, rather than individual examples. For example, a news corpus often contains a variety of topics (politics, business, opinion, food) and a test corpus may contain these topics with different proportions. A robust language model should perform well on a wide range of topic mixtures without taking the topic identity as an input. Formally, we posit that each sentence $x$ belongs to some latent topic $z$, which has a sentence distribution $\pwdist{z}$. We want our models to be robust to shifts in the topic distribution, where we have $z\simp_{\text{z}}^{\text{train}}$ and $z\simp_{\text{z}}^{\text{test}}$. In this case, we can define a natural uncertainty set for CVaR, defined over latent topics rather than individual examples.
Extending the definition of $\alpha$-covered distributions to topics, we have the set \begin{align} \label{def:pset} \mathcal{P}_{\text{z}}^{\alpha} \defeq \{p_{\text{z}} : \alphap_{\text{z}}(z)\lep_{\text{z}}^{\text{train}}(z) ~~\forall z\} \end{align} and the objective is the expected loss under the worst-case topic distribution, \begin{align} \label{opt:marg} \sup_{p_{\text{z}}\in\mathcal{P}_{\text{z}}^{\alpha}} \E_{z\simp_{\text{z}}}\left[\E_{x\sim\pwdist{z}}\left[\ell(x;\theta)\right]\right]. \end{align} This objective intuitively encourages uniform loss across topics by upweighting topics incurring high losses and downweighting topics with low losses, while keeping the conditional distribution of sentences given a topic constant. \subsection{Baselined Loss Function} \label{approach:loss} Recall that DRO depends critically on the choice of uncertainty set and loss function. Having specified the uncertainty set, we now turn to the choice of loss $\ell(x;\theta)$. While the log loss $\ell(x;\theta) = -\logp_\theta(x)$ is the standard choice in language modeling, we show that this approach has a flaw in the robust setting and propose a corrected loss. \paragraph{Log Loss.} Using log loss on CVaR encourages uniform \emph{absolute} log-likelihoods across topics even if some topics are much harder than others. For example, consider a model which performs nearly optimally on difficult topics and highly suboptimally on easy topics. Since log loss measures absolute performance, it would force the model to focus on the difficult topic \emph{even if the model can't improve further on this topic}. In the example in Figure \ref{fig:example}, news is emphasized over reviews because news has higher entropy and thus higher difficulty. Empirically, we observe that log loss with CVaR forces the models to focus almost entirely on the difficult topics such as long news stories. \paragraph{Baselined Loss.} We now propose a new baselined loss, which encourages uniform \emph{relative} performance across topics. We refer to our approach with the baselined loss as \emph{topic CVaR}. The baselined loss function $\ell(x,z;\theta) = \log\pwprob{x}{z}-\logp_\theta(x)$ evaluates the performance of the model relative to the best possible model for the topic, $\log\pwprob{x}{z}$. Although we do not observe $\log\pwprob{x}{z}$, we will show later in section \ref{alg:entropy} that we can estimate sufficient statistics of $\log\pwprob{x}{z}$ that allow us to compute the baselined loss. By using baselined loss, we intuitively encourage models to perform as well as it can on each topic while making optimal trade-offs among topics. Plugging the baselined loss into the robust objective \eqref{opt:marg}, the optimization problem is \begin{align} \label{opt:drobase} \sup_{p_{\text{z}}\in\mathcal{P}_{\text{z}}^{\alpha}} \E_{z\simp_{\text{z}}}\left[\E_{x\sim\pwdist{z}}\left[\log\pwprob{x}{z}-\logp_\theta(x)\right]\right], \end{align} which can be simplified to \begin{align} \label{opt:drokl} \sup_{p_{\text{z}}\in\mathcal{P}_{\text{z}}^{\alpha}} \E_{z\simp_{\text{z}}}\left[\dkl{\pwdist{z}}{p_\theta}\right]. \end{align} Topic CVaR thus minimizes the per-topic KL divergences, and this interpretation fits nicely with a general goal of training $p_\theta$ that matches the test distribution. Unlike in the MLE case, minimizing the KL is not equivalent to minimizing the log loss. 
In MLE, minimizing $\text{KL}(p_{\text{x}}^{\text{train}}\|p_\theta)=\E\left[\logp_{\text{x}}^{\text{train}}(x)-\logp_\theta(x)\right]$ is equivalent to minimizing the log loss because $\logp_{\text{x}}^{\text{train}}(x)$ can be treated as a constant. However, in topic CVaR, the analogous baseline entropy term $\log\pwprob{x}{z}$ depends on $z$ and thus is not a constant with respect to the outer supremum. In the running toy example (Figure \ref{fig:example}), topic CVaR results in robust models that perform relatively well on both news and reviews. The resulting model is a mixture of news and review distribution with equal weights on the two topics. In summary, topic CVaR contains two improvements over existing DRO approaches: using the latent topic distribution $p_{\text{z}}^{\text{train}}$ to specify the uncertainty set and defining the baselined loss. In the following section, we will describe an algorithm which optimizes this topic CVaR objective. \section{Discussion} \label{discussion} In this work, we show that the performance of language models degrade as the amount of text from outside the test distribution grows. We hypothesize that this problem arises from the tendency of MLE to optimize for common sentences in the corpus, and we propose a solution based on distributionally robust optimization. Empirically, we demonstrate that the DRO-based topic CVaR is more robust than MLE to subpopulation shifts and similar shifts. While this work focuses on DRO for language modeling, train-test mismatches under subpopulation shifts are more broadly applicable to any task where there are trade-offs between potential test distributions, and potential test distributions can be described with topics. Our work shows that topics are an effective way to encode prior information about test distributions, and baselines can properly normalize for the difficulty across these topics. \section{Experiments \label{experiments}} \newcommand{\textsc{OneBWord}}{\textsc{OneBWord}} We demonstrate that topic CVaR improves maximum likelihood language models when $p_{\text{x}}^{\text{train}}\neqp_{\text{x}}^{\text{test}}$. Section \ref{expdetails} outlines the experimental setup while Section \ref{robustsub} shows the robustness improvements and analysis of topic CVaR. \subsection{Evaluation Details} \label{expdetails} \paragraph{Datasets.} We use the following three corpora: the Yelp review corpus (\textsc{Yelp}, \shortcite{yelp2017yelp}), One Billion Word benchmark corpus (\textsc{OneBWord}), and the TripAdvisor Annotated Dataset (\textsc{TripAdv}, \citet{marcheggiani2014hierarchical}). We preprocess the corpora using \textsc{SpaCy} (\citet{honnibal2015nmdp}) by removing sentences with fewer than 10 characters, segmenting sentences, tagging named-entities, and replacing each entity with its corresponding OntoNotes tag. \paragraph{Vocabulary.} Our experiments will evaluate models using perplexity, which depends on the choice of vocabulary. To make perplexity comparable for models trained on different datasets, we use a single, fixed vocabulary formed by combining the most frequently occurring $10,000$ words in each corpus. All words in the mixtures which are not in the vocabulary (1$-$3\% in our experiments) are replaced with a special \textit{unk} token. \paragraph{Clustering.} To cluster sentences in the training set, we ran LightLDA (\citet{yuan2015lightlda}) for 100 iterations with prior hyperparameters $\alpha=0.1$, $\beta=1.0$ and $2$ Metropolis-Hastings steps. 
We set the model to find $10$ topics, as this resulted in stable clusters consisting of semantically similar sentences. \paragraph{Models.} Our models are Transformer \cite{vaswani2017attention} based language models trained using the \textsc{Fairseq} sequence-to-sequence toolkit \shortcite{gehring2017convolutional}. We use the same model architecture, optimizers, and hyperparameters for both MLE and CVaR. For both models, we use Nesterov's accelerated gradient descent, a fixed learning rate of 0.01, minibatch size of 500 sentences, and 30k minibatches (corresponding to 100 epochs on the \textsc{Yelp} corpus). These values were derived by tuning an MLE model trained on the \textsc{Yelp} data and tested on the \textsc{Yelp} dev set. \paragraph{Hyperparameters.} Topic CVaR can be unstable at small $\alpha$ values due to the fact that we are optimizing for worst-case errors. Because of this, we make three small but important modifications to the algorithm. (i) We use $\alpha=0.2$ to estimate models for $\atr < 0.2$, as small $\alpha$s can cause gradients to become unstable; (ii) we set a minimum $p_{\text{z}}(z)/p_{\text{z}}^{\text{train}}(z)$ value of 0.1; and (iii) we compute historical losses using exponentially weighted moving averages. With these modifications, the model reliably converges to similar validation losses. \subsection{Language Model Robustness} \label{robustsub} We seek to assess the performance of MLE and CVaR models under subpopulation shift. In order to do this, we train language models on various mixtures of \textsc{Yelp} and \textsc{OneBWord}{} corpora and evaluate the models on a held-out set of \textsc{Yelp} sentences. We will construct a training corpus whose distribution \emph{$\atr$-covers} the test distribution (i.e. an $\atr$ fraction of the training distribution corresponds to the Yelp distribution). In this case, we expect topic CVaR with $\alpha = \atr$ to perform well since the test set exactly fulfills the subpopulation shift assumption. To form a training corpus whose distribution $\atr$-covers the \textsc{Yelp} distribution, we mix a fixed set of 500,000 sentences from the \textsc{Yelp} training subset with $500,000(1 - \atr)/\atr$ sentences from \textsc{OneBWord}. This results in a dataset where $\atr$ of the training data comes from \textsc{Yelp}. The test corpus is composed of sentences from the \textsc{Yelp} test subset, with no sentence overlap with the training corpora. Since the absolute number of \textsc{Yelp} samples in the training corpora remains constant across different values of $\atr$, we expect that a model which is robust to added nuisance data will perform equally well on a \textsc{Yelp}-only test set, even as the mixture proportion of \textsc{OneBWord}{} samples in the training corpus increases. \paragraph{Oracle model.} We estimate the \emph{oracle} performance of a robust language model by running topic CVaR where the topic $z = \{\textsc{Yelp}, \textsc{OneBWord}{}\}$ and the topic assignments use the ground truth corpus identities rather than a clustering algorithm. In this case, when $\atr=\alpha$ we are directly minimizing the worst-case baselined test loss over both \textsc{Yelp} and \textsc{OneBWord}{}. \begin{figure}[ht!] \centering \includegraphics[scale=0.50]{figures/astar_vs_ppl.pdf} \caption{Topic CVaR (green) provides substantial improvements in perplexity compared to MLE (black and blue) as the amount of train-test mismatch increases ($\atr\to 0$).
This performance is close to the oracle performance, which uses ground truth corpus labels and early stopping (orange). } \label{fig:droplots} \end{figure} \paragraph{Topic CVaR improves robustness over MLE.} Using the \textsc{Yelp}-\textsc{OneBWord}{} mixtures, we evaluate the robustness of topic CVaR and MLE to added nuisance data. We find that with no nuisance data, the MLE model matches the topic CVaR model (Figure \ref{fig:droplots}, $\atr=1.0$). As we add data from $\textsc{OneBWord}{}$ and $\atr$ decreases to 0.7, we find some \emph{positive transfer} effects where the increased data from the \textsc{OneBWord}{} corpus improves the performance on Yelp. However, as the fraction of nuisance data grows further and $\atr$ drops below 0.4, the MLE models suffer large increases in perplexity, incurring up to 10 additional points of perplexity. Early stopping according to validation perplexity on \textsc{Yelp} does not improve this substantially beyond the basic MLE model (blue star). On the other hand, applying topic CVaR with $\atr=\alpha$ provides substantial boosts to language model performance for small $\atr$, with nearly no loss of performance for large $\atr$ (green triangle). Finally, we find that the topic CVaR method we propose is close to the best possible \emph{oracle} performance. \paragraph{Topic CVaR robustness beyond subpopulation shift.} \begin{figure}[h] \centering \includegraphics[scale=0.5]{figures/tripadv_astar_vs_ppl.pdf} \caption{The robustness improvements from topic CVaR (black vs green and orange) apply even when the test set (\textsc{TripAdv} reviews) is not a subpopulation shift from the training set (\textsc{Yelp} and \textsc{OneBWord}{}). } \label{fig:drotrip} \end{figure} The prior \textsc{Yelp}-\textsc{OneBWord}{} experiment showed that topic CVaR is more robust than MLE under subpopulation shift. We now explore the more realistic setting in which the test distribution is not a subpopulation shift, but merely ``similar'' to the training distribution. We do this by testing the same model on the \textsc{TripAdv} hotel review corpus. The hotel and restaurant review distributions are similar (i.e. they both frequently mention service) but differ in that hotel reviews often mention the location and room, while restaurant reviews often mention food items. We find a similar result consistent with the earlier subpopulation shift experiment (Figure \ref{fig:drotrip}). The MLE model performance degrades rapidly between $\atr=0.7$ and $0.1$, while topic CVaR substantially reduces this degradation. This suggests that topic CVaR models provide robustness benefits in real-world settings where the topic overlaps are not exact, and the subpopulation shift assumption no longer holds. \paragraph{Ablations.} Topic CVaR extends the standard CVaR objective in two ways: the use of topics and the use of a baseline. We investigate the effect of these choices via an ablation experiment. Removing the topic structure results in dramatic loss of performance for our models: the perplexity exceeds 80 with $\alpha=0.2$ for all $\atr$. This is because the worst-case group can consist solely of disfluent sentences that do not match any real test distribution. If we remove the baseline, the resulting model is not completely degenerate, but it is not as robust as $\atr$ decreases (Figure \ref{fig:droablation}, teal).
This is because \textsc{OneBWord}{} is a higher entropy corpus than \textsc{Yelp}, and forcing the model to achieve equal \emph{absolute} losses causes the model to focus nearly entirely on \textsc{OneBWord}{}, resulting in low \textsc{Yelp} performance. \begin{figure}[h] \centering \includegraphics[scale=0.5]{figures/astar_vs_ppl_abl.pdf} \caption{The robustness of topic CVaR degrades when the baseline is removed (teal), but is resistant to being over-conservative in choosing $\alpha$ (yellow). } \label{fig:droablation} \end{figure} \paragraph{Choice of $\alpha$.} Since the true train-test overlap $\atr$ is not always known, we cannot always set our hyperparameter $\alpha$ equal to $\atr$. We find that selecting suboptimal values of $\alpha$ degrades perplexity between 2--3 points depending on $\atr$. Figure \ref{fig:droablation} shows that setting $\alpha$ to the most conservative choice of 0.2 outperforms MLE on small $\atr$ while incurring only 2 points of perplexity loss over MLE at $\atr=1.0$. Figure \ref{fig:appl} further demonstrates that when $\atr = 0.1$, any choice of $\alpha$ outperforms MLE, and incorrectly selecting $\alpha$ seems to incur a linear penalty in perplexity. \begin{figure}[h] \centering \includegraphics[scale=0.5]{figures/alpha_vs_ppl.pdf} \caption{Topic CVaR outperforms MLE in the $p_{\text{x}}^{\text{train}} \neq p_{\text{x}}^{\text{test}}$ setting ($\atr=0.1$) for any small $\alpha$ (x-axis). The performance degradation is linear, implying topic CVaR is robust to small errors in the choice of $\alpha$. } \label{fig:appl} \end{figure} \begin{table*}[ht] \centering \resizebox{1.0\textwidth}{!}{ \begin{tabularx}{1.4\linewidth}{p{0.65\textwidth} | p{0.65\textwidth}} \toprule $p_{\textbf{MLE}} > p_{\textbf{CVaR}}$ & $p_{\textbf{CVaR}} > p_{\textbf{MLE}}$ \\ \midrule my girlfriend had an awful accident that hurt her leg \& ankle which resulted in a fire and rescue ride & huge servings, so plenty for leftovers.\\ \addlinespace[0.1cm] the address [PERSON] has listed is their old address & it tastes the way food should taste!\\ \addlinespace[0.1cm] wonderful location in a up and coming part of [GPE]. & every single person we spoke to on staff was absolutely incredible.\\ \addlinespace[0.1cm] \end{tabularx} } \caption{Examples from the \textsc{Yelp} corpus for which MLE outperforms topic CVaR (left column) and vice versa. Brackets indicate \textsc{OntoNotes} named-entity tags. The examples preferred by topic CVaR are stereotypical Yelp sentences, while those preferred by MLE refer to locations and accidents. } \label{tab:examplesacc} \vspace{-10pt} \end{table*} \paragraph{Error analysis and examples.} Evaluating both models trained with $\atr=0.1$ on both the \textsc{Yelp} and \textsc{OneBWord}{} test sets, we find that topic CVaR assigns higher probabilities (and therefore incurs lower losses) on sentences from Yelp (Figure \ref{fig:droscatter}, top right). We also see that MLE does particularly well on low loss examples (bottom left) while topic CVaR does well on high-loss ones (top right) as we might expect from optimizing the worst-case losses. Examining examples from the \textsc{Yelp} test set (Table \ref{tab:examplesacc}), we identify examples which have substantially higher probabilities under MLE than topic CVaR (left column) and vice versa (right column). 
These examples show that topic CVaR performs well by assigning high probabilities to stereotypical \textsc{Yelp} sentences that discuss food and service, while MLE performs better on sentences about accidents and locations. These examples are consistent with the observation that topic CVaR assigns higher probabilities to typical \textsc{Yelp} sentences and thus has lower perplexity, while the MLE model has high perplexity since it assigns probabilities to \textsc{Yelp} sentences primarily based on their similarity to examples from \textsc{OneBWord}{}. \begin{figure}[h] \centering \includegraphics[scale=0.5]{figures/scatterlogp.pdf} \caption{ Log losses for sentences (points) from \textsc{Yelp} (blue) and \textsc{OneBWord}{} (red) under topic CVaR (y-axis) and MLE (x-axis). Topic CVaR performs well on \textsc{Yelp} and infrequent sentences (top right). MLE performs better on frequent sentences from \textsc{OneBWord}{} (bottom left). } \vspace{-10pt} \label{fig:droscatter} \end{figure} \section{Introduction} \label{intro} Large-scale language modeling plays a central role in both text generation \cite{sordoni2015neural,nallapati2016abstractive} and unsupervised pre-training \cite{vaswani2013decoding, dai2015semi, mccann2017learned, peters2018elmo,devlin2018BERT, radford2018improving}. In both settings, a single language model is trained on a large corpus containing a range of topics (e.g. news, fiction, and reviews). This language model is then applied in many different tasks, each with a specific test distribution (e.g., analyzing the sentiment of restaurant reviews). Can we train a single general-purpose language model that works across a wide range of potential test distributions? \begin{figure} \centering \includegraphics[scale=0.2]{figures/mle_vs_dro.pdf} \caption{ Illustration of a training corpus as a density (black) with mostly news stories (red) and a small number of restaurant reviews (blue). The standard MLE model (gray) reflects the underlying data and assigns little weight to reviews, and thus performs poorly on reviews. A more robust model should try to equalize the weight across all topics so that it can perform well regardless of which topics appear at test time. } \label{fig:figone} \vspace{-10pt} \end{figure} In this work, we first demonstrate that standard maximum likelihood training on a large, heterogeneous dataset can fail to achieve this goal. While more data is generally better, the presence of text outside the target distribution actually \emph{degrades} performance on a target test distribution. For example, a language model trained on Yelp reviews achieves a perplexity of 32, and this perplexity increases to 43 when trained on a mixture of 10\% Yelp and 90\% newswire sentences from the One Billion Word Benchmark \cite{chelba2013one}. Performance degrades because existing maximum likelihood estimation (MLE) objectives tend to emphasize model performance on more common sentences and topics at the expense of infrequent ones (Figure \ref{fig:figone}). While the above performance degradation can be mitigated by fine-tuning and domain adaptation techniques \cite{shimodaira2000improving,quinonero2009dataset,daume07easyadapt,ben2010theory,blitzer2011domain, pryzant2017domainmix,ganin2015domain,tzeng2014domain}, these methods require knowing the test distribution and training a separate model specific to each target distribution. Instead, we aim to train a \emph{single} model that performs well across many unknown test distributions. 
In order to do this, we will train a model that performs uniformly well over an entire family of potential test distributions. Since we cannot expect to do well on all possible test distributions, we consider the \emph{subpopulation shift} setting, in which the test distribution is a subpopulation of the training distribution, and seek good performance across all such test distributions (e.g. Yelp reviews in a Yelp-newswire mixture). \footnote{The subpopulation assumption refers to overlaps in \emph{distributions}, rather than individual examples. Our assumptions do not require overlap in the training and test data.} In other words, adding data from topics outside the test topics should not hurt. It seems reasonable to protect against subpopulation shifts, intuitively because large-scale data collection schemes are designed to cover a diverse array of topics as a way to generalize to potential test distributions. We train a model that performs well over all subpopulations by minimizing the risk for the \emph{worst-case} subpopulation, following the distributionally robust optimization (DRO) literature \cite{bental2013robust}. While an existing DRO framework called the conditional value at risk (CVaR) ensures uniformly good performance across subpopulations \cite{rockafellar2000optimization,duchi2018learning}, we demonstrate that na\"ively applying this approach to language modeling fails due to three challenges. First, the existing CVaR approach is too conservative because it considers robustness to \emph{arbitrary} subpopulations. Such worst-case subpopulations are attained by adversarially choosing the hardest, most unusual sentences. Instead, we propose to consider \emph{meaningful} subpopulations, defined by topics in a corpus \cite{hu2018does}. Second, applying CVaR directly to log loss results in a loss which is biased towards topics with high entropy, instead of those for which the model performs poorly relative to what is possible. We correct this by introducing a new \emph{baselined} loss function which measures losses relative to the entropy of each topic. Finally, existing optimization algorithms for CVaR are either inapplicable to topic-based robustness sets or unscalable because they require batch optimization. We develop a scalable online algorithm which identifies the worst-performing topics at each iteration and upweights examples from those topics. With these methodological improvements, we demonstrate that our approach, \emph{topic CVaR}, improves robustness against subpopulation shifts. Topic CVaR reduces perplexity on the Yelp review corpus by 5.5 points compared to MLE when trained on the Yelp-One Billion Word Benchmark mixture from before. We also show improved robustness even when the shift is not strictly a subpopulation shift. Topic CVaR also achieves a 4 point perplexity reduction on a test distribution (TripAdvisor hotel reviews) that is similar to, but not strictly a subpopulation of the training distribution (Yelp and newswire text). \section{Appendices} \end{document} \section{Problem Statement} \label{problemst} Our goal is to learn a language model $p_\theta$ based on sentences sampled from the training distribution $x \sim p_{\text{x}}^{\text{train}}$, such that $p_\theta$ performs well on unknown test distributions $p_{\text{x}}^{\text{test}}$. 
Language models $p_\theta$ are generally trained to approximate $p_{\text{x}}^{\text{train}}$ by minimizing the KL divergence $\dkl{p_{\text{x}}^{\text{train}}}{p_\theta}$ via maximum likelihood estimation (MLE) \begin{align} \label{opt:klmle} \inf_\theta \E\left[- \logp_\theta(x)\right]. \end{align} When $p_{\text{x}}^{\text{test}} = p_{\text{x}}^{\text{train}}$, classical statistical theory guarantees that a model trained via MLE performs well on the test distribution given sufficient data. However, when $p_{\text{x}}^{\text{test}}$ is not identical to $p_{\text{x}}^{\text{train}}$, MLE can perform poorly no matter how much data is observed. This is because the test set might consist solely of sentences from topics that are infrequent during training, to which MLE would assign low probabilities. \begin{figure} \centering \includegraphics[scale=0.25]{figures/example.pdf} \caption{ Toy example of a multinomial distribution over six sentences (top). Different panels illustrate models learned by different training procedures. MLE fits common topics (news) at the expense of rare ones (reviews). Sentence CVaR is too conservative, overemphasizing the ungrammatical sentence. Topic CVaR with log loss overemphasizes difficult topics (news) over easy ones (review). Topic CVaR (with baselining) balances the weight assigned to each topics, as desired. } \vspace{-7pt} \label{fig:example} \end{figure} To illustrate this point, consider the toy example drawn in Figure \ref{fig:example}. In this example, the training distribution $p_{\text{x}}^{\text{train}}$ is a multinomial distribution over six possible sentences A--F, with two from reviews and four from news. Sentence F is ungrammatical and thus has an extremely low probability. The training distribution includes $10\%$ reviews and $90\%$ news, whereas the test distribution could be all reviews, all news, or a mixture. MLE assigns low probabilities to any review and thus performs poorly when evaluated solely on reviews. To be robust, we intuitively need a more conservative objective that encourages models to assign higher probabilities to rare but valid sentences. In order to achieve this, we want to learn a model $p_\theta$ which performs well in situations where $p_{\text{x}}^{\text{train}} \neq p_{\text{x}}^{\text{test}}$ for a large set of potential test distributions $\mathcal{P}$, termed the uncertainty set. By training a model that performs well on all distributions in the uncertainty set $\mathcal{P}$, we can ensure good test performance as long as $p_{\text{x}}^{\text{test}} \in \mathcal{P}$. More formally, this approach falls under the framework of distributionally robust optimization (DRO) \cite{bental2013robust}. With DRO, we optimize a model for loss $\ell$ and a set of potential test distributions $\mathcal{P}$ by minimizing the risk under the \emph{worst-case} distribution in $\mathcal{P}$, \begin{align} \sup_{p_{\text{x}} \in\mathcal{P}} \E_{p_{\text{x}}}[\ell(x;\theta )]. \end{align} Observe that the above worst-case objective does not depend on the unknown quantity $p_{\text{x}}^{\text{test}}$. The objective also upper bounds the test risk for all $p_{\text{x}}^{\text{test}} \in \mathcal{P}$ as \begin{align} \label{klgoal} \E_{p_{\text{x}}^{\text{test}}}[\ell(x;\theta)] \leq \sup_{ p_{\text{x}} \in\mathcal{P}} \E_{p_{\text{x}}}[\ell(x;\theta )], \end{align} so optimizing the above objective gives guarantees on test performance whenever $p_{\text{x}}^{\text{test}} \in \mathcal{P}$. 
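As a small numerical illustration of this guarantee, the following Python sketch evaluates \eqref{klgoal} for an $\alpha$-covered uncertainty set on a toy multinomial loosely patterned on Figure~\ref{fig:example} (the probabilities below are invented for illustration and are not the values used in the figure): the worst case over $\mathcal{P}_{\text{x}}^{\alpha}$ is obtained by upweighting the highest-loss sentences up to $p_{\text{x}}^{\text{train}}(x)/\alpha$ until the probabilities sum to one.

\begin{verbatim}
import numpy as np

# Invented toy corpus: four "news" sentences carrying 90% of the training
# mass and two "review" sentences carrying the remaining 10%.
p_train = np.array([0.30, 0.30, 0.20, 0.10, 0.06, 0.04])
topic   = np.array(['news', 'news', 'news', 'news', 'review', 'review'])

q    = p_train              # the MLE solution reproduces the training distribution
loss = -np.log(q)           # per-sentence log loss of that model

# One admissible test distribution: reviews only (alpha = 0.1 covers it).
p_test = np.where(topic == 'review', p_train, 0.0)
p_test = p_test / p_test.sum()
test_risk = float(p_test @ loss)

# Worst case over the alpha-covered set: fill p(x) up to p_train(x)/alpha,
# starting from the highest-loss sentences.
alpha, mass, sup_risk = 0.1, 0.0, 0.0
for i in np.argsort(-loss):
    w = min(p_train[i] / alpha, 1.0 - mass)
    sup_risk += w * loss[i]
    mass += w

print(test_risk <= sup_risk + 1e-12)   # True: the sup upper-bounds the test risk
\end{verbatim}

In this toy instance the reviews-only distribution happens to be the worst case itself, so the bound holds with equality; for the other members of $\mathcal{P}_{\text{x}}^{0.1}$ the inequality is strict.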
DRO provides a conceptually appealing framework for learning under train-test mismatch. However, it crucially depends on both the choice of uncertainty set $\mathcal{P}$ and loss $\ell$, and we will discuss these choices in the next section. \section{Related Work} \label{related} \paragraph{Domain Adaptation:} In the case of known source (train) and target (test) domains, there exist a variety of techniques to learn robust models \cite{shimodaira2000improving,quinonero2009dataset,daume07easyadapt,ben2010theory,blitzer2011domain, pryzant2017domainmix} or domain-invariant features \cite{ganin2015domain,tzeng2014domain}. However, such methods require accurate domain membership annotations. In the absence of domain membership annotations, prior multi-source domain adaptation \citep{mansour2009dams} approaches propose the use of clustering to identify candidate domains. For instance, \citet{hoffman2012discovering} and \citet{xiong2014latent} discover latent domains in classification by clustering data using class labels. \citet{gong2013reshaping} extend this work by identifying subsets which are distinct and learnable. More recent work consider errors in estimating the target domain \citep{hoffman2018msda} and derive learning bounds with respect to such errors. While these approaches make use of cluster and topic structures as prior, they still require some knowledge of the target distribution and train a model tailored to the target distribution. Instead, we assume no knowledge on the target distribution and train a single model by considering the worst case. In \emph{conditional} settings such as machine translation, prior works connect topic modeling and domain adaptation \cite{hu2014polylingual, eidelman2012topic}. However, unlike our work, these approaches use topics at \emph{test time} by inferring the domain from the input variable $x$. In language modeling, we have no inputs and thus must find models robust to unknown domain shifts at test time. In addition, it can be difficult to infer the test distribution as the distribution can rapidly change across users and time. \paragraph{Distributional Robustness:} Our approach is based upon existing work in the distributionally robust optimization (DRO) literature. Optimizing on a ball of distributions around the empirical distribution has been considered in prior work \citep{bental2013robust, namkoong2017variance, duchi2016variance, sinha2018certifiable}. Using DRO to minimize losses over subpopulations was proposed earlier in \citet{hashimoto2018repeated} and \citet{duchi2018learning}, and \citet{hu2018does} proposed incorporating problem structure via class labels. Our work derives an efficient optimization procedure for DRO with topic-based uncertainty sets, and demonstrates that naively applying DRO to log losses fails to provide robustness due to the lack of baselining.
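To make the online procedure of Section~\ref{alg:opt} concrete, we close with a minimal sketch in Python/NumPy (an illustrative pseudo-implementation written for this exposition, not the training code used in our experiments; the data stream and \texttt{grad\_loss} are placeholders). It maintains the per-topic historical average losses, computes the worst-case topic distribution of Eq.~\eqref{btl} with the sort-and-fill rule, and applies the importance-weighted gradient update:

\begin{verbatim}
import numpy as np

def worst_case_pz(avg_loss, p_train, alpha):
    """Sort-and-fill solution of the argmax in Eq. (btl): upweight the
    highest-loss topics up to p_train(z)/alpha until the mass sums to one."""
    pz, mass = np.zeros_like(p_train), 0.0
    for z in np.argsort(-avg_loss):
        pz[z] = min(p_train[z] / alpha, 1.0 - mass)
        mass += pz[z]
    return pz

def topic_cvar_sgd(stream, theta, n_topics, alpha, eps, grad_loss):
    # stream yields (x, z, loss) triples with baselined losses already computed;
    # grad_loss(x, theta) is a placeholder for the per-example loss gradient.
    counts = np.zeros(n_topics)          # -> empirical \hat{p}_z^train
    sums   = np.zeros(n_topics)          # -> historical loss averages \hat{L}(z)
    for x, z, loss in stream:
        counts[z] += 1.0
        sums[z]   += loss
        p_train  = counts / counts.sum()
        avg_loss = sums / np.maximum(counts, 1.0)
        pz = worst_case_pz(avg_loss, p_train, alpha)
        weight = pz[z] / p_train[z]      # ~1/alpha on worst-case topics, ~0 otherwise
        theta  = theta - eps * weight * grad_loss(x, theta)
    return theta

# The footnote example: alpha = 0.2, average losses [40, 30, 60],
# topic proportions [0.2, 0.7, 0.1]  ->  worst-case p_z = [0.5, 0, 0.5].
print(worst_case_pz(np.array([40., 30., 60.]), np.array([0.2, 0.7, 0.1]), 0.2))
\end{verbatim}

In practice the historical averages are replaced by exponentially weighted moving averages and the importance weight is clipped from below, as described in Section~\ref{expdetails}.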
\section{Introduction} Computing the linear response functions (and the density of states) of large systems with thousands of atoms by conventional methods requires calculating the eigenvalues and eigenvectors of \( N \times N \) Hamiltonian matrices ($N \gg 10^{6}$) from the lowest state to the Fermi energy and beyond. The standard diagonalization routines are too time-consuming for such problems because their computing time is proportional to \( N^3 \). Therefore efficient numerical algorithms, such as recursive Green's function methods \cite{Recursive,Branger}, the Lanczos methods \cite{Lanczos,Dagotto94,Cordelli,Jaklic}, the Chebyshev polynomial expansion \cite{Recipes,Kosloff,Hoffman,Silver,Roder,Wang,Voter,Goedecker,Sankey}, and conjugate gradient methods \cite{Payne,Nomura96}, have been developed and applied to various problems. In this paper, we present an efficient method for calculating the linear response functions of large quantum systems. We give up the calculation of exact eigenstates; instead, we compute the linear response functions by integrating the time-dependent Schroedinger equations for a finite period determined by the required energy resolution. Since it avoids the \( O(N^3) \) computational effort of matrix diagonalization, this method requires only \( O(N) \) computational effort for sparse Hamiltonian matrices. To realize this scheme, we exploit several numerical techniques such as the Chebyshev polynomial expansion of matrix functions \cite{Recipes,Kosloff,Hoffman,Silver,Roder,Wang,Voter,Goedecker,Sankey}, random state vectors \cite{Sankey,Drabold,Skilling}, the Hamiltonian matrix discretized in real space \cite{Chelikowsky,Fletcher}, and the time-dependent Schroedinger equation discretized in real time \cite{Askar,Lefo,Iitaka94,Iitaka94b,Iitaka95a,Iitaka95b,Iitaka96,Feit,Kuroda,Raedt1,Raedt2,Kawarabayashi,Natori,Glutsch}. \section{Time-dependent methods} \label{sec:time} In this section, we describe how we reached the conclusion that the linear response functions of large quantum systems can be calculated efficiently by using the time-dependent homogeneous Schroedinger equation. \subsection{Diagonalize or not diagonalize?} Let us compare the computational efforts of the conventional diagonalization method and the time-dependent method by counting the number of floating point multiplications as a function of the matrix dimension $N$, and show that the time-dependent method is more efficient when a large number of eigenstates is involved. First, we review the relation between the eigenstate representation and the time-dependent representation of linear response functions. The linear response function $\chi_{BA}(\omega+i\eta)$ of an observable $B$ due to a {\em monochromatic} perturbation $H^{ex}=e^{-i(\omega+i\eta) t}A $ is calculated by time-dependent perturbation theory \cite{Imry}, \begin{eqnarray} \label{eq:chi.time0} \chi_{BA}(\omega+i\eta)&=& \displaystyle (-i) \int_0^\infty dt e^{+i(\omega+i\eta)t} \left\{ \langle E_g| e^{+iHt} B e^{-iHt} A | E_g \rangle -c.c. \right\} \\ \label{eq:chi.time} &\approx& 2 \int_0^T dt e^{+i(\omega +i\eta)t} {\rm Im} \left\{ \langle E_g| B e^{-iHt} A | E_g \rangle e^{+iE_gt}\right\} \end{eqnarray} where $|E_g\rangle$ and $E_g$ are the ground state of the many-electron system and its energy, respectively; $\omega$ and $\eta$ are the frequency and its resolution, respectively; $T\gg 1/\eta$ is the integration time. We use atomic units (a.u.), and indicate the complex conjugate by ``c.c.''.
In numerical calculation of (\ref{eq:chi.time}), we have to discretize it in time, e.g., \begin{equation} \label{eq:chi.time.discrete} \chi_{BA}(\omega+i\eta)= 2 \sum_{m=0}^M \Delta t e^{+i(\omega+i\eta)m \Delta t} {\rm Im} \left\{ \langle E_g| B e^{-iHm \Delta t} A | E_g \rangle e^{+iE_gm\Delta t} \right\} \end{equation} where $M=T/\Delta t$ is the number of timesteps, $T$ is the integration time in (\ref{eq:chi.time}), and $\Delta t$ is the width of timestep. On the other hand, we obtain the eigenstate representation by inserting $I=\sum_{m=1}^{N}|E_m\rangle \langle E_m| $ into (\ref{eq:chi.time0}), \begin{eqnarray} \label{eq:chi.eigen} \chi_{BA}(\omega+i\eta) &=&\sum_{m=1}^N \frac{\langle E_g| B | E_m \rangle \langle E_m| A | E_g \rangle}{(\omega+i\eta) -(E_m-E_g )} \nonumber \\ && - \sum_{m=1}^N \frac{\langle E_g| A | E_m \rangle \langle E_m| B | E_g \rangle}{(\omega+i\eta) +(E_m-E_g )} . \end{eqnarray} Next, we show the estimated computational efforts in Table~\ref{tbl:compare}. The diagonalization method for $N \times N$ Hamiltonian matrix requires memory space of $O(N^2)$ and computational effort of $O(N^3)$. On the other hand, the time-dependent method requires memory space of $O(N^2)$ and computational effort of $O(MN^2)$ where $M$ is the number of timesteps determined by the required energy resolution (See section~\ref{subsec:homo} ). By choosing an appropriate basis set, we can make the Hamiltonian a sparse matrix having only $O(N)$ non-zero matrix elements \cite{Chelikowsky,Iitaka94b}. As the result, the computational effort and the memory space of the time-dependent method are reduced to $O(MN)$ and $O(N)$, respectively. Thus the time-dependent method can be more efficient than diagonalization method in large $N$ limit. \subsection{Newton or Schroedinger?} Table~\ref{tbl:equations} classifies various time-dependent methods in terms of kind of equation and homogeneity. Though the Newton equations of harmonic oscillators \cite{Williams,Yakubo,Fukamachi,Tanaka,Hukushima} are mathematically equivalent to the Schroedinger equations in the eigenstate representation, use of the Schroedinger equation \cite{Askar,Lefo,Iitaka94,Iitaka94b,Iitaka95a,Iitaka95b,Iitaka96,Feit,Kuroda,Raedt1,Raedt2,Kawarabayashi,Natori,Glutsch} has advantage that we can exploit well developed concepts and formalism of quantum theory. It is true especially when we want to deal with quantum systems. Therefore, in this article, we study only the Schroedinger equations. \subsection{Homogeneous or inhomogeneous?} \label{subsec:homo} In this subsection we show that inhomogeneous time-dependent equations are more inefficient than homogeneous ones. This conclusion is valid not only for the Schroedinger equation (Particle Source Method \cite{Iitaka96} ) but also for the Newton equation ( Forced Oscillator Method \cite{Williams,Yakubo,Fukamachi,Tanaka,Hukushima}) because both equations are equivalent in the eigenstate representation. Let us define the computational effort of the time-dependent method by the number of timesteps $M=T/\Delta t$. Then the computational effort $M$ is determined by the integration time $T$, because the maximum width of timestep is limited by the {\em sampling theorem} \cite{Recipes} independent of the detail of the method we use. 
The timestep should be much smaller than the inverse of band width, $\pi/E_B$, to reproduce the correct spectrum since, otherwise, according to (\ref{eq:chi.time.discrete}) we cannot distinguish the eigenvalues \cite{Kuroda} \begin{equation} E_k=E+\frac{2\pi k}{\Delta t} \ \ \ \ (k=1,2,\cdots) . \end{equation} In the following, we evaluate $T$ for homogeneous and inhomogeneous Schroedinger equations to calculate the real-time Green's functions at many frequencies, $\omega_l= l \Delta \omega, \ l=0,\pm 1,\pm 2,\cdots$, within a required relative accuracy $\delta$. It turns out that $T$ of inhomogeneous equations can be much longer than that of homogeneous equations. This conclusion applies to the linear response functions, too. First let us try to calculate the Green's function by solving the homogeneous equation, \begin{equation} \label{eq:schroedinger.inhomogeneous.n} i \frac{d}{dt}|\phi; t \rangle = H |\phi; t \rangle \end{equation} with the initial condition $|\phi; t=0 \rangle = |j \rangle $. The auxiliary vectors are calculated as \begin{eqnarray} |\tilde{\phi}_l; T \rangle \label{eq:auxiliary.vector.n.1} &=& (-i) \int_0^{T} dt' |\phi; t' \rangle e^{+i(\omega_l+i\eta)t'} \\ &=& (-i) \int_0^{T} dt' e^{-iHt' } |j \rangle e^{+i(\omega_l+i\eta)t'} \\ \label{green.1.1} &=& \frac{1}{\omega_l+i\eta-H} \left( 1-e^{i(\omega_l+i\eta-H)T} \right)|j \rangle \\ &\approx & \frac{1}{\omega_l+i\eta-H} |j \rangle \\ \label{eq:green.1.2.many} &=& G(\omega_l+i\eta) |j \rangle \end{eqnarray} where we have neglected the second term of (\ref{green.1.1}) by assuming $T$ is large enough so that $e^{-\eta T} < \delta$. Therefore we estimate $M$ for the homogeneous equation as \[ M_1 \approx \frac{ T }{\Delta t} = \frac{-\log \delta}{\eta \Delta t} . \] Next let us calculate the Green's function by solving the inhomogeneous Schroedinger equation, \begin{equation} \label{eq:schroedinger.inhomogeneous.2} i \frac{d}{dt}|\phi; t \rangle = H |\phi; t \rangle + |j \rangle \left( \sum_{l=-L}^L e^{-i(\omega_l+i\eta)t} \right) \theta(t) \end{equation} with the initial condition $|\phi; t=0 \rangle = 0$. The solution at large $T$ becomes \begin{eqnarray} |\phi; T \rangle &\approx & \sum_l G(\omega_l+i\eta ) |j \rangle e^{-i(\omega_l +i\eta)T} \end{eqnarray} where $T$ satisfies $e^{-\eta T} \ll \delta$. Then the auxiliary vectors $|\tilde{\phi}_l; T_2 \rangle$ are defined as \begin{eqnarray} |\tilde{\phi}_{l'}; T_2 \rangle &=& \frac{1}{T_2}\int_0^{T_2} dt' | \phi; t \rangle e^{-i(\omega_{l'}+i\eta)t'} \nonumber \\ &=&\frac{1}{T_2}\int_0^{T_2} dt' \sum_l G(\omega_l+i\eta ) |j \rangle e^{-i(\omega_l-\omega_{l'})t'} \\ \label{green.m3} &=& G(\omega_{l'}+i\eta ) |j \rangle \nonumber \\ && + \sum_{ l\ne l'} G(\omega_l+i\eta ) |j \rangle \frac{i\left ( e^{-i(\omega_l-\omega_{l'})T_2}-1\right)}{T_2(\omega_l-\omega_{l'})} \\ &\approx& G(\omega_{l'}+i\eta )|j \rangle \end{eqnarray} where we have neglected the second term of (\ref{green.m3}) by assuming that $T_2$ is large enough so that $T_2 \Delta \omega \gg 1/\delta$. Therefore $M$ becomes \begin{equation} \label{eq:many.inhomo.Nprod1} M_2 \approx \frac{1}{\Delta \omega \Delta t \delta} \end{equation} which can be much larger than $M_1$ when $\Delta \omega$ is small. 
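As a rough numerical illustration of the difference between $M_1$ and $M_2$ (a short Python check; the parameter values below are representative choices made only for this estimate, not those of a particular calculation):

\begin{verbatim}
import numpy as np

eta     = 1.0e-4   # energy resolution (a.u.)
d_omega = 1.0e-4   # frequency spacing Delta omega (a.u.)
dt      = 1.0e-1   # timestep Delta t (a.u.)
delta   = 1.0e-2   # required relative accuracy

M1 = -np.log(delta) / (eta * dt)     # homogeneous equation
M2 = 1.0 / (d_omega * dt * delta)    # inhomogeneous equation

print(M1)        # ~4.6e5 timesteps
print(M2)        # ~1.0e7 timesteps
print(M2 / M1)   # the inhomogeneous equation needs ~20x more steps here
\end{verbatim}

The gap widens further as the frequency spacing $\Delta \omega$ or the tolerated error $\delta$ is decreased, since $M_2$ grows linearly in $1/\delta$ while $M_1$ grows only logarithmically.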
\section{Non-interacting Electrons} \label{sec:electron} In this section, we apply the time-dependent homogeneous Schroedinger equation to calculate efficiently the linear response functions and density of states of non-interacting electron systems, since it is well known that there exist wide and practically important areas in condensed matter physics where non-interacting electron models are useful to predict various physical properties. Hereafter we assume that the system is described by the one-electron Hamiltonian, \begin{equation} H=\frac{1}{2} \vec{p}^2 +V(\vec{r}) . \end{equation} \subsection{Linear response function} For non-interacting electrons, the linear response function (\ref{eq:chi.eigen}) can be rewritten by using one-particle eigenstates as \cite{Imry} \begin{eqnarray} \label{eq:chi.eigen.one} \chi_{BA}(\omega) &=& \sum_{E_i \le E_f, E_j>E_f } \frac{\langle i| B | j \rangle \langle j| A | i \rangle}{(\omega+i\eta) -(E_j-E_i )} \nonumber \\ && - \sum_{E_i \le E_f, E_j>E_f } \frac{\langle i| A | j \rangle \langle j | B | i \rangle}{(\omega+i\eta) +(E_j-E_i )} \end{eqnarray} where $E_f$ is the Fermi energy, and $|i\rangle$ and $|j\rangle$ are the occupied and empty one-particle states, respectively. This formula can be again rewritten in time-dependent representation as \begin{eqnarray} \label{eq:chi.time.one} \lefteqn{ \chi_{BA}(\omega+i\eta) } \\ &=& (-i) \int_0^T dt \sum_{\begin{array}{c}\scriptstyle E_i \le E_f\\ \scriptstyle E_j>E_f\end{array}} e^{+i(\omega + i\eta)t} \left\{ \langle i| e^{+iHt} B e^{-iHt} |j\rangle \langle j| A | i \rangle -c.c. \right\} \\ &=& (-i) \int_0^T dt \sum_{E_i,E_j} e^{+i(\omega + i\eta)t} \times \nonumber \\ && \left\{ \langle i| \theta(E_f-H) e^{+iHt} B e^{-iHt} \theta(H-E_f) |j\rangle \langle j| A | i \rangle - c.c.\right\}\\ &=& \label{eq:chi.numerical.1} \left\langle \left\langle \rule{0pt}{24pt} \int_0^T dt e^{+i(\omega + i\eta)t} K(t) \right\rangle \right\rangle \end{eqnarray} where the double brackets indicate the statistical average over random vectors $| \Phi \rangle $, and $K(t)$ is the time correlation function defined by \begin{equation} \label{eq:correlation} K(t)= 2 {\rm Im} \langle \Phi | \theta(E_f-H)e^{+iHt}B e^{-iHt} \theta(H-E_f)A | \Phi \rangle . \end{equation} Equations (\ref{eq:chi.numerical.1}) and (\ref{eq:correlation}) are the main result of this paper. Note that calculating the trace over the initial states $|i\rangle$ by using random vectors reduces the computational effort by a factor of $N$. As the result, the computational effort still remains $O(N)$ in spite of the double summation in (\ref{eq:chi.eigen.one}). In the above equations, we have introduced several numerical techniques. Firstly, the time-dependent statevectors, \begin{eqnarray} \label{eq:timevector} e^{-iHt} \theta(H-E_f)A | \Phi \rangle \nonumber \\ e^{-iHt} \theta(E_f-H) | \Phi \rangle \end{eqnarray} are calculated by the leap frog method \cite{Askar,Lefo,Iitaka94,Iitaka94b,Iitaka96} \begin{eqnarray} \label{eq:frog} |\phi; t + {\Delta t} \rangle &=& -2 i {\Delta t} H |\phi; t \rangle + |\phi; t - {\Delta t} \rangle \end{eqnarray} where the Hamiltonian matrix is discretized by finite difference \cite{Chelikowsky,Fletcher} \begin{eqnarray} {{\rm \partial }^{2}\phi \over \partial {\mit x}^{\rm 2}}&=& \sum\nolimits\limits_{\mit n\rm =-\mit N_{diff}}^{N_{diff}} \frac{1}{\Delta x^2} {C}_{n}^{(2)}\phi \left({{x}+\mit n\Delta x ,{\mit y},{\mit z}} \right) +\mit O\left({{\Delta x}^{\rm 2\mit N_{diff}}}\right) . 
\end{eqnarray} Due to this discretization, the Hamiltonian matrix becomes sparse and the matrix vector multiplication in (\ref{eq:frog}) can be done with $O(N)$ computational complexity. We use \( N_{diff}=4 \) formula in this paper. Secondly, the matrix step function for a normalized hermitian matrix $X$ whose eigenvalues \(X_i \) are in the range $[-1,1]$ is defined in its eigenstate basis \begin{equation} \theta(X)= \sum_{X_i} |X_i\rangle \ \theta(X_i) \ \langle X_i | . \end{equation} By using this step function, we can avoid the difficulties in the partial sum in (\ref{eq:chi.time.one}). Operation of this function on an arbitrary vector $|\phi\rangle$ is numerically approximated by the Chebyshev polynomial expansion \cite{Recipes,Kosloff,Hoffman,Silver,Roder,Wang,Voter,Goedecker,Sankey}, \begin{equation} f(X)|\phi\rangle \approx \sum_{k=1}^K c_k T_{k-1}(X)|\phi\rangle \end{equation} where each term of the right hand side is calculated by vector recursion formulae \begin{eqnarray} T_0(X)|\phi\rangle &=&|\phi\rangle \\ T_1(X)|\phi\rangle &=&X |\phi\rangle \\ T_{n+1}(X)|\phi\rangle &=& 2 X T_n(X)|\phi\rangle -T_{n-1}(X)|\phi\rangle \ \ n\ge1 . \end{eqnarray} To use this matrix function in (\ref{eq:timevector}), we should normalize the Hamiltonian matrix so that \( X=(H-E_f)/E_{norm} \) has eigenvalues in the range \( [-1,1] \). Thirdly, we define random vectors with random phase by \begin{equation} \label{eq:random.def} |\Phi \rangle = \sum_{n=1}^N |n\rangle e^{+i\phi_n} \end{equation} where $|n\rangle $ are basis vectors and $-\pi < \phi_n \le \pi, (n=1,\cdots,N)$ are uniform random variables that satisfy \( \left\langle \left\langle \ e^{-i\phi_{n'}} e^{i\phi_{n}} \ \right\rangle \right\rangle \ = \delta_{n'n} \). Then we can derive various useful identities such as \begin{eqnarray} \label{eq:random.norm} \langle \Phi | \Phi \rangle &=& \sum_n \langle \Phi |n \rangle \langle n | \Phi \rangle = \sum_n e^{-i\phi_{n}} e^{i\phi_{n}} =N \\ \label{eq:random.complete} \left\langle \left\langle \ | \Phi \rangle \langle \Phi | \ \right\rangle \right\rangle &=& \sum_{n'n} |n'\rangle \left\langle \left\langle \ e^{-i\phi_{n'}} e^{i\phi_{n}} \ \right\rangle \right\rangle \ \langle n | \nonumber \\ &=& \sum_n |n \rangle \langle n | = I \\ \label{eq:trace.monte.appendix} \left\langle \left\langle \ \langle \Phi | A | \Phi \rangle \ \right\rangle \right\rangle &=& \label{eq:random.trace} \sum_{n, n'} \left\langle \left\langle \ e^{i(\phi_n-\phi_{n'})} \ \right\rangle \right\rangle \ \langle n'|A|n \rangle \nonumber \\ &=&\sum_n \langle n|A|n \rangle = {\rm tr}\left[ A \right] = \sum_{E_m} \langle E_m|A|E_m \rangle \end{eqnarray} Equation (\ref{eq:random.norm}) shows that each random vector is normalized to $N$, the number of one-particle eigenstates. Equation (\ref{eq:random.complete}) shows that random vectors have normalized completeness. Equation (\ref{eq:random.trace}) shows that the expectation value of an operator by random vectors gives the trace of the operator. We used this identity to calculate the trace over $| i \rangle$ in (\ref{eq:chi.numerical.1}) and (\ref{eq:correlation}). These random vectors with random {\it phase} are more useful in calculating expectation values than random vectors with random {\it amplitude} since each random vectors are automatically normalized. 
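The trace identity (\ref{eq:random.trace}) can be checked numerically with a few lines of Python (a toy verification on a small dense matrix; in an actual calculation the operator is of course only ever applied to vectors, never stored as a full matrix):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, n_vectors = 200, 50

A = rng.normal(size=(N, N))
A = 0.5 * (A + A.T)                    # small Hermitian test matrix

estimates = []
for _ in range(n_vectors):
    phases = rng.uniform(-np.pi, np.pi, size=N)
    phi = np.exp(1j * phases)          # random-phase vector, Eq. (eq:random.def)
    estimates.append(np.vdot(phi, A @ phi).real)   # <Phi| A |Phi>

print(np.mean(estimates))              # fluctuates around tr[A]
print(np.trace(A))
\end{verbatim}

Each term $\langle \Phi | A | \Phi \rangle$ already contains the exact diagonal contribution; only the off-diagonal cross terms fluctuate and average to zero, which is why a small number of random vectors is sufficient.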
Finally the formula for numerical calculation of polarizability function $\chi_{\beta \alpha}(\omega)$ with $\alpha,\beta=x,y,z$ becomes \begin{eqnarray} \label{eq:chi.dielectric.numerical} \chi_{\beta \alpha}(\omega) &\approx& \left\langle \left\langle \rule{0pt}{24pt} \int_0^T dt e^{-\eta t} \left( e^{+i \omega t}-\delta_{\beta \alpha} \right) K(t) \rule{0pt}{24pt} \right\rangle \right\rangle \\ \label{eq:chi.correlator.numerical} K(t)&=& \frac{-2}{V (\omega+i \eta)^2} {\rm Im} \langle \Phi | \theta(E_f-H)e^{+iHt} \times \nonumber \\ && p_{\beta} e^{-iHt} \theta(E_{cut}-H) \theta(H-E_f) p_{\alpha} | \Phi \rangle \end{eqnarray} where $V$ is the volume of the supercell, and the dipole moment operators \begin{eqnarray} \label{eq:dipole.operator1} \langle j |A|i \rangle &=& \langle j | x_{\alpha} |i \rangle \\ \label{eq:dipole.operator2} \langle i|B |j \rangle &=& \frac{-1}{V} \langle i |x_{\beta}|j \rangle , \end{eqnarray} are modified to momentum operators by partial integration. We also inserted a low energy projection operator $\theta(E_{cut}-H)$ into (\ref{eq:chi.correlator.numerical}) to eliminate unphysical high energy components of the random vectors. This filter is much more effective than the quadratic filter used in \cite{Wang}. In calculating very large systems, we need only few random vectors for statistical averaging, since the fluctuation becomes smaller as the system size $N$ becomes larger \cite{Iitaka96}. Figure~\ref{fig:harmonic.eps} shows the dielectric function $\epsilon_{xx}(\omega)=1+ 4\pi \chi_{xx}(\omega)$ of four electrons in three dimensional harmonic potential \begin{equation} \label{eq:harmonic.potential} V(\vec{r})=\frac{(\omega_0 r)^2}{2} \end{equation} calculated with $32^3$ cubic meshes, $\omega_0=0.1$, $\eta=10^{-4}$. Three random vectors are used. The analytical result \cite{Jackson} \begin{equation} \epsilon_{xx}(\omega)=1+ \frac{4\pi N_e}{V} \frac{1}{\omega_0^2-\omega^2-i\omega \eta} \end{equation} is also shown for comparison, where $N_e$ is the number of electrons in the supercell of volume $V$. The deviation from the exact result near $\omega=0$ is due to finite $\eta$. The result shows that our method works very well for $\omega \gg \eta$. Figure~\ref{fig:silicon.crystal.eps} shows the dielectric function with energy resolution $\eta=0.05 (eV)$ of silicon crystal consisting of $2^{15}$ Si atoms in a cubic supercell of $16^3$ unit cells. Each unit cell is divided into $8^3$ cubic meshes. One random vector is used. We used the empirical local pseudopotential in reference \cite{Zunger}. The result agrees with experimental results and other theoretical calculations \cite{Cohen,Noguez}. In some cases we may want to ask which part of the real space the electrons contributing to the linear response function come from. We can answer to this question by calculating the linear response function by restricting the range of the trace in (\ref{eq:chi.dielectric.numerical}) within a real space domain $D$. This can be done by replacing $|\Phi\rangle$ by $|\Phi'\rangle=P_D|\Phi\rangle$ where $P_D=\sum_{n \in D} |n\rangle \langle n|$ is the real space projection operator onto $D$. \subsection{Density of states} The density of states of the system can be calculated as \cite{Raedt2} \begin{eqnarray} \label{eq:dos} \rho(\omega) &=& \frac{-1}{\pi} \sum_n {\rm Im \ } G_{nn}(\omega + i\eta) = \frac{-1}{\pi} {\rm Im \ } \left( {\rm tr} \left[ G(\omega + i\eta) \right] \right) \end{eqnarray} by combining (\ref{eq:green.1.2.many}) and (\ref{eq:trace.monte.appendix}). 
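Before turning to the numerical results, we indicate how a formula of the type (\ref{eq:dos}) can be evaluated in the time domain. The following Python sketch is again only an illustration of the idea on a one-dimensional toy lattice with arbitrarily chosen time step, propagation time and broadening $\eta$ (the production calculations use three-dimensional meshes and the leap frog scheme described above); it estimates the density of states from the time evolution of a single random-phase vector via $\rho(\omega)\approx\frac{1}{\pi}{\rm Re}\int_0^T dt\, e^{(i\omega-\eta)t}\langle\Phi|e^{-iHt}|\Phi\rangle$:

\begin{verbatim}
import numpy as np

# toy Hamiltonian: 1D periodic tight-binding chain, eigenvalues -2 cos(k)
N = 1024
H_mv = lambda v: -(np.roll(v, 1) + np.roll(v, -1))

rng = np.random.default_rng(1)
phi0 = np.exp(2j * np.pi * rng.random(N))     # random-phase vector |Phi>

dt, n_steps, eta = 0.02, 4000, 0.02
phi_prev = phi0.copy()
phi_curr = phi0 - 1j * dt * H_mv(phi0)        # first step (first order), then leap frog
corr = np.empty(n_steps, dtype=complex)       # C(t) = <Phi| e^{-iHt} |Phi>
for step in range(n_steps):
    corr[step] = np.vdot(phi0, phi_curr)
    # leap frog, Eq. (frog): |phi; t+dt> = -2 i dt H |phi; t> + |phi; t-dt>
    phi_prev, phi_curr = phi_curr, -2j * dt * H_mv(phi_curr) + phi_prev

t = dt * (np.arange(n_steps) + 1)
omegas = np.linspace(-3.0, 3.0, 301)
dos = np.array([np.sum(np.exp((1j * w - eta) * t) * corr).real * dt / np.pi
                for w in omegas]) / N         # density of states per lattice site
print("integrated DOS (should be close to 1):",
      np.sum(dos) * (omegas[1] - omegas[0]))
\end{verbatim}

As noted above, the statistical error of such a single-vector estimate decreases as the system size grows, so that only a few random vectors are needed for large systems.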
Figure~\ref{fig:harmonic.dos} shows the numerical and analytical results of the density of states in 3D harmonic potential with $32^3$ cubic meshes, $\omega_0=0.1$, and $\eta=10^{-3}$. Three random vectors are used. Figure~\ref{fig:crystal.dos} shows the density of states of silicon crystal consisting of $2^{15}$ Si atoms in a cubic supercell of $16^3$ unit cells. Each unit cell is divided into $8^3$ cubic meshes. The energy resolution is $\eta=0.05 (eV)$. Three random vectors are used. We can also calculate the {\it local} density of states integrated in a given domain $D$ by using the real space projection operator $P_D$ to restrict the summation in (\ref{eq:random.def}) within $D$, \begin{eqnarray} \label{eq:localdos} \rho_D(\omega) &=& \frac{-1}{\pi} \sum_{n \in D} {\rm Im \ } G_{nn}(\omega + i\eta) = \frac{-1}{\pi} {\rm Im \ } \left( {\rm tr} \left[ P_D G(\omega + i\eta) \right] \right) . \end{eqnarray} Photonic band structures in two-dimensional periodic structure of dielectric material \cite{Plihal,Baba,Hirayama} can also be calculated by using (\ref{eq:dos}) or (\ref{eq:localdos}) since the Maxwell equations of this system are reduced to the Schroedinger equation with position dependent mass, i.e. , \begin{eqnarray} \label{eq:photonic.hamiltonian} {\cal H} H_z(x,y) &=&\frac{\omega^2}{c^2} H_z(x,y) = E H_z(x,y) \\ {\cal H} &=& \frac{\partial}{\partial x}\frac{-1}{\epsilon(x,y)} \frac{\partial }{\partial x} +\frac{\partial}{\partial y}\frac{-1}{\epsilon(x,y)} \frac{\partial }{\partial y} \end{eqnarray} for $H$-mode where $H_z$ is the $z$-component of the magnetic field, and \begin{eqnarray} \label{eq:photonic.hamiltonian.emode} {\cal H} E_z(x,y) &=&\frac{\omega^2}{c^2} E_z(x,y) = E E_z(x,y) \\ {\cal H} &=& \frac{-1}{\epsilon(x,y)} \left\{ \frac{\partial^2 }{\partial x^2} +\frac{\partial^2}{\partial y^2} \right\} \end{eqnarray} for $E$-mode where $E_z$ is the $z$-component of the electric field. Figure~\ref{fig:5} shows a typical structure of two-dimensional photonic crystal cavities used in our calculation, and Fig.~\ref{fig:6} shows the calculated density of states as a function of frequency and wave number. \section{Summary} In this article we proposed a new numerical method suitable for calculating the linear response functions (and the density of states) of non-interacting electrons, in which the sum over the initial one-particle states are efficiently calculated by using random vectors. The advantage of this method compared to the Chebyshev polynomial method by Wang to calculate optical absorption of non-interacting electrons \cite{Wang} is that our method can calculate not only the imaginary part but also the real part of the linear response functions at the same time, and that it can calculate them without any input-output of statevectors on external storage. As the result, our method can calculate much larger systems than Wang's method. The Chebyshev polynomial method of degree $M$ should store $O(M)$ statevectors of size $O(N)$ on external storage to make the table of $O(M^2)$ generalized Chebyshev moments $\Lambda_{m,m'}$ and may take very long I/O time. The application of this method to photonic band structures, silicon nanocrystalites, and periodic structures of chaotic systems will be presented elsewhere \cite{LDSD1,LDSD2,LDSD3}. \section*{Acknowledgments} One of the authors (T.I) wishes to thank Prof. Masuo Suzuki and the referee of this article for their encouragement. The calculations were performed on Fujitsu VPP500 at RIKEN and ISSP.
\section{Introduction} Adaptive modulation is nowadays one of the key components in most wireless communication standards such as the high speed packet access (HSPA) and the worldwide interoperability for microwave access (WiMAX), since it allows for high data rates over fading channels ~\cite{c1}. In particular, the transmission rate is adapted based on the channel conditions that are estimated at the receiver’s side and are made available also at the transmitter through a feedback channel. When adaptive modulation is implemented in conjunction with power control at the physical layer, then a variable rate variable power (VRVP) modulation is considered ~\cite{c2}. Two alternative schemes of VRVP have been specified, known as continuous rate (CR) and discrete rate (DR), although the latter is more practical from an implementation point of view. \let\thefootnote\relax\footnote{This paper has been accepted to the IEEE Vehicular Technology Conference 2011 (Wireless Networks, 49963)} On the other hand, cognitive radio (CR) has been recently proposed for enhancing the spectrum utilization of licensed wireless networks, also known as primary networks (PN), when certain conditions apply ~\cite{c3}. The underutilized or unused spectrum resources can be exploited by the so-called cognitive or secondary networks (SN) as long as their operation is not harmful to the PN. Two main types of cognitive radio networks (CRNs) have been identified so far, namely opportunistic spectrum access (OSA) and spectrum sharing (SS) ~\cite{c4}. The first one relies on exploiting spectrum gaps available in the PN, which are recognized by the SN via spectrum sensing, and the second one relies on the coordinated sharing of a spectrum band among the PN and the SN. In addition, if an SS CRN also employs spectrum sensing, then the specific CRN is known as a sensing-based spectrum sharing CRN and can be regarded as a third type of CRN ~\cite{c8}. All three CRN types exploit channel state information (CSI) in order to provide enhanced spectral efficiency over the considered wireless channels ~\cite{c5} ~\cite{c7}. To this end, one of the main techniques employed is power control, which regulates the transmission of the SN users while protecting the PN ones ~\cite{c6}. In opportunistic spectrum access (OSA) systems, cognitive users (CUs) are able to detect and use channels that have been originally allocated to primary users (PUs) when these are recognized as being idle ~\cite{c4}. The knowledge of the channel’s state is very important for allocating idle channels. In particular, it allows an optimal power allocation on a specific channel ~\cite{c9}. Furthermore, it can lead to the selection of the most appropriate modulation scheme when adaptive modulation is being considered ~\cite{c1}. Both OSA and adaptive modulation depend heavily on the channel state, and thus their deployment could be jointly considered. In this letter, we study the incorporation of adaptive modulation in OSA systems. Specifically, by assuming spectrum pooling as the OSA scheme ~\cite{c7} ~\cite{c10}, we derive the gain achieved in spectral efficiency and obtain the optimal power allocation from the application of different kinds of adaptive modulation. The obtained numerical results indicate the achievable performance gain in spectral efficiency and the power requirements of such a combination. The rest of this paper is organized as follows. Section 2 describes the corresponding system model when adaptive modulation is implemented in OSA-based CRNs.
Section 3 provides the performance analysis of adaptive modulation in OSA-based CRNs over fading channels. In section 4, we present and discuss the obtained numerical results and in section 5 we provide the conclusions. \section{Opportunistic Spectrum Access CRNs System Model} \label{system} \begin{figure*} \begin{center} \includegraphics[width=6in]{figure1.eps} \end{center} \caption{System model of Opportunistic Spectrum Access CRN} \label{fig:systemmodel} \end{figure*} We assume an OSA CRN with $c \in C$ channels and $u \in U$ users, where each user $u$ is served relying on a spectrum pooling strategy that first serves the PU and subsequently the SUs ~\cite{c7}. Fading channels are assumed with channel gain $g(i)$ and additive white Gaussian noise (AWGN) $n(i)$, both at time $i$. The average transmit power over the fading channel is $P$, the AWGN has power spectral density $N_0/2$ and the received bandwidth is $B$. An SU can access a channel $c$ if and only if a predefined level of the instantaneous transmit power $P$ is achieved. This level is determined from the channel state information (CSI), which represents the received Signal-to-Noise-Ratio (SNR) $\gamma$, equal to $g(i)P/N_0B$ at time $i$ and for a unit of bandwidth. Thus, the transmit power $P$ is controlled by $\gamma$, and we denote it as $P(\gamma)$. This policy is known as optimal power allocation in wireless communications and relies on the channel state estimate $\hat{g}(i)$ at the receiver side ~\cite{c11}. The channel estimate, i.e. the SNR $\gamma$, is also available at the transmitter side via a feedback channel. We assume that the CSI is perfectly available at the receivers, i.e. PU-Rx and SU-Rx, and that the feedback channel does not induce any delays on the CSI’s transmission. Besides, a set of $M$-ary Quadrature Amplitude Modulations (M-QAM) is considered, whose selection is governed by the CSI. Thus, the combined system first determines if a user $u$ can access a channel $c$ and subsequently it selects the transmission rate $R = \log_2(M)$ via the selection of the appropriate $M$-ary modulation from the available signal set according to the estimated CSI. Fig.1 shows the system model of the considered OSA CRN. The model shows one PU and one SU and the channels $C$ which are available for access from both user types at time $i$ and $i + 1$ respectively. For instance, at time $i$ the PU occupies two channels in which the specified level in SNR has been achieved and in the same way the SU occupies two different channels in the considered spectrum at time $i + 1$. Furthermore, we make the following assumptions for the considered system: a) the transmission of each symbol is accomplished with a symbol period $T_s = 1/B$ using ideal raised cosine pulses and b) the fading channel is varying slowly in time, i.e. the receiver is able to sense and track the channel fluctuations, and thus it corresponds to a block flat fading channel model with an average received SNR $\bar{\gamma}$ ~\cite{c12}. \section{Adaptive Modulation in Opportunistic Spectrum Access CRNs over Fading Channels} \label{analysis} \subsection{Continuous Rate adaptive modulation in OSA CRN} \label{system1} The continuous rate (CR) adaptive modulation with an MQAM constellation set results in the following expression for the constellation size for a specific bit error rate (BER) ~\cite{c1} \begin{eqnarray} \label{eq1} M(\gamma) = 1 + \frac{1.5\gamma}{-\ln(5BER)} \frac{P(\gamma)}{\bar{P}} \end{eqnarray} where $\gamma$ is the received SNR.
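As a quick numerical illustration of (\ref{eq1}) (our own example, not taken from ~\cite{c1}), consider a target $BER = 10^{-3}$ and constant power transmission, $P(\gamma)=\bar{P}$. Then
\begin{equation*}
M(\gamma) = 1 + \frac{1.5\gamma}{-\ln(5\cdot 10^{-3})} \approx 1 + 0.283\,\gamma ,
\end{equation*}
so that at $\gamma = 10$ (i.e. an instantaneous SNR of $10$ dB) the scheme can support $M \approx 3.8$, corresponding to roughly $\log_2 M \approx 1.9$ bits/symbol.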
On the other hand, the aim in OSA CRNs is to allocate a channel $c$ to user $u$ which maximizes the average spectral efficiency (ASE) \footnote[1]{The term average spectral efficiency is used when fading channels are assumed} given as follows ~\cite{c7} \begin{eqnarray} \label{eq2} \nonumber S_e &=& E[\log_2 M(\gamma)] \\ &=& \int \log_2(1+\frac{1.5\gamma}{-\ln(5BER)}\frac{P(\gamma)}{\bar{P}}) p(\gamma) d\gamma \end{eqnarray} subject to the following power constraint \begin{eqnarray} \label{eq3} \int P(\gamma)p(\gamma)d\gamma = \bar{P} \end{eqnarray} Thus, the optimal power allocation which maximizes the ASE in OSA CRNs with CR adaptive modulation is given as follows ~\cite{c2} \begin{eqnarray}\label{eq4} \frac{P(\gamma)}{\bar{P}}= \begin{cases} \ \frac{1}{\gamma_0}-\frac{1}{\gamma K}, \gamma \geq \frac{\gamma_0}{K} \\ \ 0 , \gamma < \frac{\gamma_0}{K}\\ \end{cases} \end{eqnarray} where $K$ is an effective power loss that retains the BER value and it is equal to \begin{eqnarray} \label{eq5} K = \frac{-1.5}{\ln(5BER)} \end{eqnarray} Substituting (\ref{eq4}) into (\ref{eq2}), the ASE for the CR MQAM is maximized up to a cut-off level in SNR denoted as $\gamma_k=\gamma_0/K$ and thus it is obtained as follows \begin{eqnarray} \label{eq6} \langle S_e \rangle_{CR} = \int_{\gamma_k}^{\infty} \log_2(\frac{\gamma}{\gamma_k})p(\gamma)d\gamma \end{eqnarray} In the considered OSA CRN, equation (\ref{eq6}) gives the ASE that the PU achieves at a channel $c$, denoted as $Se_{1,c}$, since it is served first by the OSA strategy ~\cite{c7}. Thus, the ASE achieved by an SU $u$, denoted as $Se_{u,c}$, is equal to \begin{eqnarray} \label{eq7} Se_{u,c} = \Delta_{1,c}^{u-1}Se_{1,c} \end{eqnarray} where $\Delta_{1,c}$ is the spectrum factor gain which represents the probability that the channel $c$ is not occupied by the PU. This gain depends on the cut-off level in SNR $\gamma_k$ of the optimal power allocation over the fading channel and hence it is obtained as follows \begin{eqnarray} \label{eq8} \Delta_{1,c} = \int_{0}^{\gamma_k} p(\gamma)d\gamma \end{eqnarray} If we generalize this strategy for $U$ users, the sum ASE which an OSA CRN provides is given as follows \begin{eqnarray} \label{eq9} Se_{sum} = \Sigma_{u=1}^{U}Se_{u,c} = \Sigma_{u=1}^{U}\Delta_{1,c}^{u-1}Se_{1,c} = \frac{1-\Delta_{1,c}^{U}}{1-\Delta_{1,c}}Se_{1,c} \end{eqnarray} The term $(1-\Delta_{1,c}^{U})/(1-\Delta_{1,c})$ is called the total band factor gain and it represents the percentage of the channels that remain unused and can thus be used by the SUs, which are served with a specific priority by the OSA CRN. \subsection{Discrete Rate adaptive modulation in OSA CRN} \label{system2} We now consider a discrete rate (DR) MQAM with a constellation set of size $N$ with $M_0 = 0$, $M_1 = 2$ and $M_j = 2^{2(j-1)}$ for $j = 2,...,N$. At each symbol time, the system transmits with a constellation from the set $\{M_j, j = 0,1,...,N\}$ ~\cite{c2}. The choice of a constellation depends on the fade level $\gamma$, i.e. the SNR, over that symbol time, while the $M_0$ constellation corresponds to no data transmission. Therefore, in OSA CRNs, for each value of $\gamma$, the SU-Tx decides which constellation $M$ to transmit and what the associated transmit power $P$ is, in order to maximize the average spectral efficiency (ASE).
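The constellation-selection rule just described can be made concrete with the following Python sketch (our own illustration, not code from the paper; here the cut-off $\gamma^*$ and the average SNR $\bar{\gamma}$ are treated as free inputs, whereas in the analysis below $\gamma^*$ is fixed through the power constraint). It computes the DR region boundaries, the constellation picked for a given instantaneous SNR, and the resulting ASE over a Rayleigh-faded channel, i.e. the same quantity formalized right below:

\begin{verbatim}
import numpy as np

def dr_ase(gamma_star, gamma_bar, constellations=(0, 2, 4, 16, 64)):
    """Average spectral efficiency of DR M-QAM over Rayleigh fading:
    region j uses M_j whenever gamma_star*M_j <= gamma < gamma_star*M_{j+1}."""
    M = np.asarray(constellations, dtype=float)
    lower = gamma_star * M                      # region boundaries gamma_j = gamma^* M_j
    upper = np.append(lower[1:], np.inf)        # last region extends to infinity
    # probability of falling in each region under the Rayleigh SNR pdf
    prob = np.exp(-lower / gamma_bar) - np.exp(-upper / gamma_bar)
    rates = np.log2(np.maximum(M, 1.0))         # M_0 = 0 -> no transmission, rate 0
    return float(np.sum(rates * prob))

def pick_constellation(gamma, gamma_star, constellations=(0, 2, 4, 16, 64)):
    """Constellation chosen by the SU-Tx for an instantaneous SNR gamma."""
    M = np.asarray(constellations, dtype=float)
    idx = np.searchsorted(gamma_star * M, gamma, side="right") - 1
    return int(M[max(idx, 0)])

if __name__ == "__main__":
    gamma_bar = 10.0                            # average SNR of 10 dB
    for gamma_star in (0.5, 1.0, 2.0):
        print(gamma_star,
              pick_constellation(8.0, gamma_star),
              round(dr_ase(gamma_star, gamma_bar), 3))
\end{verbatim}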
The ASE is now defined as the sum of the data rates of each constellation multiplied by the probability that this constellation will be selected, and thus it is given as follows \begin{eqnarray} \label{eq10} \langle Se \rangle_{DR} = \Sigma_{j=1}^{N}\log_2(M_j)p(\gamma_j\leq\gamma<\gamma_{j+1}) \end{eqnarray} subject to the following power constraint \begin{eqnarray} \label{eq11} \Sigma_{j=1}^{N} \int_{\gamma_j}^{\gamma_{j+1}} \frac{P_j(\gamma)}{\bar{P}} p(\gamma)d\gamma = 1 \end{eqnarray} where $P_j(\gamma)/\bar{P}$ is the optimal power allocation that is obtained from (\ref{eq3}) for each constellation $M_j$ with a fixed BER as follows \begin{eqnarray}\label{eq12} \frac{P_j(\gamma)}{\bar{P}}= \begin{cases} \ (M_j-1)\frac{1}{\gamma_K} -\frac{1}{\gamma K}, M_j \leq \frac{\gamma}{\gamma^*} \leq M_{j+1}\\ \ 0 , M_j=0\\ \end{cases} \end{eqnarray} where $\gamma^*$ is the cut-off level in SNR of the optimal power allocation, which optimizes the boundaries of the fading regions $\gamma_j$ for $j = 0,1,...,N$ according to $\gamma_j=\gamma^*M_j$, and thus the maximization of the spectral efficiency is accomplished. Therefore, the band factor gain and the sum ASE in OSA CRNs which implement a DR adaptive modulation depend on this cut-off level in SNR, $\gamma^*$, and they are obtained from equations (\ref{eq8}) and (\ref{eq9}) accordingly. \section{Numerical Results} In Fig.2 we present the results obtained when continuous rate (CR) adaptive modulation in an OSA CRN over a fading channel is considered. We assume a Rayleigh distribution for the fading channel with probability density function equal to $(1/\bar{\gamma})\exp(-\gamma/\bar{\gamma})$, where $\gamma$ and $\bar{\gamma}$ are the instantaneous and the average received SNR respectively. We depict the results for bit error rates (BER) equal to $10^{-3}$ and $10^{-6}$ respectively. Solid lines show the results when only the PU is considered, which is the case of a conventional network, i.e. one which does not serve any SUs. Dashed lines show the results obtained for the OSA CRN with a number of users equal to $U = 5$. An important performance gain of the OSA CRN is observed in comparison with the performance of the conventional network. In detail, for both BER cases the additional ASE is close to $0.5$ bits/sec/Hz at low average SNR regions, e.g. $\bar{\gamma}= 0$ dB. Besides, the additional ASE is close to $0.3$ bits/sec/Hz at moderate average SNR regions, e.g. $\bar{\gamma}= 10$ dB, and finally the additional ASE is close to $0.1$ bits/sec/Hz at high average SNR regions, e.g. $\bar{\gamma}= 20$ dB. This behavior in particular is explained by the fact that the probability that a channel is not allocated to the PU is larger for the low average SNRs. In other words, for the low average SNR regions, the cut-off level in SNR $\gamma_k$ is getting higher and in consequence the band factor gain in equation (\ref{eq8}) is getting higher too. The opposite applies for the high average SNRs, where the PU is more likely to transmit on the channel, or a unit bandwidth in general, since the cut-off level in SNR $\gamma_k$ is getting lower and thus the constraint for allocating a channel is relaxed. Fig.3a and Fig.3b show the results obtained when the discrete rate (DR) adaptive modulation in OSA CRN over a Rayleigh fading channel is considered. We assume the aforementioned Rayleigh distribution with $\bar{\gamma}$ being the average received SNR. We depict the results for a bit error rate (BER) equal to $10^{-3}$ in Fig.3a and the results for a BER equal to $10^{-6}$ in Fig.3b.
Solid lines show the results for the conventional network, i.e. $U = 1$, and dashed lines show the results obtained for the OSA CRN with a number of users equal to $U = 5$. We consider different numbers of fading regions, namely 5 fading regions with a set of MQAM constellations $\{0,2,4,16,64\}$, 4 fading regions with a set of MQAM constellations $\{0,2,4,16\}$ and 3 fading regions with a set of MQAM constellations $\{0,2,4\}$. Again, the performance gain is remarkable for low average SNR regions for the same reasons as with the CR OSA CRN. It should be noticed that the performance gain is identical at low average SNR regions for all numbers of fading regions, i.e. 5, 4 and 3, and is close to $0.3$ bits/sec/Hz. On the other hand, the performance gain is negligible at high average SNR regions. Regarding the different BER values, the tighter the BER requirement, i.e. $BER = 10^{-6}$, the larger the performance gain becomes, something that we discuss in detail in Fig.4, which illustrates the total band factor gain for the CR and DR implementations of adaptive modulation in OSA CRNs. \begin{figure} \includegraphics[width=95mm,height=70mm]{figure2.eps}\\ \caption{Average spectral efficiency of CR adaptive modulation in OSA CRN over Rayleigh fading channel} \label{fig:2} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figure3.eps}\\ \caption{Average spectral efficiency of DR ($BER = 10^{-3}$) adaptive modulation in OSA CRN over Rayleigh fading channel} \label{fig:3a} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figure4.eps}\\ \caption{Average spectral efficiency of DR ($BER = 10^{-6}$) adaptive modulation in OSA CRN over Rayleigh fading channel} \label{fig:3b} \end{figure} Fig.4 shows the total band factor gain of adaptive modulation in OSA CRN as it is obtained from equation (\ref{eq9}). We depict both the cases of CR and DR adaptive modulation over a Rayleigh fading channel. Solid lines show the results obtained for a BER equal to $10^{-3}$ and dashed lines show the results obtained for a BER equal to $10^{-6}$. The OSA CRN is considered with a number of users equal to $U = 5$. Notably, the largest total band factor gain is achieved in the case of CR adaptive modulation with a BER equal to $10^{-6}$ and the smallest one is achieved in the case of DR adaptive modulation with 3 regions and a BER equal to $10^{-3}$. Therefore, the tighter the BER criterion becomes, the larger the advantage of applying the OSA strategy in CRNs. We should further notice that the gain of the CR and DR adaptive modulation schemes with a high number of regions, i.e. 5, was expected due to the transmission with high bit rates in terms of bits per symbol and, in consequence, with a high average spectral efficiency. \begin{figure} \includegraphics[width=\columnwidth]{figure5.eps}\\ \caption{Total band factor gain of adaptive modulation in OSA CRN} \label{fig:4} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{figure6.eps}\\ \caption{Optimal power allocation for different adaptive modulation schemes versus the received SNR for a $5$ dB average transmit power} \label{fig:5} \end{figure} The optimal power allocation versus the received SNR for an average transmit power of $5$ dB is shown in Fig.5. This value is selected from Fig.4 since we observe that an important gain in spectral efficiency is achieved in this average SNR region. By decreasing the number of modulations in the adaptive scheme as well as the upper bound on the error probability, the consumed power decreases.
On the other hand, the most power-demanding case is when no adaptive modulation is used. \section{Conclusion} In this work, the incorporation of adaptive modulation in opportunistic spectrum access cognitive radio networks over fading channels has been studied. In particular, we have shown that the usage of adaptive modulation in OSA systems leads to increased spectral efficiency and decreased power consumption. This improvement becomes more pronounced when a lower number of fading regions and tighter error probabilities are applied. Future work includes the assessment of spectral efficiency when adaptive modulation is considered for spectrum sharing cognitive radio networks with or without sensing information availability.
\section{Introduction} Since the expanding universe is homogeneous and isotropic on scales larger than about $100$-Mpc, it can be modeled by the so-called FRW metric \cite{roos} \begin{eqnarray}\label{frw} ds^{2}=dt^{2}-a^{2}\left( t\right) \left[ \frac{dr^{2}}{1-kr^{2}} +r^{2}d\Omega ^{2}\right], \end{eqnarray} where $k=0,\pm1$ is the curvature constant corresponding to a flat, closed and open universe, respectively. Additionally, $a(t)$ is the scale factor, written as $a(t)=a_0t^{\frac{2}{3(1+\omega)}}$ for $\omega>-1$ and $a(t)=a_0\exp(Ht)$ when $\omega=-1$, while $\omega=\frac{p}{\rho}$ is the state parameter of the dominant perfect fluid. In addition, $H\equiv\frac{\dot{a}}{a}$ is the Hubble parameter \cite{roos}. Moreover, for phantom regimes ($\omega<-1$) the scale factor is written as $a(t)=a_0(t_{br}-t)^{\frac{2}{3(1+\omega)}}$, where $t_{br}$ is the big rip singularity time, at which everything will be decomposed into its fundamental constituents \cite{phan}. Additionally, it is shown that one can use the conformal form of this metric to describe the inhomogeneity of the cosmos on scales smaller than $100$-Mpc \cite{rma}. In the standard cosmology, a primary inflationary expansion era is used to get a suitable theoretical description for the horizon problem which emerges in the study of the Cosmic Microwave Background (CMB) \cite{roos}. Observational data signal a universe with $\dot{a}\geq0$ and $\ddot{a}\geq0$ \cite{ac1,ac2,ac3,ac4}, which means that we need to modify the gravitational theory \cite{mod,meeq,de} or consider an unknown source, named dark energy (DE), for describing this phase of expansion \cite{de,de1,mod1}. A simple model used to explain DE considers an unknown fluid with constant density, constant pressure and $\omega_D=-1$, called the cosmological constant (CC), and leads to an exponential expansion ($a(t)=a_0\exp(Ht)$) \cite{roos}. We should note that the current expanding phase of the universe is in full agreement with both the thermodynamic equilibrium conditions and the rise of the complexity content of the universe, meaning that the universe may maintain its current expanding phase \cite{mr}. More studies on the thermodynamics of DE and the final state of the universe can be found in refs.~\cite{noj,noj1,pavon2}. Bearing the primary inflationary era together with the CC model of DE in mind, two difficulties including the \textit{fine tuning} and \textit{coincidence} problems are inevitable \cite{roos}. It is also useful to mention here that, since the CC model has a satisfactory match to the observational data, it forms a basis for the standard cosmology \cite{roos}. Observational data support a DE candidate with varying energy density \cite{d1,d2,d3,d4,d5,d6}. Indeed, there are various attempts to model the source of the primary and current accelerating eras by introducing a varying model for the DE candidate \cite{meeq,de,de1,mod1,gde,ven,GGDE,GGDE1,sola,sola1,sola2,sola3,sola4,saha,tere1,tere,tere2,lima}. Recently, Lima and co-workers proposed that a universe filled by a dynamical vacuum energy density can avoid the big bang as well as the big crunch singularities, the fine tuning and the coincidence problems \cite{lima}. Indeed, since the vacuum density decreases as a function of the Hubble parameter, Lima's model has enough potential for solving the fine tuning and coincidence problems \cite{lima,lima2}.
Additionally, because in their model, the cosmos began to expand from a primary unstable de-Sitter spacetime, and finally reaches to another eternal de-Sitter spacetime, the horizon problem as well as the big crunch problem are naturally solved \cite{lima}. Moreover, It is shown that the ultimate de-Sitter spacetime is in accordance with the thermodynamic equilibrium conditions, and therefore the cosmos may serve its final stage \cite{lima1}. In this model, the state parameter of the vacuum energy satisfies the $\omega_D=-1$ condition, and thus the vacuum energy decays into the other fields confined to the apparent horizon of the FRW universe \cite{lima,lima3}. It is worthwhile to mention here that the decay of vacuum into the other fields is due to a mutual interaction between the cosmos sectors leading to leave thermal fluctuations into the cosmos in this model \cite{lima3}. It is a good feature, because observations allows a mutual interaction between the cosmos sectors \cite{int1,int11,ob1,ob2,ob3}. Thermodynamics of such possible mutual interactions are also studied in various theories of gravity by considering various models for DE \cite{int11,te2,te3,te4,int3}. In fact, the relation between such possible interactions, coincidence and fine tuning problems and thermal fluctuations attracts more investigators to itself \cite{lima3,int1,int2,int3}. Similarity between the Black Holes laws and those of thermodynamics motivates us to define a temperature as \begin{eqnarray}\label{temp} T=\frac{\kappa}{2\pi}, \end{eqnarray} where $\kappa$ is the surface gravity of Black Hole \cite{pois}. On one hand, for some spacetimes, such as the de-Sitter spacetime, surface gravity and thus the corresponding temperature are negative \cite{pois}, and therefore we need to define $T=\frac{|\kappa|}{2\pi}$ in order to get the positive values for temperature \cite{CaiKim,GSL1}. Whiles, on the other hand, one can get the Einstein equations on the event horizon of Black Holes (as a causal boundary) by applying the first law of thermodynamics on the event horizon and considering Eq.~(\ref{temp}) as a suitable definition for temperature \cite{J1,T11,J11}. Indeed, it seems that this similarity is much more than a mere resemblance \cite{J1,J11,M1,M11,T1,T11,T12,T13,T14,T15,r1,r2,r3}. The apparent horizon of the FRW universe, as the marginally trapped surface, is evaluated by \begin{eqnarray}\label{ah2} \partial_{\alpha}\zeta\partial^{\alpha}\zeta=0\rightarrow r_H, \end{eqnarray} where $\zeta=a(t)r$, and can be considered as the causal boundary \cite{Hay2,Hay22,Bak}. Therefore, One gets \cite{sheyw1,sheyw2} \begin{eqnarray}\label{ah} \tilde{r}_A=\frac{1}{\sqrt{H^2+\frac{k}{a(t)^2}}}. \end{eqnarray} Moreover, the surface gravity associated with the apparent horizon of the FRW universe can be evaluated by using \begin{equation}\label{SG} \kappa=\frac{1}{2\sqrt{-h}}\partial_{a}(\sqrt{-h}h^{ab}\partial_{b}\zeta). \end{equation} where $h_{ab}=\textmd{diag}(-1,a(t)^2)$ \cite{sheyw1,sheyw2}. Since the WMAP data indicates a flat universe, from now we set $k=0$ \cite{roos,phan}. Thus, simple calculations lead to \begin{eqnarray} \kappa=-H(1+\frac{\dot{H}}{2H^2}), \end{eqnarray} and therefore \begin{eqnarray}\label{t} T=\frac{\kappa}{2\pi}=-\frac{H}{2\pi}(1+\frac{\dot{H}}{2H^2}). \end{eqnarray} where we have used Eq.~(\ref{temp}) to obtain this equation \cite{GSL1,Bak,Cai3,hel,hel1}. 
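For completeness, the ``simple calculations'' behind the last two relations can be sketched as follows (this intermediate step is our own addition and merely reproduces the quoted result). With $\zeta=a(t)r$, $h_{ab}={\rm diag}(-1,a^{2})$ and hence $\sqrt{-h}=a$, the $r$-derivative term in Eq.~(\ref{SG}) vanishes, $\partial_r(\sqrt{-h}\,h^{rr}\partial_r\zeta)=\partial_r(1)=0$, and one is left with
\begin{eqnarray*}
\kappa &=& \frac{1}{2a}\,\partial_t\left(-a\dot{a}r\right) = -\frac{r}{2a}\left(\dot{a}^{2}+a\ddot{a}\right) \\
&=& -\frac{1}{2H}\left(H^{2}+\frac{\ddot{a}}{a}\right) = -H\left(1+\frac{\dot{H}}{2H^{2}}\right),
\end{eqnarray*}
where in the second line $\zeta=ar$ has been evaluated at the apparent horizon, $ar=\tilde{r}_A=1/H$, and $\ddot{a}/a=\dot{H}+H^{2}$ has been used; dividing by $2\pi$ then reproduces the temperature~(\ref{t}).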
It is useful to note here that for the FRW universe supported by a fluid with $\rho=-p=constant$ ($\omega=-1$), $\dot{H}=0$ and therefore this equation covers the result of de-Sitter spacetime ($T=-\frac{H}{2\pi}$) \cite{pois,GSL1}. In cosmological setups, some authors use various definition of temperature and get the corresponding Einstein equations (Friedmann equations) on the apparent horizon \cite{Cai2,Cai3,CaiKim,GSL1,Bak,hel,hel1}. In order to avoid the negative temperature, authors in \cite{CaiKim}, have defined $T=\frac{H}{2\pi}\simeq\frac{|\kappa|}{2\pi}$ and used the first law of thermodynamics (in the $TdS_A=-dQ$ form) to get the Friedmann equations. In their approach $S_A=\pi\tilde{r}_A^2$ (the Bekenstein limit) and $Q$ are the horizon entropy and energy crossed the apparent horizon, respectively \cite{GSL1}. Indeed, authors argued that the extra minus sign in the first law of thermodynamics is the result of universe expansion leading to decrease the energy of confined fluid together with increase the size of the universe and thus $S_A$. Therefore, by using original definition of temperature~(\ref{temp}) and thus~(\ref{t}), called the Hayward-Kodama temperature \cite{GSL1,Bak,hel,hel1}, together with the $TdS_A=dQ$ form of the first law of thermodynamics we can cover the Friedmann equation. Moreover, it seems that $W=\frac{1}{2}h_{ab}T^{ab}$, where $T^{ab}$ is the energy momentum tensor of fluid which spreads over the cosmos, plays the role of pressure in the dynamics spacetimes and thus the FRW universe \cite{Cai2,Cai3,sh1,GSL1,Bak,hel,hel1}. Following this argument, authors in \cite{Cai2,Cai3,Bak,hel,hel1,GSL1} have used~(\ref{temp}) and the work density definition ($W$) to get the Friedmann equations by applying the first law of thermodynamics ($TdS_A=dQ=dE-WdV$) on the apparent horizon of the FRW universe in various theories of gravity, whiles $E$ is the energy confined to the apparent horizon. It is also shown that Loop Quantum Gravity corrects the horizon entropy which leads to modify the Friedmann equations on the apparent horizon if one considers~(\ref{temp}) together with $TdS_A=dE-WdV$ \cite{sh1,r1}. The entropy of a self-gravitating system depends on the gravitational theory used to describe the gravity field. Accordingly, it seems that the self-gravitating systems satisfy the Bekenstein limit of entropy in the Einstein general relativity framework. But, since the origin of DE is unknown, it may have either a geometrical or physical origin, one can expect that the DE candidate may affect the horizon entropy. By the same token, it is shown that the ghost dark energy and its generalization, as the dynamics candidates for DE, may also add an additional term to the entropy of various horizons leading to modify the Bekenstein limit \cite{cana,cana1}. Therefore, it seems that the dynamics model of DE may lead to modify the horizon entropy and thus, the Bekenstein limit. Recently, it is also shown that a mutual interaction between the cosmos sectors may change the horizon entropy \cite{mitra}. The second law of thermodynamics states that the horizon entropy may meet the $\frac{dS_A}{dt}\geq0$ condition \cite{haw}. Nowadays, thanks to the Bekenstein works \cite{bek,bek2}, it is believed that the rate of the total entropy of a gravitational system should be positive meaning that $\frac{dS_A}{dt}+\frac{dS_{in}}{dt}\geq0$, while $S_{in}$ is the entropy of confined fluid. The latter is called the general second law of thermodynamics \cite{bek,bek2,GSL1}. 
Comprehensive reviews on the various temperature definitions in cosmological setups, their motivations, together with the validity of the first, second and generalized second laws of thermodynamics can be found in refs.~\cite{hel,hel1,GSL1}. Now, one can ask how a DE candidate and its probable interaction with the other parts of the cosmos affect the horizon entropy and the second and generalized second laws of thermodynamics. In this paper, we point to the unified first law of thermodynamics and assume that it is available on the apparent horizon of the flat FRW universe, while $T$ (the horizon temperature) corresponds to the Hayward-Kodama definition of temperature~(\ref{t}) on the apparent horizon of the FRW universe~\cite{Bak,hel,hel1}, and show that a DE candidate may lead to a new bound for the horizon entropy, while the cosmos sectors do not interact with each other. Additionally, we show that any mutual interaction between the cosmos sectors may also modify the horizon entropy. The relationships with similar works are also studied. Moreover, the results of considering the Cai-Kim temperature are also derived. Finally, the validity of the second law of thermodynamics and its generalization is also addressed. Since the physics behind Lima's model~\cite{lima} is completely different from the ordinary models introduced for describing DE, we also point to the results of considering this model. The paper is organized as follows. In the next section, after a brief review of the previous related works, we apply the unified first law of thermodynamics on the apparent horizon of the flat FRW universe, and show how a dynamical candidate for DE may change the horizon entropy, while the cosmos sectors do not interact with each other. Thereafter, we generalize our study to the interacting case and get a modification for the horizon entropy due to the mutual interaction between the cosmos sectors. In section ($\textmd{III}$), we study the validity of the second law of thermodynamics and its generalization. Section ($\textmd{IV}$) is devoted to a summary and concluding remarks. Throughout this paper, we set $G=c=\hbar=1$ for simplicity. \section{Horizon Entropy and the unified first law of thermodynamics} The unified first law of thermodynamics, which is available in some theories of gravity, is written as \begin{eqnarray}\label{ufl} dE=A\Psi+WdV, \end{eqnarray} where $W=\frac{1}{2}h_{ab}T^{ab}$ and $E=\frac{\zeta}{2}(1-h^{ab}\partial_a \zeta \partial_b \zeta)|_{\zeta=\tilde{r}_A}$ are the work density and the Misner-Sharp energy confined to the apparent horizon, respectively \cite{CaiKim,Hay2,Bak,r1,r2,r3,cana,mitra,cana1,Hay22}. In addition, $A$ and $\Psi$ are the area of the horizon and the energy supply vector, respectively, and \begin{eqnarray}\label{ufl1} A\Psi=A\psi_a dx^a, \end{eqnarray} while \begin{eqnarray}\label{ufl2} \psi_a = T^b_a\partial_b \zeta + W\partial_a \zeta, \end{eqnarray} is the projection of the total four-dimensional energy-momentum tensor $T_{\mu \nu}$ in the normal direction of the two-dimensional sphere. Considering a perfect fluid source ($T^{\mu}_{\nu}={\rm diag}(-\rho_T,p_T,p_T,p_T)$) together with the Friedmann equations, by simple calculations we get $E=\rho_T V$, \begin{eqnarray}\label{uf3} dE-WdV=Vd\rho_T+\frac{p_T+\rho_T}{2}dV, \end{eqnarray} and \begin{eqnarray}\label{uf4} A\Psi=-AH\zeta(\frac{\rho_T+p_T}{2})dt+Aa(\frac{\rho_T+p_T}{2})dr, \end{eqnarray} where $a$ is the scale factor.
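As a short consistency check of the statement $E=\rho_T V$ (this check is ours; it only uses the flat Friedmann equation $H^{2}=\frac{8\pi}{3}\rho_T$), note that $h^{ab}\partial_a\zeta\partial_b\zeta=-\dot{a}^{2}r^{2}+1=1-H^{2}\zeta^{2}$, so that the Misner-Sharp energy evaluated at $\zeta=\tilde{r}_A=1/H$ reads
\begin{eqnarray*}
E=\frac{\zeta}{2}\left(1-h^{ab}\partial_a \zeta \partial_b \zeta\right)\Big|_{\zeta=\tilde{r}_A}
=\frac{H^{2}\tilde{r}_A^{3}}{2}=\frac{1}{2H}
=\frac{3H^{2}}{8\pi}\,\frac{4\pi}{3H^{3}}=\rho_T V,
\end{eqnarray*}
with $V=\frac{4\pi}{3}\tilde{r}_A^{3}$ the volume enclosed by the apparent horizon.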
Using the energy-momentum conservation law ($\nabla^{\mu}T_{\mu \nu}=0$) \begin{eqnarray}\label{energymomentum0} \dot{\rho}_T+3H(\rho_T+p_T)=0, \end{eqnarray} and $adr=d\zeta-rda$ in rewriting Eq.~(\ref{uf4}) to obtain \begin{eqnarray}\label{uf5} A\Psi=Vd\rho_T+\frac{p_T+\rho_T}{2}dV, \end{eqnarray} where $dV=Ad\zeta$ and $A=4\pi\zeta^2$. By comparing this equation with~(\ref{uf3}), we get \begin{eqnarray}\label{uf6} A\Psi=dE-WdV, \end{eqnarray} which is the unified first law of thermodynamics. This result is independent of the number and nature of fluids which support the background spacetime. In addition, one may decompose $T_{\mu \nu}$ into \begin{eqnarray}\label{ef7} T_{\mu \nu}=T_{\mu \nu}^{DE}+T_{\mu \nu}^{m}, \end{eqnarray} where $T_{\mu \nu}^{DE}$ and $T_{\mu \nu}^{m}$ are the energy momentum tensors of DE and the material parts of cosmos (radiation, matter and etc.), respectively. In this situation, it is apparent that $\Psi=\Psi^{DE}+\Psi^{m}$, $W=W^{DE}+W^{m}$ and $E=E^{DE}+E^{m}$, where $E^{DE}=\rho^{DE}V$ and $E^{m}=\rho^{m}V$. Therefore, by following the above argument, whenever $\nabla^{\mu}T_{\mu \nu}^{m}=\nabla^{\mu}T_{\mu \nu}^{DE}=0$, we get \begin{eqnarray}\label{uf81} A\Psi^m= A\Psi-A\Psi^{DE}=dE^{m}-W^{m}dV. \end{eqnarray} $\delta Q$ (the heat flow crossing the horizon) is determined by the pure matter energy-momentum tensor ($T^m_{\mu \nu}$) as \cite{r1,r2,r3,CaiKim,cana,cana1,mitra} \begin{eqnarray}\label{ufl100} \delta Q\equiv A\Psi^m. \end{eqnarray} For some gravitational theories, one can use the Clausius relation together with Eq.~(\ref{t}) to get the horizon entropy ($S_A$) by using \cite{r1,r2,r3,CaiKim,cana,cana1,mitra} \begin{eqnarray}\label{ufl10} TdS_A=\delta Q\equiv A\Psi^m, \end{eqnarray} which leads to \begin{eqnarray}\label{uf8} TdS_A=A\Psi^m= A\Psi-A\Psi^{DE}=dE^{m}-W^{m}dV, \end{eqnarray} where we have used~(\ref{uf81}) to get the last equation. It is useful to note that for some theories such as the $f(R)$ gravity, Eq.~(\ref{ufl10}) and thus~(\ref{uf8}) is not always available \cite{r2}. Recently, some authors considered the Hayward-Kodama definition of temperature~(\ref{t}), a universe filled by either a ghost dark energy or its generalized form together with a pressureless matter and use the $TdS_A=A\Psi-A\Psi^{DE}$ relation to get an expression for the entropy ($S_A$) \cite{cana}. Their results show that the entropy of the matter fields differs from the Bekenstein entropy due to the DE effects. They argued that their results are in agreement with the entropy of the apparent horizon in the DGP braneworld model which signals that this approach may be used to get the entropy of apparent horizon in other theories of gravity. This adaptation between these entropies signals that one may find a geometrical interpretation for the origin of the ghost dark energy model (as a DE candidate) by using the DGP braneworld model of gravity. Motivated by this work, Sheykhi extended their work to the apparent horizon of the FRW universe and used the $TdS_A=dE^{m}-W^{m}dV$ relation to get the same result as that of ref.~\cite{cana} for the entropy. Finally, he concludes that the obtained relation for the entropy ($S_A$) may be interpreted as the corrected relation for the apparent horizon entropy \cite{cana1}. It is useful to stress here that Eq.~(\ref{uf81}) clarifies the reason of getting the same result for the horizon entropy by authors in Refs.~\cite{cana,cana1}. 
Moreover, their results are available only when $\nabla^{\mu}T_{\mu \nu}^{m}=\nabla^{\mu}T_{\mu \nu}^{DE}=0$ which means that the cosmos sectors do not interact with each other. Recently, by considering the FRW universe in which the cosmos sectors interact with each other, Mitra et al. use the $TdS_A=A\Psi-A\Psi^{DE}$ relation to get the trapping horizon entropy in the Einstein relativity framework. They argued that the obtained relation for the entropy differs from the Bekenstein entropy due to the mutual interaction between the cosmos sectors \cite{mitra}. In continue, Mitra et al. extended their hypothesis to other different gravity theories \cite{mitra}. Moreover, it is also shown that a gravitationally induced particle production process as the DE candidate may change the horizon entropy \cite{lima3}. Bearing the Lovelock theory in mind, some authors used the $TdS_A=A\Psi-A\Psi^{DE}$ relation to get the entropy of the apparent horizon in cosmological setup \cite{r3}. Another study including the loop quantum cosmology can be found in ref \cite{r1}. Here, by considering a varying DE candidate, we are going to find a general relation for the entropy of the apparent horizon in both of the interacting and non-interacting cosmoses and investigate the second and generalized second laws of thermodynamics in the Einstein relativity frame work where Eq.~(\ref{ufl10}) and thus~(\ref{uf8}) are valid. For this propose, consider the flat FRW universe with Friedmann equation \begin{eqnarray}\label{fried1} H^2=\frac{8\pi}{3}(\rho+\rho_D), \end{eqnarray} where $\rho_D$ is the density of dark energy component. In addition, $\rho$ is the density of rest fluids in the cosmos which may include baryonic matters, dark matter and etc, leading to \begin{eqnarray}\label{oden} \rho=\rho_{bm}+\rho_{DM}+... \end{eqnarray} Therefore, $\rho$ is nothing but $\rho^m$ which is previously introduced. For the sake of simplicity, we omit the $m$ label throughout the paper. From Eq.~(\ref{fried1}) and the Bianchi identity we get \begin{eqnarray}\label{friedde} 2HdH-\frac{8\pi}{3}d\rho_D=\frac{8\pi}{3}d\rho, \end{eqnarray} and \begin{eqnarray}\label{energymomentum} \dot{\rho}+3H(\rho+p)+\dot{\rho}_{D}+3H(\rho_{D}+p_{D})=0, \end{eqnarray} which is nothing but the energy-momentum conservation law, respectively. In this equation, $p_D$ and $p$ denote the vacuum pressure and the pressure corresponding to the density $\rho$, respectively. Dot is also denoted as the time derivative. Consider a dark energy candidate with density profile \begin{eqnarray}\label{dden} \rho_D=\frac{3\alpha+3\beta H^2+3\gamma H^{2n}}{8\pi}, \end{eqnarray} which converges to CC whiles $\beta=\gamma=0$. Whiles $n=\frac{1}{2}$, it covers the ghost dark energy model and its generalization for $\alpha=\beta=0$ and $\alpha=0$, respectively \cite{gde,ven,GGDE}. The $\gamma=0$ case has been extensively studied in the literatures \cite{GGDE1,sola,sola1,sola2,sola3,sola4}. The results of considering either an arbitrary value for $n$ or optional function of $H$ for $\rho_D=f(H)$ can be found in \cite{saha}. Moreover, the cosmological applications of considering model with $\alpha=0$, $n=\frac{3}{2}$ and $\omega_D=-1$ has also been studied \cite{tere1,tere}. More similar density profiles for the DE candidate with $\omega_D=-1$ can also be found in \cite{tere2}. Another attractive case proposed by Lima et al. 
is obtainable by imposing the $n>1$ condition together with $\omega_D=-1$ to the density profile of the DE candidate, whenever $n$ is also an integer number \cite{lima}. It is useful to note that $E_D=\rho_D V$ and $E=\rho V$ are the energy of dark energy component and the energy corresponding to the density $\rho$, respectively. Therefore, $E$ is nothing but $E^m$ mentioned previously and we omit the $m$ label for the sake for simplicity. \subsection{Non-Interacting Models} At the first step we consider a universe in which the cosmos sectors do not interact with each other. Therefore, the energy-momentum conservation law implies~(\ref{energymomentum}) \begin{eqnarray}\label{ocon1} \dot{\rho}+3H(\rho+p)=0, \end{eqnarray} and \begin{eqnarray}\label{dcon1} \dot{\rho}_{D}+3H(\rho_{D}+p_{D})=0. \end{eqnarray} Substituting~(\ref{dden}) into~(\ref{fried1}) to get \begin{eqnarray}\label{oden1} \frac{8\pi}{3}\rho=H^2-\alpha-\beta H^2-\gamma H^{2n}. \end{eqnarray} Bearing Eq.~(\ref{ocon1}) in mind, by using Eq.~(\ref{friedde}) we reach at \begin{eqnarray}\label{dif1} (2H(1-\beta)-2n\gamma H^{2n-1})dH=-8\pi H(\rho+p)dt. \end{eqnarray} Using the Hayward-Kodama temperature relation ($-T=\frac{H}{2\pi}(1+\frac{\dot{H}}{2H^2})$)~\cite{GSL1} to obtain \begin{eqnarray}\label{dif2} -T(2H(1-\beta)-2n\gamma H^{2n-1})dH=-4H^2(\rho+p)dt-2(\rho+p)dH. \end{eqnarray} From Eq.~(\ref{ocon1}), since $E=\rho V$ and $dV=-\frac{4\pi}{H^4}dH$, we get $dE=-4\pi \rho H^{-4}dH-4\pi H^{-2}(\rho+p)dt$ leading to \begin{eqnarray}\label{dif3} (\rho+p)dt=-\frac{H^2dE}{4\pi}-\frac{\rho dH}{H^2}. \end{eqnarray} If we combine this equation with~(\ref{dif2}) we obtain \begin{eqnarray}\label{dif4} T(-2H(1-\beta)+2n\gamma H^{2n-1})dH=\frac{H^4}{\pi}dE+2(\rho-p)dH. \end{eqnarray} It is easy to show that this equation can be rewritten as \begin{eqnarray}\label{dif5} T[(-\frac{2\pi}{H^3}(1-\beta)+2n\gamma\pi H^{2n-5})dH]=dE-WdV. \end{eqnarray} In this equation $W=\frac{\rho-p}{2}$ is the work density required for applying a hypothetical displacement $d\tilde{r}_A$ to the apparent horizon \cite{cana1,mitra}. By comparing this result with Eq.~(\ref{uf8}), one gets \begin{eqnarray}\label{hentropy1} dS_A=(-\frac{2\pi}{H^3}(1-\beta)+2n\gamma\pi H^{2n-5})dH \end{eqnarray} leading to \begin{eqnarray}\label{hentropy} S_A=\frac{A}{4}(1-\beta)+\frac{n\gamma\pi^{n-1}}{n-2}A^{2-n}, \end{eqnarray} where $A=4\pi\tilde{r}_A^2=\frac{4\pi}{H^2}$ is the area of horizon. Therefore, $\frac{n\gamma\pi^{n-1}}{n-2}A^{2-n}$ is a new term besides the area term. In addition, since the entropy is not an absolute quantity, we have set the integral constant to zero. It is also apparent that, for $n=2$, entropy is not well-defined. In order to eliminate this weakness, let us restart from Eq.~(\ref{hentropy1}), by substituting $n=2$ and taking integration from that, we get \begin{eqnarray}\label{hentropy2} S_A=\frac{A}{4}(1-\beta)-\frac{\gamma \pi \ln\pi}{2}+\frac{\gamma \pi \ln A}{2}+S_0. \end{eqnarray} Finally, since entropy is not an absolute quantity, one can set $S_0=\frac{\gamma \pi \ln\pi}{2}$, and gets \begin{eqnarray}\label{hentropy3} S_A=\frac{A}{4}(1-\beta)+\frac{\gamma \pi \ln A}{2}. \end{eqnarray} Therefore, models with $n=2$ induce a logarithmic correction to the horizon entropy. 
Logarithmic correction terms have been previously proposed by some authors which either consider the thermal equilibrium and quantum fluctuations in loop quantum gravity framework \cite{l1,l2,l3,l4,l5,l6,l7,l8,l9,l,l0,l10} or the thermal fluctuations of system about its thermodynamic equilibrium state \cite{l11,l12}. Indeed, logarithmic correction due to the thermal fluctuations are valid in all physical systems \cite{lan}. Let us study some choices with $n=\frac{1}{2}$. Bearing Eq.~(\ref{hentropy}) in mind, For a constant vacuum energy density ($\rho_D=\alpha$), we face with the $\Lambda CDM$ theory and we get $S_A=\frac{A}{4}$ which is in agreement with previous studies \cite{Cai2,Cai3,CaiKim}. Moreover, for $\alpha=0$, $\beta=0$ and $n=\frac{1}{2}$, we have \begin{eqnarray}\label{gde} \rho_D=\frac{3\gamma}{8\pi}H, \end{eqnarray} which is the profile density of ghost dark energy model \cite{ven,gde}. In this limit, from Eq.~(\ref{hentropy}), we get \begin{eqnarray}\label{hentropygde} S_A=\frac{A}{4}-\frac{\gamma}{3\sqrt{\pi}}A^{\frac{3}{2}}, \end{eqnarray} which is in agreement with the ghost dark energy modification to the entropy evaluated previously \cite{cana,cana1}. Here, we have used the Hayward-Kodama definition of temperature~(\ref{t}) together with the apparent horizon of the FRW universe to get this relation whiles, author in \cite{cana1}, has considered $T=\frac{|\kappa|}{2\pi}$ to get~(\ref{hentropygde}) on the apparent horizon. Moreover, authors in~\cite{cana} used trapping horizon and the temperature definition $T=\frac{|\kappa|}{2\pi}$ to get this relation. Additionally, equation~(\ref{dden}), for $\alpha=0$ and $n=\frac{1}{2}$, reduces to \begin{eqnarray}\label{ggde} \rho_D=\frac{3\beta}{8\pi}H^2+\frac{3\gamma}{8\pi}H, \end{eqnarray} which is the profile density of generalized ghost dark energy model \cite{GGDE,ggde2}. By considering this profile density we get \begin{eqnarray}\label{hentropyggde} S_A=\frac{A}{4}(1-\beta)-\frac{\gamma}{3\sqrt{\pi}}A^{\frac{3}{2}}, \end{eqnarray} as the modification of the generalized ghost dark energy model to the horizon entropy \cite{cana}. Although this result is previously obtained by authors in ref.~\cite{cana}, but our derivation is completely different from that of they. Here, we worked on the apparent horizon whiles they have considered the trapping horizon and found the similar results. In a more general case, for arbitrary functional form of $\rho_D$, by following the above recipe we get \begin{eqnarray}\label{general} dS_A=(-\frac{2\pi}{H^3}+\frac{8\pi^2}{3H^4}\rho_D^{\prime})dH, \end{eqnarray} where prime denotes derivative with respect to $H$. Taking integral to obtain \begin{eqnarray}\label{general1} S_A=\frac{A}{4}+\frac{8\pi^2}{3}\int\frac{1}{H^4}d\rho_D +C, \end{eqnarray} where $C$ is the integral constant. Therefore, a varying DE candidate imposes a correction term to the horizon entropy in accordance with the first law of thermodynamics and thus, the second term of the RHS of Eq.~(\ref{general1}). It is also useful to note that the result of considering CC ($S_A=\frac{A}{4}$) is obtainable by substituting $d\rho_D=0$ in this equation \cite{Cai2,Cai3}. Now, let us use the Cai-Kim temperature ($T=\frac{H}{2\pi}$) \cite{CaiKim} to get the entropy of apparent horizon. In order to achieve this goal, we follow the approach of authors in ref.~\cite{CaiKim}, where $TdS_A=-dQ$ and $dV=0$. Using this argument and bearing Eqs.~(\ref{ufl10}) and~(\ref{uf8}) in mind to reach \begin{eqnarray} dS_A=-\frac{V}{T}d\rho. 
\end{eqnarray} Now, by substituting $d\rho$ from Eq.~(\ref{friedde}) into this equation, one gets \begin{eqnarray} dS_A=-\frac{2\pi}{H^3}dH+\frac{8\pi^2}{3H^4}d\rho_D, \end{eqnarray} which leads to \begin{eqnarray} S_A=\frac{A}{4}+\frac{8\pi^2}{3}\int \frac{d\rho_D}{H^4}+C, \end{eqnarray} where $C$ is the integration constant. Therefore, once again, we get a relation for the horizon entropy which is in full agreement with the previous result~(\ref{general1}), obtained by considering the Hayward-Kodama temperature. \subsection{Interacting Models} When the cosmos sectors interact with each other, energy-momentum conservation law implies~(\ref{energymomentum}) \begin{eqnarray}\label{energymomentum12} \dot{\rho}+3H(\rho+p)=-\dot{\rho}_{D}-3H(\rho_{D}+p_{D}), \end{eqnarray} meaning that \begin{eqnarray}\label{energymomentum12f} d\rho=-3H(\rho+p)dt-d\rho_{D}-3H(\rho_{D}+p_{D})dt. \end{eqnarray} Therefore, by considering Eq.~(\ref{friedde}) and following the recipe which leads to Eq.~(\ref{general1}), we get \begin{eqnarray}\label{general2} dS_A=-\frac{2\pi}{H^3}dH-\frac{8\pi^2}{H^3}(\rho_D+p_D)dt, \end{eqnarray} which yields \begin{eqnarray}\label{general12} S_A=\frac{A}{4}-8\pi^2\int\frac{\rho_D+p_D}{H^3}dt + C, \end{eqnarray} where $C$ is again an integral constant. Therefore, the second term of RHS of this equation is nothing but the entropy correction due to the mutual interaction between the cosmos sectors. For interacting models in which the state parameter of the DE candidate meets the $\omega_D=-1$ condition, and therefore $\rho_D+p_D=0$, this additional term is zero meaning that the horizon entropy in these models satisfies the Bekenstein limit \cite{bek}. For instance, in the model proposed by Lima et al. \cite{lima}, in which vacuum decays into the other parts of cosmos and $\rho_D+p_D=0$, the horizon entropy of the flat FRW universe meets the Bekenstein limit \cite{bek}. It is in agreement with the initial and final de-Sitter spacetimes of this model, since the horizon of de-Sitter spacetime meets the $S_A=\frac{A}{4}$ condition \cite{Cai2,Cai3,CaiKim}. Now, let us derive Eq.~(\ref{general12}) by using the unified first law of thermodynamics. Bearing the definition of $\Psi$ in mind, simple calculations lead to \begin{eqnarray}\label{51} A\Psi^{DE}=-\frac{3V(\rho_D+p_D)H}{2}dt+\frac{A(\rho_D+p_D)}{2}[d\zeta-\zeta H dt], \end{eqnarray} where we have used the $rda=\zeta H dt$ relation to obtain this equation. It is a matter of calculation to show \begin{eqnarray} A\Psi^{DE}=-\frac{4\pi(\rho_D+p_D)}{H^2}[1+\frac{\dot{H}}{2H^2}]dt, \end{eqnarray} where we have used $dV=-\frac{3V}{H}\dot{H}dt$ to get this equation. Since we work in the Einstein general relativity framework, Eq.~(\ref{ufl10}) is valid, and thus, simple calculations lead to \begin{eqnarray}\label{53} TdS_A=A\Psi-A\Psi^{DE}=-\frac{H}{2\pi}[1+\frac{\dot{H}}{2H^2}](-\frac{2\pi}{H^3}dH-\frac{8\pi^2}{H^3}(\rho_D+p_D)dt), \end{eqnarray} where we have used the $A\Psi=T(-\frac{2\pi}{H^3}dH)$ relation, while $T$ is the Hayward-Kodama temperature, in obtaining this relation \cite{GSL1,Bak,hel,hel1,mitra}. It is apparent that this equation is nothing but~(\ref{general2}) which leads to Eq.~(\ref{general12}). Our result is in agreement with the recent work by Mitra et al. \cite{mitra}. Whereas, we have started from the Friedmann equations and considered the apparent horizon as the causal bound, Mitra et al. used the trapping horizon and relation $\delta Q^{m}\equiv A\Psi-A\Psi^{DE}$ to obtain~(\ref{general12}). 
It is apparent that Eq.~(\ref{53}) clarifies that why both of us get the same results, while, our start points differ from each other. Finally, let us consider the Cai-Kim temperature to estimate the horizon entropy. In this situation, for an infinitesimal time $dV=0$, and from Eqs.~(\ref{uf5}) and~(\ref{51}) we get \begin{eqnarray}\label{530} TdS_A=-A\Psi^m=-A\Psi+A\Psi^{DE}=-V(d\rho+d\rho_D)-\frac{4\pi(\rho_D+p_D)}{H^2}dt, \end{eqnarray} where we have followed the approach of authors in ref.~\cite{CaiKim} in order to define $TdS_A=-\delta Q$. Now, bearing Eq.~(\ref{fried1}) in mind, since $T=\frac{H}{2\pi}$, simple calculations lead to \begin{eqnarray}\label{533} S_A=\frac{A}{4}-8\pi^2\int\frac{\rho_D+p_D}{H^3}dt + C, \end{eqnarray} where $C$ is an integration constant. Therefore, by using the Cai-Kim temperature and taking into account an infinitesimal time, we get the same result for the horizon entropy as the result obtained in Eq.~(\ref{general12}). \section{The second and generalized second laws of thermodynamics} On one hand, since cosmos is enclosed by the apparent horizon, it forms a closed system and therefore, the entropy of its horizon should increase during the universe expansion meaning that \cite{haw} \begin{eqnarray}\label{SL} \frac{dS_A}{dt}\geq0. \end{eqnarray} It is called the second law of thermodynamics. Whereas, on the other hand, the total entropy of the closed systems should be increased. Since cosmos includes spacetime and its contents, which includes the fluids supporting the geometry of background spacetime, its total entropy consists of two parts including the horizon ($S_A$) and the confined fluids components ($S_{in}$) \cite{bek,bek2}. In fact, the generalized second law of thermodynamics states that the rate of the total entropy of cosmos including the horizon and confined fluids entropies cannot be negative or briefly \cite{bek,bek2} \begin{eqnarray}\label{GSL} \frac{dS_A}{dt}+\frac{dS_{in}}{dt}\geq0. \end{eqnarray} Indeed, the total entropy of gravitational systems should meet~(\ref{GSL}) \cite{bek,bek2}. But, here we point to the required conditions for satisfying both of the above criterions. \subsection{Non-Interacting case} For the non-interacting cases and while $\rho_D$ meets~(\ref{dden}), by taking a time derivative of the Friedmann equation~(\ref{fried1}) and using the energy-momentum conservation law~(\ref{ocon1}) to get the Raychaudhuri equation \begin{eqnarray}\label{rey} \dot{H}=-4\pi(\rho+p)\frac{1}{1-\beta-n\gamma H^{2n-2}}. \end{eqnarray} Since during the cosmos life $\dot{H}<0$ \cite{roos}, we get $1-\beta-n\gamma H^{2n-2}>0$ leading to $H<(\frac{1-\beta}{n\gamma})^{\frac{1}{2n-2}}$ while $\rho+p>0$, and $1-\beta-n\gamma H^{2n-2}<0$ which yields $H>(\frac{1-\beta}{n\gamma})^{\frac{1}{2n-2}}$ for $\rho+p<0$. Using Eqs.~(\ref{dif2}) and~(\ref{hentropy1}) to obtain \begin{eqnarray}\label{hentropyf} T\frac{dS_A}{dt}=-\frac{4\pi(\rho+p)}{H^2}[1+\frac{\dot{H}}{2H^2}]. \end{eqnarray} It seems that horizons may satisfy the second law of thermodynamics meaning that the $dS_A\geq0$ condition should be valid \cite{pois,Cai2,Cai3,CaiKim}. In order to check the validity of the second law of thermodynamics we insert $T=-\frac{H}{2\pi}(1+\frac{\dot{H}}{2H^2})$ into this equation, and get \begin{eqnarray}\label{hentropyf2} \frac{dS_A}{dt}=\frac{8\pi^2(\rho+p)}{H^3}, \end{eqnarray} meaning that the second law of thermodynamics is available for the apparent horizon whiles $\rho+p\geq0$. 
This condition leads to $\omega\geq-1$ for the state parameter $\omega$. Moreover, by combining Eqs.~(\ref{hentropy}) and~(\ref{dcon1}), we get \begin{eqnarray}\label{fff} \frac{dS_A}{dt}=-\frac{2\pi\dot{H}}{H^3}(1+4\pi\frac{\rho_D+p_D}{\dot{H}}), \end{eqnarray} which means that the second law of thermodynamics is satisfied if $1+4\pi\frac{\rho_D+p_D}{\dot{H}}\geq0$. Finally, the second law of thermodynamics ($\frac{dS_A}{dt}\geq0$) is met by the horizon component when $\rho_D+p_D\geq-\frac{\dot{H}}{4\pi}$ and $\rho+p\geq0$ are satisfied simultaneously. It is useful to mention here that one can get \begin{eqnarray}\label{rey0} \dot{H}=-4\pi(\rho+p+\rho_D+p_D), \end{eqnarray} by equating Eqs.~(\ref{fff}) and~(\ref{hentropyf2}), which is nothing but the Raychaudhuri equation obtainable by taking the time derivative of Eq.~(\ref{fried1}) and using~(\ref{energymomentum}). Therefore, when $\rho+p\geq0$ and $\frac{\dot{H}}{4\pi}\geq-(\rho_D+p_D)$ hold, one obtains $\rho+p+\frac{\dot{H}}{4\pi}\geq-(\rho_D+p_D)$, which is consistent with the Raychaudhuri equation~(\ref{rey0}). For the fluids confined to the apparent horizon with total density $\rho$, the Gibbs law implies \cite{gibs} \begin{eqnarray}\label{fentropyf} T_{in}\frac{dS_{in}}{dt}=\frac{dE}{dt}+p\frac{dV}{dt}=V\frac{d\rho}{dt}-(\rho+p)\frac{4\pi \dot{H}}{H^4}, \end{eqnarray} where $T_{in}\geq0$ is the temperature corresponding to the confined fluids. Now, using~(\ref{ocon1}) and $V=\frac{4\pi}{3H^3}$, we get \begin{eqnarray}\label{fentropyf2} T_{in}\frac{dS_{in}}{dt}=-\frac{4\pi(\rho+p)}{H^2}[1+\frac{\dot{H}}{H^2}], \end{eqnarray} telling us that, for $\rho+p\geq0$, $\frac{dS_{in}}{dt}\geq0$ is obtained when $1+\frac{\dot{H}}{H^2}\leq0$, which leads to $H\leq\frac{1}{t}$. The latter means that for perfect fluids with a state parameter $\omega$ meeting either the $\omega\leq-1$ or the $-\frac{1}{3}\leq\omega$ condition, $\frac{dS_{in}}{dt}\geq0$. Additionally, for a perfect fluid with state parameter $-1\leq\omega\leq-\frac{1}{3}$, the $\rho+p\geq0$ condition is satisfied but $\frac{dS_{in}}{dt}\leq0$. Finally, for a perfect fluid with a state parameter $\omega$ which satisfies the $-\frac{1}{3}\leq \omega$ condition, the generalized second law of thermodynamics ($\frac{dS_A}{dt}+\frac{dS_{in}}{dt}\geq0$) will be satisfied if the $\rho+p\geq0$ and $\rho_D+p_D\geq-\frac{\dot{H}}{4\pi}$ conditions are valid. It is useful to mention here that $\omega=-1$ leads to $\frac{dS_A}{dt}=0$ and $\frac{dS_{in}}{dt}=0$, meaning that the generalized second law of thermodynamics is marginally satisfied. Moreover, in the more general case in which $\omega$ is not constant, using the Raychaudhuri equation we get \begin{eqnarray}\label{rey10} 1+\frac{\dot{H}}{H^2}=1-4\pi(\rho+p)\frac{1}{H^2(1-\beta)-n\gamma H^{2n}}. \end{eqnarray} Thus, $1+\frac{\dot{H}}{H^2}\leq0$ leads to \begin{eqnarray}\label{rey10b} H^2(1-\beta)-n\gamma H^{2n}\leq4\pi(\rho+p), \end{eqnarray} which indicates that $\frac{dS_{in}}{dt}\geq0$. Therefore, if this condition is valid, then the generalized second law of thermodynamics will be satisfied. For the $\rho+p<0$ case, it is obvious from Eq.~(\ref{hentropyf2}) that $\frac{dS_A}{dt}<0$. In addition, when $H$ meets the $1+\frac{\dot{H}}{H^2}\leq0$ condition, $\frac{dS_{in}}{dt}\leq0$ and thus $\frac{dS_A}{dt}+\frac{dS_{in}}{dt}<0$, meaning that the generalized second law is not satisfied. Briefly, for a perfect fluid with $\omega<-1$, the generalized second law is not satisfied.
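As a rough numerical illustration of the sign conditions discussed above, the following minimal Python sketch (our addition, not part of the original analysis) assumes a single perfect fluid with constant state parameter $\omega$ and the standard flat FRW solution $H=\frac{2}{3(1+\omega)t}$, so that $\frac{\dot{H}}{H^2}=-\frac{3}{2}(1+\omega)$; the DE density profile~(\ref{dden}) is ignored here, so the sketch only reproduces the qualitative single-fluid behaviour. \begin{verbatim}
# Toy sign check (single-fluid, constant-omega assumption):
# dS_A/dt has the sign of (rho + p),
# T_in dS_in/dt has the sign of -(rho + p) * (1 + Hdot/H^2).
def sign(x, eps=1e-12):
    return "0" if abs(x) < eps else ("+" if x > 0 else "-")

for omega in (-1.0, -0.5, -1.0 / 3.0, 0.0, 1.0 / 3.0):
    rho_plus_p = 1.0 + omega              # in units of rho > 0
    one_plus = 1.0 - 1.5 * (1.0 + omega)  # 1 + Hdot/H^2
    print(f"omega={omega:+.2f}  dS_A/dt: {sign(rho_plus_p)}"
          f"  dS_in/dt: {sign(-rho_plus_p * one_plus)}")
\end{verbatim} The output reproduces the pattern stated in the text: both rates vanish for $\omega=-1$, $\frac{dS_{in}}{dt}$ is negative for $-1<\omega<-\frac{1}{3}$, and both rates are non-negative for $-\frac{1}{3}\leq\omega$.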
If the Hubble parameter satisfies the $1+\frac{\dot{H}}{H^2}>0$ condition, Eq.~(\ref{fentropyf2}) leads to $\frac{dS_{in}}{dt}\geq0$, and therefore it is in principle possible to meet the generalized second law of thermodynamics. Using Eq.~(\ref{rey}), we get \begin{eqnarray}\label{rey1} 1+\frac{\dot{H}}{H^2}=1-4\pi(\rho+p)\frac{1}{H^2(1-\beta)-n\gamma H^{2n}}. \end{eqnarray} Therefore, the $1+\frac{\dot{H}}{H^2}>0$ condition leads to \begin{eqnarray}\label{rey2} 4\pi(\rho+p)<H^2(1-\beta)-n\gamma H^{2n}. \end{eqnarray} Finally, we can say that if this condition is valid, then $\frac{dS_{in}}{dt}\geq0$, which may lead to the satisfaction of the generalized second law of thermodynamics. For the horizon entropy of the flat FRW universe supported by a DE candidate with an unknown density profile $\rho_D$, we can use Eqs.~(\ref{general}) and~(\ref{friedde}) to obtain \begin{eqnarray}\label{general0} \frac{dS_A}{dt}=\frac{8\pi^2(\rho+p)}{H^3}. \end{eqnarray} Moreover, it is easy to check that Eqs.~(\ref{fentropyf2}) and~(\ref{fff}) are also valid in this case. Bearing Eq.~(\ref{rey0}) in mind, $\dot{H}<0$ leads to $\rho+p>-\rho_D-p_D$. The similarities with the previous case, in which $\rho_D$ meets Eq.~(\ref{dden}), are obvious. In fact, in order to reach a more detailed conclusion about the validity of the generalized second law of thermodynamics, we need to know the dependence of either $\rho_D$ or $\rho$ on the Hubble parameter. We should note again that the horizon satisfies the second law of thermodynamics ($\frac{dS_A}{dt}\geq0$) if the $\rho_D+p_D\geq-\frac{\dot{H}}{4\pi}$ and $\rho+p\geq0$ conditions are met, which is in agreement with the Raychaudhuri equation~(\ref{rey0}). Moreover, since $\frac{dS_{in}}{dt}\geq0$ is valid when $-\frac{1}{3}\leq\omega$, the generalized second law of thermodynamics $\frac{dS_A}{dt}+\frac{dS_{in}}{dt}\geq0$ will be satisfied if the $-\frac{1}{3}\leq\omega$ and $\rho_D+p_D\geq-\frac{\dot{H}}{4\pi}$ conditions are met simultaneously. Further studies of the validity of the second law of thermodynamics and its generalization require knowledge of the exact form of $\rho$. \subsection{Interacting Case} For this case, by using~(\ref{general2}), we again get \begin{eqnarray}\label{lim} \frac{dS_A}{dt}=-\frac{2\pi\dot{H}}{H^3}(1+\frac{4\pi\rho_D(1+\omega_D)}{\dot{H}}). \end{eqnarray} On the one hand, when $\rho_D(1+\omega_D)\geq-\frac{\dot{H}}{4\pi}$, since observationally $\dot{H}<0$ \cite{roos}, it seems that $\frac{dS_A}{dt}\geq0$ is valid everywhere. On the other hand, by combining Eqs.~(\ref{general12}),~(\ref{energymomentum}) and~(\ref{friedde}), once again we get \begin{eqnarray}\label{lim1} \frac{dS_A}{dt}=\frac{8\pi^2(\rho+p)}{H^3}, \end{eqnarray} meaning that $\frac{dS_A}{dt}\geq0$ is valid everywhere if $\rho_D(1+\omega_D)\geq-\frac{\dot{H}}{4\pi}$ and $\rho+p\geq0$ are satisfied simultaneously. Therefore, the conditions for the validity of the second law of thermodynamics are similar to those of the non-interacting case. In addition, Eq.~(\ref{fentropyf}) leads to \begin{eqnarray}\label{fentropyf22} T_{in}\frac{dS_{in}}{dt}=-\frac{4\pi(\rho+p)}{H^2}[1+\frac{\dot{H}}{H^2}] -V\dot{H}[\rho_D^{\prime}+3\frac{H}{\dot{H}}(\rho_D+p_D)], \end{eqnarray} where a prime again denotes a derivative with respect to the Hubble parameter. Here, we focus on the model proposed by Lima et al. \cite{lima}. In this model, $\omega_D=-1$ while the vacuum density meets Eq.~(\ref{dden}).
Substituting these into the above equation, we get \begin{eqnarray}\label{fentropyf222} T_{in}\frac{dS_{in}}{dt}=-\frac{4\pi(\rho+p)}{H^2}[1+\frac{\dot{H}}{H^2}] -V\dot{H}[\frac{3\beta H+ 3n\gamma H^{2n-1}}{4\pi}]. \end{eqnarray} Since $\dot{H}<0$, the second term on the RHS of this equation ($-V\dot{H}\rho_D^{\prime}$) is positive everywhere, and therefore the validity of $\frac{dS_{in}}{dt}>0$, and thus of the generalized second law of thermodynamics, depends on the value of the first term on the RHS ($-\frac{4\pi(\rho+p)}{H^2}[1+\frac{\dot{H}}{H^2}]$). It is useful to mention here that for a perfect fluid obeying either $\omega\leq-1$ or $-\frac{1}{3}\leq\omega$, the Hubble parameter meets the $H\leq\frac{1}{t}$ condition, leading to $1+\frac{\dot{H}}{H^2}\leq0$ and thus $\frac{dS_{in}}{dt}>0$. Moreover, from Eqs.~(\ref{lim}) and~(\ref{lim1}) it is apparent that $\frac{dS_{A}}{dt}>0$ when $\omega_D=-1$ and $-1\leq\omega$, respectively. Therefore, for the flat FRW universe containing a perfect fluid which satisfies $-\frac{1}{3}\leq\omega$, the generalized second law of thermodynamics is satisfied. Once again, further studies of the validity of the second law of thermodynamics and its generalization require knowledge of the exact form of $\rho$. \section{Summary and concluding remarks} Throughout this paper, we considered the FRW universe filled by a DE candidate together with a fluid representing the other possible sources, which may include baryonic and non-baryonic matter, enclosed by the apparent horizon of the flat FRW universe. We then proposed a density profile for the DE candidate which covers proposals including the CC and dynamical models of DE, such as the ghost dark energy model, its generalization, and Lima's model. Moreover, by taking into account the Hayward-Kodama definition of the apparent horizon temperature as well as the Friedmann equation, we found the horizon entropy for models in which the DE candidate does not interact with the other parts of the cosmos. Our study shows that the DE candidate may modify the horizon entropy. We have shown that our formula for the entropy~(\ref{hentropy}) is compatible with previous results on the ghost dark energy and its generalization \cite{cana,cana1}. Indeed, a result similar to~(\ref{hentropyggde}) is reported by the authors of ref.~\cite{cana}, but our derivation is completely different: here, we have considered the apparent horizon as the causal bound of the system, while the authors of~\cite{cana} used the trapping horizon as the causal bound to get the associated horizon entropy. In addition, we have generalized our formulation to models in which the DE candidate is an arbitrary unknown function, and showed that the DE candidate may modify the horizon entropy~(\ref{general1}) independently of the other parts of the cosmos. We have also used the Cai-Kim temperature to get the horizon entropy, and found that the same result for the horizon entropy is obtained if one considers an infinitesimal time interval in which $dV=0$. Thereafter, we focused on models in which the DE candidate interacts with the other parts of the cosmos. We found that the mutual interaction between the cosmos sectors may also modify the apparent horizon entropy~(\ref{general12}). Our study shows that for models in which $\omega_D=-1$, such as the model proposed by Lima et al.~\cite{lima}, the mutual interaction between the cosmos sectors does not disturb the Bekenstein limit of the horizon entropy.
This means that there is no modification to the horizon entropy for interacting models with $\omega_D=-1$, and therefore $S_A=\frac{A}{4}$ holds in these models. As in the non-interacting case, we also obtained a relation for the horizon entropy in the interacting models by using the Cai-Kim temperature. Our study shows that the same result as that obtained by considering the Hayward-Kodama temperature is found for the horizon entropy. Additionally, we pointed out some of the conditions required for the validity of the second law of thermodynamics and its generalization in the interacting and non-interacting models. Our studies show that for the non-interacting case, while $\rho_D+p_D\geq-\frac{\dot{H}}{4\pi}$, the second law of thermodynamics and its generalization are inevitably valid if the state parameter of the other parts of the cosmos satisfies the $-\frac{1}{3}\leq\omega$ condition. This is because $\frac{dS_{A}}{dt}>0$ and $\frac{dS_{in}}{dt}>0$ are separately valid in this situation. Finally, our study shows that for the interacting case with $\omega_D=-1$, $\frac{dS_{A}}{dt}>0$ and $\frac{dS_{in}}{dt}>0$ will be met if $-\frac{1}{3}\leq\omega$, and therefore the generalized second law of thermodynamics is unavoidably satisfied. \section*{Acknowledgments} We are grateful to the anonymous referee for the constructive comments which helped us increase our understanding of the subject. The work of H. M. has been supported financially by the Research Institute for Astronomy \& Astrophysics of Maragha (RIAAM) under research project No. $1/4165-6$.
\section{Introduction} \label{sec:intro} The ongoing search for extrasolar planets has been spectacularly successful, with over 1500 confirmed planets discovered to date\footnote{{\tt http://exoplanets.org/}}, including many small objects suspected of being rocky. For a subset of these smallest detected exoplanets, both precision radial velocity measurements and transit photometry have been obtained. This provides a measurement of their masses and radii, and therefore their bulk densities. Intriguingly, these densities have a wide spread, and do not follow a simple mass-radius relationship \citep{weiss+marcy14-1,dressingetal15-1}. This may imply that some small exoplanets have compositions distinct from the rocky (and icy) planets and moons of the Solar System, which are all, to first order, a combination of H$_2$O, MgSiO$_3$ and Fe \citep{allegreetal01-1}. Modelling exoplanets with a greater variety of bulk chemistries may account for the differences in bulk densities. However, it is impossible to unambiguously infer the internal composition of a planet from its density alone. \cite{seageretal07-1} and \cite{sohletal12-1} computed mass-radius relationships for different planetary compositions, finding a significant degeneracy between different densities, interior structures and compositions. It has been hypothesized that enhanced C/O levels (relative to the Solar value) in a protoplanetary disc could change the condensation sequence of planetary solids, preferentially forming carbon compounds \citep{kuchner+seager05-1,moriartyetal14-1}. Under conditions where carbon is the most abundant metal, ``carbon planets'' may form. The alternative condensation sequence begins with the formation of CO, incorporating all of the available oxygen and restricting the formation of silicates. Excess carbon then forms SiC and graphite, for example. An Earth-sized carbon planet would likely form with an Fe-rich core, surrounded by a mantle of graphite, carbides and, at higher pressures, diamond. \cite{bondetal10-1} showed that this carbon-based chemistry could become important in protoplanetary discs with C/O\,$\gtrsim0.8$. Carbon could contribute more than half the mass of solid exoplanets formed in such an environment, with only trace oxygen present. Observational identification of carbon planets is hindered by the inability to measure planetary compositions in-situ, with the exception of the upper atmospheres of a few objects \citep{demingetal13-1,kreidbergetal14-1}. Given the diversity in atmospheric composition between the otherwise chemically similar terrestrial planets in the Solar System, such observations cannot be used to infer the bulk compositions of rocky exoplanets. Neither are the C/O ratios of exoplanet host stars a reliable tracer of disc composition \citep{teskeetal13-1}. Carbon to oxygen ratios in protoplanetary discs computed by \cite{thiabaudeta115-1} show only a weak dependence on the host star abundances. This ratio will also vary within a protoplanetary disc due to regional temperature variations and collisions, amongst other factors \citep{obergetal11-1,gaidos15-1}. The only method to reliably determine compositions of exoplanetary bodies is via detection of their debris in the photospheres of white dwarfs \citep{zuckermanetal07-1}. 
Recent studies have shown that 25--50\,percent of all white dwarfs are polluted by debris from planetesimals \citep{zuckermanetal03-1, zuckermanetal10-1,koesteretal14-1,barstowetal14-1}, ranging in mass from small asteroids to objects as large as Pluto \citep{girvenetal12-1,wyattetal14-1}. The bulk composition of these exoplanetary bodies can be inferred from the debris detected in the white dwarf photosphere, analogous to how the compositions of Solar System bodies are inferred from meteorites \citep{lodders+fegley11-1}. High-resolution spectroscopy of over a dozen metal-polluted white dwarfs has revealed accretion of numerous atomic species, allowing detailed studies of the chemical composition of extrasolar planetesimals \citep{kleinetal11-1,gaensickeetal12-1,dufouretal12-1,juraetal12-1,farihietal13-2,xuetal14-1,raddietal15-1, wilsonetal15-1}. Overall, these objects have chemical compositions similar to inner Solar System bodies, dominated by O, Si, Mg and Fe, and volatile depleted \citep{jura+young14-01}. However, the detailed compositions can be very diverse, with objects having enhanced levels of core material \citep{melisetal11-1,gaensickeetal12-1,wilsonetal15-1}, evidence of post-nebula processing \citep{xuetal13-1}, and significant mass fractions of water \citep{farihietal13-2,raddietal15-1}. Thus far, studies of planetesimal compositions at white dwarfs have predominately focused on individual objects. However, the growing sample of abundance studies now allows conclusions to be derived regarding the overall chemical abundances of (solid) exoplanet precursors in a statistically significant sample of systems. Here, we use these data to constrain the occurrence frequency of carbon planets. \begin{figure} \centering \includegraphics[width=8.5cm]{f2.eps} \caption{Enlarged sections of the spectrum of WD\,1953--715 showing photospheric \ion{O}{i}\,1152.2, 1302.2, 1304.9, 1306.0\,\AA, \ion{Si}{ii}\,1304.4\,\AA\, and \ion{C}{ii}\,1334.5, 1335.6\,\AA\ absorption lines. The model atmosphere fit used to calculate the abundances is overlaid in blue. Interstellar components of the \ion{O}{i}\,1302.2\,\AA, \ion{Si}{ii}\,1304.4, 1305.6\,\AA\ and \ion{C}{ii}\,1334.5\,\AA\ absorption lines are marked with dashed grey lines. \protect\label{fig:lines}} \end{figure} \begin{table*} \centering \caption{New atmospheric parameters and debris accretion rate measurements for ten white dwarfs identified by \citet{koesteretal14-1}. Spectra of the first six are shown in Fig. \ref{fig:spectra}. 
$^1$Updated from \citet{gaensickeetal12-1} } \begin{tabular}{lcccc} \hline\hline Name & $T_{\mathrm{eff}}~(\mathrm{K})$ & $\log g~(\mathrm{cm\,s^{-2}})$ & $\dot M(\mathrm{C})~(\mathrm{g\,s^{-1}})$ & $\dot M(\mathrm{O})~(\mathrm{g\,s^{-1}})$\\\hline WD\,2058+181 & $17308\pm235$ & $7.920\pm0.089$ & $(2.06\pm0.95)\times10^6$ & $(1.03\pm0.47)\times10^7$ \\ WD\,1647+375 & $22803\pm310$ & $7.902\pm0.089$ & $(1.14\pm0.52)\times10^7$ & $(2.16\pm0.75)\times10^8$ \\ WD\,1013+256 & $22133\pm301$ & $8.022\pm0.089$ & $(1.92\pm0.66)\times10^6$ & $(3.2\pm1.6)\times10^7$ \\ WD\,1953--715 & $18975\pm258$ & $7.957\pm0.089$ & $(1.56\pm0.72)\times10^6$ & $(2.8\pm1.3)\times10^7$ \\ WD\,1943+163 & $19451\pm264$ & $7.896\pm0.089$ & $(1.40\pm0.64)\times10^6$ & $(1.97\pm0.91)\times10^7$ \\ WD\,0059+257 & $20491\pm278$ & $8.002\pm0.089$ & $\leq2.9\times10^4$ & $(3.4\pm1.6)\times10^7$ \\ PG\,0843+516$^1$ & $22412\pm304$ & $7.902\pm0.089$ & $(2.42\pm1.11)\times10^5$ & $(1.09\pm0.50)\times10^8$ \\ PG\,1015+161$^1$ & $18911\pm257$ & $8.042\pm0.089$ & $\leq6.9\times10^4$ & $(4.9\pm2.3)\times10^7$ \\ SDSS\,J1228+1040$^1$ & $20713\pm281$ & $8.150\pm0.089$ & $(1.70\pm0.78)\times10^5$ & $(4.4\pm2.0)\times10^8$ \\ GALEX\,J1931+0117$^1$ & $21457\pm291$ & $7.900\pm0.089$ & $(7.1\pm4.9)\times10^5$ & $(9.0\pm6.2)\times10^8$ \\ \hline \end{tabular} \label{tab:new_wds} \end{table*} \begin{table} \centering \caption{List of the absorption lines used for the debris abundance measurements.} \begin{tabular}{ll} \hline\hline Ion & Vacuum rest wavelength (\AA) \\\hline \ion{C}{ii} & 1334.530, 1335.660, 1335.708 \\ \ion{C}{iii} & 1174.930, 1175.260, 1175.590, 1175.710, 1175.987, 1176.370 \\ \ion{O}{i} & 1152.150, 1302.170, 1304.860, 1306.030 \\ \hline \end{tabular} \label{tab:all_lines} \end{table} \section{Carbon and Oxygen debris abundances at white dwarfs} \label{sec:wds} We present debris abundance measurements for ten white dwarfs observed with the Cosmic Origins Spectrograph on board the {\em Hubble Space Telescope} ({\em HST}/COS) as part of Program IDs 12169, 12869, and 12474 \citep{gaensickeetal12-1,koesteretal14-1}. Table\,\ref{tab:new_wds} presents their effective temperatures ($T_{\mathrm{eff}}$) and surface gravities ($\log g$), as well as elemental accretion rates. The techniques used to determine these results are described in detail in \cite{koesteretal14-1}, so we only briefly summarise here. Firstly, optical spectra from the SPY survey were refitted with the latest model grid to determine temperatures and surface gravities. If no SPY spectra were available, we used parameters from \cite{gianninasetal11-1}. After correcting for a small systematic difference between the two determinations, we fixed the surface gravity to the value obtained from the optical data, and then determined the temperature from a fit to the ultraviolet COS spectra. For this we used the slope between the optical photometry and the absolutely calibrated COS spectra as additional constraint. The best fit atmospheric parameters were then used to create synthetic spectra containing approximately 14\,000 spectral lines from 14 elements.The atmospheric metal abundances were varied until a good fit was obtained between the synthetic spectra and the observed absorption lines. Adjusting the abundances by $\pm\,0.2$\,dex around the best fit values allowed an estimate of the abundance uncertainties. The uncertainty in the atmospheric parameters has only a small effect on the element abundances (${<}0.04$\,dex). 
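To make the quoted abundance uncertainty explicit in linear terms, the following one-line Python sketch (our addition) converts the $\pm\,0.2$\,dex estimate into multiplicative factors, which is comparable to the relative uncertainties on most of the accretion rates in Table~\ref{tab:new_wds}. \begin{verbatim}
# +/- 0.2 dex corresponds to multiplying or dividing by 10**0.2 ~ 1.6,
# i.e. roughly +58 / -37 percent on the derived abundances.
print(10 ** 0.2, 10 ** -0.2)   # -> 1.5849, 0.6310
\end{verbatim}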
Table \ref{tab:all_lines} lists the absorption lines used to determine the carbon and oxygen abundances. The oxygen abundances are primarily measured from the \ion{O}{i}\,1152.15\,\AA\ line. The \ion{O}{i} lines around 1300\,\AA\ are affected by geocoronal emission in several of the spectra, which is not corrected for by the COS pipeline. Where no geocoronal emission is present, these lines are still affected by blending with \ion{Si}{ii} and interstellar \ion{O}{i} lines, but still provide (less accurate) abundance determinations which agree with measurements from the \ion{O}{i}\,1152.15\,\AA\ line. As the metals diffuse out of the white dwarf atmosphere on different time scales, the element abundances in the white dwarf photosphere do not necessarily match those of the debris material. The diffusion time scales were calculated using the same atmospheric models as for the spectral fitting \citep{koester09-1}. As the diffusion time scales for these hydrogen atmosphere white dwarfs are of order days to, at most, months, it is reasonable to assume that the white dwarfs are currently accreting, and accretion and diffusion are in equilibrium. The accretion rate is therefore the ratio of the atmospheric abundance to the diffusion time scale. Radiative levitation, which can change the diffusion time scales or even keep an element in the atmosphere without ongoing accretion \citep{chayer+dupuis10-1}, is taken into account when calculating the diffusion time scales, but has a negligible effect on carbon and no effect on oxygen over the temperature range of our sample. Finally, the C/O ratio by number is calculated as the ratio of the accretion rates, weighted by the relative atomic masses. Analyses of the debris in four of these white dwarfs were presented in \citet{gaensickeetal12-1}, but the abundances used here have been updated with new calculations. Ultraviolet spectra of the remaining six white dwarfs are shown in Fig.\,\ref{fig:spectra}, featuring photospheric absorption lines from a variety of metals, including both carbon and oxygen (Fig.\,\ref{fig:lines}). In addition to these new measurements, we have assembled all published abundances for carbon and oxygen at white dwarfs both observed with COS and analysed with the same model described above \citep{wilsonetal15-1,xuetal14-1,farihietal13-2, xuetal13-1}. These criteria create a homogeneous sample, which avoids systematic uncertainties that may result from comparing different data sources and models. Where more than one measurement is available we use the most recent result, and we adopt the most commonly used white dwarf designations. In total, we discuss C/O measurements for debris in eleven systems and firm upper limits for another five. Four of the white dwarfs in our sample have helium-dominated atmospheres, labelled in Fig.~\ref{fig:c_o} and Table~\ref{tab:c_o_tab}. These stars develop deep convective envelopes, which may lead to dredge-up of core-carbon into the atmosphere. Dredge-up typically occurs in cool ($T_{\mathrm{eff}}\la12\,000$\,K) white dwarfs, but it has been suggested that a small number of white dwarfs may have helium envelopes thin enough to pollute the atmosphere with core-carbon even at higher temperatures \citep{koesteretal14-2, wilsonetal15-1}. Thus, although we treat the C/O ratios in helium atmosphere white dwarfs as firm detections, this caveat should be kept in mind when discussing the planetary abundances at individual white dwarfs.
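As a concrete illustration of this conversion, the short Python sketch below (our addition, not part of the published analysis) reproduces the number ratio for WD\,1953--715 from the accretion rates in Table~\ref{tab:new_wds}; the atomic masses are standard values. \begin{verbatim}
# C/O ratio by number from the C and O accretion rates, each weighted
# by the relative atomic mass of the element.
from math import log10

Mdot_C = 1.56e6    # g/s, WD 1953-715, from the accretion-rate table above
Mdot_O = 2.8e7     # g/s, WD 1953-715, from the accretion-rate table above
A_C, A_O = 12.011, 15.999

c_over_o = (Mdot_C / A_C) / (Mdot_O / A_O)
print(round(c_over_o, 3), round(log10(c_over_o), 2))
# -> 0.074 and -1.13, matching the tabulated log(C/O) of about -1.1
\end{verbatim}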
The majority of our sample (12 out of 16) have hydrogen atmospheres, which are unaffected by dredge up. \begin{figure*} \centering \includegraphics[width=17.0cm]{f3.eps} \caption{C/O number ratios of the planetesimal debris in our sample, plotted against the effective temperature ($T_{\mathrm{eff}}$) of the host white dwarfs and compared with various Solar System bodies. The colour scheme is intended to aid identification and has no physical significance. White dwarfs with helium atmospheres, which may be affected by convective carbon dredge-up that could enhance their carbon abundances (Sect.\,\ref{sec:wds}), are marked with *. Objects with similar temperatures have been offset slightly for clarity. The shaded area shows the range of values present in the literature for Earth's C/O (Section \,\ref{sec:disc}). \protect\label{fig:c_o}}. \end{figure*} \section{Discussion} \label{sec:disc} Figure\,\ref{fig:c_o} and Table\,\ref{tab:c_o_tab} show the C/O ratios of the planetesimal debris at the 16 systems in our sample as a function of effective temperature (and therefore the age since white dwarf formation). We compare these ratios with those for the CI chondritic meteorites \citep{lodders+fegley11-1}, bulk Earth \citep{allegreetal01-1,marty12-1}, Comet Halley \citep{lodders+fegley98-1}, and the Solar photosphere \citep{vonsteiger+zurbuchen16-1}. As carbon chemistry is thought to become an important factor in protoplanetary discs with C/O\,$>0.8$ ($\log\mathrm{(C/O)}>-0.097$), we take this as a lower limit for a planetesimal formed in a carbon-rich environment. We note, however, that planets formed in such discs are predicted to potentially have C/O\,$\gg1.0$ \citep{bondetal10-1}. We find no planetary debris with C/O\,$>0.8$. The debris at WD\,2058+181 has the highest ratio, with $\log\mathrm{(C/O)}=-0.57\pm0.28$, still below the Solar value. Applying binomial statistics, we find that planetesimals with C/O\,$>0.8$ occur in $<17$\,percent of systems at a $2\,\sigma$ confidence level, falling to $<6.5$\,percent with $1\,\sigma$ confidence. Our upper limit on high planetary C/O is consistent with that found in stellar abundances by \cite{fortney12-1}, who showed that the fraction of stars with C/O\,$>0.8$ is no more than 10--15\,percent. None of the planetesimal debris in the 16 systems has C/O similar to that of Comet Halley ($\log\mathrm{(C/O)}=-0.04$), supporting the conclusions of \cite{verasetal14-2} that comets are not a significant population of parent bodies for the debris detected at many white dwarfs. There are no observed trends in C/O with the post-main sequence (cooling) age. Although none of the systems are carbon-rich, the material does appear to fall into two distinct populations, with an apparent gap between $\log\mathrm{(C/O)}\approx-1$ and $\log\mathrm{(C/O)}\lesssim-2$. Six systems have relatively high C/O ratios, with $\log({<}\mathrm{C/O}{>})=0.12\pm0.07$ (where the error is the $1\sigma$ spread). This is consistent with the CI chondrite meteorites \citep{lodders+fegley11-1}, which are thought to be representative of the primordial composition of the rocky Solar System. It is likely that the debris in these systems originated as small asteroids, which had not undergone significant post-nebula differentiation . The remaining ten systems all have C/O less than or equal to that of the bulk Earth. 
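For concreteness, the binomial limit quoted above can be reproduced approximately with the following Python sketch (our addition). We assume the constraint comes from zero objects with C/O\,$>0.8$ among the 16 systems, and identify the $1\,\sigma$ and $2\,\sigma$ levels with 68.27 and 95.45\,percent confidence; small differences from the quoted numbers may reflect rounding or the exact statistic used. \begin{verbatim}
# Upper limit on the occurrence rate f of carbon-rich planetesimals from
# zero detections in N systems: P(0 detections | f) = (1 - f)**N,
# so f_max = 1 - (1 - CL)**(1/N) at confidence level CL.
N = 16
for label, cl in (("1 sigma", 0.6827), ("2 sigma", 0.9545)):
    f_max = 1.0 - (1.0 - cl) ** (1.0 / N)
    print(label, round(100 * f_max, 1), "percent")
# -> about 7 percent (1 sigma) and 18 percent (2 sigma)
\end{verbatim}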
Comparing the relative abundances of carbon and oxygen in this low C/O group with the other elements detected in their debris shows that they have a high oxygen abundance (relative to, for example, Si), rather than being relatively poor in carbon. A speculative explanation for this is that the parent bodies of the debris contained a significant amount of water, similar to Ceres or the large moons of the gas giants. High mass fractions of water have already been detected in debris at GD\,61 \citep{farihietal13-2}, which has an upper limit on its C/O ratio placing it in the low C/O group. Addition of water to a planetesimal with an otherwise Earth-like composition would increase the abundance of oxygen, but leave the carbon abundance unchanged, decreasing the C/O ratio. A potential caveat to this argument is the study by \cite{jura+xu12-1} of hydrogen in helium atmosphere white dwarfs in the 80\,pc sample. Their results suggested that water makes up less than one percent of the mass accreting onto the white dwarfs in their sample. However, both the amount and origin of hydrogen in helium atmosphere white dwarfs, and its relevance to debris accretion, are subject to ongoing discussion \citep{koester+kepler15-1, bergeronetal11-1}. Additionally, the carbon content of the Earth, and in particular the core, is still subject to discussion. \cite{allegreetal01-1} find a mass fraction of 0.17--0.36\,percent, resulting in a $\log\mathrm{(C/O)}$ between $-1.8$ and $-2.16$. In contrast, \cite{marty12-1} instead calculate a carbon mass fraction of only 0.053\,percent. Using the oxygen fraction from \cite{allegreetal01-1}, this lowers the $\log\mathrm{(C/O)}$ to $-2.7$, consistent with the average of the low C/O systems ($\log({<}\mathrm{C/O}{>})=-2.5\pm0.36$). The range of proposed C/O ratios for Earth is shown by the shaded area in Fig.~\ref{fig:c_o}. By providing a strong upper limit on the occurrence of carbon-rich planetesimals, we show that debris-polluted white dwarfs are likely the most powerful diagnostics of carbon chemistry in extrasolar planetesimals, and increasing the sample size will provide stronger constraints on the existence, or lack thereof, of carbon planets. More generally, abundance studies of the debris at white dwarfs are sensitive to a wide variety of elements, making them the ideal tool to systematically investigate the full range of non-gaseous planetary chemistry. \begin{table} \centering \caption{C/O ratios by number shown in Fig.~\ref{fig:c_o}, in order of increasing C/O. White dwarfs with helium atmospheres are marked with *. References: 1.\,This work; 2.\,\citet{xuetal13-1}; 3.\,\citet{farihietal13-2}; 4.\,\citet{xuetal14-1}; 5.\,\citet{wilsonetal15-1}. } \begin{tabular}{lcr} \hline\hline Name & $\log\mathrm{(C/O)}$ & Ref.\\\hline SDSS\,J1228+1040 & $-3.3\pm0.28$ & 1\\ GD\,61* & $\leq-3.0$ & 3\\ GALEX\,J1931+0117 & $-3.0\pm0.42$ & 1\\ WD\,0059+257 & $\leq-2.9$ & 1\\ G241-6* & $\leq-2.9$ & 2\\ PG\,1015+161 & $\leq-2.7$ & 1\\ PG\,0843+516 & $-2.5\pm0.28$ & 1\\ GD\,13 & $\leq-2.2$ & 4\\ GD\,40* & $-2.2\pm0.22$ & 2\\ G29-38 & $-2.1\pm0.17$ & 4\\ WD\,1647+375 & $-1.2\pm0.25$ & 1\\ WD\,1013+256 & $-1.1\pm0.25$ & 1\\ WD\,1953-715 & $-1.1\pm0.28$ & 1\\ WD\,1943+163 & $-1.0\pm0.28$ & 1\\ SDSS\,J0845+2257* & $-0.84\pm0.28$ & 5\\ WD\,2058+181 & $-0.57\pm0.28$ & 1\\ \hline \end{tabular} \label{tab:c_o_tab} \end{table} \section*{Acknowledgements} The authors thank Yilen Gomez Maqueo Chew and the anonymous referee for constructive comments, and Mark Hollands for statistics advice.
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 320964 (WDTracer). JF gratefully acknowledges the support of the STFC via an Ernest Rutherford fellowship. This paper is based on observations made with the NASA/ESA {\em Hubble Space Telescope}, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program IDs 12169, 12869 and 12474. \bibliographystyle{mnras}
\section{Preface} \markboth{{\rm The quantized {\sc Hall} effect.} {\it Version 2.01 (30.\,06.\,97)\/}} {{\rm The quantized {\sc Hall} effect.} {\it Version 2.01 (30.\,06.\,97)\/}} The experimental discovery of the quantized {\sc Hall} effect by {\sc von\,Klitzing}, {\sc Dorda} and {\sc Pepper} consists in the {\it observation of plateaux\/} in the {\sc Hall} resistance \begin{equation} R_H=U_H/I \end{equation} of so-called MOS ({\it metal-oxide-semiconductor\/}) structures at low temperatures and high magnetic fields. \par {\sc Von\,Klitzing} was the first to see that the {\it exact\/} height of these plateaux is given by integer fractions of $h/e^2$, \begin{equation} R_H = \frac{h}{ie^2}, \phantom{xxx} i=1,2,3,\,{\dots} \phantom{xxx}, \end{equation} where - as usual - $h$ denotes {\sc Planck}'s quantum of action and $e$ the elementary electric charge. For his sensational discovery, {\sc von\,Klitzing} received the Nobel prize in 1985. Today's official definition of the resistance standard is based on this effect. \par The original paper can be found in \cite{Klitzing80}. \bild{qhe_000k}{The discoverer of the QHE: {\sc Klaus von Klitzing} \cite{Landwehr90, Karikatur}}{10} \par In the present lab-course experiment we want to carry out a similar experiment on a so-called heterostructure. This manual is intended to convey the knowledge necessary for the experiment. It is written in such a way that it can be read and understood in one go, provided the reader brings along a number of prerequisites belonging to the material of the advanced physics curriculum: \begin{itemize} \item classical electrodynamics (vector potential and local gauge invariance), \item quantum mechanics (quantization, harmonic oscillator, density operator, expectation value), \item thermodynamics (chemical potential), \item solid state physics (band structure, semiconductors), \item measurement techniques, in particular lock-in amplifier techniques. \end{itemize} \par Of course, one can also tackle this experiment right after the intermediate examinations, provided one is willing to teach oneself a few things. To do so, go through the text and mark those terms whose meaning is not clear. A good physics encyclopedia and an afternoon in the library usually suffice. Nevertheless, I have tried to keep this manual as {\it self-contained\/} as possible. It actually contains everything that is needed for an understanding of the basic principles. ({\it Tip:\/} The reader in a hurry initially needs only chapters 2.1.-2.3.) \par Two things should be taken care of by the students before the lab course starts, namely \begin{enumerate} \item obtain a copy of the original paper \cite{Klitzing80}, \item obtain a copy of pages 3-1 and 3-2 of the manual of the SR850 lock-in amplifier, available from the technical staff of the group \cite{Stanford}. \end{enumerate} \par Good luck! \vfill\eject\noindent \section{Theoretical foundations} \subsection{Resistance and conductivity} According to the well-known {\sc Ohm}'s law, the electric current $I$ flowing through a metallic conductor at constant temperature is proportional to the applied voltage $U$, \begin{equation} I \propto U.
\end{equation} Expressed in terms of the absolute resistance $R$ (measured in ohms, $\Omega=V/A$) or the absolute conductance $\Gamma$ (measured in siemens, $S=A/V$), we can write \begin{equation} I \cdot R = U \phantom{1234} \mbox{or} \phantom{1234} I = \Gamma \cdot U. \end{equation} In the laboratory we usually deal with samples that have particular dimensions; the laws of nature, however, should be expressed in terms of {\it invariants\/}.\footnote{This corresponds to the spirit of the works of {\sc Helmholtz}, {\sc van\,der\,Waerden} and {\sc Weyl}.} This requirement suggests that we formulate the above relations with the help of {\it specific quantities\/}, that is, quantities independent of the sample dimensions. Assuming spatial isotropy, we write \begin{equation} {\bf j}\,\varrho = \varrho\,{\bf j} = {\bf E} \phantom{1234} \mbox{or} \phantom{1234} {\bf j} = \sigma\,{\bf E} \, , \end{equation} where the current density is given by \begin{equation} {\bf j} = \frac{I}{A} \cdot {\bf u}, \end{equation} with $A$ the magnitude of the cross-sectional area through which the current flows and ${\bf u}$ the associated normal vector of length $1$ (unit vector). We call $\varrho$ the {\it resistivity\/} (specific resistance) and $\sigma$ the {\it conductivity\/} (specific conductance). If we regard these two quantities as scalars, we may set \begin{equation} \varrho=\frac{R \cdot A}{l} \phantom{1234} \mbox{or} \phantom{1234} \sigma=\frac{l}{R \cdot A}. \end{equation} In the general case, in which the samples behave anisotropically in an electromagnetic sense (for example under the influence of a homogeneous magnetic field or because of the symmetry properties of the crystal structure), we {\it must\/} regard $\varrho$ and $\sigma$ as matrix operators (or tensors). We therefore write them in bold face (like vectors): \begin{equation} {\bf E} = \mbox{\boldmath$\varrho$} \,{\bf j} \phantom{1234} \mbox{or} \phantom{1234} {\bf j} = \mbox{\boldmath$\sigma$} \,{\bf E}. \end{equation} Now, in $D$ spatial dimensions the resistivity - defined as resistance times the (in general $D$$-$$1$-dimensional) cross-sectional area through which the current flows, divided by the (in general 1-dimensional) length - has the physical dimension \begin{quote} ``resistance times length to the power $D$$-$$2$''\,, \end{quote} and hence the specific conductance has the physical dimension \begin{quote} ``length to the power $D$$-$$2$ divided by resistance''. \end{quote} This means that in an idealized 2-dimensional system, in which the current moves along a 2-dimensional surface and the cross-sections through which it flows are $1$-dimensional, the resistivity has the unit of an absolute resistance and the conductivity has the unit of an absolute conductance. \par The latter alone could already suggest the conjecture that there might be a conductivity experiment on a quasi-2-dimensional sample which measures the {\it specific\/} quantities {\it without\/} reference to the {\it actual dimensions\/} of the sample! \par A further interesting aspect follows from a simple dimensional analysis, which can be carried out even with only rudimentary knowledge of quantum mechanical principles. On the one hand, for the physical dimension (phys.\,dim.)
of the resistance we have \begin{eqnarray} \mbox{phys.\,dim.}\, R &=& \mbox{phys.\,dim.}\, \frac{U}{I} \nonumber\\ & & \phantom{+} \nonumber\\ &=& \frac{\mbox{{\it voltage\/}}} {\mbox{{\it current\/}}} \nonumber\\ & & \phantom{+} \nonumber\\ &=& \frac{\mbox{{\it energy per charge\/}}} {\mbox{{\it charge per time\/}}} \nonumber\\ & & \phantom{+} \nonumber\\ &=& \frac{\mbox{{\it energy times time\/}}} {\mbox{{\it charge squared\/}}} \nonumber\\ & & \phantom{+} \nonumber\\ &=& \frac{\mbox{{\it action\/}}} {\mbox{{\it charge squared\/}}} \, ; \end{eqnarray} on the other hand, we may write, at least formally, \begin{eqnarray} \mbox{phys.\,dim.}\, R &=& \frac{\mbox{{\it action\/}}} {\mbox{{\it charge squared\/}}} \nonumber\\ & & \phantom{+} \nonumber\\ &=& \frac{\mbox{{\it quantum number\/}}} {\mbox{{\it quantum number squared\/}}} \cdot \frac{\mbox{{\it quantum of action\/}}} {\mbox{{\it elementary quantum squared\/}}} \nonumber\\ & & \phantom{+} \nonumber\\ &=& \mbox{{\it rational number\/}} \cdot \frac{h}{e^2}, \end{eqnarray} where \begin{equation} \frac{h}{e^2}=25.812\,805\,{\dots}{\rm k}\Omega. \end{equation} \par From the concepts just made plausible, one could already have justified the speculation that there might be an idealized 2-dimensional experiment in which quantum effects play an essential role (for example at low temperatures) and in which, independently of the dimensions of the sample, rationally quantized resistances or rationally or even {\it integrally quantized\/} conductivities - in units of $e^2/h$ - are observable. And perhaps it would not have been too far-fetched to imagine that this could be a {\sc Hall} experiment. As early as 1933, many years before {\sc von\,Klitzing}'s discovery, {\sc Sommerfeld} and {\sc Bethe} speculated in their classic textbook about the influence of the quantization of the electron orbits on the behaviour of the (longitudinal) magnetoresistance of sufficiently cold samples \cite{SommerfeldBethe1933}. \bild{qhe_000s}{Behaviour conjectured by {\sc Sommerfeld} and {\sc Bethe} in 1933 \cite{SommerfeldBethe1933}}{8} \par Even more exciting are the data of {\sc Kawaji} {\it et al.\/} from 1975 \cite{Igarashi75}, which already make the essential structure visible. \bild{qhe_000i}{The data of {\sc Kawaji} {\it et al.\/} from 1975}{8} It took, however, an ingenious interpretation of these steps, namely in terms of fundamental constants of nature, and the conclusion that the quantization is {\it exact\/}, to turn the phenomenon into a groundbreaking discovery. \vfill\eject\noindent \subsection{The classical Hall effect} The classical {\sc Hall} effect describes the action of a magnetic field ${\bf B}$ on an electric current ${\bf j}$ in a conducting sample with a suitably chosen geometry: let the rectangular conducting strip with dimensions $L_x,L_y,L_z$ in the x-, y-, z-directions lie in the xy-plane, and let the current flow in the x-direction. The action of a magnetic field in the z-direction leads to a voltage drop in the y-direction, which we can pick up between the upper and lower edges of the sample. Reversing the direction of the current or the direction of the magnetic field oriented perpendicular to the xy-plane changes the polarity of the observed voltage.
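Before turning to the classical {\sc Hall} geometry in detail, here is a quick numerical cross-check (our addition) of the combination $h/e^2$ quoted in the dimensional analysis above; the tiny difference from the value given there presumably reflects the constants in use when this manual was written. \begin{verbatim}
# von Klitzing combination h/e^2 from the present (SI-exact) values of the
# Planck constant and the elementary charge.
h = 6.62607015e-34    # J s
e = 1.602176634e-19   # C
print(h / e**2)       # -> 25812.807... Ohm, i.e. about 25.813 kOhm
\end{verbatim}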
\bild{qhe_000h}{The classic experiment according to {\sc Hall} \cite{Hall}}{15} \par In mathematical terms: let \begin{eqnarray} {\bf j} &=& (j_x,0,0), \\ {\bf E} &=& (0,E_y,0), \end{eqnarray} as well as \begin{eqnarray} {\bf B} = (0,0,B_z). \end{eqnarray} In the classical regime the {\it longitudinal magnetoresistance\/} \begin{equation} \varrho_{xx} ({\bf B}) = \frac{E_x}{j_x} \end{equation} is independent of the magnetic field, while the {\it transverse magnetoresistance\/} \begin{equation} \varrho_{yx} ({\bf B}) = \frac{E_y}{j_x} \end{equation} is proportional to the applied magnetic field, so that it makes sense to define the so-called {\it Hall constant\/} \begin{equation} R_{\mbox{\it Hall}} = \frac{R_H}{B} = \frac{E_y}{j_x\,B}. \end{equation} \bild{qhe_000t}{Typical {\sc Hall} geometry}{8} \par The cause of this so-called {\sc Hall} effect lies, of course, in the {\sc Lorentz} force acting on the charge carriers moving through the solid. From it we can derive the ratio of the observed {\sc Hall} voltage to the current flowing in the x-direction, the so-called {\sc Hall} resistance. This picture can already be developed within the relatively simple single-particle picture. \par Let ${\bf v}$ be the velocity of the electrons. With $e$ the elementary charge, the well-known, universally valid hydrodynamic relation, which relates the current density of a transport phenomenon to the particle density and velocity, reads for a current of electrons {\it in three spatial dimensions\/} \begin{equation} {\bf j}_{3D} = e\,n_{3D}\,{\bf v}. \end{equation} Here $n_{3D}$ denotes the $3D$ density, that is, for the case of our sample geometry (with $N_e$ the number of charge carriers) \begin{equation} n_{3D} = \frac {N_e} { L_x \cdot L_y \cdot L_z } \end{equation} and thus \begin{equation} {\bf j}_{3D} = e \cdot \frac { N_e } { L_x \cdot L_y \cdot L_z } \cdot {\bf v}. \end{equation} \par {\it Remark:\/} To save ourselves writing now and in the future, we use {\it per conventionem\/} the so-called {\it technical current direction\/}: the relevant charge carriers flow from the positive to the negative terminal. In other words: unless we explicitly state the contrary, we pretend that the electrons are positively charged. \par Now the magnitude $v$ of the velocity ${\bf v}$ of a charge carrier in the x-direction is just the quotient $L_x/t$ of the sample length $L_x$ and the time interval $t$ it needs to traverse this length, so we have \begin{equation} j_{3D} = e \cdot \frac { N_e \cdot v } { L_x \cdot L_y \cdot L_z } = e \cdot \frac { N_e \cdot L_x/t } { L_x \cdot L_y \cdot L_z } = e \cdot \frac { N_e /t } { L_y \cdot L_z } = \frac { I } { L_y \cdot L_z }, \end{equation} where we have used the fact that the current is just the number $N_e/t$ of charge carriers traversing the sample per time interval $t$, multiplied by the elementary charge $e$. \par If we regard the system as spatially 2-dimensional, we ignore the extension in the z-direction. In this case we simply set \begin{equation} {\bf j}_{2D} = e\,n_{2D}\,{\bf v}. \end{equation} In particular, for the geometry we have chosen, \begin{equation} j_{{2D},{x}}=\frac{I}{L_y}=e\,n_{2D}\,v_x.
\end{equation} \par Back to the {\sc Hall} effect: we obviously have a stationary state when the {\sc Lorentz} force ${\bf F}_B$ and the electrostatic force ${\bf F}_H$, which is proportional to the gradient of the {\sc Hall} voltage $U_H$, just balance each other: \begin{equation} {\bf F}_B = - {\bf F}_H \end{equation} with \begin{eqnarray} {\bf F}_H &=& e\, {\bf E}_H , \\ {\bf F}_B &=& e\, ({\bf v} \times {\bf B}) . \end{eqnarray} Substituting immediately gives \begin{equation} ({\bf v} \times {\bf B}) = - {\bf E}_H. \end{equation} Since in our geometry \begin{eqnarray} {\bf v} \times {\bf B} &=& (v_x, 0, 0) \times (0, 0, B_z) \;=\; (0, -v_x B_z, 0) , \\ {\bf E}_H &=& (0 , E_y, 0 ) \;=\; (0, U_H/L_y, 0) , \end{eqnarray} we obtain \begin{equation} v_x\,B_z = E_y \end{equation} and, with \begin{equation} v_x = \frac {I}{L_y} \, \frac{1}{e\,n_{2D}} \end{equation} finally \begin{equation} \frac{I}{L_y} \, \frac{B_z}{e\,n_{2D}} = \frac{U_H}{L_y}. \end{equation} The expression for the {\sc Hall} voltage thus reads \begin{equation} U_H = \frac{B_z}{e\,n_{2D}} \, I =: R_H I, \end{equation} that is, \begin{equation} R_H=\frac{B_z}{e\,n_{2D}} \end{equation} and therefore \begin{equation} \sigma_H=\frac{e\,n_{2D}}{B_z}. \end{equation} \par As already described, one also normalizes the {\sc Hall} resistance $R_H$ to the applied magnetic field according to \begin{equation} R_{\mbox{\it Hall}} = \frac{R_H}{B} = \frac{1}{e\,n_{2D}} \end{equation} and thereby defines the {\sc Hall} constant already mentioned. When {\sc Edwin Herbert Hall} carried out his experiments in the 1880s, he noticed that the sign of the charge $e$ - depending on the material considered - could be either positive or negative. This was the first hint of the difference between electron and hole conduction. Theorists call the in general magnetic-field-dependent relation \begin{equation} {\bf E} = \mbox{\boldmath$\varrho$} \,{\bf j}, \end{equation} where $\mbox{\boldmath$\varrho$}$ is to be understood as a tensor quantity (or matrix operator), the {\sc Ohm}-{\sc Hall} law. \vfill\eject\noindent \subsection{A possible quantization of the Hall effect} The classical relation \begin{equation} R_H=\frac{B_z}{en_{2D}} \end{equation} can of course also be written differently, namely - using {\sc Planck}'s quantum of action - as \begin{equation} R_H=\frac{h} {\left( { \displaystyle\frac{hn_{2D}}{eB_z} } \right) e^2 }. \end{equation} Now, in \begin{equation} \Phi_0=\frac{h}{e} \end{equation} we recognize the {\sc London} flux quantum, a quantity with many meanings (see below), but which is most easily understood as the quantum mechanical realization of the concept of a magnetic field line. If we measure the strength of the magnetic field $B_z$ in units of $\Phi_0$, we are counting magnetic field lines. Their number is given by \begin{equation} n_{\Phi_0} = \frac{B_z}{\Phi_0} = \frac{B_z}{h/e}, \end{equation} so that \begin{equation} R_H= \frac{h} {\left( { \displaystyle\frac{n_{2D}}{n_{\Phi_0}} } \right) e^2 } =: \frac{h}{\nu e^2}.
\end{equation} At this point we can already draw the conclusion: if, in an idealized 2-dimensional system, not only the electric charges but also the magnetic flux lines occur in the form of elementary quanta, then the {\it filling factor\/} \begin{equation} \nu = \frac{n_{2D}}{n_{\Phi_0}} \end{equation} is an integer (or rational) number. The {\sc Hall} resistance then obeys a quantization rule of the form \begin{equation} R_H=\frac{h}{\nu e^2}. \end{equation} The central question is where the microscopic cause of such behaviour is to be sought. To discuss this question, we first turn our attention to the kinematics of the system in question. \vfill\eject\noindent \subsection{The cyclotron motion of the electrons} Let us jump back once more to the relation valid in the stationary case, \begin{equation} {\bf v} \times {\bf B} = - {\bf E}_H, \end{equation} which, for mnemonic reasons - omitting the index - we now write as \begin{equation} 0 = e \, ( {\bf E} + {\bf v}\times{\bf B} ). \end{equation} Sometimes it is useful to pass to a somewhat more abstract and at the same time more elegant formulation. Instead of \begin{equation} v_x = \frac{1}{e\,n_{2D}} \, \frac{I} {L_y} \end{equation} we now write \begin{equation} {\bf v} = \frac{1}{e\,n_{2D}} \cdot {\bf j}_{2D}, \end{equation} so that we recognize the equation \begin{equation} 0 = e\,n_{2D}\,U_H - I B_z \end{equation} in the form \begin{equation} 0 = e\,n_{2D} \cdot {\bf E} + {\bf j}_{2D} \times {\bf B}. \end{equation} Its solution can - just as elegantly - be written as \begin{equation} {\bf j}_{2D} = e\,n_{2D} \cdot \frac{ {\bf B} \times {\bf E} }{ {\bf B}^2 } , \end{equation} which implies for the absolute values \begin{equation} | {\bf j}_{2D} | = \frac{e\,n_{2D}}{B} \, | {\bf E} | = \sigma_H \, | {\bf E} |, \phantom{xxx} {\bf j}_{2D} \perp {\bf E}. \end{equation} \par Let us now go beyond the stationary approximation and consider the full classical dynamics, including the circling of the electrons in the magnetic field, the so-called {\it cyclotron motion\/}. Its physical cause is again - the {\sc Lorentz} force. \par Let us recall what we learned in the lectures on theoretical mechanics \cite{Goldstein80}: the equation for the {\sc Lorentz} force can be derived from the {\sc Lagrange} function \begin{equation} L({\bf x},\dot{\bf x},t)= T-U= \frac{m}{2}\dot{\bf x}^2 + e\, {\bf A}({\bf x},t)\cdot\dot{\bf x} - e\, V({\bf x},t) \end{equation} by applying the {\sc Lagrange} equations \begin{equation} \frac{d}{dt} \frac{\partial L}{\partial\dot x_i} - \frac{\partial L}{\partial x_i} = 0. \end{equation} One shows this by expanding the total time derivative, using the chain rule, into a sum of partial derivatives, \begin{equation} \frac{d}{dt} = \dot x \, \frac{\partial}{\partial x} + \ddot x \, \frac{\partial}{\partial \dot x} + \frac{\partial}{\partial t}, \end{equation} and inserting the resulting expression into the {\sc Lagrange} equations \begin{equation} \frac{\partial^2 L}{\partial\dot x_j\partial\dot x_i}\ddot x_j + \frac{\partial^2 L}{\partial x_j\partial\dot x_i} \dot x_j + \frac{\partial}{\partial t} \frac{\partial L}{\partial\dot x_i} - \frac{\partial L}{\partial x_i} = 0.
\end{equation} We obtain \begin{equation} m\ddot x_i + e\, \frac{\partial A_i}{\partial x_j}\dot x_j + e\, \frac{\partial A_i}{\partial t} + \left( e\, \frac{\partial V}{\partial x_i} - e\, \frac{\partial A_j}{\partial x_i}\dot x_j \right) = 0 \end{equation} or \begin{eqnarray} m\ddot x_i &=& e\, \left( - \frac{\partial V}{\partial x_i} - \frac{\partial A_i}{\partial t} \right) + e\,\dot x_j\, \left( \frac{\partial A_j}{\partial x_i} - \frac{\partial A_i}{\partial x_j} \right) \nonumber\\ &=& e\, \left( - \frac{\partial V}{\partial x_i} - \frac{\partial A_i}{\partial t} \right) + e\,\dot x_j\, \left( ( \delta_{il}\delta_{jm} - \delta_{jl}\delta_{im} ) \, \frac{\partial A_m}{\partial x_l} \right) \nonumber\\ &=& e\, \left( - \frac{\partial V}{\partial x_i} - \frac{\partial A_i}{\partial t} \right) + e\,\dot x_j\, \left( ( \varepsilon_{ijk}\varepsilon_{klm} ) \, \frac{\partial A_m}{\partial x_l} \right) \nonumber\\ &=& e\, \left( - \frac{\partial V}{\partial x_i} - \frac{\partial A_i}{\partial t} \right) + e\,\varepsilon_{ijk}\dot x_j\, \left( ( \varepsilon_{klm} ) \, \frac{\partial A_m}{\partial x_l} \right). \end{eqnarray} Here we have used the {\sc Einstein} summation convention, which assumes that repeated indices are always summed over. \par $\varepsilon_{klm}$ is the totally antisymmetric or {\sc Levi}-{\sc Civita} tensor in 3D.\footnote{$\varepsilon_{123}=\varepsilon_{231}=\varepsilon_{312}=1$, $\varepsilon_{132}=\varepsilon_{213}=\varepsilon_{321}=-1$, all others equal to $0$.} It fulfils the identity we have just exploited, \begin{equation} \varepsilon_{ijk}\varepsilon_{klm} = \delta_{il}\delta_{jm}-\delta_{jl}\delta_{im} \end{equation} and defines the well-known vector or {\sc Gibbs} product by \begin{equation} ( {\bf a} \times {\bf b} )_k = \varepsilon_{klm} a_l b_m. \end{equation} \par In a somewhat more familiar notation we recognize in the above equation of motion that of a particle moving under the influence of a {\sc Lorentz} force, \begin{eqnarray} m \dot {\bf v} &=& e \, ( {\bf E} + {\bf v} \times ( {\bf \mbox{\boldmath$\nabla$}} \times {\bf A} )) \nonumber\\ &=& e \, ( {\bf E} + {\bf v} \times {\bf B} ) , \end{eqnarray} where \begin{equation} {\bf v} = \dot {\bf x}. \end{equation} It should be remembered that, although the homogeneous {\sc Lagrange} equations retain their validity here, the system considered here is not conservative in the usual sense: the {\sc Lorentz} force is not the {\it gradient\/} of a potential, \begin{equation} {\bf F}\not=-{\bf grad}\;U, \end{equation} but rather \begin{equation} {\bf F}= -{\bf grad}\;U+ \frac{d}{dt}\,{\bf grad}\,_{\dot{\bf x}}\;U, \end{equation} with the {\it velocity-dependent potential\/} \begin{equation} U= e\, V- e\, {\bf A}\dot{\bf x}. \end{equation} \par An equation of motion for the {\sc Hall} problem which goes beyond the stationary approximation thus has the form \begin{equation} m \, \dot {\bf v} = e \,( {\bf E} + {\bf v}\times{\bf B} ), \end{equation} or, written as an equation of motion for the current, \begin{equation} \frac{m}{e}\,\frac{d{\bf j}_{2D}}{dt} = e\,n_{2D} \cdot {\bf E} + {\bf j}_{2D} \times {\bf B}.
\end{equation} Its solution is given by \begin{equation} {\bf j}_{2D}(t) = e\,n_{2D} \cdot \frac{ {\bf B} \times {\bf E} }{ {\bf B}^2 } + \exp(t\mbox{\boldmath$\omega$}_c \times) \, {\bf j}_{2D,0}, \end{equation} where \begin{equation} \mbox{\boldmath$\omega$}_c=(0,0,\omega_c), \end{equation} with the clever notation, which can be understood as an operator identity, \begin{equation} (\mbox{\boldmath$\omega$}_c\times)= \left( \begin{array}{ccc} 0 & -\omega_c & 0 \\ \omega_c & 0 & 0 \\ 0 & 0 & 0 \end{array} \right). \end{equation} The exponential operator is defined by its expansion into its power series,\footnote{One immediately sees what is meant by realizing that for three normalized and orthogonal unit vectors ${\bf e}_x,{\bf e}_y,{\bf e}_z$ one has \begin{equation} \exp\,(\varphi\,{\bf e}_z\times)\,{\bf e}_x = (\cos\,\varphi)\,{\bf e}_x + (\sin\,\varphi)\,{\bf e}_y. \end{equation}} and ${\bf j}_{2D,0}$ is given by the initial condition at a fixed time. Inserting the solution into the equation yields, for the magnitude of the angular velocity, the so-called {\it cyclotron frequency\/}, \begin{equation} |\mbox{\boldmath$\omega$}_c| = \frac{eB}{m}. \end{equation} \vfill\eject\noindent \subsection{The quantum regime} In the regime of low temperatures, quantum effects dominate (the {\sc Hamilton} {\it operator\/} dominates over $kT$). If the charge carriers in the sample behave like a 2-dimensional electron gas (that is, like an idealized system of non-interacting electrons confined to a 2-dimensional space), the corresponding single-particle {\sc Hamilton} operator is given by {\sc Landau}'s formula \cite{LandauLifshitz}: \begin{equation} H_L=\frac{1}{2}m{\bf v}^2=\frac{({\bf p}-e{\bf A})^2}{2m}, \end{equation} where ${\bf A}$ - as usual - denotes the vector potential. \par In the problem considered here, the magnetic field is constant in time, spatially homogeneous, and oriented in the z-direction. Such a magnetic field is best represented by the curl of a vector potential in the {\it isotropic (symmetric) gauge\/}, that is, \begin{equation} {\bf B}={\bf rot}\;{\bf A} ={\bf rot}\;\left(\frac{1}{2}\,{\bf B}\times{\bf r}\right) \end{equation} with \begin{equation} {\bf B}=(0,0,B_z), \phantom{xxx} {\bf A}=(- B_z y/2, B_z x/2, 0) \end{equation} and \begin{equation} B=B_z=\mbox{\it const\/}. \end{equation} While the components of the canonical momentum ${\bf p}$ fulfil the canonical commutation relations, i.e.\ commute with each other, the components of the so-called kinetic momentum (the product of mass and velocity) \begin{equation} K_j=p_j-eA_j=mv_j \end{equation} do not; instead, in the geometry relevant for us, \begin{equation} \lbrack K_x,K_y \rbrack = i \hbar \cdot eB_z \end{equation} with \begin{equation} H=\frac{K_x^2+K_y^2}{2m} \end{equation} and \begin{equation} B=B_z=\mbox{\it const\/}. \end{equation} This situation is analogous to that of the harmonic oscillator.
Insbesondere haben wir die Entsprechungen \begin{eqnarray} \mbox{2D-{\sc Landau}} &\longleftrightarrow& \mbox{1D-Oszillator} \nonumber\\ &\phantom{=}& \nonumber\\ H=\frac{K_y^2}{2m}+\frac{K_x^2}{2m} &\longleftrightarrow& H=\frac{P^2}{2\mu}+\frac{1}{2}\mu \Omega^2 Q^2 \nonumber\\ &\phantom{=}& \nonumber\\ K_y &\longleftrightarrow& P \nonumber\\ &\phantom{=}& \nonumber\\ m &\longleftrightarrow& \mu \nonumber\\ &\phantom{=}& \nonumber\\ K_x &\longleftrightarrow& \mu\Omega\cdot Q \nonumber\\ &\phantom{=}& \nonumber\\ \lbrack K_x,K_y\rbrack=i \hbar eB_z &\longleftrightarrow& \lbrack \mu\Omega\cdot Q, P \rbrack= \mu\Omega \cdot \lbrack Q,P\rbrack= \mu\Omega \cdot i \hbar \, ; \end{eqnarray} letzteres hei\ss t nat\"urlich \begin{eqnarray} \lbrack Q,P \rbrack = i\hbar. \end{eqnarray} F\"ur die beteiligten $c$-Zahlen% \footnote{$c$-Zahlen sind als {\it classical numbers\/} kommutierende Gr\"o\ss en, $q$-Zahlen als {\it quantized numbers\/} im allgemeinen nicht-kommutierende Gr\"o\ss en, also Operatoren. Diese Sprechweise ist in der mathematischen Physik weit verbreitet und geht auf fr\"uhe Zeiten der Quantentheorie zur\"uck.} gelten die Zuordnungen \begin{eqnarray} \mbox{2D-{\sc Landau}} &\longleftrightarrow& \mbox{1D-Oszillator} \nonumber\\ &\phantom{=}& \nonumber\\ i\hbar\cdot eB_z &\longleftrightarrow& i\hbar\cdot\mu\Omega \nonumber\\ &\phantom{=}& \nonumber\\ m &\longleftrightarrow& \mu \nonumber\\ &\phantom{=}& \nonumber\\ i\hbar\cdot \frac{eB_z}{m} &\longleftrightarrow& i\hbar\cdot\Omega =i\hbar\cdot\sqrt{\frac{D}{\mu}}, \nonumber\\ \end{eqnarray} mit $D$ als Federkonstante. \par Mit anderen Worten: Ein 2-dimensionales mechanisches System in einem konstanten \"au\ss eren Magnetfeld entspricht von seiner algebraischen Struktur her einem ein-di\-men\-si\-o\-na\-len harmonischen Oszillator.% \footnote{% Die Nichtkommutativit\"at hat aber in beiden F\"allen einen unterschiedlichen physikalischen Ursprung. Neuerdings nennt man in der mathematischen Physik eine ganze Reihe von nicht-kommutativen Strukturen \glqq quantisiert\grqq, obwohl sie keinen direkten {\it physikalischen\/} Bezug zur Quantentheorie im Sinne von {\sc Heisenberg}-{\sc Born}-{\sc Jordan}-{\sc Dirac} haben.} \par Entsprechend der Analogie-Beziehung \begin{equation} E_n = \left( n+\frac{1}{2} \right) (\hbar eB_z) \left( \frac{1}{m} \right) \phantom{x}\longleftrightarrow\phantom{x} E_n = \left( n+\frac{1}{2} \right) \hbar\Omega \end{equation} bzw.\ \begin{equation} E_n = \left( n+\frac{1}{2} \right) \hbar \left( \frac{eB_z}{m} \right) \phantom{x}\longleftrightarrow\phantom{x} E_n = \left( n+\frac{1}{2} \right) \hbar\Omega \end{equation} haben die Energieniveaux der Ladungstr\"ager diskrete Werte, die {\it modulo\/} einer gemeinsamen additiven Konstante, welche den Energienullpunkt festlegt, allesamt ganzzahlige Vielfache des Energiequantums $\hbar\,(eB_z/m)$ sind. Die Bedeutung der Bewegungskonstante \begin{equation} \omega_c=\frac{eB}{m} \, , \end{equation} wobei wir statt $B_z$ ab jetzt nur noch $B$ schreiben, wird deutlich, wenn wir uns ver\-ge\-gen\-w\"ar\-ti\-gen, da\ss\ die Elektronen in einem konstanten homogenen Magnetfeld bei verschwindendem \"au\ss erem elektrischen Feld eine Zyklotronbewegung (engl.\ {\it cyclotron motion\/}) vollf\"uhren, also Kreisbahnen durchlaufen, f\"ur welche die {\sc Lorentz}-Kraft der Zentralkraft die Waage h\"alt. Es ist also \begin{equation} \frac{mv^2}{r} = evB \end{equation} und somit \begin{equation} m\omega_c^2r = e \cdot \omega_c r \cdot B, \end{equation} woraus obige Beziehung folgt. 
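\par {\it Zahlenbeispiel (nur zur Orientierung; angenommen werden die freie Elektronenmasse $m=m_e$ und ein Feld von $B=10\,$T):\/} Es ist dann
\begin{equation}
\omega_c=\frac{eB}{m}\approx\frac{1.6\cdot 10^{-19}\,{\rm C}\cdot 10\,{\rm T}}{9.1\cdot 10^{-31}\,{\rm kg}}\approx 1.8\cdot 10^{12}\,{\rm s}^{-1},
\phantom{xxx}
\hbar\omega_c\approx 1.2\,{\rm meV},
\end{equation}
was einer Temperatur von $\hbar\omega_c/k_B\approx 13\,$K entspricht. In Galliumarsenid zum Beispiel ist die effektive Masse der Ladungstr\"ager deutlich kleiner (siehe unten), so da\ss\ die Quantisierung dort bereits bei kleineren Feldern bzw.\ h\"oheren Temperaturen eine Rolle spielt.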
\par F\"ur die quantisierten Energieniveaux der Zyklotronbewegung folgt unter Einbeziehung der Tatsache, da\ss\ der Energienullpunkt im allgemeinen separat festgelegt werden mu\ss\ \begin{equation} E_n=\left( n+\frac{1}{2} \right)\, \hbar\omega_c+\mbox{\it const}. \end{equation} \par {\it Bemerkung:\/} Eine nicht zu untersch\"atzende Vorsicht mu\ss\ darin ge\"ubt werden, es nicht zu Verwechselungen von klassischen und quantenmechanischen Konzepten kommen zu lassen. Dies ist nicht anders als beim gel\"aufigen Beispiel des harmonischen Oszillators: Klassisch berechnet sich seine Energie gem\"a\ss\ \begin{eqnarray} E_{cl} &=& E_{kin}+E_{pot} \nonumber\\ &\phantom{=}& \nonumber\\ &=& \frac{1}{2} \mu v^2 + \frac{1}{2} Dx^2 \nonumber\\ &\phantom{=}& \nonumber\\ &=& \frac{1}{2} \mu v^2 + \frac{1}{2} \mu\Omega^2x^2 \nonumber\\ &\phantom{=}& \nonumber\\ &=& \frac{1}{2} \mu\Omega^2A^2(\cos\Omega t)^2 + \frac{1}{2} \mu\Omega^2A^2(\sin\Omega t)^2 \nonumber\\ &\phantom{=}& \nonumber\\ &=& \frac{1}{2} \mu\Omega^2A^2. \end{eqnarray} Quantenmechanisch hingegen ist \begin{eqnarray} E_{qu} &=& \left(n+\frac{1}{2}\right)\,\hbar\Omega. \end{eqnarray} Klassisch h\"angt seine Energie quadratisch von der Frequenz $\Omega$ und quadratisch von der Amplitude $A$ ab. Quantenmechanisch hingegen h\"angt seine Energie linear von der Frequenz $\Omega$ ({\it modulo\/} einer Nullpunktsenergie) und linear von der Quantenzahl $n$ ab. F\"ur hohe Quantenzahlen allerdings mu\ss\ die quantentheoretische Beschreibung in die klassische \"u\-ber\-ge\-hen. Dies ist dann der Fall, wenn \begin{equation} n\rightarrow\infty \end{equation} und asymptotisch gilt \begin{equation} n \approx \frac{\mu\Omega A^2}{2\hbar}. \end{equation} F\"ur die Zyklotronbewegung ist klassisch \begin{eqnarray} E_{cl} &=& \frac{1}{2} mv^2 \nonumber\\ &\phantom{=}& \nonumber\\ &=& \frac{1}{2} m\omega_c^2r^2 \; , \end{eqnarray} quantenmechanisch hingegen \begin{eqnarray} E_{qu} &=& \left(n+\frac{1}{2}\right)\,\hbar\omega_c. \end{eqnarray} Der klassische Grenzfall der quantenmechanischen Beschreibung ist nun charakterisiert durch \begin{equation} n\rightarrow\infty \end{equation} mit \begin{equation} n \approx \frac{m\omega_c r^2}{2\hbar}. \end{equation} \par F\"ur die klassische Zyklotronbewegung erhalten wir den Bahnradius aus Gleichsetzen von {\sc Lorentz}-Kraft und Zentralkraft \begin{equation} r_{class}=\frac{mv^2}{evB} =\frac{\sqrt{2m \cdot mv^2/2}}{eB} =\frac{\sqrt{2mE}}{eB}. \end{equation} Setzen wir anstelle der klassischen kinetischen Energie \begin{equation} E_{kin} = \frac{1}{2} mv^2/2 \end{equation} die quantenmechanische Nullpunktsenergie \begin{equation} E = \frac{1}{2} \hbar\omega_c \end{equation} in die obige Gleichung ein, definieren wir eine {\it magnetische L\"ange\/} \begin{equation} l_{B} = \frac{\sqrt{2m\hbar\omega_c/2}}{eB} = \sqrt{\frac{\hbar}{eB}}. \end{equation} Dies ist die fundamentale L\"ange des {\sc Landau}-Problems. Wir schreiben auch \begin{equation} \frac{h}{eB} = \frac{2\pi\hbar}{eB} = 2\pi l_B^2. \end{equation} Das zugeordnete Nullpunktsorbital, welches man sich {\it nicht\/} als eine Kreisbewegung im klassischen Sinne vorstellen darf, entspricht - im Sinne der von uns betrachteten Analogie - der Nullpunktsschwingung eines harmonischen Oszillators. \vfill\eject\noindent% \subsection{Flu\ss quanten versus Ladungstr\"ager} Die Gr\"o\ss e \begin{equation} \Phi_0=\frac{h}{e} \end{equation} nennen wir das einer elektrischen Ladung $e$ zugeordnete {\it magnetische Flu\ss quantum\/}. 
Seine Bedeutung liegt im {\sc Aharonov}-{\sc Bohm}-Effekt \cite{Franz39, EhrenbergSiday49, AharonovBohm59}: Wie bereits {\sc Franz} bemerkte \cite{Franz39}, beeinflu\ss t der von einer Elektronenwelle eingeschlossene magnetische Flu\ss\ deren Wellenzahl ({\it besser:\/} Phase). {\sc Ehrenberg} und {\sc Siday} wiesen in ihrer bahnbrechenden elektronenoptischen Arbeit darauf hin, da\ss\ ein solcher Einflu\ss\ auch dann bestehen bleibt, wenn die in zwei koh\"arente Teilstrahlen aufgespaltene Elektronenwelle nirgendwo ein magnetisches Feld durchl\"auft. {\sc Aharonov} und {\sc Bohm} gaben diesem Effekt schlie\ss lich Ihren Namen und hoben die physikalische Bedeutung des Vektorpotentials in der Quantenmechanik hervor. \par Das Interferenzmuster in einem Beugungsexperiment eines elektrisch abgeschirmten magnetischen Flusses $\Phi$ h\"angt also ab vom {\it nicht-integrablen Phasenfaktor\/} \cite{WuYang75} \begin{equation} \exp \, \frac{ie}{\hbar} \, \oint_{\partial S} {\bf A} \, d{\bf l} = \exp \, \frac{ie}{\hbar} \, \int\!\!\int_{S} {\bf rot}\,{\bf A} \, d{\bf F} = \exp \, \frac{ie}{\hbar} \, \Phi, \end{equation} der gerade ein Vielfaches der Eins ist, wenn \begin{equation} \Phi = n \cdot \frac{h}{e} =: n \cdot \Phi_0, \phantom{xxx} n\in{\bf Z}. \end{equation} In diesem Fall ist die Flu\ss linie quantenmechanisch nicht observabel. In der {\sc Landau}-{\sc Ginzburg}-Theorie der Supraleitung ist die zentrale Gr\"o\ss e ein Ordnungsparameter, der eine gewisse formale \"Ahnlichkeit zu einer quantenmechanischen Wellenfunktion besitzt. Die Theorie ist allerdings nur dann mit der mikroskopischen Theorie \`a la {\sc Bardeen}, {\sc Cooper} und {\sc Schrieffer} vertr\"aglich, wenn die elektrische Ladung der \glqq supraleitenden Elektronen\grqq, der sogenannten {\sc Cooper}-Paare, statt $e$ genau doppelt so hoch ist, n\"amlich $2e$. Demzufolge haben die magnetischen Flu\ss quanten der Supraleitung den Wert $h/2e$. Sie manifestieren sich als Flu\ss schl\"auche oder {\sc Abrikosov}-Vortizes, die in das supraleitende Medium unter bestimmten Umst\"anden eindringen k\"onnen.% \footnote{Und es ist geradezu eine Ironie der Geschichte, da\ss\ in der vollst\"andigen Theorie die Flu\ss quanten wieder observabel werden, n\"amlich durch Quasiteilchen-Moden im Vortexkern, sogenannte chirale Fermionen, die man sich als eingefangene Irrl\"aufer \glqq halber {\sc Cooper}-Paare\grqq\ denken darf.} \par Wir wollen nun zeigen, da\ss\ auch im quantisierten {\sc Hall}-Effekt die Flu\ss quanten eine entscheidende Rolle spielen. \par Wegen der (n\"aherungsweisen) Translationsinvarianz des Systems (in x- und y-Richtung) ist jedes dieser {\sc Lan\-dau}-Niveaux hochgradig entartet.% \footnote{F\"ur die explizite Behandlung der {\sc Schr\"odinger}-Gleichung im konstanten \"au\ss eren Magnetfeld sei der Leser auf das Lehrbuch von {\sc Landau} und {\sc Lifshitz} verwiesen \cite{LandauLifshitz}.} \footnote{An dieser Stelle sei nur erw\"ahnt, da\ss\ - wegen der mathematischen Form des Vektorpotentials - die Eigenzust\"ande der {\sc Hamilton}-Funktion des {\sc Landau}-Problems denen der z-Komponente des Bahndrehimpulsoperators ${\bf L} = - i \hbar {\bf r} \times {\bf grad}$ entsprechen.} Um exakt zu sein: Im Falle eines unendlich ausgedehnten Systems ist die Entartung tats\"achlich unendlich; denn aus einer vorgegebenen L\"osung k\"onnen wir durch Anwendung von x- und y-Translationen beliebig viele andere erzeugen. 
Im Falle eines endlich ausgedehnten Systems, zum Beispiel mit den Abmessungen $L_x \times L_y$ k\"onnte man versuchen, in der \"ublichen Weise die Zu\-stands\-dich\-te auszurechnen \cite{Kittel}: Man stelle sich die Wellenfunktionen als stehende Wellen vor \begin{eqnarray} \psi(0,y) &=& \psi(L_x, y) \;=\; 0 \nonumber\\ \psi(x,0) &=& \psi( x,L_y) \;=\; 0 \end{eqnarray} oder fordere - dem Transportproblem angemessener - wenigstens periodische Randbedingungen \begin{eqnarray} \psi(x+L_x,y ) &=& \psi(x ,y ) \nonumber\\ \psi(x ,y+L_y) &=& \psi(x ,y ) \nonumber \end{eqnarray} und z\"ahle die Zust\"ande bis hin zu einer vorgegebenen Grenzenergie ab \cite{Kittel}. Die nach dieser Energie abgeleitete Anzahl der Zust\"ande ist die Zustanddichte des 2-dimensionalen Elektronengases in dem betrachteten endlich ausgedehnten System. Weiter unten werden wir diese Rechnung explizit durchf\"uhren und auch das richtige Ergebnis erhalten. \par Dieser Zugang ist aber nicht wirklich begr\"undet, wenn wir davon ausgehen, da\ss\ das System sich in einem homogenen Magnetfeld befindet und die Wellenfunktionen ausgedehnt sind. Die Wahl der Randbedingungen mu\ss\ n\"amlich die lokale Eichinvarianz ({\it engl.\/} local gauge invariance) erf\"ullen, eines der grundlegenden Prinzipien von Quantenmechanik und Elektrodynamik. Eine lokal eichinvariante Randbedingung, welche die Anwesenheit des Magnetfeldes respektieren w\"urde, h\"atte die Form \begin{eqnarray} \psi(x+L_x,y) &=&\exp\, \left\{ i\,\frac{e}{\hbar}\,\alpha(x,y) \right\} \cdot\psi(x,y) \nonumber\\ \psi(x ,y+L_y) &=&\exp\, \left\{ i\,\frac{e}{\hbar}\,\beta (x,y) \right\} \cdot\psi(x,y) \end{eqnarray} und gleichzeitig \begin{eqnarray} {\bf A}(x+L_x,y )&=&{\bf A}(x,y)+{\bf grad}\,\alpha(x,y) \nonumber\\ {\bf A}(x ,y+L_y)&=&{\bf A}(x,y)+{\bf grad}\,\beta (x,y). 
\end{eqnarray} Somit ist f\"ur ein geeignet gew\"ahltes Koordinatensystem: ($\partial$ steht f\"ur Rand) \begin{eqnarray} \Phi &=& B\cdot L_x L_y \nonumber\\ &\phantom{=}& \nonumber\\ &=& \int_{L_x\times L_y} {\bf B}(x,y)\,d{\bf S} \nonumber\\ &\phantom{=}& \nonumber\\ &=& \int_{L_x\times L_y} {\bf rot}\,{\bf A}(x,y)\,d{\bf S} \nonumber\\ &\phantom{=}& \nonumber\\ &=& \oint_{\partial(L_x\times L_y)} {\bf A}(x,y)\,d{\bf l} \nonumber\\ &\phantom{=}& \nonumber\\ &\phantom{=}& \nonumber\\ &=& \int_{(0 ,0 )\rightarrow(L_x,0 )} {\bf A}(x,y)\,d{\bf l}+ \int_{(L_x,0 )\rightarrow(L_x,L_y)} {\bf A}(x,y)\,d{\bf l}+ \nonumber\\ &\phantom{=}& \nonumber\\ &\phantom{=}& \nonumber\\ & &\phantom{12345678} +\int_{(L_x,L_y)\rightarrow(0 ,L_y)} {\bf A}(x,y)\,d{\bf l}+ \int_{(0 ,L_y)\rightarrow(0 ,0 )} {\bf A}(x,y)\,d{\bf l} \nonumber\\ &\phantom{=}& \nonumber\\ &\phantom{=}& \nonumber\\ &=& \int_{(0 ,0 )\rightarrow(L_x,0 )} {\bf A}(x,y)\,d{\bf l} -\int_{(0 ,L_y)\rightarrow(L_x,L_y)} {\bf A}(x,y)\,d{\bf l}+ \nonumber\\ &\phantom{=}& \nonumber\\ & &\phantom{12345678} +\int_{(L_x,0 )\rightarrow(L_x,L_y)} {\bf A}(x,y)\,d{\bf l} -\int_{(0 ,0 )\rightarrow(0 ,L_y)} {\bf A}(x,y)\,d{\bf l} \nonumber\\ &\phantom{=}& \nonumber\\ &\phantom{=}& \nonumber\\ &=& \int_{(0 ,0 )\rightarrow(L_x,0 )} {\bf A}(x ,y )\,d{\bf l} -\int_{(0 ,0 )\rightarrow(L_x,0 )} {\bf A}(x ,y+L_y)\,d{\bf l}+ \nonumber\\ &\phantom{=}& \nonumber\\ & &\phantom{12345678} +\int_{(0 ,0 )\rightarrow(0 ,L_y)} {\bf A}(x+L_x,y )\,d{\bf l}+ -\int_{(0 ,0 )\rightarrow(0 ,L_y)} {\bf A}(x ,y )\,d{\bf l} \nonumber\\ &\phantom{=}& \nonumber\\ &\phantom{=}& \nonumber\\ &=& -\int_{(0 ,0 )\rightarrow(L_x,0 )} {\bf grad}\,\beta (x ,y )\,d{\bf l} +\int_{(0 ,0 )\rightarrow(0 ,L_y)} {\bf grad}\,\alpha(x ,y )\,d{\bf l} \nonumber\\ &\phantom{=}& \nonumber\\ &=& - \left[\, \beta (L_x,0 ) - \beta (0,0) \,\right] + \left[\, \alpha(0 ,L_y) - \alpha(0,0) \,\right] . \end{eqnarray} Wenn also $\alpha$ und $\beta$ identisch verschwinden w\"urden - wie im Falle periodischer Randbedingungen im konventionellen Sinne - w\"are der Gesamtflu\ss\ durch die Probe identisch Null! Lokale Eichinvarianz {\it erzwingt\/} die Verwendung von Randbedingungen, welche die Wellenfunktionen {\it modulo\/} einer Phase festlegen. Diese ist im allgemeinen wegabh\"angig ({\it griech.\/} anholonom). \par Eindeutigkeit der Wellenfunktion verlangt nun, da\ss\ die Phase $\varphi$ der Wellenfunktion \begin{equation} \Psi = \varrho \cdot \exp\,i\varphi \end{equation} sich am Rand um ein Vielfaches von $2\pi$ dreht. Das hei\ss t, die Schleife \begin{equation} \varphi(0 ,0 ) \rightarrow \varphi(L_x,0 ) \rightarrow \varphi(L_x,L_y) \rightarrow \varphi(0 ,L_y) \rightarrow \varphi(0 ,0 ) \end{equation} mu\ss\ (f\"ur $L_x=L_y$) gehen wie \begin{equation} 0 \rightarrow \frac{1}{4} \cdot 2\pi n \rightarrow \frac{2}{4} \cdot 2\pi n \rightarrow \frac{3}{4} \cdot 2\pi n \rightarrow \frac{4}{4} \cdot 2\pi n . \end{equation} F\"ur unseren Fall bedeutet dies, da\ss\ \begin{eqnarray} 1 &=& \exp \, \left\{ i \,\frac{e}{\hbar} \cdot (\, - \left[\, \beta (L_x,0 ) - \beta (0,0) \,\right] + \left[\, \alpha(0 ,L_y) - \alpha(0,0) \,\right] \,) \right\} \nonumber\\ &=& \exp \, \left\{ i \,\frac{e}{\hbar} \,\Phi \right\} \end{eqnarray} und somit \begin{equation} \Phi = 2\pi \, \frac{\hbar}{e} \, n = \frac{h}{e} \, n =: n_{\Phi_0} \cdot \Phi_0. \end{equation} Mit anderen Worten: Wenn die Wellenfunktion ausgedehnt ist, mu\ss\ der Flu\ss\ $\Phi$ in Einheiten des Flu\ss quantums \begin{equation} \Phi_0=h/e \end{equation} quantisiert sein. 
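Diese Bedingung ist makroskopisch alles andere als restriktiv; ein kleines Zahlenbeispiel (Werte nur zur Illustration): Durch eine Probenfl\"ache von $1\,{\rm mm}^2$ treten bei $B=1\,$T bereits
\begin{equation}
\frac{\Phi}{\Phi_0}=\frac{B\cdot L_xL_y}{h/e}
\approx\frac{10^{-6}\,{\rm Vs}}{4.14\cdot 10^{-15}\,{\rm Vs}}
\approx 2.4\cdot 10^{8}
\end{equation}
Flu\ss quanten.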
Die Anzahl $n_{\Phi_0}$ der Flu\ss quanten pro Einheitsfl\"ache berechnet sich aus \begin{equation} B = n_{\Phi_0} \cdot \Phi_0. \end{equation} Die Probenfl\"ache wird durch die Anwesenheit der Flu\ss quanten parkettiert. Je h\"oher das Magnetfeld ist, desto kleiner werden die Fliesen des Parketts. Da in einem endlich ausgedehnten System die Randbedingungen nur noch diskrete Translationen erlauben, welche die Parkettierung respektieren, entspricht der Entartungsgrad pro Einheitsfl\"ache gerade der Anzahl der Flu\ss quanten pro Einheitsfl\"ache und ist gegeben durch \begin{equation} n_{\Phi_0} := \frac{eB}{h}. \end{equation} Ein Check der Einheiten zeigt, das alles mit rechten Dingen zugeht, denn $eB$ hat die Dimension einer Wirkung pro Fl\"ache und $n_{\Phi_0}$ ist eine Fl\"achendichte, also ein Ma\ss\ f\"ur die Teilchenzahl pro Fl\"ache. \par Wieviele Ladungstr\"ager pro Fl\"acheneinheit untergebracht werden k\"onnen, wird nicht nur durch den Entartungsgrad, sondern auch durch die Tatsache vorgegeben, da\ss\ die Ladungstr\"ager Fermionen sind.% \footnote{Mit anderen Worten: Die Elektronen erf\"ullen das {\sc Pauli}-Prinzip. Letzteres ist eine Konsequenz der Tatsache, da\ss\ die Elektronen Fermionen sind, also ununterscheidbare Teilchen, die durch eine total antisymmetrische Vielteilchen-Wellenfunktion beschrieben werden.} Dies legt die Anzahl der {\it erlaubten\/} Zust\"ande pro {\sc Landau}-Niveau und Ein\-heits\-fl\"a\-che fest. Wenn wir der Einfachheit halber den Spin und m\"ogliche zus\"atzliche Quantenzahlen au\ss er acht lassen, kann ein {\sc Landau}-Niveau mit $eB/h$ ununterscheidbaren fermionischen Ladungstr\"agern pro Einheitsfl\"ache aufgef\"ullt werden. Wenn $i$ Landau-Niveaus vollst\"andig gef\"ullt sind, mu\ss\ sich die {\sc Hall}-Leitf\"ahigkeit ergeben als \begin{equation} \sigma_H=\frac{e\,n_{2D} }{B}=i\cdot\frac{e^2}{h}, \end{equation} wobei $n_{2D}$ - wie oben - die 2-dimensionale Ladungstr\"agerdichte bezeichnet. Offensichtlich kann diese Bedingung auch geschrieben werden als \begin{equation} n_{2D} = i \cdot \frac{eB}{h}. \end{equation} Wenn also - im quantenmechanischen Limes $T\rightarrow 0$ - das chemische Potential% \footnote{Das chemische Potential charakterisiert die {\sc Fermi}-Verteilung. Am absoluten Nullpunkt trennt es die besetzten von den unbesetzten Zust\"anden. (Eine detaillierte Erkl\"arung folgt weiter unten.)} genau zwischen zwei {\sc Landau}-Niveaus liegt, mu\ss\ die {\sc Hall}-Leitf\"ahigkeit quantisiert sein! Allerdings gibt es keinen Grund zu erwarten, da\ss\ in einem realen System das chemische Potential immer genau von der Mitte der einen L\"ucke zur Mitte der n\"achsten springt. \vfill\eject\noindent% \subsection{Beweis f\"ur die Abwesenheit des QHE im idealen 2DEG} In der Tat, ein mathematisch rigoroser Beweis zeigt, da\ss\ das ideale freie Elektronengas im unendlich gro\ss en Volumen {\it keinen\/} quantisierten {\sc Hall}-Effekt zeigt. Im folgenden sei dieser Beweis kurz vorgestellt. \par Aus der {\sc Lagrange}-Funktion \begin{equation} L({\bf x},\dot{\bf x},t)= T-U= \frac{m}{2}\dot{\bf x}^2 + e\, {\bf A}({\bf x},t)\cdot\dot{\bf x} - e\, V({\bf x},t) \end{equation} erhalten wir via {\sc Legendre}-Transformation die {\sc Hamilton}-Funktion \begin{eqnarray} H(x,p,t) &=& {\bf p}\dot{\bf x} - L({\bf x},\dot{\bf x},t) \nonumber\\ &=& ( m\dot{\bf x} + e\,{\bf A} ) \,\dot{\bf x} - \frac{m}{2} \dot{\bf x}^2 - e\,{\bf A}\,\dot{\bf x} + e\,V({\bf x},t) \nonumber\\ &=& \frac{m}{2} \dot{\bf x}^2 + e\,V( {\bf x},t ). 
\end{eqnarray} Im {\sc Landau}-Fall gilt \begin{eqnarray} {\bf B}(x,t) &=& {\bf B} \,=\, {\it const\/} \\ {\bf A}(x,t) &=& \frac{1}{2} \, {\bf B} \times {\bf x} \\ {\bf E}(x,t) &=& {\bf E} \,=\, {\it const\/} \\ V (x,t) &=& -\int_0^{\bf x} {\bf E}({\bf x}',t)\,d{\bf x}' \,=\, -{\bf E} \cdot {\bf x}. \end{eqnarray} Die entsprechende {\sc Hamilton}-Funktion bezeichnen wir als $H_{LE}$. \par Die {\sc Hamilton}schen Bewegungsgleichungen k\"onnen auch durch die {\sc Poisson}-Klam\-mern% \begin{equation} \lbrace A,B \rbrace = \sum_i \, \frac{\partial A}{\partial q_i} \frac{\partial B}{\partial p_i} - \frac{\partial A}{\partial p_i} \frac{\partial B}{\partial q_i} \end{equation} ausgedr\"uckt werden \cite{Goldstein80}. So ist \begin{eqnarray} \dot{\bf x} &=& \lbrace {\bf x}, H_{LE} \rbrace , \\ \dot{\bf v} &=& \lbrace {\bf v}, H_{LE} \rbrace , \\ \frac {d{\bf j}_{2D}}{dt} &=& \lbrace {\bf j}_{2D}, H_{LE} \rbrace . \end{eqnarray} Diese Formulierung ist von Nutzen, wenn wir das vorliegende System quantisieren wollen. Wir ersetzen die klassischen {\sc Poisson}-Klammern durch die quantenmechanischen Kommutatoren multipliziert mit einem Faktor $-i/\hbar$. F\"ur die quantenmechanischen Operatorgr\"o\ss en gilt somit% \footnote{% Man beachte, da\ss\ die durch die {\sc Poisson}-Klammern erzeugte algebraische Struktur im allgemeinen {\it nicht\/} zu der durch die quantenmechanischen Kommutatoren erzeugte Struktur isomorph ist, obwohl die Analogie sehr suggestiv ist und zum Repertoire vieler einf\"uhrender Vorlesungen (und professionell arbeitender Wissenschaftler) geh\"ort. W\"are dem so, w\"urden sich klassische Mechanik und Quantenmechanik nicht wesentlich unterscheiden. Das Theorem von {\sc Groenewold} und {\sc van\,Hove} zeigt, da\ss\ eine \glqq kanonische Quantisierung\grqq, also die nach einem Kanon - einer Regel also - erfolgte Konstruktion eines quantenmechanischen Systems aus einem klassischen {\it nicht\/} m\"oglich ist \cite{AbrahamMarsden, Groenwold46, vanHove51}. Im Grunde genommen ist dies nicht verwunderlich, ist doch die klassische Physik nur eine N\"aherung der quantenmechanischen. Und es ist nicht zu erwarten, da\ss\ man die exakte Theorie in einer wohldefinierten Weise aus einer approximativen herleitet: Kanonische Quantisierung ist allenfalls eine Heuristik, wenn nicht die besondere Form eines akademischen Ratespiels.}% \begin{eqnarray} \dot{\bf x} &=& - \frac{i}{\hbar} \, \lbrack {\bf x}, H_{LE} \rbrack, \\ \dot{\bf v} &=& - \frac{i}{\hbar} \, \lbrack {\bf v}, H_{LE} \rbrack, \\ \frac{d{\bf j}_{2D}}{dt} &=& - \frac{i}{\hbar} \, \lbrack {\bf j}_{2D}, H_{LE} \rbrack. \end{eqnarray} Nun ist es eine wohlbekannte Tatsache, da\ss\ f\"ur den Fall, da\ss\ die {\sc Hamilton}-Funktion quadratisch in den Orts- und Impulsvariablen ist, die quantenmechanischen Bewegungsgleichungen den klassischen entsprechen. Eine explizite Evaluation der Operatorgleichung verifiziert dies. So erhalten wir als L\"osung \begin{equation} {\bf j}_{2D}(t) = e\,n_{2D} \cdot \frac{ {\bf B} \times {\bf E} }{ {\bf B}^2 } + \exp(t \mbox{\boldmath$\omega$}_c \times) \, {\bf j}_{2D,0} . \end{equation} Um eine Vorhersage \"uber den Me\ss wert des elektrischen Stroms in unserem {\sc Hall}-Ex\-peri\-ment zu machen, m\"ussen wir den zeitgemittelten Erwartungswert \begin{equation} \overline{<{\bf j}_{2D}(t)>} = \lim_{T\rightarrow\infty} \, \frac{1}{T} \, \int_0^T \, <{\bf j}_{2D}(t)> \, dt \end{equation} des {\sc Heisenberg}-Strom\-ope\-ra\-tors ${\bf j}_{2D}(t)$ ausrechnen. 
Dabei ist zu beachten, da\ss\ wir thermodynamische Gleichgewichtszust\"ande eines quantisierten fermionischen Systems betrachten m\"ussen. \par {\it Zur Erinnerung:\/} In der ersten Quantenmechanik-Vorlesung f\"angt man mit {\it reinen Zu\-st\"an\-den\/} an, die durch Zustandsvektoren $|\Psi>$ beschrieben werden.% \footnote{% Meist verwendet man die Spektralamplituden $\psi(q)$ der Zerlegung \begin{equation} |\Psi> = \int_Q dq \, |q> \, <q| \Psi> =: \int_Q dq \, |q> \, \psi(q) \end{equation} des abstrakten Zustandsvektors $|\Psi>$ nach Eigenzust\"anden $|q>$ des Ortsperators $q$, im Volksmund auch {\sc Schr\"odinger}-Wellenfunktionen genannt.} Der Erwartungswert einer Observablen $A$ ist in diesem Fall gegeben durch \begin{equation} <A> = <\Psi|A|\Psi> = \mbox{Spur} \, \lbrace \, |\Psi><\Psi|A \, \rbrace. \end{equation} In einem {\it gemischten Zustand} ist der Erwartungswert der Observable $A$ gegeben durch \begin{equation} <A>=\mbox{Spur}\,\lbrace \varrho A \rbrace, \end{equation} wobei der sogenannte {\it Dichteoperator\/} oder die {\it Dichtematrix\/} geschrieben werden kann als \begin{equation} \varrho = \sum_i \lambda_i\,|\Psi_i><\Psi_i|, \end{equation} mit den statistischen Gewichten \begin{equation} \lambda_i > 0, \phantom{xxx} \sum_i \lambda_i = 1. \end{equation} Die {\it thermische Dichtematrix\/} \begin{equation} \varrho = \exp\,-\beta H \end{equation} ist nichts anderes als ein Operator, welcher in Analogie zum {\sc Maxwell}-{\sc Boltzmann}-Faktor gesehen werden mu\ss\ und einen gemischten Zustand repr\"asentiert, der ein quantenmechanisches System im thermodynamischen Gleichgewicht bei endlicher Temperatur beschreibt. \par Ein System von Fermionen wird nun nicht durch den {\sc Maxwell}-{\sc Boltzmann}-Faktor, sondern durch die {\sc Fermi}-Verteilungsfunktion \begin{equation} f(E) = \frac{1}{1 + \exp\,\beta(E - \mu)} \end{equation} charakterisiert, so da\ss\ der Erwartungswert einer Observable $A$ f\"ur ein durch ein {\sc Ha\-mil\-ton}-Operator $H$ beschriebenes fermionisches System mit Volumen $V$ die Form \begin{equation} <A>_{\beta,\mu} = \mbox{Spur}_V\,\lbrace\,f(H)A\,\rbrace \end{equation} hat. $\,\mbox{Spur}_V\,$ deutet an, da\ss\ die Summe \"uber die Diagonalelemente der betrachteten Operatoren in Wirklichkeit Integrale \"uber ein endliches Volumen $V$ sind. Besondere Vorsicht ist geboten, wenn wir die Systemgr\"o\ss e gegen unendlich gehen lassen, das hei\ss t den thermodynamischen Limes durchf\"uhren. \par Zur Beantwortung der zu Beginn dieses Abschnitts gestellten Frage nach dem quantenmechanischen Pendant des {\sc Hall}-Effekts eines unendlich ausgedehnten 2-dimensionalen Systems von Elektronen berechnen wir nun den Zeitmittelwert des Erwartungswertes des elektrischen Stroms. Es ist \begin{eqnarray} \overline{<{\bf j}_{2D}(t)>} &=& \lim_{T\rightarrow\infty} \frac{1}{T} \int_0^T dt <{\bf j}_{2D}(t)>_{\beta,\mu} \nonumber\\ &\phantom{=}& \nonumber\\ &=& e\cdot \lim_{V\rightarrow\infty} V^{-1} \mbox{Spur}_V\,\lbrace\,f(H_L)\,\rbrace \cdot\frac{{\bf B}\times{\bf E}}{{\bf B}^2} \nonumber\\ &=& e \cdot n_{2D} \cdot \frac{{\bf B}\times{\bf E}}{{\bf B}^2}, \end{eqnarray} was impliziert, da\ss\ die 2-dimensionale Ladungstr\"agerdichte \begin{equation} n_{2D}(\beta,\mu) = \lim_{V\rightarrow\infty} \, \mbox{Spur}_V \lbrace \, f(H_L) \, \rbrace \end{equation} eine glatte Funktion in $\beta$ und $\mu$ ist. \par Fazit: Wir reproduzieren das klassische Resultat. 
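Ausgeschrieben bedeutet das f\"ur die {\sc Hall}-Leitf\"ahigkeit des idealen Systems
\begin{equation}
\sigma_H=\frac{e\,n_{2D}(\beta,\mu)}{B},
\end{equation}
also eine in $B$ und $\mu$ glatte Gr\"o\ss e ohne jede Stufenstruktur.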
Da es keinen Grund f\"ur die Erwartung gibt, da\ss\ in einem realen System das chemische Potential $\mu$ immer zwischen den {\sc Landau}-Niveaus hin- und herspringt, insbesondere von L\"uckenmitte zu L\"uckenmitte, schlie\ss en wir aus dem Ergebnis, da\ss\ mit dem Auff\"ullen der {\sc Landau}-Niveaus die {\sc Hall}-Leitf\"ahigkeit {\it kontinuierlich\/} steigen m\"u\ss te. Wir m\"ussen also nach einem {\it zus\"atzlichen\/} Mechanismus suchen, wenn wir eine befriedigende Erk\"arung des beobachteten Quanteneffektes fin\-den wol\-len. \par Die Untersuchung dieses Problems definiert ein aktuelles Forschungsgebiet. Selbst heute sind noch viele Fragen offen \cite{Janssen94}, vielleicht aber ist auch das grundlegende Prinzip noch gar nicht verstanden. Bevor man sich im Detail mit diesen Fragen auseinandersetzt, sollte man zun\"achst die experimentelle Realisierung der zugeh\"origen physikalischen Systeme verstehen. \vfill\eject\noindent% \section{Von der Theorie zur Messung} \subsection{Experimentelle Realisierung von 2-dimensionalen Elektronengasen} 2-dimensionale Elektronen- oder L\"ochergase k\"onnen an Halbleiter-Grenz\-fl\"a\-chen, so zum Beipiel in einer {\it Metall-Oxid-Halbleiter-Struktur (MOS)\/} oder an einer Grenzfl\"ache zwischen Halbleitern verschiedener Bandl\"ucke, einer sogenannten {\it Heterostruktur\/}, realisiert werden \cite{AndoFowlerStern82, Sze81}. Im ersten Fall ist es die extern angelegte Gatespannung, im zweiten Fall die geeignete Dotierung zusammen mit einem Bandl\"uckensprung, die eine Bandverbiegung verursacht, welche zu einem in erster N\"aherung dreieckf\"ormigen Potentialtopf f\"uhrt. Durch die endliche Ausdehnung des elektronischen Systems in Richtung der dritten Dimension zeichnen sich die Elektronen (oder L\"ocher) durch quantisierte Energieniveaus in einer Richtung aus - ganz \"ahnlich wie im Fall des in den Quantenmechanik-Lehrb\"uchern diskutierten Kastenpotentials. Im betrachteten Fall des Dreieckpotentials sind die Energie-Eigenfunktionen allerdings {\sc Airy}-Funktionen. \bild{qhe_001}{Kastenpotential versus Dreieckspotential}{6} \par W\"ahrend die Elektronen (oder L\"ocher) in Richtung der z-Achse durch die dargestellten Randbedingungen eingeschr\"ankt sind, k\"onnen sie sich in den anderen beiden Richtungen frei bewegen. Somit ist ihr Wellenzahlvektor ${\bf k}$ eine gute Quantenzahl lediglich f\"ur die zwei Dimensionen x und y, nicht aber f\"ur die dritte, die z-Richtung. Somit erhalten wir eine Reihe von sogenannten {\it Subb\"andern\/} $0,1,\dots$ f\"ur jeden Energieeigenwert $E_0,E_1,\dots\;$. In der N\"ahe des absoluten Nullpunktes ist nur das unterste Niveau oder Subband besetzt. Das System wird somit physikalisch {\it exakt\/} 2-dimensional. \bild{qhe_002}{Energieniveaus im Dreieckspotential}{9} \par Die diesen Plateaux zugeordneten {\sc Hall}-Leitf\"ahigkeiten sind quantisiert gem\"a\ss \begin{equation} \sigma_H=i\cdot\frac{e^2}{h}. \end{equation} Den Plateaux entsprechen Nullstellen in der longitudinalen Leif\"ahigkeit $\sigma_{xx}$. Es soll hier nur erw\"ahnt werden, da\ss\ sp\"atere Experimente von {\sc Tsui}, {\sc St\"ormer} und {\sc Gossard} in Heterostrukturen hoher Elektronenbeweglichkeit unter sehr starken Magnetfeldern sogar Plateaux zu gebrochenen Werten (haupts\"achlich mit ungeradem Nenner) zeigten \cite{Tsui82}. Dieser fraktionell-quantisierte {\sc Hall}-Effekt ist ein Ph\"a\-no\-men f\"ur sich und soll hier nicht im Detail diskutiert werden \cite{Chakraborty1988}. 
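\par Zur Gr\"o\ss enordnung der Plateaux (Zahlenwerte gerundet): Auf einem Plateau verschwindet $\sigma_{xx}$, und der dort gemessene {\sc Hall}-Widerstand betr\"agt
\begin{equation}
\varrho_{xy}=\frac{h}{i\,e^2}\approx\frac{25.8\,{\rm k}\Omega}{i},
\end{equation}
also etwa $25.8\,{\rm k}\Omega$, $12.9\,{\rm k}\Omega$, $8.6\,{\rm k}\Omega$, $6.5\,{\rm k}\Omega$ f\"ur $i=1,2,3,4$.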
\par Es mu\ss\ hervorgehoben werden, da\ss\ die Kinematik des {\sc Hall}-Effektes zusammen mit der {\sc Landau}-Quantisierung die Existenz der Plateaux {\it nicht\/} erkl\"art. Ein zus\"atzlicher Aspekt mu\ss\ hinzukommen, n\"amlich ein physikalischer Mechanismus, aus dem f\"ur die {\sc Hall}-Leit\-f\"a\-hig\-keit bei kontinuierlicher und gleichm\"a\ss iger Erh\"ohung des Magnetfeldes der Wechsel von Ansteigen und Verharren auf dem quantisierten Wert folgt. Es herrscht \"Uber\-einstimmung darin, da\ss\ dieser Mechanismus mit der Existenz lokalisierter Elektronenzust\"ande zwischen zwei {\sc Landau}-Niveaus zusammenh\"angt. Das Auff\"ullen dieser lokalisierten Zust\"ande ver\-\"an\-dert den Transport-Koeffizienten nicht. Hierdurch werden die Punkte, f\"ur welche die Quantisierungsbedingung erf\"ullt ist, zu einer plateauf\"ormigen Linie ausgezogen. \par Elektronen-Lokalisierung h\"angt mit Unordnung in kondensierter Materie zusammen: Etwas salopp formuliert, ist der integral-quantisierte {\sc Hall}-Effekt ein \glqq Dreckeffekt\grqq, das hei\ss t, er ben\"otigt die Pr\"asenz von Streuzentren bzw.\ St\"orstellen. In dieser Hinsicht steht er im Gegensatz zum fraktionell-quantisierten {\sc Hall}-Effekt, der nur in ultra-reinen Proben zu beobachten ist. Die schwierige physikalische Frage besteht nun darin, wie es kommt, da\ss\ sich Lokalisierung und Magnetotransport so arrangieren k\"onnen, da\ss\ wir das Ph\"a\-no\-men des integral-quantisierten {\sc Hall} beobachten. Die Genauigkeit der Quantisierung und die M\"oglichkeit, im Rahmen eines Festk\"orper-Experiments vermittels Bestimmung der sogenannten {\sc von\,Klitzing}-Konstante \begin{equation} R_{vK}=\frac{h}{e^2}=25.812\,805\,{\dots}{\rm k}\Omega \end{equation} durch \glqq simple Widerstandsmessungen\grqq\ die {\sc Sommerfeld}sche Feinstrukturkonstante \begin{equation} \alpha = \frac{1}{\hbar c}\cdot\frac{e^2}{4\pi\varepsilon_0} = \frac { \mu_0 c e^2 } {2h } \end{equation} {\it unabh\"angig\/} von Feinheiten der Probengeometrie bestimmen zu k\"onnen, deutet auf einen {\it topologischen Quantisierungs\-mechanismus\/} hin - ganz \"ahnlich der Quantisierung des magnetischen Flusses durch einen supraleitenden Ring \cite{Kinoshita96}. Solche Quantisierungen sind er\-fah\-rungs\-ge\-m\"a\ss\ sehr robust, m\"u\ss ten also zum Beispiel unsensibel gegen\"uber der Art und Form der Verunreinigung sein. Interessanterweise scheint aber die {\it Existenz\/} der letzteren gerade eine {\it Voraussetzung\/} f\"ur das Zustandekommen des Effektes zu sein. \par Unter Experimentalphysikern und Ph\"anomenologen gilt der quantisierte {\sc Hall}-Effekt heute als verstanden: Empirisch gesehen wei\ss\ man ziemlich genau, was passiert, und kann auch bei komplizierten Probengeometrien und -topologien verl\"a\ss liche Vorhersagen machen. Der Stand der theoretischen Forschung allerdings ist der, da\ss\ es eine vollst\"andig ausgearbeitete und gleichzeitig rigorose Theorie des quantisierten {\sc Hall}-Effektes im Rahmen der Quantentransporttheorie (der Quantenfeldtheorie des Nichtgleichgewichts) nicht gibt. Strenggenommen gibt es nicht einmal einen Beweis daf\"ur, ob die {\sc von\,Klitzing}-Konstante tats\"achlich mit der {\sc Sommerfeld}schen Feinstrukturkonstante ({\it modulo\/} einer Umrechnungsvorschrift) gleichzusetzen ist (man denke an Abweichungen von dem Typ eines {\sc Lamb}-Shifts). 
Aber es sind gerade die topologischen Argumente, die - im Rahmen fundierter semiph\"anomenologischer Theorien - darauf hindeuten, da\ss\ die {\sc von\,Klitzing}\-sche Quantisierungsregel tats\"achlich ein fundamentales Naturgesetz darstellt. Die Situation ist vielleicht vergleichbar mit der in der Quantenelektrodynamik: Obwohl die Theorie noch Fehlstellen und Inkonsistenzen beinhaltet, z\"ahlt sie zu den erfolgreichsten Konzepten der modernen Physik. Im Literaturverzeichnis ist eine Liste von Fachartikeln und Lehr\-b\"u\-chern \"uber die quantisierten {\sc Hall}-Effekte zu finden, auf die der interessierte Leser (und vielleicht der zuk\"unftige Forscher auf diesem Gebiet) verwiesen sei. \vfill\eject\noindent% \subsection{MOS-(metal-oxide-semiconductor-)Strukturen} Eine MOS-(metal-oxide-semiconductor-)Struktur ist - wie der Name schon andeutet - ein Schichtsystem mit der Abfolge Metall-Siliziumdioxid-Silizium. Letzteres ist entweder eine Si-MOS-Struktur in p-dotierten Bereichen (man spricht auch von {\it p-Substraten\/} oder {\it p-Wannen\/}) oder in n-dotierten Bereichen ({\it n-Substraten\/} oder {\it n-Wannen\/}). Die Metall-Schicht wird als sogenannte Gate-Elektrode verwendet, das hei\ss t, die an sie angelegte Spannung $V_G$ bestimmt Bandstruktur und Ladungstr\"agerverteilung. Die Einstellung der Besetzungsgrenze der Zust\"ande im Metall durch $V_G$ bestimmt die Verbiegung der Bandstruktur im Silizium. \par Bekanntlich unterscheiden wir in dotierten Halbleitern zwischen Majorit\"atsladungs\-tr\"a\-gern (das sind diejenigen, welche durch die Dotierung in der Mehrzahl auftreten) und Minorit\"atsladungstr\"agern (das sind diejenigen, welche infolge der thermischen Anregung von Elektronen-Loch-Paaren als Minderheit immer vorhanden sind). Beschr\"anken wir uns auf die Betrachtung von p-MOS-Systemen. In diesem Fall sind die Majorit\"atsladungstr\"ager die L\"ocher, die Minorit\"atsladungstr\"ager die Elektronen. Wir unterscheiden drei F\"alle: \begin{list} { } {\setlength{\leftmargin}{3.75cm} \setlength{\labelsep}{0.75cm} \setlength{\labelwidth}{2.75cm} \setlength{\itemsep}{0cm} } \item[$V_G < 0$] Die Valenzbandkante $E_V$ r\"uckt an die {\sc Fermi}-Energie $E_F$, und es ist f\"ur die L\"ocher g\"unstiger, sich nahe der Grenzfl\"ache anzuordnen. Wenn $E_V>E_F$ wird, entsteht an der $\mbox{\rm Si-SiO}_2$-Grenzfl\"ache eine positive Raumladung von L\"ochern. Dieser Zustand wird Anreicherung (Akkumulation) genannt. ({\it Achtung\/}: Wir d\"urfen noch einmal daran erinnern, da\ss\ wir gerade den Fall eines p-Substrats behandeln.) \item[$V_G = V_{FB} \approx 0$] Ist $V_G=0$, so verlaufen die B\"ander nicht flach, da im Oxid Ladungen eingelagert sind, welche die B\"ander verbiegen. Durch eine angelegte Spannung, die sogenannte Flachbandspannung $V_{FB}$, kann man diese Verbiegung kompensieren. Wird $V_G>V_{FB}$, tritt zun\"achst Verarmung (Depletion) auf, da es f\"ur die L\"ocher energetisch g\"unstiger ist, sich von der Oberfl\"ache entfernt anzuordnen. ($V_{FB}$ ist nahe null Volt, daher wird hier $V_{FB} \approx 0$ angenommen.) \item[$V_G > 0$] Die Leitungsbandkante $E_L$ wird unter die {\sc Fermi}-Energie $E_F$% \linebreak gedr\"uckt, und Elektronen aus dem Valenzband fallen in die so entstehende dreieckf\"ormige Potentialtasche und besetzen somit die Zust\"ande nahe der Grenzschicht. Es entsteht ein 2-dimensionales Elektronengas. Der Wert von $V_G$ bei Einsetzen dieser sogenannten Inversion nennt man Threshold-Spannung $V_{th}$. 
\end{list} \bild{qhe_003}{p-MOS-Struktur}{6} \par F\"ur eine Si-MOS-Struktur auf einem n-Substrat gilt entsprechendes; das hei\ss t, die Ver\-h\"alt\-nis\-se drehen sich um. Insgesamt haben wir also sechs physikalisch unterschiedliche F\"alle zu unterscheiden. \par Eine Si-MOS-Struktur kann als Kondensator der Fl\"ache $A$ und Dicke $d$ gef\"ullt mit einem Dielektrikum der Permittivit\"at $\varepsilon_{{\rm SiO}_2}$ aufgefa\ss t werden, so da\ss\ wir f\"ur die Kapazit\"at setzen d\"urfen \begin{equation} C=\frac{\varepsilon_{{\rm SiO}_2}\varepsilon_0 A}{d}, \end{equation} mit $\varepsilon_0=8.8542\cdot 10^{-12}\,C/Vm$ und $\varepsilon_{{\rm SiO}_2}=3.8$. F\"ur die 2-dimensionale Ladungstr\"agerdichte $n_{2D}$ ergibt sich die sogenannte {\it Kondensatorformel\/} \begin{equation} n_{2D}=\frac{C}{eA}\cdot(V_G-V_{th}) =\frac{\varepsilon_{{\rm SiO}_2}\varepsilon_0}{ed} \cdot(V_G-V_{th}), \end{equation} die insbesondere im Falle der Inversion von Bedeutung ist. \par Silizium ist ein indirekter Halbleiter: Das absolute Minimum der Leitungsbandkante liegt nicht bei ${\bf k}=0$ sondern bei endlichen k-Werten. In [100]-Ober\-fl\"a\-chen\-ori\-en\-tie\-rung haben wir zwei T\"aler (engl.\ {\it valleys\/}), die - obwohl im k-Raum nicht wegzusammenh\"angend - energetisch \"aquivalent sind. Somit sind die energetischen Zust\"ande in [100]-ober\-fl\"a\-chen\-ori\-en\-tier\-tem Silizium zweifach entartet. So ergibt sich f\"ur die Entartung eines Landauniveaus f\"ur [100]-Silizium \begin{equation} n_{L} = g_s g_v \cdot \frac{eB}{h}=4 \cdot \frac{eB}{h} \end{equation} mit \begin{equation} g_s=2 \end{equation} als Spinentartungsfaktor und \begin{equation} g_v=2 \end{equation} als Valleyentartungsfaktor. Die Valleyentartung wird nur in sehr starken Magnetfeldern me\ss bar aufgehoben. \vfill\eject\noindent% \subsection{Heterostrukturen} Eine Heterostruktur ist ein Schichtsystem aus verschiedenen Materialien (griech.\ {\it heteros\/} = ver\-schieden), hier aus Halbleitern verschiedener Bandstruktur. Heterostrukturen werden realisiert durch Molekularstrahlepitaxie, einem programmgesteuerten Pr\"azisionsverfahren f\"ur das Wachstum wohldefinierter Kristalle \cite{Herman89}. Die gemeinsame Verwendung von Galliumarsenid und Aluminium-Gallium-Arsenid erm\"oglicht die Kombination von Halbleitern verschiedener Bandl\"ucke bei gleicher Gitterkonstante, mit anderen Worten: ein Aufwachsen von verschiedenen Schichten ohne Fehlanpassung (engl.\ {\it mismatch\/}). \bild{qhe_000l}{Bandl\"ucken versus Gitterkonstanten von III-V-Halbleitern}{9} \par Eine typische Al$_x$Ga$_{1-x}$As-GaAs-Heterostruktur sieht wie folgt aus: Auf einem semi-isolierenden Substrat aus Galliumarsenid werden nacheinander aufgewachsen \begin{enumerate} \item $1-4\,\mu{\rm m}$ GaAs, \item $10-40\,nm$ undotiertes Al$_x$Ga$_{1-x}$As (der sogenannte {\it Spacer\/}), \item $20-50\,nm$ Si-(n-)dotiertes Al$_x$Ga$_{1-x}$As, \item $10-20\,nm$ GaAs (die sogenannte {\it Cap\/}). \end{enumerate} Im thermischen Gleichgewicht mu\ss\ die {\sc Fermi}-Energie $E_F$ \"uber die verschiedenene Grenz\-fl\"a\-chen hinweg konstant sein. Die Erf\"ullung dieser Bedingung erzwingt die Verbiegung der Leitungsbandkante zur Grenzfl\"ache hin. Der energetische Unterschied der Bandl\"ucken von GaAs und Al$_x$Ga$_{1-x}$As ist so gro\ss, da\ss\ die Leitungsbandkante des GaAs-Puffers an der $\mbox{Al$_x$}$$\mbox{Ga$_{1-x}$}$$\mbox{As}$-GaAs-Grenzfl\"ache bis unter die {\sc Fermi}-Energie $E_F$ gedr\"uckt wird. 
Dort bildet sich ein n\"aherungsweise dreieckf\"ormiges Potential unterhalb von $E_F$ aus, welches vergleichbar ist mit dem oben beschriebenen Potentialverlauf in MOS-Inversionsschichten. Somit besetzen die Elektronen nahe an der Al$_x$Ga$_{1-x}$As-GaAs-Grenzfl\"ache Zust\"ande im Leitungsband und bilden durch ihre Beweglichkeit in x- und y-Richtung eine leitf\"ahige Schicht, das 2-dimensionale Elektronengas. \bild{qhe_004}{Heterostruktur}{9} \par Die Beweglichkeit der Elektronen wird durch die Streueffekte im Kristall eingeschr\"ankt. Der Spacer wird eingef\"ugt, um die Elektronen in der Grenzschicht m\"oglichst weit von den ionisierten Donatoratomen (hier Silizium als IV-Element auf IIIer-Pl\"atzen) in der dotierten Al$_x$Ga$_{1-x}$As-Schicht zu trennen. \par Da Galliumarsenid ein direkter Halbleiter ist, gibt es hier keine Valleyentartung; die {\sc Fermi}-Fl\"ache ist die Oberfl\"ache einer Kugel. \vfill\eject\noindent% \subsection{Vom 2-dimensionalen zum 3-dimensionalen Elek\-tro\-nen\-gas: Sub\-b\"an\-der} Der {\sc Hamilton}-Operator eines quantenmechanischen Systems freier Elektronen in drei Raumdimensionen besitzt ein nach unten beschr\"anktes und nach oben unbeschr\"anktes kontinuierliches Spektrum. Die Energieeigenwerte lassen sich als quadratische Funktion des kontinuierlichen Wellenzahlvektors ${\bf k}$ auffassen und haben die Form \begin{equation} E({\bf k})=\frac{\hbar^2{\bf k}^2}{2m} =\frac{\hbar^2k_x^2+\hbar^2k_y^2+\hbar^2k_z^2}{2m}. \end{equation} Ist die Bewegung in eine der drei Raumrichtungen (\"ublicherweise die z-Richtung) ein\-ge\-schr\"ankt, so wird das Spektrum der zugeordneten Komponente des Wellenzahlvektors (z.\,B.\ $k_z$) diskret. Im Idealfall wird eine solche Einschr\"ankung durch das Potential eines Kastens mit unendlich hohen W\"anden definiert. Die Betrachtung der realistischen Situation eines Potentials mit n\"aherungsweise dreieckf\"ormigem Verlauf unterscheidet sich davon qualitativ nicht, wohl aber quantitativ, das hei\ss t, hinsichtlich der Abst\"ande der Energieniveaus und der Ausdehnung der Wellenfunktionen. Auf jeden Fall k\"onnen wir schreiben \begin{equation} E^j(k_x,k_y)=\frac{\hbar^2k_x^2+\hbar^2k_y^2}{2m}+E^j_z. \end{equation} Bei Diskretisierung einer Raumdimension also zerf\"allt der Raum der Energieeigenwerte des 3-dimensionalen Systems in eine diskrete Summe von Unterr\"aumen von Energie-Eigen\-werten, die zu einem idealisierten 2-dimensionalen System geh\"oren. \par Nun haben wir in einem 3-dimensionalen Festk\"orper infolge der Gitterperiodizit\"at nach dem Theorem von {\sc Bloch} nicht einfach ein nach unten beschr\"anktes und nach oben unbeschr\"anktes Energiekontinuum, sondern Energieb\"ander vorliegen. Durch die Ein\-schr\"an\-kung einer Raumdimension zerfallen diese somit in eine abz\"ahlbare Menge 2-di\-men\-si\-o\-na\-ler Un\-ter\-b\"an\-der, sogenannter Subb\"ander. F\"ur kleine Wellenzahlen gilt \begin{equation} E^j(k_x,k_y)=\frac{\hbar^2k_x^2+\hbar^2k_y^2}{2m}+E^j_z, \end{equation} wobei \begin{equation} m = 0.067 \, m_e \end{equation} die hier in der freien Elektronenmasse $m_e$ ausgedr\"uckte effektive Masse des Elektrons in Galliumarsenid ist. \par Wenn man die St\"arke der Dotierung in Al$_x$Ga$_{1-x}$As so w\"ahlt, da\ss\ die Anzahl der Elektronen in der Grenzschicht nur zur Besetzung des untersten Subbandes $j=0$ ausreicht, k\"onnen wir davon ausgehen, da\ss\ im Limes verschwindender Temperatur tats\"achlich nur das unterste Subband besetzt ist. 
Zwar hat das betrachtete physikalische System noch eine Ausdehnung in z-Richtung, aber es besteht keine kinematische Freiheit in dieser Richtung mehr; das System verh\"alt sich, \glqq wie wenn es\grqq\ (lat.\ {\it quasi\/}) exakt 2-dimensional w\"are: Wir sprechen daher auch von einem quasi-2-dimensionalen Elektronengas. \par Erlauben wir uns an dieser Stelle einen kleinen Exkurs und berechnen die Zu\-stands\-dich\-te f\"ur ein Elektronengas in $n$ Raumdimensionen. Das Volumen einer $n$-dimensionalen Kugel oder $n$-{\it Disk\/} ${\bf D}^n$ mit Radius $r$ ist bekanntlich gegeben durch \begin{equation} \mbox{{\rm Vol}}({\bf D}^n)= \frac{\pi^{n/2}}{\Gamma(\frac{n}{2}+1)}\,r^n. \end{equation} Wir erinnern daran, da\ss\ die $\Gamma$-Funktion die Eigenschaften \begin{eqnarray} \Gamma(n+1) &=& n!\,, \phantom{1234} \mbox{{\rm f\"ur}} \phantom{x} n=0,1,2,\dots, \\ \Gamma(x+1) &=& x\,\Gamma(x), \\ \Gamma(1/2) &=& \sqrt{\pi} \end{eqnarray} erf\"ullt. Insbesondere ist \begin{eqnarray} \mbox{{\rm Vol}}({\bf D}^3) &=& \frac{4}{3} \pi \cdot r^3, \\ &\phantom{=}& \nonumber\\ \mbox{{\rm Vol}}({\bf D}^2) &=& \pi \cdot r^2, \\ &\phantom{=}& \nonumber\\ \mbox{{\rm Vol}}({\bf D}^1) &=& 2 \cdot r . \end{eqnarray} Damit erhalten wir f\"ur eine $n=3,2,1$-dimensionale {\sc Fermi}-Kugel \begin{eqnarray} \mbox{{\rm Vol}}({\bf D}^3_F) &=& \frac{4}{3} \cdot \pi k^3_F \;=\; \frac{4}{3} \cdot \pi \cdot \frac{(2mE_F)^{3/2}}{\hbar^3}, \\ & & \nonumber \\ \mbox{{\rm Vol}}({\bf D}^2_F) &=& \pi k^2_F \;=\; \pi \cdot \frac{ 2mE_F }{\hbar^2}, \\ & & \nonumber \\ \mbox{{\rm Vol}}({\bf D}^1_F) &=& 2k_F \;=\; \frac{2{(2mE_F)}^{1/2}}{\hbar}. \end{eqnarray} In einem spin- und valleyentarteten $n$-dimensionalen System (mit einem w\"urfelf\"ormigen Volumen der Kantenl\"ange $L$) ist die Gesamtzahl der erlaubten Zust\"ande% \footnote{% Man beachte, da\ss\ diese Formel von verschwindenden bzw.\ periodischen Randbedingungen f\"ur die Wellenfunktionen ausgeht \cite{Kittel}.} gegeben durch \begin{equation} n_{nD} (E_F) = g_s g_v \cdot \frac{ {\rm Vol}({\bf D}^n_F) } { (2\pi/L)^n }, \end{equation} wobei wir einen Faktor $L^n$ unterdr\"ucken, wenn wir uns - wie \"ublich - auf ein Einheitsvolumen beziehen wollen. F\"ur die Herleitung dieser f\"ur die Festk\"orperphysik fundamentalen Formel sei der Leser zum Beispiel auf das Lehrbuch von {\sc Kittel} verwiesen \cite{Kittel}. \par F\"ur die Zu\-stands\-dich\-ten in $n=3,2,1$ Raumdimensionen erhalten wir somit \begin{eqnarray} & & \nonumber\\ D_{3D}(E) \;=\; \frac{d n_{3D} (E)}{dE} &=& g_s g_v \cdot \frac{4\pi m\sqrt{2mE}}{h^3}, \\ & & \nonumber\\ D_{2D}(E) \;=\; \frac{d n_{2D} (E)}{dE} &=& g_s g_v \cdot \frac{2\pi m} {h^2}, \\ & & \nonumber\\ D_{1D}(E) \;=\; \frac{d n_{1D} (E)}{dE} &=& g_s g_v \cdot \frac{1}{h}\sqrt{\frac{2m}{E}}, \end{eqnarray} wobei wir f\"ur $n$$=$$3$ einfach $E$$=$$E_F$ setzen, in den niederdimensionalen F\"allen $n$$=$$2,1$ dagegen \begin{equation} E=E_F-E^j_z \end{equation} f\"ur ein Subband $j$. \par Wir erinnern noch einmal daran, da\ss\ wir f\"ur den Spinentartungsfaktor \begin{equation} g_s=2 \end{equation} setzen. Der Valleyentartungfaktor $g_v$ ist nur f\"ur indirekte Halbleiter wie Silizium von Bedeutung. F\"ur Galliumarsenid setzen wir diesen Faktor gleich $1$. \par Damit sei unserer Exkurs \"uber die Zu\-stands\-dich\-te in $n$ Dimensionen beendet.- Wesentlich ist die Beobachtung, da\ss\ die Subband-Zustanddichte in zwei Raumdimensionen konstant ist (!). 
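\par F\"ur Galliumarsenid ($m=0.067\,m_e$, $g_s=2$, $g_v=1$) ergibt sich als Zahlenwert (nur zur Orientierung)
\begin{equation}
D_{2D}=g_sg_v\cdot\frac{2\pi m}{h^2}
\approx 2.8\cdot 10^{10}\,\frac{1}{{\rm cm}^2\,{\rm meV}};
\end{equation}
eine typische Ladungstr\"agerdichte von $n_{2D}\approx 3\cdot 10^{11}\,{\rm cm}^{-2}$ entspricht damit einer {\sc Fermi}-Energie von etwa $10\,$meV oberhalb der Subbandkante.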
F\"ur die 2-dimensionale Ladungstr\"agerdichte erhalten wir im Falle der Besetzung des untersten Subbandes: \begin{equation} n_{2D} = \int_{E^0_z}^{E_F} D_{2D}(E)\,dE = g_s g_v \cdot \frac{2\pi m}{h^2} \cdot E, \end{equation} mit $E=E_F-E^0_z$. \vfill\eject\noindent% \subsection{2-dimensionales Elektronengas im Magnetfeld} Wir betrachten nun den Fall, in dem ein Magnetfeld senkrecht zur Grenzfl\"ache, an der sich das 2-dimensionale Elektronengas ausgebildet hat, angelegt ist. Diese Anordnung wird auch {\sc Faraday}-Geometrie genannt (im Gegensatz zur {\sc Vogt}-Geometrie $B\parallel\mbox{{\rm Fl\"ache}}$). \par Die klassische Kinematik folgt aus der Bedingung, da\ss\ sich {\sc Lorentz}-Kraft und Zentralkraft die Waage halten m\"ussen. Dies ist genau dann der Fall, wenn sich die Elektronen auf Kreisbahnen mit der {\it Zyklotronfrequenz\/} \begin{equation} \omega_c=\frac{eB}{m} \end{equation} bewegen. Die Projektion einer solchen Bewegung entspricht der eines harmonischen Oszillators. Es ist daher naheliegend zu vermuten, da\ss\ die quantenmechanische Behandlung der vorliegenden Situation zu einer Quantisierungsregel f\"uhrt, welche an die quan\-ten\-me\-cha\-ni\-sche Behandlung des harmonischen Oszillators erinnert. In der Tat zeigt man durch L\"o\-sen der zugeordneten {\sc Schr\"odinger}-Gleichung, da\ss\ die Elektronen quantisierte E\-ner\-gie\-ei\-gen\-wer\-te mit dem Abstand $\hbar\omega_c$ einnehmen, die sogenannten {\sc Landau}-Niveaus \cite{LandauLifshitz}. F\"ur die Gesamtenergie der Elektronen erh\"alt man\begin{equation} E^j_n=\hbar\omega_c(n+\frac{1}{2})+E^j_z, \end{equation} mit $n=0,1,2\dots\;$. Da sich die Elektronen nur noch auf {\sc Landau}-Niveaus aufhalten d\"urfen, ist die Zu\-stands\-dich\-te nicht mehr gleichm\"a\ss ig auf die Energien bis hin zu $E_F$ verteilt, sondern kondensiert in gleichm\"a\ss iger Weise auf den Werten der {\sc Landau}-Niveaus. F\"ur die Anzahl der Zust\"ande auf einem {\sc Landau}-Niveau, dem sogenannten {\it Entartungsgrad\/}, erhalten wir \begin{equation} n_{L} = \int_{E^j_n-\frac{1}{2}\hbar\omega_c} ^{E^j_n+\frac{1}{2}\hbar\omega_c} D_{2D}(E)\,dE = g_s g_v \cdot \frac{2\pi m}{h^2} \cdot \hbar\omega_c = g_s g_v \cdot \frac{eB}{h}. \end{equation} Der beschriebene Typ von Entartung kann auch als Drehimpulsentartung aufgefa\ss t werden. Die Anzahl gef\"ullter {\sc Landau}-Niveaus berechnet sich aus der Anzahl $n_{2D}$ der Ladungstr\"ager pro Einheitsfl\"ache bezogen auf den Entartungsgrad $n_{L}$ eines {\sc Landau}-Niveaus: \begin{equation} \mbox{{\rm Anzahl der gef\"ullten {\sc Landau}-Niveaus}} = \frac{ n_{2D} }{ n_{L} } = \frac{1}{g_sg_v} \cdot \frac{h n_{2D} }{eB} =: \frac{1}{g_sg_v} \cdot \nu. \end{equation} Der F\"ullfaktor $\nu$ gibt die Anzahl gef\"ullter spin- und valley-aufgespalteter Niveaus an. Die vollst\"andige F\"ullung eines {\sc Landau}-Niveaus entspricht also einem F\"ullfaktor von zum Beispiel $\nu=4$ in Silizium und einem F\"ullfaktor von $\nu=2$ in Galliumarsenid. \par Beziehen wir die {\sc Zeemann}-Aufspaltung der Elektronen im Magnetfeld mit ein, so erhalten wir f\"ur die Gesamtenergie der Elektronen eines 2-dimensionalen Elektronengases im Magnetfeld: \begin{equation} E^j_{n,s} = \hbar\omega_c(n+\frac{1}{2}) + g^* \mu_B B \cdot b + E^j_z, \end{equation} mit $s=\pm 1/2$ und $n=0,1,2,\dots\;$. Durch Streuprozesse an Verunreinigungen und Defekten im Kristall werden die im Idealfall scharf definierten {\sc Landau}-Niveaus verbreitert. 
Diese Verbreiterung spielt eine bedeutende Rolle im Verst\"andnis des quantisierten {\sc Hall}-Effektes. \par {\it Bemerkung:\/} Auch wenn wir von falschen Randbedingungen f\"ur die Wellenfunktionen ausgegangen sind, erhalten wir das richtige Ergebnis f\"ur die Entartung eines {\sc Landau}-Niveaus. Die \"Ubereinstimmung erkl\"art sich aus der Tatsache, da\ss\ sich die {\sc Fermi}-Energie bzw.\ das chemische Potential beim Einschalten des Magnetfeldes nicht \"andert. Durch Vergleich mit der oben vorgestellten eichinvarianten Formulierung haben wir dies hiermit sogar bewiesen. \vfill\eject\noindent% \subsection{Drude-Modell f\"ur den klassischen Hall-Effekt} In der Einleitung haben wir den klassischen {\sc Hall}-Effekt im Einteilchen-Bild hergeleitet. Im folgenden wollen wir die Herleitung im Rahmen des {\sc Drude}-Bildes wiederholen. Der Vorteil dieses Rahmens besteht darin, da\ss\ er mikroskopische Gr\"o\ss en wie die Relaxationszeit zwischen zwei St\"o\ss en in Relation zu den makroskopischen Gr\"o\ss en wie spezifische Leit\-f\"a\-hig\-keit und spezifischen Widerstand setzt. \par An dieser Stelle ist es n\"utzlich, die grundlegenden Annahmen des {\sc Drude}-Modells zu rekapitulieren: \begin{enumerate} \item Elektronen bewegen sich als freie Teilchen, wenn wir von den St\"o\ss en mit den als harte Kerne dargestellten Ionen (engl.\ {\it hard core ions\/}) absehen. Sowohl die Elektron-Elektron-Wechselwirkung zwischen den St\"o\ss en wird vernachl\"assigt (engl.\ {\it independent electron approximation\/}), als auch die Elektron-Ion-Wechselwirkung (engl.\ {\it free electron approximation\/}). \item Ohne irgendwelche Annahmen \"uber den detailierten Mechanismus der Streuprozesse zu machen, wird die Arbeitshypothese aufgestellt, da\ss\ die Elektronen von Zeit zu Zeit an den {\it hard core ions\/} streuen. Durch einen derartigen Streuproze\ss\ \"andert sich die Impulsrichtung eines gestreuten Elektrons. Die Streung kann als elastisch an\-ge\-nom\-men werden. \item Die Wahrscheinlichkeit einer Streuung bzw.\ die \"Anderung der Impulsrichtung eines Elektrons wird durch den Kehrwert einer Beruhigungszeit, der sogenannten {\sc Drude}-Impulsrelaxationszeit $\tau$, quantifiziert. \item Es wird angenommen, da\ss\ die Elektronen das thermische Gleichgewicht (in Bezug auf ihre Umgebung) ausschlie\ss lich durch die beschriebenen Streuprozesse erreichen. \end{enumerate} Eine kritische W\"urdigung der {\sc Drude}-Theorie f\"uhrt unmittelbar zu fundamentalen noch offenen Grundlagenfragen der Thermodynamik (mikroskopische Reversibilit\"at versus makroskopische Irreversibilit\"at, {\sc Boltzmann} versus {\sc Gibbs}, Me\ss proze\ss\ usw.). Eine Einbeziehung der Quantenkinematik zeigt, da\ss\ sogar die weiterf\"uhrende {\sc Boltzmann}sche Transporttheorie f\"ur die vollst\"andige Beschreibung der Ph\"a\-no\-me\-ne, die f\"ur uns von Interesse sind, unzureichend ist. \par Im folgenden soll lediglich plausibel gemacht werden, da\ss\ die {\sc Drude}-Theorie f\"ur die dynamischen Gleichungen der makroskopischen Transportgr\"o\ss en Reibungsterme liefert. Ein wesentlicher Punkt ist, da\ss\ die Streuung im {\sc Drude}-Bild zwar als elastisch an\-ge\-nom\-men werden darf, da\ss\ sie aber aus Sicht der Quantenmechanik als inkoh\"arent betrachtet werden mu\ss\ und daher schon ein Element der Irreversibilit\"at (Entropiezunahme) auf mikroskopischer (oder sollten wir sagen mesoskopischer?) Ebene einf\"uhrt. 
Daher ist es nicht verwunderlich, da\ss\ sich die makroskopische Beschreibung auf eine effektive dissipative Ki\-ne\-ma\-tik/Dy\-na\-mik reduziert. \par Es ist n\"utzlich, sich eine Analogie zwischen der klassisch-mechanischen Dynamik und der Dynamik des elektrischen Transports zu vergegenw\"artigen. Das {\sc Newton}sche Gesetz \begin{equation} {\bf F} = m \, \frac{d{\bf v}}{dt} \phantom{xxxxx} \mbox{({\rm falls $m=const.$})} \end{equation} setzt die auf einen K\"orper ausge\"ubte Kraft ${\bf F}$ in Beziehung zu seiner Beschleunigung $\dot{\bf v}$ und steht zum Reibungsgesetz \begin{equation} {\bf F}=\kappa{\bf v} \end{equation} in genau derselben Relation wie die einen idealen Leiter beschreibende 2.\ {\sc London}sche Gleichung \begin{equation} {\bf E}=\mbox{{\it const\/}} \, \frac{d{\bf j}_{2D}}{dt} \end{equation} zum {\sc Ohm}schen Gesetz \begin{equation} {\bf E}=\varrho\,{\bf j}_{2D}. \end{equation} \par {\it Bemerkung:\/} Die 2.\ Londonsche Gleichung wird zuweilen auch Beschleunigungsgleichung genannt. Sie beschreibt einen idealen Leiter. Um das Ph\"a\-no\-men der Supraleitung zu beschreiben (ideale Leitf\"ahigkeit bei Verdr\"angung des Magnetfeldes) reicht sie allein nicht aus! \par Heben wir noch einmal den wesentlichen Punkt hervor: In einem realen Leiter bewegen sich die Elektronen von Sto\ss\ zu Sto\ss\ ballistisch, werden also auf der dazwischen liegenden Strecke beschleunigt. Die mittlere Zeit dieser freien Bewegung ist die {\sc Drude}-Im\-puls\-re\-la\-xa\-ti\-ons\-zeit $\tau$. \"Ubrigens ist sie theoretisch sehr schwer zu bestimmen. \par Angenommen, ein Elektron h\"atte nach einem Sto\ss\ gerade die Geschwindigkeit ${\bf v}_0$ (fett\-ge\-schrie\-be\-ne Buchstaben stehen f\"ur Vektoren!), so beschleunigt das \"au\ss ere stromtreibende Feld ${\bf E}$ dieses bis zum n\"achsten Sto\ss\ von ${\bf v}_0$ auf ${\bf v}_0-(e{\bf E}/m)t$, wobei $t$ die verstrichene Zeit bezeichnet. Wenn wir annehmen, da\ss\ alle m\"oglichen ${\bf v}_0$ in etwa genauso h\"aufig vorkommen, so erhalten wir f\"ur die mittlere Driftgeschwindigkeit \begin{equation} {\bf v}_{mean}=-\frac{e{\bf E}\tau}{m}, \end{equation} mit $\tau$ als Zeitmittel und f\"ur die makroskopische Stromdichte in dem uns hier interessierenden Fall von zwei Raumdimensionen \begin{equation} {\bf j}_{2D} = - n_{2D} e{\bf v}_{mean} = \left(\frac{ n_{2D} e^2\tau}{m}\right)\,{\bf E} = \sigma {\bf E}, \end{equation} wobei $n_{2D}$ - wie oben - die 2-dimensionale Ladungstr\"agerdichte bezeichnet. \par {\it Bemerkung:\/} Den Quotienten aus dem Betrag der mittleren Driftgeschwindigkeit und dem Betrag der angelegten elektrischen Feld\-st\"ar\-ke \begin{equation} \mu= \frac{|{\bf v}_{mean}|}{{\bf E}}= \frac{1}{e}\cdot\frac{\sigma}{n_{2D}}= \frac{e\tau}{m} \end{equation} nennen wir {\it Beweglichkeit\/} oder {\it Mobilit\"at\/}. \par Die im {\sc Ohm}schen Gesetz \begin{equation} {\bf j}_{2D}=\sigma{\bf E} \end{equation} durch die {\sc Drude}-Impulsrelaxationszeit ausgedr\"uckte Gr\"o\ss e \begin{equation} \sigma_0=\frac{n_{2D}e^2\tau}{m} \end{equation} nennen wir die {\sc Drude}-Leitf\"ahigkeit. \par Mit anderen Worten: Messen wir im Experiment die spezifische Leitf\"ahigkeit $\sigma$ bzw.\ den spezifischen Widerstand $\varrho$, so k\"onnen wir bei Kenntnis der Ladungstr\"agerdichte die {\sc Drude}-Impulsrelaxationszeit $\tau$ bestimmen. \par Die Analogie zwischen dissipativer Mechanik und Transporttheorie wird noch sichtbarer, wenn wir die {\sc Newton}-Gleichung mit Sto\ss term herleiten. 
Schreiben wir das {\sc Newton}\-sche Gesetz als \begin{equation} \frac{d{\bf p}(t)}{dt}={\bf f}(t), \end{equation} so erhalten wir f\"ur die Impuls\"anderung \begin{equation} d{\bf p}(t)={\bf f}(t)dt, \end{equation} was sich in endlicher N\"aherung liest als \begin{equation} \Delta{\bf p}={\bf p}(t+\Delta t)-{\bf p}(t)\approx{\bf f}(t)\Delta t \end{equation} bzw.\ als \begin{equation} {\bf p}(t+\Delta t)= {\bf p}(t)+{\bf f}(t)\Delta t+O(\Delta t^2). \end{equation} Offensichtlich ist die Kollisionswahrscheinlichkeit $\Delta t/\tau$; die Wahrscheinlichkeit f\"ur das dazu komplement\"are Verhalten, das \glqq {\sc Newton}-Verhalten\grqq, ist gerade $1-\Delta t/\tau$. Letzteren Term f\"uhren wir als Gewichtsfaktor in die Absch\"atzung f\"ur den inkrementierten Impuls ein, so da\ss\ wir setzen d\"urfen \begin{eqnarray} {\bf p}(t+\Delta t) &=& \left( 1 - \frac{\Delta t}{\tau} \right) ({\bf p}(t)+{\bf f}(t)\Delta t + O(\Delta t^2)) \nonumber\\ &=& {\bf p}(t) + {\bf f}(t)\Delta t - \frac{\Delta t}{\tau} {\bf p}(t) + O(\Delta t^2). \end{eqnarray} F\"ur das Impuls-Inkrement erhalten wir somit \begin{equation} \Delta{\bf p} = {\bf p}(t+\Delta t)-{\bf p}(t) = \Delta t \left( {\bf f}(t)-\frac{{\bf p}(t)}{\tau} \right) + O(\Delta t^2), \end{equation} so da\ss\ die {\sc Newton}-Gleichung mit Sto\ss term die folgende Form haben mu\ss: \begin{equation} \frac{d{\bf p}(t)}{dt} = {\bf f}(t)-\frac{{\bf p}(t)}{\tau}. \end{equation} Beschreibt ${\bf f}$ die {\sc Lorentz}-Kraft, so wird die obige Gleichung zu \begin{equation} \frac{d{\bf p}(t)}{dt} = e \left( {\bf E}+ \frac{{\bf p}}{m}\times{\bf B} \right) - \frac{{\bf p}}{\tau}. \end{equation} Ein station\"arer Zustand stellt sich ein, wenn die linke Seite dieser Gleichung verschwindet, das hei\ss t, es ist in Komponenten geschrieben \begin{eqnarray} 0 &=& e E_x + \omega_c p_y - \frac{p_x}{\tau}, \nonumber \\ 0 &=& e E_y - \omega_c p_x - \frac{p_y}{\tau}, \end{eqnarray} mit der {\it Zyklotronfrequenz\/} \begin{equation} \omega_c=eB/m. \end{equation} Multiplikation mit $n_{2D}e\tau/m$ und Umarrangieren der Summanden ergibt \begin{eqnarray} \sigma_0 E_x := \left( \frac{n_{2D}e^2\tau}{m} \right) \, E_x & = & - \omega_c \tau j_y + j_x, \nonumber \\ \sigma_0 E_y := \left( \frac{n_{2D}e^2\tau}{m} \right) \, E_y & = & \phantom{+} \omega_c \tau j_x + j_y, \end{eqnarray} wobei $\sigma_0$ die {\sc Drude}-Leitf\"ahigkeit f\"ur $B \equiv 0$ bezeichnet. Unter der Bedingung, da\ss\ kein transversaler Strom ${\bf j}_{2D,y}$ flie\ss t, erhalten wir f\"ur das {\sc Hall}-Feld \begin{equation} E_y = \left( \frac{\omega_c\tau}{\sigma_0} \right) j_x = \left( \frac{B}{n_{2D}e} \right) j_x. \end{equation} F\"ur den nicht-diagonalen Anteil des spezifischen Widerstandes erhalten wir somit \begin{equation} \varrho_{yx} = \frac{E_{y,H}}{j_x} = \frac{B}{n_{2D}e}.
\end{equation} Es ist n\"utzlich, das {\sc Ohm}-{\sc Hall}-Gesetz in einer Matrix- bzw.\ Tensor-Schreibweise zu notieren: \begin{equation} \left( \begin{array}{c} E_x \\ E_y \end{array} \right) = \left( \begin{array}{cc} \varrho_{xx} & \varrho_{xy} \\ \varrho_{yx} & \varrho_{yy} \end{array} \right) \, \left( \begin{array}{c} j_x \\ j_y \end{array} \right), \end{equation} explizit geschrieben als \begin{equation} \left( \begin{array}{c} E_x \\ \phantom{=} \\ E_y \end{array} \right) = \left( \begin{array}{cc} \displaystyle{ \frac{m}{n_{2D}e^2\tau}} & \displaystyle{ - \frac{B}{n_{2D}e} } \\ \phantom{=} & \phantom{=} \\ \displaystyle{ \frac{B}{n_{2D}e} } & \displaystyle{ \frac{m}{n_{2D}e^2\tau}} \end{array} \right) \, \left( \begin{array}{c} j_x \\ \phantom{=} \\ j_y \end{array} \right) = \sigma_0^{-1}\cdot \left( \begin{array}{cc} \displaystyle{ 1 } & \displaystyle{ - \omega_c\tau } \\ \phantom{=} & \phantom{=} \\ \displaystyle{ \omega_c\tau } & \displaystyle{ 1 } \end{array} \right) \, \left( \begin{array}{c} j_x \\ \phantom{=} \\ j_y \end{array} \right) . \end{equation} {\it Klassisch\/} (und nur klassisch!) gilt also \begin{equation} \frac{d\varrho_{yx}}{dB} = \frac{1}{n_{2D}e} = \frac{e\tau}{m} \cdot \frac{m}{n_{2D}e^2\tau} = \mu \varrho_{xx}, \end{equation} kompakt geschrieben als \begin{equation} \frac{d\varrho_{yx}}{dB} = \mu \varrho_{xx} \phantom{xxx} \mbox{(klassisch).} \end{equation} Der {\sc Hall}-Konstante $R_{Hall}$ ist {\it per definitionem\/} der auf das Magnetfeld $B$ normierte nicht-diagonale Anteil des spezifischen Widerstandes: \begin{equation} R_{Hall}:=\frac{R_H}{B} =\frac{1}{B} \cdot \frac{U_y}{I_x} =\frac{1}{B} \cdot \frac{E_y}{j_{2D,x}} =\frac{1}{B} \cdot \varrho_{yx}. \end{equation} \par {\it Bemerkung:\/} Beachte, da\ss\ $R_{Hall}$ eine andere Einheit hat als $\varrho_{yx}$. \par {\it Bemerkung:\/} Beachte ferner, da\ss\ der spezifische Widerstand sich aus dem absoluten Widerstand der Probe in drei Dimensionen anders berechnet als in zwei Dimensionen. Im ersteren Fall haben wir ihre Dicke zu ber\"ucksichtigen, im zweiten Fall nicht. Seien die Abmessungen der Probe in drei Dimensionen $L_x \cdot L_y \cdot L_z$, in zwei Dimensionen $L_x \cdot L_y$, dann sind die {\sc Hall}-Widerst\"ande \begin{equation} R_{Hall} = \frac{R_H}{B} = \frac{1}{B} \cdot \frac{U_y}{I_x} = \frac{1}{B} \cdot \frac{E_y \cdot L_y}{j_{2D,x} \cdot L_y} = \frac{1}{B} \cdot \frac{E_y}{j_{2D,x}} \end{equation} bzw.\ \begin{equation} R_{Hall} = \frac{R_H}{B} = \frac{1}{B} \cdot \frac{U_y}{I_x} = \frac{1}{B} \cdot\frac{U_y\cdot L_z}{I_x}, = \frac{1}{B} \cdot \frac{E_y \cdot L_y}{j_{3D,x} \cdot L_y \cdot L_z} = \frac{1}{B} \cdot \frac{E_y}{j_{3D,x} \cdot L_z}, \end{equation} wobei im 2D-Fall $j_{2D,x}$ eine Stromdichte bezogen auf einen linienf\"ormigen Querschnitt bezeichnet, im 3D-Fall $j_{3D,x}$ eine Stromdichte bezogen auf einen fl\"achenf\"ormigen Querschnitt bezeichnet. \par Die Inversion \begin{equation} \left( \begin{array}{c} j_x \\ j_y \end{array} \right) = \left( \begin{array}{cc} \sigma_{xx} & \sigma_{xy} \\ \sigma_{yx} & \sigma_{yy} \end{array} \right) \, \left( \begin{array}{c} E_x \\ E_y \end{array} \right) \end{equation} des {\sc Ohm}-{\sc Hall}-Gesetzes folgt aus elementarer Matrixalgebra. Wir erhalten \begin{eqnarray} \sigma_{xx} &=& \frac{ \varrho_{xx}}{\varrho_{xx}^2+\varrho_{xy}^2} = \sigma_0 \, \frac{1 }{1+\omega_c^2\tau^2}, \nonumber\\ \sigma_{xy} &=& \frac{-\varrho_{xy}}{\varrho_{xx}^2+\varrho_{xy}^2} = \sigma_0 \, \frac{\omega_c\tau}{1+\omega_c^2\tau^2}. 
\end{eqnarray} Im Limes gro\ss er Magnetfelder und damit gro\ss er {\sc Hall}-Spannungen wird $E_y \gg E_x$ und $\varrho_{yx} \gg \varrho_{xx}$. Wir erhalten f\"ur die Komponenten der Leitf\"ahigkeiten die N\"aherungen \begin{eqnarray} \sigma_{xx} &\approx& \phantom{+} \frac{\varrho_{xx}}{\varrho_{xy}^2}, \nonumber\\ \sigma_{xy} &\approx& - \frac{1 }{\varrho_{xy} }. \end{eqnarray} Es ist also m\"oglich, da\ss\ gleichzeitig \begin{eqnarray} \sigma_{xx} &=& 0 , \\ \varrho_{xx} &=& 0 \end{eqnarray} gelten, im Gegensatz zu jeglicher Intuition. \vfill\eject\noindent% \subsection{Beobachtung des quantisierten Hall-Effekts} Im klassischen Regime erhalten wir somit die folgenden Plots: \begin{enumerate} \item $\varrho_{yx}$ {\it versus\/} $B$: linear-monoton ansteigend; \item $\varrho_{xx}$ {\it versus\/} $B$: konstant; \item $\varrho_{yx}$ {\it versus\/} $n_{2D}$: hyperbolisch abfallend; \item $\varrho_{xx}$ {\it versus\/} $n_{2D}$: hyperbolisch abfallend. \end{enumerate} \bild{qhe_004a}{QHE-Messung (schematisch)}{6} \par Im Rahmen der Tieftemperatur-Transportexperimente am 2-dimensionalen Elektronengas beobachten wir hingegen f\"ur \begin{enumerate} \item $\varrho_{yx}$ {\it versus\/} $B$ eine abschnittsweise ansteigende Funktion mit breiten sogenannten {\sc Hall}-Plateaux bei den Werten \begin{equation} \varrho_{yx}= \frac{B}{e\,n_{2D}}= \frac{B}{e}\cdot\frac{h}{\nu eB}= \frac{h}{\nu e^2}, \end{equation} mit $\nu=g_sg_v \cdot i$ und $i=1,2,3\dots\;$. Sie stehen in Korrespondenz zur Bedingung \begin{equation} n_{2D}=\frac{\nu eB}{h}, \end{equation} mit $n_{2D}$ als 2-dimensionale Ladungstr\"agerdichte. \par Diese Bedingung beschreibt $\nu$ vollst\"andig gef\"ullte {\sc Landau}-Niveaus. Bei hohen Magnetfeldern wird die Spinentartung ($g_s$$=$$2$) und bei sehr hohen Magnetfeldern die Valleyentartung (im Falle von Halbleitern mit Valleys) aufgehoben. Bei vollst\"andiger Aufhebung aller Entartungen entspricht der F\"ullfaktor $\nu$ der ganzen Zahl $i$; \item $\varrho_{xx}$ {\it versus\/} $B$ bei steigendem $B$ eine zun\"achst konstante bis leicht abfallende Funktion, die bei mittleren Magnetfeldern Oszillationen, sogenannte {\sc Shubnikov}-{\sc de\,Haas}-Os\-zil\-la\-ti\-o\-nen zeigt, deren Amplitude mit zunehmendem Magnetfeld anw\"achst, bis sie schlie\ss lich bei den Magnetfeldern, bei denen $\varrho_{yx}$ Plateaux zeigt, identisch verschwindet; \item $\varrho_{yx}$ {\it versus\/} $n_{2D}$ eine abschnittsweise abfallende Funktion mit Plateaux bei den Werten \begin{equation} \varrho_{yx}= \frac{B}{e\,n_{2D}}= \frac{B}{e}\cdot\frac{h}{\nu eB}= \frac{h}{\nu e^2}, \end{equation} mit $\nu=g_sg_v \cdot i$ und $i=1,2,3\dots\;$; \item $\varrho_{xx}$ {\it versus\/} $n_{2D}$ eine Funktion, die ihre endlichen Werte lediglich bei den Ladungstr\"agerdichten hat, bei denen $\varrho_{yx}$ seinen Wert \"andert, dagegen bei den Ladungstr\"agerdichten verschwindet, bei welchen letzterer seine Plateaux besitzt. \end{enumerate} F\"ur die Leitf\"ahigkeiten $\sigma_{xx}$ und $\sigma_{xy}$ gelten die entsprechenden Ergebnisse. \bild{qhe_005}{QHE versus Magnetfeld (typische Me\ss kurve)}{12} \bild{qhe_005a}{Breite der QHE-Plateaux vs.\ Beweglichkeit (schematisch)}{6} \vfill\eject\noindent% \subsection{Das chemische Potential} In der Physik der Halbleiter werden h\"aufig die Begriffe \glqq {\sc Fermi}-Energie\grqq\ und \glqq chemisches Potential\grqq\ durcheinandergebracht.
Wir heben daher an dieser Stelle hervor \cite{AshcroftMermin}: \begin{itemize} \item Die {\sc Fermi}-{\it Energie\/} oder das {\sc Fermi}-{\it Niveau\/} $E_F$ (engl.\ {\it Fermi level\/}) ist {\it per definitionem\/} die Energie, welche die besetzten von den unbesetzten Einteilchen-Zust\"anden trennt. \item Das chemische Potential $\mu$ ist die \"Anderung der freien Energie pro hinzugef\"ugtem Teilchen. Dabei setzen wir stillschweigend voraus, da\ss\ die Teilchenzahl eine ladungsartige, erhaltene Gr\"o\ss e ist.% \footnote{Photonen z.\,B.\ tragen {\it keine\/} ladungsartige Quantenzahl und haben somit {\it kein\/} chemisches Potential.} Es ist also eine thermodynamische Gr\"o\ss e, die zum Beispiel in Analogie zum elektrischen Potential (Energie pro Ladung) gesehen werden kann. (Man vergegenw\"artige sich, da\ss\ die freie Energie die um den Summanden $TS$ (mit $S$ als Entropie) verminderte Energie eines makroskopischen Systems ist.) \end{itemize} Betrachten wir zun\"achst ein Metall. Wir werden dann die Frage stellen, wie sich die Konzepte auf den Fall des Halbleiters \"ubertragen. \par Bei einem Metall fallen {\sc Fermi}-Niveau und chemisches Potential am absoluten Nullpunkt der Temperaturskala zusammen. Die Verteilungsfunktion ist eine Stufen-Funktion, die an der Stelle $E=\mu$ auf Null abf\"allt. Bei endlichen Temperaturen $T>0$ wird diese Stufe abgerundet, da einige Elektronen unterhalb von $\mu$ auf Niveaus oberhalb von $\mu$ thermisch angeregt sind. H\"aufig bezeichnet man auch den Wert des chemischen Potentials am absoluten Nullpunkt als {\sc Fermi}-{\it Energie\/}, also \begin{equation} E_F:=\mu\mid_{T=0} \, . \end{equation} Man beachte, da\ss\ dies eine andere Definition der {\sc Fermi}-Energie ist als die oben vorgestellte. Durch die Verwendung verschiedener Konventionen kann es leicht zu Verwechselungen kommen, zumal sich das chemische Potential $\mu$ (und nat\"urlich auch die {\sc Fermi}-Energie - im eigentlichen Sinne definiert als die Grenze zwischen besetzten und unbesetzten Einteilchen-Zust\"anden) in Abh\"angigkeit von der Temperatur \"andert. \par Die Angelegenheit wird komplexer, wenn wir Systeme kondensierter Materie mit E\-ner\-gie\-l\"ucke (engl.\ {\it gap\/}) betrachten. Sind zum Beispiel alle Zust\"ande unterhalb des Gaps besetzt und oberhalb des Gaps unbesetzt (wie bei einem sogenannten intrinsischen Halbleiter, z.\,B.\ ideal reinem Silizium am absoluten Nullpunkt), so erf\"ullt {\it jedes\/} Energieniveau in der Energiel\"ucke die Definition einer {\sc Fermi}-Energie im eigentlichen Sinne (!). Wenn Halbleiterphysiker von \glqq der\grqq\ {\sc Fermi}-Energie eines intrinsischen Halbleiters sprechen, meinen sie das chemische Potential, welches f\"ur endliche Temperaturen wohldefiniert ist und sich im Limes verschwindender absoluter Temperatur bei undotierten Halbleitern in der Mitte des Gaps befindet. \par Auch wir wollen zuweilen dem allgemeinen Sprachgebrauch folgen und das chemische Potential \glqq {\sc Fermi}-Niveau\grqq\ nennen; denn gerade bei Systemen mit L\"ucken im Energiespektrum kann nur ersteres gemeint sein! \par Wir heben noch einmal hervor: Man ist stets auf der sicheren Seite, wenn man den thermodynamisch sauber definierten Begriff des chemischen Potentials verwendet. Halbleiterphysiker sind es gewohnt, dieses (nicht ganz korrekt) als {\sc Fermi}-Energie zu bezeichnen. \vfill\eject\noindent% \subsection{Shubnikov-de\,Haas-Oszillationen} Ein System mit L\"ucken im Energiespektrum ist das von uns betrachtete System von Elektronen in einem konstanten Magnetfeld.
Im Idealfall besteht es aus einer \"aquidistanten Menge von hochentarteten Niveaus, die im realen Fall durch Streuprozesse an Verunreinigungen verbreitert sind. Sowohl die Distanz als auch der Entartungsgrad (im idealisierten Fall) sind proportional zum \"au\ss eren Magnetfeld. Im Grenzfall niedriger Temperaturen treten beide in Konkurrenz, wenn die Elektronen sich bei dem sich ver\"andernden Mag\-net\-feld umverteilen. \par Betrachten wir ein Beispiel: Das Magnetfeld und damit der Entartungsgrad seien so stark, da\ss\ alle Elektronen im untersten {\sc Landau}-Niveau kondensiert sind. Vermindern wir nun langsam die St\"arke des Magnetfeldes, so kommt irgendwann der Zeitpunkt, zu dem der Entartungsgrad sich so vermindert hat, da\ss\ einige der Elektronen auf das n\"achsth\"ohere {\sc Landau}-Niveau ausweichen. Dieser Vorgang wiederholt sich entsprechend f\"ur die h\"oheren Niveaus. Man kann es auch so sehen: Die {\sc Fermi}-Energie $E_F$ durchwandert die {\sc Landau}-Niveaus. In der Realit\"at sind diese nun verbreitert, d.\,h.\ wir finden eine schwankende Zu\-stands\-dich\-te bei $E_F$. Theoretische Analysen, die \"uber die {\sc Drude}-Transporttheorie hinausgehen, zeigen, da\ss\ auch die {\sc Drude}-Impulsrelaxationszeit $\tau$ schwanken kann. Hierbei spielt die Dynamik sogenannter {\it Screening-Effekte\/} (das sind Abschirmungseffekte von St\"orstellen) eine wesentliche Rolle. Schwankt aber die {\sc Drude}-Impulsrelaxationszeit, so schwankt auch die Beweglichkeit der Elektronen. Eine kleine Zu\-stands\-dich\-te an der {\sc Fermi}-Kante entspricht somit einer kleinen Beweglichkeit. \par Bei vollst\"andiger F\"ullung eines {\sc Landau}-Niveaus, wenn also das chemische Potential genau zwischen zwei {\sc Landau}-Niveaus liegt, finden wir auf der Hauptdiagonalen der Tensoren ein Minimum der diagonalen Leitf\"ahigkeit $\sigma_{xx}$ und damit auch ein Minimum des spezifischen Widerstandes $\varrho_{xx}$. Die Schwankungen der Probenleitf\"ahigkeit in Abh\"angigkeit des \"au\ss eren Magnetfeldes sind die schon oben genannten {\sc Shubnikov}-{\sc de\,Haas}-Os\-zil\-la\-ti\-o\-nen. Die {\sc Shubnikov}-{\sc de\,Haas}-Minima liegen periodisch in $1/B$. Berechnen wir die Periode: Sei der F\"ullfaktor im allgemeinsten Fall (mit Valley- und Spin-Entartung) \begin{equation} \nu = g_s g_v \cdot i, \end{equation} so k\"onnen wir f\"ur die Anzahl der Ladungstr\"ager schreiben: \begin{equation} n_{2D} = i \cdot n_{L} = i \cdot g_s g_v \cdot \frac{eB_{(i)}}{h}, \end{equation} das hei\ss t \begin{equation} \frac{1}{B_{(i)}} = \frac{ig_sg_ve}{h n_{2D} } \end{equation} und damit \begin{equation} \frac{1}{B_{(i+1)}} = \frac{(i+1)g_sg_ve}{h n_{2D} }. \end{equation} Die Differenz beider Terme ergibt die Periode der Oszillationen: \begin{equation} \Delta(\frac{1}{B}) = \frac{1}{B_{(i+1)}}-\frac{1}{B_{(i)}} = g_sg_v\cdot\frac{e}{h n_{2D} }. \end{equation} \bild{qhe_005b}{Zustandekommen des Magnetowiderstandes}{6} \par Voraussetzung f\"ur die Beobachtung der {\sc Shubnikov}-{\sc de\,Haas}-Oszillationen sind \begin{enumerate} \item die Tatsache, da\ss\ das Produkt aus \"au\ss erem Magnetfeld $B$ und Beweglichkeit $\mu=e\tau/m$ der Ladungstr\"ager m\"oglichst hoch ist \begin{equation} \mu B \gg 1, \end{equation} was \"aquivalent ist zur Bedingung \begin{equation} \omega_c\tau \gg 1. \end{equation} \par Die Elektronen sollten also mindestens einmal, m\"oglichst mehrmals kreisen, ohne an einem Streuproze\ss\ teilzunehmen. 
Diese Bedingung bestimmt den Einsatz der {\sc Shubnikov}-{\sc de\,Haas}-Oszillationen und ist erst bei niedrigen Temperaturen relevant. In Zahlen: Sei \begin{equation} T=3\,{\rm K}, \end{equation} dann ist \begin{equation} 4\,kT \approx 1\,{\rm meV}. \end{equation} Gleichzeitig ist \begin{equation} \hbar\omega_c = 1.65\,{\rm meV}\,B(T), \end{equation} mit der effektiven Masse f\"ur Elektronen in Galliumarsenid \begin{equation} m = 0.07\,m_e \, ; \end{equation} \item die Tatsache, da\ss\ die thermische Aufweichung der {\sc Fermi}-Kante deutlich schmaler ist als der energetische Abstand zweier {\sc Landau}-Niveaus \begin{equation} \hbar\omega_c\gg k_B\cdot T, \end{equation} typisch \begin{equation} \hbar\omega_c \ge 4k_BT, \end{equation} das hei\ss t, die verwendeten Temperaturen sollten m\"oglichst niedrig sein. \end{enumerate} \newpage% \vspace*{5cm} \bild{qhe_005c}{{\sc Shubnikov}-{\sc de\,Haas}-Oszillationen (schematisch)}{8} \newpage% \section{Deutung des Effektes} \subsection{Eichtheoretisches Argument nach Laughlin} Der quantisierte {\sc Hall}-Effekt geh\"ort offensichtlich in die Kategorie makroskopischer Quanteneffekte bzw.\ topologischer Quantisierungseffekte; siehe hierzu die beigef\"ugte Tabelle. \begin{table}\vspace{0.5cm} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Konzept & Ph\"anomen & & Entdeckungsjahr \\ \hline\hline & $\bullet$ & Beobachtung der Supraleitung in Hg & 1911 \\ \hline $\bullet$ & & {\sc Dirac}-Monopol (hypothetisch) & 1931 \\ \hline & $\bullet$ & {\sc Meissner}-{\sc Ochsenfeld}-Effekt & 1933 \\ \hline $\bullet$ & & {\sc London}-Theorie & 1935 \\ \hline & $\bullet$ & Suprafl\"ussiges Helium-4 & 1938 \\ \hline $\bullet$ & & {\sc Ginzburg}-{\sc Landau}-Theorie & 1950 \\ \hline $\bullet$ & & BCS-Supraleitung & 1956/1957 \\ \hline $\bullet$ &$\bullet$ & {\sc Aharonov}-{\sc Bohm}-Effekt & 1959 \\ \hline $\bullet$ &$\bullet$ & Flu\ss quantisierung & 1961 \\ \hline $\bullet$ & $\bullet$ & {\sc Josephson}-Effekt & 1962 \\ \hline & $\bullet$ & Suprafl\"ussiges Helium-3 & 1972 \\ \hline & $\bullet$ & Quantisierter {\sc Hall}-Effekt & 1981 \\ \hline & $\bullet$ & Hochtemperatur-Supraleitung & 1986 \\ \hline & $\bullet$ & Supraleitung in Fullerenen & 1991 \\ \hline \end{tabular} \normalsize \end{center} \vspace{0.5cm} \caption{Makroskopische Quanteneffekte und topologische Quantisierungseffekte} \vspace{0.75cm}\end{table} \par Es war {\sc F.\ London}, der als erster die Idee hatte, das von {\sc Weyl} vorgeschlagene Eichfeldkonzept, welches das elektromagnetische Vektorpotential in Relation zu einer spekulativen L\"angen\"anderung von Vektoren unter Parallelverschiebung im Rahmen der Allgemeinen Relativit\"atstheorie (!) setzt, neu zu interpretieren, n\"amlich als den Generator einer Drehung der Phase der quantenmechanischen Wellenfunktion \cite{Yang79}. {\sc F.\ London} war auch der erste, der \"uber die Existenz quantisierter magnetischer Fl\"usse spekulierte \cite{Yang70}. \par In diese Kategorie von Argumenten f\"allt auch das Eichargument von {\sc Laughlin}, das auf einem Gedankenexperiment beruht \cite{Laughlin81}: Unter der Annahme, da\ss\ die Wellenfunktion in der {\sc Hall}-Probe ausgedehnt ist und als makroskopische Gr\"o\ss e eindeutig ist, erhalten wir eine der Flu\ss quantisierung analoge Bedingung, wenn wir die Probe zu einem Ring so zusammenbiegen, da\ss\ Strominjektions- (engl.\ {\it source\/}) und Stromextraktionskontakt (engl.\ {\it drain\/}) miteinander identifiziert sind. 
Der quantisierte Flu\ss\ durch den Ring sei mit $\Phi$ bezeichnet und ist von dem magnetischen Flu\ss, der dem {\sc Hall}-Magnetfeld entspricht, welches an jeder Stelle normal zur Probenfl\"ache ausgerichtet ist, wohl zu unterscheiden. Man nennt ihn auch den {\it fiktiven magnetischen Flu\ss\/}. \bild{qhe_006}{{\sc Laughlin}s Eichargument}{10} \par Uns interessiert die Beziehung von totalem Strom $I$ durch die Probe zum Spannungsabfall zwischen den Seitenkanten am Rande des Transportweges der Ladungstr\"ager. Wir k\"onnen sie aus dem {\sc Faraday}schem Induktionsgesetz herleiten. Letzteres schreiben wir nicht, wie \"ublich als \begin{equation} U_{ind}=-n\cdot\frac{\Delta\Phi}{\Delta t} \, , \end{equation} sondern als \begin{equation} I=\frac{\partial E_{ww}}{\partial\Phi}. \end{equation} \"Uberpr\"ufen wir diese Formel: Man \"uberzeugt sich leicht, da\ss\ die Einheiten stimmen: Energie wird in Volt-Ampere-Sekunde gemessen und der magnetische Flu\ss\ in Volt-Sekunde. Andererseits ist die Wechselwirkungsenergie im Falle der hier gew\"ahlten Probengeometrie (mit den Kantenl\"angen $a$,$b$,$c$) \begin{equation} E_{ww}=\int {\bf j}_{2D}{\bf A} \, d^3 x =|{\bf j}_{2D}| \, b \cdot |{\bf A}| \, c = I \cdot \Phi. \end{equation} Differentiation nach $\Phi$ reproduziert das ge\-w\"unsch\-te Ergebnis. Wegen der Quantisierung des fiktiven magnetischen Flusses m\"ussen wir die partielle (adiabatische) Ableitung durch eine Differenz ersetzen. F\"ur $i$ Elektronen, die von der einen zur gegen\"uberliegenden Seitenkante transportiert werden, erhalten wir die Energie- bzw.\ Strombilanz \begin{equation} I=\frac{\Delta E_{ww}}{\Delta\Phi} =i\cdot\frac{\mbox{Elektronenenergie}}{\mbox{Flu\ss quantum}} =\frac{ieU_H}{h/e} =\frac{ie^2}{h}\,U_H . \end{equation} Das hei\ss t, da\ss\ wir unter der Voraussetzung, da\ss\ die dem superstrom-\"ahnlichen {\sc Hall}-Strom zugeordnete Wellenfunktion die Eichinvarianz der Theorie in der Ebene respektiert, die ge\-w\"unsch\-te Quantisierung erhalten. \par Wie wir schon oben gesehen haben, liefert das Denken in Termen von Flu\ss quanten tiefe Einsichten in den fundamentalen Charakter des quantisierten {\sc Hall}-Effektes: Offensichtlich k\"onnen wir die St\"arke des Magnetfeldes durch die auf eine Fl\"acheneinheit bezogene Anzahl $n_{\Phi_0}$ von Flu\ss quanten $\Phi_0$ ausdr\"ucken; das hei\ss t, es ist \begin{equation} B = n_{\Phi_0} \cdot \Phi_0 = n_{\Phi_0} \cdot \frac{h}{e}. \end{equation} Somit erkennen wir in der Gr\"o\ss e \begin{equation} g_sg_v \cdot n_{\Phi_0} = g_sg_v \cdot \frac{eB}{h} = n_{L} \end{equation} den Entartungsgrad eines {\sc Landau}-Niveaus wieder. Wir k\"onnen auch schreiben \begin{eqnarray} \mbox{Anzahl der gef\"ullten {\sc Landau}-Niveaus} & = & \frac{1}{g_sg_v} \cdot \frac{\mbox{Anzahl der Ladungstr\"ager}} {\mbox{Anzahl der Flu\ss quanten}} \nonumber\\ & \phantom{=} & \nonumber\\ & = & \frac{1}{g_sg_v} \cdot \frac{h n_{2D} }{eB} \, = \, \frac{1}{g_sg_v}\cdot\nu \, = \, \frac{ n_{2D} }{ n_{L} } \; . \end{eqnarray} Wenn wir der Einfachheit halber $g_s=g_v=1$ annehmen, haben wir f\"ur $\nu=1$ gerade einen Zustand vorliegen, in dem sich in der Probe genauso viele Ladungstr\"ager wie Flu\ss quanten befinden. 
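\par {\it Zur Illustration:\/} Die folgende kleine Python-Skizze (kein Bestandteil der Versuchsauswertung; der Wert f\"ur $n_{2D}$ und das Feld $B$ sind frei gew\"ahlt, und es wird wie im Text $g_s=g_v=1$ angenommen) z\"ahlt die Flu\ss quanten pro Fl\"ache und bestimmt das Feld, bei dem $\nu=1$ erreicht w\"are:
\begin{verbatim}
# Illustrative Skizze (Annahme: g_s = g_v = 1; n_2D und B frei gewaehlt)
e = 1.602177e-19   # C
h = 6.626076e-34   # J s

n_2D = 3e15        # Ladungstraeger pro m^2  (= 3 x 10^11 cm^-2, Annahme)
B    = 5.0         # Magnetfeld in Tesla (Annahme)

n_flux = e * B / h          # Flussquanten pro Flaeche
nu     = n_2D / n_flux      # Fuellfaktor = Ladungstraeger pro Flussquant
B_nu1  = n_2D * h / e       # Feld, bei dem nu = 1 erreicht wird

print(f"Flussquanten pro m^2 bei B = {B} T : {n_flux:.3e}")
print(f"Fuellfaktor nu                     : {nu:.2f}")
print(f"B fuer nu = 1                      : {B_nu1:.1f} T")
\end{verbatim}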
Nach {\sc Kivelson}, {\sc Lee} und {\sc Zhang} liegt dann ein makroskopischer Quantenzustand vor, in dem bosonische Quasiteilchen, welche man sich als Bindungszust\"ande von je einer elektrischen Ladung und je einem magnetischen Flu\ss quant vorzustellen hat, in einen durch eine makroskopische Wellenfunktion zu beschreibenden Quantenzustand kondensiert sind \cite{Kivelson92, Kivelson96, Zhang89, Zhang92}. Dieser Zustand ist vergleichbar mit einem supraleitenden Grundzustand geladener Bosonen. (Analoges gilt f\"ur die h\"oheren F\"ullfaktoren.) \par So suggestiv diese Vorstellungen auch sein m\"ogen - ihre Rechtfertigung erg\"abe sich erst aus einer {\it mikroskopischen\/} Beschreibung des beobachteten Effektes. Die Situation ist hier ganz \"ahnlich wie in der gew\"ohnlichen Supraleitung: So erm\"oglicht die in Termen einer makroskopischen Wellenfunktion formulierte {\sc Landau}-{\sc Ginzburg}-Theorie eine ziemlich treffende Beschreibung (i) des Phasen\"ubergangs vom normalleitenden zum supraleitenden Zustand, (ii) des Verhaltens gegen\"uber \"au\ss eren Magnetfeldern sowie (iii) der Eigenschaften von Flu\ss schl\"auchen; eine mikroskopische Erkl\"arung liefert aber erst die Theorie von {\sc Bardeen}, {\sc Cooper} und {\sc Schrieffer}, ausgehend von der Idee einer von Phononen vermittelten schwach attraktiven Wechselwirkung von Elektronen und der Bildung von sogenannten {\sc Cooper}-Paaren, die - als Quasibosonen - schlie\ss lich in den supraleitenden Grundzustand kondensieren. \par F\"ur den integral-quantisierten {\sc Hall}-Effekt bleibt also die Frage: Was ist die fundamentale Elektron-Elektron-Wechselwirkung, welche den makroskopischen Quantenzustand herbeif\"uhrt? Diese Frage steht im klaren Gegensatz zur weitverbreiteten Lehrmeinung, der integral-quantisierte {\sc Hall}-Effekt lasse sich im Rahmen einer Theorie nicht-wech\-sel\-wir\-ken\-der Elektronen vollst\"andig verstehen. Eine befriedigende Antwort steht bis heute noch aus. \vfill\eject\noindent% \subsection{Lokalisierungsbild (bulk states)} Durch unterschiedlich starke Streuung der Elektronen an St\"orstellen wird die hochgradige Entartung der {\sc Landau}-Niveaus (teilweise) aufgehoben, so da\ss\ diese sich zu B\"andern verbreitern. Wenn wir den quantisierten {\sc Hall}-Effekt erkl\"aren wollen, m\"ussen wir annehmen, \begin{itemize} \item da\ss\ die Elektronen, die im Zentrum eines solchen Bandes liegen, nicht gestreut werden, und somit ausgedehnte Zust\"ande bilden, die zum Stromtransport beitragen, \item da\ss\ Elektronen, die in einer gewissen Umgebung des Zentrums liegen, so gestreut werden, da\ss\ sie lokalisierte Zust\"ande bilden. \end{itemize} \bild{qhe_007}{Lokalisierung versus Delokalisierung}{8} \par Die St\"orstellen in der Probe bilden ein \glqq Potentialgebirge\grqq. Die elektronische Wellenfunktion neigt dazu, sich in dessen T\"alern zu lokalisieren. In Konkurrenz dazu neigen Tunneleffekte zwischen benachbarten Minima dazu, die Wellenfunktion zu delokalisieren. Zus\"atzlich versucht das Magnetfeld, die Elektronen auf Kreisbahnen zu zwingen. Da\ss\ sich nun alle Effekte zusammen so arrangieren, da\ss\ der beschriebene Lo\-ka\-li\-sie\-rungs-De\-lo\-ka\-li\-sie\-rungs-\"Ubergang mit einer so robusten Quantisierung vertr\"aglich ist, mu\ss\ in einer mikroskopischen Theorie erst einmal gezeigt werden, und zwar rigoros, nicht nur vermittels Computer-Simulation. In dem vorliegenden Rahmen wollen wir uns darauf beschr\"anken, dies als plausible Annahme gelten zu lassen.
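\par {\it Zur Illustration:\/} Ein rein schematisches Zahlenmodell f\"ur die eben beschriebenen verbreiterten {\sc Landau}-Niveaus; die Verbreiterung $\Gamma$ und die Breite des Fensters \glqq ausgedehnter\grqq\ Zust\"ande sind willk\"urliche Annahmen und keine Me\ss gr\"o\ss en:
\begin{verbatim}
# Rein schematisches Modell verbreiterter Landau-Niveaus (alle Parameter Annahmen)
import numpy as np

hbar  = 1.054573e-34              # J s
e     = 1.602177e-19              # C
m_eff = 0.07 * 9.109390e-31       # effektive Masse in GaAs, kg

B     = 5.0                       # T (Annahme)
w_c   = e * B / m_eff             # Zyklotronfrequenz
Gamma = 0.15 * hbar * w_c         # angenommene Verbreiterung der Niveaus

E       = np.linspace(0.0, 5.0 * hbar * w_c, 2000)
zentren = (np.arange(6) + 0.5) * hbar * w_c

# Zustandsdichte als Summe gaussverbreiterter Niveaus (willkuerlich normiert);
# nur ein schmales Fenster um jedes Zentrum gilt im Modell als "ausgedehnt".
dos        = sum(np.exp(-((E - Ez) / Gamma) ** 2) for Ez in zentren)
ausgedehnt = np.any(np.abs(E[:, None] - zentren[None, :]) < 0.3 * Gamma, axis=1)

print("Anteil 'ausgedehnter' Energien im betrachteten Fenster:", ausgedehnt.mean())
\end{verbatim}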
\par Entscheidend f\"ur das Auftreten des quantisierten {\sc Hall}-Effektes ist nun, wie sich das System beim Durchfahren des chemischen Potentials durch die verbreiterten {\sc Landau}-Niveaus verh\"alt. F\"ur den Fall eines ansteigenden Magnetfeldes beobachten wir: \begin{itemize} \item Solange Bereiche lokalisierter Zust\"ande durchfahren werden, k\"onnen Zust\"ande, die mit Elektronen be- oder entv\"olkert werden, nichts zur Leitf\"ahigkeit der Probe beitragen. In diesem Bereich \"andert sich nichts am {\sc Hall}-Widerstand der Probe und der quantisierte Wert bleibt erhalten. Die longitudinale Leitf\"ahigkeit (und damit der longitudinale Widerstand) verschwindet. \item Werden Bereiche ausgedehnter Zust\"ande durchfahren, liefern die Zust\"ande, die mit Elektronen be- oder entv\"olkert werden, einen Beitrag zur Leitf\"ahigkeit. In diesem Bereich \"andert sich die {\sc Hall}-Leitf\"ahigkeit der Probe. Die longitudinale Leitf\"ahigkeit ist gerade die Steigung dieser \"Anderung versus \"au\ss erem Magnetfeld. (Letztere Aussage ist im Sinne eines ph\"anomenologischen Fits zu verstehen; einen einfachen formalen Beweis, der von einfachen physikalischen Voraussetzungen ausgeht, gibt es leider nicht.) \end{itemize} \bild{qhe_007a}{Lokalisierung-Delokalisierung und Percolation}{12} \par Die bisherigen experimentellen Fakten legen nahe, da\ss\ der quantisierte {\sc Hall}-Effekt un\-ab\-h\"an\-gig von Geometrie und Gr\"o\ss e der Probe ist. Eine mikroskopische Theorie, welche den Effekt an sich erkl\"art, sollte auch diese Tatsache erkl\"aren. Wir erwarten daher, da\ss\ die Theorie wesentlich auf Renormierungsgruppen-Argumente, also auf Analysen von Skalenverhalten einer {\it Quantentheorie eines ungeordneten Systems im Magnetfeld\/} bauen mu\ss. In der Tat ist dies genau der Ansatz, von dem in modernen quantenfeldtheoretischen Zug\"angen zum quantisierten {\sc Hall}-Effekt ausgegangen wird. Der interessierte Leser sei auf den hervorragenden, gut zu lesenden einf\"uhrenden Artikel von {\sc Khurana} in {\it Physics today\/} hingewiesen, der auch einen Hinweis auf interessante Originalarbeiten gibt \cite{Khurana8809}. \par An dieser Stelle seien kurz die wesentlichen Ideen der bahnbrechenden Arbeiten von {\sc Levine}, {\sc Libby}, {\sc Pruisken} und {\sc Khmel'nitzkii} skizziert \cite{Levine84, Pruisken84, Khmelnitzkii83}. \par Wie {\sc Khurana} in seinem einf\"uhrenden Artikel hinweist, gibt es Analogien zur Quantenfeldtheorie der Elementarteilchen. In der Transporttheorie spielen Leitf\"ahigkeiten die Rolle inverser Kopplungkonstanten in der Quantenfeldtheorie. Die Untersuchung der allgemeinen Struktur einer Elementarteilchentheorie f\"uhrt stets auf eine Renormierungsgruppenanalyse: Wie skalieren die Kopplungskonstanten beim Skalieren des Impuls\"ubertrages? Im Falle der nichtabelschen Eichtheorien beobachtet man zum Beispiel {\it asymptotische Freiheit\/} (das Verschwinden der Wechselwirkungen der Quarks im Hochenergie-Limes), in der Sprache der Renormierungsgruppe ultraviolett stabiler Fixpunkt genannt. Umgekehrt ist das Verhalten der Kopplungskonstante im Infrarot-Limes so, da\ss\ die Wechselwirkung immer st\"arker wird: Quark Confinement. F\"ur eine Diskussion all dieser Fragen siehe \cite{Itzykson80}. \par Im Falle des Stromtransports entspricht dies dem Fall verschwindender Leitf\"ahigkeit bei wachsender Systemgr\"o\ss e. Dies ist exakt der Fall, den wir im Verschwinden metallischen (ohmschen) Verhaltens beim Vorliegen von {\it Lokalisierung\/} in niederen Dimensionen haben. 
\par Alle beschriebenen Systeme lassen sich in einem feldtheoretischen {\sc Lagrange}schen Rahmen beschreiben. Dabei kann man aus den klassischen Bewegungsgleichungen nur einen ganz kleinen Teil der Physik ablesen; die wesentlichen Strukturen werden im allgemeinen erst im Rahmen der Quantisierungsprozedur ({\sc Green}-Funktionen, {\sc Feynman}-Graphen etc.) vermittels oft m\"uhseliger Rechnungen sichtbar. \par Bestimmte Strukturen haben kein klassisches Analogon: So ist aus der Quan\-ten\-feld\-the\-o\-rie wohlbekannt, da\ss\ man Terme zur {\sc Lagrange}-Funk\-ti\-on hinzuaddieren kann, welche die Bewegungsgleichungen nicht \"andern, wohl aber die quantenmechanische Phase drehen, die ja proportional zu $\exp\,iS$ ist, mit $S=\int L\,dt$ als Wirkung. (In der klassischen Me\-cha\-nik sind dieses die Terme proportional zu einer totalen Zeitableitung.) In der \"ublichen Elektrodynamik in drei Raumdimensionen w\"are dies ein Zusatzterm proportional zu ${\bf E}\cdot{\bf B}$, ein sogenannter {\sc Chern}-{\sc Pontryagin}-Term (im Gegensatz zum \"ublichen {\sc Maxwell}-Term proportional zu ${\bf E}^2-{\bf B}^2$), in der 2-dimensionalen Version ein sogenannter {\sc Chern}-{\sc Simons}-Term proportional ${\bf A}\times{\bf B}$. Fordern wir, da\ss\ die Wellenfunktionen $\psi\sim\exp\,iS$ der betrachteten Quantisierung eindeutig sind, so m\"ussen die diesen topologischen Termen zugeordneten Kopplungskonstanten quantisiert sein. \par Es sind {\sc Lagrange}-Funktionen genau dieses Typs, welche in der Magnetotransporttheorie eine Rolle spielen und die sich durch zwei Kopplungsparameter, einem konventionellen ($\sigma_{xx}$) und einem topologischen ($\sigma_{xy}$), auszeichnen. Das entsprechende {\it two-parameter scaling\/} zeigt, da\ss\ bei wachsender Systemgr\"o\ss e die topologische Kopplungskonstante (die gerade der {\sc Hall}-Leitf\"ahigkeit entspricht) quantisierte Werte annimmt, die gew\"ohnliche Kopplungskonstante hingegen das beobachtete oszillatorische Verhalten zeigt. Diese \glqq Auf\-bl\"at\-te\-rung\grqq\ von Quantenfeldtheorien mit topologischen Termen ist ein universelles Ph\"a\-no\-men und den Theoretikern wohlbekannt \cite{Cardy82a, Cardy82b}. (Die Abbildung zeigt die entsprechenden Renormierungsgruppenfl\"usse, wobei die Pfeile in Richtung wachsender Systemgr\"o\ss e zeigen \cite{Khmelnitzkii83}.) \bild{qhe_008}{{\sc Pruisken}-{\sc Khmel'nitzkii} two-parameter scaling}{10} \par Interessant dabei ist, da\ss\ die aus der Lokalisierungstheorie hervorgegangenen mikroskopischen Quantenfeldtheorien des quantisierten {\sc Hall}-Effektes gerade vom ersten Typus \cite{Pruisken84}, die ph\"anomenologischen Theorien hingegen vom zweiten Typus sind \cite{Frohlich93}. Die zugeordneten {\sc Lagrange}-Dichten haben stets die Form \begin{equation} {\cal L}=\sigma_{xx}\cdot{\cal L}_0+\sigma_{xy}\cdot{\cal L}_{top}. \end{equation} Dies erkl\"art schlie\ss lich, warum die Quantisierung exakt sein mu\ss. Die Sache hat nur zwei Haken: Zum einen steckt in der Wahl der geeigneten {\sc Lagrange}-Dichte - gleichg\"ultig, auf welchem Level sie erfolgt - immer eine Modell-Annahme; zum anderen ist es sehr schwer, die Quantenfeldtheorie dieses Problems rigoros zu l\"osen. Letzteres gilt aber auch f\"ur die experimentell am erfolgreichsten verifizierte Theorie schlechthin, die Quan\-ten\-e\-lek\-tro\-dy\-na\-mik. \par {\it Bemerkung:\/} Man k\"onnte an dieser Stelle vielleicht einwenden, da\ss\ die hier nur angerissenen Formalismen ein wenig zu akademisch sind und da\ss\ alles auch viel einfacher gehen m\"u\ss te. 
Unter Experten besteht aber durchaus ein Konsensus darin, da\ss\ die Anwendung quantenfeldtheoretischer Methoden f\"ur den Lokalisierungs-Delokalisierungs-\"Ubergang im \"au\ss eren Magnetfeld der geeignete Rahmen ist (siehe z.\,B.\ \cite{Janssen94}). \vfill\eject\noindent% \subsection{Randkanal-Bild (edge states)} Die Physik des quantisierten {\sc Hall}-Systems ist ein ausgezeichnetes Beipiel f\"ur die Anwendung einer quantenelektrodynamischen Theorie auf ein reales System der kondensierten Materie. Diese mu\ss\ nat\"urlich die lokale Eichinvarianz, die ein sehr fundamentales physikalisches Prinzip darstellt, respektieren. Wenn das betrachtete physikalische System, die Probe, eine endliche Ausdehnung und damit einen Rand besitzt, mu\ss\ die Eichinvarianz auf dem Rand (engl.\ {\it edge\/}) kompatibel zur Eichinvarianz im Hauptteil (engl.\ {\it bulk\/}) sein. H\"atten wir zum Beispiel ein Modell des quantisierten {\sc Hall}-Systems in einem {\sc Lagrange}schen Rahmen formuliert und w\"urden durch Variation der Wirkung die klassischen Bewegungsgleichungen ableiten, so m\"u\ss ten wir bei der Variation Randterme ber\"ucksichtigen, die im unendlich ausgedehnten Fall meistens wegdiskutiert werden. \par Es ist eine wohlbekannte Tatsache aus der mathematischen Physik, da\ss\ Begriffe wie chemisches Potential und Strom ihren Ursprung in der Eichinvarianz haben \cite{Araki77}. Eichinvarianz impliziert die Existenz von Ladungen und zugeordneten Erhaltungsgesetzen. Meist ist die Ladung in elementarer Weise mit den Teilchen verkn\"upft, so da\ss\ wir auch von Erhaltung der Teilchenzahl $N$ sprechen d\"urfen. (Dies ist aber nicht der allgemeinste Fall.) Thermodynamische Gleichgewichtszust\"ande solcher Systeme werden nicht nur durch die inverse Temperatur $\beta=1/kT$, der kanonisch konjugierten Variablen zur Energie, sondern durch ein chemisches Potential $\mu$, der kanonisch konjugierten Variablen zur Teilchenzahl (im allgemeinen Fall zur Gesamtladung), gekennzeichnet. Zwischen zwei Kontakten, die ja die Verbindungen der Probe zu Reservoirs, die sich im thermodynamischen Gleichgewicht befinden, darstellen, mu\ss\ die Differenz des chemischen Potentials endlich sein, wenn wir einen elektrischen Stromtransport haben wollen.% \footnote{Bei Bosonen ist die Endlichkeit des chemischen Potentials Voraussetzung f\"ur die M\"oglichkeit der sogenannten {\sc Bose}-Kondensation, mit deren Hilfe man den im Praktikumsexperiment beobachtbaren $\lambda$-\"Ubergang (nach der Form des Verlaufs der spezifischen W\"arme) des fl\"ussigen Helium-4 bei 2.18 K in den suprafl\"ussigen Zustand erkl\"aren kann. Es gibt auch eine Theorie des quantisierten {\sc Hall}-Effekts, welche sich auf das Prinzip der {\sc Bose}-Kondensation bezieht. Hier sind die Bosonen gedachte Bindungszust\"ante aus Flu\ss quanten und Elektronen, welche bei ganzzahligen (oder auch bestimmten rationalen) F\"ullfaktoren kondensieren (siehe auch \cite{Kivelson92, Kivelson96, Zhang89, Zhang92}). \par Im Falle von {\sc Bose}-Gasen unbeschr\"ankter Teilchenerzeugung, oder etwas pr\"aziser gesprochen, im Falle von Gasen von {\sc Bosonen} ohne ladungsartige erhaltende Quantenzahl (wie Photonen und Phononen) gibt es kein chemisches Potential und somit auch {\it nicht\/} die M\"oglichkeit der {\sc Bose}-Kondensation. 
Da\ss\ man das Photonengas in einem Hohlraum lediglich durch einen Parameter, n\"amlich $\beta=1/kT$, charakterisieren kann, ist gerade einer der Eckpfeiler der ber\"uhmten Hypothese {\sc Planck}s und seiner Entdeckung der nach ihm benannten Konstante (vgl.\ \cite{Weidlich, Haag}). } \bild{qhe_009}{Randzust\"ande versus Volumenzust\"ande}{12} \par Nun besteht elektrische Leitf\"ahigkeit immer dann, wenn sich die Elektronen zugunsten einer bestimmten Bewegungs- oder Vorzugsrichtung im $k$-Raum (auf den Parabeln der Subb\"ander) umverteilen k\"onnen. Dabei sind lediglich die Elektronen nahe der {\sc Fermi}-Kante wesentlich, da alle energetisch tieferliegenden Zust\"ande besetzt sind und ihre Umbesetzung durch die Ununterscheidbarkeit der Elektronen trivial ist, das hei\ss t nichts zum Transport beitr\"uge. Ferner m\"ussen freie Zust\"ande einer Vorzugsrichtung im $k$-Raum vorhanden sein, d.\,h.\ die Zu\-stands\-dich\-te bei der {\sc Fermi}-Energie darf nicht verschwinden. Wie wir gesehen haben, wird diese Bedingung gerade in der Umgebung der {\sc Landau}-Niveaus erf\"ullt. \par Der wesentliche Punkt ist nun der folgende: Neben dem schon vorgestellten Szenario, da\ss\ die {\sc Fermi}-Energie ein r\"aumlich starr vorgegebenes Spektrum von {\sc Landau}-B\"andern durchl\"auft, d\"urfen wir davon ausgehen, da\ss\ letztere sich selbst an dem Rand der Probe so nach oben biegen, da\ss\ sie ihrerseits eine global konstante {\sc Fermi}-Kante durchlaufen. Denn die Tatsache, da\ss\ der Probenrand f\"ur die Elektronen ein un\"uberwindliches Hindernis darstellt, kann so gedeutet werden, da\ss\ die Elektronen der Probe in einem Potentialtopf oder einsperrendem Potential (engl.\ {\it confining potential\/}) gefangen sind. Somit kreuzt jedes {\sc Landau}-Niveau infolge des starken Confining-Potentials die {\sc Fermi}-Energie am Rand der Probe, an dem somit ein Kanal nicht verschwindender Zu\-stands\-dich\-te entsteht. \par Die Bewegung der Elektronen am Rand der Probe kann man sich aus Teilkreisen (engl.\ {\it skipping orbits\/}) zusammengesetzt denken. Eine Streuung nach innen oder gar von Probenrand zu Probenrand besitzt eine verschwindende Wahrscheinlichkeit, da ein vom Rand in die Probe hineinkreisendes Elektron, wenn es auf eine St\"orstelle trifft, nach einem weiteren Umlauf wieder an den Rand zur\"uckgestreut wird. Offensichtlich haben {\it skipping orbits\/} eine h\"ohere Frequenz als die Vollkreise, das hei\ss t, ihre Energie ist gr\"o\ss er. Dies erkl\"art auf semiklassischen Niveau, warum die Randzust\"ande eine h\"ohere Energie haben m\"ussen als die Zust\"ande im Innern. \par Wir fassen zusammen: Solange wir uns also nicht im Zentrum eines {\sc Landau}-Niveaus befinden, verschwindet die Leif\"ahigkeit im Probeninnern: Dort stehen keine Zust\"ande f\"ur den Stromflu\ss\ zur Verf\"ugung. F\"ur jedes {\sc Landau}-Niveau bildet sich ein Randkanal aus, wobei der dem niedrigsten {\sc Landau}-Level zugeordnete Kanal am weitesten au\ss en liegt. \par Beschreiben wir nun den Stromtransport am Rand der Probe im Rahmen einer 1-dimensionalen Transporttheorie von {\sc Landauer} und {\sc B\"uttiker} \cite{Buettiker90, Landauer87}. Eine solche Theorie kann in Analogie zur W\"armeleitung gesehen werden. Dort wird die Frage behandelt, wie Energie von einer hei\ss en zu einer kalten Stelle transportiert wird. Energie und Temperatur sind in gleicher Weise duale oder konjugierte Variablen wie Teilchenzahl (oder besser Ladung) und chemisches Potential. 
Stellen wir uns also die auf dem Rand kontaktierte Probe als ein System vor, in dessen Rand an bestimmten Stellen das chemische Potential lokal fixiert ist. Zur Berechnung der gemessenen Leitf\"ahigkeiten m\"ussen wir einen Ausdruck f\"ur einen 1-dimensionalen Strom in Termen der 1-dimensionalen Zu\-stands\-dich\-te angeben. \par Diese ist f\"ur den allgemeinen Fall gegeben durch \begin{equation} D_{1D}(E) = g_s g_v \cdot \frac{1}{h}\sqrt{\frac{2m}{E}}. \end{equation} Es ist hervorzuheben, da\ss\ es zu jedem Energieeigenwert $E$ {\it zwei\/} Gruppengeschwindigkeiten $v_g$ gibt, die sich durch ihr Vorzeichen unterscheiden. Wir k\"onnen schreiben: \begin{equation} v_g(E) = \frac{1}{\hbar} \left. \frac{dE}{dk} \right|_E = \frac{1}{\hbar} \left. \frac{d(\hbar^2k^2/2m)}{dk} \right|_E = \left. \frac{\hbar k}{m} \right|_E = \pm \sqrt{ \frac{2E}{m} }. \end{equation} In der Berechnung des elektrischen Stroms in einem 1D-Elektronensystem, der sich aus dem Produkt von Gruppengeschwindigkeit $v_g$ und Elektronenladung $e$ gewichtet mit der Zu\-stands\-dich\-te $D_{1D}(E)$ und der {\sc Fermi}-{\sc Dirac}-Verteilungsfunktion \begin{equation} f(E)=\frac{1}{\exp\,(E-\mu)/kT+1} \end{equation} ergibt, darf nur ein Zweig ber\"ucksichtigt werden. Somit ist \begin{eqnarray} I &=& \int_0^\infty \frac{1}{2}\,ev_g D_{1D}(E) f(E) \,dE \nonumber\\ & & \nonumber\\ &=& \frac{e}{2} \cdot \int_0^\infty \sqrt{\frac{2E}{m}} \cdot g_sg_v \cdot \frac{1}{h} \sqrt{\frac{2m}{E}} \cdot f(E) \,dE \nonumber\\ & & \nonumber\\ &=& g_sg_v \cdot \frac{e}{h} \cdot \int_0^\infty f(E) \,dE \nonumber\\ & & \nonumber\\ &=& g_sg_v \cdot \frac{e}{h} \, \mu. \end{eqnarray} Man erinnere sich, da\ss\ im Grenzfall $T \rightarrow 0$ die {\sc Fermi}-{\sc Dirac}-Verteilungsfunktion die Form \begin{equation} f(E)=\left\{ \begin{array}{ccl} 1, & \phantom{123} & \mbox{ {\rm falls $E<\mu$,} }\\ 0, & \phantom{123} & \mbox{ {\rm falls $E>\mu$,} } \end{array} \right. \end{equation} hat. Durch Einsetzen dieses Ausdrucks in das obige Integral verifiziert man leicht das Ergebnis der Rechnung f\"ur den Fall verschwindender Temperatur. \par Betrachten wir nun einen direkten Halbleiter mit $g_v=1$, so folgt aus der zweifachen Spinentartung $g_s=2$ f\"ur den Transport in 1D-Kan\"alen mit \begin{equation} \mu=eU \end{equation} die Beziehung \begin{equation} I=i \cdot \frac{2e^2}{h} \cdot U=:\frac{U}{R_{ball}}, \end{equation} f\"ur $i$ besetzte Subb\"ander mit \begin{equation} R_{ball}=\frac{U}{I} =\frac{h}{2ie^2}, \end{equation} also f\"ur $i=1$ gerade $h/2e^2=12.9064 {\dots} \, k\Omega$. Eine solche Quantisierung der elektrischen Leitf\"ahigkeit im Regime des ballistischen Transports (in dem die Bewegung der Elektronen streufrei erfolgt) wurde 1988 erstmalig entdeckt \cite{Wees88, Wharam88a, Wharam88b, Wharam90}, siehe auch \cite{Khurana8811}. In starken Magnetfeldern (und auch in starken elektrischen Feldern) wird die Spinentartung me\ss bar aufgehoben, und es entwickeln sich f\"ur die Leitf\"ahigkeit zus\"atzlich Halb-Plateaux, d.\,h.\ ungerade Vielfache von $e^2/h$. Neuere Forschungen in Bochum und Cambridge haben gezeigt, da\ss\ unterhalb von $2e^2/h$ ein Zwischenplateau auch ohne \"au\ss eres Magnetfeld auftritt, wobei noch nicht ganz klar ist, ob es sich dabei um eine $0.5\cdot(2e^2/h)$- oder um eine $0.7\cdot(2e^2/h)$-{\it Struktur\/} handelt.
Unabh\"angig davon ist {\it schon die Struktur an sich\/} ein Hinweis auf eine spontane Spin-Polarisation bzw.\ einen spontanen Magnetismus infolge einer Elektron-Elektron-Wechselwirkung \cite{Thomas96,TscheuschnerWieck96}. \par Im folgenden wollen wir die Quantisierung der {\sc Hall}-Leitf\"ahigkeit durch die Quantisierung der 1D-Randzust\"ande verstehen. Dabei k\"onnen wir davon ausgehen, da\ss\ die soeben vorgestellte Spinentartung aufgehoben ist. Wir setzen also $g_s=1$ und setzen ferner $g_v=1$ an, was z.\,B.\ f\"ur GaAs-Heterostrukturen gilt. \par Wir nehmen im folgenden an, da\ss\ unsere Probe die Form eines typischen {\sc Hall}-Stabes (engl.\ {\it Hall bar\/}) habe und mit sechs Kontakten $\mu_1,\dots \mu_6$ (im Uhrzeigersinn gez\"ahlt) ver\-se\-hen ist, welche die chemischen Potentiale festlegen. {\it Per conventionem\/} sei $\mu_1$ die Source (hier flie\ss t der Strom $j_x$ hinein) und $\mu_4$ der Drain (hier flie\ss t der Strom $j_x$ ab). Die {\sc Hall}-Spannung kann zwischen den jeweils gegen\"uberliegenden Kontakten $\mu_2$ und $\mu_6$ bzw.\ $\mu_3$ und $\mu_5$ abgenommen werden. Entsprechend messen wir den longitudinalen Spannungsabfall zwischen $\mu_2$ und $\mu_3$ bzw.\ $\mu_5$ und $\mu_6$. \"Uber die Kontakte $\mu_2,\mu_3,\mu_5,\mu_6$ sollten keine Str\"ome ab- oder zuflie\ss en, so da\ss\ $\mu_1$ und $\mu_4$ den Gesamtstrom tragen m\"ussen. Dies ist gleichbedeutend mit der Bedingung, da\ss\ \glqq spannungsrichtig gemessen wird\grqq, das hei\ss t, der Innenwiderstand unseres Voltmeters gegen unendlich geht. \par Der totale Strom eines Kontaktes oder Reservoirs ist die Differenz der Str\"ome, die von einlaufenden und auslaufenden Kan\"alen getragen werden. Wir m\"ussen also den ankommenden Strom von dem injizierten Strom abziehen, um den totalen Strom zu erhalten. \ejec \bild{qhe_010}{Randstromkan\"ale}{16} Unter der Annahme, da\ss\ der Transportstrom sich im Uhrzeigersinn bewegt, erhalten wir \begin{equation} \begin{array}{lc} \mbox{Source reservoir} \phantom{123} & \mu_1:\phantom{x}I_{tot}=\phantom{-}I=i\cdot\frac{e}{h}(\mu_1-\mu_6) \\ \mbox{Potential reservoir} \phantom{123} & \mu_2:\phantom{x}I_{tot}=\phantom{-}0=i\cdot\frac{e}{h}(\mu_2-\mu_1) \\ \mbox{Potential reservoir} \phantom{123} & \mu_3:\phantom{x}I_{tot}=\phantom{-}0=i\cdot\frac{e}{h}(\mu_3-\mu_2) \\ \mbox{Drain reservoir} \phantom{123} & \mu_4:\phantom{x}I_{tot}= - I=i\cdot\frac{e}{h}(\mu_4-\mu_3) \\ \mbox{Potential reservoir} \phantom{123} & \mu_5:\phantom{x}I_{tot}=\phantom{-}0=i\cdot\frac{e}{h}(\mu_5-\mu_4) \\ \mbox{Potential reservoir} \phantom{123} & \mu_6:\phantom{x}I_{tot}=\phantom{-}0=i\cdot\frac{e}{h}(\mu_6-\mu_5) \end{array} \end{equation} Der {\sc Hall}-Widerstand ergibt sich somit als \begin{equation} R_H=\frac{U_H}{I} =\frac{(\mu_3-\mu_5)/e}{I} =\frac{(\mu_3-\mu_5)}{i(\mu_3-\mu_5)e^2/h} =\frac{h}{ie^2}, \end{equation} der longitudinale Magnetowiderstand als \begin{equation} R_{xx}=\frac{U_{xx}}{I} =\frac{(\mu_2-\mu_3)/e}{I} =0. \end{equation} Man beachte, da\ss\ die Leitf\"ahigkeit pro Randkanal gerade \begin{equation} \delta\sigma=\frac{e^2}{h} \end{equation} ist, die Summe somit \begin{equation} \sigma=i\cdot\frac{e^2}{h} \, . \end{equation} \par Das Randkanal-Bild ber\"ucksichtigt, im Gegensatz zum Lokalisierungsbild, die Geometrie der Probe und das Vorhandensein von Kontakten. Es erkl\"art die Quantisierung des {\sc Hall}-Widerstandes als Folge der 1D-Quantiserung der Leitf\"ahigkeit. 
Als versteckte Annahme enth\"alt es aber die Lokalisierung im Bulk und kann daher nicht f\"ur sich isoliert betrachtet werden. Wohl aber mu\ss\ eine Theorie der Edge-States vertr\"aglich sein mit einer solchen, welche den Bulk beschreibt. Das Randkanal-Bild erkl\"art allerdings {\it nicht\/} die Form der \"Uberg\"ange zwischen den Plateaux und deren Breite. \vfill\eject\noindent% \section{Aufgaben} Es ist f\"ur das Verst\"andnis des Versuches {\it sehr n\"utzlich\/}, die theoretischen Aufgaben schon im Rahmen der Vorbereitungen zu l\"osen. Die experimentellen Aufgaben sollten im Rahmen der Durchf\"uhrung und Auswertung des Versuchs {\it vollst\"andig\/} bearbeitet werden. Die relevanten Daten der Ger\"ate-Eichung und der MBE-Wachstumsprotokolle erfragen Sie bitte beim Assistenten. \subsection{Theorie} Die folgenden Aufgaben sollten einen Anhaltspunkt daf\"ur geben, was von den Studierenden im Rahmen der Vorbereitung erwartet wird. \par Eine Bemerkung zum Literaturstudium: Obwohl dieses Skript so {\it self-contained\/} wie m\"oglich gehalten wurde, sei zumindest eine Lekt\"ure der Quellen \cite{Klitzing80}, \cite{Klitzing86} und \cite{Klitzing90} em\-pfoh\-len. Wer noch ein wenig \"uber unseren Horizont blicken m\"ochte, sei auf \cite{Kivelson96} verwiesen. \par W\"ahrend des Versuchs besteht ausreichend Zeit, \"uber den theoretischen Hintergrund des Experiments zu diskutieren und offene Fragen zu kl\"aren. Die relevanten Daten der Probe, das hei\ss t, die MBE-Wachstumsprotokolle, und die Eichung der Ger\"ate erfragen Sie bitte beim Assistenten. \par Transportmessungen im Quantenregime sind Pr\"azisionsmessungen. Besondere Sorgfalt mu\ss\ daher auf eine wirksame Unterdr\"uckung von St\"oreinfl\"ussen (Netzbrummen, St\"orungen durch Radio- und Fernsehsender, thermisches Hintergrund-Rauschen etc.) gelegt werden. Eine sehr effektive Methode in diesem Zusammenhang ist die Lock-In-Technik. Im Addendum zu dieser Anleitung finden Sie einen Auszug aus dem einf\"uhrenden Kapitel der Anleitung eines kommerziellen Lock-In-Verst\"arkers, in dem das Prinzip auch f\"ur den elektronischen Laien verst\"andlich erkl\"art wird \cite{Stanford}. \begin{enumerate} \item In der obigen Einf\"uhrung wurde die Formel f\"ur den klassischen {\sc Hall}-Effekt in einem Einteilchen-Bild hergeleitet. Verifizieren Sie noch einmal die Formel im Rahmen der {\sc Drude}-Theorie des elektrischen Transports in Termen der {\sc Drude}-Impulsrelaxationszeit $\tau$. \item Verifizieren Sie die Inversion des Leitf\"ahigkeitstensors zum Widerstandstensor (das hei\ss t: Suchen Sie die inverse Matrix $\varrho_{ij}$ von $\sigma_{ij}$) und diskutieren Sie den Fall sehr hoher Magnetfelder. \item Sch\"atzen Sie die 2-dimensionale Ladungstr\"agerdichte $n_{2D}$ und die elektrische Feld\-st\"ar\-ke $F_s$ an der Grenzfl\"ache einer $\mbox{Ga}\mbox{As}$-$\mbox{Al}_{0.3}\mbox{Ga}_{0.7}\mbox{As}$-% Heterostruktur ab, die sich durch eine Bandkantendiskontinuit\"at von $300 \;\mbox{meV}$ auszeichnet. \item F\"ur ein unendlich hohes Kastenpotential der Breite $L$ ergeben sich die Energieeigenwerte (mit $m_z$ als effektive Masse der Ladungstr\"ager in z-Richtung) \begin{equation} E_j = \frac{\hbar^2k_j^2}{2m_z}, \phantom{123} k_j = \pi\,\frac{j+1}{L}, \phantom{123} (j=0,1,\dots), \end{equation} das hei\ss t als \begin{equation} E_j=\frac{h^2(j+1)^2}{8m_zL^2}, \phantom{123} (j=0,1,\dots).
\end{equation} F\"ur ein Dreieckpotential ergeben sich die Energieeigenwerte n\"aherungsweise als \begin{equation} E_j \approx \left( \frac{\hbar^2}{2m_z} \right) ^{1/3} \left[ \frac{3\pi e F_s}{2} \left(j+\frac{3}{4}\right) \right] ^{2/3}, \phantom{123} (j=0,1,\dots). \end{equation} Berechnen Sie mit der elektrischen Feld\-st\"ar\-ke $F_s$ aus Aufgabe 3 und $m_z=0.07\,m_0$ die Subbandenergien $E_0$ und $E_1$. \item Sei $m_x=m_y=0.07\,m_0$.% \footnote{Die effektive Masse kann in verschiedene Richtungen verschiedene Werte annehmen.} Berechnen Sie die {\sc Fermi}-Energie $E_F$ f\"ur \begin{equation} n = 3 \times 10^{11} {\rm cm}^{-2}. \end{equation} Wieviele Subb\"ander sind bei $T=0\,K$ und der 2-dimensionalen Ladungstr\"agerdichte $n_{2D}$ aus Aufgabe 3 besetzt? \item Machen Sie sich mit der expliziten L\"osung der {\sc Schr\"odinger}-Gleichung im konstanten \"au\ss eren Magnetfeld vertraut \cite{LandauLifshitz}. Was geschieht im Falle einer {\sc Vogt}-Geometrie und in Falle eines schr\"agen (engl.\ {\it tilted\/}) Magnetfeldes (qualitativ) \cite{Clausnitzer85}? \item Zeichnen Sie qualitativ den Verlauf von $\varrho_{xx}$ und $\varrho_{yx}$ als Funktion des Magnetfeldes f\"ur folgende F\"alle: \begin{itemize} \item[(a)] klassischer Grenzfall; \item[(b)] $T=0$ ohne lokalisierte Zust\"ande; \item[(c)] $T=0$ mit lokalisierten Zust\"anden. \end{itemize} Kennzeichnen Sie auf der Magnetfeldachse bei b) und c) die Stelle $\mu B=1$. \item Erkl\"aren Sie das Prinzip der Lock-In-Me\ss technik, und entwerfen Sie einen Aufbau f\"ur das {\sc Hall}-Experiment. \end{enumerate} \vfill\eject\noindent% \subsection{Experiment} Das Experiment sollte in etwa wie folgt ablaufen: \begin{enumerate} \item Zusammen mit dem Assistenten machen sich die Studenten mit der Me\ss apparatur vertraut (Helium-Kanne bzw.\ Helium-Kryostat, Heliumgas-R\"uckf\"uhrleitungssystem, Magnet und Netzger\"at, Probenhalter, {\sc Hall}-Bar und seine Geometrie etc.). \item Sodann wird der Probenhalter mit der Probe beladen und in die Helium-Kanne vorsichtig (!) eingebracht. \item Die Messungen sind zun\"achst bei der Temperatur des fl\"ussigen Heliums unter Normaldruck durchzuf\"uhren und mehrmals zu wiederholen. \item Mit Hilfe des vom Autor ersonnenen \glqq {\sc von\,Klitzing}\,izer\grqq, einem Steckfeld bestehend aus einer Kollektion von Wendelpotis, die auf die Werte der quantisierten {\sc Hall}-Widerst\"ande justiert sind, pr\"ufe man das grunds\"atzliche Funktionieren der Me\ss\-ein\-rich\-tung. So k\"onnen grobe Fehler ausgeschlossen werden. \item Erst jetzt wird begonnen, den Gasdruck \"uber dem fl\"ussigen He-Spiegel langsam zu abzusenken (\glqq Abpumpen\grqq). \item W\"ahrend dieses Abpumpens soll sowohl $\varrho_{yx}$ als auch $\varrho_{xx}$ mehrfach in $B$-sweeps gemessen werden, um die Temperaturabh\"angigkeit zu studieren. \item Nachdem die niedrigstm\"ogliche Temperatur erreicht ist, werden die Messungen wiederholt. \item Sodann wird die Probe durch die eingebaute LED mit IR-Lichtblitzen unterschiedlicher Dauer beleuchtet. Die Messungen sind zu wiederholen, um den Einflu\ss\ der Beleuchtung zu studieren. Es soll analysiert werden, wie sich das Verhalten der Probe nach Belichtung im Laufe der Zeit \"andert. Dazu ist es notwendig, die Messungen noch mehrfach zu wiederholen (typ.\ 10 sec., 2 min., 20 min., 2 h nach Beleuchtung). \item Die abschlie\ss enden Messungen von $\varrho_{yx}$ und $\varrho_{xx}$ werden am darauffolgenden Tag durchgef\"uhrt. \item ({\it Fakultativ:\/}) W\"ahrend des Versuchs gibt es Totzeiten. 
Diese sollen nicht nur genutzt werden zur Diskussion der Theorie des Effektes, sondern auch - wenn m\"oglich - zur Hospitation an der MBE-Anlage der Arbeitsgruppe w\"ahrend des Wachsens einer Probe (\glqq wachsen\grqq\ hier verstanden als transitives Verb), die Sie selbst am zweiten Tag kontaktieren und messen d\"urfen. \end{enumerate} \vspace*{1.0cm} \bild{qhe_901}{Helium-Kanne (schematisch) \cite{Hensel96}}{16} \bild{qhe_901a}{F\"ullkurve der Helium-Kanne}{12} \bild{qhe_993}{{\sc Hall}-Streifen (Skizze)}{12} \bild{qhe_993a}{{\sc Hall}-Streifen (Photo)}{12} \bild{qhe_902}{Bondplan f\"ur die {\sc Hall}-Probe (schematisch) \cite{Hensel96}}{12} \vfill\eject\noindent% \subsection{Auswertung} Wenn Sie Schritt f\"ur Schritt die folgenden Punkte durchgehen, d\"urfte die Auswertung Ihnen keine besonderen Probleme bereiten. Halten Sie sich daher bitte an das folgende {\it Curriculum\/}: \begin{enumerate} \item Machen Sie sich eine Zeichnung der {\sc Hall}-Probe und tragen Sie in diese ein, in welcher Richtung der Strom flie\ss t, in welcher Richtung das Magnetfeld wirkt und wo welche Spannungen abgegriffen werden. \item Machen Sie sich eine Zeichnung Ihrer Verschaltung (Lock-In-Verst\"arker, Trenntrafo, Vorwiderstand, XY-Schreiber) und tragen Sie in diese ein, wo welcher Strom flie\ss t und wo welche Spannung anliegt. \item Machen Sie eine Bestandsaufnahme der Bereichseinstellungen der verwendeten Ge\-r\"a\-te (Lock-In-Verst\"arker, XY-Schreiber). Bereiten Sie Ihre Graphen so auf, da\ss\ alle Achsen beschriftet sind. \item Bestimmen Sie die in Abwesenheit eines Magnetfeldes vorliegende Leitf\"ahigkeit $\sigma_0$ aus dem \"uber den Vorwiderstand in die Probe injizierten Strom und dem gemessenen longitudinalen Spannungsabfall $U_x$: \begin{equation} \sigma_0 = \frac{j_x}{E_x} = \frac{I_x}{U_x} \cdot \frac{L}{W}, \end{equation} wobei $L$ den Abstand der L\"angsspannungsabgriffe (z.\,B.\ zwischen $\mu_2$ und $\mu_3$) und $W$ die Breite des {\sc Hall}-Streifens bezeichnet. \item Bestimmen Sie $\varrho_{xx}$ (klassisch gegeben durch $m/n_{2D}e^2\tau$) und $\varrho_{yx}$ (klassisch gegeben durch $B/n_{2D}e$) als Funktion des Magnetfeldes aus den von Ihnen aufgenommenen Me\ss kurven. \par {\it Hinweis:\/} Auf dem XY-Schreiber k\"onnen Sie die gemessenen Spannungen direkt ablesen, wenn Sie alle Umrechnungsfaktoren aus den Regler- und Bereichseinstellungen ber\"ucksichtigen. Es ist \begin{equation} R_H = \frac{U_y}{I_x} = \frac{h}{\nu e^2}. \end{equation} Um herauszufinden, bei welchem $\nu$ man jeweils liegt, tr\"agt man entsprechend der Relation \begin{equation} \frac{1}{R_H} = \nu \cdot \frac{e^2}{h} = \frac{e\,n_{2D}}{B} \end{equation} die Kehrwerte der gemessenen Plateau-Widerst\"ande gegen $1/B$ auf. Man sieht sofort, da\ss\ die einzelnen inversen Widerstandsstufen einen konstanten Abstand haben und kann sogleich die Frage beantworten, welches die kleinste Stufe f\"ur den inversen {\sc Hall}-Widerstand ist. \item Extrapolieren Sie auf die Magnetfeldst\"arke, bei welcher der F\"ullfaktor $\nu=1$ erreicht ist. \item Bestimmen Sie die Elektronendichte nach drei verschiedenen Methoden: \begin{itemize} \item[(a)] klassisch aus der Steigung der gemittelten Kurve entsprechend der Beziehung \begin{equation} n_{2D} = \frac{B}{\varrho_{yx}e} ; \end{equation} \item[(b)] aus den Plateaux des quantisierten {\sc Hall}-Effektes, das hei\ss t, aus den Plateaux von $\varrho_{yx}$ und den ermittelten F\"ullfaktoren $\nu$, entsprechend der Beziehung \begin{equation} n_{2D} = \frac{B} {\left( {\displaystyle \frac{h}{\nu e^2} } \right) \,e} = \frac{\nu e B}{h} ; \end{equation} \item[(c)] aus den {\sc Shubnikov}-{\sc de Haas}-Oszillationen.
Tragen Sie dabei $1/B_i$ als Funktion ganzer Zahlen $i$ auf und identifizieren Sie die Minima von $\varrho_{xx}$ mit dem zu\-ge\-h\"o\-ri\-gen {\sc Landau}-Niveau-Index $i$ bzw.\ F\"ullfaktor $\nu$. \end{itemize} Vergleichen Sie die drei unabh\"angigen Resultate inklusive einer Fehlerabsch\"atzung (!). Decken sich die Werte? \item Berechnen Sie die Beweglichkeit \begin{equation} \mu=\frac{e\tau}{m} \end{equation} und die {\sc Drude}-Impulsrelaxationszeit $\tau$ aus $\sigma_0$ und der von Ihnen bestimmten 2-dimensionalen La\-dungs\-tr\"a\-ger\-dich\-te $n_{2D}$. Vergleichen Sie $\tau$, $\mu$ und $m$ mit typischen Werten f\"ur einen 3-dimensionalen Halbleiter und f\"ur ein Metall bei $4.2\,K$. \item Berechnen Sie die {\sc Fermi}-Energie $E_F$ aus der 2-dimensionalen Zu\-stands\-dich\-te% \linebreak $D_{2D}(E)$ und 2-dimensionalen Ladungstr\"agerdichte $n_{2D}$. \item Bestimmen Sie $h/e^2$ aus dem {\sc Hall}-Plateau mit dem kleinsten Index $i$. \item Mit der Beleuchtung der Probe haben Sie die 2-dimensionale Ladungstr\"agerdichte $n_{2D}$ in der Probe erh\"oht. Was ist dabei der relevante physikalische Mechanismus? Der Effekt der Beleuchtung bleibt erhalten, auch wenn das Licht ausgeschaltet wird, solange die Probe nicht erw\"armt wird. (Dieses Ph\"anomen h\"angt mit den sogenannten DX-Zentren% \footnote{% DX steht f\"ur {\it donor complex\/}, siehe z.\,B.\ \cite{LangLogan79}. } in $\mbox{Al}_x\mbox{Ga}_{1-x}\mbox{As}$ zusammen, die sich durch die $\mbox{Si}$-Dotierung bilden, sowie mit Tunnelprozessen im allgemeinen.) Bestimmen Sie die 2-dimensionale Ladungs\-tr\"a\-ger\-dich\-te $n_{2D}$ und die Beweglichkeit $\mu$ {\it nach\/} Abschalten der Beleuchtung ($t=0$). Tragen Sie dazu beide Gr\"o\ss en gegen $\log\,t$ auf. Beantworten Sie durch Extrapolation der Daten die Frage: Wie lange m\"u\ss ten wir warten, bis sich der Zustand vor Beleuchtung wieder eingestellt hat? \item Die inverse {\sc Sommerfeld}sche Feinstrukturkonstante ist - unabh\"angig von den Einheiten - gegeben durch \begin{equation} \alpha^{-1}=(h/e^2)(2/\mu_0c)=137.036 \, \dots\;, \end{equation} wobei wir f\"ur die Permeabilit\"at des Vakuums \begin{equation} \mu_0=4\pi\times10^{-7}\,\mbox{H/m} \end{equation} und f\"ur die Lichtgeschwindigkeit im Vakuum \begin{equation} c=2.9979\times10^{8}\,\mbox{m/s} \end{equation} angesetzt haben. Berechnen Sie diese aus dem Me\ss wert f\"ur die {\sc von\,Klitzing}-Kon\-stan\-te \begin{equation} R_{vK}=\frac{h}{e^2}. \end{equation} \item Diskutieren Sie die Rolle der Temperatur f\"ur das Auftreten des quantisierten {\sc Hall}-Effektes. Vergleichen Sie dabei $kT$ mit den charakteristischen Energien des Systems, n\"amlich \begin{itemize} \item der {\sc Fermi}-Energie $E_F$, \item der {\sc Landau}-Aufspaltung $\hbar\omega_c$, \item der Breite der ausgedehnten Zust\"ande $\hbar/\tau$. \end{itemize} \item ({\it Fakultativ:\/}) Zeichnen Sie die \"Aquipotentiallinien und die Strompfade bei einer {\sc Hall}-Geometrie in einem {\sc Hall}-Plateau. \item ({\it Fakultativ:\/}) Warum tritt der quantisierte {\sc Hall}-Effekt nicht in einem $3$-di\-men\-si\-o\-na\-len Elektronengas auf? Wie k\"onnen Sie sich experimentell den \"Ubergang (engl.\ {\it crossover\/}) von $2D$ nach $3D$ vorstellen (Hinweis: Multilayer)? \end{enumerate} \vfill\eject\noindent% \section{Anhang: Physikalische Formelsammlung} Nach \cite{Barnett96}. \par Alle Gr\"o\ss en sind N\"aherungsgr\"o\ss en, wenn es nicht ausdr\"ucklich anders angegeben ist.
\begin{equation} a = 0.815\,(10) \end{equation} bedeutet, da\ss\ mit $70\,\%$ \glqq{\it confidence level\/}\grqq\ der wahre Wert zwischen $0.805$ und $0.825$ liegt. \par Alle Gr\"o\ss en sind in SI-Einheiten angegeben, wenn es nicht ausdr\"ucklich anders spezifiziert wird.% \footnote{Die Idee, die Formeln in dieser Weise zusammenzustellen, geht zwar nicht auf {\sc A.D.\ Wieck} zur\"uck; aber er war es, der auf die Idee kam, sich eine kleine Karte anzufertigen, die er in das Etui seines wissenschaftlichen Taschenrechners {\sc Casio} FX-50F, welcher zudem die Naturkonstanten festverdrahtet hat, hineinschieben konnte. {\sc Dirk de\,Vries} hat die Formelsammlung weiter verbessert. Die vorliegende Version enth\"alt zus\"atzlich noch einige weitere, f\"ur die Praxis n\"utzliche Fakten. \"Ahnliche Formelsammlungen - zugeschnitten auf die Anwendung innerhalb der experimentellen Festk\"orperphysik - findet man an vielen Stellen, zum Beispiel auch in der \glqq Bibel\grqq\ von {\sc Ando}, {\sc Fowler} und {\sc Stern} \cite{AndoFowlerStern82}.} \subsection{Allgemeine Physik} \subsubsection{Naturkonstanten} {\sc Boltzmann}-Konstante \begin{equation} k=1.380\,658\,(12) \times 10^{-23}\;{\rm J}/{\rm K} \end{equation} {\sc Planck}sches Wirkungsquantum \begin{eqnarray} h &=& 6.626\,075\,5\,(40) \times 10^{-34}\;{\rm J}{\rm s} \end{eqnarray} Mit \begin{equation} \hbar = \frac{h}{2\pi} \end{equation} ist \begin{eqnarray} \hbar &=& 1.054\,576\,6\,(63) \times 10^{-34}\;{\rm J}{\rm s} \nonumber\\ &=& 6.582\,122\,0\,(20) \times 10^{-22}\;{\rm MeV}\,{\rm s} \end{eqnarray} Lichtgeschwindigkeit im Vakuum (exakter Wert) \begin{equation} c=\frac{1}{\sqrt{\varepsilon_0\mu_0}} =2.997\,924\,58 \times 10^8\;{\rm m}/{\rm s} \end{equation} {\sc Newton}sche Gravitationskonstante \begin{eqnarray} G_N &=& 6.672\,59\,(85) \times 10^{-11} \;{\rm m}^3/{\rm k}{\rm g}\,{\rm s}^2 \nonumber\\ &=& 6.707\,11\,(86) \times 10^{-39} \;\hbar c \, ({\rm GeV}/c^2)^{-2} \end{eqnarray} \subsubsection{Eigenschaften von Elementarteilchen} Elektrische Elementarladung \begin{equation} e=1.602\,177\,33\,(49) \, \times 10^{-19}\;{\rm C} \end{equation} Elektronenmasse \begin{eqnarray} m_e &=& 9.109\,389\,7\,(54) \, \times 10^{-31}\;{\rm k}{\rm g} \nonumber\\ &=& 0.510\,999\,06\,(15) \, {\rm MeV}/c^2 \end{eqnarray} Protonenmasse \begin{eqnarray} m_p &=& 1.672\,623\,1\,(10) \, \times 10^{-27}\;{\rm k}{\rm g} \nonumber\\ &=& 938.272\,31\,(28) \, {\rm MeV}/c^2 \end{eqnarray} Neutronenmasse \begin{eqnarray} m_n &=& 1.674\,928\,6\,(10) \, \times 10^{-27}\;{\rm k}{\rm g} \nonumber\\ &=& 939.565\,63\,(28) \, {\rm MeV}/c^2 \end{eqnarray} \subsubsection{Quantisierter Hall Effekt} {\sc von\,Klitzing}-Konstante \begin{equation} R_{vK} = \frac{h}{e^2} = 25.812\,805\,{\dots}{\rm k}\Omega \end{equation} \begin{table}\vspace{0.5cm} \begin{center} \begin{tabular}{|l|r|} \hline $ h/e^2 $ & $ 25.812\,805\, {\dots} {\rm k}\Omega $ \\ \hline $ h/2e^2 $ & $ 12.906\,402\, {\dots} {\rm k}\Omega $ \\ \hline $ h/3e^2 $ & $ 8.604\,268\, {\dots} {\rm k}\Omega $ \\ \hline $ h/4e^2 $ & $ 6.453\,201\, {\dots} {\rm k}\Omega $ \\ \hline $ h/5e^2 $ & $ 5.162\,561\, {\dots} {\rm k}\Omega $ \\ \hline $ h/6e^2 $ & $ 4.302\,134\, {\dots} {\rm k}\Omega $ \\ \hline $ h/7e^2 $ & $ 3.687\,543\, {\dots} {\rm k}\Omega $ \\ \hline $ h/8e^2 $ & $ 3.226\,600\, {\dots} {\rm k}\Omega $ \\ \hline \end{tabular} \normalsize \end{center} \vspace{0.5cm} \caption{Quantisierte {\sc Hall}-Widerst\"ande} \vspace{0.75cm}\end{table} \subsubsection{Energie {\it versus\/} Frequenz {\it versus\/} Temperatur 
etc.} \begin{equation} E=eU=\frac{hc}{\lambda}=h\nu=\hbar\omega=kT \end{equation} Um ein Gef\"uhl f\"ur Gr\"o\ss enordnungen zu bekommen, sei ein Blick in die beigef\"ugte Tabelle empfohlen.- Ferner ist es manchmal n\"utzlich, die folgenden Beziehungen zu kennen: \begin{eqnarray} 1\,{\rm J} &=& 6.241 \,{\dots}\, \times 10^{ 18}\,{\rm eV} \\ h &=& 4.136 \,{\dots}\, \times 10^{-15}\,{\rm eV}\cdot{\rm s} \\ 1\,{\rm kg} &=& 0.8988 \,{\dots}\, \times 10^{ 17}\,{\rm J} \,/\, c^2 \,=\, 5.609 \,{\dots}\, \times 10^{ 35}\,{\rm eV} \,/\, c^2 \\ 1\,{\rm eV} &=& 1.602\,177\,33\,(49)\,\times 10^{-19}\,{\rm J} \\ 1\,{\rm eV}/c^2 &=& 1.782\,662\,70\,(54)\,\times 10^{-36}\,{\rm kg}\ \end{eqnarray} \begin{table}\vspace{0.5cm} \begin{center} \begin{tabular}{|c|c|c|} \hline $eU$ & $\nu$ & $T$ \\ \hline \hline ${\bf 1\,eV}$ & $242\,{\rm THz}$ & 11\,600\,{\rm K} \\ \hline ${\bf 1\,meV}$ & $242\,{\rm GHz}$ & 11.6\,{\rm K} \\ \hline ${\bf 1}\,\mbox{\boldmath$\mu$}{\bf eV}$ & $242\,{\rm MHz}$ & 11.6\,{\rm mK} \\ \hline \hline $26\,{\rm meV}$ & $6.24\,{\rm THz}$ & ${\bf 300\,K}$ \\ \hline $86\,\mu{\rm eV}$ & $20.8\,{\rm GHz}$ & ${\bf 1\,K}$ \\ \hline $8.6\,\mu{\rm eV}$ & $2.08\,{\rm GHz}$ & ${\bf 100\,mK}$ \\ \hline $0.86\,\mu{\rm eV}$ & $208 \,{\rm MHz}$ & ${\bf 10\,mK}$ \\ \hline $86\,{\rm neV}$ & $20.8\,{\rm MHz}$ & ${\bf 1\,mK}$ \\ \hline \end{tabular} \normalsize \end{center} \vspace{0.5cm} \caption{Beispiele f\"ur Energie versus Frequenz versus Temperatur etc.} \vspace{0.75cm}\end{table} \subsubsection{Elektrodynamik} Magnetische Vakuum-Permeabilit\"at (\glqq Nachgiebigkeit\grqq) \begin{equation} \mu_0=4\pi \times 10^{-7}\;{\rm H}/{\rm m} =12.566\,370\,614\,\times 10^{-7}\;{\rm H}/{\rm m} \end{equation} Elektrische Vakuum-Permittivit\"at (\glqq Durchl\"assigkeit\grqq) \begin{equation} \varepsilon_0=1/\mu_0c^2 =8.854\,187\,817\,\times 10^{-12}\;{\rm F}/{\rm m} \end{equation} \subsubsection{Geophysikalische Konstanten} Nomineller atmosph\"arischer Druck auf der Erdoberfl\"ache \begin{equation} 1\;\mbox{{\rm atm}}=1.01325 \times 10^5\;{\rm N}/{\rm m}^2 \end{equation} Historische Einheit f\"ur den Druck \begin{equation} 1\,\mbox{{\rm Torr}} =(1/760)\,\mbox{{\rm atm}} =133.322 \,{\dots}\, {\rm N}/{\rm m}^2 \end{equation} Nominelle Fallbeschleunigung auf der Erdoberfl\"ache \begin{equation} g=9.80665\;{\rm m}/{\rm s}^2 \end{equation} \subsubsection{Thermodynamik der Hohlraumstrahlung} {\sc Stefan}-{\sc Boltzmann}-Gesetz: \begin{equation} \int_0^\infty F(\lambda)\,d\lambda = \sigma T^4 \end{equation} mit \begin{equation} \sigma=5.6697 \times 10^{-8}\;{\rm W}/{\rm m}^2{\rm K}^4 \end{equation} Leistung der Strahlung eines schwarzen K\"orpers \begin{equation} P(\mbox{{\rm pW}}) = 5.67\,A({{\rm cm}}^{2}) \cdot b \cdot (T_1^4-T_2^4) \end{equation} wobei \begin{eqnarray} b=1.00 & & \mbox{f\"ur ideal schwarzen K\"orper} \nonumber\\ b=0.05 & & \mbox{f\"ur poliertes Metall} \end{eqnarray} {\sc Wien}sches Verschiebungsgesetz \begin{equation} \lambda_{Pmax} \cdot T = \mbox{{\it const}} \end{equation} insbesondere haben wir \begin{equation} \lambda_{Pmax}=\frac{2898\,\mu{\rm m}}{T} \end{equation} \subsubsection{Atomphysik} Elektronenmasse \begin{equation} m_e = 9.109\,389\,7\,(54) \times 10^{-31}\;{\rm k}{\rm g} \end{equation} {\sc Bohr}scher Atomradius \begin{equation} a_0=4\pi\varepsilon_0 \cdot \frac{\hbar^2}{m_e e^2} =\frac{\varepsilon h^2} {2\pi m_e e^2} =5.292 \times 10^{-11}\,\mbox{{\rm m}} \end{equation} {\sc Rydberg}-Konstante \begin{equation} 1\,\,\mbox{\rm Ry}=\frac{1}{4\pi\varepsilon_0}\cdot \frac{e^2}{2a_0} 
=\frac{e^4m_e}{(4\pi\varepsilon_0)^2\hbar^2} =13.61 \, {\rm eV} \end{equation} {\sc Bohr}sche Energienieveaus im Atom \begin{equation} E_n=\frac{h^2n^2}{8 m_e a_0^2} \end{equation} \subsubsection{Molek\"ulphysik} {\sc Avogadro}-Konstante \begin{equation} N_{Avo}=6.022045 \times 10^{23}\;\mbox{{\rm mol}}^{-1} \end{equation} \subsection{Physik der Halbleiter, insbesondere GaAs} \subsubsection{Galliumarsenid} \lq\lq built-in potential\rq\rq\ in GaAs \begin{equation} V_{bi}=1.24\,{\rm V} \end{equation} Dielektrische Konstante im Vakuum versus dielektrische Konstante in GaAs \begin{equation} \varepsilon_*:=\varepsilon_{\rm GaAs}=13.1\,\varepsilon_0 \end{equation} Elektronenmasse versus effektive Elektronenmasse \begin{eqnarray} m_e &=& 0.910{\dots} \times 10^{-30}\;{\rm kg} \\ m_e^* &=& 0.637{\dots} \times 10^{-31}\;{\rm kg} \,=\, 0.07\,m_e \end{eqnarray} {\sc Bohr}scher Atomradius versus k\"unstlichem {\sc Bohr}schen Atomradius \begin{eqnarray} a_0 &=& 4\pi\varepsilon_0\cdot\frac{\hbar^2}{m_e e^2} \,=\, \frac{\varepsilon_0 h^2}{2\pi m_e e^2} \,=\, 5.292 \times 10^{-11}\,\mbox{{\rm m}} \\ a_0^* &=& 4\pi\varepsilon_*\cdot\frac{\hbar^2}{m_e^* e^2} \,=\, \frac{\varepsilon_* h^2}{2\pi m_e^* e^2} \,=\, 9.90 \,\mbox{{\rm nm}} \,=\, \frac{13.1}{0.07}\,a_0 \end{eqnarray} {\sc Rydberg}-Konstante versus effektive {\sc Rydberg}-Konstante \begin{eqnarray} 1\,\,\mbox{\rm Ry} &=& \frac{1}{4\pi\varepsilon_0}\cdot \frac{e^2}{2a_0} \,=\, \frac{e^4m_e}{(4\pi\varepsilon_0)^2\hbar^2} \,=\, 13.61 \, {\rm eV} \\ 1\,\,\mbox{\rm Ry}^* &=& \frac{1}{4\pi\varepsilon_0^*}\cdot \frac{e^2}{2a_0^*} \,=\, \frac{e^4m_e^*}{(4\pi\varepsilon_0^*)^2\hbar^2} \,=\, 5.552 \, {\rm meV} \,=\, \frac{0.07}{(13.1)^2}\,\mbox{\rm Ry} \end{eqnarray} \subsubsection{Energieniveaus im Kastenpotential} Diese Beziehung ist wichtig f\"ur {\it quantum wells\/} etc.\ \begin{equation} E_n=\frac{h^2n^2}{8 m_e^* {a_0^*}^2} =5377\,{\rm meV} \cdot n^2\,/a\,({\rm nm})^2 \end{equation} \subsubsection{Zu\-stands\-dich\-ten und Fermi-Energien} Zu\-stands\-dich\-te \begin{eqnarray} & & \nonumber\\ D_{nD} &=& \frac{\mbox{Zu\-stands\-dich\-te in $n$ Raumdimensionen incl.\ Spin}}{m^n} \end{eqnarray} wobei \begin{eqnarray} & & \nonumber\\ D_{3D}(E) &=& \frac{8\pi m\sqrt{2mE}}{h^3} \\ & & \nonumber\\ D_{2D}(E) &=& \frac{4\pi m} {h^2} \\ & & \nonumber\\ D_{1D}(E) &=& \frac{2}{h}\sqrt{\frac{2m}{E}} \end{eqnarray} {\sc Fermi}-Energie \begin{eqnarray} & & \nonumber\\ E_{F_3} &=& \frac{h^2 }{8 m} \left( \frac{3 n_{3D} }{\pi} \right)^{2/3} \,=\, 52.1 \; meV \cdot \left( n_{3D}(10^{18}\;cm^{-3}) \right)^{2/3} \\ & & \nonumber\\ E_{F_2} &=& \frac{h^2 n_{2D} }{4\pi m} \phantom{xxxxxii} \,=\, 34.2 \; meV \cdot \left( n_{2D} (10^{12}\;cm^{-2}) \right) \\ & & \nonumber\\ E_{F_1} &=& \frac{h^2n_{1D}^2}{32m } \phantom{xxxxxii} \,=\, 13.43 \; meV \cdot \left( n_{1D} (10^{ 6}\;cm^{-1}) \right)^2 \\ & & \end{eqnarray} {\sc Fermi}-Wellenl\"ange \begin{eqnarray} \lambda_{F_3} &=& 2\sqrt[3]{\frac{ \pi}{3 n_{3D} }} \\ \lambda_{F_2} &=& \sqrt {\frac{2\pi}{ n_{2D} }} \\ \lambda_{F_1} &=& {\frac{4 }{ n_{1D} }} \end{eqnarray} \subsubsection{Ladungstransport im klassischen Regime} $n$-dimensionale Ladungstr\"agerdichte \begin{eqnarray} & & \nonumber\\ n_{nD} &=& \frac{\mbox{Anzahl der Ladungstr\"ager}} {\mbox{$n$-dimensionales Volumen}} \end{eqnarray} Spezifischer Widerstand (Resistivit\"at) in $n$ Raumdimensionen \begin{eqnarray} & & \nonumber\\ \varrho_n &=& R \,\cdot\, \frac{\mbox{$n$$-$$1$-dimensionale Querschnittsfl\"ache}} {\mbox{$1$-dimensionale L\"ange}} \,=\, \frac{1}{e \mu 
n_{nD}} \end{eqnarray} Beweglichkeit in $n$ Raumdimensionen \begin{eqnarray} & & \nonumber\\ \mu &=& \frac{\mbox{mittlere Driftgeschwindigkeit}} {\mbox{angelegte elektrische Feldst\"arke}} \nonumber\\ &\phantom{=}& \nonumber\\ &=& \frac{1}{e} \,\cdot\, \frac{\mbox{$n$-dimensionale Leitf\"ahigkeit}} {\mbox{$n$-dimensionale Ladungstr\"agerdichte}} \,=\, \frac{1}{e \varrho_n n_{nD}} \end{eqnarray} Elastischer mittlerer freier Weg \begin{eqnarray} & & \nonumber\\ l_{3D} &=& \frac{\mu h}{2e} \sqrt[3]{\frac{3 n_{3D} }{ \pi}} =203.6\,\mbox{{\rm nm}} \cdot\mu\cdot \sqrt[3]{ n_{3D} \,(10^{18}\,\mbox{{\rm cm}}^{-3})} \\ & & \nonumber\\ l_{2D} &=& \frac{\mu h}{ e} \sqrt {\frac{ n_{2D} }{2\pi}}\phantom{i} =165.0\,\mbox{{\rm nm}} \cdot\mu\cdot \sqrt { n_{2D} \,(10^{12}\,\mbox{{\rm cm}}^{-2})} \\ & & \nonumber\\ l_{1D} &=& \frac{\mu h}{4e} n_{1D} \phantom{xii} =103.4\,\mbox{{\rm nm}} \cdot\mu\cdot { n_{1D} \,(10^{ 6}\,\mbox{{\rm cm}}^{-1})} \end{eqnarray} {\sc Drude}-Relaxationszeit \begin{equation} \tau=\frac{m}{e}\cdot\mu =398\,\mbox{{\rm fs}}\cdot\mu \end{equation} \subsubsection{Hall-Messungen nach van der Pauw} Wir setzen eine quadratische Geometrie der Probe voraus, die wir an ihren vier Ecken kontaktieren. Dann erhalten wir im klassischen Regime nach {\sc van\,der\,Pauw} \cite{Pauw1958}: \vspace*{0.5cm} \par\noindent% 2-dimensionale Ladungstr\"agerdichte \begin{equation} n_{2D} = \frac{BI}{eV_H} \end{equation} Beweglichkeit in $2$ Raumdimensionen \begin{equation} \mu=\frac{1}{\varrho n_{2D} e} =\frac{{\rm ln}\,2}{\pi} \frac{I}{V} \frac{1}{ n_{2D} e} =\frac{{\rm ln}\,2}{\pi} \frac{V_H}{V} \frac{I}{ n_{2D} eV_H} =\frac{{\rm ln}\,2}{\pi} \frac{V_H}{\,VB} =0.2206\,\frac{V_H}{V\,B} \end{equation} Umrechnung von Laboreinheit auf SI \begin{equation} \mu(\mbox{cm}^2\mbox{(Vs)}^{-1}) = 10\,000\cdot\mu(\mbox{ m}^2\mbox{(Vs)}^{-1}) = 10\,000\cdot\mu \end{equation} \begin{table}\vspace{0.5cm} \begin{center} \begin{tabular}{|c|c|} \hline $\mu(\mbox{cm}^2\mbox{(Vs)}^{-1})$ & Qualit\"at der Probe \\ \hline \hline $200\,000$ & immerhin \\ \hline $600\,000$ & gut \\ \hline $1\,000\,000$ & sehr gut \\ \hline $2\,000\,000$ & exzellent \\ \hline $10\,000\,000$ & Weltspitze (1/2 Jahr ausheizen usw.) \\ \hline \end{tabular} \normalsize \end{center} \vspace{0.5cm} \caption{Probenqualit\"aten in Termen ihrer Beweglichkeit} \vspace{0.75cm}\end{table} \subsubsection{Gated 2DEG} Mit $V_g$ als Gate-Spannung und $d$ als isolierender Dicke ist die 2-dimensionale La\-dungs\-tr\"a\-ger\-dich\-te in GaAs \begin{equation} n_{2D} = \frac{\varepsilon_*V_g}{ed} = 72.4 \times 10^{12}\,\mbox{cm}^{-2}\,V_g(V)/d(nm) \end{equation} \subsubsection{Elektronen im Magnetfeld} Zentralkraft versus {\sc Lorentz}-Kraft \begin{equation} \frac{mv^2}{r} = evB \end{equation} Zyklotronfrequenz \begin{equation} m \omega^2 r = e \cdot \omega r \cdot B, \phantom{xxxx}\mbox{daraus:}\phantom{x} \omega_c := \omega = \frac{eB}{m} \end{equation} klassischer magnetischer Radius \begin{equation} r_{class}=\frac{mv^2}{evB} =\frac{\sqrt{2m \cdot mv^2/2}}{eB} =\frac{\sqrt{2mE}}{eB} =28.21\,\mbox{{\rm nm}}\cdot \sqrt{E\,(\mbox{{\rm meV}})}\,/\,B \end{equation} magnetische L\"ange% \footnote{% Radius der Zyklotron-\glqq Nullpunktsbahn\grqq\ entsprechend der Nullpunktsenergie $E=\hbar\omega_c/2$. 
{\it Vorsicht:\/} Semiklassisch zu denken, kann in die Irre f\"uhren!} \begin{equation} l_{B} = \frac{\sqrt{2m\hbar\omega_c/2}}{eB} = \sqrt{\frac{\hbar}{eB}}, \phantom{12345} 2\pi \cdot l_{B}^2 = \frac{h}{eB} \end{equation} Wegen $n_{2D}=\nu\cdot eB/h$ ist \begin{equation} \nu=\frac{ n_{2D} \,h}{eB} =41.36 \cdot \frac{ n_{2D} \,(10^{12}\,\mbox{{\rm cm}}^{-2})}{B} \end{equation} Der quantenmechanische magnetische Radius \begin{equation} r_{quant} = \sqrt{\frac{h(\nu+1)}{2\pi eB}} = 25.656\,\mbox{{\rm nm}}\cdot \sqrt{\frac{\nu+1}{B}} \end{equation} erf\"ullt n\"aherungsweise \begin{equation} r_{quant} \approx \frac{h}{eB} \, \sqrt{\frac{ n_{2D} }{2\pi}} = 165\,\mbox{{\rm nm}}\cdot \sqrt{ n_{2D}\,(10^{12}\,\mbox{{\rm cm}}^{-2})}\,/\,B \end{equation} Zyklotronenergie \begin{equation} \hbar\omega_c=\frac{\hbar eB}{m} =1.654\,\mbox{{\rm meV}}\cdot B \end{equation} \subsubsection{Verarmung (Depletion)} Verarmungsl\"ange ({\it depletion length\/}) \begin{equation} l_{depl}=\sqrt{\frac{2\varepsilon}{e n_{3D} } (V_{bi}-\frac{2kT}{e}) } \approx \frac{1.31\,\mu m} {\sqrt{ n_{3D} \,(10^{15}\,\mbox{{\rm cm}}^{-3})}} \end{equation} Das hei\ss t: F\"ur eine geringe Dotierung von $10^{15}\,\mbox{cm}^{-3}$ betr\"agt sie typischerweise $1\,\mu\mbox{m}$, f\"ur eine gro\ss e Dotierung einige $nm$. \subsection{Mesa-\"Atze f\"ur GaAs} Zur Pr\"aparierung bzw.\ Strukturierung eines {\sc Hall}-Streifens ({\it engl.\/} {\sc Hall} bar) mu\ss\ man die Parameter kennen, die f\"ur das Heraus\"atzen eines {\it Mesas\/} ({\it in Alaska:\/} Tafelberg) entscheidend sind. \begin{equation} 1\,\mbox{Teil}\,{\rm H}_2{\rm SO}_4\,:\, 8\,\mbox{Teile}\,{\rm H}_2{\rm O}_2\,:\, 1000\,\mbox{Teile}\,{\rm H}_2{\rm O} \end{equation} \"atzen \begin{equation} 150\,{\rm nm}\,\,\,\,{\it in}\,\,\,\,3.5\,{\rm min} \end{equation} \subsection{Weitere n\"utzliche Formeln} \vspace*{0.3cm} \begin{eqnarray} \frac{k}{k'} &\approx& 0.737\,\log\left( \frac{0.1}{1-\frac{a}{b}} \right) + 1.384, \nonumber\\ &=& 0.647 - 0.737\,\log\left( 1-\frac{a}{b} \right), \phantom{1234} 0.7 < \frac{a}{b} < 1 \end{eqnarray} \vspace*{0.3cm} \begin{equation} \frac{k}{k'} \approx 0.844\,\frac{a}{b} + 0.36, \phantom{1234} 0.1 < \frac{a}{b} < 0.7 \end{equation} \vspace*{0.3cm} \begin{equation} \frac{k}{k'} \approx - \,\frac{0.57}{\log\frac{a}{b}} = \frac{0.57}{\log\frac{b}{a}}, \phantom{1234} 0 < \frac{a}{b} < 0.01 \end{equation} \begin{table}\vspace{0.5cm} \begin{center} \renewcommand{\baselinestretch}{1.33} \scriptsize \begin{tabular}{|l|} \hline All quantities in SI, unless specified otherwise\\ $D_n$=$n$-dimens.density of states$/\mbox{m}^n$ incl.spin \\ $m$=effective charge carrier mass, here $0.07\, \sbm{m}{e}$\\ $\epsilon$=dielectric constant, here $13.1\, \epsilon _\circ$\\ $\sbm{V}{bi}$=built-in potential, here $\du{1.24}{V}$ \hfill $a_\circ$=Bohr's radius\\ $l_n$=$n$-dimens.elast.mean free path \hfill$\tau$=momen.scatt.time\\ $N_n$=$n$-dimens. carrier density \hfill $E$=energy\\ $\mu$=charge carrier mobility \hfill $\sbm{E}{F}$=Fermi-energy\\ $\varrho_n$=$n$-dimens. 
resistivity \hfill $\lambda$=wavelength\\ $T$=temperature \hfill $\epsilon_\circ=\du{8.854188\times 10^{-12}}{F/m}$\\ $\sbm{l}{\scriptstyle depl}$=depletion length \hfill $\sbm{V}{g}$=gate voltage\\ $\sbm{r}{\scriptstyle class}$=classical magnetic length \hfill $B$=magnetic field\\ $\sbm{r}{\scriptstyle quant}$=quant.mech.magn.length \hfill $A$=area\\ $\nu$=2-dimens.spin-split filling factor \hfill $d$=insulat.thickness\\ $\sbm{\omega}{c}$=cyclotron frequency \hfill $e$=electron charge\\ $P$=black body radiation power \hfill $h$=Planck's quantum\\ $b$=black body coeff: 1 (black) $\ldots$ 0.05 (pol.metal)\\ $N_2=\frac{\epsilon \sbm{V}{g}}{ed}=72.4\times 10^{12}\, \mbox{cm}^{-2}\, \sbm{V}{g}(\mbox{V})/d\,(\mbox{nm})$\\ $ D_1=\frac{2}{h}\sqrt{\frac{2m}{E}} \hfill D_2=\frac{4\pi m}{h^2} \hfill D_3=\frac{8\pi m\sqrt{2mE}}{h^3}$\\ $E_{\mbox{F}_1}=\frac{h^2N_1^2}{32m}=\du{13.43}{meV}\, (N_1(\du{10^6}{cm}^{-1}))^2$\\ $E_{\mbox{F}_2}=\frac{h^2N_2}{4\pi m}=\du{34.2}{meV}\, N_2(\du{10^{12}}{cm}^{-2})$\\ $E_{\mbox{F}_3}=\frac{h^2}{8m}(\frac{3N_3}{\pi})^{2/3}=\du{52.1}{meV}\, (N_3(\du{10^{18}}{cm}^{-3}))^{2/3}$\\ $ \lambda_{\mbox{F}_1}=\frac{4}{N_1} \hfill \lambda_{\mbox{F}_2}=\sqrt{2\pi/N_2} \hfill \lambda_{\mbox{F}_3}=2\sqrt[3]{\frac{\pi}{3N_3}}$\\ $l_1=\frac{\mu h}{4e}N_1=\du{103.4}{nm}\,\mu N_1(\du{10^6}{cm}^{-1}) \hfill \varrho_n=\frac{1}{e\mu N_n}$\\ $l_2=\frac{\mu h}{e}\sqrt{\frac{N_2}{2\pi}}= \du{165}{nm}\,\mu\sqrt{N_2(\du{10^{12}}{cm}^{-2})}$\\ $l_3=\frac{\mu h}{2e}\sqrt[3]{\frac{3N_3}{\pi}}= \du{203.6}{nm}\, \mu \sqrt[3]{N_3(\du{10^{18}}{cm}^{-3})}$\\ $\sbm{l}{depl}=\sqrt{\frac{2\epsilon}{eN_3} (\sbm{V}{bi}-\frac{2kT}{e})}\approx \frac{\du{1.31}{$\mu$m}}{\sqrt{N_3(\du{10^{15}}{cm}^{-3})}}$\\ $\sbm{r}{class}=\frac{\sqrt{2mE}}{eB} =\du{28.21}{nm}\, \sqrt{E(\mbox{meV})}/B$\\ $\sbm{r}{quant}\approx\frac{h}{eB}\sqrt{\frac{N_2}{2\pi}} =\du{165}{nm}\, \sqrt{N_2(\du{10^{12}}{cm}^{-2})}/B$\\ $\sbm{r}{quant}=\sqrt{\frac{h(\nu +1)}{2\pi eB}} =\du{25.656}{nm}\sqrt{\frac{\nu +1}{B}}$\\ $\nu =\frac{N_2h}{eB}=41.36\,\frac{N_2(\du{10^{12}}{cm}^{-2})}{B} \hfill \tau=\frac{m}{e}\mu=\du{398}{fs}\,\mu$\\ $\hbar\sbm{\omega}{c}=\frac{\hbar eB}{m}=\du{1.654}{meV}\,B \hfill a_\circ=\frac{\epsilon h^2}{2\pi e^2 m}=\du{9.90}{nm}$\\ $P(\mbox{pW})=5.67\, A(\mbox{cm}^2)b(T_1^4-T_2^4) \hfill \lambda_{\sbm{P}{max}} =\frac{\du{2898}{$\mu$m}}{T}$\\ \hline \end{tabular} \renewcommand{\baselinestretch}{1.33} \normalsize \end{center} \vspace{0.5cm} \caption{{\it Zum Kopieren und Ausschneiden:\/} Vorderseite der Formelkarte} \vspace{0.75cm}\end{table} \begin{table}\vspace{0.5cm} \begin{center} \renewcommand{\baselinestretch}{1.33} \scriptsize \begin{tabular}{|l|} \hline $k=\du{1.380\,658(12)\times 10^{-23}}{J/K}$\\ $h=\du{6.626\,0755(40)\times 10^{-34}}{Js}$\\ $c=\du{2.997\,924\,58\times 10^8}{m/s}$\\ $e=\du{1.602\,177\,33(49)\times 10^{-19}}{C}$\\ $\sbm{m}{e} =\du{9.109\,534(54)\times 10^{-31}}{kg}$\\ $\sbm{m}{p}=\du{1.672\,6231(10)\times 10^{-27}}{kg}$\\ $\sbm{m}{n}=\du{1.674\,9286(10)\times 10^{-27}}{kg}$\\ $\mu_\circ =\du{4\pi\times 10^{-7}}{H/m}=\du{1.256\,637\ldots\times 10^{-6}}{H/m}$\\ $\varepsilon_\circ =1/\mu_\circ c^2=\du{8.854\,1878\ldots\times 10^{-12}}{F/m}$\\ $\du{1}{atm}=\du{1.01325\times 10^5}{N/m}^2$\\ $\du{1}{Torr}=\du{(1/760)}{atm}=\du{133.322\ldots}{N/m}^2$\\ $g=\du{9.80665}{m/s}^2$\\ $G =\du{6.672\,59(85)\times 10^{-11}}{m}^3\mbox{kg}^{-1}\mbox{s}^{-2}$\\ $\sbm{N}{\scriptstyle Avo}=\du{6.022\,1367(36)\times 10^{23}}{mol}^{-1}$\\ $\sigma =\du{5.670\,51(19)\times 10^{-8}}{W}\mbox{m}^{-2}\mbox{K}^{-4}$\\ 
$E_n=\frac{h^2n^2}{8ma^2}$\\ $N_2=\frac{BI}{eV_H}$\\ $\mu=\frac{\ln2\,V_H}{\pi\,VB}=0.2206\frac{V_H}{VB}$\\ \phantom{% $l_n$=$n$-dimens.elast.mean free path \hfill$\tau$=momen.scatt.time}\\[182pt] \hspace*{\fill}A.D.\,Wieck\,/\,D.K.\,de\,Vries\,/\,R.D.T.\,1992-1997\\ \hline \end{tabular} \renewcommand{\baselinestretch}{1.00} \normalsize \end{center} \vspace{0.5cm} \caption{{\it Zum Kopieren und Ausschneiden:\/} R\"uckseite der Formelkarte} \vspace{0.75cm}\end{table} \vfill\eject\noindent%
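\subsection{Rechenbeispiel}
Zur Orientierung sei abschlie\ss end eine kleine Beispielrechnung mit den Formeln der Karte skizziert (ohne Anspruch auf Me\ss genauigkeit, Zahlenwerte gerundet). Angenommen wird dabei lediglich eine typische Ladungstr\"agerdichte von $n_{2D}=3\times10^{11}\,{\rm cm}^{-2}$ als Beispielwert sowie $m=0.07\,m_e$:
\begin{eqnarray}
E_F &=& \frac{h^2 n_{2D}}{4\pi m} \,\approx\, 34.2\,{\rm meV}\cdot 0.3 \,\approx\, 10.3\,{\rm meV}, \nonumber\\
B(\nu=2) &=& \frac{n_{2D}\,h}{2e} \,\approx\, \frac{41.36\cdot 0.3}{2}\,{\rm T} \,\approx\, 6.2\,{\rm T}, \phantom{1234} B(\nu=1)\approx 12.4\,{\rm T}, \nonumber\\
\hbar\omega_c(6.2\,{\rm T}) &\approx& 1.654\,{\rm meV}\cdot 6.2 \,\approx\, 10.3\,{\rm meV}, \nonumber\\
kT(4.2\,{\rm K}) &\approx& 0.36\,{\rm meV} \,\ll\, \hbar\omega_c.
\end{eqnarray}
Da\ss\ bei $\nu=2$ gerade $E_F\approx\hbar\omega_c$ herauskommt, ist kein Zufall: f\"ur das spinentartete 2DEG ist $E_F=\frac{h^2n_{2D}}{4\pi m}=\frac{\nu}{2}\,\hbar\omega_c$.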
\subsection*{Acknowledgements} We are grateful to the referees for their interest in this work and helpful remarks, which helped improve the presentation. \addtocontents{toc}{\protect\setcounter{tocdepth}{2}} \makeatletter \def\@pnumwidth{2em} \def\@tocrmarg {3.5em} \makeatother \tableofcontents \section{Preliminaries}\label{preliminaries} \subsection{The groups}\label{the groups} Let $l\geq1$ be an integer. Let $B_{\GL_l}=T_{\GL_l}\ltimes N_{\GL_l}$ denote the Borel subgroup of upper triangular invertible matrices, where $N_{\GL_l}$ is its unipotent radical. The standard parabolic subgroups of $\GL_l$ can be identified with the set of compositions $\beta=(\beta_1,\ldots,\beta_a)$ of $l$ ($\beta_i\geq0$, $a\geq1$), where $P_{\beta}=M_{\beta}\ltimes V_{\beta}$ denotes the parabolic subgroup with $M_{\beta}=\GL_{\beta_1}\times\ldots\times \GL_{\beta_a}$ and $V_{\beta}<N_{\GL_l}$. Let $J_l$ be the permutation matrix with $1$ on the anti-diagonal and $0$ otherwise. For $g\in\GL_l$, ${}^tg$ denotes the transpose of $g$, and $g^*=J_l{}^tg^{-1}J_l$. For $x\in \R$, $\lfloor x \rfloor$ (resp., $\lceil x \rceil$) denotes the largest integer smaller (resp., greater) than or equal to $x$. For an even $l$, define \begin{align*} \Sp_{l}=\{g\in\GL_l:{}^tg\left(\begin{smallmatrix}&J_{l/2}\\-J_{l/2}\end{smallmatrix}\right)g=\left(\begin{smallmatrix}&J_{l/2}\\-J_{l/2}\end{smallmatrix}\right)\}. \end{align*} Let $B_{\Sp_{l}}=\Sp_{l}\cap B_{\GL_l}$. For any $l$, let $\SO_l=\{g\in\SL_l:{}^tgJ_lg=J_l\}$ and fix $B_{\SO_{l}}=\SO_{l}\cap B_{\GL_l}$. Let $\Spin_l$ be the algebraic double cover of $\SO_l$, with the Borel subgroup which is the preimage of $B_{\SO_l}$. This defines the set of simple roots $\alpha_0,\ldots,\alpha_{\lfloor l/2\rfloor-1}$ where $\alpha_i=\epsilon_i-\epsilon_{i+1}$ for $0\leq i<\lfloor l/2\rfloor-1$, and $\GSpin_l$ can be defined as the Levi subgroup of $\Spin_{l+2}$ obtained by removing $\alpha_0$. For $l=0,1$, $\GSpin_l=\GL_1$, and $\GSpin_2=\GL_1\times\GL_1$. Henceforth we fix one of the families of groups $\GL_l$, $\Sp_{l}$ (when $l$ is even), $\SO_l$ or $\GSpin_l$, and for a given $l$ denote the member by $\mathcal{G}_l$, e.g., $\mathcal{G}_l=\Sp_l$. Write the Borel subgroup in the form $B_{\mathcal{G}_l}=T_{\mathcal{G}_l}\ltimes N_{\mathcal{G}_l}$, where $N_{\mathcal{G}_l}$ is the unipotent radical. For a parabolic subgroup $R<\mathcal{G}_l$, $\delta_R$ denotes its modulus character, and we write $R=M_R\ltimes U_R$ where $M_R$ is the Levi part and $U_R$ is the unipotent radical. If $U<\mathcal{G}_l$ is a unipotent subgroup, $U^-$ denotes the opposite subgroup. The Weyl group of $\mathcal{G}_l$ is denoted $W(\mathcal{G}_l)$, and similar notation is used for any reductive group. The center of an algebraic group $X$ is denoted $C_X$, and its connected component by $C_X^{\circ}$. The unipotent subgroups of $\GSpin_l$ are isomorphic (as algebraic groups) to the unipotent subgroups of $\SO_l$, and $W(\GSpin_l)$ is isomorphic to $W(\SO_l)$. Also $C_{\GSpin_{2l+1}}$ is connected and for $l>2$, $C_{\GSpin_{l}}^{\circ}\cong\GL_1$. Let $F$ be a local field with characteristic $0$. Throughout, we identify $F$-groups with their $F$-points, e.g., $\mathcal{G}_l=\mathcal{G}_l(F)$. The additive group of $l\times l'$ matrices (over $F$) is denoted $\Mat_{l\times l'}$ and $\Mat_l=\Mat_{l\times l}$. The trace map is denoted $\tr$. If $F$ is non-archimedean, we let $q$ denote the cardinality of its residue field. 
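For orientation, the involution $g\mapsto g^*$ defined above can be written out explicitly in the smallest case $l=2$ (an elementary computation, recorded only for convenience):
\begin{align*}
g=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in\GL_2,\qquad g^*=J_2\,{}^tg^{-1}J_2=\frac{1}{ad-bc}\left(\begin{smallmatrix}a&-b\\-c&d\end{smallmatrix}\right).
\end{align*}
In particular $g\mapsto g^*$ is an involutive automorphism of $\GL_l$ which preserves $B_{\GL_l}$ and $T_{\GL_l}$.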
When we say that a property holds outside a discrete subset of $s$, over a non-archimedean field we mean for all but finitely many values of $q^{-s}$. For any group $X$, $x,y\in X$ and $Y<X$, ${}^xy=xyx^{-1}$ and ${}^xY=\{{}^xy:y\in Y\}$. \subsection{Representations}\label{representations} We describe the general notation involving representations that appear in this work. In this section $\mathcal{G}_l$ can be replaced with any reductive algebraic group. By a representation of a closed subgroup of $\mathcal{G}_l$ we always mean a smooth representation on a complex vector space. Over archimedean fields, an admissible representation is understood to be admissible Fr\'{e}chet of moderate growth. If $\pi$ is a representation of a closed subgroup $Y<\mathcal{G}_l$, $\pi^{\vee}$ is the representation contragredient to $\pi$, and for $x\in\mathcal{G}_l$, ${}^x\pi$ denotes the representation of ${}^xY$ on the same space of $\pi$, with the action given by ${}^x\pi(y)=\pi({}^{x^{-1}}y)$. Parabolic induction is normalized. Morphisms are continuous and induction is smooth, and $\otimes$ is the complete tensor product, over archimedean fields. In this work supercuspidal representations are not automatically irreducible (or unitary). When the field is non-archimedean, a representation of a group which does not have unipotent subgroups is also (trivially) supercuspidal. By definition, supercuspidal representations only exist over non-archimedean fields. For a closed unipotent subgroup $U<\mathcal{G}_l$, denote the set of (unitary) characters of $U$ by $\widehat{U}$. Let $\pi$ be a representation of $U$ on a space $\mathcal{V}$. For $\psi\in\widehat{U}$, let $\mathcal{V}(U,\psi)\subset \mathcal{V}$ be the subspace spanned by the vectors $\pi(u)\xi-\psi(u)\xi$ for all $u\in U$ and $\xi\in\mathcal{V}$ over non-archimedean fields, and over archimedean fields $\mathcal{V}(U,\psi)$ is the closure of this subspace. The Jacquet module $J_{U,\psi}(\pi)$ is the quotient $\mathcal{V}(U,\psi)\backslash\mathcal{V}$. Assume $R<\mathcal{G}_l$ is a closed subgroup containing $U$. Denote the normalizer of $U$ in $R$ by $N_R(U)$. If $\pi$ is a representation of $R$, $J_{U,\psi}(\pi)$ is a representation of the subgroup of $N_R(U)$ which stabilizes $\psi$. We do not twist the action, i.e., we do not multiply by a modulus character. For any $r\in R$, we have an isomorphism ${}^rJ_{U,\psi}(\pi)\cong J_{{}^rU,{}^r\psi}(\pi)$ of representations of ${}^rU$ (use $\xi\mapsto\pi(r)\xi$). In particular if $r\in N_R(U)$, ${}^rJ_{U,\psi}(\pi)\cong J_{U,{}^r\psi}(\pi)$. Over non-archimedean fields, if $U$ is abelian and $N_R(U)$ acts on $\widehat{U}$ with finitely many orbits, by \cite[5.9--5.12]{BZ1} if $J_{U,\psi'}(\pi)=0$ when $\psi'$ varies over a complete set of representatives for the nontrivial orbits, $U$ acts trivially on the space of $\pi$, i.e., $\pi=J_{U,1}(\pi)$. Let $J_{U,\psi}(\pi)^*$ be the algebraic dual of $J_{U,\psi}(\pi)$ over a non-archimedean field, and the continuous dual over archimedean fields. By definition $\Hom_{U}(\pi,\psi)=J_{U,\psi}(\pi)^*$. Over archimedean fields we will also need the notion of generalized Jacquet modules. Let $\pi$ be a representation of $\mathcal{G}_l$, and $R=M_R\ltimes U_R<\mathcal{G}_l$ be a parabolic subgroup. Denote the Lie algebra of $U_R$ by $\mathfrak{u}_R$. For any positive integer $i$, we call $\pi/\overline{\mathfrak{u}^i\pi}$ the $i$-th generalized Jacquet module of $\pi$. 
\begin{lemma}\label{lem:adm} If $\pi$ is an admissible finite length representation of $\mathcal{G}_l$, the $i$-th generalized Jacquet module is an admissible finite length representation of $M_R$. \end{lemma} This lemma is proven in the same way as the classical case ($i=1$), see Wallach \cite[Lemma~4.3.1]{Wal88}. \begin{lemma}\label{lem:adm 2 discrete} Assume $\pi$ is an admissible finite length representation of $\mathcal{G}_l$. The set of central exponents of $\pi/\mathfrak{u}^i\pi$, i.e., the central characters of the irreducible constituents of $\pi/\mathfrak{u}^i\pi$ as a representation of $M_R$, where $i$ varies over the positive integers, belong in a discrete set. \end{lemma} \begin{proof} Let $V$ denote the Harish-Chandra module of $\pi$, i.e., the space of $K$-finite vectors, where $K\subset \mathcal{G}_l$ is a maximal compact subgroup. By \cite[Proposition 2.2]{Cass}, $V$ is dense in $\pi$, and by \cite[Proposition 5.1 and Lemma 5.3]{Cass}, $V/\mathfrak{u} V$ has finitely many central exponents. In other words, for any $X$ in the Lie algebra of the center of $\mathcal{G}_l$ there exists a polynomial $p$ such that $p(X)$ acts by zero on $V/\mathfrak{u} V$. Now, $V/\mathfrak{u}^iV$ is filtered by the modules $\mathfrak{u}^jV/\mathfrak{u}^{j+1}V$, $0\leq j<i$, and each of these is a quotient of $\mathfrak{u}^j\otimes V/\mathfrak{u}V$. When $i$ varies, the set of central exponents of $V/\mathfrak{u}^iV$ is contained in the set of central exponents of $\mathfrak{u}^j\otimes V/\mathfrak{u}V$, $j\geq0$. Regarding $\mathfrak{u}^j$, its central exponents can be computed using the adjoint action, and when $j$ varies they belong in a lattice. Since $V/\mathfrak{u}V$ admits only finitely many central exponents, the central exponents of $V/\mathfrak{u}^iV$ for all $i$ lie in a finite union of lattices. Finally, note that for any $i$, the set of central exponents of $\pi/\mathfrak{u}^i\pi$ lies in the set of central exponents of $V/\mathfrak{u}^iV$. Indeed, if $p(X)$ acts by zero on $V/\mathfrak{u}^iV$ then it acts by zero on $\pi/\mathfrak{u}^i\pi$, since $V$ is dense in $\pi$. \end{proof} \begin{remark} In particular, the set of central exponents of the $i$-th generalized Jacquet modules of $\pi$, where $i$ varies over the positive integers, belongs in a discrete set. \end{remark} Let $\psi$ be a nontrivial additive character of $F$. For $v\in V_{(c^l)}$, write $v=(v_{i,j})_{1\leq i,j\leq l}$ with $v_{i,j}\in\Mat_c$. Denote \begin{align*} \psi_{l}(v)=\psi(\sum_{i=1}^{l-1}\tr(v_{i,i+1})). \end{align*} For a representation $\pi$ of $\GSpin_l$ which admits a central character, let $\chi_{\pi}$ be the restriction of the central character of $\pi$ to $C_{\GSpin_l}^{\circ}$. \subsection{Distribution vanishing theorem}\label{sec:KV} Let a real algebraic group $C$ act on a real algebraic manifold $X$. Let $E$ be a smooth representation of $C$ in a Fr\'{e}chet space. Assume the actions of $C$ on $X$ and on $E$ extend to a Lie group $A$, which contains $C$ as a closed normal subgroup. Let $Z\subset X$ be a closed subset which is a union of finitely-many locally closed $A$-orbits. For any $\nu \in\Z_{>0}$ and $z\in Z$ let $\Lambda_z^{\nu}$ be the symmetric $\nu$-th power of the conormal space at $z$ to the orbit $Cz$ in $X$. Let $C_z$ denote the stabilizer of $z$ in $C$, and $\delta$ be the ratio of modular functions of $C$ and $C_z$. 
Denote the space of $E$-valued distributions on $X$, i.e., functionals on the space of compactly supported smooth $E$-valued functions on $X$, by $\cD'(X,E)$, and let $\cD'_{Z}(X,E)\subset \cD'(X,E)$ denote the subspace of distributions supported on $Z$. For a smooth character $\chi$ of $C$, let $\cD'_{Z}(X,E)^{C,\chi}\subset \cD'_{Z}(X,E)$ be the subspace of $(C,\chi)$-equivariant distributions. The following theorem follows from Theorem~\ref{theorem:convenient vanishing} in the appendix: \begin{theorem}\label{thm:AG} Assume that for any $z\in Z$, the set $\{\chi^a|_{C_z}:a\in A\}$ is a union of finitely many locally closed orbits, under the action of the stabilizer $A_z$ of $z$ in $A$. Suppose also that for any $z\in Z$ and any $\nu\geq 0$, \begin{align}\label{=KV} ((E\otimes \Lambda_z^{\nu}\otimes \delta)^*)^{C_z,\chi}=0. \end{align} Then $\cD'_{Z}(X,E)^{C,\chi}=0$. \end{theorem} \begin{remark} If $\chi$ is trivial or $A=C$, the theorem already follows from \cite[Theorem 3.15, Cases (i,ii)]{KV}. Note that in both cases $\chi^a=\chi$ for any $a\in A$. \end{remark} \begin{remark} If $A,X$ and the action of $A$ on $X$ are semi-algebraic, the $A$-orbits in $X$ are automatically locally closed. If in addition $C$ is semi-algebraic and $C_z$ is unipotent, the condition \eqref{=KV} is equivalent to $(E^*)^{C_z,\chi}=0$, independent of $\nu$ (see \cite{Sun}). \end{remark} In order to check the conditions of the theorem we will need the following lemma. \begin{lemma}\label{lem:Uhat} Let $H$ be a real reductive group and $Q<H$ be a parabolic subgroup, with a unipotent radical $U=U_Q$. The set $\widehat{U}$ (the unitary characters of $U$) is a finite union of locally closed $Q$-orbits. \end{lemma} \begin{proof} Let $\mathfrak{u}$ denote the Lie algebra of $U$. There exists a hyperbolic semi-simple element $S\in H$ such that $\mathfrak{u}$ is the sum of positive eigenspaces of the adjoint action $\mathrm{ad}(S)$. The eigenspace $\mathfrak{u}_1$ corresponding to the smallest positive eigenvalue of $\mathrm{ad}(S)$ is called the first internal Chevalley module of $Q$. Clearly, $\mathfrak{u}_1$ projects onto (and in fact identifies with) the space of characters of $\mathfrak{u}$, which in turn identifies with $\widehat{U}$ by multiplying by $i$ and exponentiation. By \cite[Theorem E']{Ric}, $Q$ has finitely many orbits on $\mathfrak{u}_1$, and each orbit is locally closed since the action is algebraic. \end{proof} \subsection{Representations of type $(k,c)$}\label{kc representations} Let $k$ and $c$ be positive integers. For a partition $\sigma$ of $kc$, let $V(\sigma)<N_{\GL_{kc}}$ denote the corresponding unipotent subgroup, and $\widehat{V}(\sigma)_{\mathrm{gen}}$ denote the set of generic characters. If $\sigma'$ is another partition of $kc$, write $\sigma'\succsim\sigma$ if $\sigma'$ is greater than or non-comparable with $\sigma$, with respect to the natural partial ordering on partitions. See \cite{G2}, \cite[\S~5]{CM} and \cite{Cr} for details on these notions. For convenience, we provide the definition of $V(\sigma)$. Identify $\sigma$ with an $l$-tuple of integers $(a_1,\ldots,a_l)$ such that $a_1 \geq\ldots\geq a_l>0$. Let $p_{\sigma}$ be the $kc$-tuple of integers obtained by arranging the multi-set $\{a_i-2j+1:1\leq i \leq l,\,1\leq j\leq a_i\}$ in decreasing order. For any $x\in F^*$, put $x^{p_{\sigma}}=\diag(x^{p_{\sigma}(1)},\ldots,x^{p_{\sigma}(kc)})\in T_{\GL_{kc}}$.
The one-parameter subgroup $\{x^{p_{\sigma}}:x\in F^*\}$ acts on the Lie algebra of $N_{\GL_{kc}}$ by conjugation, and $V(\sigma)$ is the subgroup generated by the weight subspaces of weight at least $2$. For the orbit $(k^c)$, $V((k^c))=V_{(c^k)}$, the group $M_{(c^k)}$ acts transitively on the set $\widehat{V}((k^c))_{\mathrm{gen}}$, and $\psi_k\in \widehat{V}((k^c))_{\mathrm{gen}}$. The stabilizer of $\psi_k$ in $M_{(c^k)}$ is then the diagonal embedding $\GL_c^{\triangle}$ of $\GL_c$ in $M_{(c^k)}$. Let $\rho$ be a representation of $\GL_{kc}$. We say that $\rho$ is a $(k,c)$ representation if $\Hom_{V(\sigma)}(\rho,\psi')=0$ for all $\sigma\succsim(k^c)$ and $\psi'\in\widehat{V}(\sigma)_{\mathrm{gen}}$, and $\dim\Hom_{V_{(c^k)}}(\rho,\psi_{k})=1$. We briefly recall the definition of the wave-front set (see e.g., \cite[\S~4.1]{GourevitchSahi2019} for some more details). When $\rho$ is admissible of finite length, its character defines a distribution on a neighborhood of $0$ in the Lie algebra of $\GL_{kc}$. This distribution (in the non-archimedean case) or the leading term of its asymptotic expansion near $0$ (archimedean case) is a combination of Fourier transforms of Haar measures of nilpotent coadjoint orbits (\cite{Howe1974}, \cite[p.~180]{Harish-Chandra1977}, \cite[Theorems~1.1 and 4.1]{BV80}). For a nilpotent orbit $\mathcal{O}$, let $c_{\mathcal{O}}$ denote its coefficient in this expansion (for a suitable normalization of the measures). The wave-front set $\mathrm{WF}(\rho)$ of $\rho$ is defined to be the set of orbits $\mathcal{O}$ such that $c_{\mathcal{O}}\ne0$ and for each orbit $\mathcal{O}'$ containing $\mathcal{O}$ in its closure, $c_{\mathcal{O}'}=0$. In this case, an equivalent definition of a $(k,c)$ representation can be given in terms of $\mathrm{WF}(\rho)$: Now $\rho$ is $(k,c)$ if $(k^c)$ is the unique maximal orbit in $\mathrm{WF}(\rho)$ and the dimension of the space of degenerate Whittaker functionals on $\rho$ with respect to $V_{(c^k)}$ and $\psi_{k}$ is $1$ (see \cite[Theorem~E]{GGS}). For $c=1$, a representation is $(k,1)$ if and only if it affords a unique Whittaker model. On the other end, a representation is $(1,c)$ if and only if $\dim\Hom_{V_{(c)}}(\rho,1)=1$, equivalently $\rho$ is a character ($V_{(c)}$ is the trivial group). For a $(k,c)$ representation $\rho$, $\dim J_{V_{(c^k)},\psi_k}(\rho)^*=1$, hence $\dim J_{V_{(c^k)},\psi_k}(\rho)=1$ so that $\SL_c^{\triangle}$ acts trivially on $J_{V_{(c^k)},\psi_k}(\rho)$ and $\GL_c^{\triangle}$ acts on $J_{V_{(c^k)},\psi_k}(\rho)$ by a character. We recall the map $\rho_c$ defined (implicitly) in \cite[\S~2.2]{CFK} from irreducible generic representations of $\GL_k$ to admissible finite length $(k,c)$ representations of $\GL_{kc}$. For an irreducible tempered representation $\tau$ of $\GL_k$, $\rho_c(\tau)$ is the generalized Speh representation, i.e., the unique irreducible quotient of $\Ind_{P_{(k^c)}}^{\GL_{kc}}((\tau\otimes\ldots\otimes\tau)\delta_{P_{(k^c)}}^{1/(2k)})$ (see \cite{Jac4,MW4}). Then if $\tau=\Ind_{P_{\beta}}^{\GL_k}(\otimes_{i=1}^d|\det|^{a_i}\tau_i)$ where $\beta$ is a composition of $d$ parts of $k$, $a_1>\ldots>a_d$ and each $\tau_i$ is tempered, $\rho_c(\tau)=\Ind_{P_{\beta c}}^{\GL_{kc}}(\otimes_{i=1}^d|\det|^{a_i}\rho_c(\tau_i))$. By \cite[Theorem~5]{CFK} the representation $\rho_c(\tau)$ is $(k,c)$. 
The definition of $\rho_c(\tau)$ was also extended to unramified principal series $\Ind_{B_{\GL_k}}^{\GL_k}(\otimes_{i=1}^k|\det|^{a_i}\tau_i)$, where $\tau_i$ are unramified unitary quasi-characters of $F^*$ and $a_1\geq\ldots\geq a_k$, again by letting $\rho_c(\tau)=\Ind_{P_{(c^k)}}^{\GL_{kc}}(\otimes_{i=1}^k|\det|^{a_i}\rho_c(\tau_i))$ (note that $\rho_c(\tau_i)=\tau\circ\det_{\GL_c}$). While $\rho_c(\tau)$ might be reducible in the general case, it is still admissible, of finite length and admits a central character. Also note that (over any local field) $\GL_c^{\triangle}$ acts on $J_{V_{(c^k)},\psi_k}(\rho_c(\tau))$ by $g\mapsto\tau((\det g)I_k)$ (\cite[Lemma~14]{CFK}). We mention that over non-archimedean fields, certain structural properties of irreducible $(k,c)$ representations follow from \cite[\S~II.2]{MW3}. For principal series representations, irreducible or not, over any local field, a representation is $(k,c)$ if and only if it takes the form $\Ind_{P_{(c^k)}}^{\GL_{kc}}(\otimes_{i=1}^k\chi_i\det_{\GL_c})$ for quasi-characters $\chi_i$ of $F^*$. This follows from \cite{AGS2015a,AGS2015b,GGS} (their focus was archimedean; the non-archimedean case essentially follows from \cite{BZ1,MW3}). \subsection{Doubling setup}\label{Doubling setup} We define the basic setup for the doubling method: the groups $G$ and $H$, the image of $G\times G$ in $H$, and the definition of the local integral. The precise details depend on $G$. Let $c,k\geq1$ be integers, $G=\mathcal{G}_c$ and $H=\mathcal{G}_{2kc}$ (if $G=\Sp_c$, $c$ must be even). Let $n=\lfloor c/2\rfloor$ if $G\ne\GL_c$, otherwise $n=c$. Also set $\epsilon_0=-1$ for $G=\Sp_{c}$ and $\epsilon_0=1$ otherwise, and if $G=\SO_c,\GSpin_c$ and $c$ is odd, define $(\epsilon_1,\epsilon_2)=(1,1/2)$ if $k$ is even and $(\epsilon_1,\epsilon_2)=(1/2,1)$ if $k$ is odd. Recall $B_H=T_H\ltimes N_H$ is our fixed Borel subgroup in $H$ (see \S~\ref{the groups}). Set $H_0=\mathcal{G}_{2c}$. Let $Q=M_Q\ltimes U_Q$ be the standard parabolic subgroup of $H$ such that its Levi part $M_Q$ is isomorphic to $\GL_c\times\ldots\times\GL_c\times H_0$ if $H\ne\GL_{2kc}$, otherwise $Q=P_{(c^{k-1},2c,c^{k-1})}$. Denote $U=U_Q$. We construct the following character $\psi_U$ of $U$. For $k>1$, denote the middle $4c\times 4c$ block of an element in $U$ by \begin{align}\label{matrix:middle 4c block of u} \left(\begin{smallmatrix}I_c&u&v\\&I_{2c}&u'\\&&I_c\end{smallmatrix}\right). \end{align} Let $u^{1,1}$ be the top left $n\times n$ block of $u$, and if $H\ne\GL_{2kc}$, denote the bottom right $n\times n$ block of $u$ by $u^{2,2}$. For $H=\GL_{2kc}$, $u^{2,2}$ is defined to be the top $c\times c$ block of $u'$. If $H=\SO_{2kc},\GSpin_{2kc}$ and $c$ is odd, denote the middle two coordinates of row $n+1$ of $u$ by $(u^3,u^4)\in\Mat_{1\times2}$. If $H\ne\GL_{2kc}$ and $k>1$, the character $\psi_U$ restricts to $\psi_{k-1}$ on the group $V_{(c^{k-1})}$, identified with a subgroup of $U$ via the embedding $v\mapsto\diag(v,I_{2c},v^*)\in U$. For $H=\GL_{2kc}$ and $k>1$, $\psi_U$ restricts to $\psi_{k-1}^{-1}$ on each of the two copies of $V_{(c^{k-1})}$, embedded in $U$ via $(v_1,v_2)\mapsto\diag(v_1,I_{2c},v_2)$ ($v_1,v_2\in V_{(c^{k-1})}$). The character $\psi_U$ is given on \eqref{matrix:middle 4c block of u} by \begin{align*} \begin{cases} \psi(\tr(-u^{1,1}+u^{2,2}))&H=\GL_{2kc},\\ \psi(\tr(u^{1,1}+u^{2,2}))&H=\Sp_{2kc},\SO_{2kc},\GSpin_{2kc},\text{ even $c$},\\ \psi(\tr(u^{1,1}+u^{2,2})+\epsilon_1u^3-\epsilon_2u^4)&H=\SO_{2kc},\GSpin_{2kc},\text{ odd $c$}. 
\end{cases} \end{align*} For $k=1$, $U$ and thereby $\psi_U$ are trivial. Now consider the case $H\ne\GSpin_{2kc}$. In this case $G\times G$ is embedded in the stabilizer of $\psi_U$ in $M_Q$. Explicitly, assume $k\geq1$ and $g_1,g_2\in G$. If $H=\Sp_{2kc},\SO_{2kc}$ with an even $c$, write $g_1=\left(\begin{smallmatrix}g_{1,1}&g_{1,2}\\g_{1,3}&g_{1,4}\end{smallmatrix}\right)$, $g_{1,i}\in \Mat_n$, then \begin{align*} (g_1,g_2)= \diag(g_1,\ldots,g_1,\left(\begin{smallmatrix} g_{1,1}&&g_{1,2}\\ &g_2&\\ g_{1,3}&&g_{1,4}\end{smallmatrix}\right),g_1^*,\ldots,g_1^*), \end{align*} where $g_1^*$ appears $k-1$ times. For $H=\GL_{2kc}$, \begin{align*} (g_1,g_2)= \diag(g_1,\ldots,g_1,g_1,g_2,g_1,\ldots,g_1). \end{align*} Here $g_1$ appears $k$ times on the left of $g_2$ and $k-1$ on the right. For odd $c$ and $H=\SO_{2kc}$, take column vectors $e_{\pm i}$, $1\leq i\leq c$, whose Gram matrix is $J_{2c}$ (i.e., ${}^te_{i}e_{-j}=\delta_{i,j}$). Let \begin{align*} &b=(e_1,\ldots,e_{c-1},\epsilon_1e_{c}-\epsilon_2e_{-c},\epsilon_1e_{c}+\epsilon_2e_{-c},e_{-c+1},\ldots,e_{-1}),\\ &b_1=(e_1,\ldots,e_{n},\epsilon_1e_{c}-\epsilon_2e_{-c},e_{-n},\ldots,e_{-1}),\\ &b_2=(e_{n+1},\ldots,e_{c-1},\epsilon_1e_{c}+\epsilon_2e_{-c},e_{-c+1},\ldots,e_{-n-1}),\\ &m=\diag(I_{c-1},\left(\begin{smallmatrix}\epsilon_1&\epsilon_1\\-\epsilon_2&\epsilon_2\end{smallmatrix}\right),I_{c-1}). \end{align*} The Gram matrices of $(b,b_1,b_2)$ are $(J_{2c},\diag(I_n,-1,I_n)J_{c},J_{c})$. The left (resp., right) copy of $\SO_c$ acts on the subspace spanned by $b_1$ (resp., $b_2$); the left copy is defined by \begin{align*} \{g_1\in\SL_c:{}^tg_1\diag(I_n,-1,I_n)J_{c}g_1=\diag(I_n,-1,I_n)J_{c}\}, \end{align*} and the right copy is defined using the convention of \S~\ref{the groups} (the Gram matrix of $b_2$ is $J_c$). Extend $g_i$ by letting it fix the vectors of $b_{3-i}$, then write this extension as a matrix $g_i'\in\SO_{2c}$ with respect to $b$, $i=1,2$. The matrices ${}^mg_1'$ and ${}^mg_2'$ commute and the embedding is given by \begin{align*} (g_1,g_2)=\diag(g_1,\ldots,g_1,{}^mg_1'\,{}^mg_2',g_1^*,\ldots,g_1^*). \end{align*} The notation $(1,g)$ or $(g,1)$ is used for the embedding of one of the copies of $G$ in $H$, where $1$ denotes the identity element of $G$. \begin{example}\label{example:odd c} Here are a few examples for the embedding in the odd orthogonal case, adapted from \cite[Example~15]{CFK}. Consider the standard Siegel parabolic subgroup $R$ of $G$. For $a,b\in\GL_n\cong M_R$, \begin{align*} (a,b)=\diag(\diag(a,1,a^*)^{\triangle'},\diag(a,b,I_2,b^*,a^*)), \end{align*} where $\triangle'$ denotes the diagonal embedding of $\GL_c$ in $\GL_{(k-1)c}$, and we omitted, here and below, the bottom right $(k-1)c\times(k-1)c$ block of $(a,b)$ (it is uniquely defined by the given blocks and $H$). The images of $(U_R,1)$ and $(1,U_R)$ take the form \begin{align*} &\diag(\left(\begin{smallmatrix}I_n&x&y\\&1&x'\\&&I_n\end{smallmatrix}\right)^{\triangle'}, \left(\begin{smallmatrix}I_n&&\epsilon_2x&-\epsilon_1x&&y\\&I_{n}\\&&1&&&\epsilon_1x'\\&&&1&&-\epsilon_2x'\\&&&&I_n\\&&&&&I_n\end{smallmatrix}\right)),\\ &\diag(I_{(k-1)c}, \left(\begin{smallmatrix}I_n\\&I_n&\epsilon_2x&\epsilon_1x&y\\&&1&&\epsilon_1x'\\&&&1&\epsilon_2x'\\&&&&I_n\\&&&&&I_n\end{smallmatrix}\right)), \end{align*} where $x'$ is uniquely determined given $x$ and $H$. 
We also note that \begin{align*} &(\left(\begin{smallmatrix}I_{n-1}\\&&&1\\&&-1\\&1\\&&&&I_{n-1}\end{smallmatrix}\right),1)= \diag(\left(\begin{smallmatrix}I_{n-1}\\&&&1\\&&-1\\&1\\&&&&I_{n-1}\end{smallmatrix}\right)^{\triangle'}, \left(\begin{smallmatrix}I_{n-1}\\&&&&&&1\\&&I_n\\&&&&2\epsilon_1^2\\&&&2\epsilon_2^2\\&&&&&I_n\\&1\\&&&&&&&I_{n-1}\end{smallmatrix}\right)),\\ &(1,\left(\begin{smallmatrix}I_{n-1}\\&&&1\\&&-1\\&1\\&&&&I_{n-1}\end{smallmatrix}\right))= \diag(I_{(k-1)c}, \left(\begin{smallmatrix}I_{2n-1}\\&&&&1\\&&&-2\epsilon_1^2\\&&-2\epsilon_2^2\\&1&&\\&&&&&I_{2n-1}\end{smallmatrix}\right)). \end{align*} \end{example} The case of $H=\GSpin_{2kc}$ is slightly more complicated: we have an embedding of \begin{align}\label{embedding G G in H GSpin} \{(z,z):z\in C_G^{\circ}\}\backslash G\times G \end{align} in $M_Q$, in the stabilizer of $\psi_U$. Here with a minor abuse of notation $(z,z)$ is regarded as an element of $G\times G$. For details see \cite[\S~3.5]{CFK}. We define the space of the induced representation of $H$, which is used for the construction of the integral. First assume $H=\Sp_{2kc},\SO_{2kc}$. Let $P=M_P\ltimes U_P$ be a standard maximal parabolic subgroup of $H$ such that $M_P\cong\GL_{kc}$ and $M_P<M_{(kc,kc)}$. Let $\rho$ be a representation of $\GL_{kc}$. For a complex parameter $s$, let $V(s,\rho)$ be the space of $\Ind_{P}^{H}(|\det|^{s-1/2}\rho)$. For $H=\GSpin_{2kc}$ we take the standard parabolic subgroup $P$ obtained by removing the simple root $\alpha_{kc}$, then $M_P\cong\GL_{kc}\times\GL_1$ and note that $\GL_1$ is identified with $C_H^{\circ}$. Let $\rho$ be as above, and $\eta$ be a quasi-character of $F^*$. Then $V(s,\rho\otimes\eta)$ is the space of the induced representation $\Ind_{P}^{H}(|\det|^{s-1/2}\rho\otimes\eta)$. For $H=\GL_{2kc}$ we take $P=P_{(kc,kc)}$, $\rho=\rho_1\otimes\rho_2$ for two representations $\rho_1$ and $\rho_2$ of $\GL_{kc}$, and $V(s,\rho)$ denotes the space of $\Ind_{P}^{H}(|\det|^{s-1/2}\rho_1\otimes|\det|^{-s+1/2}\rho_2)$. Assume $H\ne\GSpin_{2kc}$. Take $\delta_0\in H$ satisfying ${}^{\delta_0}U_P=U_P^-$ if $kc$ is even or $H=\GL_{2kc}$; otherwise take $\delta_0'\in\Orth_{2kc}$ with ${}^{\delta_0'}U_P=U_P^-$ and let $\delta_0$ be the product of $\delta_0'$ and a representative of the transposition in $\Orth_{2kc}$ which normalizes $N_H$ (to obtain $\det\delta_0=1$). Let $\delta_1\in H_0\cap U_P$ ($H_0<M_Q$) be such that its natural identification with a matrix in $\Mat_c$ is of rank $c$, unless $H=\SO_{2kc}$ and $c$ is odd, in which case the rank is $c-1$ (this is the maximal rank). Put $\delta=\delta_0\delta_1$. Then let $\iota$ be an involution of $G$ such that ${}^{\delta}\{(g,{}^{\iota}g):g\in G\}<M_P$. 
One concrete choice of $\delta_0,\delta_1$ and $\iota$ was given in \cite{CFK}: \begin{align*} &\delta_0=\begin{dcases}\left(\begin{smallmatrix} &I_{kc}\\ \epsilon_0 I_{kc}\end{smallmatrix}\right)&\text{$H\ne\SO_{2kc}$ or $c=2n$},\\ \left(\begin{smallmatrix} &I_{kc}\\ I_{kc}\end{smallmatrix}\right) \diag(I_{(k-1)c},\left(\begin{smallmatrix}I_n\\&&(-1)^{k}\\&I_n\end{smallmatrix}\right), \left(\begin{smallmatrix}&I_n\\(-1)^{k}&&\\&&I_n\end{smallmatrix}\right),I_{(k-1)c})\jmath_{kc}&\text{$H=\SO_{2kc}$, $c=2n+1$}, \end{dcases} \end{align*} where $\jmath_{kc}=\diag(I_{kc-1},\left(\begin{smallmatrix} &1\\ 1\end{smallmatrix}\right)^{kc},I_{kc-1})$, $\delta_1=\diag(I_{(k-1)c},\left(\begin{smallmatrix}I_c &A \\ &I_c\end{smallmatrix}\right),I_{(k-1)c})$ with \begin{align*} A=\begin{dcases} I_c&H=\Sp_{2kc},\GL_{2kc},\\ \left(\begin{smallmatrix}-I_{n}\\&I_{n}\end{smallmatrix}\right) &\text{$H=\SO_{2kc}$, $c=2n$},\\ \left(\begin{smallmatrix}&-I_{n}\\&&I_{n}\\0\end{smallmatrix}\right) &\text{$H=\SO_{2kc}$, $c=2n+1$},\\ \end{dcases} \end{align*} and \begin{align*} \iota=\begin{dcases} \left(\begin{smallmatrix}&I_{n}\\-\epsilon_0I_{n}\end{smallmatrix}\right)&\text{$H=\Sp_{2kc},\SO_{2kc}$, $c=2n$},\\ I_c&H=\GL_{2kc},\\ \left(\begin{smallmatrix}&&I_{n}\\&I_2\\I_{n}\end{smallmatrix}\right)&\text{$H=\SO_{2kc}$, $c=2n+1$, $k$ is odd},\\ \left(\begin{smallmatrix}&&I_{n}\\&\begin{smallmatrix}&-2\epsilon_1^2\\-2\epsilon_2^2\end{smallmatrix}\\I_{n}\end{smallmatrix}\right)& \text{$H=\SO_{2kc}$, $c=2n+1$, $k$ is even}. \end{dcases} \end{align*} Also set $U_0=U\cap {}^{\jmath_{kc}}U_P$, then $\psi_U$ is a character of $U_0$ by restriction. For $H=\GSpin_{2kc}$, $\delta_0\in H$ is defined using the isomorphism $W(H)\cong W(\SO_{2kc})$, $\delta_1$ is the element taken for $\SO_{2kc}$, and $\iota$ satisfies the same condition as above (the concrete examples of $\iota$ extend to involutions of $\GSpin_c$ as well). The important observation for us here concerning $\GSpin_{2kc}$, is that the images of unipotent subgroups, and subgroups $\GL_l$ occurring as direct factors of standard Levi subgroups, can be read off the corresponding orthogonal cases. For any representation $\pi$ of $G$, $\pi^{\iota}$ is the representation on the space of $\pi$, with the action defined by $\pi^{\iota}(g)=\pi({}^{\iota}g)$. The definitions imply $(\pi^{\iota})^{\vee}=(\pi^{\vee})^{\iota}$. We define the (local) doubling integral. Let $\pi$ be an irreducible admissible representation of $G$. If $H\ne\GL_{2kc}$, let $\rho$ be an admissible finite length $(k,c)$ representation of $\GL_{kc}$ which admits a central character. Otherwise $\rho=\rho_1\otimes\chi^{-1}\rho_2$ where $\rho_1$ and $\rho_2$ are admissible finite length $(k,c)$ representations of $\GL_{kc}$, each admitting a central character, and such that the central character of $\rho_1$ is the inverse of the central character of $\rho_2$, and $\chi$ is a quasi-character of $F^*$. Let $\omega$ be a matrix coefficient of $\pi^{\vee}$. Let $f$ be a holomorphic section of $V(s,\rho)$ if $H\ne\GSpin_{2kc}$, and for $\GSpin_{2kc}$, $f$ is a holomorphic section of $V(s,\rho\otimes\chi_{\pi})$. The doubling integral for $\pi\times\rho$ is defined by \begin{align}\label{local integral} Z(s,\omega,f)=\int\limits_{G}\int\limits_{U_0} \omega(g)f(s,\delta u_0(1,{}^{\iota}g))\,\psi_U(u_0)\,du_0\,dg. \end{align} Here if $H=\GSpin_{2kc}$, the domain of integration is $C_G^{\circ}\backslash G$ instead of $G$. 
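As a quick sanity check on the shape of \eqref{local integral}, consider the case $k=1$ with $H\ne\GSpin_{2kc}$; here we only use the fact, noted above, that $U$ (and hence $U_0$) is trivial in this case, so that the integral formally reduces to
\begin{align*}
Z(s,\omega,f)=\int\limits_{G}\omega(g)f(s,\delta(1,{}^{\iota}g))\,dg,
\end{align*}
which is, up to conventions, the shape of the classical doubling integral (cf.\ \cite{PSR}).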
\begin{theorem}\label{theorem:basic props of doubling integrals}\cite[Propositions~17, 20, 21]{CFK} Integral \eqref{local integral} enjoys the following properties. \begin{enumerate}[leftmargin=*] \item\label{props 1}Formally, it belongs to the space \begin{align}\label{eq:doubling Hom} \Hom_{(G,G)}(J_{U,\psi_U^{-1}}(V(s,\rho\otimes\chi_{\pi})),\chi^{-k}\pi^{\vee}\otimes\pi^{\iota}). \end{align} Here $\chi_{\pi}$ and $\chi$ are omitted for the cases where they are undefined. \item\label{props 2}It is absolutely convergent for $\Real(s)\gg0$, independent of the data $(\omega,f)$. \item\label{props 3}Over non-archimedean fields there is data $(\omega,f)$, where $f$ is a polynomial section in $q^{\mp s}$, such that $Z(s,\omega,f)$ is absolutely convergent in $\C$ and equals a nonzero constant (independent of $s$). Over archimedean fields for each $s$ there is $\omega$ and a smooth section $f$ such that the integral is nonzero at $s$. \end{enumerate} \end{theorem} \begin{proof} The theorem was proved in \textit{loc. cit.}, for the representation $\rho=\rho_c(\tau)$ ($H\ne\GL_{2kc}$) or $\rho=\rho_c(\tau)\otimes\chi^{-1}\rho_c(\tau^{\vee})$ ($\rho_c(\tau)$ was defined in \S~\ref{kc representations}). However, the proofs of these statements remain valid when we take the more general representation $\rho$ as described above. \end{proof} Over non-archimedean fields, once we prove that \eqref{eq:doubling Hom} is at most one-dimensional outside a discrete subset of $s$, Theorem~\ref{theorem:basic props of doubling integrals} together with Bernstein's continuation principle (in \cite{Banks}) imply that for a rational section $f$, $Z(s,\omega,f)$ admits meromorphic continuation to a rational function in $q^{-s}$. Over archimedean fields for the choice of $\rho$ in \cite{CFK}, the meromorphic continuation of the integral and continuity of this continuation in the input data were proved in \cite[\S~6.13]{CFK}. \section{Uniqueness results}\label{Uniqueness results} \subsection{Outline of the proof of Theorem~\ref{theorem A}}\label{outline} Let $\pi_1$ and $\pi_2$ be admissible finite length representations of $G$. If $H\ne\GL_{2kc}$, let $\rho$ be an admissible finite length $(k,c)$ representation of $\GL_{kc}$. For $H=\GL_{2kc}$ put $\rho=\rho_1\otimes\rho_2$ where each $\rho_i$ is an admissible finite length $(k,c)$ representation of $\GL_{kc}$, and let $\chi_0$ be the quasi-character of $F^*$ such that the diagonal action of $\GL_c^{\triangle}$ on $J_{V_{(c^k)},\psi_k}(\rho_1)\otimes J_{V_{(c^k)},\psi_k}(\rho_2)$ is given by $g\mapsto\chi_0(\det g)$ ($g\in \GL_c$). If $H=\GSpin_{2kc}$, assume in addition that $\chi_{\pi_1},\chi_{\pi_2}$ exist and $\chi_{\pi_1}^{-1}=\chi_{\pi_2}$, and put $\eta=\chi_{\pi_1}^{-1}$. To preserve uniform notation the characters $\chi_0,\chi_{\pi_i}$ and $\eta$ are simply ignored in all other cases. Let $D=U\rtimes (G,G)<H$. We will prove our main result by analyzing distributions on the orbits of the right action of $D$ on the homogeneous space $P\backslash H$. The space $P\backslash H/D$ is finite if $k=1$ (see \cite[Lemma~2.1]{PSR}) or $c=1$ (then either $n=0$ and $U=N_H$, or $G=\GL_1$ and $U$ contains all the roots of $N_H$ but one, on which $(G,G)$ acts with $2$ orbits). Otherwise it is infinite, even uncountable (e.g., for $H=\Sp_{2kc}$ and $k>2$), but contains a unique Zariski open orbit which is $P\delta D$. This follows by showing that the dimension of $P\delta D$ is equal to the dimension of $PU_P^-$. 
For $h,h'\in H$, write $h\sim h'$ if $PhD=Ph'D$, otherwise $h\not\sim h'$. Regard $\psi_U\otimes \pi_1^{\vee}\otimes\pi_2^{\vee}$ as a representation of $D$. For $H=\GSpin_{2kc}$, $(G,G)$ is a homomorphic image of $G\times G$ (see \eqref{embedding G G in H GSpin}), and the condition $\chi_{\pi_1}^{-1}=\chi_{\pi_2}$ above implies that $\pi_1^{\vee}\otimes\pi_2^{\vee}$ is a representation of $(G,G)$. Consider the space \begin{align}\label{eq:Hom L 0} \Hom_{(G,G)}(J_{U,\psi_U^{-1}}(V(s,\rho\otimes\eta)),\pi_1\otimes\pi_2)\cong \Hom_{D}(V(s,\rho\otimes\eta)\otimes\psi_U\otimes \pi_1^{\vee}\otimes\pi_2^{\vee},1), \end{align} which is isomorphic to \begin{align}\label{eq:Hom L} \Hom_{D}(\Ind_{P\times D}^{H\times D}\left((|\det|^{s-1/2}\rho\otimes\eta)\otimes(\psi_U\otimes \pi_1^{\vee}\otimes\pi_2^{\vee})\right),1). \end{align} Here the action of $D$ on the space of functions $\xi$ on $H\times D$ is given by $d\cdot\xi(h',d')=\xi(h'd,d'd)$; and if $H=\GL_{2kc}$, $|\det|^{s-1/2}\rho$ is short for $|\det|^{s-1/2}\rho_1\otimes|\det|^{-s+1/2}\rho_2$. For any $h\in H$, denote $P_{h}={}^{h^{-1}}P\cap D$. We will study \eqref{eq:Hom L} by considering the following spaces of distributions on the orbits $PhD$ (this is well defined, see below): \begin{align}\label{space:Hom orbit} \Hom_{D}(\ind_{P_{h}}^D\left({}^{h^{-1}}((|\det|^{s-1/2}\rho\otimes\eta)\delta_{P}^{1/2})\otimes(\psi_U\otimes \pi_1^{\vee}\otimes\pi_2^{\vee}\otimes\Lambda_{\nu})\right),1). \end{align} Here over non-archimedean fields $\ind$ denotes the compact non-normalized induction, while for archimedean fields $\ind$ is the Schwartz induction of \cite[\S~2]{duCloux1991} (see also \cite[\S~2.3]{GGS2}); and $\Lambda_0$ is the trivial character. If the field is non-archimedean or $h\sim\delta$, we only have $\nu=0$. Over archimedean fields when $h\not\sim\delta$, we further have for each integer $\nu>0$, a finite dimensional algebraic representation $\Lambda_{\nu}$ which is the algebraic dual of the symmetric $\nu$-th power of the normal bundle to the double coset. Note that when $h\sim\delta$, i.e., for the open orbit $P\delta D$, the tangent space to the double coset coincides with the total tangent space, and thus the normal space is trivial. By the Frobenius reciprocity \eqref{space:Hom orbit} is isomorphic to \begin{align}\label{H(h)} \mathcal{H}_{\nu}(h)=\Hom_{P_{h}}({}^{h^{-1}}(|\det|^{s-1/2}\rho\otimes\eta)\otimes(\psi_U\otimes \pi_1^{\vee}\otimes\pi_2^{\vee})\otimes\Lambda_{\nu},\theta_h). \end{align} Here $\theta_h(x)=\delta_{P_h}(x)\delta_D^{-1}(x)\delta_P^{-1/2}({}^hx)$ ($x\in P_h$). We define $\mathcal{H}(h)=\mathcal{H}_0(h)$ if $F$ is non-archimedean or $h\sim\delta$, otherwise $\mathcal{H}(h)=\oplus_{\nu}\mathcal{H}_{\nu}(h)$. Our main result --- Theorem~\ref{theorem:uniqueness} below, is that \eqref{eq:Hom L 0} is at most one-dimensional outside a discrete subset of $s$. We will prove there is a discrete subset $\mathcal{B}\subset\C$, such that for all $s\notin\mathcal{B}$, $\mathcal{H}(h)=0$ for all $h\not\sim\delta$, and $\dim \mathcal{H}(\delta)\leq1$. Over non-archimedean fields this already implies \eqref{eq:Hom L 0} is at most one-dimensional outside $\mathcal{B}$. Indeed this follows from the theory of distributions on $l$-sheafs of \cite{BZ1}. In more detail, let $\mathcal{F}$ be the $l$-sheaf of the induced representation in \eqref{eq:Hom L}. The right action of $D$ on $P\backslash H$ is constructive, by \cite[Theorem~A]{BZ1} applied to $X(F)$ where $X$ is the algebraic $F$-variety $P\backslash H$. 
Each $\mathcal{H}(h)$ (see \eqref{space:Hom orbit}) is the space of distributions on the restriction of $\mathcal{F}$ to the orbit $PhD$ (the orbits are locally closed, hence this restriction is well defined). Fix $s\notin\mathcal{B}$ and let $\mathcal{T},\mathcal{T}'$ be nonzero distributions in \eqref{eq:Hom L}. Since $P\delta D$ is open, by \cite[1.16]{BZ1} both $\mathcal{T}$ and $\mathcal{T}'$ restrict to distributions on $\mathcal{H}(\delta)$, which is one-dimensional, hence there is $\alpha\in\C$ such that $\alpha\mathcal{T}|_{P\delta D}=\mathcal{T}'|_{P\delta D}$. Then $\alpha\mathcal{T}-\mathcal{T}'$ is well defined on the quotient $l$-sheaf $\mathcal{F}(P\delta D)\backslash\mathcal{F}$ (see \cite[1.16]{BZ1} for the definition and notation), which is an $l$-sheaf on the complement of $P\delta D$ in $H$. Since there are no nonzero distributions on any $\mathcal{H}(h)$ for $h\not\sim\delta$, by \cite[Theorem~6.9]{BZ1} we deduce $\alpha\mathcal{T}-\mathcal{T}'$ vanishes on $\mathcal{F}$, i.e., $\alpha\mathcal{T}=\mathcal{T}'$. Over archimedean fields the argument also depends on the precise methods we use in order to handle each $\mathcal{H}(h)$. We describe this below. \subsubsection{Basic properties of $\mathcal{H}(h)$}\label{Basic properties} In general every algebraic representation of a unipotent group is unipotent, i.e., admits a (finite) filtration such that the group acts trivially on each of its quotients. We can hence filter each $\Lambda_{\nu}$ and consider these quotients. If \eqref{H(h)} is nonzero, it is nonzero when $\Lambda_{\nu}$ is replaced by one of these quotients. Since we will prove $\mathcal{H}_{\nu}(h)=0$ for all $\nu>0$, we can consider each of these quotients, re-denoted $\Lambda_{\nu}$ (at the cost of re-enumerating the index set of $\nu$), separately, so that we assume $\Lambda_{\nu}$ is a trivial representation of $U$ for all $\nu\geq0$. In general if $Y<{}^hU\cap M_P$, then ${}^{h^{-1}}Y<P_h$ and by definition any morphism in $\mathcal{H}(h)$ factors through $J_{Y,{}^{h}\psi_U^{-1}}(\rho)$. Indeed since ${}^{h^{-1}}Y<U$, for $y\in Y$ we have \begin{align*} {}^{h^{-1}}(|\det|^{s-1/2}\rho\otimes\eta)({}^{h^{-1}}y)=\rho(y),\qquad (\psi_U\otimes \pi_1^{\vee}\otimes\pi_2^{\vee}\otimes\Lambda_{\nu})({}^{h^{-1}}y)=\psi_U({}^{h^{-1}}y), \end{align*} then if $\mathcal{T}\in\mathcal{H}_{\nu}(h)$ for some $\nu$, and $\xi_{\rho}\otimes\xi$ is a pure tensor in the space of $\rho\otimes(\pi_1^{\vee}\otimes\pi_2^{\vee}\otimes\Lambda_{\nu})$, \begin{align*} \psi_U({}^{h^{-1}}y)\mathcal{T}(\rho(y)\xi_{\rho}\otimes\xi)= \mathcal{T}(\psi_U({}^{h^{-1}}y)\rho(y)\xi_{\rho}\otimes\xi)=\mathcal{T}(\xi_{\rho}\otimes\xi). \end{align*} Thus \begin{align}\label{eq:T on Jacquet} \mathcal{T}((\rho(y)\xi_{\rho}-{}^{h}\psi_U^{-1}(y)\xi_{\rho})\otimes\xi)=0. \end{align} This means that $\mathcal{T}$ factors through $J_{Y,{}^{h}\psi_U^{-1}}(\rho)$, where in the archimedean case note that $\mathcal{T}$ is continuous, and because the argument is applicable to all $\nu$, we conclude that any morphism in $\mathcal{H}(h)$ factors through $J_{Y,{}^{h}\psi_U^{-1}}(\rho)$. \subsubsection{The vanishing of $\mathcal{H}(h)$}\label{the vanishing 3 types} One can prove the vanishing of $\mathcal{H}(h)$ using three types of arguments. First we have an incompatibility condition: assume $h$ is such that \begin{align} \label{psi U nontrivial} &\psi_U|_{U\cap {}^{h^{-1}}U_P}\ne1. \end{align} In this case we can take a subgroup $Y<U$ such that ${}^hY<U_P$ and $\psi_U|_Y\ne1$. 
Then $Y<P_h$ and both ${}^{h^{-1}}(|\det|^{s-1/2}\rho\otimes\eta)$ and $\pi_1^{\vee}\otimes\pi_2^{\vee}\otimes\Lambda_{\nu}$ are trivial on $Y$ (because ${}^hY<U_P$ and $Y<U$), hence the action on the left hand side in $\mathcal{H}_{\nu}(h)$ is given by $\psi_U$ which is nontrivial by \eqref{psi U nontrivial}. However, the action on the right hand side is trivial, because it is given by a modulus character and $Y<U$. Thus $\mathcal{H}_{\nu}(h)=0$ for all $\nu$, and $\mathcal{H}(h)=0$. Note that while a priori \eqref{psi U nontrivial} depends on $h$, we will actually prove it only depends on the double coset $PhQ$ (this is only important for the archimedean parts). Second, if any morphism in $\mathcal{H}(h)$ factors through $J_{V(\sigma),\psi'}(\rho)$, where $\sigma\succsim(k^c)$ and $\psi'\in\widehat{V}(\sigma)_{\mathrm{gen}}$, then $J_{V(\sigma),\psi'}(\rho)=0$ because $\rho$ is $(k,c)$, a fortiori $\mathcal{H}(h)=0$. Let us remark, that these two methods for proving vanishing will be applied to all but finitely many representatives. In fact, consider the Bruhat decomposition $H=\coprod_{w'}Pw'Q$ where $w'$ are representatives for Weyl elements of $H$, and let $w_0$ denote the representative of the longest reduced Weyl element. Then the orbit $Pw_0Q$ is open. The above arguments prove vanishing on $H-Pw_0Q$. The remaining orbit $Pw_0Q$ is the disjoint union of finitely many orbits $PhD$, namely $n+1$ orbits when $H\ne\GL_{2kc}$ and $(c+1)(c+2)/2$ orbits for $H=\GL_{2kc}$. In particular for $\delta$, as explained in \S~\ref{Doubling setup}, one can choose $\delta=\delta_0\delta_1$ with $\delta_0=w_0$ and $\delta_1\in N_{H_0}$, then $P\delta D \subset Pw_0Q$. The orbits in $Pw_0Q$ must be handled using the third method, which we turn to describe. Third, assume there is a composition $\beta$ of $kc$ and a character $\psi$ of $V_{\beta}$, which may depend on $h$, such that any morphism in $\mathcal{H}(h)$ factors through $J_{V_{\beta},\psi}(\rho)$. The vanishing argument in this case will be applicable to all but a discrete subset of $s$. We first describe the non-archimedean case. Assume there is a proper parabolic subgroup $R=M_R\ltimes U_R<G$, with $M_R$ containing $\GL_l$ as a direct factor, $l\geq1$, such that $J_{V_{\beta},\psi}(\rho)$ is a trivial representation of ${}^h(1,U_R)$. Therefore any morphism in $\mathcal{H}(h)$ also factors through $J_{U_R}(\pi_2^{\vee})$ which is an admissible finite length representation of $M_R$ (if $\pi_2$ is supercuspidal, we immediately deduce $\mathcal{H}(h)=0$). On each irreducible constituent of $J_{U_R}(\pi_2^{\vee})$, as a representation of $M_R$, $C_{\GL_l}<C_{M_R}$ acts by a character, and there are only finitely many such characters possible, depending only on $\pi_2$ and $U_R$ (thereby on $h$). Also assume $J_{V_{\beta},\psi}(\rho)$ admits a finite length filtration as a representation of ${}^h(1,\GL_l)$, and on each of the (not necessarily irreducible) constituents, ${}^h(1,C_{\GL_l})$ acts by a character. Again this character belongs to a finite set, now depending only on $\rho$ and on the character $\psi$ (which depends on $h$). If $0\ne\mathcal{T}\in\mathcal{H}(h)$, we can take constituents $\mathcal{V}$ of $J_{V_{\beta},\psi}(\rho)$ and $\mathcal{V}'$ of $J_{U_R}(\pi_2^{\vee})$, such that $\mathcal{T}$ is well defined and nonzero on $\mathcal{V}\otimes\pi_1^{\vee}\otimes\mathcal{V}'$. 
We then obtain a relation \begin{align}\label{eq:relation for T with s} \mu(a)|a|^{bs}\mathcal{T}(\xi)=\mathcal{T}(\left({}^{h^{-1}}(|\det|^{s-1/2}\rho\otimes\eta)\otimes(\psi_U\otimes \pi_1^{\vee}\otimes\pi_2^{\vee})\right)(1,a)\xi)=\theta_h((1,a))\mathcal{T}(\xi), \end{align} where $\mu$ is a quasi-character of $F^*$ which belongs to a finite set depending only on $(\pi_2,\eta,\rho,h)$, and $b$ is a constant which depends only on $h$, and we assume $b\ne0$. We deduce $\mu(a)|a|^{bs}=\theta_h((1,a))$ for all $a\in F^*$. This excludes at most a discrete subset of $s$, and if we apply this argument to only finitely many representatives $h$, the set of these values of $s$ can be taken to be our $\mathcal{B}$. Now assume the field is archimedean. Let $\mathfrak{u}_R$ denote the Lie algebra of $U_R$. Assume ${}^h(1,\mathfrak{u}_R)$ acts locally nilpotently on $J_{V_{\beta},\psi}(\rho)^*$. Then there is a countable increasing filtration of (closed subspaces) $\mathcal{W}_i$ of $J_{V_{\beta},\psi}(\rho)^*$ by the order of nilpotency. The orthogonal complements $\mathcal{V}_{i}=(\mathcal{W}_i)_{\bot}\subset J_{V_{\beta},\psi}(\rho)$ form a decreasing filtration of $J_{V_{\beta},\psi}(\rho)$, exhausting in the sense that $\bigcap_i \mathcal{V}_i=0$. For each $i$, $J_{V_{\beta},\psi}(\rho)/\mathcal{V}_{i}$ is a quotient of a generalized Jacquet module of $\rho$ with respect to ${}^h(1,\mathfrak{u}_R)$. Since any morphism in $\mathcal{H}_{\nu}(h)$ lies in some $\mathcal{W}_i$, it is annihilated by $\mathfrak{u}_R^i$. Thus it factors through a generalized Jacquet module $\pi_2^{\vee}/\overline{\mathfrak{u}_R^i\pi_2^{\vee}}$. The latter is an admissible finite length representation of $M_R$ by Lemma~\ref{lem:adm}, in particular admits a finite filtration such that $C_{\GL_l}$ acts by a character on each constituent. Assume in addition, that there exists a parabolic subgroup of $\GL_{kc}$, whose Levi part contains ${}^h(1,\GL_l)$ as a direct factor, such that the Lie algebra $\mathfrak{v}$ of its unipotent radical acts locally nilpotently on $J_{V_{\beta},\psi}(\rho)^*$. Repeating the argument in the last paragraph, any morphism in $\mathcal{H}_{\nu}(h)$ factors through a generalized Jacquet module $\rho/\overline{\mathfrak{v}^j\rho}$, and the latter --- by Lemma~\ref{lem:adm} --- has a finite filtration with ${}^h(1,C_{\GL_l})$ acting by a character on its constituents. Now if $0\ne\mathcal{T}\in\mathcal{H}_{\nu}(h)$, there are constituents $\mathcal{V}$ of $\rho/\overline{\mathfrak{v}^j\rho}$, $\mathcal{V}'$ of $\pi_2^{\vee}/\overline{\mathfrak{u}_R^i\pi_2^{\vee}}$ and $\mathcal{V}''$ of $\Lambda_{\nu}$ (considering $\Lambda_{\nu}$ as a representation of $(1,\GL_l)$), such that $\mathcal{T}$ is well defined and nonzero on $\mathcal{V}\otimes\pi_1^{\vee}\otimes \mathcal{V}'\otimes \mathcal{V}''$. Again we can apply \eqref{eq:relation for T with s} and obtain a relation $\mu(a)|a|^{bs}=\theta_h((1,a))$ (with $b\ne0$) for all $a\in F^*$. Here $\mu$ is uniquely determined by $\mathcal{V},\mathcal{V}',\mathcal{V}''$ and $h$. In the archimedean case this condition excludes one $s$. As we vary $\mathcal{V}$ and $\mathcal{V}'$ over the finite filtrations of $\rho/\overline{\mathfrak{v}^j\rho}$ and $\pi_2^{\vee}/\overline{\mathfrak{u}_R^i\pi_2^{\vee}}$, and also vary $j$ and $i$, the actions of ${}^h(1,C_{\GL_l})$ and $C_{\GL_l}$ are given by a discrete set of characters, by Lemma~\ref{lem:adm 2 discrete}. 
The action of $(1,C_{\GL_l})$ on $\mathcal{V}''$ is also given by a discrete set of characters, because the central characters of the set of irreducible constituents of $\{\Lambda_{\nu}\}_{\nu}$ as representations of $(1,\GL_l)$ form a lattice. Thus the total subset of $s$ we exclude is still discrete (for each $h$). Again, repeating this for finitely many $h$, we will obtain a discrete set $\mathcal{B}$.
\subsubsection{The space \eqref{eq:Hom L 0} is at most one-dimensional outside $\mathcal{B}$: archimedean case}
Re-denote the Bruhat cells appearing in the decomposition $P\backslash H/ Q$ by $Y_0,\ldots, Y_l$, numbered such that $Y_i\subset \overline{Y_j}$ implies $i>j$. In particular $Y_0$ is the open Bruhat cell (i.e., $Y_0=Pw_0Q$). We have
\begin{align*}
\Hom_{D}(V(s,\rho\otimes\eta),\psi_U^{-1}\otimes\pi_1\otimes\pi_2)&\cong ((V(s,\rho\otimes\eta)\otimes \psi_U\otimes\pi^{\vee}_1\otimes\pi^{\vee}_2)^*)^{\Delta D}\\&\cong \cD'(H,(|\det|^{s-1/2}\rho\otimes\eta)\otimes \psi_U\otimes\pi^{\vee}_1\otimes\pi^{\vee}_2)^{D\times P}.
\end{align*}
First we show that outside $\mathcal{B}$,
\begin{align}\label{1=Goal}
\cD'_{\overline{Y_1}}(H,(|\det|^{s-1/2}\rho\otimes\eta)\otimes \psi_U\otimes\pi^{\vee}_1\otimes\pi^{\vee}_2)^{D\times P}=0.
\end{align}
For any $i>0$, let $X_i=\bigcup_{j=0}^i Y_j$; it is an open subset of $H$ and $Y_i$ is a closed submanifold of $X_i$. It is enough to show that for any $i>0$, outside $\mathcal{B}$ we have
\begin{align}\label{=Step}
\cD'_{Y_i}(X_i,(|\det|^{s-1/2}\rho\otimes\eta)\otimes \psi_U\otimes\pi^{\vee}_1\otimes\pi^{\vee}_2)^{D\times P}=0.
\end{align}
Indeed we show by induction on $i$ that for any distribution $\mathcal{T}$ belonging to the left hand side of \eqref{=Step}, the restriction $\mathcal{T}|_{X_i}$ vanishes. The base case $i=0$ holds by definition, and the induction step is \eqref{=Step}. Since $X_l=H$ we get $\mathcal{T}=0$.
To prove \eqref{=Step}, we divide $Y_i$ into two cases depending on the first two vanishing arguments from \S~\ref{the vanishing 3 types} (which apply to all $s$). Assume \eqref{psi U nontrivial} holds and recall this condition only depends on the double coset (this is proved in Proposition~\ref{proposition:1st reduction of w} below). In this case we show, for all $s$,
\begin{align*}
\cD'_{Y_i}(X_i,(|\det|^{s-1/2}\rho\otimes\eta)\otimes \psi_U\otimes\pi^{\vee}_1\otimes\pi^{\vee}_2)^{U\times U_P}=0.
\end{align*}
Indeed by \cite[\S~2, p.~70]{KV}, the left hand side can be identified with the subspace of $(U\times U_P)$-invariant maps from $C_c^{\infty}(X_i,\psi_U)$ supported on $Y_i$ to $\left((|\det|^{s-1/2}\rho\otimes\eta)\otimes\pi^{\vee}_1\otimes\pi^{\vee}_2\right)^*$ (recall that over archimedean fields ${}^*$ denotes the continuous dual). Since $U\times U_P$ acts trivially on $(|\det|^{s-1/2}\rho\otimes\eta)\otimes\pi^{\vee}_1\otimes\pi^{\vee}_2$, such a nonzero map $\mathcal{L}$ would define a nonzero distribution in $\cD'_{Y_i}(X_i,\psi_U)^{U\times U_P}$ (e.g., fix some functional which is nonzero on the image of $\mathcal{L}$). But $\cD'_{Y_i}(X_i,\psi_U)^{U\times U_P}=0$ by \cite[Theorem~3.15, case (iii)]{KV} (in their notation $M_y^{(r)}=\Lambda_{\nu}$, which can be taken to be trivial as explained above, and $\mathcal{O}=Y_i$).
Now assume \eqref{psi U nontrivial} does not hold. We prove a more general result: for all $s$,
\begin{align}\label{=Goal}
\cD'_{Y_i}(X_i,(|\det|^{s-1/2}\rho\otimes\eta)\otimes \psi_U\otimes\pi^{\vee}_1\otimes\pi^{\vee}_2)^{U\times P}=0.
\end{align}
We deduce it from Theorem \ref{thm:AG} as follows.
Let $X=X_i$, $Y=Y_i$, $C=U\times P$; $E=(|\det|^{s-1/2}\rho\otimes\eta)\otimes \pi^{\vee}_1\otimes\pi^{\vee}_2$ with $U$ acting trivially and $P$ acting only on $|\det|^{s-1/2}\rho\otimes\eta$; and $\chi=\psi_U^{-1}\times 1$. Let $A=Q\times P$ and extend the action of $C$ on $E$ to an action of $A$ by letting $Q$ act trivially. Condition \eqref{=KV} follows from our proof of $\mathcal{H}(h)=0$ in this case (which uses the fact that $\rho$ is $(k,c)$). Note that $\mathcal{H}(h)$ can indeed be identified with the space of distributions on the orbit $PhD$ by, e.g., \cite[Theorem~5.2.4.5]{Warner1972I}. The set $\{\chi^a|_{C_z}:a\in A\}$ is finite: first, $\psi_U|_{U\cap {}^{h^{-1}}U_P}=1$ and because this condition is independent of the representative $h$ in the double coset $PhQ$, $\chi^a$ is trivial on $U\cap{}^{h^{-1}}U_P$; and second, $Q\cap {}^{h^{-1}}M_P$ is a parabolic subgroup of $M_P$ and $U\cap{}^{h^{-1}}M_P$ is its unipotent radical (see \eqref{eq:beta} below). From this and Lemma \ref{lem:Uhat} we deduce that the set $\{\chi^a|_{C_z}:a\in A\}$ is a finite union of orbits. All orbits are locally closed since they are orbits of an algebraic action of an algebraic group (note that the characters are unitary). Thus Theorem \ref{thm:AG} implies \eqref{=Goal}.
Altogether we have shown \eqref{1=Goal}. Therefore restriction of $D\times P$-equivariant distributions from $H$ to $Y_0$ is injective. Now $D\times P$ acts on $Y_0$ with finitely many orbits $Z_0,\ldots,Z_r$, enumerated such that $Z_i\subset \overline{Z_j}$ implies $i>j$; in particular, $Z_0=P\delta D$ is the open orbit. As above, it suffices to prove that for any $i>0$ and $s\notin \mathcal{B}$,
\begin{align}\label{2=Step}
\cD'_{Z_i}(\bigcup_{j=0}^i Z_j,(|\det|^{s-1/2}\rho\otimes\eta)\otimes \psi_U\otimes\pi^{\vee}_1\otimes\pi^{\vee}_2)^{D\times P}=0.
\end{align}
Let $A=C=D\times P$, $E=(|\det|^{s-1/2}\rho\otimes\eta)\otimes \pi^{\vee}_1\otimes\pi^{\vee}_2$ with $D$ acting only on $\pi^{\vee}_1\otimes\pi^{\vee}_2$ ($U$ acting trivially), $P$ acting only on $|\det|^{s-1/2}\rho\otimes\eta$; and $\chi=\psi_D\times 1$, where $\psi_D$ is the character of $D$ defined by $\psi_U^{-1}$ extended trivially to $(G,G)$. Our proof of $\mathcal{H}(h)=0$ in this case (using \eqref{eq:relation for T with s}) implies \eqref{=KV} for $s\notin\mathcal{B}$, and Theorem \ref{thm:AG} implies \eqref{2=Step}. It then follows that restriction of $D\times P$-equivariant distributions from $H$ to $Z_0$ is injective. Combining this with the fact that $\dim\mathcal{H}(\delta)\leq1$, we are done.
\subsubsection{The main result}
We turn to formulate our main result. Define
\begin{align*}
d(s,\rho,\eta,\pi_1,\pi_2)=\dim\Hom_{(G,G)}(J_{U,\psi_U^{-1}}(V(s,\rho\otimes\eta)),\pi_1\otimes\pi_2).
\end{align*}
\begin{theorem}\label{theorem:uniqueness}
Let $\pi_1$, $\pi_2$ and $\rho$ be as above.
\begin{enumerate}[leftmargin=*]
\item\label{part1}Outside a discrete subset of $s$, $d(s,\rho,\eta,\pi_1,\pi_2)\leq\dim\Hom_{G}(\chi_0\pi_1^{\vee},\pi_2^{\iota})$.
\item\label{part2}If $\pi_1$ and $\pi_2$ are irreducible, outside a discrete subset of $s$, $d(s,\rho,\eta,\pi_1,\pi_2)=0$ unless $\pi_1=\chi_0(\pi_2^{\iota})^{\vee}$, in which case $d(s,\rho,\eta,\pi_1,\pi_2)\leq1$.
\end{enumerate}
Furthermore, assume $\pi_2$ is supercuspidal and $\rho$ is not necessarily of finite length.
Then the assertions of \eqref{part1} and \eqref{part2} hold for all $s$, granted one of the following: \begin{enumerate}[leftmargin=*,label=(\alph*)] \item\label{part3}$H\ne\GL_{2kc}$ and $c>2$; or $H=\Sp_{4k}$ ($G=\Sp_{2}$); or $H\ne\GL_{2kc}$, $c=2$ and $\rho=\rho_c(\tau)$ for an irreducible supercuspidal representation $\tau$ of $\GL_k$ and $k>1$. \item\label{part4}$H=\GL_{2kc}$, $ck>1$, $\pi_1$ is also supercuspidal, and $\rho_i=\rho_c(\tau_i)$ for irreducible supercuspidal representations $\tau_1$ and $\tau_2$ of $\GL_k$. \end{enumerate} \end{theorem} \begin{remark} If $\eta\ne\chi_{\pi_1}^{-1}$, $d(s,\rho,\eta,\pi_1,\pi_2)=0$ outside a discrete subset of $s$. \end{remark} The proof of the theorem occupies \S~\ref{section not GL}--\S~\ref{section GL}. Note that the case of $\GL_1\times\GL_k$ over non-archimedean fields was proved in \cite[Lemma~35]{CFGK2} for $\pi_1=\pi_2^{\vee}$ and $\chi_0=1$ ($P\backslash H/D$ is finite in this case). Recall the representations $\pi$ and $\rho$ defined in \S~\ref{Doubling setup}: $\pi$ is an irreducible admissible representation of $G$, and $\rho$ is either an admissible finite length $(k,c)$ representation of $\GL_{kc}$ which admits a central character, or the tensor product $\rho_1\otimes\chi^{-1}\rho_2$ of two such representations $\rho_i$ of $\GL_{kc}$, with a quasi-character $\chi$ of $\GL_k$, in which case $\chi_0=\chi^{-k}$ (the central character of $\rho_1$ is the inverse of the central character of $\rho_2$). This is a minor generalization of \cite{CFK}, where $\rho$ was taken to be $\rho_c(\tau)$ (or $\rho_i=\rho_c(\tau_i)$, $i=1,2$). Combining Theorem~\ref{theorem:uniqueness} with the doubling integral, we obtain the following. \begin{corollary}\label{coro:meromorphic continuation for doubling 1} Let $F$ be non-archimedean and consider \eqref{local integral} for the representations $\pi$ and $\rho$ defined in \S~\ref{Doubling setup}. \begin{enumerate}[leftmargin=*] \item\label{mero coro:part1}If $f$ is a rational section in $q^{-s}$, $Z(s,\omega,f)$ admits meromorphic continuation to a rational function in $q^{-s}$. \item\label{mero coro:part2}$d(s,\rho,\chi_{\pi},\chi_0\pi^{\vee},\pi^{\iota})\geq1$ for all $s$. \end{enumerate} \end{corollary} \begin{proof} For part~\eqref{mero coro:part1}, by Theorem~\ref{theorem:uniqueness} with $\pi_1=\chi_0\pi^{\vee}$ and $\pi_2=\pi^{\iota}$ (then $\eta=\chi_{\pi_1}^{-1}=\chi_{\pi}$), the dimension of \eqref{eq:doubling Hom} is at most $1$ outside a discrete subset of $s$. Now the meromorphic continuation follows from Theorem~\ref{theorem:basic props of doubling integrals} and Bernstein's continuation principle (\cite{Banks}). For part~\eqref{mero coro:part2}, fix some $s_0$. Consider the family $\mathcal{I}$ of integrals $Z(s,\omega,f)$, where $\omega$ varies over the matrix coefficients of $\pi^{\vee}$, and $f$ varies over the sections of $V(s,\rho\otimes\chi_{\pi})$ that are polynomial in $q^{\mp s}$. The set of poles of $Z(s,\omega,f)\in\mathcal{I}$ belongs to a finite set of values of $q^{-s}$ which depends only on the representations $\pi$ and $\rho$, by \cite{Banks} (we do not claim the multiplicity of a pole is bounded independently of $\omega$ and $f$). Therefore, there is $r>0$ such that all integrals of $\mathcal{I}$ are holomorphic in the punctured disk of radius $r$ around $s_0$. Let $\gamma$ be the boundary of this disk. Moreover, by Theorem~\ref{theorem:basic props of doubling integrals} \eqref{props 3}, there is $Z(s,\omega,f)\in\mathcal{I}$ which is a nonzero constant at $s_0$. 
Thus Cauchy's integral formula gives a nonzero morphism $(\omega,f)\mapsto\tfrac1{2\pi i}\oint_{\gamma}\frac{Z(s,\omega,f)}{s-s_0}\,ds$ in \eqref{eq:doubling Hom}. \end{proof} \begin{corollary}\label{coro:meromorphic continuation for doubling 2} Consider \eqref{local integral} for the representations $\pi$ and $\rho$ defined in \S~\ref{Doubling setup}. Assume $\pi$ is irreducible supercuspidal and the additional assumptions \ref{part3} or \ref{part4} of Theorem~\ref{theorem:uniqueness} hold. \begin{enumerate}[leftmargin=*] \item\label{supercusp coro:part1}$d(s,\rho,\chi_{\pi},\chi_0\pi^{\vee},\pi^{\iota})=1$ for all $s$. \item\label{supercusp coro:part2}If $f$ is a polynomial section in $q^{\mp s}$, $Z(s,\omega,f)$ admits analytic continuation to a polynomial function in $q^{\mp s}$. \end{enumerate} \end{corollary} \begin{proof} The first assertion follows from Theorem~\ref{theorem:uniqueness} combined with Corollary~\ref{coro:meromorphic continuation for doubling 1} \eqref{mero coro:part2}. The second holds because when $d(s,\rho,\chi_{\pi},\chi_0\pi^{\vee},\pi^{\iota})=1$ for all $s$, by the corollary in \cite{Banks} the continuation is to a polynomial. \end{proof} \subsection{The case $H\ne\GL_{2kc}$}\label{section not GL} As explained in \S~\ref{outline}, we will consider each $\mathcal{H}(h)$ separately. We prove that all but finitely many spaces $\mathcal{H}(h)$ vanish using the first two methods, show the vanishing of the remaining $\mathcal{H}(h)$ with $h\not\sim\delta$ outside a discrete subset $\mathcal{B}$, then prove $\dim\mathcal{H}(\delta)\leq1$. Recall $n=\lfloor c/2\rfloor$, and since we prove the result for both odd and even $c$ simultaneously, we also use $\lceil c/2\rceil$, which is $n$ when $c$ is even and $n+1$ otherwise. We start with describing a choice of representatives. Since $P\backslash H/ N_H$ can be identified with $W(M_P)\backslash W(H)$ and $Q=M_Q\ltimes U$, we can write $P\backslash H/ D=\coprod_hPhD$ with $h=wu$, where $w$ is a representative from $W(M_P)\backslash W(H)$ and $u\in M_Q\cap N_H$. Since $W(M_P)\backslash W(H)$ is embedded in $\mathbb{Z}_2^{kc}$, we can identify $w$ with a $kc$-tuple of $0$ and $1$, where the $i$-th coordinate corresponds to the permutation matrix \begin{align*} \left(\begin{smallmatrix} I_{kc-i} \\&0&&1 \\&&I_{2(i-1)} \\&\epsilon_0&&0 \\&&&& I_{kc-i} \end{smallmatrix}\right). \end{align*} If $H\ne\Sp_{2kc}$, only even products of such matrices can appear in $w$. In this case denote for an integer $a\geq0$, $\jmath_a=(1,0^{kc-1})$ if $a$ is odd otherwise $\jmath_a=(0^{kc})$. Note that $\jmath_a$ normalizes $N_H$. If $H=\Sp_{2kc}$, we set $\jmath_a=(0^{kc})$ for uniformity. We use $*$ in an expression for $w$ to signify an undetermined coordinate (either $0$ or $1$). For the case $k=1$, we can parameterize $P\backslash H/D=P\backslash H/(G,G)$ using the elements \begin{align*} \jmath_l(0^{c-l},1^l)({}^{\jmath_l}u_l), \qquad 0\leq l\leq n, \end{align*} where \begin{align}\label{eq:weyl rep k=1} (0^{c-l},1^l)=\left(\begin{smallmatrix} &&&&&J_{{l}}\\&I_{2(c-l)}\\\epsilon_0J_{l} \end{smallmatrix}\right) \end{align} and \begin{align}\label{eq:unipotent rep k=1} u_l= \left(\begin{smallmatrix} I_l&&I_l\\&I_{n-l}\\&&I_l\\&&&I_{2c-2(n+l)}\\&&&&I_l&&-I_l\\&&&&&I_{n-l}\\&&&&&&I_l \end{smallmatrix}\right). \end{align} Here if $l<\lceil c/2\rceil$, ${}^{\jmath_l}u_l=u_l$ (always the case for an odd $c$). 
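To illustrate (a hedged instantiation of \eqref{eq:weyl rep k=1} and \eqref{eq:unipotent rep k=1} only; no data beyond the formulas above are chosen), consider $H=\Sp_{4}$, i.e., $k=1$, $c=2$ and $n=1$, where $\jmath_l$ is trivial and the parameterization consists of $n+1=2$ double cosets: for $l=0$ the representative is the identity, while for $l=1$,
\begin{align*}
(0^{1},1^{1})=\left(\begin{smallmatrix} &&&1\\ &1&&\\ &&1&\\ \epsilon_0&&& \end{smallmatrix}\right),\qquad
u_1=\left(\begin{smallmatrix} 1&1&&\\ &1&&\\ &&1&-1\\ &&&1 \end{smallmatrix}\right),
\end{align*}
and the $l=1$ representative is the product of these two matrices.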
The double cosets for $k=1$ were described in \cite[\S~2]{PSR}; see also \cite[\S~4]{GRS7} for $\Sp_{2c}$; $\SO_{2c}$ and $\GSpin_{2c}$ with even $c$ are similar, and for odd $c$ also refer to the description of the embedding of $\SO_c\times \SO_c$ in $\SO_{2c}$ given in \cite[Example~15]{CFK} (see Example~\ref{example:odd c}). We start by generalizing this description, to some extent, to all $k\geq1$. For $x\in M_Q$, denote its projection into the direct product of $k-1$ copies of $\GL_c$ by $\ell(x)$, then $x=\ell(x)\ell_0(x)$, where $\ell_0(x)\in H_0$. For $k=1$, $x=\ell_0(x)$ and $\ell(x)$ is trivial. If $y\in M_Q$, \begin{align*} \ell({}^xy)={}^{\ell(x)}\ell(y),\qquad \ell_0({}^xy)={}^{\ell_0(x)}\ell_0(y). \end{align*} For any $(g_1,g_2)\in (G,G)$, \begin{align}\label{eq:conj x by g1 g2} {}^{(g_1,g_2)}x=\ell({}^{(g_1,g_2)}x)\ell_0({}^{(g_1,g_2)}x), \end{align} but because $(1,g_2)\in H_0$, \begin{align}\label{eq:conj x by g2} {}^{(1,g_2)}x=\ell(x)({}^{(1,g_2)}\ell_0(x)). \end{align} \begin{proposition}\label{proposition:structure of w u} Let $h=wu$, where $w$ is a representative from $W(M_P)\backslash W(H)$ and $u\in M_Q\cap N_H$. Then $h\sim\hat{w}\hat{u}$ with the following properties. There is $0\leq l\leq n$ such that \begin{align}\label{eq:start condition of w} &\hat{w}=\jmath_a(0^{c-l},1^{l},w_2,\ldots,w_k),\qquad\forall i, w_i\in\{0,1\}^{c}, \end{align} where $a$ is the sum of coordinates $1$ in $(0^{c-l},1^{l},w_2,\ldots,w_k)$. Additionally $\hat{u}\in M_Q$, there is $\sigma=(g,1)\in(G,1)$ such that $g$ is a representative of an element in $W(G)$ and ${}^{\sigma}\hat{u}\in M_Q\cap N_H$, ${}^{\jmath_a}\ell_0(\hat{u})$ takes the form \begin{align}\label{eq:second form u final} \left(\begin{smallmatrix} I_l&&A_l&&&&\\&I_{n-l}\\&&I_l\\&&&I_{2c-2(n+l)}\\&&&&I_l&&A_l'\\&&&&&I_{n-l}\\&&&&&&I_l \end{smallmatrix}\right), \end{align} and there are no zero rows in $A_l$. \end{proposition} \begin{proof} Let $E=M_{E}\ltimes U_{E}$ denote the standard parabolic subgroup of $H_0$ such that $M_{E}=\GL_n\times\mathcal{G}_{2(c-n)}$, and identify $N_{\GL_n}$ with its natural image in $M_{E}$. According to the description of $(G,G)$ in \S~\ref{Doubling setup}, $N_{\GL_n}\ltimes C_{U_E}<\ell_0((G,1))$. Let $g\in N_G$ be with $\ell_0((g,1))\in N_{\GL_n}\ltimes C_{U_E}$, such that the projection of $\ell_0(u(g,1))$ into $N_{H_0}$ is trivial on $N_{\GL_n}$. Put $u_1=u(g,1)\in M_Q\cap N_H$, $wu\sim wu_1$. We also have some control over the projection of the unipotent part of the representative into $C_{U_E}$ (see below). If $c$ is even, then also $N_{\mathcal{G}_{2(c-n)}}<(1,G)$, and for $g\in N_{\mathcal{G}_{2(c-n)}}$ such that the projection of $\ell_0(u_1(1,g))$ into $N_{\mathcal{G}_{2(c-n)}}$ is trivial, $wu_1\sim wu_2$ with $u_2=u_1(1,g)$. The projection of $\ell_0(u_2)$ into $N_{\GL_n}\ltimes C_{U_E}$ coincides with that of $\ell_0(u_1)$. If $c$ is odd, $N_{\mathcal{G}_{2(c-n)}}/(1,N_G)\cong\Mat_{n\times1}$. Choosing $g\in N_G$ and taking $u_2=u_1(1,g)$, we can assume the projection of $\ell_0(u_2)$ into $N_{\mathcal{G}_{2(c-n)}}$ takes the form \begin{align}\label{eq:remaining inner column odd c} \left(\begin{smallmatrix}I_{n}&y_1&y_2&y''\\ &1&&y_1'\\&&1&y_2'\\&&&I_n \end{smallmatrix}\right)\in N_{\mathcal{G}_{2(n+1)}}, \end{align} where $y_i'$, $y''$ uniquely depend on $y_1,y_2$ and $H$, and we can choose either $y_1=0$ or $y_2=0$ (see Example~\ref{example:odd c}). 
Observe that $w$ conjugates precisely one of the columns $y_1$ or $y_2$ in \eqref{eq:remaining inner column odd c} into $P$, so that if we choose the other column to be zero (i.e., define $g\in N_G$ accordingly), then now we already have both zero. In other words we can write $u_3=z^{-1}u_2$ for $z$ defined by $y_1$ or $y_2$ such that ${}^{w}z\in P$, then $wu_1\sim wu_2=wzu_3\sim wu_3$. If $c$ is even, put $u_3=u_2$, so that the projection of $\ell_0(u_3)$ into $N_{\mathcal{G}_{2(c-n)}}$ is trivial now for both odd and even $c$. One can take a representative $g$ of an element in $W(G)$ such that \begin{align}\label{eq:w_1} w_1=w(1,g)=j_a(0^{\lceil c/2\rceil},*^{kc-\lceil c/2\rceil}). \end{align} Then $wu_3\sim wu_3(1,g)=w_1({}^{(1,g)^{-1}}u_3)$. Put $u_4={}^{(1,g)^{-1}}u_3$, it is of the same form as $u_3$: this conjugation merely permutes the columns in the projection of $\ell_0(u_3)$ into $C_{U_{E}}\backslash U_{E}$. Now we can write \begin{align*} \ell_0(u_4)=\left(\begin{smallmatrix}I_n&v&v''\\&I_{2(c-n)}&v'\\&&I_n\end{smallmatrix}\right)\in N_{H_0}, \end{align*} where $v\in\Mat_{n\times 2(c-n)}$ is arbitrary and $v',v''$ are uniquely defined given $v$ and $H$. Put $v=(v_1,v_2)$, $v_1=(z_1,z_2)$ and $v_2=(z_3,z_4)$, where $z_1,z_4\in\Mat_{n\times \lceil c/2\rceil-1}$ and $z_2,z_3\in\Mat_{n\times1}$. The element $w_1$ does not permute any column of $z_1$ or $z_4$, and conjugates the block $z_4$ into $M_P$. Hence one can write $u_5=z^{-1}u_4$, where $z\in N_{H_0}$ is defined by $z_4$, and the corresponding block $z_4$ of $\ell_0(u_5)$ is $0$, then $w_1u_4=w_1zu_5\sim w_1u_5$. Next we see that $w_1$ also conjugates precisely one column $z_j$ out of $\{z_2,z_3\}$ into $P$. If $a$ is even, $j=3$ and we can assume $z_3=0$. Otherwise $j=2$, we can assume $z_2=0$. In both cases we multiply $u_5$ on the left by a suitable matrix $z^{-1}$, and $w_1u_5\sim w_1u_6$ with $u_6=z^{-1}u_5$. We deduce \begin{align*} {}^{\jmath_a}\ell_0(u_6)=\left(\begin{smallmatrix}I_n&v&0&v''\\&I_{c-n}&&0\\&&I_{c-n}&v'\\&&&I_n\end{smallmatrix}\right)\in N_{H_0}. \end{align*} If $c$ is odd, $v$ contains $n+1$ columns. We show that the rightmost column of ${}^{\jmath_a}\ell_0(u_6)$ can be made $0$. Indeed we can take $g\in N_{G}$ such that \begin{align*} \ell_0((g,1))=\left(\begin{smallmatrix}I_n&&\epsilon_2x&-\epsilon_1x&&y\\&I_{n}\\&&1&&&\epsilon_1x'\\&&&1&&-\epsilon_2x'\\&&&&I_n\\&&&&&I_n\end{smallmatrix}\right) \in N_{H_0} \end{align*} (see Example~\ref{example:odd c}). The element $w_1$ permutes precisely one of the middle $2$ columns into $P$, either the column with $\epsilon_2x$ or with $-\epsilon_1x$. Then if $\ell_0((g,1))$ is chosen such that the other column is $0$ in $u_6(g,1)$, and $z_j\in N_{H_0}$ is defined by the column of $\ell_0((g,1))$ which is permuted by $w_1$ into $P$ (thus ${}^{w_1}z_j\in P$), \begin{align*} w_1u_6\sim w_1u_6(g,1)=w_1z_jz_j^{-1}u_6(g,1)\sim w_1z_j^{-1}u_6(g,1). \end{align*} Put $u_7=z_j^{-1}u_6(g,1)$, then \begin{align*} {}^{\jmath_a}\ell_0(u_{7})=\left(\begin{smallmatrix}I_n&v&&&v''\\&I_{n}&&&\\&&I_{2(c-2n)}\\&&&I_{n}&v'\\&&&&I_n\end{smallmatrix}\right)\in N_{H_0}. \end{align*} For uniformity, denote $u_7=u_6$ when $c$ is even. By the definition of $H$, the block $v''$ in ${}^{\jmath_a}\ell_0(u_{7})$ above can be taken independently of $v$. 
Hence we can multiply $u_7$ on the right by $(g,1)$ with $g\in N_G$, where $\ell_0((g,1))\in C_{U_E}$ is defined using $v''$, and obtain $u_8=u_7(g,1)$ such that ${}^{\jmath_a}\ell_0(u_8)$ is of the form \begin{align}\label{eq:starting form u''} \left(\begin{smallmatrix}I_n&v&&&\\&I_{n}&&&\\&&I_{2(c-2n)}\\&&&I_{n}&v'\\&&&&I_n\end{smallmatrix}\right)\in H_0. \end{align} Then $w_1u_7\sim w_1u_8$. If $c$ is odd, $\ell_0(u_8)$ commutes with $\jmath_a$ (since then $2(c-2n)=2$). At this point we still have $u_8\in M_Q\cap N_H$, since the only changes from $u$ to $u_8$ involve multiplying by elements of $M_Q\cap N_H$ (on the right or left). For any matrix $u_0$ of the form \eqref{eq:starting form u''}, denote the block of $v$ by $v(u_0)$. For any representative $w'=(*^{kc})$, let $\mathcal{R}(w')$ denote the set of $1\leq i\leq n$ such that $w'$ permutes the $i$-th row of the block $v$ of a general matrix \eqref{eq:starting form u''}. Note that $\mathcal{R}(w')$ only depends on the coordinates $\lceil c/2\rceil +1,\ldots,c$ of $w'$ (enumerating the coordinates of $w'$ from left to right). For each row $i$ of $v({}^{\jmath_a}\ell_0(u_8))$, one can always write $u_9=z_i^{-1}u_8$, where the $i$-th row of $v({}^{\jmath_a}\ell_0(u_9))$ is zero, ${}^{\jmath_a}z_i$ is of the form \eqref{eq:starting form u''} and any row $j\ne i$ in $v({}^{\jmath_a}z_i)$ is zero. Moreover, $z_i\in P$, and if $i\notin\mathcal{R}(w_1)$, $w_1$ commutes with $z_i$. Hence $w_1u_8=z_iw_1u_9\sim w_1u_9$. Since we can apply this separately to each row, we can assume that for each $1\leq i\leq n$, either the $i$-th row of $v({}^{\jmath_a}\ell_0(u_9))$ is zero or $i\in\mathcal{R}(w_1)$. The difference between $u_8$ and $u_9$, is that the nonzero rows of $v({}^{\jmath_a}\ell_0(u_9))$ occur only at rows $i$ which $w_1$ permutes. Consider $i$ such that both the $i$-th row of $v({}^{\jmath_a}\ell_0(u_9))$ is zero and $i\in\mathcal{R}(w_1)$. In this case take $\sigma_1=(g,1)$ where $g\in G$ is a representative of an element of $W(G)$ of minimal length, such that $\mathcal{R}(\sigma_1)=\{i\}$. More specifically take $g$ with $\ell_0((g,1))=\jmath_1(0^{c-i},1,0^{i-1})$ (if $c$ is odd, the right hand side is multiplied by $\diag(I_{c-1},2\epsilon_1^2,2\epsilon_2^2,I_{c-1})$, see Example~\ref{example:odd c}). Since $\sigma_1=\ell(\sigma_1)\ell_0(\sigma_1)$ and $\ell(\sigma_1)\in P$, \begin{align*} w_1u_9\sim w_1u_9\sigma_1= w_1\sigma_1({}^{\sigma_1^{-1}}u_9)= \ell(\sigma_1)({}^{\ell(\sigma_1)^{-1}}w_1)\ell_0(\sigma_1)({}^{\sigma_1^{-1}}u_9)\sim ({}^{\ell(\sigma_1)^{-1}}w_1)\ell_0(\sigma_1)({}^{\sigma_1^{-1}}u_9). \end{align*} Put $w_2=({}^{\ell(\sigma_1)^{-1}}w_1)\ell_0(\sigma_1)$, it is again a representative from $W(M_P)\backslash W(H)$ and \begin{align*} \mathcal{R}(w_2)= \mathcal{R}(w_1\ell_0(\sigma_1))=\mathcal{R}(w_1)-\{i\}. \end{align*} Let $u_{10}={}^{\sigma_1^{-1}}u_9$. We have $\ell_0(u_{10})=\ell_0(u_9)$ if $H=\Sp_{2kc}$ or $c$ is odd, otherwise $\ell_0(u_{10})$ differs from $\ell_0(u_9)$ only in the middle $2$ columns: these columns are exchanged because of $\jmath_1$. The element $\ell(u_{10})$ need not be in $N_H$ anymore, only in $M_Q$, but ${}^{\sigma_1}u_{10}\in M_Q\cap N_H$ and by \eqref{eq:conj x by g1 g2}, also ${}^{\ell(\sigma_1)}\ell(u_{10})\in (H_0\backslash M_Q)\cap N_H$. Since we can apply this procedure separately to each row $i$, we can assume the $i$-th row of $v({}^{\jmath_a}\ell_0(u_{10}))$ is nonzero if and only if $i\in\mathcal{R}(w_2)$. However, we can no longer assume $\ell(u_{10})\in N_H$. 
Regard $\GL_n$ as the direct factor of the standard Levi subgroup $\GL_n\times\mathcal{G}_{c-2n}$ of $G$. For any representative $g$ of an element of $W(\GL_n)$, set $\sigma_2=(g,1)$. Given arbitrary sets $\mathcal{R}(w')$ and $\mathcal{R}'\subset\{1,\ldots,n\}$ of the same size, one can find $\sigma_2$ for which $\mathcal{R}({}^{\sigma_2^{-1}}w')=\mathcal{R}'$. Because $i\in \mathcal{R}(w_2)$ if and only if the $i$-th row of $v({}^{\jmath_a}\ell_0(u_{10}))$ is nonzero, we can choose $\sigma_2$ such that $\mathcal{R}({}^{\sigma_2^{-1}}w_2)=\{1,\ldots,l\}$, where $0\leq l\leq n$ is the size of $\mathcal{R}(w_2)$, and simultaneously \begin{align}\label{eq:starting form u'''} \ell_0({}^{\jmath_a}({}^{\sigma_2^{-1}}u_{10}))= \left(\begin{smallmatrix} I_l&&v&&&&\\&I_{n-l}\\&&I_n\\&&&I_{2(c-2n)}\\&&&&I_n&&v'\\&&&&&I_{n-l}\\&&&&&&I_l \end{smallmatrix}\right) \end{align} where none of the rows of $v$ are zero. Set $w_3={}^{\sigma_2^{-1}}w_2$ and $u_{11}={}^{\sigma_2^{-1}}u_{10}$. Since $\sigma_2\in (G,1)\cap P$, \begin{align*} w_2u_{10}\sim w_2u_{10}\sigma_2= \sigma_2({}^{\sigma_2^{-1}}w_2)({}^{\sigma_2^{-1}}u_{10})\sim w_3u_{11}. \end{align*} Now $\mathcal{R}(w_3)=\{1,\ldots,l\}$ and note that $w_3$ is still of the form \eqref{eq:w_1} (with possibly a different $a$, but of the same parity), because when we pass to $w_2$ and then to $w_3$, we do not change the coordinates $2,\ldots,\lceil c/2\rceil$ of $w_1$ ($c-i\geq \lceil c/2\rceil$ for $i\leq n$). Moreover, ${}^{w_3}({}^{\jmath_a}(1,g))\in M_P$ for any $g\in\GL_n$ (where $\GL_n<\GL_n\times\mathcal{G}_{c-2n}<G$); if $\jmath_a$ is trivial, $w_3$ simply commutes with $(1,\GL_n)$. Also \begin{align}\label{eq:important prop of u9} {}^{\sigma_1\sigma_2}u_{11}={}^{\sigma_1}u_{10}\in M_Q\cap N_H. \end{align} The rank of $v$ in \eqref{eq:starting form u'''} is at most $l$, whence we can further use ${}^{\jmath_a}(1,g_0)$ with $g_0\in\GL_n$ to reduce $v$ to an $l\times l$ block (e.g., in a column reduced echelon form). Denote $\hat{w}=w_3$ and $\hat{u}={}^{{}^{\jmath_a}(1,g_0)^{-1}}u_{11}$. Now $\mathcal{R}(\hat{w})=\{1,\ldots,l\}$ and $\hat{w}$ takes the form \eqref{eq:start condition of w}, namely $\jmath_a(0^{c-l},1^{l},*^{(k-1)c})$. Since ${}^{w_3}{}^{\jmath_a}(1,g)\in M_P$ for any $g\in\GL_n$, $w_3u_{11}\sim\hat{w}\hat{u}$. Regarding $\hat{u}$, ${}^{\jmath_a}\ell_0(\hat{u})$ takes the form \eqref{eq:second form u final} with $A_l=v$. Denote $\sigma=\sigma_1\sigma_2$ with the notation above. We claim ${}^{\sigma}\hat{u}\in N_H$ (clearly ${}^{\sigma}\hat{u}\in M_Q$). Since the conjugation by ${}^{\jmath_a}(1,g_0)^{-1}$ only affects the columns of $v$ and rows of $v'$ in \eqref{eq:starting form u'''}, the result follows from \eqref{eq:important prop of u9}. \end{proof} While it is relatively straightforward to obtain condition \eqref{psi U nontrivial} when $h=w$, the representatives $wu$ are more difficult to describe, because of the form of $\ell(u)$. The following lemma implies that (with our current structure of $u$) it is sufficient to obtain \eqref{psi U nontrivial} for $w$. \begin{lemma}\label{lemma:easier condition on psiU} Let $h=wu$, where $w$ and $u$ are given by Proposition~\ref{proposition:structure of w u}. Assume \begin{align}\label{psi U nontrivial using only w} \psi_U|_{U\cap {}^{w^{-1}}U_P}\ne1. \end{align} Then \eqref{psi U nontrivial} holds as well, namely $\psi_U|_{U\cap {}^{h^{-1}}U_P}\ne1$. 
\end{lemma}
\begin{proof}
By \eqref{psi U nontrivial using only w} there exists a root in $U$ such that for the subgroup $Y<U$ generated by this root, ${}^wY<U_P$ and $\psi_U|_Y\ne1$. Since $u\in M_Q$, it normalizes $U$, whence ${}^{u^{-1}}Y<U$, and also ${}^h({}^{u^{-1}}Y)={}^wY<U_P$. It remains to show $\psi_U|_{{}^{u^{-1}}Y}\ne1$, since then \eqref{psi U nontrivial} holds.
We can identify the quotient of $U$ by its commutator subgroup with the direct product of $k-2$ copies of $\Mat_c$, and one copy of $\Mat_{c\times 2c}$. The root defining $Y$ corresponds to a coordinate $(i,j)$ in one of these copies. Looking at the definition of $\psi_U$, we can be more specific. Either $(i,j)$ belongs to one of the $k-2$ blocks of size $c\times c$, on which $\psi_U$ is given by $\psi\circ\tr$, or $(i,j)$ belongs to one of two $n\times n$ blocks inside $\Mat_{c\times 2c}$ ($u^{1,1}$ or $u^{2,2}$, see \eqref{matrix:middle 4c block of u}), and again $\psi_U$ restricts to $\psi\circ\tr$ on these blocks. Denote the block by $B$. In both cases, since $\psi_U|_Y\ne1$, the coordinate $(i,j)$ appears as a diagonal coordinate of $B$. When $c$ is odd there is a third possibility, that $(i,j)$ appears in the block $B\in\Mat_{1\times2}$ on which $\psi_U$ is given by $\psi(\epsilon_1B_{1,1}-\epsilon_2B_{1,2})$, and $(i,j)$ is either the coordinate of $B_{1,1}$ or $B_{1,2}$. In this case also note that $w$ of the prescribed structure cannot permute both $B_{1,1}$ and $B_{1,2}$ into $U_P$.
Write ${}^{\sigma}u=u_0^{-1}\in M_Q\cap N_H$, where $\sigma\in (G,1)$ is given by Proposition~\ref{proposition:structure of w u}. Then ${}^{\sigma}Y$ is again a root subgroup, and since conjugation by $\sigma$ permutes the coordinates of $U$ and stabilizes $\psi_U$, ${}^{\sigma}Y$ is still defined by a coordinate $(i,j)$ which belongs to one of the blocks $B'$ described above. In fact if $B\in\Mat_{c}$, we must have $B'=B$; if $B\in\Mat_{n}$, there are two options for $B'$, one of which is $B$; and for $B\in\Mat_{1\times 2}$, $B'=B$. Of course $\psi_U$ is nontrivial on ${}^{\sigma}Y$. The conjugation of ${}^{\sigma}Y$ by $u_0$ must be performed with more care, because $u_0$ normalizes $U$ but may not stabilize $\psi_U$.
First consider the case where $B=B'\in\Mat_{c}$. Then ${}^{\sigma}Y$ is the root subgroup defined by the $(d,d)$-th diagonal coordinate in $B$, for some $1\leq d\leq c$. For a fixed element $y\in Y$, assume the $(d,d)$-th coordinate of ${}^{\sigma}y$ is $x\ne0$. It is the only nonzero coordinate in the projection of ${}^{\sigma}y$ to $B$. Since $u_0\in M_Q$, the nontrivial coordinates of ${}^{u_0}({}^{\sigma}y)$ are still contained in $B$. This means that the only nonzero coordinates of ${}^{u_0}({}^{\sigma}y)$ on which $\psi_U$ can possibly be nontrivial are coordinates in the block $B$. Because $u_0\in M_Q\cap N_H$, the $(d,d)$-th coordinate of ${}^{u_0}({}^{\sigma}y)$ is still $x$, and all other nontrivial coordinates belong to the set of coordinates in $B$ of the form $\{(i',j')\ne(d,d):i'\leq d,j'\geq d\}$, i.e., are above or to the right of the $(d,d)$-th coordinate. Therefore $\psi_U({}^{u_0}({}^{\sigma}y))=\psi(x)$, hence $\psi_U$ is nontrivial on ${}^{u_0\sigma}Y$.
Next assume $B'\in\Mat_{n}$ and proceed with similar notation. Now ${}^{u_0}({}^{\sigma}y)$ can contain nontrivial coordinates outside $B'$. Assume $B'$ is the top left $n\times n$ block in $\Mat_{c\times2c}$ (i.e., $u^{1,1}$).
Then ${}^{u_0}({}^{\sigma}y)$ contains $x$ in the $(d,d)$-th coordinate, $1\leq d\leq n$, and arbitrary elements in the coordinates $(i',j')\ne(d,d)$, where $i'\leq d$ only varies over the rows of $B'$, but $j'$ varies over all columns $j'\geq d$ of $B'$ and also the columns to the right of $B'$, up to the rightmost column of $\Mat_{c\times2c}$ (this is the $(k+1)c$-th column as a matrix in $U$). Otherwise $B'$ is the bottom right block (which is $u^{2,2}$). Then ${}^{u_0}({}^{\sigma}y)$ contains $x$ in the $(d,d)$-th coordinate and may contain nontrivial coordinates for $(i',j')\ne(d,d)$, where $i'$ varies over the rows $i'\leq d$ of $B'$ and the rows above $B'$, up to the first row of $\Mat_{c\times2c}$ (row $(k-2)c+1$ for matrices in $U$), and $j'\geq d$ only varies over columns of $B'$. In both cases $\psi_U$ is trivial on all of the possibly nonzero coordinates $(i',j')$, and the $(d,d)$-th coordinate is $x$, thus $\psi_U|_{{}^{u_0\sigma}Y}\ne1$. If $c$ is odd we also consider $B=B'\in\Mat_{1\times2}$. Observe that now $\psi_U$ is trivial on all coordinates above or to the right of $B_{1,2}$, and also on all coordinates above or to the right of $B_{1,1}$ except $B_{1,2}$. Hence if the nonzero coordinate $x$ of ${}^{\sigma}y$ is in $B_{1,2}$, $\psi_U|_{{}^{u_0\sigma}Y}\ne1$, but also if $x$ is in $B_{1,1}$ we have $\psi_U|_{{}^{u_0\sigma}Y}\ne1$, because multiplying $u_0({}^{\sigma}y)$ on the right by $u_0^{-1}$ leaves $B_{1,2}$ zero (when $c$ is odd, the $(kc,kc+1)$-th coordinate of any element of $N_H$ is zero). Now because $\sigma\in (G,1)$, it immediately follows that $\psi_U$ is nontrivial on ${}^{\sigma^{-1}u_0\sigma}Y={}^{u^{-1}}Y$, completing the proof of the lemma. \end{proof} Re-denote $h=wu$ where $w$ and $u$ satisfy the properties of Proposition~\ref{proposition:structure of w u}. In particular $w$ defines the integer $0\leq l\leq n$. \begin{proposition}\label{proposition:1st reduction of w} We have $\mathcal{H}(h)=0$ unless \begin{align}\label{eq:w_i first reduction} &w_i=(1^{\lceil c/2\rceil},*^{n-l},1^{{l}}),\qquad\forall 1<i\leq k. \end{align} \end{proposition} \begin{proof} For $k=1$ there is nothing to prove, assume $k>1$. Write $w_2=(w_2',w_2'')$ with $w_2'\in\{0,1\}^{\lceil c/2\rceil}$, $w_2''\in\{0,1\}^{n}$. If $w_2'$ is not of the form $(1^{{l}},*^{\lceil c/2\rceil-l})$, then $l>0$ (for $l=0$, $w_2'$ is automatically of the form $(1^{{l}},*^{\lceil c/2\rceil-l})=(*^{\lceil c/2\rceil})$). Let $Y<U$ be the subgroup of elements with the middle $2(c+l)\times2(c+l)$ block of the form \begin{align*} \left(\begin{smallmatrix} I_l&&&&&y&&yA_l'\\ &I_l&&&&&&&-A_ly'\\ &&I_{n-l}&&\\ &&&I_l&&&&&y'\\ &&&&I_{2c-2(n+l)}\\ &&&&&I_l\\ &&&&&&I_{n-l}\\ &&&&&&&I_l\\ &&&&&&&&I_l\end{smallmatrix}\right). \end{align*} Recall $\psi_U$ restricts to $\psi\circ\tr$ on the block $u^{2,2}$ (see \eqref{matrix:middle 4c block of u}). The block $yA_l'$ above occupies the bottom right $l\times l$ block of $u^{2,2}$. Since there are no zero rows in $A_l$, there are no zero columns in $A_l'$. Hence for each $1\leq i\leq l$, the form $y\mapsto (yA_l')_{i,i}$ on $\Mat_l$ is not identically $0$. Then if one of the first $l$ coordinates of $w_2'$ is $0$, we can take a subgroup $Y_i<Y$ on which $\psi_U|_{Y_i}\ne1$ and ${}^hY_i<U_P$, hence $\mathcal{H}(h)=0$ by \eqref{psi U nontrivial}. Thus we can write $w_2'=(1^{l},*^{\lceil c/2\rceil-l})$ (whether $l>0$ or $l=0$). 
If $w_2'\ne (1^{n},*^{\lceil c/2\rceil-n})$, one of the rows from the top left $(n-l)\times (n-l)$ block of $u^{2,2}$ is conjugated by $w$ into $U_P$. Hence we can take a subgroup $Y<U$ such that $\psi_U|_Y\ne1$ and ${}^wY<U_P$, then $\mathcal{H}(h)=0$ by \eqref{psi U nontrivial using only w}. If $c$ is odd, $\psi_U$ restricts to a nontrivial character on the middle two coordinates $(u^3,u^4)$ of row $n+1$ in \eqref{matrix:middle 4c block of u}, and the columns of $u^3$ and $u^4$ are either swapped or remain unchanged by $w$. Then if $w_2'\ne (1^{\lceil c/2\rceil})$, one of these coordinates is conjugated by $w$ into $U_P$, and if $Y<U$ is defined by this coordinate, we have $\psi_U|_Y\ne1$ and ${}^wY<U_P$. Thus $\mathcal{H}(h)=0$ by \eqref{psi U nontrivial using only w}. (Because $2c-2(n+l)\geq2$, ${}^uY=Y$ and we can also apply \eqref{psi U nontrivial} directly.) Thus $w_2'=(1^{\lceil c/2\rceil})$ whether $c$ is even or odd.
We proceed for all $c$. Recall $\psi_U$ restricts to $\psi\circ\tr$ on the top left $l\times l$ block of $u^{1,1}$. Since the first $c$ coordinates of $w$ are $\jmath_a(0^{c-l},1^l)$, $w$ permutes the columns of this block into columns in $U_P$, hence if $w_2''\ne(*^{n-{l}},1^{{l}})$, we can again find $Y<U$ such that $\psi_U|_Y\ne1$ and ${}^wY<U_P$, so that $\mathcal{H}(h)=0$ by \eqref{psi U nontrivial using only w}. Altogether $w_2=(1^{\lceil c/2\rceil},*^{n-l},1^{l})$.
If $k=2$ we are done; assume $k>2$. We show $w_3=(1^{\lceil c/2\rceil},*^{n-l},1^{l})$. Recall $V_{(c^{k-1})}<U$. Because $w_2=(1^{\lceil c/2\rceil},*^{n-l},1^{l})$, $w$ conjugates the last $\lceil c/2\rceil$ and first $l$ columns of $v_{k-2,k-1}$ (see \S~\ref{representations} for this notation) into $U_P$. Hence if $w_3\ne(1^{\lceil c/2\rceil},*^{n-l},1^{l})$, a diagonal coordinate of one of the blocks inside $v_{k-2,k-1}$, namely the bottom right $\lceil c/2\rceil\times\lceil c/2\rceil$ block if $w_3\ne(1^{\lceil c/2\rceil},*^{n})$, or the top left $l\times l$ block if $w_3\ne(*^{c-{l}},1^{{l}})$, is conjugated by $w$ into $U_P$, so that if $Y<U$ is generated by this coordinate, $\mathcal{H}(h)=0$ by \eqref{psi U nontrivial using only w}. Proceeding in this manner for $3<i\leq k$, each time using $v_{k-i+1,k-i+2}$ and \eqref{psi U nontrivial using only w}, we deduce $w_i=(1^{\lceil c/2\rceil},*^{n-l},1^{l})$.
\end{proof}
For each $1<i\leq k$, since $w_i$ takes the form \eqref{eq:w_i first reduction}, we can uniquely identify a maximal integer $0\leq d_{i-1}\leq n-l$ such that $w_i=(1^{\lceil c/2\rceil},*^{n-l-d_{i-1}},1^{l+d_{i-1}})$. By maximality $(*^{n-l-d_{i-1}})=(*^{n-l-d_{i-1}-1},0)$ (if $d_{i-1}<n-l$), but the remaining coordinates are still undetermined. As we show next, if $\mathcal{H}(h)\ne0$, we can replace $h$ by a representative for which $(*^{n-l-d_{i-1}})=(0^{n-l-d_{i-1}})$ (this may mean the integers $d_{i-1}$ are larger), and even fix an ascending order on the $d_{i-1}$. Note that for $k=1$, the integers $d_{i-1}$ are undefined.
\begin{proposition}\label{proposition:2nd reduction of w}
We have $\mathcal{H}(h)=0$ unless $h\sim \hat{w}\hat{u}$, where
\begin{align}\label{eq:w_i second reduction}
&\hat{w}=\jmath_a(0^{c-l},1^{l},w_2,\ldots,w_k),\quad\forall 1<i\leq k, \, w_i=(1^{\lceil c/2\rceil},0^{n-l-d_{i-1}},1^{l+d_{i-1}}),\quad d_1\leq\ldots\leq d_{k-1},
\end{align}
and $\hat{u}$ satisfies the conditions of Proposition~\ref{proposition:structure of w u}; in particular $\ell_0(\hat{u})$ takes the form \eqref{eq:second form u final} (and $A_l$ does not have any zero row).
\end{proposition}
\begin{proof}
Write $w_i=(1^{\lceil c/2\rceil},w_i',1^l)$ with $w'_i\in\{0,1\}^{n-l}$. The rightmost $d_{i-1}$ coordinates of $w'_i$ are $1$. We start with the following observation. Let $1\leq j\leq n-l$ and assume $1<i_0\leq k$ is minimal such that $w_{i_0}'[j]$ (the $j$-th coordinate of $w_{i_0}'$) equals $1$. We claim $\mathcal{H}(h)=0$ unless $w_i'[j]=1$ for all $i\geq i_0$. Otherwise, assume $i>i_0$ is minimal with $w_i'[j]=0$. Write the top left $n\times n$ block of the block $v_{k-i+1,k-i+2}$ of $V_{(c^{k-1})}$ in the form $\left(\begin{smallmatrix} v^1 & v^2 \\ v^3 & v^4 \end{smallmatrix}\right)$ with $v^1\in\Mat_{l}$ and $v^4\in\Mat_{n-l}$. Then a unipotent subgroup $Y$ containing coordinates from $v^4$ will satisfy $\psi_U|_Y\ne1$ and ${}^wY<U_P$, whence $\mathcal{H}(h)=0$ by \eqref{psi U nontrivial using only w}.
We turn to show that we can sort the coordinates of $w_2,\ldots,w_k$ to obtain \eqref{eq:w_i second reduction}. Identify $\GL_{n-l}$ with its natural image in the middle factor of the standard Levi subgroup $\GL_l\times \GL_{n-l}\times \mathcal{G}_{c-2n}$ of $G$. Then $P\cap(\GL_{n-l},1)$ contains a full set of representatives for $W(\GL_{n-l})$. Given such a representative $g$, $h\sim h(g,1)^{-1}\sim ({}^{(g,1)}w)({}^{(g,1)}u)$, where $\hat{u}={}^{(g,1)}u$ still satisfies the conditions of Proposition~\ref{proposition:structure of w u} and $\ell_0(\hat{u})=\ell_0(u)$ ($(g,1)$ commutes with \eqref{eq:second form u final}). Hence one can use such conjugations to permute the entries in each $w_i$, while maintaining the prescribed structure of $u$. Using transpositions from $W(\GL_{n-l})$ we can permute each consecutive pair $(w_i'[j],w_i'[j+1])$. If $w_i'[j]=w_i'[j+1]$, the conjugation has no effect on this pair. Choose some $j$ such that there is a minimal $i_0$ with $(w_{i_0}'[j],w_{i_0}'[j+1])=(1,0)$. If $j$ does not exist, then $w_i'=(0^{n-l-d_{i-1}},1^{d_{i-1}})$ for all $1<i\leq k$, and by what we have proved, if $\mathcal{H}(h)\ne0$, $d_1\leq\ldots\leq d_{k-1}$, so that \eqref{eq:w_i second reduction} holds. If $j$ exists, then again by the above observation (assuming $\mathcal{H}(h)\ne0$), for $i>i_0$, either $(w_i'[j],w_i'[j+1])=(1,0)$, in which case the order is swapped, or $w_i'[j]=w_i'[j+1]=1$. Proceeding in this manner we obtain \eqref{eq:w_i second reduction}.
\end{proof}
Re-denote $h=wu$, with $w$ and $u$ given by Proposition~\ref{proposition:2nd reduction of w}; in particular $w$ satisfies \eqref{eq:w_i second reduction}. Recall that in general if $Y<{}^hU\cap M_P$, then ${}^{h^{-1}}Y<P_h$ and by definition any morphism in $\mathcal{H}(h)$ factors through $J_{Y,{}^{h}\psi_U^{-1}}(\rho)$ (see \S~\ref{Basic properties}). We turn to compute ${}^hU\cap M_P$. To simplify the presentation we slightly alter $w$, using multiplication on the left by representatives of $W(M_P)$, which we identify with permutation matrices in $\GL_{kc}$. First, we multiply $w$ on the left by $\diag(I_{(k-1)c},J_l,I_{c-l})$; this changes the innermost block $J_l$ into $I_l$ (see \eqref{eq:weyl rep k=1}). Then for $w_i$, $1<i\leq k$, we multiply $w$ on the left by
\begin{align*}
\diag(I_{(k-i)c},J_{l+d_{i-1}},I_{n-l-d_{i-1}},J_{\lceil c/2\rceil},I_{c(i-1)}).
\end{align*} For example if $k=2$, \begin{align}\label{eq:example k=2} w=\jmath_a \left(\begin{smallmatrix} &&&&&&&&I_{l+d_1}\\ &I_{n-l-d_1}&&&&&&&\\ &&&&&&I_{\lceil c/2\rceil}&&\\ &&&&&I_l&&&\\ &&&&I_{2(c-l)}&&&&\\ &&&\epsilon_0I_l&&&&&\\ &&\epsilon_0I_{\lceil c/2\rceil}&&&&&&\\ &&&&&&&I_{n-l-d_1}&\\ \epsilon_0I_{l+d_1}&&&&&&&& \end{smallmatrix}\right). \end{align} For $1\leq j\leq k-1$, define $\gamma_j\in\GL_{kc}$ by \begin{align*} \gamma_j=&\diag(I_{n-l-d_{k-j}+\sum_{i=1}^{j-1}n-l-d_{k-i}},\left(\begin{smallmatrix}&I_{kc-\lceil c/2\rceil-(j-1)c-n}\\I_{\lceil c/2\rceil}\end{smallmatrix}\right),I_{l+d_{k-j}+\sum_{i=1}^{j-1}\lceil c/2\rceil+l+d_{k-i}})\\ &\times \diag(I_{\sum_{i=1}^{j-1}n-l-d_{k-i}},\left(\begin{smallmatrix}&I_{kc-(j-1)c-l-d_{k-j}}\\I_{l+d_{k-j}}\end{smallmatrix}\right),I_{\sum_{i=1}^{j-1}\lceil c/2\rceil+l+d_{k-i}}). \end{align*} E.g., \begin{align*} \gamma_1=\diag(I_{n-l-d_{k-1}},\left(\begin{smallmatrix}&I_{kc-\lceil c/2\rceil-n}\\I_{\lceil c/2\rceil}\end{smallmatrix}\right),I_{l+d_{k-1}}) \left(\begin{smallmatrix}&I_{kc-l-d_{k-1}}\\I_{l+d_{k-1}}\end{smallmatrix}\right). \end{align*} Further multiply $w$ on the left by $\gamma_{k-1}\cdot\ldots\cdot\gamma_1$ (henceforth we only use this form for $w$). For the computation of ${}^hU\cap M_P$ also note that ${}^hU={}^wU$. Now we see that ${}^hU\cap M_P=V_{\beta}$, where $\beta$ is the composition of $kc$ given by \begin{align}\label{eq:beta} \beta=(n-l-d_{k-1},\ldots,n-l-d_{1},c,\lceil c/2\rceil+l+d_{1},\ldots,\lceil c/2\rceil+l+d_{k-1}). \end{align} (The purpose of the elements $\gamma_i$ was to obtain an upper triangular ${}^hU\cap M_P$.) The character ${}^{h}\psi_U$ is a character of $V_{\beta}$ by restriction, denote $\psi_{V_{\beta}}={}^{h}\psi_U|_{V_{\beta}}$. We can not fully describe $\psi_{V_{\beta}}$ without determining $\ell(u)$, but the lemma below will provide the information we need. First we describe ${}^{w\ell_0(u)}\psi_U|_{V_{\beta}}$. For $v\in V_{\beta}$ write \begin{align}\label{eq:v in V beta} v=\left(\begin{smallmatrix}I_{n-l-d_{k-1}}&b_1&\cdots\\&\ddots&\ddots&\\ &&I_{n-l-d_{1}}&b_{k-1}&\cdots\\&&&I_c&b_k&\cdots\\&&&&\ddots&\ddots\\&&&&&I_{\lceil c/2\rceil+l+d_{k-2}}&b_{2k-2} \\&&&&&&I_{\lceil c/2\rceil+l+d_{k-1}} \end{smallmatrix}\right). \end{align} Here \begin{align*} (b_i)_{1\leq i\leq 2k-2}=(b_1,\ldots,b_{k-2},b_{k-1},b_k,b_{k+1},\ldots,b_{2k-2}) \end{align*} is a general element of the product \begin{align*} \prod_{j=k-1}^2\Mat_{n-l-d_j\times n-l-d_{j-1}}\times \Mat_{n-l-d_1\times c} \times \Mat_{c\times \lceil c/2\rceil+l+d_1} \times \prod_{j=1}^{k-2}\Mat_{\lceil c/2\rceil+l+d_j\times \lceil c/2\rceil+l+d_{j+1}}. \end{align*} Note that for $k=2$, $(b_i)_{1\leq i\leq 2k-2}\in \Mat_{n-l-d_1\times c} \times \Mat_{c\times \lceil c/2\rceil+l+d_1}$. 
Then \begin{align}\label{psi_U on V beta 0} &{}^{w\ell_0(u)}\psi_U(v)=\psi(\sum_{j=k-1}^{2}\tr(b_{k-j}\left(\begin{smallmatrix}0_{d_j-d_{j-1} \times n-l-d_j} \\ I_{n-l-d_{j}}\end{smallmatrix}\right)) +\tr(b_{k-1}\left(\begin{smallmatrix}0_{l+d_1\times n-l-d_1}\\ I_{n-l-d_1} \\ 0_{\lceil c/2\rceil\times n-l-d_1}\end{smallmatrix}\right)) \\&\quad\quad -\tr(b_{k}\left( \begin{smallmatrix}0&0&-\epsilon_0A_l&0&0\\0&\epsilon_0I_{n-l}&0&0&0 \\0&0&0&0&I_{c-2n} \\0&0&0&0_{d_1\times n-l}&0 \\I_l&0&0&0&0 \end{smallmatrix}\right)) -\sum_{j=1}^{k-2}\tr(b_{k+j}\left(\begin{smallmatrix}I_{\lceil c/2\rceil}& 0_{\lceil c/2\rceil\times d_j+l} \\0_{d_{j+1}-d_j\times \lceil c/2\rceil} & 0_{d_{j+1}-d_j\times d_j+l} \\0_{d_j+l\times \lceil c/2\rceil} & I_{d_j+l}\end{smallmatrix}\right))).\nonumber \end{align} Here the sum $\sum_{j=k-1}^{2}$ is omitted if $k=2$; and if $c-2n=1$ ($0\leq c-2n\leq1$), the coordinate $I_{c-2n}=1$ initially depends on the constants $\epsilon_1,\epsilon_2$ (see \S~\ref{Doubling setup}, $2\epsilon_1\epsilon_2=1$), but we can use another conjugation of $w$ by an element of $M_P$ to fix this coordinate to be $1$ (without otherwise changing \eqref{psi_U on V beta 0}). Additionally, for $l=n$ and $A_l$ of rank $l$, the character \eqref{psi_U on V beta 0} belongs to the orbit of $\psi_k^{-1}$. \begin{example}\label{eq:example k=2,3} For $k=2$ and an even $a$, after multiplying \eqref{eq:example k=2} on the left by $\gamma_1$ we have \begin{align}\label{eq:example k=2,3 final w for k=2} &w= \left(\begin{smallmatrix} 0&I_{n-l-d_1}&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&I_l&0&0&0\\ 0&0&0&0&I_{c-l}&0&0&0&0&0\\ 0&0&0&0&0&0&0&I_{\lceil c/2\rceil}&0&0\\ 0&0&0&0&0&0&0&0&0&I_{l+d_1}\\ \epsilon_0I_{l+d_1}&0&0&0&0&0&0&0&0&0\\ 0&0&\epsilon_0I_{\lceil c/2\rceil}&0&0&0&0&0&0&0\\ 0&0&0&0&0&I_{c-l}&0&0&0&0\\ 0&0&0&\epsilon_0I_l&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&I_{n-l-d_1}&0 \end{smallmatrix}\right), \\&\nonumber V_{\beta}=V_{(n-l-d_1,c,\lceil c/2\rceil+l+d_1)}=\{\left(\begin{smallmatrix} I_{n-l-d_1} & b_1 & * \\ & I_c & b_2 \\ & & I_{\lceil c/2\rceil+l+d_1}\end{smallmatrix}\right)\} \end{align} and $\psi_{V_{\beta}}$ depends only on $b_1$ and $b_2$. E.g., if $u=\ell_0(u)$, its restriction to $b_1$ is given by $\psi$ composed with the trace of the $n-l-d_1\times n-l-d_1$ block of $b_1$ starting at column $l+d_1+1$ of $b_1$. For $k=3$ and again an even $a$, after multiplying $w$ on the left by $\gamma_2\gamma_1$ we obtain \begin{align*} &\left(\begin{smallmatrix} 0&I_{n-l-d_2}&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&I_{n-l-d_1}&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&I_l&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&I_{c-l}&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&I_{\lceil c/2\rceil}&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&I_{l+d_1}&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&I_{\lceil c/2\rceil}&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&I_{l+d_2}\\ \epsilon_0I_{l+d_2}&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&\epsilon_0I_{\lceil c/2\rceil}&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&\epsilon_0I_{l+d_1}&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&\epsilon_0I_{\lceil c/2\rceil}&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&I_{c-l}&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&\epsilon_0I_l&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&I_{n-l-d_1}&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&I_{n-l-d_2}&0\\ \end{smallmatrix}\right), \\& \beta=(n-l-d_2,n-l-d_1,c,\lceil c/2\rceil+l+d_1,\lceil c/2\rceil+l+d_2). 
\end{align*} \end{example} \begin{remark}\label{remark:convenient computations of V beta} It is convenient to compute $V_{\beta}$ in $2$ steps: first compute ${}^wU\cap M_P$ using $w$ without the elements $\gamma_i$, e.g., \eqref{eq:example k=2}, then conjugate by these elements in order to obtain $V_{\beta}$. \end{remark} \begin{proposition}\label{proposition:wu_0 nontrivial implies h nontrivial orbit} Assume $k>1$ and $l<n$. If $\mathcal{H}(h)\ne0$, $\psi_{V_{\beta}}$ belongs to the orbit of \begin{align}\label{psi_U on V beta} &v\mapsto \psi(\sum_{j=k-1}^{2}\tr(b_{k-j}({*}_{n-l-d_{j-1} \times n-l-d_j})) +\tr(b_{k-1}\left(\begin{smallmatrix}{*}_{l+d_1 \times n-l-d_1}\\ I_{n-l-d_1} \\ {*}_{\lceil c/2\rceil\times n-l-d_1}\end{smallmatrix}\right)) \\&\quad\quad -\tr(b_{k}\left( \begin{smallmatrix}0&0&-\epsilon_0A_l&0&0\\0&\epsilon_0I_{n-l}&0&0&0 \\0&0&0&0&I_{c-2n} \\0&0&0&0_{d_1\times n-l}&0 \\I_l&0&0&0&0 \end{smallmatrix}\right)) -\sum_{j=1}^{k-2}\tr(b_{k+j}\left(\begin{smallmatrix}I_{\lceil c/2\rceil}& {0}_{\lceil c/2\rceil\times d_j+l} \\{*}_{d_{j+1}-d_j\times \lceil c/2\rceil} & {*}_{d_{j+1}-d_j\times d_j+l}\\{*}_{d_j+l\times \lceil c/2\rceil} & {*}_{d_j+l}\end{smallmatrix}\right))).\nonumber \end{align} Here $*$ means undetermined block entries. When $\ell(u)$ is the identity element, all coordinates were computed above and \eqref{psi_U on V beta} coincides with \eqref{psi_U on V beta 0}. \end{proposition} \begin{proof} We introduce notation for blocks of unipotent matrices in $M_Q$ and $U$. Recall from Lemma~\ref{lemma:easier condition on psiU} that $\psi_U$ is defined by $k-2$ blocks $B_i\in\Mat_c$, $1\leq i\leq k-2$, $2$ blocks $B_1',B_2'\in\Mat_n$ and when $c$ is odd also by $B''\in\Mat_{1\times2}$. Set $d_0=0$ and $d_k=d_{k-1}$. For each $1\leq i\leq k-2$, $B_i$ is further divided into subblocks by writing it as the upper right block of \begin{align*} \left(\begin{smallmatrix} I_{l+d_{k-i-1}}&&&&B_i^{1,1}&B_i^{1,2}&B_i^{1,3}&B_i^{1,4}\\ &I_{d_{k-i}-d_{k-i-1}}&&&B_i^{2,1}&B_i^{2,2}&B_i^{2,3}&B_i^{2,4}\\ &&I_{n-l-d_{k-i}}&&B_i^{3,1}&B_i^{3,2}&B_i^{3,3}&B_i^{3,4}\\ &&&I_{\lceil c/2\rceil}&B_i^{4,1}&B_i^{4,2}&B_i^{4,3}&B_i^{4,4}\\ &&&&I_{l+d_{k-i-1}}\\ &&&&&I_{d_{k-i}-d_{k-i-1}}\\ &&&&&&I_{n-l-d_{k-i}}\\ &&&&&&&I_{\lceil c/2\rceil} \end{smallmatrix}\right). \end{align*} The blocks $B_1',B_2'$ are contained in the following blocks: \begin{align*} \left(\begin{smallmatrix} I_l&&&&{B'}_1^{1,1}&{B'}_1^{1,2}&{B'}_1^{1,3}\\ &I_{d_1}&&&{B'}_1^{2,1}&{B'}_1^{2,2}&{B'}_1^{2,3}\\ &&I_{n-l-d_1}&&{B'}_1^{3,1}&{B'}_1^{3,2}&{B'}_1^{3,3}\\ &&&I_{\lceil c/2\rceil}&{B'}_1^{4,1}&{B'}_1^{4,2}&{B'}_1^{4,3}\\ &&&&I_l\\ &&&&&I_{d_1}\\ &&&&&&I_{n-l-d_1} \end{smallmatrix}\right), \left(\begin{smallmatrix} I_{l+d_1}&&&&&{B'}_2^{1,1}&{B'}_2^{1,2}&{B'}_2^{1,3}\\ &I_{n-l-d_1}&&&&{B'}_2^{2,1}&{B'}_2^{2,2}&{B'}_2^{2,3}\\ &&I_{\lceil c/2\rceil-l}&&&{B'}_2^{3,1}&{B'}_2^{3,2}&{B'}_2^{3,3}\\ &&&I_{l}&&{B'}_2^{4,1}&{B'}_2^{4,2}&{B'}_2^{4,3}\\ &&&&I_{2c-n-l}\\ &&&&&I_l\\ &&&&&&I_{n-l}\\ &&&&&&&I_{l} \end{smallmatrix}\right). \end{align*} If $c$ is odd, we also have the $c\times 2$ block containing $B''$ which we write in the form \begin{align*} &\left(\begin{smallmatrix} {B''}^{1}\\{B''}^{2}\\{B''}^{3}\\{B''}^{4} \end{smallmatrix}\right), {B''}^{1}\in\Mat_{l+d_1\times2},{B''}^{2}\in\Mat_{n-l-d_1\times2},{B''}^{3}\in\Mat_{1\times2},{B''}^{4}\in\Mat_{n\times2}. \end{align*} For $1\leq i\leq 4$, $B''^{i}=(B''^{i,1},B''^{i,2})$. 
In terms of the blocks $B_i$, $B'_i$ and $B''$, $\psi_U$ is given by \begin{align}\label{eq:blocks of psi_U} \psi(\sum_{i=1}^{k-2}\sum_{j=1}^4\tr(B_i^{j,j})+\sum_{j=1}^3\tr({B'}_1^{j,j})+\tr(\left(\begin{smallmatrix}0_{n-l\times c-2n} & I_{n-l}\end{smallmatrix}\right){B'}_2^{3,2})+\tr({B'}_2^{4,3})+ {B''}^{3}\left(\begin{smallmatrix}\epsilon_1\\-\epsilon_2\end{smallmatrix}\right)). \end{align} Let $\mathscr{M}_P$, $\mathscr{U}_P$ and $\mathscr{U}_P^-$ denote the lists of blocks $B_i^{t,t'},{B'}_i^{t,t'},B''^{t,t'}$ conjugated by $w$ into $M_P$, $U_P$ and $U_P^-$, respectively (these can still be computed using \eqref{eq:w_i second reduction}, $w$ differs from \eqref{eq:w_i second reduction} by an element of $M_P$). If $c$ is odd, let $a_0\in\{1,2\}$ be the column of $B''$ which $w$ conjugates into column $kc+1$, it consists of the blocks $(B''^{1,a_0},B''^{2,a_0},B''^{3,a_0},B''^{4,a_0})$. We see that \begin{align*} \mathscr{M}_P=&\{B_i^{1,1},B_i^{1,4},B_i^{2,1},B_i^{2,4},B_i^{3,2},B_i^{3,3},B_i^{4,1},B_i^{4,4}:1\leq i\leq k-2\}\\&\coprod \{{B'}_1^{1,1},{B'}_1^{2,1},{B'}_1^{3,2},{B'}_1^{3,3},{B'}_1^{4,1},{B'}_2^{1,1},{B'}_2^{1,2},{B'}_2^{2,3},{B'}_2^{3,1},{B'}_2^{3,2},{B'}_2^{4,1},{B'}_2^{4,2}\},\\ &\coprod\{{B''}^{1,a_0},{B''}^{2,3-a_0},{B''}^{3,a_0},{B''}^{4,a_0}\},\\ \mathscr{U}_P=&\{B_i^{3,1},B_i^{3,4}:1\leq i\leq k-2\}\coprod\{{B'}_1^{3,1},{B'}_2^{2,1},{B'}_2^{2,2},{B''}^{2,a_0}\},\\ \mathscr{U}_P^-=&\{B_i^{1,2},B_i^{1,3},B_i^{2,2},B_i^{2,3},B_i^{4,2},B_i^{4,3}:1\leq i\leq k-2\}\\&\coprod\{{B'}_1^{1,2},{B'}_1^{1,3},{B'}_1^{2,2},{B'}_1^{2,3},{B'}_1^{4,2},{B'}_1^{4,3},{B'}_2^{1,3},{B'}_2^{3,3},{B'}_2^{4,3}, {B''}^{1,3-a_0},{B''}^{3,3-a_0},{B''}^{4,3-a_0}\}. \end{align*} Since ${}^{\ell(\sigma)}\ell(u)\in M_Q\cap N_H$ with $\sigma\in (G,1)$, we can write $\ell(u)=\diag(z_{1},\ldots,z_{k-1})$, for $z_i={}^{w_{\sigma}}v_i$ where $w_{\sigma}\in W(\GL_c)$ corresponds to the projection of $(\sigma,1)^{-1}$ into the $i$-th copy of $\GL_c$ and $v_i\in N_{\GL_c}$. Note that if we write a general element of ${}^{w_{\sigma}}N_{\GL_c}$ in the form \begin{align*} \left(\begin{smallmatrix} X_1&X_2&X_3\\ X_4&X_5&X_6\\ X_7&X_8&X_9 \end{smallmatrix}\right) \end{align*} where $X_1,X_5$ and $X_9$ are square matrices (of arbitrary sizes), then $X_1,X_5,X_9$ are already invertible, and so are $\left(\begin{smallmatrix}I&X_2\\-X_4&I\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix} I&X_6\\-X_8&I\end{smallmatrix}\right)$, whence $I+X_2X_4$, $I+X_4X_2$, $I+X_6X_8$ and $I+X_8X_6$ are also invertible ($X_i,X_j$ need not be square matrices). Since the left coset of $w$ in $W(M_P)\backslash W(H)$ is still represented by \eqref{eq:w_i second reduction}, we can write $z_i=z_i'm_i$ where ${}^w\diag(z_1',\ldots,z_{k-1}',I_{2c},z_1'^*,\ldots,z_{k-1}'^*)\in M_P$ and \begin{align*} &m_i= \left(\begin{smallmatrix} I_{l+d_{k-i}}+M_i^1M_i^2&M_i^1&0\\ M_i^2&I_{n-l-d_{k-i}}+M_i^3M_i^4&M_i^3\\ 0&M_i^4&I_{\lceil c/2\rceil} \end{smallmatrix}\right)\in\GL_c,\\ &I_{l+d_{k-i}}+M_i^1M_i^2\in\GL_{l+d_{k-i}},\qquad I_{n-l-d_{k-i}}+M_i^3M_i^4\in \GL_{n-l-d_{k-i}}. \end{align*} These matrices are invertible because $m_i={}^{w_{\sigma}}v_i'$ where $v_i'\in N_{\GL_c}$. We have \begin{align*} &m_i^{-1}= \left(\begin{smallmatrix} I_{l+d_{k-i}}&-M_i^1&M_i^1M_i^3\\ -M_i^2&I_{n-l-d_{k-i}}+M_i^2M_i^1&-(I_{n-l-d_{k-i}}+M_i^2M_i^1)M_i^3\\ M_i^4M_i^2&-M_i^4(I_{n-l-d_{k-i}}+M_i^2M_i^1)&I_{\lceil c/2\rceil}+M_i^4(I_{n-l-d_{k-i}}+M_i^2M_i^1)M_i^3 \end{smallmatrix}\right). 
\end{align*} Since $h\sim ph$ for any $p\in P$, we can already assume $z_i=m_i$. We show that $\psi_{V_{\beta}}$ belongs to the orbit of a character whose restriction to the blocks $b_{k-1},b_k,b_{k+1},\ldots,b_{2k-2}$ agrees with \eqref{psi_U on V beta}, otherwise $\mathcal{H}(h)=0$. This will complete the proof. To this end it suffices to compute ${}^{u}\psi_U$ on the blocks of $U$ conjugated by $w$ into $b_{k-1},b_k,b_{k+1},\ldots,b_{2k-2}$. The contribution of $\ell_0(u)$ is simple to compute and was essentially given in \eqref{psi_U on V beta 0}. To determine ${}^{\ell(u)}\psi_U$ (thereby ${}^u\psi_U$) we compute \begin{align*} m_{k-1}^{-1}B'_1,\qquad m_{k-1}^{-1}B'_2,\qquad m_{k-1}^{-1}B'',\qquad m_i^{-1}B_im_{i+1},\qquad\forall 1\leq i\leq k-2. \end{align*} Columns $l+d_1+1,\ldots, n$ of $b_{k-1}$ (the only columns of $b_{k-1}$ where \eqref{psi_U on V beta} is determined) consist of the block $B_1'^{3,3}$, conjugated to $b_{k-1}$ by $w$ (other columns are conjugated from $B_1'^{3,2},B_2'^{2,3}$ and columns between the columns of $u^{1,1}$ and $u^{2,2}$). The coordinates of $b_k$ are uniquely defined by \begin{align*} B_1'^{1,1},B_1'^{2,1},B_1'^{4,1},B_2'^{1,1},B_2'^{1,2},B_2'^{3,1},B_2'^{3,2},B_2'^{4,1},B_2'^{4,2}, {B''}^{1,a_0},{B''}^{2,a_0},{B''}^{3,3-a_0},{B''}^{4,a_0} \end{align*} and by additional $l+d_1+\lceil c/2\rceil \times n-l$ coordinates appearing to the left of $B_2'^{1,1},B_2'^{3,1},B_2'^{4,1}$ ($\psi_U$ and ${}^{\ell_0(u)}\psi_U$ are trivial on the corresponding columns, thereby also ${}^u\psi_U$ because multiplying on the left by $m_{k-1}^{-1}$ can not introduce a character on a column where ${}^{\ell_0(u)}\psi_U$ was trivial, so we do not provide notation for these), as well as the form defining $H$. Note that $B''$ is omitted if $c$ is even. When we multiply $m_{k-1}^{-1}B'_1$ we see that if the top $l$ rows of $M_{k-1}^1$ are nonzero, ${}^u\psi_U$ is nontrivial on $B_1'^{3,1}\in\mathscr{U}_P$ and then $\mathcal{H}(h)=0$ by \eqref{psi U nontrivial}. Hence we can assume the top $l$ rows of $M_{k-1}^1$ are $0$, which implies ${}^u\psi_U$ is trivial on the coordinates of $b_k$ obtained from $B_1'$, namely $B_1'^{1,1},B_1'^{2,1},B_1'^{4,1}$ ($\psi_U$ and ${}^{\ell_0(u)}\psi_U$ are also trivial there). Additionally ${}^u\psi_{U}$ restricts to $\psi(\tr((I_{n-l-d_1}+M_{k-1}^2M_{k-1}^1)B_1'^{3,3}))$ on $B_1'^{3,3}$, and since \begin{align*} {}^w\diag(I_{(k-2)c+l+d_1},I_{n-l-d_1}+M_{k-1}^2M_{k-1}^1,I_{2(\lceil c/2\rceil+c)},(I_{n-l-d_1}+M_{k-1}^2M_{k-1}^1)^*,I_{l+d_1+(k-2)c})\in M_P, \end{align*} $\psi_{V_{\beta}}$ belongs to the orbit of a character which agrees with \eqref{psi_U on V beta} on $b_{k-1}$ and the coordinates of $b_k$ conjugated from $B_1'$. The character ${}^{\ell_0(u)}\psi_U$ is given on the blocks of $B'_2$, which $w$ conjugates into $b_k$, by \begin{align*} \psi(\tr(\varphi_k {B'_2}^{\circ})),\qquad \varphi_k=\left( \begin{smallmatrix} 0_{l\times l+d_1}&0_{l\times n-l-d_1}&0_{l\times \lceil c/2\rceil-l}& A(X)\\ 0_{n-l\times l+d_1}&0_{n-l\times n-l-d_1}&\left(\begin{smallmatrix}0_{n-l\times c-2n} & I_{n-l}\end{smallmatrix}\right) & 0_{n-l\times l} \end{smallmatrix}\right). \end{align*} Here ${B'_2}^{\circ}$ is the $c\times n$ block consisting of ${B'_2}^{t,t'}$ with $1\leq t\leq 4$ and $1\leq t'\leq 2$ (all of these blocks except for $t=2$ are conjugated into $b_k$). 
Multiplying $\varphi_km_{k-1}^{-1}$ we deduce $\mathcal{H}(h)=0$, unless the product of $\varphi_k$ and columns $l+d_1+1,\ldots,n$ of $m_{k-1}^{-1}$ defines a trivial character on $({B'_2}^{2,1},{B'_2}^{2,2})\in\mathscr{U}_P$, which amounts to \begin{align*} \left( \begin{smallmatrix} 0_{l\times \lceil c/2\rceil-l}& A(X)\\ \left(\begin{smallmatrix}0_{n-l\times c-2n} & I_{n-l}\end{smallmatrix}\right)& 0_{n-l\times l} \end{smallmatrix}\right) (-M_{k-1}^4(I_{n-l-d_1}+M_{k-1}^2M_{k-1}^1))=0_{\lceil c/2\rceil}. \end{align*} Hence the product of $\varphi_k$ and last $\lceil c/2\rceil$ columns of $m_{k-1}^{-1}$ equals \begin{align*} \left( \begin{smallmatrix} 0_{l\times \lceil c/2\rceil-l}& A(X)\\ \left(\begin{smallmatrix}0_{n-l\times c-2n} & I_{n-l}\end{smallmatrix}\right)& 0_{n-l\times l} \end{smallmatrix}\right) (I_{\lceil c/2\rceil}+M_{k-1}^4(I_{n-l-d_1}+M_{k-1}^2M_{k-1}^1)M_{k-1}^3)=\left( \begin{smallmatrix} 0_{l\times \lceil c/2\rceil-l}& A(X)\\ \left(\begin{smallmatrix}0_{n-l\times c-2n} & I_{n-l}\end{smallmatrix}\right)& 0_{n-l\times l} \end{smallmatrix}\right), \end{align*} thus ${}^u\psi_U$ agrees with $\psi_{U}$ on the blocks contained in $B'_2$. If $c$ is odd, the restriction of ${}^{u}\psi_U$ to $B''$ is given by \begin{align*} \psi(\tr(\left( \begin{smallmatrix}0_{2\times n}&\left(\begin{smallmatrix}\epsilon_1\\-\epsilon_2\end{smallmatrix}\right)&0_{2\times n}\end{smallmatrix}\right)m_{k-1}^{-1}B'')). \end{align*} Since $B''^{2,a_0}\in\mathscr{U}_P$ and $\epsilon_1\epsilon_2\ne0$, we deduce the first row of $-M_{k-1}^4(I_{n-l-d_1}+M_{k-1}^2M_{k-1}^1)$ is $0$, and because $I_{n-l-d_1}+M_{k-1}^2M_{k-1}^1$ is invertible we obtain that the first row of $M_{k-1}^4$ is $0$. Then the first row of $m_{k-1}^{-1}$ is $\left(\begin{smallmatrix}0_{n}&1&0_n\end{smallmatrix}\right)$, whence ${}^u\psi$ and $\psi_U$ agree on $B''$. Altogether we have shown that $\psi_{V_{\beta}}$ belongs to the orbit of a character which agrees with \eqref{psi_U on V beta} on $b_{k-1}$ and $b_k$. Consider $b_{k+i}$, $1\leq i\leq k-2$. The coordinates of $b_{k+i}$ are uniquely defined by the blocks \begin{align*} B_{k-i-1}^{1,1},B_{k-i-1}^{1,4},B_{k-i-1}^{2,1},B_{k-i-1}^{2,4},B_{k-i-1}^{4,1},B_{k-i-1}^{4,4}. \end{align*} More precisely if we denote for $X\in\Mat_{a\times b}$, $X'=-J_b{}^tXJ_a$, \begin{align}\label{eq:b k+1} b_{k+i}=\left(\begin{smallmatrix} (B_{k-i-1}^{4,4})' & (B_{k-i-1}^{2,4})' & (B_{k-i-1}^{1,4})' \\ (B_{k-i-1}^{4,1})' & (B_{k-i-1}^{2,1})' & (B_{k-i-1}^{1,1})' \end{smallmatrix}\right). \end{align} We multiply $m_{k-i-1}^{-1}B_{k-i-1}m_{k-i}$. Since $\psi_U$ restricts to $\psi\circ\tr$ on $B_{k-i-1}$, the restriction of ${}^u\psi_U$ to $B_{k-i-1}$ is given by $\psi(\tr(m_{k-i}m_{k-i-1}^{-1}B_{k-i-1}))$. If this restriction is nontrivial on $B_{k-i-1}^{3,1},B_{k-i-1}^{3,4}\in\mathscr{U}_P$, we obtain $\mathcal{H}(h)=0$. On $B_{k-i-1}^{3,4}$, ${}^u\psi_U$ is given by the product of the last $\lceil c/2\rceil$ rows of $m_{k-i}$ and columns $l+d_{i+1}+1,\ldots,n$ of $m_{k-i-1}^{-1}$; $\mathcal{H}(h)=0$ unless this product vanishes: \begin{align*} \left(\begin{smallmatrix} 0_{\lceil c/2\rceil\times l+d_i}&M_{k-i}^4&I_{\lceil c/2\rceil} \end{smallmatrix}\right) \left(\begin{smallmatrix}-M^1_{k-i-1} \\ I_{n-l-d_{i+1}}+ M^2_{k-i-1}M^1_{k-i-1}\\-M^4_{k-i-1}(I_{n-l-d_{i+1}}+ M^2_{k-i-1}M^1_{k-i-1})\end{smallmatrix}\right)=0. 
\end{align*} Thus the product of the last $\lceil c/2\rceil$ rows of $m_{k-i}$ and last $\lceil c/2\rceil$ columns of $m_{k-i-1}^{-1}$ is \begin{align*} \left(\begin{smallmatrix} 0_{\lceil c/2\rceil\times l+d_i}&M_{k-i}^4&I_{\lceil c/2\rceil} \end{smallmatrix}\right) \left(\begin{smallmatrix}M^1_{k-i-1}M^3_{k-i-1} \\ -(I_{n-l-d_{i+1}}+ M^2_{k-i-1}M^1_{k-i-1})M^3_{k-i-1}\\I_{\lceil c/2\rceil}+M^4_{k-i-1}(I_{n-l-d_{i+1}}+ M^2_{k-i-1}M^1_{k-i-1})M^3_{k-i-1}\end{smallmatrix}\right)=I_{\lceil c/2\rceil}. \end{align*} This means that the restriction of ${}^u\psi_U$ to $B_{k-i-1}^{4,4}$, which corresponds to the bottom right $\lceil c/2\rceil\times\lceil c/2\rceil$ block of $m_{k-i}m_{k-i-1}^{-1}$, is $\psi\circ\tr$, so that it agrees with $\psi_U$ on this block. On $B_{k-i-1}^{3,1}$, ${}^u\psi_U$ is defined by the product of the first $l+d_i$ rows of $m_{k-i}$ and columns $l+d_{i+1}+1,\ldots,n$ of $m_{k-i-1}^{-1}$. Then $\mathcal{H}(h)=0$ unless \begin{align*} \left(\begin{smallmatrix}I_{l+d_i}+M_{k-i}^1M_{k-i}^2&M_{k-i}^1&0_{l+d_i\times\lceil c/2\rceil}\end{smallmatrix}\right) \left(\begin{smallmatrix}-M^1_{k-i-1} \\ I_{n-l-d_{i+1}}+ M^2_{k-i-1}M^1_{k-i-1}\\ -M^4_{k-i-1}(I_{n-l-d_{i+1}}+ M^2_{k-i-1}M^1_{k-i-1})\end{smallmatrix}\right)=0. \end{align*} Hence \begin{align*} \left(\begin{smallmatrix}I_{l+d_i}+M_{k-i}^1M_{k-i}^2&M_{k-i}^1&0_{l+d_i\times\lceil c/2\rceil}\end{smallmatrix}\right) \left(\begin{smallmatrix}M^1_{k-i-1}M^3_{k-i-1} \\ -(I_{n-l-d_{i+1}}+ M^2_{k-i-1}M^1_{k-i-1})M^3_{k-i-1}\\I_{\lceil c/2\rceil}+M^4_{k-i-1}(I_{n-l-d_{i+1}}+ M^2_{k-i-1}M^1_{k-i-1})M^3_{k-i-1}\end{smallmatrix}\right)=0. \end{align*} Therefore the restrictions of ${}^u\psi_U$ and $\psi_U$ to $B_{k-i-1}^{4,1}$, which correspond to the top right $l+d_i\times\lceil c/2\rceil$ block of $m_{k-i}m_{k-i-1}^{-1}$, are both trivial. Finally using \eqref{eq:b k+1} and noting that the leftmost $l$ columns of $X$ are the bottom $l$ rows of $X'$ (and the entries are permuted), we find that ${}^u\psi_U$ is given on the blocks which $w$ conjugates into $b_{k+i}$ by \begin{align*} &\psi(\tr(\left(\begin{smallmatrix} (B_{k-i-1}^{4,4})' & (B_{k-i-1}^{2,4})' & (B_{k-i-1}^{1,4})' \\ (B_{k-i-1}^{4,1})' & (B_{k-i-1}^{2,1})' & (B_{k-i-1}^{1,1})' \end{smallmatrix}\right) \left(\begin{smallmatrix} I_{\lceil c/2\rceil} & 0_{\lceil c/2\rceil\times l+d_i}\\ {*}_{l+d_{i+1}\times\lceil c/2\rceil} & {*}_{l+d_{i+1}\times l+d_i} & \\ \end{smallmatrix}\right))). \end{align*} We conclude $\psi_{V_{\beta}}$ belongs to the orbit of \eqref{psi_U on V beta}. \end{proof} \begin{proposition}\label{proposition:d_1 < n-l} Assume $d_1<n-l$ (in particular $k>1$ and $l<n$, because $d_1\geq0$). Then $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)=0$, in particular $\mathcal{H}(h)=0$. \end{proposition} \begin{proof} Any morphism in $\mathcal{H}(h)$ factors through $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$. We show $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)=0$. Suppose otherwise. The subgroup $V_{\beta}$ and character $\psi_{V_{\beta}}^{-1}$ define a degenerate Whittaker model in the sense of \cite{MW3,GGS}. The character $\psi_{V_{\beta}}^{-1}$ uniquely defines a nilpotent element ${}^t\varphi\in\Mat_{kc}$ such that $\psi_{V_{\beta}}^{-1}(v)=\psi(\tr(v({}^t\varphi)))$ for all $v\in V_{\beta}$. Then $\varphi\in \Mat_{kc}$ is an upper triangular nilpotent matrix. We prove $\varphi$ is nilpotent of order at least $k+1$. 
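For orientation only (this is not part of the argument), the dictionary between characters and matrices can be spelled out in the smallest case $V_{(1,1)}<\GL_2$: the character $v=\left(\begin{smallmatrix}1&x\\&1\end{smallmatrix}\right)\mapsto\psi(\alpha x)$ satisfies
\begin{align*}
\psi(\alpha x)=\psi(\tr(v\,{}^t\varphi)),\qquad {}^t\varphi=\left(\begin{smallmatrix}0&0\\\alpha&0\end{smallmatrix}\right),\qquad \varphi=\left(\begin{smallmatrix}0&\alpha\\0&0\end{smallmatrix}\right),
\end{align*}
and $\varphi$ is upper triangular nilpotent (of order $2$ when $\alpha\ne0$).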
By \cite[Theorem~E]{GGS}, the orbit of $\varphi$ belongs to the closure of the wave-front set $\mathrm{WF}(\rho)$ of $\rho$, but this orbit is greater than or non-comparable with $(k^c)$, contradicting the fact that $\rho$ is $(k,c)$. When $\pi_2$ is supercuspidal (in particular, the field is non-archimedean) and $\rho$ is not necessarily of finite length, we derive the same contradiction from \cite[Theorem~A]{GGS}. By Proposition~\ref{proposition:wu_0 nontrivial implies h nontrivial orbit}, we can assume $\psi_{V_{\beta}}$ is given by \eqref{psi_U on V beta}. Since $\varphi+I_{kc}\in V_{\beta}$, we let $b_1,\ldots,b_{2k-2}$ denote the blocks of $\varphi$ above the principal diagonal (see \eqref{eq:v in V beta}). These can be read off \eqref{psi_U on V beta}, namely $b_i$ is the transpose of the block appearing to the right of $b_i$ in \eqref{psi_U on V beta} (one should include the signs appearing in \eqref{psi_U on V beta} before $\tr{}$). E.g., $b_{k-1}=-\left(\begin{smallmatrix}{*}_{n-l-d_1 \times l+d_1}& I_{n-l-d_1} & {*}_{n-l-d_1\times \lceil c/2\rceil}\end{smallmatrix}\right)$. We apply a sequence of conjugations to $\varphi$, conjugating $k$ nonzero coordinates from $\varphi$, one coordinate from each block $b_{k-1},b_k,\ldots,b_{2k-2}$: since $d_1<n-l$, $\psi_{V_{\beta}}$ is nontrivial on $b_{k-1}$, so that the block $b_{k-1}$ of $\varphi$ is nonzero; and because $k>1$, these are $k$ blocks. The only nonzero blocks of $\varphi$ are the blocks $b_{1},\ldots,b_{2k-2}$, and these blocks contain nonzero entries at the coordinates defined by \eqref{psi_U on V beta}. Define the following partial sums $d^j$ of the integers appearing in the composition $\beta$, from right to left (see \eqref{eq:beta}): \begin{align*} &d^j=\begin{cases} \sum_{i=1}^j\lceil c/2\rceil+l+d_{k-i}& 1\leq j\leq k-1,\\ c+d^{k-1}& j=k,\\ n-l-d_1+d^k& j=k+1. \end{cases} \end{align*} First we conjugate $\varphi$ by \begin{align*} \varepsilon_1=\diag(I_{kc-d^{k-1}-\lceil c/2\rceil-1},\left(\begin{smallmatrix}&&1\\&I_{\lceil c/2\rceil-1}&\\1\end{smallmatrix}\right),I_{d^{k-1}}). \end{align*} Note that $\varepsilon_1$ normalizes $V_{\beta}$. The $(n,n)$-th coordinate of $b_k$, which is $\epsilon_0=\pm1$ because $l<n$, becomes the $(c,n)$-th coordinate, and the $(n-l-d_1,n)$-th coordinate of $b_{k-1}$, which is $-1$, becomes the $(n-l-d_1,c)$-th coordinate --- the bottom right coordinate. Both of these coordinates are independent of the blocks where $\psi_{V_{\beta}}$ is undetermined (denoted $*$ in \eqref{psi_U on V beta}), and are the only nonzero entries on their columns. We can further conjugate ${}^{\varepsilon_1}\varphi$ by an element of the group \begin{align*} \{\diag(I_{(\sum_{i=1}^{k-1}n-l-d_i)+c}, \left(\begin{smallmatrix}b\\&I_{l+d_1}\end{smallmatrix}\right),\ldots,\left(\begin{smallmatrix}b\\&I_{l+d_{k-1}}\end{smallmatrix}\right)):b\in\GL_{\lceil c/2\rceil}\}<M_{\beta}, \end{align*} to take the $(c,n)$-th coordinate of $b_k$ into the $(c,1)$-th coordinate, without affecting any of the blocks $b_{k-1},\ldots,b_{2k-2}$ of ${}^{\varepsilon_1}\varphi$ except the block $b_k$ (the diagonal embedding of $b\in\GL_{n}$ instead of $b\in\GL_{\lceil c/2\rceil}$ in each of the last $k-1$ blocks is sufficient). 
Now let \begin{align*} \varepsilon_2=\diag(I_{(\sum_{i=1}^{k-1}n-l-d_i)+c}, \left(\begin{smallmatrix}&&1\\&I_{\lceil c/2\rceil+l+d_1-2}&\\1\end{smallmatrix}\right),\ldots, \left(\begin{smallmatrix}&&1\\&I_{\lceil c/2\rceil+l+d_{k-1}-2}&\\1\end{smallmatrix}\right))\in M_{\beta}, \end{align*} where if $\lceil c/2\rceil+l+d_{k-i}=1$, the corresponding block of size $1+(\lceil c/2\rceil+l+d_{k-i}-2)+1$ is $I_1$. Again $\varepsilon_2$ normalizes $V_{\beta}$. Conjugating ${}^{\varepsilon_1}\varphi$ by $\varepsilon_2$, the $(c,1)$-th coordinate of $b_k$ is taken into the $(c,\lceil c/2\rceil+l+d_1)$-th coordinate (the bottom right coordinate), and the top left coordinate of $b_{k+i}$, which is independent of the undetermined blocks and is the only nonzero coordinate on its column, is taken into the bottom right coordinate for each $1\leq i\leq k-2$. We conclude that the bottom right coordinate of each of the blocks $b_{k-1},\ldots,b_{2k-2}$ of ${}^{\varepsilon_2\varepsilon_1}\varphi$ is nonzero ($-1$ for $b_{k-1}$, $\epsilon_0$ for $b_k$, $1$ for all other blocks), and in each of these blocks, the bottom right coordinate is the only nonzero entry in its column. Therefore $\varphi$ is nilpotent of order at least $k+1$. \end{proof} \begin{example} Consider $c=k=2$, $G=\Sp_2$, $H=\Sp_8$ and $l=0$. Then $0\leq d_1\leq n-l=1$. For $d_1=0$, $w\in M_P(0^2,w_2)$ with $w_2=(1,0)$. One can take \begin{align*} w=\left(\begin{smallmatrix} 1 & & & & \\ & & & 1 & \\ & & I_4 & & \\ & -1 & & & \\ & & & & 1 \end{smallmatrix}\right). \end{align*} Then \begin{align*} u=\left(\begin{smallmatrix}I_2&x&y\\&I_4&x'\\&&I_2\end{smallmatrix}\right),\qquad \psi_U(u)=\psi(x_{1,1}+x_{2,4}),\qquad {}^wU\cap M_P=\left\{\left(\begin{smallmatrix} 1 & y_{1,1} & x_{1,1} & x_{1,2} \\ & 1 & & \\ & x_{2,4} & 1 & \\ & x_{2,3} & & 1 \end{smallmatrix}\right)\right\}. \end{align*} Multiplying $w$ on the left by $\gamma_1=\diag(1,\left(\begin{smallmatrix}&I_2\\1\end{smallmatrix}\right))$, \begin{align*} V_{\beta}=V_{(1,2,1)}=\left\{\left(\begin{smallmatrix} 1 & x_{1,1} & x_{1,2} & y_{1,1} \\ & 1 & & x_{2,4} \\ & & 1 & x_{2,3} \\ & & & 1 \end{smallmatrix}\right)\right\},\qquad\varphi=\left(\begin{smallmatrix} 0 & -1 & 0 &0 \\ & 0 & 0 & -1 \\ & & 0 & 0 \\ & & & 0 \end{smallmatrix}\right). \end{align*} The nilpotency order of $\varphi$ is $3$, and since $\rho$ is a $(k,c)=(2,2)$ representation, $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)=0$. \end{example} Now consider the cases $d_1=n-l$ or $k=1$. There are only finitely many representatives (i.e., representatives $h_i,h_j$ with $h_i\not\sim h_j$) satisfying this condition. This is trivial when $k=1$. For $k>1$ recall $h=wu$ with $\ell_0(u)$ of the form \eqref{eq:second form u final}. Since $d_1=n-l$, and thereby $d_1=\ldots=d_{k-1}=n-l$, we finally have for any $m\in M_Q$, ${}^w\ell(m)\in P$, in particular ${}^w\ell(u)\in P$ hence $h\sim w\ell_0(u)$. We simplify $\ell_0(u)$ and deduce that there are only finitely many representatives remaining. Regard $\GL_l$ as the direct factor of the standard Levi subgroup $\GL_l\times\mathcal{G}_{c-2l}$ of $G$. For $g_1,g_2\in\GL_l$, because (finally) ${}^w(g_1,g_2)\in P$, \begin{align*} w\ell_0(u)\sim w\ell_0(u)(g_1,g_2)\sim w({}^{(g_1,g_2)^{-1}}\ell_0(u)). \end{align*} Looking at \eqref{eq:second form u final}, we can now assume $A_l=\diag(I_{l'},0_{l-l'})$, where $l'\leq l$ is the rank of $A_l$. There are only finitely many such representatives. 
Furthermore if $l'<l$, take a representative $g$ of an element of $W(G)$ such that $w\ell_0((g,1))$ does not permute the rows $l'+1,\ldots,l$ of \eqref{eq:second form u final}. Since (as opposed to the proof of Proposition~\ref{proposition:structure of w u}) ${}^w\ell((g,1))\in P$, $w(g,1)\sim w\ell_0((g,1))$. Then \begin{align*} w({}^{(g_1,g_2)^{-1}}\ell_0(u))\sim w(g,1)({}^{(g,1)^{-1}(g_1,g_2)^{-1}}\ell_0(u))\sim w\ell_0((g,1))({}^{(g,1)^{-1}(g_1,g_2)^{-1}}\ell_0(u)). \end{align*} Since $l'\leq l$, $w\ell_0((g,1))$ now trivially satisfies \eqref{eq:w_i second reduction} for $l'$ with $d_1=\ldots=d_{k-1}=n-l'$ (the multiplications on the left by elements of $W(M_P)$ do not matter for this), and if we re-denote $w=w\ell_0((g,1))$ and $l=l'$, we have $h=w({}^{\jmath_{(k-1)c+l}}u_l)$. If $c$ is odd, $u_l$ commutes with any $\jmath_a$, otherwise $\jmath_{(k-1)c+l}=\jmath_l$. Hence $h=w({}^{\jmath_{l}}u_l)$ (as in the $k=1$ case). Thus there are only $n+1$ representatives $h$ to consider, and note that the representative $w({}^{\jmath_n}u_n)$ satisfies $h\sim\delta$. \begin{proposition}\label{proposition:d1 = n-l l < n} Assume $d_1=n-l$ or $k=1$, and $l<n$. Then $\mathcal{H}(h)=0$ outside a discrete subset of $s$. Moreover, under each one of the conditions of \ref{part3}, e.g., $\pi_2$ is supercuspidal (and $c>2$ or $G=\Sp_2$), $\mathcal{H}(h)=0$ for all $s$. \end{proposition} \begin{proof} Now $V_{\beta}=V_{(c^k)}$ which is trivial when $k=1$, in which case we set $\psi_{V_{\beta}}=1$. If $k>1$, since now $\ell(u)$ is trivial, one can read $\psi_{V_{\beta}}$ directly off \eqref{psi_U on V beta 0}, then \begin{align}\label{psi_U on V beta d1 n-l} \psi_{V_{\beta}}(v)=&\psi(-\tr(b_{k} \left(\begin{smallmatrix}0&0&-\epsilon_0I_l&0&0\\0&\epsilon_0I_{n-l}&0&0&0 \\0&0&0&0&I_{c-2n} \\0&0&0&0_{n-l}&0 \\I_l&0&0&0&0 \end{smallmatrix}\right))-\sum_{j=1}^{k-2}\tr (b_{k+j})). \end{align} (Note that $A_l$ was replaced by $I_l$ in the first matrix.) Consider the parabolic subgroup $R=M_R\ltimes U_R<G$ where $M_R\cong\GL_{n-l}\times\mathcal{G}_{c-2(n-l)}$ and \begin{align*} {}^{\jmath_{(c+1)l}}U_R=\left\{\left(\begin{smallmatrix}I_l&&&u_1&0\\u_2&I_{n-l}&u_4&u_3&u_1'\\&&I_{c-2n}&u_4'\\&&&I_{n-l}\\&&&u_2'&I_l\end{smallmatrix}\right)\right\}. \end{align*} Here if $c$ is even, $\jmath_{(c+1)l}=\jmath_l$, otherwise $\jmath_{(c+1)l}$ is trivial. Since $l<n$, this is a nontrivial parabolic subgroup unless $c=2$ and $G\ne\Sp_2$. If $c$ is odd, the image of $U_R$ in $H_0$ is given by (see Example~\ref{example:odd c}) \begin{align*} \left\{\left(\begin{smallmatrix}I_l&&\epsilon_2u_4&\epsilon_1u_4&u_1&0\\u_2&I_{n-l}&&&u_3&u_1'\\&&1&&&\epsilon_1u_4'\\&&&1&&\epsilon_2u_4'\\&&&&I_{n-l}\\&&&&u_2'&I_l\end{smallmatrix}\right)\right\}. \end{align*} Denote the Lie algebra of $U_R$ by $\mathfrak{u}_R$. \begin{lemma}\label{lemma:Jacquet module is a trivial rep of U_R} The following holds for all $s$. \begin{enumerate}[leftmargin=*] \item If $F$ is non-archimedean, $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ is a trivial representation of ${}^h(1,{}^{\jmath_{(c+1)l}}U_R)$. \item For an archimedean field, ${}^h(1,{}^{\jmath_{(c+1)l}}\mathfrak{u}_R)$ acts locally nilpotently on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)^*$. \end{enumerate} \end{lemma} The proofs of this lemma and the following one appear after the proof of the proposition. 
If $\pi_2$ is supercuspidal and $U_R$ is nontrivial (i.e., $c>2$ or $G=\Sp_2$), the proposition follows immediately from Lemma~\ref{lemma:Jacquet module is a trivial rep of U_R}; in particular we do not need to exclude any $s$ (see also the discussion preceding \eqref{eq:relation for T with s}). Identify the group $\GL_{n-l}$ with its image in $M_R$, \begin{align*} {}^h(1,\GL_{n-l})=\left\{\diag(I_{n+l},a,I_{c-2n+(k-1)c}):a\in\GL_{n-l}\right\}, \end{align*} where the right hand side is implicitly regarded as a subgroup of $M_P$. By \eqref{psi_U on V beta d1 n-l}, ${}^h(1,\GL_{n-l})$ stabilizes $\psi_{V_{\beta}}$, and because it also normalizes $V_{\beta}$, ${}^h(1,\GL_{n-l})$ acts on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$. \begin{lemma}\label{lemma:Jacquet module is a finite length} The following holds for all $s$. \begin{enumerate}[leftmargin=*] \item\label{p1} If $F$ is non-archimedean, $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ admits a finite length filtration as a representation of ${}^h(1,\GL_{n-l})$, where ${}^h(1,C_{\GL_{n-l}})$ acts by a character on each constituent. (The constituents need not be irreducible.) \item Over archimedean fields, there is a maximal parabolic subgroup of $\GL_{kc}$ whose Levi part contains ${}^h(1,\GL_{n-l})$ as a direct factor, such that the Lie algebra $\mathfrak{v}$ of its unipotent radical acts locally nilpotently on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)^*$. \item\label{p3} If $c=2$, $\rho=\rho_c(\tau)$ for an irreducible supercuspidal representation $\tau$ of $\GL_k$ and $k>1$, $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)=0$. \end{enumerate} \end{lemma} This implies $\mathcal{H}(h)=0$ outside a discrete subset of $s$, because ${}^{h^{-1}}(|\det|^{s-1/2})(1,aI_{n-l})=|a|^{(n-l)(s-1/2)}$ for all $a\in F^*$ and $l<n$, so that we can apply \eqref{eq:relation for T with s}. Also Lemma~\ref{lemma:Jacquet module is a finite length}~\eqref{p3} implies $\mathcal{H}(h)=0$ for all $s$ in the remaining case of \ref{part3}. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:Jacquet module is a trivial rep of U_R}] First consider a non-archimedean field. We show $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ is a trivial representation of ${}^h(1,{}^{\jmath_{(c+1)l}}U_R)$. For $z\in U_R$, ${}^h(1,{}^{\jmath_{(c+1)l}}z)=m_zz'$ where $z'\in U_P$ and \begin{align*} m_z=\diag(\left(\begin{smallmatrix}I_l\\&I_{n-l}\\&&I_l\\u_1'&&u_2&I_{n-l}&\epsilon u_4\\&&&&I_{c-2n}\end{smallmatrix}\right),I_{(k-1)c}). \end{align*} Here $\epsilon=\epsilon_1$ or $\epsilon_2$ depending on $\jmath_{l}$; also for the computation note that since $l<n$, $\jmath_{l}$ commutes with $u_l$. When $l=0$ and $c$ is even, $m_z$ is trivial whence ${}^h(1,{}^{\jmath_{(c+1)l}}U_R)<U_P$, so that $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ is immediately a trivial representation of ${}^h(1,{}^{\jmath_{(c+1)l}}U_R)$. Let $Z$ be the subgroup of $M_P$ generated by the matrices $m_z$ as $z$ varies in $U_R$. It is an abelian group. The rank of $Z$ is $(2l+c-2n)(n-l)$. Since $\psi_{V_{\beta}}^{-1}$ restricts to a trivial character on the rows $2n,\ldots,2n+1-(n-l)$ of $b_k$ (which are the last $n-l$ rows if $c$ is even), $Z$ stabilizes $\psi_{V_{\beta}}^{-1}$ (it clearly normalizes $V_{\beta}$). Thus $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ is a representation of $Z$ and for each character $\chi$ of $Z$, \begin{align*} J_{Z,\chi}(J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho))=J_{V_{\beta}\ltimes Z,\psi_{V_{\beta}}^{-1}\otimes\chi}(\rho). \end{align*} A similar identity holds for any subgroup of $Z$.
For $b\in\GL_c$, denote $b^{\triangle}=\diag(b,b^{\triangle'})\in M_{(c^k)}$ where $b^{\triangle'}$ is the diagonal embedding of $\GL_c$ in $M_{(c^{k-1})}$. The group $\diag(I_c,\GL_c^{\triangle'})$ stabilizes the restriction of $\psi_{V_{\beta}}^{-1}$ to the blocks $b_{k+1},\ldots,b_{2k-2}$ (but not to $b_k$). The group $\GL_l\times\GL_l\times\GL_{n-l}\times\GL_{c-2n}$ embedded in $M_{(c^k)}$ by \begin{align}\label{eq:b1 b2 b3} [x_1,x_2,x_3,x_4]=\diag(\left(\begin{smallmatrix}x_1\\&I_{n-l}\\&&x_2\\&&&x_3\\&&&&x_4\end{smallmatrix}\right), \left(\begin{smallmatrix}x_2\\&I_{n-l}\\&&x_4\\&&&I_{n-l}\\&&&&x_1\end{smallmatrix}\right)^{\triangle'}), \end{align} where $x_1,x_2\in\GL_l$, $x_3\in\GL_{n-l}$ and $x_4\in\GL_{c-2n}$, acts on the set of characters of $Z$ with infinitely many orbits, but precisely $2$ orbits separately on each block $Z'=u_1', u_2$ and $\epsilon u_4$, and stabilizes $\psi_{V_{\beta}}^{-1}$. It is enough to prove that each block $Z'$ acts trivially on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$. By \cite[5.9--5.12]{BZ1}, it suffices to show that for each nontrivial character $\chi'$ of $Z'$, \begin{align}\label{eq:Z chi vanishing} J_{V_{\beta}\ltimes Z',\psi_{V_{\beta}}^{-1}\otimes\chi'}(\rho)=0, \end{align} since then for each $Z'$, $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)=J_{V_{\beta}\ltimes Z',\psi_{V_{\beta}}^{-1}}(\rho)$, and thus $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)=J_{V_{\beta}\ltimes Z,\psi_{V_{\beta}}^{-1}}(\rho)$. This implies that ${}^h(1,{}^{\jmath_{(c+1)l}}U_R)$ acts trivially on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$. Let $\chi'$ be a nontrivial character of $Z'$. As in the proof of Proposition~\ref{proposition:d_1 < n-l}, we let $\varphi$ denote the transpose of the nilpotent element defined by the character $\psi_{V_{\beta}}^{-1}\otimes\chi'$ of $V_{\beta}\ltimes Z'$, and show that $\varphi$ is nilpotent of order at least $k+1$, then \eqref{eq:Z chi vanishing} essentially follows from \cite[Theorems~A, E]{GGS} because $\rho$ is $(k,c)$, but an additional argument is used because $V_{\beta}\ltimes Z'$ does not correspond to a unipotent orbit (see below). Conjugating $\varphi$ by a suitable element \eqref{eq:b1 b2 b3}, we can assume the bottom left coordinate of $Z'$ in $\varphi$ is $1$, and all other coordinates on the same column of $Z'$ are $0$. Assume (momentarily) $c$ is even or $l>0$. One can permute $\epsilon u_4$ (for odd $c$) with the first column of $u_2$ using conjugation by \begin{align*} \diag(\left(\begin{smallmatrix}I_n\\&&&-1\\&&I_{n-1}\\&-1\end{smallmatrix}\right), \left(\begin{smallmatrix}&&1\\&I_{n-1}\\1\\&&&I_n\end{smallmatrix}\right)^{\triangle'}). \end{align*} This element normalizes $V_{\beta}$ and stabilizes the blocks $b_{k},\ldots,b_{2k-2}$ of $\varphi$. Moreover, we can exchange the blocks $u_1'$ and $u_2$ using conjugation by \begin{align}\label{eq:matrix conjugating u1 and u2} \diag(\left(\begin{smallmatrix}&&I_l\\&I_{n-l}\\I_l\\&&&I_{\lceil c/2\rceil-l}\end{smallmatrix}\right),I_{(k-1)c}), \end{align} hence we can always assume, for any $Z'$, that after a conjugation the bottom left coordinate of the block $u_1'$ of $\varphi$ is $1$. The matrix \eqref{eq:matrix conjugating u1 and u2} normalizes $V_{\beta}$, fixes the blocks $b_{k+1},\ldots,b_{2k-2}$ of $\varphi$, but permutes the coordinates of the block $b_k$ of $\varphi$. In particular the $(n+1,1)$-th coordinate in the block $b_k$ of $\varphi$, which is $-\epsilon_0$, is conjugated into the $(1,1)$-th coordinate of this block. 
If $Z'=u_1'$ we can skip this conjugation. When $c$ is odd and $l=0$, in which case $Z'=\epsilon u_4$ (evidently $u_1'$ and $u_2$ are trivial when $l=0$), we use conjugation by \begin{align*} \diag(\left(\begin{smallmatrix}&&1\\&I_{c-2}\\1\end{smallmatrix}\right), \left(\begin{smallmatrix}&&1\\&I_{c-2}\\1\end{smallmatrix}\right)^{\triangle'}), \end{align*} so that the $(c,n+1)$-th coordinate of the block $b_k$ of $\varphi$, which is $1$, is permuted to the $(1,n+1)$-th coordinate, and the bottom left coordinate of $Z'$ in $\varphi$ is permuted to the $(2n,1)$-th coordinate of $\varphi$. At any rate, $\varphi$ has a nonzero coordinate in the first row of $b_k$. Conjugating $\varphi$ by an element of $\diag(I_c,\GL_c^{\triangle'})$, we can always assume $\varphi$ contains $1$ on the top right coordinate of $b_k$, and additionally (still) contains $1$ in the $(2n,1)$-th coordinate (the bottom left coordinate of $u_1'$ if $l>0$). If $c$ is odd, we can permute this coordinate to the $(c,1)$-th coordinate using conjugation by $\diag(I_{c-2},\left(\begin{smallmatrix}&1\\1\end{smallmatrix}\right),I_{(k-1)c})$. Now conjugating $\varphi$ by \begin{align*} \diag(\left(\begin{smallmatrix}&&1\\&I_{c-2}\\1\end{smallmatrix}\right) ,I_{k(c-1)}), \end{align*} the top right coordinate of $b_k$ is permuted into its bottom right, so that now $\varphi$ has $1$ in the bottom right coordinate of each of the blocks $b_k,\ldots,b_{2k-2}$, and the $(c,1)$-th coordinate of $\varphi$ is permuted into the $(1,c)$-th coordinate. Moreover, the only nonzero entry in column $c$ is the $(1,c)$-th coordinate, and in each $b_k,\ldots,b_{2k-2}$ the only nonzero entry on the rightmost column is the bottom right one. This brings us to the situation in the proof of Proposition~\ref{proposition:d_1 < n-l}, with two exceptions. First, instead of $-1$ in the bottom right coordinate of the block $b_{k-1}$, we have $1$ in the $(1,c)$-th coordinate (the first coordinate to the left of the top left coordinate of $b_k$). It again follows that $\varphi$ is nilpotent of order at least $k+1$. Second and more importantly, the unipotent subgroup $V_{\beta}\ltimes Z'$ does not correspond to a unipotent orbit (i.e., it is not of the form $V(\sigma)$, see \S~\ref{kc representations}). However, we reduced \eqref{eq:Z chi vanishing} to the vanishing of $J_{V_{\beta}\ltimes Z'',\psi_{V_{\beta}}^{-1}\otimes\chi^{''}}(\rho)$, where $Z''<N_{\GL_{kc}}$ corresponds to the $(1,c)$-th coordinate and $\chi''$ is a nontrivial character of $Z''$. We see that $V_{\beta}\ltimes Z''<V_{(1,c-1,c^{k-1})}$ and any character $\psi'$ of $V_{(1,c-1,c^{k-1})}$ which extends $\psi_{V_{\beta}}^{-1}\otimes\chi^{''}$ is still nilpotent of order $k+1$. Also, the torus $T_{\GL_{c-1}}$ acts on the added $c-2$ new coordinates of $V_{(1,c-1,c^{k-1})}$ with finitely many orbits (one can identify each diagonal coordinate from $T_{\GL_{c-1}}$ with an element of $T_{\GL_{kc}}$ which fixes $\psi_{V_{\beta}}^{-1}\otimes\chi^{''}$). Thus (by \cite[5.9--5.12]{BZ1}) $J_{V_{\beta}\ltimes Z'',\psi_{V_{\beta}}^{-1}\otimes\chi^{''}}(\rho)=0$ if for any $\psi'$ as above, $J_{V_{(1,c-1,c^{k-1})},\psi'}(\rho)=0$. The latter holds by \cite[Theorems~A, E]{GGS} because $\rho$ is $(k,c)$. We conclude \eqref{eq:Z chi vanishing} for any nontrivial $\chi'$ and any block $Z'$. For the archimedean case, again the result is trivial if $l=0$ and $c$ is even. 
The (abelian) Lie algebra $\mathfrak{z}$ of $Z$ decomposes into the direct sum of one-dimensional Lie algebras $\mathfrak{z}_{i,j}$, corresponding to the coordinates $Z_{i,j}$ of $Z$ (which can be identified with roots of $\GL_{kc}$). For each $(i,j)$, there is a subgroup of \eqref{eq:b1 b2 b3} which acts on the characters of $Z_{i,j}$ with $2$ orbits; one can identify this subgroup with $T_{\GL_2}$. Then we can proceed as in the non-archimedean case and prove $J_{V_{\beta}\ltimes Z_{i,j},\psi_{V_{\beta}}^{-1}\otimes\chi'}(\rho)=0$ for any nontrivial character $\chi'$, where to deduce this from the vanishing of $J_{V_{(1,c-1,c^{k-1})},\psi'}(\rho)=0$ for all $\psi'$ we apply \cite[Corollary~3.0.2]{GGS2} (instead of \cite{BZ1}). By the transitivity of the Jacquet functor, this implies that there are no continuous distributions on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ that transform on the left under $Z_{i,j}$ by $\chi'$, i.e., $(J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)^*)^{(Z_{i,j},\chi')}=0$. Hence by \cite[Proposition 3.0.1]{GGS2}, $\mathfrak{z}_{i,j}$ acts locally nilpotently on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)^*$. Note that for the proof of \eqref{eq:Z chi vanishing} above we really used only one coordinate, the bottom left one, for each block, and using conjugation we can assume this coordinate is $Z_{i,j}$. We deduce that each $\mathfrak{z}_{i,j}$ acts locally nilpotently, hence so does $\mathfrak{z}$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:Jacquet module is a finite length}] Consider a non-archimedean field. We prove that $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ admits a finite length filtration as a representation of ${}^h(1,\GL_{n-l})$, and on each constituent ${}^h(1,C_{\GL_{n-l}})$ acts by a character, by showing $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ factors through a Jacquet module along a unipotent radical of a certain parabolic subgroup, with respect to a trivial character. After conjugating $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ by \begin{align*} &\kappa=\diag(\left(\begin{smallmatrix}&&I_{n-l}\\&I_{n}\\I_l\\&&&I_{c-2n}\end{smallmatrix}\right),I_{k(c-1)}), \end{align*} we regard $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ as a representation of ${}^{\kappa}({}^h(1,\GL_{n-l}))=\diag(\GL_{n-l},I_{kc-(n-l)})$. In addition, this conjugation only changes the restriction of $\psi_{V_{\beta}}^{-1}$ to $b_k$, now given by \begin{align*} \psi(\tr(b_{k}\left(\begin{smallmatrix}0&0&-\epsilon_0I_l&0&0\\0&\epsilon_0I_{n-l}&0&0&0\\0&0&0&0&I_{c-2n}\\0_{n-l}&0&0&0&0\\0&0&0&I_l&0\end{smallmatrix}\right))) \end{align*} (use \eqref{psi_U on V beta d1 n-l}). Now $\psi_{V_{\beta}}^{-1}$ is trivial on the top $n-l$ rows of $b_k$, hence $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ is a representation of $V_{(n-l,c-(n-l))}$, which we identify with its image in the top left $c\times c$ block of $\GL_{kc}$. We claim \begin{align}\label{eq:Jacquet admissible Levi non archimedean} J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)=J_{V_{(n-l,c-(n-l),c^{k-1})},\psi_{V_{\beta}}^{-1}}(\rho). \end{align} Here $\psi_{V_{\beta}}^{-1}$ is extended trivially to $V_{(n-l,c-(n-l))}$. Before proving \eqref{eq:Jacquet admissible Levi non archimedean}, we explain how it leads to the result. Because $V_{(n-l,kc-(n-l))}<V_{(n-l,c-(n-l),c^{k-1})}$, the right hand side of \eqref{eq:Jacquet admissible Levi non archimedean} becomes \begin{align*} J_{\diag(I_{n-l},V_{(c-(n-l),c^{k-1})}),\psi_{V_{\beta}}^{-1}}(J_{V_{(n-l,kc-(n-l))}}(\rho)). 
\end{align*} Since $\rho$ is an admissible finite length representation of $\GL_{kc}$, $J_{V_{(n-l,kc-(n-l))}}(\rho)$ is an admissible finite length representation of $M_{(n-l,kc-(n-l))}$. As such, it admits a finite filtration with irreducible admissible constituents. On each constituent $\mathcal{V}$, $C_{M_{(n-l,kc-(n-l))}}$ acts by a character, and because ${}^{\kappa h}(1,C_{\GL_{n-l}})<C_{M_{(n-l,kc-(n-l))}}$, ${}^{\kappa h}(1,C_{\GL_{n-l}})$ also acts by a character. Note that $\mathcal{V}$ may certainly be reducible (or not admissible) as a representation of ${}^{\kappa h}(1,\GL_{n-l})$. By the exactness of the Jacquet functor, $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ admits a finite filtration where on each constituent ${}^{\kappa h}(1,C_{\GL_{n-l}})$ still acts by the same character. This completes the proof of the main assertion --- part~\eqref{p1} --- for the non-archimedean case. Regarding \eqref{p3}, when $c=2$, $\rho=\rho_c(\tau)$ for an irreducible supercuspidal representation of $\GL_k$ and $k>1$, the Jacquet module $J_{V_{(n-l,kc-(n-l))}}(\rho)$ vanishes since $n-l=1$ (\cite[2.13 (a)]{BZ2}). Write $v\in V_{(n-l,c-(n-l))}$ in the form $v=(v_1,v_2,v_3,v_4)$ with $v_1\in\Mat_{n-l}$, $v_2,v_3\in\Mat_{n-l\times l}$ and $v_4\in\Mat_{n-l\times c-2n}$. The group ${}^{\kappa}[x_1,x_2,x_3,x_4]$ (see \eqref{eq:b1 b2 b3}), together with the group $\GL_{n-l}$ embedded in $M_{(c^k)}$ by $x_5\mapsto\diag(I_{n-l},x_5,I_{2l+c-2n})^{\triangle}$, stabilizes $\psi_{V_{\beta}}^{-1}$ and acts on the set of characters of $V_{(n-l,c-(n-l))}$ with infinitely many orbits, but only $2$ on each block $v_i$ separately. Using the transitivity of the Jacquet functor and \cite[5.9--5.12]{BZ1}, \eqref{eq:Jacquet admissible Levi non archimedean} follows at once if we prove separately, that for each block $Z'=v_i$ and nontrivial character $\chi'$ of $Z'$, \begin{align}\label{eq:V chi vanishing} J_{V_{\beta}\ltimes Z',\psi_{V_{\beta}}^{-1}\otimes\chi'}(\rho)=0. \end{align} Let $\varphi$ denote the transpose of the nilpotent element defined by the character $\psi_{V_{\beta}}^{-1}\otimes\chi'$ of $V_{\beta}\ltimes Z'$. We prove $\varphi$ is nilpotent of order at least $k+1$, then since $\rho$ is $(k,c)$, \cite[Theorems~A, E]{GGS} imply \eqref{eq:V chi vanishing}. First we show that, after possibly a suitable conjugation, $\varphi$ is nontrivial on the $(1,c)$-th coordinate, and all other blocks remain unchanged except the block $b_k$, where there is a nonzero entry on the bottom right coordinate. We can assume the top right coordinate of $Z'$ in $\varphi$ is nonzero, and it is the only nonzero entry on that column. If $Z'=v_4$, the $(1,c)$-th entry of $\varphi$ is nonzero. Using conjugation by an element of $\diag(I_c,\GL_c^{\triangle'})$, the $(c,n+1)$-th coordinate of $b_k$ becomes its $(c,c)$-th coordinate, and the other blocks $b_{k+1},\ldots,b_{2k-2}$ are unchanged. For $Z'=v_2$, we conjugate $\varphi$ by \begin{align*} \diag(\left(\begin{smallmatrix}I_{n-l}\\&I_{n-l}\\&&&I_l\\&&I_l\\&&&&I_{c-2n}\end{smallmatrix}\right), \left(\begin{smallmatrix}&&I_{l}\\&I_{c-2l}\\I_l\end{smallmatrix}\right)^{\triangle'}), \end{align*} and for $Z'=v_1$ we conjugate by \begin{align*} \diag(\left(\begin{smallmatrix}I_{n-l}\\&&&I_l\\&&I_l\\&I_{n-l}\\&&&&I_{c-2n}\end{smallmatrix}\right), \left(\begin{smallmatrix}I_{l}\\&&&I_{l}\\&&I_{c-n-l}&\\&I_{n-l}\end{smallmatrix}\right)^{\triangle'}). 
\end{align*} Both conjugations preserve $b_{k+1},\ldots,b_{2k-2}$, the $(1,2n)$-th coordinate of $\varphi$ becomes nontrivial, and the rightmost column of $b_k$ has (precisely) one nontrivial coordinate, which is the $(c-1,c)$-th coordinate if $c$ is odd, otherwise it is the bottom coordinate (the $(c,c)$-th coordinate). For an even $c$, $\varphi$ is of the prescribed form; if $c$ is odd, using another conjugation by $\diag(I_{c-2},\left(\begin{smallmatrix}&1\\1\end{smallmatrix}\right),I_{(k-1)c})$, the $(c-1,c)$ entry of block $b_k$ is permuted into its bottom right coordinate, and the $(1,2n)$-th coordinate of $\varphi$ becomes its $(1,c)$-th coordinate. We conclude that in all cases of $Z'$, when $\chi'$ is nontrivial, the $(1,c)$-th coordinate of $\varphi$ and the bottom right coordinate of each block $b_k,\ldots,b_{2k-2}$ of $\varphi$ are nonzero (these coordinates are all $1$ except for $b_k$, where the coordinate is $\pm1$), and the corresponding nonzero entry is the unique one in its column. Thus as in the proof of Lemma~\ref{lemma:Jacquet module is a trivial rep of U_R} (and again considering all extensions to characters of $V_{(1,c-1,c^{k-1})}$ in order to ``adjust" $V_{\beta}\ltimes Z'$ to a unipotent radical), $\varphi$ is nilpotent of order at least $k+1$ and \eqref{eq:V chi vanishing} follows. Over archimedean fields, as in the proof of Lemma~\ref{lemma:Jacquet module is a trivial rep of U_R}, we deduce that the Lie algebra $\mathfrak{v}_{(n-l,c-(n-l))}$ of $V_{(n-l,c-(n-l))}$ acts locally nilpotently on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)^*$ by carrying out the proof of \eqref{eq:V chi vanishing} and applying \cite[Proposition~3.0.1]{GGS2} separately for each coordinate of each $v_i$. Let $\mathfrak{v}'$ denote the Lie algebra of the unipotent subgroup $V_{(n-l,kc-(n-l))}\cap V_{\beta}$. Since $\mathfrak{v}'$ acts trivially on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$, $\mathfrak{v}_{(n-l,kc-(n-l))}=\mathfrak{v}_{(n-l,c-(n-l))}\oplus \mathfrak{v}'$ acts locally nilpotently on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)^*$. \end{proof} \begin{example} Consider $c=4$, $k=2$, $G=\Sp_4$, $H=\Sp_{16}$ and $l=1$. Assume $d_1=n-l=1$. Then $w\in M_P(0^3,1,w_2)$, $w_2=(1^4)$ and $u=\ell_0(u)\in H_0$ is given by \begin{align*} u=\diag(I_4, \left(\begin{smallmatrix} 1&&1\\&1\\&&1\end{smallmatrix}\right),I_2,\left(\begin{smallmatrix} 1&&-1\\&1\\&&1\end{smallmatrix}\right),I_4). \end{align*} The element $w$ is given by \eqref{eq:example k=2,3 final w for k=2}, note that $\gamma_1=\diag(\left(\begin{smallmatrix} &I_4 \\I_2\end{smallmatrix}\right),I_2)\left(\begin{smallmatrix}& I_6 \\I_2 & \end{smallmatrix}\right)$ (see Remark~\ref{remark:convenient computations of V beta}). We have $\beta=(4^2)$, and if we write an element of $V_{\beta}$ in the form $v=\left(\begin{smallmatrix} I_4 & x \\&I_4\end{smallmatrix}\right)$, $\psi_{V_{\beta}}(v)=\psi(-x_{1,4}+x_{2,2}-x_{3,1})$. The Jacquet module $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ is a representation of ${}^h(1,U_R)$ with \begin{align*} &U_R=\left\{\left(\begin{smallmatrix} 1 & & u_1 & 0 \\ u_2 & 1 & u_3 & u_1 \\ & &1 & \\ & & -u_2 & 1 \end{smallmatrix}\right)\right\}. \end{align*} We see that for $z\in U_R$, \begin{align*} \qquad{}^{u}(1,z)=\diag(I_4,\left(\begin{smallmatrix} 1 & & & & u_1 & & & \\ & 1 & & & & & & \\ & & 1 & & u_1 & & & \\ & & u_2 & 1 & u_3 & u_1 & & u_1 \\ & & & & 1 & & & \\ & & & & -u_2 & 1 & & \\ & & & & & & 1 & \\ & & & & & & & 1 \end{smallmatrix}\right),I_4). 
\end{align*} Then ${}^h(1,z)=m_zz'$ with $m_z\in M_P$ and $z'\in U_P$, and such that as an element of $\GL_8$, \begin{align*} m_z=\diag(\left(\begin{smallmatrix} 1 & & & \\ & 1 & & \\ & &1 & \\ u_1 & & u_2 & 1 \end{smallmatrix}\right),I_4). \end{align*} Denote the subgroup of elements of this form by $Z$, then $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ is a representation of $Z$. We proceed over non-archimedean fields. To show that $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ is a trivial representation of ${}^h(1,U_R)$ amounts to proving $J_{V_{\beta}\ltimes Z,\psi_{V_{\beta}}^{-1}\otimes\chi}(\rho)=0$ for any nontrivial character $\chi$ of $Z$. Combining $Z$ and $V_{\beta}$ together we are considering the following unipotent subgroup and character: \begin{align*} \{v=\left(\begin{smallmatrix} 1 & & & & x_{1,1} & x_{1,2} &x_{1,3} & x_{1,4} \\ & 1 & & & x_{2,1} & x_{2,2} &x_{2,3} & x_{2,4} \\ & & 1 & & x_{3,1} & x_{3,2} &x_{3,3} & x_{3,4} \\ u_1 & & u_2 & 1 & x_{4,1} & x_{4,2} &x_{4,3} & x_{4,4} \\ & & & & 1 & & & \\ & & & & & 1 & & \\ & & & & & & 1 & \\ & & & & & & & 1 \end{smallmatrix}\right)\},\qquad (\psi_{V_{\beta}}^{-1}\otimes\chi)(v)=\psi(x_{1,4}-x_{2,2}+x_{3,1}+\alpha_1 u_1+\alpha_2 u_2). \end{align*} We have an action of $T_{\GL_2}$ on $u_1$ and $u_2$ separately, given by $\diag(x_1,I_2,x_3,I_3,x_1)$ (for $u_1$) and $\diag(I_2,x_2,x_3,x_2,I_3)$. When considering each coordinate separately, there are $2$ orbits. The corresponding $\varphi$ takes the form \begin{align*} \varphi=\left(\begin{smallmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ \alpha_1 & 0 & \alpha_2 & 0 & 0 & 0 & 0 & 0 \\ & & & & 0 & & & \\ & & & & & 0 & & \\ & & & & & & 0 & \\ & & & & & & & 0 \end{smallmatrix}\right), \end{align*} and a nontrivial $\chi$ means $(\alpha_1,\alpha_2)\ne(0,0)$. Using conjugations by $\diag(J_3,I_5)$ and by $\diag(I_4,g)$ for a permutation matrix $g\in\GL_4$ if necessary, we can assume the $(4,1)$-th and $(1,8)$-th coordinates of $\varphi$ are nonzero, then conjugating by $\diag(\left(\begin{smallmatrix}& & 1 \\& I_2 & \\1 & & \end{smallmatrix}\right),I_4)$, we see that $\varphi$ is nilpotent of order at least $3$. Thus $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ is a trivial representation of ${}^h(1,U_R)$ ($\rho$ is $(k,c)=(2,4)$). The Jacquet module $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ is also a representation of \begin{align*} {}^h(1,\GL_{n-l})=\{\diag(I_3,a,I_4):a\in F^*\}. \end{align*} Conjugating by $\kappa=\diag(\left(\begin{smallmatrix}&&1\\&I_2\\1\end{smallmatrix}\right),I_4)$, we can regard $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ as a representation of $\diag(V_{(1,3)},I_4)$. Combining the coordinates of $V_{(1,3)}$ and $V_{\beta}$, we then have \begin{align*} \left\{v=\left(\begin{smallmatrix} 1 & v_1 & v_2 & v_3 & x_{4,1} & x_{4,2} &x_{4,3} & x_{4,4} \\ & 1 & & & x_{2,1} & x_{2,2} &x_{2,3} & x_{2,4} \\ & & 1 & & x_{3,1} & x_{3,2} &x_{3,3} & x_{3,4} \\ & & & 1 & x_{1,1} & x_{1,2} &x_{1,3} & x_{1,4} \\ & & & & 1 & & & \\ & & & & & 1 & & \\ & & & & & & 1 & \\ & & & & & & & 1 \end{smallmatrix}\right)\right\}, \end{align*} and note that we permuted the coordinates of $x_{i,j}$ (so that $\psi_{V_{\beta}}^{-1}$ remains as above). The tensor product of an arbitrary character $\chi$ of $V_{(1,3)}$ with $\psi_{V_{\beta}}^{-1}$ takes the form \begin{align*} (\psi_{V_{\beta}}^{-1}\otimes\chi)(v)=\psi(x_{1,4}-x_{2,2}+x_{3,1}+\vartheta_1 v_1+\vartheta_2 v_2+\vartheta_3 v_3). 
\end{align*} As above, we have the action of $T_{\GL_2}$ on each of the coordinates $v_i$: $\diag(x_3,x_5,I_3,x_5,I_2)$ for $v_1$, $\diag(x_3,1,x_2,1,x_2,I_3)$ for $v_2$, and $\diag(x_3,I_2,x_1,I_3,x_1)$ for $v_3$. The corresponding $\varphi$ is then nilpotent of order at least $3$, whenever $(\vartheta_1,\vartheta_2,\vartheta_3)\ne(0,0,0)$, i.e., whenever $\chi$ is nontrivial. This is immediate when $\vartheta_3\ne0$, using $x_{1,4}$. If $\vartheta_3=0$ and $\vartheta_2\ne0$, we conjugate by $\diag(I_2,J_2,\left(\begin{smallmatrix}&&1\\&I_2\\1\end{smallmatrix}\right))$ and use $x_{3,1}$, otherwise $\vartheta_1\ne0$ and we conjugate by $\diag(1,J_3,1,J_3)$ and use $x_{2,2}$. Therefore $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ factors through $V_{(1,7)}$, so that the original module $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ factors through ${}^{\kappa^{-1}}V_{(1,7)}$. Since $\det(\diag(I_3,a,I_4))=a$, we can use \eqref{eq:relation for T with s} to deduce $\mathcal{H}(h)=0$. \end{example} Propositions~\ref{proposition:structure of w u}--\ref{proposition:d1 = n-l l < n} imply $\mathcal{H}(h)=0$ for all $h$ such that $h\not\sim\delta$. Finally consider $h=w({}^{\jmath_n}u_n)\sim\delta$. We prove that for all $s$, $\dim\mathcal{H}(\delta)=\dim\Hom_{G}(\pi_1^{\vee},\pi_2^{\iota})$. In this case $P_{\delta}=(G^{\iota}\times C_H^{\circ})\ltimes V_{(c^k)}$, where $G^{\iota}=\{(g,{}^{\iota}g):g\in G\}$, $\psi_{V_{\beta}}^{-1}$ belongs to the orbit of $\psi_k$ (the choice of $\delta$ of \cite{CFK} gives precisely $\psi_k$), and any morphism in $\mathcal{H}(\delta)$ factors through $J_{V_{(c^k)},\psi_k}(\rho)$. Note that $C_H^{\circ}$ is trivial unless $H=\GSpin_{2kc}$, in which case $C_H^{\circ}<P_{\delta}$ because $C_G^{\circ}$ is mapped by the embedding $g\mapsto (g,1)$ bijectively into $C_H^{\circ}$ (see also \eqref{embedding G G in H GSpin}). Therefore \begin{align*} \mathcal{H}(\delta)&=\Hom_{G^{\iota}\times C_H^{\circ}}({}^{\delta^{-1}}J_{V_{(c^k)},\psi_k}(\rho)\otimes\eta\otimes\pi_1^{\vee}\otimes\pi_2^{\vee},1). \end{align*} Here $|\det|^{s-1/2}$ and $\theta_h$ are absent because they are trivial on $G^{\iota}\times C_H^{\circ}$. For $H=\GSpin_{2kc}$ we assumed $\chi_{\pi_1},\chi_{\pi_2}$ exist; then $\mathcal{H}(\delta)=0$ unless $\eta=\chi_{\pi_1}^{-1}=\chi_{\pi_2}$, because for $z_1,z_2\in C_G^{\circ}$, $(z_1,z_2)$ is the element $z_1^{-1}z_2$ of $C_H^{\circ}$. When this condition holds, we can finally ignore $C_H^{\circ}$ and $\eta$ altogether. Recall that $\GL_c^{\triangle}$ denotes the diagonal embedding of $\GL_c$ in $M_{(c^k)}$, and $J_{V_{(c^k)},\psi_k}(\rho)$ is a trivial representation of $\SL_c^{\triangle}$. Since ${}^{\delta}G^{\iota}<\SL_c^{\triangle}$ (for $H\ne\GSpin_{2kc}$, ${}^{\delta}G^{\iota}=G^{\triangle}$), the action of $G^{\iota}$ on ${}^{\delta^{-1}}J_{V_{(c^k)},\psi_k}(\rho)$ is trivial, and because $\dim J_{V_{(c^k)},\psi_k}(\rho)=1$ (see \S~\ref{kc representations}), \begin{align*} \Hom_{G^{\iota}}({}^{\delta^{-1}}J_{V_{(c^k)},\psi_k}(\rho)\otimes\pi_1^{\vee}\otimes\pi_2^{\vee},1) =\Hom_{G}(\pi_1^{\vee}\otimes(\pi_2^{\vee}){}^{\iota},1)=\Hom_{G}(\pi_1^{\vee},\pi_2^{\iota}). \end{align*} This completes the proof of the first part of the theorem. For the second part, clearly when $\pi_1$ and $\pi_2$ are irreducible, $\dim\Hom_{G}(\pi_1^{\vee},\pi_2^{\iota})\leq1$ and is zero unless $\pi_1=(\pi_2^{\iota})^{\vee}$.
Under the assumptions of \ref{part3} (e.g., $\pi_2$ is supercuspidal and $c>2$), we do not need to exclude any $s$, by Proposition~\ref{proposition:d1 = n-l l < n} (which is the only case where the vanishing depends on $s$). \subsection{The case $H=\GL_{2kc}$}\label{section GL} Write $P\backslash H/ D=\coprod_hPhD$ with $h=wu$, where $w$ is a representative from $W(M_P)\backslash W(H)$ and $u\in M_Q\cap N_H$. Throughout this section we fix the standard identification of $W(H)$ with permutation matrices in $H$. One can still describe $w$ as a $2kc$-tuple of $\{0,1\}$: if the $i$-th coordinate of $w$ is $1$, $w$ permutes the $i$-th row into one of the first $kc$ rows, and if it is $0$, then $w$ permutes this row into one of the last $kc$ rows. Of course only vectors whose total sum of coordinates is $kc$ are permissible ($U_P$ contains $kc$ nontrivial rows). Let $p_0(w)$ denote the middle $2c$ coordinates of $w$, and $p_1(w)$ (resp., $p_2(w)$) denote the first (resp., last) $(k-1)c$ coordinates. Also note that in general, $w$ permutes row $i$ into $U_P$ if and only if $w^{-1}$ permutes column $i$ out of $U_P$. For the case $k=1$, we can parameterize $P\backslash H/(G\times G)$ using the elements \begin{align*} (0^l,1^{c-l},0^{c-l},1^l)u_{l,j}, \qquad 0\leq l\leq c, \qquad 0\leq j\leq l, \end{align*} where \begin{align*} (0^l,1^{c-l},0^{c-l},1^l)=\left(\begin{smallmatrix}&&I_{l}\\&I_{2(c-l)}\\I_{l}\end{smallmatrix}\right),\qquad u_{l,j}=\left(\begin{smallmatrix}I_{l}&&A_{l,j}\\&I_{2(c-l)}\\&&I_{l}\end{smallmatrix}\right) , \quad A_{l,j}=\left(\begin{smallmatrix}&I_j\\0\end{smallmatrix}\right). \end{align*} The choice of matrix for $(0^l,1^{c-l},0^{c-l},1^l)$ is not canonical, but can be used for convenience. Assume $k\geq1$. Recall \begin{align*} M_Q=\GL_c\times\ldots\times\GL_c\times H_0\times\GL_c\times\ldots\times\GL_c,\qquad H_0=\GL_{2c}. \end{align*} Given $x\in M_Q$, denote its projection into the left (resp., right) direct product of $k-1$ copies of $\GL_c$ by $\ell_1(x)$ (resp., $\ell_2(x)$), put $\ell(x)=\ell_1(x)\ell_2(x)$, and let $\ell_0(x)$ be the projection into $H_0$. We have the analogs of \eqref{eq:conj x by g1 g2} and \eqref{eq:conj x by g2}, in particular since $(1,G)<H_0$, conjugation by elements of $(1,G)$ does not affect $\ell(x)$. \begin{proposition}\label{proposition:GL structure of w u} Let $h=wu$, where $w$ is a representative from $W(M_P)\backslash W(H)$ and $u\in M_Q\cap N_H$. Then $h\sim\hat{w}\hat{u}$, where $p_0(\hat{w})=(0^{l},1^{c-l},0^{c-l'},1^{l'})$ for some $0\leq l,l'\leq c$, $\hat{u}\in M_Q$, there is $\sigma\in(W(G),W(G))$ with ${}^{\sigma}\hat{u}\in M_Q\cap N_H$, and $\ell_0(\hat{u})$ takes the form \begin{align}\label{eq:GL second form u final} \left(\begin{smallmatrix}I_{l}&&X\\&I_{2c-l-l'}\\&&I_{l'}\end{smallmatrix}\right), \end{align} for some $X$. \end{proposition} \begin{proof} Identify $N_{\GL_c}\times N_{\GL_c}$ with its image in $M_{(c,c)}<H_0$. Since $N_{\GL_c}\times N_{\GL_c}<\ell_0((G,G))$, one can assume $\ell_0(u)\in V_{(c,c)}$. Through this $\ell(u)$ is multiplied by an element in $M_Q\cap N_H$. One can find $\sigma\in(W(G),W(G))$ such that $w\sigma=w_1$ satisfies $p_0(w_1)=(0^{l},1^{c-l},0^{c-l'},1^{l'})$ for some $0\leq l,l'\leq c$. Denote $u_1={}^{\sigma^{-1}}u$. Since $\ell_0((G,G))<M_{(c,c)}$, $\ell_0(u_1)\in V_{(c,c)}$. Also $\ell(u_1)\in M_Q$, but might not be in $N_H$. Then $wu\sim wu\sigma=w_1u_1$. 
Write the top right $c\times c$ block of $\ell_0(u_1)$ in the form $\left(\begin{smallmatrix}X^1 & X^2 \\X^3 & X^4\end{smallmatrix}\right)$, $X^2\in\Mat_{l\times l'}$. Now $w_1$ conjugates the blocks $X^i$, $i\ne2$, into $P$. Hence if $u_2=z^{-1}u_1$ with $z\in V_{(c,c)}$ defined by these blocks, $w_1u_1=w_1zu_2\sim w_1u_2$. Now $\ell_0(u_2)$ takes the form \eqref{eq:GL second form u final}. Note that $\ell(u_2)=\ell(u_1)$, whence $\ell({}^{\sigma}u_2)=\ell({}^{\sigma}u_1)=\ell(u)\in M_Q\cap N_H$, and $\ell_0({}^{\sigma}u_2)\in N_{H_0}$ because $\ell_0((G,G))<M_{(c,c)}$. Thus ${}^{\sigma}u_2\in M_Q\cap N_H$. Then $\hat{w}=w_1$ and $\hat{u}=u_2$ satisfy the required properties. \end{proof} \begin{lemma}\label{lemma:GL easier condition on psiU} Let $h=wu$, where $w$ and $u$ are given by Proposition~\ref{proposition:GL structure of w u}. Assume \begin{align}\label{GL psi U nontrivial using only w} \psi_U|_{U\cap {}^{w^{-1}}U_P}\ne1. \end{align} Then \eqref{psi U nontrivial} also holds, i.e., $\psi_U|_{U\cap {}^{h^{-1}}U_P}\ne1$. \end{lemma} \begin{proof} The proof is a repetition of the proof of Lemma~\ref{lemma:easier condition on psiU}, with only one case to consider. In the notation of that proof, we only have to consider the case where the coordinate $(i,j)$ defining $Y$ belongs to a block $B\in\Mat_c$. Then ${}^{\sigma}Y$ is also defined by a coordinate in the same block $B$. One change here is that $\sigma\in (W(G),W(G))$ instead of $(W(G),1)$, but this does not make any difference. In fact, $(1,G)$ commutes with all of the blocks of $U$ where $\psi_U$ is nontrivial. \end{proof} Re-denote $h=wu$ where $w$ and $u$ satisfy the properties of Proposition~\ref{proposition:GL structure of w u}. In particular $w$ defines the integers $0\leq l',l\leq c$. Write \begin{align*} w=(w^1_{k},\ldots,w^1_{1},w^2_{1},\ldots,w^2_{k}),\qquad \forall i,j,w^j_i\in\{0,1\}^c. \end{align*} With this notation \begin{align*} p_0(w)=(w^1_1,w^2_1),\qquad w^1_1=(0^{l},1^{c-l}),\qquad w^2_1=(0^{c-l'},1^{l'}). \end{align*} \begin{proposition}\label{proposition:GL 1st reduction of w} We have $\mathcal{H}(h)=0$ unless \begin{align}\label{eq:GL w_i first reduction} w^1_i=(0^l,*^{c-l}),\qquad w^2_i=(*^{l},1^{c-{l}}),\qquad \forall 1<i\leq k. \end{align} \end{proposition} \begin{proof} For $k=1$ there is nothing to prove, assume $k>1$. Since $w^1_1=(0^{l},1^{c-{l}})$, conjugation by $w$ leaves the last $c-{l}$ rows of $u^{2,2}$ (see \eqref{matrix:middle 4c block of u}) in $U_P$, these are rows \begin{align*} (k-1)c+{l}+1,\ldots,kc. \end{align*} The character $\psi_U$ is nontrivial on the bottom right $c-{l}\times c-{l}$ block of $u^{2,2}$, whence $\mathcal{H}(h)=0$ by \eqref{GL psi U nontrivial using only w}, unless $w^{-1}$ permutes the last $c-{l}$ columns of $u^{2,2}$, columns \begin{align*} (k+1)c+{l}+1,\ldots,(k+2)c, \end{align*} outside of $U_P$. This means $w$ permutes rows \begin{align*} (k+1)c+{l}+1,\ldots,(k+2)c \end{align*} into $U_P$, i.e., $w^2_2=(*^{l},1^{c-{l}})$. But these are also the last $c-{l}$ rows of the block $v_{1,2}$ of the bottom right copy of $V_{(c^{k-1})}$ in $U$. 
Since $\psi_U$ restricts to $\psi\circ\tr$ on $v_{1,2}$, $\mathcal{H}(h)=0$ by \eqref{GL psi U nontrivial using only w}, unless $w^{-1}$ permutes the last $c-{l}$ columns of $v_{1,2}$, columns \begin{align*} (k+2)c+{l}+1,\ldots,(k+3)c, \end{align*} outside of $U_P$, so that $w$ permutes rows \begin{align*} (k+2)c+{l}+1,\ldots,(k+3)c \end{align*} into $U_P$, $w^2_3=(*^{l},1^{c-{l}})$, and these are the bottom $c-{l}$ rows of $v_{2,3}$. Proceeding similarly ($\psi_U$ is $\psi\circ\tr$ on $v_{j,j+1}$) we obtain $w^2_i=(*^{l},1^{c-{l}})$ for all $1<i\leq k$. In addition, $w^1_1=(0^{l},1^{c-{l}})$ implies $w^{-1}$ permutes the first $l$ columns of $u^{1,1}$, columns \begin{align*} (k-1)c+1,\ldots,(k-1)c+l, \end{align*} into $U_P$. Since $\psi_U$ restricts to $\psi\circ\tr$ on $u^{1,1}$, $\mathcal{H}(h)=0$ by \eqref{GL psi U nontrivial using only w}, unless $w$ permutes the first $l$ rows of $u^{1,1}$ outside of $U_P$, these are rows \begin{align*} (k-2)c+1,\ldots,(k-2)c+l, \end{align*} and we obtain $w^1_2=(0^l,*^{c-l})$. Then $w^{-1}$ permutes the first $l$ columns of the block $v_{k-1,k}$ of the top left copy of $V_{(c^{k-1})}$ in $U$, columns \begin{align*} (k-2)c+1,\ldots,(k-2)c+l \end{align*} into $U_P$, so that $\mathcal{H}(h)=0$ by \eqref{GL psi U nontrivial using only w}, unless $w$ permutes the first $l$ rows of $v_{k-1,k}$, rows \begin{align*} (k-3)c+1,\ldots,(k-3)c+l, \end{align*} outside of $U_P$, i.e., $w^1_3=(0^l,*^{c-l})$. Similarly, we deduce $w^1_i=(0^l,*^{c-l})$ for all $1<i\leq k$. \end{proof} For each $1<i\leq k$, let $0\leq d^1_{i-1}\leq c-l$ and $0\leq d^2_{i-1}\leq l$ be maximal such that for all $1<i\leq k$, \begin{align*} w^1_i=(0^{l+d^1_{i-1}},*^{c-l-d^1_{i-1}}),\qquad w^2_i=(*^{l-d^2_{i-1}},1^{c-l+d^2_{i-1}}). \end{align*} The integer $d^j_{i-1}$ is defined since $w^j_i$ takes the form \eqref{eq:GL w_i first reduction}. \begin{proposition}\label{proposition:GL 2nd reduction of w} We have $\mathcal{H}(h)=0$ unless $h\sim \hat{w}\hat{u}$, $p_0(\hat{w})=(0^l,1^{c-l},0^{c-l'},1^{l'})$, for each $1<i\leq k$, \begin{align}\label{eq:GL w_i second reduction} w^1_i=(0^{l+d^1_{i-1}},1^{c-l-d^1_{i-1}}),\quad w^2_i=(0^{l-d^2_{i-1}},1^{c-l+d^2_{i-1}}),\quad d^1_1\leq\ldots\leq d^1_{k-1}, \quad d^2_1\leq\ldots\leq d^2_{k-1}, \end{align} and $\hat{u}$ satisfies the conditions of Proposition~\ref{proposition:GL structure of w u}, in particular $\ell_0(\hat{u})$ takes the form \eqref{eq:GL second form u final}. \end{proposition} \begin{proof} For each $1<i\leq k$, put $w^2_i=((w^2)'_i,1^{c-l})$ with $(w^2)'_i\in\{0,1\}^l$. Let $1\leq j\leq l$ and assume $1<i_0\leq k$ is minimal such that $(w^2)'_{i_0}[j]=1$. Assume $i>i_0$ is minimal with $(w^2)'_i[j]=0$. Since $(w^2)'_{i-1}[j]=1$, $w$ permutes row $(k+i-2)c+j$ into $U_P$. This row contains coordinates of a row from $v_{i-2,i-1}$, and $\psi_U$ is $\psi\circ\tr$ on $v_{i-2,i-1}$, so that on row $(k+i-2)c+j$ it is nontrivial on the column $(k+i-1)c+j$. Thus \eqref{GL psi U nontrivial using only w} implies $\mathcal{H}(h)=0$, unless $w^{-1}$ permutes column $(k+i-1)c+j$ outside of $U_P$, which means that $w$ permutes row $(k+i-1)c+j$ into $U_P$, contradicting the assumption $(w^2)'_i[j]=0$. Therefore $(w^2)'_i[j]=1$ for all $i\geq i_0$ (or $\mathcal{H}(h)=0$). Now we are in a situation similar to the proof of Proposition~\ref{proposition:2nd reduction of w}. If $i_0$ is minimal with $(w^2)'_{i_0}[j]=1$ and $(w^2)'_{i_0}[j+1]=0$, then for each $i>i_0$, either $(w^2)'_{i}[j]=1,(w^2)'_{i}[j+1]=0$ or $(w^2)'_{i}[j]=(w^2)'_{i}[j+1]=1$. 
Using transpositions from $(\diag(W(\GL_{l}),I_{c-l}),1)$, one can sort the coordinates of the blocks $w^2_i$ so that $d^2_1\leq\ldots\leq d^2_{k-1}$ holds. Any $b\in(\diag(W(\GL_{l}),I_{c-l}),1)$ fixes the last $c-l$ rows of $w^1_i$ and keeps the first $l$ rows of $w^1_{i}$ in $w^1_{i}$, for each $1<i\leq k$, and since $w^1_i$ starts with $(0^l)$, $p_1({}^{b}w)=p_1(w)$ (for brevity, we identify $w^1_i$ with the rows it is affecting: $(k-i)c+1,\ldots,(k-i)c+c$). Additionally $b$ fixes the last $2c-l$ rows of $p_0(w)$ while keeping the first $l$ rows in $p_0(w)$, thus $p_0({}^{b}w)=p_0(w)$. Similarly denote $w^1_i=(0^l,(w^1)'_i)$ with $(w^1)'_i\in\{0,1\}^{c-l}$, and consider $1\leq j\leq c-l$. Suppose $i_0>1$ is minimal with $(w^1)'_{i_0}[j]=0$ and $i>i_0$ is minimal with $(w^1)'_{i}[j]=1$. On the one hand $(w^1)'_{i-1}[j]=0$ hence $w$ permutes row $(k-i+1)c+j$ outside of $U_P$, so that $w^{-1}$ permutes column $(k-i+1)c+j$ into $U_P$. On the other hand $(w^1)'_{i}[j]=1$, whence $w$ permutes row $(k-i)c+j$ into $U_P$. Again $\mathcal{H}(h)=0$ because of \eqref{GL psi U nontrivial using only w}, otherwise we deduce that if $i_0$ exists, $(w^1)'_{i}[j]=0$ for all $i\geq i_0$. This means that unless $\mathcal{H}(h)=0$, if $i_0$ is minimal with $(w^1)'_{i_0}[j]=1$ and $(w^1)'_{i_0}[j+1]=0$, then for each $i>i_0$, either $(w^1)'_{i}[j]=1,(w^1)'_{i}[j+1]=0$ or $(w^1)'_{i}[j]=(w^1)'_{i}[j+1]=0$. Again we use transpositions, now from $(\diag(I_l,W(\GL_{c-l})),1)$, to rearrange the coordinates of the blocks $w^1_i$ and obtain $d^1_1\leq\ldots\leq d^1_{k-1}$. For $b\in(\diag(I_l,W(\GL_{c-l})),1)$, $b$ fixes the first $l$ rows of $w^2_{i}$ and leaves the last $c-l$ rows in $w^2_{i}$ for $1<i\leq k$, and because $w^2_{i}$ (still) ends with $(1^{c-l})$, $p_2({}^bw)=p_2(w)$. Also $p_0({}^bw)=p_0(w)$. Now condition \eqref{eq:GL w_i second reduction} holds, and note that the conjugations affect $u$, but it still satisfies the conditions of Proposition~\ref{proposition:GL structure of w u}. As opposed to the proof of Proposition~\ref{proposition:2nd reduction of w}, we do not claim $\ell_0(\hat{u})=\ell_0(u)$, it might not hold because $(\diag(W(\GL_{l}),I_{c-l}),1)$ does not commute with \eqref{eq:GL second form u final}. \end{proof} Re-denote $h=wu$, with $w$ and $u$ given by Proposition~\ref{proposition:GL 2nd reduction of w}. Since the total sum of coordinates of $w$ must be $kc$, we have \begin{align*} c-l+l'+\sum_{i=1}^{k-1}(c-l-d^1_i)+\sum_{i=1}^{k-1}(c-l+d^2_i)=kc, \end{align*} hence \begin{align}\label{eq:GL compatibility for w} \sum_{i=1}^{k-1}d^2_i-d^1_i=(k-1)(2l-c)+l-l'. \end{align} We can multiply $w$ on the left by an element of $W(M_P)$, so that the matrix corresponding to $w$ takes the form \begin{align*} &\left(\begin{smallmatrix} 0&&&&&&&&I_{c-l+d^2_{k-1}}\\ &I_{c-l-d^1_{k-1}}&&&&&&0&\\ &&0&&&&\iddots &&\\ &&&\ddots&&0&&&\\ &&&&\left(\begin{smallmatrix}&&I_{l}\\&I_{2c-l-l'}\\I_{l'}\end{smallmatrix}\right)&&&&\\ &&&0&&I_{l-d^2_1}&&&\\ &&\iddots&&&&0&\\ &0&&&&&&\ddots&\\ I_{l+d^1_{k-1}}&&&&&&&&0 \end{smallmatrix}\right).
\end{align*} For example if $k=2$, \begin{align*} w=\left(\begin{smallmatrix} 0 & 0 & 0 & 0 & 0 & 0 & I_{c-l+d^2_1} \\ 0 & I_{c-l-d^1_1} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & I_{l} & 0 & 0 \\ 0 & 0 & 0 & I_{2c-l-l'} & 0 & 0 & 0 \\ 0 & 0 & I_{l'} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & I_{l-d^2_1} & 0 \\ I_{l+d^1_1} & 0 & 0 & 0 & 0 & 0 & 0 \end{smallmatrix}\right), \end{align*} and for $k=3$, \begin{align*} w=\left(\begin{smallmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & I_{c-l+d^2_2} \\ 0 & I_{c-l-d^1_2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & I_{c-l+d^2_1} & 0 & 0 \\ 0 & 0 & 0 & I_{c-l-d^1_1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & I_{l'} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & I_{2c-l-l'} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & I_l & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & I_{l-d^2_1} & 0 & 0 & 0 \\ 0 & 0 & I_{l+d^1_1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & I_{l-d^2_2} & 0 \\ I_{l+d^1_2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{smallmatrix}\right). \end{align*} For $1\leq j\leq k-1$, denote \begin{align*} \gamma_j=&\diag(I_{\sum_{i=1}^{j-1}c-l-d^1_{k-i}},\left(\begin{smallmatrix}&I_{kc-(2j-1)(c-l)-d^2_{k-j} +\sum_{i=1}^{j-1}d^1_{k-i}-d^2_{k-i}}\\I_{c-l+d^2_{k-j}}\end{smallmatrix}\right),I_{\sum_{i=1}^{j-1}c-l+d^2_{k-i}})\in W(\GL_{kc}),\\ \gamma_j'=&\diag(I_{\sum_{i=1}^{j-1}l+d^1_{k-i}},\left(\begin{smallmatrix}&I_{l+d^1_{k-j}}\\ I_{kc-(2j-1)l-d^1_{k-j} +\sum_{i=1}^{j-1}d^2_{k-i}-d^1_{k-i}}\end{smallmatrix}\right),I_{\sum_{i=1}^{j-1}l-d^2_{k-i}})\in W(\GL_{kc}), \end{align*} and multiply $w$ on the left by $\diag(\gamma_{k-1}\cdot\ldots \cdot\gamma_1,\gamma_{k-1}'\cdot\ldots \cdot\gamma_1')$. Now we see that ${}^hU\cap M_P=V_{\beta}\times V_{\beta'}$, where $\beta$ and $\beta'$ are the compositions of $kc$ given by \begin{align*} &\beta=(c-l-d^1_{k-1},\ldots,c-l-d^1_{1},l'+c-l,c-l+d^2_{1},\ldots,c-l+d^2_{k-1}),\\ &\beta'=(l+d^1_{k-1},\ldots,l+d^1_{1},c-l'+l,l-d^2_{1},\ldots,l-d^2_{k-1}). \end{align*} Both $\beta$ and $\beta'$ are indeed compositions of $kc$, by \eqref{eq:GL compatibility for w}. Put $\psi_{V_{\beta}\times V_{\beta'}}={}^{h}\psi_U|_{V_{\beta}\times V_{\beta'}}$, $\psi_{V_{\beta}}={}^{h}\psi_U|_{V_{\beta}\times I_{kc}}$ and $\psi_{V_{\beta'}}={}^{h}\psi_U|_{I_{kc}\times V_{\beta'}}$, and note that \begin{align}\label{eq:GL psi beta factors through tensor} J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}}(\rho)= J_{V_{\beta},\psi_{V_{\beta}}}(\rho_1)\otimes J_{V_{\beta'},\psi_{V_{\beta'}}}(\rho_2). \end{align} We start with describing ${}^{w\ell_0(u)}\psi_U|_{V_{\beta}\times V_{\beta'}}$. For $v\in V_{\beta}$ and $v'\in V_{\beta'}$, write \begin{align*} &v=\left(\begin{smallmatrix}I_{c-l-d^1_{k-1}}&b_1&\cdots\\&\ddots&\ddots&\\ &&I_{c-l-d^1_2}&b_{k-2}&\cdots\\ &&&I_{c-l-d^1_1}&e&b_{k-1}&\cdots\\ &&&&I_{l'}&0&b_{k}&\cdots\\ &&&&&I_{c-l}&b_{k+1}&\cdots\\ &&&&&&\ddots&\ddots\\ &&&&&&&I_{c-l+d^2_{k-2}}&b_{2k-1}\\ &&&&&&&&I_{c-l+d^2_{k-1}} \end{smallmatrix}\right),\\ &v'=\left(\begin{smallmatrix}I_{l+d^1_{k-1}}&b'_1&\cdots\\&\ddots&\ddots&\\ &&I_{l+d^1_2}&b'_{k-2}&\cdots\\ &&&I_{l+d^1_1}&e'&b'_{k-1}&\cdots\\ &&&&I_{c-l'}&0&f'&\cdots\\ &&&&&I_{l}&b'_{k}&\cdots\\ &&&&&&\ddots&\ddots\\ &&&&&&&I_{l-d^2_{k-2}}&b'_{2k-2}\\ &&&&&&&&I_{l-d^2_{k-1}} \end{smallmatrix}\right). \end{align*} The dimensions of the blocks $b_i$ and $b'_i$ are clear, note that $b_k$ and $b_{k+1}$ (resp., $b'_k$) have $c-l+d^2_{1}$ (resp., $l-d^2_{1}$) columns. 
Then \begin{align}\label{GL psi_U on V beta 0} &{}^{w\ell_0(u)}\psi_U(\diag(v,I_{kc}))=\psi(-\sum_{j=k-1}^{2}\tr(b_{k-j}\left(\begin{smallmatrix}0_{d^1_j-d^1_{j-1} \times c-l-d^1_j} \\ I_{c-l-d^1_{j}}\end{smallmatrix}\right)) -\tr(b_{k-1}\left(\begin{smallmatrix}0_{d^1_1\times c-l-d^1_1}\\I_{c-l-d^1_1}\end{smallmatrix}\right)) \\&\qquad -\tr(b_{k}\left(\begin{smallmatrix}A(X)\\0_{c-l\times l'}\end{smallmatrix}\right)) +\tr(b_{k+1}\left( \begin{smallmatrix}0_{d^2_1\times c-l}\\I_{c-l}\end{smallmatrix}\right)) -\sum_{j=1}^{k-2}\tr(b_{k+1+j}\left(\begin{smallmatrix}0_{d^2_{j+1}-d^2_j\times c-l+d^2_{j}}\\I_{c-l+d^2_j}\end{smallmatrix}\right))),\nonumber \end{align} \begin{align}\label{GL psi_U on V beta' 0} {}^{w\ell_0(u)}\psi_U(\diag(I_{kc},v'))=&\psi(-\sum_{j=k-1}^{2}\tr(b'_{k-j}\left(\begin{smallmatrix} I_{l+d^1_{j-1}}&0_{l+d^1_{j-1}\times d^1_{j}-d^1_{j-1}}\end{smallmatrix}\right)) -\tr(b'_{k-1}\left(\begin{smallmatrix}I_l&0_{l\times d^1_1}\end{smallmatrix}\right)) \\&\quad +\tr(b'_{k}\left( \begin{smallmatrix}I_{l-d^2_1}&0_{l-d^2_1\times d^2_1}\end{smallmatrix}\right)) -\sum_{j=1}^{k-2}\tr(b'_{k+j}\left(\begin{smallmatrix}I_{l-d^2_{j+1}}&0_{l-d^2_{j+1}\times d^2_{j+1}-d^2_{j}}\end{smallmatrix}\right)))\nonumber. \end{align} In both formulas, the sums $\sum_{j=k-1}^{2}$ are omitted when $k=2$. The matrix $A(X)\in\Mat_{d^2_1\times l'}$ in \eqref{GL psi_U on V beta 0} is uniquely defined given the block $X$ of $\ell_0(u)$, and in particular $A(0)=0$ and when $d^2_1=l=l'$, $A(I_l)=I_l$. For $l=c$, we have $d^1_i=0$ for all $i$, and since $d^2_i\leq c$ and $l'\leq c$, \eqref{eq:GL compatibility for w} implies $d^2_i=c$ for all $i$ and $l=l'$, then \eqref{GL psi_U on V beta' 0} becomes $\psi_k^{-1}$, and when $X=I_c$, \eqref{GL psi_U on V beta 0} also becomes $\psi_k^{-1}$. \begin{proposition}\label{proposition:GL wu_0 nontrivial implies h nontrivial orbit} Assume $k>1$ and $\mathcal{H}(h)\ne0$. The character $\psi_{V_{\beta}}$ belongs to the orbit of \begin{align}\label{GL psi_U on V beta} v\mapsto&\psi(-\sum_{j=k-1}^{2}\tr(b_{k-j}*) -\tr(b_{k-1}\left(\begin{smallmatrix}*_{d^1_1\times c-l-d^1_1}\\I_{c-l-d^1_1}\end{smallmatrix}\right)) \\&\quad -\tr(b_{k}\left(\begin{smallmatrix}{*}_{d_1^2\times l'}\\0_{c-l\times l'}\end{smallmatrix}\right)) +\tr(b_{k+1}\left( \begin{smallmatrix}0_{d^2_1\times c-l}\\I_{c-l}\end{smallmatrix}\right)) -\sum_{j=1}^{k-2}\tr(b_{k+1+j}\left(\begin{smallmatrix}*_{d^2_{j+1}-d^2_j\times c-l+d^2_{j}}\\I_{c-l+d^2_j}\end{smallmatrix}\right))),\nonumber \end{align} and $\psi_{V_{\beta'}}$ belongs to the orbit of \begin{align}\label{GL psi_U on V beta'} v'\mapsto &\psi(-\sum_{j=k-1}^{2}\tr(b'_{k-j}\left(\begin{smallmatrix} I_{l+d^1_{j-1}}&{*}_{l+d^1_{j-1}\times d^1_{j}-d^1_{j-1}}\end{smallmatrix}\right)) -\tr(b'_{k-1}\left(\begin{smallmatrix}I_l&0_{l\times d^1_1}\end{smallmatrix}\right)) \\&\quad +\tr(b'_{k}\left( \begin{smallmatrix}I_{l-d^2_1}&{*}_{l-d^2_1\times d^2_1}\end{smallmatrix}\right)) -\sum_{j=1}^{k-2}\tr(b'_{k+j}*))\nonumber. \end{align} Here $*$ means undetermined block entries. When $\ell(u)$ is the identity, \eqref{GL psi_U on V beta}--\eqref{GL psi_U on V beta'} coincide with \eqref{GL psi_U on V beta 0}--\eqref{GL psi_U on V beta' 0}. \end{proposition} \begin{proof} The proof is similar to the proof of Proposition~\ref{proposition:wu_0 nontrivial implies h nontrivial orbit}. Now $\psi_U$ is defined by $2(k-1)$ blocks in $\Mat_c$. 
Let $B_{1,k-2}$ (resp., $B_{2,0}$) be the block corresponding to $u^{1,1}$ (resp., $u^{2,2}$), and $B_{1,0},\ldots,B_{1,k-3}$ (resp., $B_{2,1},\ldots,B_{2,k-2}$) be the blocks corresponding to the top left (resp., bottom right) embedding of $V_{(c^{k-1})}<M_P$ (see \S~\ref{Doubling setup}). Set $d^1_0=d^2_0=0$. For $0\leq i\leq k-2$, write $B_{1,i}$ as the upper right block of \begin{align*} \left(\begin{smallmatrix} I_{l+d^1_{k-i-2}}&&&B_{1,i}^{1,1}&B_{1,i}^{1,2}&B_{1,i}^{1,3}\\ &I_{d^1_{k-i-1}-d^1_{k-i-2}}&&B_{1,i}^{2,1}&B_{1,i}^{2,2}&B_{1,i}^{2,3}\\ &&I_{c-l-d^1_{k-i-1}}&B_{1,i}^{3,1}&B_{1,i}^{3,2}&B_{1,i}^{3,3}\\ &&&I_{l+d^1_{k-i-2}}\\ &&&&I_{d^1_{k-i-1}-d^1_{k-i-2}}\\ &&&&&I_{c-l-d^1_{k-i-1}} \end{smallmatrix}\right) \end{align*} and $B_{2,i}$ as the upper right block of \begin{align*} \left(\begin{smallmatrix} I_{l-d^2_{i+1}}&&&B_{2,i}^{1,1}&B_{2,i}^{1,2}&B_{2,i}^{1,3}\\ &I_{d^2_{i+1}-d^2_i}&&B_{2,i}^{2,1}&B_{2,i}^{2,2}&B_{2,i}^{2,3}\\ &&I_{c-l+d^2_i}&B_{2,i}^{3,1}&B_{2,i}^{3,2}&B_{2,i}^{3,3}\\ &&&I_{l-d^2_{i+1}}\\ &&&&I_{d^2_{i+1}-d^2_i}\\ &&&&&I_{c-l+d^2_i} \end{smallmatrix}\right). \end{align*} Recall $\psi_U$ is given by \begin{align}\label{eq:GL blocks of psi_U} \psi(-\sum_{i=0}^{k-2}\sum_{t=1}^{3}\tr(B_{1,i}^{t,t})+\sum_{t=1}^{3}\tr(B_{2,0}^{t,t}) -\sum_{i=1}^{k-2}\sum_{t=1}^{3}\tr(B_{2,i}^{t,t})). \end{align} Let $\mathscr{M}_P$ (resp., $\mathscr{U}_P$, $\mathscr{U}_P^-$) denote the list of blocks conjugated by $w$ into $M_P$ (resp., $U_P$, $U_P^-$): \begin{align*} \mathscr{M}_P=&\{B_{j,i}^{1,1},B_{j,i}^{2,1},B_{j,i}^{3,2},B_{j,i}^{3,3}:1\leq j\leq 2,0\leq i\leq k-2\},\\ \mathscr{U}_P=&\{B_{j,i}^{3,1}:1\leq j\leq 2,0\leq i\leq k-2\},\\ \mathscr{U}_P^-=&\{B_{j,i}^{1,2},B_{j,i}^{1,3},B_{j,i}^{2,2},B_{j,i}^{2,3}:1\leq j\leq 2,0\leq i\leq k-2\}. \end{align*} We can assume $\ell_j(u)=\diag(z_{j,0},\ldots,z_{j,k-2})$, $j=1,2$, with $z_{j,i}\in{}^{w_{\sigma}}N_{\GL_c}$. Here $w_{\sigma}$ is the projection of $(\sigma,1)^{-1}$ into the $i$-th copy of $\GL_c$ ($(1,\sigma)$ commutes with $\ell(u)$). We can then write $z_{j,i}=z_{j,i}'m_{j,i}$ where \begin{align*} {}^w\diag(m_{1,0},\ldots,m_{1,k-2},I_{2c},m_{2,0},\ldots,m_{2,k-2})\in M_P \end{align*} and for all $0\leq i\leq k-2$, \begin{align*} &m_{1,i}= \left(\begin{smallmatrix} I_{l+d^1_{k-i-1}}&M_{1,i}^{1}\\ M_{1,i}^{2}&I_{c-l-d^1_{k-i-1}}+M_{1,i}^{2}M_{1,i}^{1} \end{smallmatrix}\right)\in\GL_c,\qquad m_{2,i}= \left(\begin{smallmatrix} I_{l-d^2_{i+1}}&M_{2,i}^{1}\\ M_{2,i}^{2}&I_{c-l+d^2_{i+1}}+M_{2,i}^{2}M_{2,i}^{1} \end{smallmatrix}\right)\in\GL_c,\\ &I_{c-l-d^1_{k-i-1}}+M_{1,i}^{2}M_{1,i}^{1}\in\GL_{l+d^1_{k-i-1}},\qquad I_{c-l+d^2_{i+1}}+M_{2,i}^{2}M_{2,i}^{1}\in\GL_{c-l+d^2_{i+1}}. \end{align*} Then \begin{align*} &m_{1,i}^{-1}= \left(\begin{smallmatrix} I_{l+d^1_{k-i-1}}+M_{1,i}^{1}M_{1,i}^{2}&-M_{1,i}^{1}\\ -M_{1,i}^{2}&I_{c-l-d^1_{k-i-1}} \end{smallmatrix}\right)\in\GL_c,\qquad m_{2,i}^{-1}= \left(\begin{smallmatrix} I_{l-d^2_{i+1}}+M_{2,i}^{1}M_{2,i}^{2}&-M_{2,i}^{1}\\ -M_{2,i}^{2}&I_{c-l+d^2_{i+1}} \end{smallmatrix}\right)\in\GL_c. \end{align*} Henceforth we assume $z_{j,i}=m_{j,i}$. To compute ${}^{\ell(u)}\psi_U$ we calculate \begin{align*} m_{1,k-2}^{-1}B_{1,k-2},\quad m_{1,i}^{-1}B_{1,i}m_{1,i+1},\quad B_{2,0}m_{2,0},\quad m_{2,i}^{-1}B_{2,i+1}m_{2,i+1},\quad\forall 0\leq i\leq k-3. \end{align*} We start with $\psi_{V_{\beta}}$ and show that it belongs to the orbit of \eqref{GL psi_U on V beta}, otherwise $\mathcal{H}(h)=0$. This amounts to the description of its restriction to $b_{k-1},\ldots,b_{2k-1}$ and $e$. 
The rightmost $c-l$ columns of $b_{k-1}$ consist of $B_{1,k-2}^{3,2}$ and $B_{1,k-2}^{3,3}$ (the leftmost $l'$ columns are conjugated from the $c\times c$ block to the right of $u^{1,1}$). Looking at $m_{1,k-2}^{-1}B_{1,k-2}$, if the top $l$ rows of $M_{1,k-2}^1$ are nonzero, ${}^u\psi_U$ is nontrivial on $B_{1,k-2}^{3,1}\in\mathscr{U}_P$. Hence $\mathcal{H}(h)=0$ by \eqref{psi U nontrivial} in this case. Also ${}^u\psi_U$ restricts to $\psi\circ\tr$ on $B_{1,k-2}^{3,3}$. Hence $\psi_{V_{\beta}}$ agrees with \eqref{GL psi_U on V beta} on $b_{k-1}$. The block $b_{k+1}$ consists of $(B_{2,0}^{3,2},B_{2,0}^{3,3})$ and we consider $B_{2,0}m_{2,0}$. If the last $c-l$ columns of $M_{2,0}^1$ are nonzero, ${}^u\psi_U$ is nontrivial on $B_{2,0}^{3,1}\in\mathscr{U}_P$. Unless $\mathcal{H}(h)=0$, we obtain that the last $c-l$ columns of $M_{2,0}^1$ are $0$, then it follows that ${}^u\psi_U$ and $\psi_U$ coincide on $(B_{2,0}^{3,2},B_{2,0}^{3,3})$. Regarding $b_k$, it is conjugated by $w$ from the $c\times c$ block below $u^{2,2}$. Denote this block by $B_0$, which we further divide by writing it as the upper right block of \begin{align*} &\left(\begin{smallmatrix} I_{c-l'}&&&B_0^{1,1}&B_0^{1,2}&B_0^{1,3}\\ &I_{l'-d_1^2}&&B_0^{2,1}&B_0^{2,2}&B_0^{2,3}\\ &&I_{d_1^2}&B_0^{3,1}&B_0^{3,2}&B_0^{3,3}\\ &&&I_{l-d_1^2}\\ &&&&I_{d_1^2}\\ &&&&&I_{c-l} \end{smallmatrix}\right). \end{align*} Here \begin{align*} &B_0^{1,1},B_0^{2,2},B_0^{2,3},B_0^{3,2},B_0^{3,3}\in \mathscr{M}_P,\quad B_0^{2,1},B_0^{3,1}\in \mathscr{U}_P,\quad B_0^{1,2},B_0^{1,3}\in \mathscr{U}_P^-. \end{align*} The blocks conjugated into $b_k$ are $B_0^{2,2},B_0^{2,3},B_0^{3,2}$ and $B_0^{3,3}$. The conjugation of $U$ by $\ell(u)$ multiplies $B_0$ on the right by $m_{2,0}^{-1}$. The restriction of ${}^{\ell_0(u)}\psi_U$ to $B_0^{2,2}$ and $B_0^{3,2}$ is defined by $A(X)$, but ${}^{\ell_0(u)}\psi_U$ can also be nontrivial on $B_0^{2,1}$ or $B_0^{2,2}$ (or we could have, e.g., $d^2_1=0,l$). We can assume ${}^{\ell_0(u)}\psi_U$ is given on $B_0$ by \begin{align}\label{eq:psi u0 on B0} \psi(\tr(\varphi_k\left(\begin{smallmatrix}B_0^{1,1}&B_0^{1,2}&B_0^{1,3}\\ B_0^{2,1}&B_0^{2,2}&B_0^{2,3}\\B_0^{3,1}&B_0^{3,2}&B_0^{3,3}\end{smallmatrix}\right))),\qquad \varphi_k=\left(\begin{smallmatrix}0_{l-d_1^2\times c-l'}&A_1(X)\\ 0_{d_1^2\times c-l'}&A(X)\\0_{c-l\times c-l'}&0_{c-l\times l'}\end{smallmatrix}\right),\qquad A_1(X)\in\Mat_{l-d_1^2\times l'}. \end{align} Here $A_1(X)$ defines the restriction of ${}^{\ell_0(u)}\psi_U$ to $B_0^{2,1}$ and $B_0^{3,1}$. When we consider $m_{2,0}\varphi_k$ we see that the restriction of ${}^u\psi_U$ to $B_0^{2,1},B_0^{3,1}\in\mathscr{U}_P$ is given by the first $l-d_1^2$ rows of $m_{2,0}$ multiplied by the last $l'$ columns of $\varphi_k$, this restriction should vanish, and the restriction to the blocks conjugated into $b_k$ corresponds to the last $c-l+d_1^2$ rows of $m_{2,0}$ multiplied by the last $l'$ columns of $\varphi_k$. Since the last $c-l$ columns of $M_{2,0}^1$ are $0$, we can denote $M_{2,0}^1=\left(\begin{smallmatrix}\alpha&0_{l-d_1^2\times c-l}\end{smallmatrix}\right)$. 
Also put $M_{2,0}^2=\left(\begin{smallmatrix}\beta^1\\\beta^2\end{smallmatrix}\right)$ with $\beta^1\in\Mat_{d_1^2\times l-d_1^2}$, then \begin{align*} m_{2,0}\left(\begin{smallmatrix}A_1(X)\\A(X)\\0_{c-l\times l'}\end{smallmatrix}\right) &=\left(\begin{smallmatrix}I_{l-d_1^2}&\alpha&0\\\beta^1&I_{d_1^2}+\beta^1\alpha&0\\\beta^2&\beta^2\alpha&I_{c-l}\end{smallmatrix}\right) \left(\begin{smallmatrix}A_1(X)\\A(X)\\0_{c-l\times l'}\end{smallmatrix}\right)= \left(\begin{smallmatrix}A_1(X)+\alpha A(X)\\\beta^1A_1(X)+I_{d_1^2}+\beta^1\alpha A(X)\\ \beta^2A_1(X)+\beta^2\alpha A(X)\end{smallmatrix}\right). \end{align*} Now if $\mathcal{H}(h)\ne0$, we must have $A_1(X)+\alpha A(X)=0$ thereby $\beta^2A_1(X)+\beta^2\alpha A(X)=0$, in which case $\psi_{V_{\beta}}$ agrees with \eqref{GL psi_U on V beta} on $b_k$. Thus both characters agree on $b_{k-1}$, $b_k$ and $b_{k+1}$. Consider $b_{k+i}$, $2\leq i\leq k-1$. We multiply $m_{2,i-2}^{-1}B_{2,i-1}m_{2,i-1}$. The block $b_{k+i}$ consists of $B_{2,i-1}^{3,2}$ and $B_{2,i-1}^{3,3}$. Recall $B_{2,i-1}^{3,1}\in\mathscr{U}_P$. Then $\mathcal{H}(h)=0$ unless the top right $l-d_i^2\times c-l+d_{i-1}^2$ block of $m_{2,i-1}m_{2,i-2}^{-1}$ is $0$: \begin{align*} \left(\begin{smallmatrix}I_{l-d_i^2}&M_{2,i-1}^1\end{smallmatrix}\right)\left(\begin{smallmatrix}-M_{2,i-2}^1\\ I_{c-l+d_{i-1}^2}\end{smallmatrix}\right)=0. \end{align*} In this case the restriction of ${}^u\psi_U$ to $(B_{2,i-1}^{3,2},B_{2,i-1}^{3,3})$, which corresponds to the bottom right $c-l+d_i^2\times c-l+d_{i-1}^2$ block of $m_{2,i-1}m_{2,i-2}^{-1}$, is defined by \begin{align*} \left(\begin{smallmatrix}M_{2,i-1}^{2}&I_{c-l+d^2_{i}}+M_{2,i-1}^{2}M_{2,i-1}^{1}\end{smallmatrix}\right)\left(\begin{smallmatrix}-M_{2,i-2}^1\\ I_{c-l+d_{i-1}^2}\end{smallmatrix}\right)&= \left(\begin{smallmatrix}0_{c-l+d_{i}^2\times l-d_i^2}&I_{c-l+d^2_{i}}\end{smallmatrix}\right)\left(\begin{smallmatrix}-M_{2,i-2}^1\\ I_{c-l+d_{i-1}^2}\end{smallmatrix}\right)\\&= \left(\begin{smallmatrix}*_{d_i^2-d_{i-1}^2\times c-l+d_{i-1}^2}\\I_{c-l+d_{i-1}^2}\end{smallmatrix}\right). \end{align*} Therefore $\psi_{V_{\beta}}$ agrees with \eqref{GL psi_U on V beta} on $b_{k+i}$. Also $\psi_{V_{\beta}}|_e=1$, because $e$ is conjugated from the $c\times c$ block to the right of $u^{1,1}$. This completes the proof for $\psi_{V_{\beta}}$. We turn to the restriction of $\psi_{V_{\beta'}}$ to $b'_{1},\ldots,b'_{k}$, $e'$ and $f'$, and prove that unless $\mathcal{H}(h)=0$, $\psi_{V_{\beta'}}$ and \eqref{GL psi_U on V beta'} coincide. The block $b'_k$ corresponds to $B_{2,0}^{1,1}$ and $B_{2,0}^{2,1}$. Considering $B_{2,0}m_{2,0}$, the restriction of ${}^u\psi_U$ to these blocks is given by the top left $l-d_1^2\times l$ block of $m_{2,0}$, namely \begin{align*} \psi(\tr(\left(\begin{smallmatrix} I_{l-d_1^2} & {*}_{l-d_1^2\times d_1^2} \end{smallmatrix}\right)\left(\begin{smallmatrix} B_{2,0}^{1,1}\\B_{2,0}^{2,1}\end{smallmatrix}\right))). \end{align*} Hence $\psi_{V_{\beta'}}$ and \eqref{GL psi_U on V beta'} coincide on $b'_k$. The block $b'_{k-1}$ is conjugated from $B_{1,k-2}^{1,1}$ and $B_{1,k-2}^{2,1}$. This is similar to $b_{k+1}$. We multiply $m_{1,k-2}^{-1}B_{1,k-2}$ and if the first $l$ rows of $M_{1,k-2}^1$ are nonzero, ${}^u\psi_U$ is nontrivial on $B_{1,k-2}^{3,1}\in\mathscr{U}_P$ whence $\mathcal{H}(h)=0$. 
Henceforth we can assume the first $l$ rows of $M_{1,k-2}^1$ are $0$, then the top left $l\times l+d_1^1$ block of $m_{1,k-2}^{-1}$ equals $\left(\begin{smallmatrix}I_l&0_{l\times l+d_1^1}\end{smallmatrix}\right)$, so that the restriction of ${}^u\psi_U$ to $B_{1,k-2}^{1,1}$ and $B_{1,k-2}^{2,1}$ coincides with the restriction of $\psi_U$ ($\psi\circ\tr$ on the former, trivial on the latter). Consider $b'_{i}$, $1\leq i\leq k-2$. We multiply $m_{1,i-1}^{-1}B_{1,i-1}m_{1,i}$. The block $b'_{i}$ is conjugated from $(B_{1,i-1}^{1,1},B_{1,i-1}^{2,1})$. This is similar to the case of $b_{k+i}$. Since $B_{1,i-1}^{3,1}\in\mathscr{U}_P$, $\mathcal{H}(h)=0$ unless the top right $l+d_{k-i-1}^1\times c-l-d_{k-i}^1$ block of $m_{1,i}m_{1,i-1}^{-1}$ is $0$, i.e., \begin{align*} \left(\begin{smallmatrix} I_{l+d^1_{k-i-1}}&M_{1,i}^{1} \end{smallmatrix}\right) \left(\begin{smallmatrix} -M_{1,i-1}^{1}\\ I_{c-l-d^1_{k-i}} \end{smallmatrix}\right)=0. \end{align*} Then the restriction of ${}^u\psi_U$ to $(B_{1,i-1}^{1,1},B_{1,i-1}^{2,1})$, which corresponds to the top left $l+d_{k-i-1}^1\times l+d_{k-i}^1$ block of $m_{1,i}m_{1,i-1}^{-1}$ becomes \begin{align*} \left(\begin{smallmatrix} I_{l+d^1_{k-i-1}}&M_{1,i}^{1} \end{smallmatrix}\right) \left(\begin{smallmatrix} I_{l+d^1_{k-i}}+M_{1,i-1}^{1}M_{1,i-1}^{2}\\ -M_{1,i-1}^{2}& \end{smallmatrix}\right)&= \left(\begin{smallmatrix} I_{l+d^1_{k-i-1}}&M_{1,i}^{1} \end{smallmatrix}\right) \left(\begin{smallmatrix} I_{l+d^1_{k-i}}\\ 0_{c-l-d^1_{k-i}\times l+d^1_{k-i}}& \end{smallmatrix}\right) \\&=\left(\begin{smallmatrix} I_{l+d^1_{k-i-1}}& {*}_{l+d^1_{k-i-1}\times d^1_{k-i}-d^1_{k-i-1}} \end{smallmatrix}\right), \end{align*} hence $\psi_{V_{\beta'}}$ agrees with \eqref{GL psi_U on V beta'} on $b'_{i}$. The character $\psi_{V_{\beta'}}$ is trivial on $f'$, because $f'$ is conjugated from $B_0^{1,1}$ (see \eqref{eq:psi u0 on B0}, the top left $l-d_1^2\times c-l'$ block of $\varphi_k$). It is also trivial on $e'$ since it is conjugated from the $c\times c$ block to the right of $u^{1,1}$ (this is similar to $e$). \end{proof} \begin{proposition}\label{proposition:GL d_1 < n-l} Consider $k>1$. Assume $d^1_1<c-l$ (in particular $l<c$) or $d^2_1<l$ (in particular $0<l$). Then $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)=0$ and $\mathcal{H}(h)=0$. \end{proposition} \begin{proof} We argue as in the proof of Proposition~\ref{proposition:d_1 < n-l}. By Proposition~\ref{proposition:GL wu_0 nontrivial implies h nontrivial orbit}, we can assume $\psi_{V_{\beta}}$ (resp., $\psi_{V_{\beta'}}$) is given by \eqref{GL psi_U on V beta} (resp., \eqref{GL psi_U on V beta'}). Let $\varphi$ be the transpose of the nilpotent element defined by $\psi_{V_{\beta}}^{-1}$ (resp., $\psi_{V_{\beta'}}^{-1}$). By \eqref{eq:GL psi beta factors through tensor} and \cite[Theorems~A, E]{GGS}, because $\rho_1$ (resp., $\rho_2$) is $(k,c)$, it is enough to show that $\varphi$ is nilpotent of order at least $k+1$. Consider $d^1_1<c-l$. Looking at \eqref{GL psi_U on V beta}, we have $k$ nontrivial blocks $b_{k-1},b_{k+1},\ldots,b_{2k-1}$, and for each block, the bottom right coordinate is nontrivial and the other coordinates on its column in $\varphi$ are $0$. This does not depend on the undetermined coordinates of the character. 
To see this use the assumption $c-l-d^1_1>0$ for $b_{k-1}$ and $l<c$ for $b_{k+1}$, and the bottom right coordinate of $b_{k+1}$ is the only nonzero coordinate on its column in $\varphi$ because on the $l'\times c-l+d^2_1$ block $b_k$ above $b_{k+1}$, $\varphi$ is $0$ on the last $c-l$ columns (if $l'=0$, this is trivially true). It follows that $\varphi$ is nilpotent of order at least $k+1$. For the case $d^2_1<l$, the blocks $b'_1,\ldots,b'_{k}$ are $k$ nontrivial blocks, the top left coordinate of each block is nontrivial (use $l>0$ and $d^2_1<l$) independently of undetermined coordinates, and is the only nonzero coordinate on its row (for $b'_{k-1}$ use the fact that \eqref{GL psi_U on V beta'} is trivial on $e'$!). Again $\varphi$ is nilpotent of order at least $k+1$. \end{proof} \begin{remark} If $l=l'$, the conditions $d^1_1=c-l$ and \eqref{eq:GL compatibility for w} already imply $d^2_i=l$ for all $i$. \end{remark} It remains to consider the cases where $k=1$, or both $d^1_1=c-l$ and $d^2_1=l$; in the latter case $d^1_i=c-l$ and $d^2_i=l$ for all $i$, whence by \eqref{eq:GL compatibility for w} we have, for all $k\geq1$, $l'=l$. Up to left multiplication by an element of $W(M_P)$, $w$ equals \begin{align*} \left(\begin{smallmatrix} & & & & & & I_c \\ & & & & & \iddots & \\ & & & & I_c & & \\ & & & \left(\begin{smallmatrix}&&I_{l}\\&I_{2(c-l)}\\I_{l}\end{smallmatrix}\right) & & & \\ & & I_c & & & & \\ & \iddots & & & & & \\ I_c & & & & & & \end{smallmatrix}\right), \end{align*} so that ${}^w\ell(u)\in P$ and $h\sim w\ell_0(u)$ (we still do not change $w$, in order to use $\beta,\beta'$ and the formulas for the characters given above). Therefore $\psi_{V_{\beta}}$ and $\psi_{V_{\beta'}}$ are already given by \eqref{GL psi_U on V beta 0} and \eqref{GL psi_U on V beta' 0}. Considering the action of $(\GL_l,\GL_l)$, where $\GL_l$ is the natural subgroup of $M_{(l,c-l)}$, we can already write $X=A_{l,j}=\left(\begin{smallmatrix}&I_{j}\\0_{l-j}\end{smallmatrix}\right)$ with $0\leq j\leq l$. We deduce there are only finitely many more representatives to analyze, but as opposed to \S~\ref{section not GL}, we must handle each $0\leq j\leq l$ separately (i.e., we cannot easily reduce to $j=l$). The form of representatives is finally similar to the case $k=1$. For the representative $h$ such that $j=l=c$ we have $h\sim\delta$. \begin{proposition}\label{proposition:GL d1 = n-l l < n} Assume $d^1_1=c-l$ or $k=1$, and $0\leq l<c$. Then $\mathcal{H}(h)=0$ outside a discrete subset of $s$. Furthermore, if $l>0$ (forcing $c>1$) and $\pi_2$ is supercuspidal, or $ck>1$, $l=0$, $\pi_1$ and $\pi_2$ are supercuspidal and $\rho_2=\rho_c(\tau_2)$ for an irreducible supercuspidal representation $\tau_2$ of $\GL_k$, then $\mathcal{H}(h)=0$ for all $s$. \end{proposition} \begin{proof} Now $V_{\beta}=V_{\beta'}=V_{(c^k)}$. Consider the parabolic subgroup $R<G$ with $M_R=M_{(c-l,l)}$ and $U_R=V_{(c-l,l)}^-$. Note that $V_{(c-l,l)}^-$ is trivial if $l=0$. Identify the group $\GL_{c-l}$ (nontrivial for $0\leq l<c$) with its natural image in $M_R$. For convenience, we multiply $w$ on the left by $\diag(I_{2kc-c},\left(\begin{smallmatrix}&I_{l}\\I_{c-l}\end{smallmatrix}\right))$.
This permutation normalizes $V_{\beta}\times V_{\beta'}$, fixes $\psi_{V_{\beta}}$ and conjugates $\psi_{V_{\beta'}}$ into \begin{align}\label{GL psi_U on V beta''} &\left(\begin{smallmatrix} I_c&b'_1\\&\ddots&\ddots\\ &&I_c&b'_{k-1}&e'\\ &&&I_{l}\\&&&&I_{c-l} \end{smallmatrix}\right)\mapsto\psi(-\sum_{j=k-1}^{2}\tr(b'_{k-j}) -\tr(b'_{k-1}\left(\begin{smallmatrix}I_l&0_{l\times c-l}\end{smallmatrix}\right))). \end{align} Now ${}^h(1,\GL_{c-l})=\diag(I_{2kc-(c-l)},\GL_{c-l})$, and since the character \eqref{GL psi_U on V beta''} is trivial on $e'$, $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)$ is a well defined representation of ${}^h(1,\GL_{c-l})$. Over non-archimedean fields, we simultaneously prove that for all $s$, $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)$ is a trivial representation of ${}^h(1,U_R)$, and admits a finite length filtration as a representation of ${}^h(1,\GL_{c-l})$, where ${}^h(1,C_{\GL_{c-l}})$ acts by a character on each constituent. For archimedean fields we prove that ${}^h(1,\mathfrak{u}_R)$ acts locally nilpotently on $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)^*$, and the Lie algebra $\mathfrak{v}_{((k-1)c+l,c-l)}$ of $\diag(I_{kc},V_{((k-1)c+l,c-l)})$ acts locally nilpotently on $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)^*$. Note that ${}^h(1,\GL_{c-l})$ is a direct factor of $\diag(I_{kc},M_{((k-1)c+l,c-l)})$. Cf. Lemmas~\ref{lemma:Jacquet module is a trivial rep of U_R} and \ref{lemma:Jacquet module is a finite length}. Granted that, since ${}^{h^{-1}}(|\det|^{s-1/2})(1,aI_{c-l})=|a|^{-(c-l)(s-1/2)}$, one can apply \eqref{eq:relation for T with s} to deduce $\mathcal{H}(h)=0$ outside a discrete subset of $s$. For $l>0$, if $\pi_2$ is supercuspidal, $\mathcal{H}(h)=0$ for all $s$ (because $J_{U_R}(\pi_2^{\vee})=0$). Henceforth we identify $\GL_{kc}$ with the bottom right block of $M_P$. For $u\in U_R$, ${}^h(1,u)=m_uu'$ with $u'\in U_P$ and $m_u=\diag(I_{(k-1)c},\left(\begin{smallmatrix}I_{l}&A_{l,j}u\\&I_{c-l}\end{smallmatrix}\right))$. Let $Z=\diag(I_{(k-1)c},V_{(l,c-l)})$. This (abelian) group stabilizes \eqref{GL psi_U on V beta''}. In addition, the subgroups $\diag(I_{kc-(c-l)},\GL_{c-l})$ and $\diag(\GL_l,I_{c-l})^{\triangle}<\GL_{kc}$ stabilize \eqref{GL psi_U on V beta''}, and act on the characters of $Z$ with $2$ orbits. Over non-archimedean fields, we show that for any nontrivial character $\chi$ of $Z$, \begin{align}\label{eq:GL Jacquet functor V beta U_R} J_{V_{\beta'}\ltimes Z,\psi_{V_{\beta'}}^{-1}\otimes\chi}(\rho_2)=0, \end{align} which implies (by \cite[5.9--5.12]{BZ1}) \begin{align}\label{eq:GL Jacquet functor V beta U_R implies} J_{V_{\beta'},\psi_{V_{\beta'}}^{-1}}(\rho_2)= J_{V_{\beta'}\ltimes Z,\psi_{V_{\beta'}}^{-1}}(\rho_2). \end{align} Thus $J_{V_{\beta'},\psi_{V_{\beta'}}^{-1}}(\rho_2)$ is a trivial representation of ${}^h(1,U_R)$ and this Jacquet module factors through $J_{V_{((k-1)c+l,c-l)}}(\rho_2)$, which is an admissible finite length representation of $M_{((k-1)c+l,c-l)}$. By exactness $J_{V_{\beta'},\psi_{V_{\beta'}}^{-1}}(\rho_2)$ admits a finite length filtration such that on each constituent, ${}^h(1,C_{\GL_{c-l}})$ acts by a character.
Now by \eqref{eq:GL psi beta factors through tensor}, $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)$ is a trivial representation of ${}^h(1,U_R)$ and admits a finite filtration with ${}^h(1,C_{\GL_{c-l}})$ acting by a character on each constituent. For the proof of \eqref{eq:GL Jacquet functor V beta U_R} we can assume $l>0$, otherwise \eqref{eq:GL Jacquet functor V beta U_R implies} is trivial. Identifying $Z$ with $\Mat_{l\times c-l}$, we can assume $\chi$ is nontrivial on the bottom right coordinate of $Z$, and trivial on the other coordinates of the rightmost column. Let $\varphi$ be the transpose of the nilpotent element defined by the inverse of \eqref{GL psi_U on V beta''} and by $\chi$. Note that $\varphi$ is independent of $A_{l,j}$. After conjugating $\varphi$ by $\diag(\left(\begin{smallmatrix}&I_{c-l}\\I_l\end{smallmatrix}\right)^{\triangle'},I_c)$ ($\GL_c^{\triangle'}$ is the diagonal embedding of $\GL_c$ in $\GL_{(k-1)c}$), it has nontrivial entries on the bottom right coordinates of $k$ blocks: $b'_1,\ldots,b'_{k-1}$ and $Z$, and in each block there is only one nontrivial coordinate on the rightmost column. Therefore $\varphi$ is nilpotent of order at least $k+1$ and \eqref{eq:GL Jacquet functor V beta U_R} holds, because $\rho_2$ is $(k,c)$. Over archimedean fields we repeat the proof of \eqref{eq:GL Jacquet functor V beta U_R} and apply \cite[Proposition~3.0.1]{GGS2} for each coordinate of $Z$ separately, exactly as in the proofs of Lemmas~\ref{lemma:Jacquet module is a trivial rep of U_R} and \ref{lemma:Jacquet module is a finite length}. It remains to prove the stronger assertion when $ck>1$, $l=0$, both $\pi_1$ and $\pi_2$ are supercuspidal and $\rho_2=\rho_c(\tau_2)$ for an irreducible supercuspidal $\tau_2$. Since $\tau_2$ is supercuspidal and $J_{V_{\beta'},\psi_{V_{\beta'}}^{-1}}(\rho_2)$ factors through $J_{V_{((k-1)c,c)}}(\rho_2)=J_{V_{((k-1)c,c)}}(\rho_c(\tau_2))$, we obtain $\mathcal{H}(h)=0$ for all $s$, unless $c=tk$ for some integer $t\geq1$ (use \cite[2.13 (a)]{BZ2}). If $t>1$, in particular $c>k$, and we claim $J_{V_{((k-1)c,c)}}(\rho_2)$ is trivial on ${}^h(1,V_{\delta})$ for some composition $\delta$ of $c$, then because $\pi_2$ is supercuspidal, $\mathcal{H}(h)=0$ for all $s$. This follows by repeatedly applying the derivatives of Bernstein and Zelevinsky \cite{BZ1,BZ2} to $J_{V_{((k-1)c,c)}}(\rho_2)$. Indeed for $1\leq i\leq c$, let $\chi_i$ be the character of $V_{((k-1)c,c-i,1^i)}$ given by $\chi_i(z)=\psi(\sum_{i'=1}^{i}z_{kc-i',kc-i'+1})$. Then either \begin{align*} J_{V_{((k-1)c,c)}}(\rho_2)=J_{V_{((k-1)c,c-1,1)}}(\rho_2), \end{align*} in which case our claim is proved with $\delta=(c-1,1)$, or \begin{align*} J_{V_{((k-1)c,c)}}(\rho_2)=J_{V_{((k-1)c,c-1,1)},\chi_1}(\rho_2). \end{align*} We proceed with $i=2$. After $i$ steps, our claim is either proved with $\delta=(c-i,1^i)$, or \begin{align*} J_{V_{((k-1)c,c)}}(\rho_2)=J_{V_{((k-1)c,c-i,1^i)},\chi_i}(\rho_2). \end{align*} However, since $c>k$, for $i=c$ we already obtain $J_{V_{((k-1)c,1^c)},\chi_c}(\rho_2)=0$, because the highest derivative of $\rho_2$ is $k$ (put differently, the suitable $\varphi$ is nilpotent of order at least $k+1$). 
Lastly for $c=k>1$, $J_{V_{((k-1)c,c)}}(\rho_2)=|\det|^{\alpha_1}\rho_{c-1}(\tau_2)\otimes|\det|^{\alpha_2}\tau_2$, where $\alpha_1,\alpha_2\in\tfrac12\Z$ (see \cite[3.4]{Z3}) and $\rho_{c-1}(\tau_2)$ is $(k,c-1)$, then (because $l=0$) \begin{align*} J_{V_{\beta'},\psi_{V_{\beta'}}^{-1}}(\rho_2)=|\det|^{\alpha_1}J_{V_{(c^{k-1})},\psi_{k-1}}(\rho_2)\otimes|\det|^{\alpha_2}\tau_2. \end{align*} Hence $\diag(\SL_c^{\triangle'},I_c)$ acts trivially on $J_{V_{\beta'},\psi_{V_{\beta'}}^{-1}}(\rho_2)$. Additionally because $\psi_{V_{\beta}}^{-1}$ belongs to the orbit of $\psi_k$ ($l=0$, see \eqref{GL psi_U on V beta 0}), $\SL_c^{\triangle}$ acts trivially on $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho_1)$. Thus ${}^h(\SL_c,1)$ acts trivially on $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)$, in particular $\mathcal{H}(h)=0$ for all $s$, because $\pi_1$ is supercuspidal. \end{proof} It remains to consider the cases $l=c$ and $0\leq j\leq c$ (recall $j$ is the rank of $A_{l,j}$). The cases $j<c$ are similar to the cases $l<c$, but involve $V_{\beta}$ and $\psi_{V_{\beta}}$. \begin{proposition}\label{proposition:GL d1 = n-l j < l = c} Assume $0\leq j<l=c$ or $k=1$. Then $\mathcal{H}(h)=0$ outside a discrete subset of $s$. Furthermore, if $j>0$ and $\pi_2$ is supercuspidal, or $ck>1$, $j=0$, $\pi_1$ and $\pi_2$ are supercuspidal and $\rho_1=\rho_c(\tau_1)$ for an irreducible supercuspidal representation $\tau_1$ of $\GL_k$, then $\mathcal{H}(h)=0$ for all $s$. \end{proposition} \begin{proof} In this case $X=A_{c,j}=\left(\begin{smallmatrix}&I_{j}\\0_{c-j}\end{smallmatrix}\right)$, so that if we consider $R<G$ with $M_R=M_{(c-j,j)}$ and $U_R=V_{(c-j,j)}^-$, we can repeat most of the proof of Proposition~\ref{proposition:GL d1 = n-l l < n} (with $j$ instead of $l$), except we use $V_{\beta}$ instead of $V_{\beta'}$ (hence, e.g., $\tau_1$ instead of $\tau_2$). Identify $\GL_{c-j}$ with its natural image in $M_R$. We have ${}^h(1,\GL_{c-j})=\diag(\GL_{c-j},I_{(2k-1)c})$. The character $\psi_{V_{\beta}}$ is now given by \begin{align*} &\left(\begin{smallmatrix} I_c&b_k\\ &I_c&b_{k+2}\\ &&\ddots&\ddots\\ &&&I_{c}&b_{2k-1}\\ &&&&I_{c} \end{smallmatrix}\right)\mapsto\psi(-\tr(b_kA_{c,j})-\sum_{i=2}^{k-1}\tr(b_{k+i})), \end{align*} and $\psi_{V_{\beta'}}=\psi_k^{-1}$. Then $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)$ is a well defined representation of ${}^h(1,\GL_{c-j})$. We proceed over non-archimedean fields, and prove that for all $s$, $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)$ is a trivial representation of ${}^h(1,U_R)$ and factors through $J_{V_{(c-j,j+(k-1)c)}}(\rho_1)$. Since ${}^{h^{-1}}(|\det|^{s-1/2})(1,aI_{c-j})=|a|^{(c-j)(s-1/2)}$, $\mathcal{H}(h)=0$ outside a discrete subset of $s$ by \eqref{eq:relation for T with s}. For $j>0$ (then $l=c>1$), if $\pi_2$ is supercuspidal, $\mathcal{H}(h)=0$ for all $s$. For more details and the archimedean case see the proof of Proposition~\ref{proposition:GL d1 = n-l l < n}. Identify $\GL_{kc}$ with the top left block of $M_P$. Let $Z=\diag(V_{(c-j,j)},I_{(k-1)c})\cong\Mat_{c-j\times j}$. For $u\in U_R$ we have ${}^h(1,u)=m_uu'$ with $m_u\in Z$ and $u'\in U_P$. The group $Z$ stabilizes $\psi_{V_{\beta}}$, and the set of characters of $Z$ is partitioned into $2$ orbits with respect to the action of $\diag(\GL_{c-j},I_{kc-(c-j)})$ and $\{\diag(I_{c-j},g,\diag(g,I_{c-j})^{\triangle'}):g\in\GL_j\}$.
We show that for any character $\chi\ne1$ of $Z$, \begin{align}\label{eq:GL Jacquet functor V beta U_R j<l=c} J_{V_{\beta}\ltimes Z,\psi_{V_{\beta}}^{-1}\otimes\chi}(\rho_1)=0. \end{align} This implies that $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)$ is a trivial representation of ${}^h(1,U_R)$ and factors through $J_{V_{(c-j,j+(k-1)c)}}(\rho_1)$. For the proof of \eqref{eq:GL Jacquet functor V beta U_R j<l=c} assume $j>0$. We can assume $\chi$ is nontrivial on the bottom right coordinate of $Z$, and trivial on the other coordinates on the rightmost column. Let $\varphi$ be the transpose of the nilpotent element defined by $\psi_{V_{\beta}}^{-1}\otimes\chi$, which now depends on $A_{l,j}$ (as opposed to the proof of Proposition~\ref{proposition:GL d1 = n-l l < n}). Using a conjugation by $\diag(I_c,\left(\begin{smallmatrix}&I_{c-j}\\I_j\end{smallmatrix}\right)^{\triangle'})$, we obtain $\varphi$ which has nontrivial entries on the bottom right coordinates of $b_k,b_{k+2},\ldots,b_{2k-1}$ and $Z$ ($k$ blocks). This proves \eqref{eq:GL Jacquet functor V beta U_R j<l=c}, because $\rho_1$ is $(k,c)$. The proof of the assertion for the case $ck>1$, $j=0$, supercuspidal representations $\pi_1$ and $\pi_2$, and $\rho_1=\rho_c(\tau_1)$ for an irreducible supercuspidal $\tau_1$, proceeds as in the proof of Proposition~\ref{proposition:GL d1 = n-l l < n}. Since now $J_{V_{(c-j,j+(k-1)c)}}(\rho_1)=J_{V_{(c,(k-1)c)}}(\rho_1)$, $\mathcal{H}(h)=0$ for all $s$ unless $c=tk$, $t\geq1$. For $t>1$ we use derivatives along $V_{(1^i,c-i,(k-1)c)}$, $1\leq i\leq c$. For $c=k>1$, $J_{V_{(c,(k-1)c)}}(\rho_1)=|\det|^{\alpha_3}\tau_1\otimes|\det|^{\alpha_4}\rho_{c-1}(\tau_1)$ and $\rho_{c-1}(\tau_1)$ is $(k,c-1)$, hence ${}^h(\SL_c,1)$ acts trivially on $J_{V_{\beta}\times V_{\beta'},\psi_{V_{\beta}\times V_{\beta'}}^{-1}}(\rho)$. \end{proof} Propositions~\ref{proposition:GL structure of w u}--\ref{proposition:GL d1 = n-l j < l = c} imply $\mathcal{H}(h)=0$ for all $h$ unless $h\sim\delta$. We prove $\dim\mathcal{H}(\delta)=\dim\Hom_{G}(\chi_0\pi_1^{\vee},\pi_2)$, for all $s$. Now $P_{\delta}=G^{\iota}\ltimes (V_{(c^k)}\times V_{(c^k)})$ with $G^{\iota}=\{(g,g):g\in G\}$ (for $H=\GL_{2kc}$ one can take $\iota=I_c$, we keep the notation $G^{\iota}$ for uniformity) and any morphism in $\mathcal{H}(\delta)$ factors through $J_{V_{(c^k)}\times V_{(c^k)},\psi_k\otimes\psi_k}(\rho)$. Hence \begin{align*} \mathcal{H}(\delta)&=\Hom_{G^{\iota}}({}^{\delta^{-1}}J_{V_{(c^k)}\times V_{(c^k)},\psi_k\otimes\psi_k}(\rho)\otimes\pi_1^{\vee}\otimes\pi_2^{\vee},1). \end{align*} Note that $|\det|^{s-1/2}\otimes|\det|^{-s+1/2}$ and $\theta_h$ are trivial on $G^{\iota}$. We can assume $\delta$ commutes with $G^{\iota}$ ($G^{\iota}$ is simply the diagonal embedding of $G$ in $H$). Then as a representation of $G^{\iota}$, \begin{align*} {}^{\delta^{-1}}J_{V_{(c^k)}\times V_{(c^k)},\psi_k\otimes\psi_k}(\rho)= J_{V_{(c^k)},\psi_k}(\rho_1)\otimes J_{V_{(c^k)},\psi_k}(\rho_2). \end{align*} Recall that the action of $G^{\iota}$ on $J_{V_{(c^k)},\psi_k}(\rho_1)\otimes J_{V_{(c^k)},\psi_k}(\rho_2)$ is given by $g\mapsto \chi_0(\det g)$ ($g\in G$) for some quasi-character $\chi_0$ of $F^*$ (see \S~\ref{outline}), therefore \begin{align*} \Hom_{G^{\iota}}({}^{\delta^{-1}}J_{V_{(c^k)}\times V_{(c^k)},\psi_k\otimes\psi_k}(\rho)\otimes\pi_1^{\vee}\otimes\pi_2^{\vee},1) =\Hom_{G}(\chi_0\pi_1^{\vee},\pi_2). 
\end{align*} The remaining parts of the proof now follow as in \S~\ref{section not GL}, and note that when $ck>1$, $\pi_1$ and $\pi_2$ are supercuspidal and $\rho_i=\rho_c(\tau_i)$ for irreducible supercuspidal representations $\tau_i$ of $\GL_k$, $i=1,2$, we do not need to exclude any $s$. \section{Applications}\label{Applications} \subsection{Covering groups}\label{Covering groups} In this section we describe the extension of Theorem~\ref{theorem:uniqueness} to certain covering groups. We proceed with the definitions and notation of \S~\ref{Doubling setup}. Let $m\geq1$. Assume $F^*$ contains the full group of $m$-th roots of unity $\mu_m$. A topological central extension of $G(F)$ by $\mu_m$ is an exact sequence of topological groups \begin{align*} 1\rightarrow \mu_m\xrightarrow{i} G^{(m)}\xrightarrow{p} G(F)\rightarrow 1, \end{align*} where $i$ and $p$ are continuous, $i(\mu_m)$ is closed and belongs to the center of $G^{(m)}$, and $p$ induces an isomorphism $i(\mu_m)\backslash G^{(m)}\cong G(F)$ as topological groups. We call $G^{(m)}$ an $m$-fold covering group of $G(F)$; it is in general not unique, but for $G(F)=\Sp_c(F)$ it is uniquely defined given a Steinberg symbol (e.g., an $m$-th order Hilbert symbol). The covering groups under consideration here were constructed, in increasing level of generality, through a series of works including \cite{Weil2,Kubota,Moore,Stein,Kubota2,Mats,KP,BD}. For further reference see \cite{BLS,McNamara}. In this section we assume the field is non-archimedean, in which case $G^{(m)}$ is an $l$-group in the sense of \cite{BZ1}. For $m>2$, an archimedean field containing $\mu_m$ is necessarily complex, in which case the cover splits over the group, so the results in this case are immediate from the linear case. As above, we identify $F$-groups with their $F$-points. Of course this only applies to $G$ and its subgroups; $G^{(m)}$ is not an algebraic group. In general if $X<G$, $\widetilde{X}$ denotes the covering of $X$ (precisely: of $X(F)$) defined by restriction from $G^{(m)}$. This covering depends on the embedding of $X$ inside $G$. We say that $\widetilde{X}$ is split over $X$ if there is a group embedding $X\rightarrow\widetilde{X}$. If $X$ is perfect (as an $F$-group), such a splitting, if it exists, is unique. Note that since $F$ is of characteristic $0$, $\Sp_c$ and $\SL_c$ are perfect. The coverings under consideration are split canonically over unipotent subgroups, hence the notions of Jacquet functors and unipotent orbits extend to the covering in the obvious way. If $Y$ is a unipotent subgroup of $G$, denote by $\varphi_Y:Y\rightarrow\widetilde{Y}$ the splitting of $Y$. Since $\varphi_Y$ is canonical, we usually omit it from the notation, e.g., if $R<G$ is a parabolic subgroup and we consider a genuine representation $\sigma$ of $\widetilde{M}_R$, for the induced representation $\Ind_{\widetilde{R}}^{G^{(m)}}(\sigma)$ we extend $\sigma$ trivially on $U_R$, which more precisely means on $\varphi_{U_R}(U_R)$. Since we are considering central coverings, $G$ acts on $G^{(m)}$ by conjugation. In particular \begin{align}\label{eq:covering conjugating canonical splitting} {}^h\varphi_Y(y)=\varphi_{{}^hY}({}^hy),\qquad\forall y\in Y. \end{align} We describe a general system of assumptions for covering groups, under which the doubling construction is well defined, then state the analog of Theorem~\ref{theorem:uniqueness}.
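For instance, for $m=2$ and $G=\Sp_c(F)$, an extension of this form is provided by the classical metaplectic double cover going back to \cite{Weil2}, \begin{align*} 1\rightarrow\mu_2\rightarrow\Sp_c^{(2)}\rightarrow\Sp_c(F)\rightarrow1. \end{align*} We will not use this special case explicitly; it is mentioned only to fix ideas.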
For the particular cases of the covering $\Sp_c^{(m)}$ of \cite{Mats} and the covering $\widetilde{\GL}_c$ obtained by restriction from $\Sp_{2c}^{(m)}$, these assumptions were verified in \cite{me12}. More details are given below, see also Corollary~\ref{corollary:covering integral props and gamma}. Fix a covering group $G^{(m)}$. Assume there is a covering $\widetilde{H}$ of $H$ (typically $\widetilde{H}=H^{(m)}$) with the following properties. \begin{enumerate}[leftmargin=*] \item \label{covering:GSpin center}For $H=\GSpin_{2kc}$, the preimage $\widetilde{C}_H^{\circ}$ of $C_H^{\circ}$ in $\widetilde{H}$ belongs to the center of $\widetilde{H}$, and $\widetilde{C}_H^{\circ}$ is split over $C_H^{\circ}$. The same properties are satisfied by the preimage $\widetilde{C}_G^{\circ}$ of $C_G^{\circ}$ in $\widetilde{G}$. \item \label{covering:G G e1, e_2}Let $\mathfrak{e}_1(g)=(g,1)$ and $\mathfrak{e}_2(g)=(1,g)$. These are the embeddings of $G$ into $M_Q$ in the linear case. Assume they extend to embeddings $\widetilde{\mathfrak{e}}_i:G^{(m)}\rightarrow\widetilde{\mathfrak{e}_i(G)}$. \item \label{covering:G G e_2}The restriction of $\widetilde{\mathfrak{e}}_2$ to $\mu_m$ is the identity. (Here we regard $\mu_m$ as a subgroup of $G^{(m)}$.) \item\label{covering:G G}The images of $\widetilde{\mathfrak{e}}_1$ and $\widetilde{\mathfrak{e}}_2$ commute in $\widetilde{H}$, and give rise to a homomorphism \begin{align}\label{eq:covering G X G in H} \{(\epsilon_1,\epsilon_2)\in\mu_m\times\mu_m:\epsilon_1=\epsilon_2\} \backslash G^{(m)}\times G^{(m)}\rightarrow \widetilde{M}_Q. \end{align} This is (automatically) an embedding unless $H=\GSpin_{2kc}$, in which case we further assume that $\widetilde{\mathfrak{e}}_1(z)\widetilde{\mathfrak{e}}_2(z)$ is the identity for $z\in C_G^{\circ}$, then the left hand side of \eqref{eq:covering G X G in H} is further divided by the subgroup $\{(z,z):z\in C_G^{\circ}\}$ (a subgroup by \eqref{covering:GSpin center}). Cf. \eqref{embedding G G in H GSpin}. Denote the left hand side of \eqref{eq:covering G X G in H} by $(G,G)^{(m)}$. \item \label{covering:GL Levi}For $H=\GL_{2kc}$, the preimages of the direct factors $\GL_{kc}$ of $M_P$ commute in $\widetilde{H}$, and the coverings $\widetilde{\GL}_{kc}$ of each copy of $\GL_{kc}$ are isomorphic. \item \label{covering:GLkc def}Identify $\widetilde{\GL}_{kc}$ with $\widetilde{M}_P$ if $H\ne\GL_{2kc}$, or with the covering of one of the copies of $\GL_{kc}$ in $M_P$ for $H=\GL_{2kc}$. Assume $\widetilde{\GL}_{kc}$ is split over $\SL_c^{\triangle}$. \item \label{covering:MP split over GL GL}For $H=\GL_{2kc}$, assume $\widetilde{M}_P$ is split over $\{\diag(g^{\triangle},g^{\triangle}):g\in\GL_c\}$. \item \label{covering:inv iota}The involution $\iota$ extends to an involution of $G^{(m)}$ and for a genuine representation $\pi$ of $G^{(m)}$, $(\pi^{\vee})^{\iota}=(\pi^{\iota})^{\vee}$. \item\label{covering:center of GLl}For any maximal parabolic subgroup $R<G$ whose Levi part contains $\GL_l$, the covering $\widetilde{\GL}_l$ has the property that for a sufficiently large integer $d$, the preimage of $C_{\GL_l}^d=\{x^d:x\in C_{\GL_l}\}$ belongs to the center of $\widetilde{\GL}_l$. \end{enumerate} First we use these properties to construct the basic data for the doubling method. Define $\widetilde{\GL}_{kc}$ by \eqref{covering:GLkc def}. Let $\rho$ be a genuine representation of $\widetilde{\GL}_{kc}$. 
We say that $\rho$ is a $(k,c)$ representation if $\Hom_{V(\sigma)}(\rho,\psi')=0$ for all $\sigma\succsim(k^c)$ and $\psi'\in\widehat{V}(\sigma)_{\mathrm{gen}}$, and $\dim\Hom_{V_{(c^k)}}(\rho,\psi_{k})=1$. By \eqref{covering:GLkc def}, the action of $\SL_c^{\triangle}$ on $J_{V_{(c^k)},\psi_k}(\rho)$ is well defined, then it is trivial. If $H\ne\GL_{2kc}$, let $\rho$ be a genuine admissible finite length $(k,c)$ representation of $\widetilde{\GL}_{kc}$. For $H=\GSpin_{2kc}$, by \eqref{covering:GSpin center} the irreducible representations of $\widetilde{C}_H^{\circ}$ are the lifts of quasi-characters of $F^*$ to genuine characters, therefore if $\eta$ is a quasi-character of $F^*$ which we regard also as a character of $\widetilde{C}_H^{\circ}$, the representation $\rho\otimes\eta$ is well defined. For $H=\GL_{2kc}$, by \eqref{covering:GL Levi} we have \begin{align*} \{(\epsilon_1,\epsilon_2)\in\mu_m\times\mu_m:\epsilon_1\epsilon_2=1\}\backslash \widetilde{\GL}_{kc}\times \widetilde{\GL}_{kc} \cong\widetilde{M}_P. \end{align*} Hence $\rho=\rho_1\otimes\rho_2$ is defined for genuine representations $\rho_1$ and $\rho_2$, which we take to be admissible finite length and $(k,c)$. The space $V(s,\rho\otimes\eta)$ is now defined as in \S~\ref{Doubling setup}, with induction from $\widetilde{P}$ to $\widetilde{H}$. If $H=\GL_{2kc}$, according to \eqref{covering:MP split over GL GL} there is a quasi-character $\chi_0$ of $F^*$, such that the action of $\{\diag(g^{\triangle},g^{\triangle}):g\in\GL_c\}$ on ${}^{\delta^{-1}}(J_{V_{(c^k)},\psi_k}(\rho_1)\otimes J_{V_{(c^k)},\psi_k}(\rho_2))$ is given by $g\mapsto\chi_0(\det g)$. Let $\pi_1$ (resp., $\pi_2$) be an anti-genuine (resp., genuine) finite length admissible representation of $G^{(m)}$. If $H=\GSpin_{2kc}$, we assume $\pi_1$ and $\pi_2$ admit central characters. Then by \eqref{covering:GSpin center} these characters restrict to genuine characters of $\widetilde{C}_H^{\circ}$, denoted $\chi_{\pi_1}$ and $\chi_{\pi_2}$, which we can identify with quasi-characters of $F^*$. We assume $\chi_{\pi_1}^{-1}=\chi_{\pi_2}$ and put $\eta=\chi_{\pi_1}^{-1}$. Consider the space \begin{align}\label{eq:covering Hom factor 1} \Hom_{(G,G)^{(m)}}(J_{U,\psi_U^{-1}}(V(s,\rho\otimes\eta)),\pi_1\otimes\pi_2). \end{align} The representation $V(s,\rho\otimes\eta)$ is a priori a representation of $(G,G)^{(m)}$ by \eqref{covering:G G}. Since $\pi_1$ is anti-genuine and $\pi_2$ is genuine, $\pi_1\otimes\pi_2$ factors through $(G,G)^{(m)}$, and it follows that \eqref{eq:covering Hom factor 1} is well defined. Recall $D=U\rtimes (G,G)$ and denote $D^{(m)}=U\rtimes (G,G)^{(m)}$. Then \eqref{eq:covering Hom factor 1} is isomorphic to \begin{align}\label{eq:covering Hom L 1} \Hom_{D^{(m)}}(V(s,\rho\otimes\eta),\psi_U^{-1}\otimes\pi_1\otimes\pi_2). \end{align} By \eqref{covering:G G e_2}, $V(s,\rho\otimes\eta)$ is a genuine representation of the right copy of $G^{(m)}$, and so is $\pi_2$. Combining \eqref{covering:G G e_2} with \eqref{covering:G G}, $\epsilon_1\in\mu_m$ is mapped to $\epsilon_1^{-1}$ under $\widetilde{\mathfrak{e}}_1$, whence $V(s,\rho\otimes\eta)$ is an anti-genuine representation of the left copy of $G^{(m)}$, as is $\pi_1$. Therefore the representation \begin{align*} V(s,\rho\otimes\eta)\otimes(\psi_U\otimes\pi_1^{\vee}\otimes\pi_2^{\vee}) \end{align*} of $D^{(m)}$ factors through $D$. 
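Concretely, for $\epsilon_1,\epsilon_2\in\mu_m$ the element $(\epsilon_1,\epsilon_2)$ acts on this tensor product by \begin{align*} (\epsilon_1^{-1}\cdot\epsilon_1)\cdot(\epsilon_2\cdot\epsilon_2^{-1})=1, \end{align*} where the four factors are contributed by $V(s,\rho\otimes\eta)$ and $\pi_1^{\vee}$ on the left copy of $G^{(m)}$, and by $V(s,\rho\otimes\eta)$ and $\pi_2^{\vee}$ on the right copy ($\psi_U$ does not contribute); this is precisely the statement that the representation factors through $D$.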
Hence \eqref{eq:covering Hom L 1} equals \begin{align*} &\Hom_{D}(V(s,\rho\otimes\eta)\otimes(\psi_U\otimes\pi_1^{\vee}\otimes\pi_2^{\vee}),1) \\&=\Hom_{D}(\Ind_{\widetilde{P}\times D^{(m)}}^{\widetilde{H}\times D^{(m)}}\left((|\det|^{s-1/2}\rho\otimes\eta)\otimes(\psi_U\otimes\pi_1^{\vee}\otimes\pi_2^{\vee})\right),1). \end{align*} (Cf. \eqref{eq:Hom L}.) Recall $P_{h}={}^{h^{-1}}P\cap D$. The covering $\widetilde{P}_{h}$ obtained by restriction from $\widetilde{H}$ coincides with the covering restricted from $D^{(m)}$, by \eqref{covering:G G}. The space of distributions on $\widetilde{P}hD^{(m)}$ corresponds to \begin{align*} \Hom_{D}(\ind_{\widetilde{P}_{h}}^{D^{(m)}}\left({}^{h^{-1}}((|\det|^{s-1/2}\rho\otimes\eta)\delta_P^{1/2})\otimes(\psi_U\otimes \pi_1^{\vee}\otimes\pi_2^{\vee})\right),1), \end{align*} which by the Frobenius reciprocity is equal to \begin{align}\label{covering H(h)} \mathcal{H}(h)=\Hom_{P_{h}}({}^{h^{-1}}(|\det|^{s-1/2}\rho\otimes\eta)\otimes(\psi_U\otimes \pi_1^{\vee}\otimes\pi_2^{\vee}),\theta_h). \end{align} (Cf. \eqref{H(h)}.) We can now use the theory of distributions on $l$-sheafs of \cite{BZ1}. Recall that the right action of $D$ on $P\backslash H$ is constructive, i.e., the graph of the action is a finite union of locally closed sets (see \cite[6.1--6.6]{BZ1} for more details on these notions). This follows from \cite[Theorem~A]{BZ1}, because $P\backslash H$ is an algebraic $F$-variety. Since \begin{align*} (\widetilde{P}\times D^{(m)})\backslash(\widetilde{H}\times D^{(m)})\cong \widetilde{P}\backslash\widetilde{H}\cong P\backslash H \end{align*} (as topological spaces), the right action of $D$ on $(\widetilde{P}\times D^{(m)})\backslash(\widetilde{H}\times D^{(m)})$ is also constructive, justifying the application of \cite[Theorem~6.9]{BZ1} (note that the action of $D^{(m)}$ on the quotient factors through $D$). The arguments of \S~\ref{outline} for showing the vanishing of $\mathcal{H}(h)$ also remain valid. We explain this in more detail. First, if $Y<{}^hU\cap M_P$, then by \eqref{eq:covering conjugating canonical splitting}, ${}^{h^{-1}}\varphi_Y(Y)=\varphi_{{}^{h^{-1}}Y}({}^{h^{-1}}Y)=\varphi_{U}({}^{h^{-1}}Y)$. Hence \eqref{eq:T on Jacquet} holds and any morphism in $\mathcal{H}(h)$ factors through $J_{Y,{}^{h}\psi_U^{-1}}(\rho)$. Condition \eqref{psi U nontrivial} is independent of the covering. Since $\rho\otimes\eta$ is trivial on $\varphi_{N_H}(U_P)$, the condition ${}^hY<U_P$ and \eqref{eq:covering conjugating canonical splitting} imply ${}^{h^{-1}}(|\det|^{s-1/2}\rho\otimes\eta)$ is trivial on $\varphi_Y(Y)$, then we can deduce $\mathcal{H}(h)=0$. The second method, where we show that any morphism in $\mathcal{H}(h)$ factors through $J_{V(\sigma),\psi'}(\rho)$ with $\sigma\succsim(k^c)$ and $\psi'\in\widehat{V}(\sigma)_{\mathrm{gen}}$, also implies $\mathcal{H}(h)=0$, as in the linear case. The only change concerns the third condition, where it is not necessarily true that the preimage of ${}^h(1,C_{\GL_l})$ in $\widetilde{H}$ acts by a character, because this preimage might not be abelian. However, we can instead use the preimage $\widetilde{C}_{\GL_l}^d$ of $C_{\GL_l}^d$ (for a large integer $d$), which is abelian and belongs to the center of $\widetilde{\GL}_l$, by assumption \eqref{covering:center of GLl}. Then $\widetilde{C}_{\GL_l}^d$ acts by a character on each irreducible constituent of $J_{U_R}(\pi_2^{\vee})$, and the preimage of ${}^h(1,C_{\GL_l}^d)$ in $\widetilde{H}$ acts by a character on each of finitely many constituents. 
The only change to \eqref{eq:relation for T with s} is that now we replace $a\in F^*$ with $a^d$, but this still implies the vanishing outside a discrete subset of $s$. Define \begin{align*} d(s,\rho,\eta,\pi_1,\pi_2)=\dim\Hom_{(G,G)^{(m)}}(J_{U,\psi_U^{-1}}(V(s,\rho\otimes\eta)),\pi_1\otimes\pi_2). \end{align*} We are ready to prove Theorem~\ref{theorem:uniqueness} for covering groups. \begin{theorem}\label{theorem:covering uniqueness} Let $\pi_1$, $\pi_2$ and $\rho$ be as above. \begin{enumerate}[leftmargin=*] \item\label{part1}Outside a discrete subset of $s$, $d(s,\rho,\eta,\pi_1,\pi_2)\leq\dim\Hom_{G^{(m)}}(\chi_0\pi_1^{\vee},\pi_2^{\iota})$. \item\label{part2}If $\pi_1$ and $\pi_2$ are irreducible, outside a discrete subset of $s$, $d(s,\rho,\eta,\pi_1,\pi_2)=0$ unless $\pi_1=\chi_0(\pi_2^{\iota})^{\vee}$ in which case $d(s,\rho,\eta,\pi_1,\pi_2)\leq1$. \end{enumerate} Furthermore, assume $\pi_2$ is supercuspidal and $\rho$ is not necessarily of finite length. Then the assertions of \eqref{part1} and \eqref{part2} hold for all $s$, granted either $H\ne\GL_{2kc}$ and $c>2$, or $H=\Sp_{4k}$. \end{theorem} \begin{remark} Evidently, there is no essential difference between the statements in the linear setting and the covering (for $m=1$, $G^{(m)}=G$), except the supercuspidal cases, where we excluded the conditions depending on $\rho$. This is because we are not discussing $\rho_c(\tau)$ for covering groups here; the definition of this representation is thus far clear only when $\tau$ is a genuine unramified principal series (see \cite{me12}). Once the details are worked out, the arguments here are expected to extend to these cases as well. \end{remark} \begin{proof} Since $\widetilde{P}\backslash \widetilde{H}/D^{(m)}=P\backslash H/D$, we can use the same description for the representatives $h$. The arguments of Propositions~\ref{proposition:structure of w u}--\ref{proposition:wu_0 nontrivial implies h nontrivial orbit} and Propositions~\ref{proposition:GL structure of w u}--\ref{proposition:GL wu_0 nontrivial implies h nontrivial orbit} extend to the covering. For Propositions~\ref{proposition:d_1 < n-l}, \ref{proposition:d1 = n-l l < n}, \ref{proposition:GL d_1 < n-l}, \ref{proposition:GL d1 = n-l l < n}--\ref{proposition:GL d1 = n-l j < l = c} we used two types of arguments. First, we showed that the Jacquet module $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ vanishes, because the order of nilpotency of $\varphi$ is at least $k+1$. The arguments involving the action of a normalizer on the set of characters of an abelian unipotent subgroup carry over to the covering. This is because in general, if a subgroup $A<H$ normalizes a unipotent subgroup $Y<H$, thereby acts on its set of characters, then $\widetilde{A}$ also acts on the set of characters of $Y$ with the same orbits (because $\widetilde{H}$ is split canonically over $Y$). The arguments of \cite[5.9--5.12]{BZ1} still apply. Then we used \cite[Theorems~A, E]{GGS}, in which strictly speaking covering groups were not discussed. However, one can still use conjugations and \cite[5.9--5.12]{BZ1} to show that $J_{V_{\beta},\psi_{V_{\beta}}^{-1}}(\rho)$ factors through a Jacquet module with respect to a unipotent orbit which is greater than or non-comparable with $(k^c)$. See Example~\ref{example:covering GGS} below. Second, we used \eqref{eq:relation for T with s}, which is still applicable with the minor change explained above. It remains to consider $\mathcal{H}(h)$ where $h\sim\delta$. Consider $H\ne\GL_{2kc}$. 
Then \begin{align*} \mathcal{H}(\delta)&=\Hom_{G^{\iota}\times C_H^{\circ}}({}^{\delta^{-1}}J_{V_{(c^k)},\psi_k}(\rho)\otimes\eta\otimes\pi_1^{\vee}\otimes\pi_2^{\vee},1). \end{align*} For $H=\GSpin_{2kc}$ the assumption $\eta=\chi_{\pi_1}^{-1}=\chi_{\pi_2}$ implies that this space equals \begin{align*} \Hom_{G^{\iota}}({}^{\delta^{-1}}J_{V_{(c^k)},\psi_k}(\rho)\otimes\pi_1^{\vee}\otimes\pi_2^{\vee},1). \end{align*} Then since $J_{V_{(c^k)},\psi_k}(\rho)$ is a trivial representation of $\SL_c^{\triangle}$ (see \eqref{covering:GLkc def}) and by virtue of \eqref{covering:inv iota}, \begin{align*} \mathcal{H}(\delta)=\Hom_{G^{\iota}}(\pi_1^{\vee}\otimes\pi_2^{\vee},1)=\Hom_{G}(\pi_1^{\vee}\otimes(\pi_2^{\vee}){}^{\iota},1)= \Hom_{G^{(m)}}(\pi_1^{\vee},\pi_2^{\iota}). \end{align*} For $H=\GL_{2kc}$ we first have \begin{align*} \mathcal{H}(\delta)&=\Hom_{G^{\iota}}({}^{\delta^{-1}}J_{V_{(c^k)}\times V_{(c^k)},\psi_k\otimes\psi_k}(\rho)\otimes\pi_1^{\vee}\otimes\pi_2^{\vee},1). \end{align*} The action of $G^{\iota}$ on ${}^{\delta^{-1}}(J_{V_{(c^k)},\psi_k}(\rho_1)\otimes J_{V_{(c^k)},\psi_k}(\rho_2))$ is given by $g\mapsto\chi_0(\det g)$, and we obtain $\Hom_{G^{(m)}}(\chi_0\pi_1^{\vee},\pi_2)$. The remaining parts of the proof now follow as in the linear case. \end{proof} \begin{example}\label{example:covering GGS} Consider a Jacquet module of a $(2,2)$ representation $\rho$ with respect to the unipotent subgroup $Y<\GL_4$ and character $\psi$ given by \begin{align*} Y=\left\{y=\left(\begin{smallmatrix} 1 & x_1 & x_2 & x_3 \\ & 1 & x_4 & x_5 \\ & & 1 & \\ & & & 1 \end{smallmatrix}\right)\right\},\qquad \psi(y)=\psi(x_1+x_5). \end{align*} It suffices to show the vanishing with respect to the subgroup of $Y$ with $x_4=0$. Using conjugation by $\diag(1,\left(\begin{smallmatrix}&1\\1\end{smallmatrix}\right),1)$, we obtain \begin{align*} Y'=\left\{y=\left(\begin{smallmatrix} 1 & x_2 & x_1 & x_3 \\ & 1 & & \\ & & 1 & x_5 \\ & & & 1 \end{smallmatrix}\right)\right\}. \end{align*} The Jacquet module $J_{Y',\psi}(\rho)$ ($\psi$ does not change) is a representation of \begin{align*} X=\left\{\left(\begin{smallmatrix} 1 & & & \\ & 1 & & x_6 \\ & & 1 & \\ & & & 1 \end{smallmatrix}\right)\right\}. \end{align*} The preimage of the subgroup $\{\diag(1,t,I_2):t\in F^*\}$, which also acts on $J_{Y',\psi}(\rho)$, acts on the set of characters of $X$ with $2$ orbits (for an action of $T_{\GL_2}$ use $\diag(t',t,t',t')$). Both orbits can be conjugated into $J_{Y'\rtimes X,\psi}(\rho)$ with still the same $\psi$ using $\diag(1,\left(\begin{smallmatrix}1&\\z&1\end{smallmatrix}\right),1)$. It remains to prove $J_{Y'\rtimes X,\psi}(\rho)=0$. Passing to the subgroup with $x_2=0$ and conjugating by $\diag(\left(\begin{smallmatrix}&1\\1\end{smallmatrix}\right),I_2)$, it is enough to prove $J_{Y'',\psi}(\rho)=0$ with \begin{align*} Y''=\left\{y=\left(\begin{smallmatrix} 1 & & & x_6 \\ & 1 & x_1 & x_3 \\ & & 1 & x_5 \\ & & & 1 \end{smallmatrix}\right)\right\},\qquad \psi(y)=\psi(x_1+x_5). \end{align*} As with $x_6$, one can fill in the missing coordinate above $x_1$ using \begin{align*} X'=\left\{\left(\begin{smallmatrix} 1 & & x_0 & \\ & 1 & & \\ & & 1 & \\ & & & 1 \end{smallmatrix}\right)\right\},\qquad \{\diag(t,I_3):t\in F^*\}, \qquad \diag(\left(\begin{smallmatrix}1&\\z&1\end{smallmatrix}\right),I_2). \end{align*} We have shown that $J_{Y,\psi}(\rho)$ factors through $J_{V_{(2,1,1)},\psi}(\rho)$. 
This module is filtered by the third and fourth derivatives of $\rho$ (in the sense of \cite{BZ1}), both of which vanish because $\rho$ is $(2,2)$. \end{example} We briefly describe the applicability of Theorem~\ref{theorem:covering uniqueness} to the construction of \cite{me12}. Henceforth assume $-1$ is an $m$-th root of unity in $F^*$ (this is a technical assumption, used in \cite{me12} and several other works, to simplify some of the computations). For any integer $l$, let $\Sp_{2l}^{(m)}$ denote the covering of \cite{Mats} defined using the $m$-th order Hilbert symbol $(,)_m$. For $\GL_l$, let $\GL_l^{(m)}$ denote the covering obtained by restriction from $\Sp_{2l}^{(m)}$, when we identify $\GL_l$ with the standard Siegel Levi subgroup of $\Sp_{2l}$ by $g\mapsto \diag(g,g^*)$. Let $r=m$ if $m$ is odd, otherwise $r=m/2$. Let $k_0$ be a positive integer, and put $k=rk_0$. The above list of properties were verified in \cite{me12} when $G=\Sp_c$ or $\GL_c$, for the covering $G^{(m)}$, with $\widetilde{H}=H^{(m)}$. \begin{remark} The group $\GL_l^{(m)}$ was denoted $\GL_l^{(m,r)}$ in \cite{me12}, to underline the difference between this covering and the coverings of \cite{KP}, and $k$ of \cite{me12} is $k_0$ here. \end{remark} Assume we have a $(k,c)=(rk_0,c)$ representation $\rho$ (admissible of finite length). It is at present not clear how to construct such representations in general (e.g., from representations of a covering of $\GL_k$), but in the unramified setting this was obtained in \cite{me12} (following \cite{Suzuki1998}). Note that here the ``unramified setting" includes the assumptions $|m|=1$ and $q>4$. Briefly, given a genuine unramified principal series representation $\tau$ of $\GL_{k_0}^{(m)}$, one can choose an unramified character of $T_{\GL_{k_0}}$ associated with the inducing data of $\tau$ (the correspondence is not unique). Using this character and an exceptional representation of $\GL_{rc}^{(m)}$ (exceptional in the sense of \cite{KP}, see \cite{Gao5}), the prescribed $\rho$ was constructed in \cite[\S~2.2]{me12}. For $H=\GL_{2kc}$, $\chi_0$ was taken to be trivial (see \cite[(3.34)]{me12}). Also let $\pi$ be a genuine irreducible representation of $G^{(m)}$. The integral $Z(s,\omega,f)$, with a holomorphic or rational section $f$, was defined in \cite{me12} (using notation similar to \S~\ref{Doubling setup}). Formally, it belongs to \eqref{eq:covering Hom factor 1} with $\pi_1=\pi^{\vee}$ and $\pi_2=\pi^{\iota}$. This was proved in \cite[Propositions~68, 75]{me12} (in \textit{loc. cit.} (3.21) and (3.36), $G^{(m)}\times G^{(m)}$ should be replaced with $(G,G)^{(m)}$). \begin{corollary}\label{corollary:covering integral props and gamma} $Z(s,\omega,f)$ admits meromorphic continuation to a function in $q^{-s}$. \end{corollary} \begin{proof} This follows from Theorem~\ref{theorem:covering uniqueness} and Bernstein's continuation principle (\cite{Banks}), see \cite[Remark~72]{me12} and \cite[\S~3.3]{me12} (cf. Corollary~\ref{coro:meromorphic continuation for doubling 1} here). \end{proof} \begin{corollary} One can define a local $\gamma$-factor $\gamma(s,\pi\times\tau,\psi)$ by virtue of \eqref{eq:gamma factor}. \end{corollary} Note that the additional normalization of the intertwining operator appearing in \eqref{eq:gamma factor} can be applied to the covering case as well; but we are not proving the multiplicativity properties of the $\gamma$-factor here, and at any rate, we are still limited to the unramified setting. 
The point here is that the proportionality factor exists. \subsection{Global unfolding}\label{Global unfolding} The global doubling construction in the linear case for arbitrary $k$ was first described in \cite{CFGK2} mainly for the symplectic group, with some details also for the special even orthogonal group, then briefly explained in \cite{CFK} for the other cases appearing here. The covering version for the symplectic group was described in \cite{me12}. Let $F_0$ be a number field with a ring of adeles $\A$. Let $\tau$ be an irreducible cuspidal automorphic representation of $\GL_k(\A)$, and $\mathcal{E}_{\tau}$ denote the generalized Speh representation of $\GL_{kc}(\A)$ corresponding to $c$ copies of $\tau$, constructed by Jacquet \cite{Jac4}. According to \cite{G2,JL2013,CFGK2,CFK}, the representation $\mathcal{E}_{\tau}$ is a global $(k,c)$ representation: it does not support any Fourier coefficient along an orbit greater than or non-comparable with $(k^c)$, it supports a Fourier coefficient along $(k^c)$, and all of its local components are $(k,c)$. See \cite[\S~2.2]{CFGK2} and the references therein for more details on the global notions. Moreover, if $\tau=\otimes_{\nu}'\tau_{\nu}$ as a restricted tensor product, $(\mathcal{E}_{\tau})_{\nu}=\rho_c(\tau_{\nu})$ for any place $\nu$ of $F_0$. One can readily globalize our arguments used for the proof of Theorem~\ref{theorem:uniqueness} to obtain a proof of the unfolding of the global doubling integral, for all of the groups under consideration here (and in \cite{CFK}). At the same time, since local vanishing of Jacquet modules implies global vanishing of the corresponding Fourier coefficients (even one local $(k,c)$ component suffices for this), the proof of Theorem~\ref{theorem:uniqueness} also provides a proof of the global unfolding. In addition we obtain the following corollary, which for brevity, is stated here in the symplectic or special orthogonal cases, but the other cases are evident as well. We use the notation and definitions of \S~\ref{Doubling setup}, in the global context. Let $K_H$ be a standard maximal compact subgroup, in a ``good position" with respect to $T_H$. Let $f$ be a $K_H$-finite section of $\Ind_{P(\A)}^{H(\A)}(|\det|^{s-1/2}\mathcal{E}_{\tau})$, whose restriction to $K_H$ is independent of $s$. We regard $f$ as a complex-valued function. Recall the definition \eqref{psi_U on V beta d1 n-l} of a character $\psi_{V_{\beta}}$ when $\beta=(c^k)$, defined with respect to $0\leq l\leq n$, which we re-denote here by $\psi_{(c^k),l}$ (in the context of \eqref{psi_U on V beta d1 n-l}, $l$ was fixed). In particular $\psi_{(c^k),n}$ is in the orbit of $\psi_k^{-1}$ ($\psi_{(c^k),n}=\psi_k^{-1}$ when $c$ is even). For $k=1$, $\psi_{(c^k),l}$ is trivial. Then we have the Fourier coefficients of $f$ along $(V_{(c^k)},\psi_{(c^k),l})$, defined by \begin{align*} &f^{V_{(c^k)},\psi_{(c^k),l}}(s,x)=\int\limits_{V_{(c^k)}(F_0)\backslash V_{(c^k)}(\A)} f(s,vx)\,\psi_{(c^k),l}(v)\,dv. \end{align*} In particular $f^{V_{(c^k)},\psi_{(c^k),n}}$ is the coefficient $f_{W_{\psi}(\mathcal{E}_{\tau})}$ appearing in \cite[Theorem~1]{CFGK2}, i.e., the composition of $f$ with the global $(k,c)$ functional on the space of $\mathcal{E}_{\tau}$ given by a Fourier coefficient (if $c$ is odd, this is true up to a conjugation which identifies $\psi_{(c^k),n}$ with $\psi_k^{-1}$). 
The Eisenstein series corresponding to $f$ is defined by \begin{align}\label{eq:Eisenstein series main} E(x;s,f)=\sum\limits_{\gamma\in P(F_0)\backslash H(F_0)}f(s,\gamma x),\qquad x\in H(\A). \end{align} The series is absolutely convergent for $\Real(s)\gg0$ and admits meromorphic continuation to $\C$. Consider the Fourier coefficient of $E(x;s,f)$ along $(U,\psi_U)$, given by \begin{align}\label{eq:U psi U coefficient of the series} E^{U,\psi_U}(x;s,f)=\int\limits_{U(F_0)\backslash U(\A)}E(ux;s,f)\psi_U(u)\,du. \end{align} The definitions imply that $E^{U,\psi}(\cdot;s,f)$ is an automorphic form on $G(\A)\times G(\A)$. For $0\leq l\leq n$, let $w_l$ be the representative $w$ chosen after the proof of Proposition~\ref{proposition:2nd reduction of w} (used for the computation of \eqref{eq:beta}), but with $d_1=\ldots=d_{k-1}=n-l$. Using Example~\ref{eq:example k=2,3} we see that \begin{align*} w_l=\left(\begin{smallmatrix}0&0&0&0&I_l&0\\ 0&0&I_{c-l}&0&0&0\\0&0&0&0&0&I_{(k-1)c}\\\epsilon_0I_{(k-1)c}&0&0&0&0&0\\0&0&0&I_{c-l}&0&0\\0&\epsilon_0I_l&0&0&0&0\end{smallmatrix}\right) \jmath_{(k-1)c+l}. \end{align*} A quick computation implies ${}^{w_l^{-1}}V_{(c^k)}={}^{w''}V_{(c^k)}$, where \begin{align*} w''=\jmath_{(k-1)c+l}\diag(I_{(k-1)c+l},\left(\begin{smallmatrix}& I_{c-l} \\\epsilon_0I_{c-l} & \end{smallmatrix}\right),I_{(k-1)c+l}). \end{align*} Then $U={}^{w_l^{-1}}V_{(c^k)}\ltimes U_{n-l}$ for the subgroup \begin{align*} U_{n-l}={}^{w''}(U\cap U_P)= {}^{{}^{\jmath_{(k-1)c+l}}}\left(\begin{smallmatrix} I_{(k-1)c} & 0& u_1 & 0 & u_2 & u_3\\ & I_l &&&& u_2' \\ & & I_{c-l} &&& 0\\ & & & I_{c-l}&&u_1' \\ &&&& I_l &0\\ &&&&&I_{(k-1)c} \end{smallmatrix}\right) \end{align*} (if we replace $w_l$ by $\delta_0$, $U_0=U\cap {}^{\jmath_{kc}}U_P$), and $P_{w_l}\cap U={}^{w_l^{-1}}V_{(c^k)}$. Denote $P_{w_l}'=P_{w_l}\cap (G,G)$. \begin{corollary}\label{corollary:unfolding of the series} In $\Real(s)\gg0$, \begin{align*} E^{U,\psi_U}(x;s,f)= \sum_{l=0}^n \sum\limits_{y\in P_{w_l}'(F_0)\backslash (G(F_0),G(F_0))}\, \int\limits_{U_{n-l}(\A)}f^{V_{(c^k)},\psi_{(c^k),l}}(s,w_l({}^{\jmath_l}u_l)uyx)\psi_U(u)\,du. \end{align*} \end{corollary} \begin{proof} We can assume $x=I_{2kc}$. Write the sum \eqref{eq:Eisenstein series main} over $P(F_0)\backslash H(F_0)/D(F_0)\times P_h(F_0)\backslash D(F_0)$. In a right half plane we can exchange summation and integration. Thus \begin{align*} E^{U,\psi_U}(I_{2kc};s,f)=\sum\limits_{h\in P(F_0)\backslash H(F_0)/D(F_0)}\,\int\limits_{U(F_0)\backslash U(\A)}\, \sum\limits_{y\in P_h(F_0)\backslash D(F_0)}f(s,hyu)\psi_U(u)\,du. \end{align*} Next because $u\in M_Q$ and $P \cap {}^wQ=(P \cap {}^wM_Q)\ltimes (P \cap {}^wU)$, we have \begin{align*} {}^{h^{-1}}P \cap Q={}^{h^{-1}}(P \cap {}^wQ)= ({}^{h^{-1}}P \cap M_Q)\ltimes({}^{h^{-1}}P \cap U). \end{align*} Since $P_h<Q$ and $(G,G)<M_Q$, we deduce \begin{align*} P_h=(P_h\cap (G,G))\ltimes (P_h\cap U)=P'_h\ltimes P''_h, \end{align*} whence we can collapse the $du$-integral, exchange $yu\mapsto uy$ and take the integral inside: \begin{align*} E^{U,\psi_U}(I_{2kc};s,f)=\sum\limits_{h\in P(F_0)\backslash H(F_0)/D(F_0)}\, \sum\limits_{y\in P'_h(F_0)\backslash(G(F_0),G(F_0))}\, \int\limits_{P''_h(F_0)\backslash U(\A)} f(s,huy)\psi_U(u)\,du. \end{align*} Now the proof of Theorem~\ref{theorem:uniqueness}, more specifically Propositions~\ref{proposition:structure of w u}, \ref{proposition:1st reduction of w}--\ref{proposition:d_1 < n-l}, imply the inner $du$-integral vanishes unless $h\sim w_l({}^{\jmath_l}u_l)$, $0\leq l\leq n$. 
The corresponding summand is \begin{align*} \sum\limits_{y\in P_{w_l}'(F_0)\backslash (G(F_0),G(F_0))}\, \int\limits_{U_{n-l}(\A)}f^{V_{(c^k)},\psi_{(c^k),l}}(s,w_l({}^{\jmath_l}u_l)uy)\psi_U(u)\,du. \end{align*} This completes the proof. \end{proof} Now let $\pi_1$ and $\pi_2$ be irreducible cuspidal automorphic representations of $G(\A)$, and $\varphi_1$ and $\varphi_2$ be two cusp forms in the corresponding spaces. Assume $G$ admits nontrivial unipotent subgroups (i.e., exclude some low rank cases). Denote ${}^{\iota}\varphi_2(g)=\varphi_2({}^{\iota}g)$ and \begin{align*} \langle\varphi_1,\varphi_2\rangle=\int\limits_{G(F_0)\backslash G(\A)}\varphi_1(g)\overline{\varphi_2(g)}\,dg. \end{align*} Then by Corollary~\ref{corollary:unfolding of the series} and Lemma~\ref{lemma:Jacquet module is a trivial rep of U_R}, \eqref{eq:U psi U coefficient of the series} pairs with $\varphi_1$ and $\varphi_2$, in the sense that for $\Real(s)\gg0$, \begin{align}\label{eq:main identity} &\int\limits_{G(F_0)\backslash G(\A)\times G(F_0)\backslash G(\A)}\varphi_1(g_1)\overline{{}^{\iota}\varphi_2(g_2)} E^{U,\psi}((g_1,g_2);s,f)\,dg_1\,dg_2 \\&=\int\limits_{G(\A)}\int\limits_{U_0(\A)}<\varphi_1,\pi_2(g)\varphi_2> f^{V_{(c^k)},\psi_{(c^k),n}}(s,\delta u_0(1,{}^{\iota}g))\psi_U(u_0)\,du_0\,dg.\notag \end{align} Indeed consider one of the summands appearing in Corollary~\ref{corollary:unfolding of the series} with $l<n$. Set $U_R^l={}^{w_l({}^{\jmath_l}u_l)}(1,{}^{\jmath_{(c+1)l}}U_R)$, with the notation of the proof of Proposition~\ref{proposition:d1 = n-l l < n}. The Fourier coefficient of $f$ along $(V_{(c^k)},\psi_{(c^k),l})$ is left invariant under $U_R^l(\A)$. To see this consider a second Fourier expansion of this coefficient, along $U_R^l$. All terms but the constant one vanish, because by Lemma~\ref{lemma:Jacquet module is a trivial rep of U_R}, at the non-archimedean places $v$ of $F_0$ the action of $U_R^l((F_0)_v)$ on $J_{V_{(c^k)},\psi_{(c^k),l}^{-1}}((\mathcal{E}_{\tau})_{v})$ is trivial. Since $\pi_2$ is cuspidal, the summand itself vanishes. Of course \eqref{eq:main identity} is plainly the main global identity of \cite{CFGK2}: the left hand side is the global doubling integral $Z(s,\varphi_1,\varphi_2,f)$, and it is nontrivial when $\pi_1=\pi_2$ according to the computations of the local integrals appearing in the Euler product on the right hand side. One can include low rank cases, e.g., $c=2$ and $G=\SO_2$, by globalizing the argument from Lemma~\ref{lemma:Jacquet module is a finite length} (the constant term of the Eisenstein series defining $\mathcal{E}_{\tau}$ along $V_{(1,kc-1)}$ vanishes when $k>1$). The low rank arguments of Propositions~\ref{proposition:GL d1 = n-l l < n} and \ref{proposition:GL d1 = n-l j < l = c} can be globalized using the constant term computation of $\mathcal{E}_{\tau}$ given by \cite[Lemma~4.1]{JngLiu}. The results of this section also hold in the covering case of \cite{me12}, but to formulate them properly one must check the validity of certain properties of the global covering, which are the analogs of the list from \S~\ref{Covering groups} (this was carried out in \cite{me12}).
\section*{Nomenclature} \noindent\begin{tabular}{@{}lcl@{}} \textit{$\lambda$} &=& wavelength\\ \textit{$\Delta\lambda$} &=& spectral bandwidth of filter\\ \textit{$\tau$} &=& pulse duration\\ \textit{$T$} &=& pulse interval\\ \textit{$P_{peak}$} &=& peak emitted optical power\\ \textit{$P_{ave}$} &=& average emitted optical power\\ \textit{$\varepsilon_{\textrm{DQE}}$}&=& detector quantum efficiency, the ratio of detected to incident photons\\ \textit{$R_{dark}$}&=&detector dark rate\\ \textit{$\Omega$} &=& emission solid angle\\ \textit{$D$} &=& telescope effective aperture diameter\\ \textit{r} &=& range from satellite to ground station\\ \si{\dbm} &=& decibels referenced to \SI{1}{\milli\watt}\\ \si{\dbg} &=& decibels referenced to \num{1} \si{photon\per\s}\\ \end{tabular} \\ \section*{Terminology and Acronyms} \noindent\begin{tabular}{@{}lcl@{}} \textit{ELROI} &:& Extremely Low Resource Optical Identifier\\ \textit{\$SWaP} &:& Cost, Size, Weight, and Power \\ \textit{HRT} &:& Horizontal Range Test\\ \textit{ID} &:& Satellite Identification Number\\ \textit{LEO} &:& Low Earth Orbit\\ \textit{RFI} &:& Radio Frequency Interference\\ \textit{STC} &:& Space Traffic Control\\ \textit{SOI} &:& Space Object Identification\\ \textit{SSA} &:& Space Situational Awareness\\ \end{tabular} \\ \section{Introduction} Space is crowded---with thousands of manned spacecraft, active and retired payloads, rocket bodies, explosion fragments, and other orbiting space objects---and getting more so. Knowing where satellites are is essential for preventing collisions, and useful for satellite owners and operators. The current publicly available Space-Track catalog \cite{spacetrack} lists over 16,000 objects along with their orbital elements. However, this list is restricted to those objects that have been continuously tracked since launch to maintain ``track custody,'' or those that have been confidently identified by other means. Track custody can be lost when there are interruptions in observations, when solar activity causes sudden increases in atmospheric drag, when there are unexpected maneuvers, or when two satellite orbits become confusingly close. When an unidentified satellite then reappears in radar and optical observations, it can't be put back into the catalog unless multiple observations can quantify its orbit and track it back to a previously-lost satellite. Even when a satellite is tracked from the moment it is released from its launcher, it may not be fully identified. Individual rocket launches can release dozens to more than a hundred satellites (particularly CubeSats) at a time \cite{Foust2017a,Foust2017}. Shortly after launch, a satellite operator may know only that their satellite is one of the many ``OBJECT XX'' entries in the catalog, making it difficult to contact and control the spacecraft. Space Object Identification (SOI) is easier if a satellite carries an identifying beacon that can be read from the ground. Ideally, this beacon would consume few resources and would last the entire orbital lifetime of the satellite. There is currently no standard or widely accepted technology to provide such a beacon, but several options are under consideration, including conventional radio beacons combined with GPS, radio tags activated by a ground signal, and passive reflectors \cite{Ewart2016}. Radio is the most developed option, but radio transmitters that can be received at orbital distances tend to be heavy and power-hungry. 
Furthermore, if they operate continuously they can cause radio frequency interference (RFI), so they are turned off when a satellite is decommissioned. Thus, radio beacons do not enable SOI for post-mission satellites, and they are also not suitable for passive debris objects such as rocket boosters and interstages. Conventional optical beacons (LEDs, for example) avoid the problem of RFI, but still require significant power to be visible from the ground with traditional detection methods. The Extremely Low Resource Optical Identifier (ELROI) beacon is a new concept for a low-power optical ``license plate'' that can be attached to anything that goes into space \cite{Palmer2015,Palmer2016} (Figures \ref{figoverview} \& \ref{figwaveform}). The ELROI beacon encodes a unique ID number in short, omnidirectional flashes of laser light, which can be read from the ground with only a few photons per second using single-photon detection and an innovative background-rejection technique. The remarkably low power level required for photon-counting optical communications has already been noted as a possibility for low power communication, even at interstellar distances \cite{Howard2000}. This strategy allows the beacon to be low cost, small, lightweight, and low power, so it fits the \$SWaP (Cost, Size, Weight, and Power) budget, even at the CubeSat level. The power requirements are low enough that the beacon can be self-powered with a small solar cell, which makes system integration simple, and allows the beacon to be flown on inert debris objects without power, communications, data, or attitude control systems. The beacon does not produce RFI, and may be operated continuously throughout the on-orbit lifetime of the satellite. \begin{figure}[p!] \includegraphics[width=0.75\textwidth]{operation-fig.eps} \caption{Overview of the ELROI system. The beacon is attached to a satellite and continuously emits its optical signal---encoding a unique ID number---over a wide solid angle. A ground telescope collects a small portion of the emitted photons, which are detected with a photon-counting sensor. A narrowband filter centered on the beacon wavelength rejects background light. The recorded data (circular inset) consists of a list of photon detection times at a tracked location (green circle). Streaks represent background stars. The data analysis technique uses the timing characteristics of the ELROI signal to eliminate more than 99\% of background photons, making it possible to read the ID in a single pass even if the signal is only a few photons per second.} \label{figoverview} \end{figure} \begin{figure}[p!] \includegraphics[width=0.75\textwidth]{signal-characteristics.eps} \caption{Representative characteristics of the ELROI signal. The spectral bandwidth of the signal is about 1~nm, and the example wavelength shown is 638~nm (other wavelengths may also be suitable, as discussed in \S\ref{sect:design}). The width of the optical pulses is $\tau = 2\mu$s, and the interval between pulses is $T = 0.5$ ms. If the instantaneous peak power is 1 W and half the bits contain a pulse, the average power with this duty cycle is 2 mW, and a 128-bit ID repeats every 64 ms.} \label{figwaveform} \end{figure} The ELROI signal consists of short, bright pulses of monochromatic light clocked at a fixed period. Each period starts with either a flash of light (encoding a 1 bit), or no flash (a 0), as shown in Figure \ref{figwaveform}. The ID repeats several times a second. 
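The arithmetic implied by these nominal parameters is summarized in the short sketch below; the parameter values are those of Figure \ref{figwaveform}, the script and its variable names are purely illustrative, and the same arithmetic is spelled out in prose below.
\begin{verbatim}
# Timing arithmetic for the nominal ELROI waveform of Figure 2 (illustrative only).
tau = 2e-6            # pulse duration (s)
T = 0.5e-3            # pulse interval (s)
p_peak = 1.0          # peak emitted optical power (W)
n_bits = 128          # ID length (bits)
ones_fraction = 0.5   # fraction of bits that contain a pulse

duty_cycle = ones_fraction * tau / T      # 1/500
p_ave = p_peak * duty_cycle               # 2 mW
id_period = n_bits * T                    # 64 ms

print(f"duty cycle = 1:{1 / duty_cycle:.0f}, "
      f"average power = {p_ave * 1e3:.1f} mW, "
      f"ID repeats every {id_period * 1e3:.0f} ms")
\end{verbatim}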
The peak power of the light source (a laser diode) is in the 1 W range, but the average power is much lower. If the pulse duration and interval are as shown in Figure \ref{figwaveform} and half the bits are ones, the duty cycle of the laser diode is 1:500, and a peak power of 1 W gives an average power of just 2 mW. This means that power for the beacon can be provided by a few square centimeters of solar cell, making it independent of the host satellite. The ELROI signal is intentionally open and accessible to anyone with a ground station that can be built from commercial off-the-shelf components. The identification number (ID) of each beacon will be stored in an open registry, along with contact information for its operator and other relevant details. This allows ELROI to be adopted as an international standard, read by ground stations around the world to assist in the worldwide problem of space traffic control (STC) and space situational awareness (SSA). The beacon can transmit additional data beyond the ID, giving satellite operators a backup channel for anomaly resolution and other diagnostic purposes. This, along with the benefit to spacecraft operators of being able to identify their own satellites, can drive adoption of ELROI even in the absence of international norms or mandates. The first ELROI prototype has been delivered and integrated with a spacecraft (Figure \ref{fig:prototypes}a) scheduled for launch in 2018. A fully autonomous module (Figure \ref{fig:prototypes}b), suitable either for attachment to a host satellite or for launch as a 1/2U CubeSat free-flyer, is currently being designed and will be produced in quantity (6 for the initial production run) to support multiple flight opportunities as they present themselves. An optimized beacon will eventually be miniaturized to a package the size of a thick postage stamp---a few square centimeters in area and a few millimeters thick---that can be mass produced to support the global launch rate (Figure \ref{fig:prototypes}c). \begin{figure}[p!] \subfigure[NMTSat]{\label{fig:a}\includegraphics[width=0.3\textwidth]{ELROI_NMT_MIDREZ.eps}} \subfigure[ELROI-UP]{\label{fig:b}\includegraphics[width=0.3\textwidth]{ELROI-UP-final.eps}} \subfigure[ELROI 1.0]{\label{fig:c}\includegraphics[width=0.3\textwidth]{ELROI-1-mockup-scale.eps}} \caption{ELROI hardware development stages. {\bf a.} ELROI-PC104 unit to be incorporated into a CubeSat in the PC-104 form factor. {\bf b.} CAD drawing of the ELROI-UP unit, which can operate fully autonomously when attached to a host object or as a free-flyer. {\bf c.} Illustrative mock-up of a miniaturized ELROI unit, suitable for any LEO CubeSat or other small LEO space object.} \label{fig:prototypes} \end{figure} The ELROI concept has also been verified in ground-to-ground tests. In these tests, two units operating at 638 nm with average power less than 0.3 \si{\milli\watt} were observed at a distance of 15 km, and photon detection rates in the range of 0.1--40 \si{photons\per\second} were measured, depending on lens aperture and attenuation. As discussed in detail in \S \ref{hrt}, these measurements are consistent with the predicted detection rate, and the same model predicts a rate of approximately 3.3 signal \si{photons\per\second} from a LEO range of 1000 km (Appendix B). The encoded ID was successfully extracted from the ground-to-ground test data, with reliability depending on the total number of photons.
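As a rough cross-check of the LEO figure just quoted, the sketch below reproduces the order of magnitude of the Appendix B link budget for the signal rate. The aperture and detector efficiency are those quoted elsewhere in this paper for the LANL ground station, and the single lumped transmission factor standing in for filter, optics, and atmosphere is an assumption made only for illustration; the actual budget in Appendix B treats the loss terms individually.
\begin{verbatim}
import math

# Order-of-magnitude photon-rate estimate for a 2 mW beacon seen from LEO.
# The lumped 'losses' factor (filter + optics + atmosphere) is an assumption
# chosen only for illustration.
h, c = 6.626e-34, 3.0e8       # Planck constant (J s), speed of light (m/s)
wavelength = 638e-9           # beacon wavelength (m)
p_ave = 2e-3                  # average emitted optical power (W)
omega = 2 * math.pi           # emission solid angle (sr): one hemisphere
r = 1000e3                    # range to the ground station (m)
D = 0.35                      # telescope effective aperture diameter (m)
dqe = 0.039                   # detector quantum efficiency
losses = 0.8                  # assumed lumped transmission factor

emitted = p_ave / (h * c / wavelength)       # photons emitted per second
flux = emitted / (omega * r**2)              # photons per second per m^2 at range r
collected = flux * math.pi * (D / 2)**2      # photons per second entering the telescope
detected = collected * dqe * losses          # detected counts per second

print(f"predicted signal rate: {detected:.1f} counts/s")
# about 3 counts/s, the same order as the Appendix B prediction
\end{verbatim}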
Scaling the measured photon detection rate to simulate the LEO case shows that approximately 100-160 seconds of observation at the LEO rate is sufficient to reduce the misidentification rate to less than 1 in a billion, making it practical to read the ID in a single satellite pass over the ground station. ELROI has been developed at Los Alamos National Laboratory (LANL), growing out of our experience both with space systems and with single-photon imaging. Initial testing will use a pre-existing ground station consisting of a commercial telescope (a Celestron 35 cm aperture telescope on a Software Bisque Paramount) and a LANL-developed single-photon imaging camera \cite{Priedhorsky2005,Thompson2013}. However, any other groups that wish to use their own systems to identify ELROI beacons are invited and encouraged to do so. In this paper, \S \ref{sect:concept} describes the ELROI concept and briefly outlines the design and operations of the beacon and the ground station. \S \ref{sect:design} discusses some of the design details that allow a low-power signal to be transmitted, what trade-offs and optimizations are available to make this a practical system, and what else can be done with ELROI. Our progress so far in developing this, with both ground tests and current and future flight hardware, is discussed in \S \ref{sect:status}. The Conclusion, \S \ref{sect:conclusion}, summarizes the paper and discusses how ELROI can develop to become a global standard. Appendix A shows how to computationally extract the ELROI ID from the ground station's data, and Appendix B works through an example link budget to demonstrate the feasibility of this concept. \section{Concept} \label{sect:concept} ELROI uses a small optical beacon that can be attached to any object that goes into space (Figure \ref{figoverview}). This beacon produces flashes of light that encode an ID number, providing a ``license plate'' that uniquely identifies the satellite. The flashing light can be detected and read by a small ground telescope with a single-photon detector, allowing anyone to identify the satellite. The specific characteristics of the light flashes allow the ID to be read even if only a few photons are detected per second, and in the presence of background. The optical signal is diffused to a large solid angle (almost a full hemisphere) so that spacecraft attitude control or pointing is not required. The receiver telescope tracks the satellite, using its known orbital elements. The light is spectrally filtered to select only those photons at the beacon wavelength. A single-photon detector records the arrival time of each photon, and the list of times is then analyzed to reconstruct the ID by determining which time bins contain 1's and 0's. This data analysis is described in more detail in Appendix A. The combination of the beacon signal characteristics and the ground station design enables background rejection and signal extraction techniques that allow an extremely low-power signal to be recovered. Unlike a radio antenna, an optical telescope has imaging capability that can reject all background sources more than a small fraction of a degree from the beacon. The spectral purity of the laser diodes allows narrow-bandwidth filters to reject almost all of the non-beacon light from the sky and the satellite itself. The use of single-photon detection and timing brings the signal into the digital domain, eliminating analog noise and allowing signal integration over the entire observation time.
The extremely low duty cycle and strict periodicity of the beacon allow a phase cut that rejects the background light that doesn't precisely match the timing of the beacon. Finally, the use of an error-correcting coding scheme for the ID makes it very unlikely that one ID will be mistaken for another, even when the values of some bits are uncertain. \subsection{System Design Overview} More detailed design considerations are discussed in \S \ref{sect:design}. The electronics required for the beacon are quite simple, as shown in the block diagram of Figure \ref{figelectronics}. The power system is remarkable mainly for its low capacity requirements compared to conventional satellite systems. The pattern generator can be a very simple microcontroller, FPGA, or ASIC. The pulse driver circuitry drives a laser diode, which provides highly monochromatic light with good efficiency. This light is diffused to cover a wide solid angle so that light reaches any ground station in its field of view without any pointing requirements placed on the host. \begin{figure}[p!] \centering \includegraphics[width=0.6\textwidth]{beacon-block-diagram.eps} \caption{Components required for the beacon. A small solar cell provides power independent of the host satellite, with the battery allowing operation during orbital night. A pattern generator supplies the pulse sequence to a pulse driver, which powers the laser diode that emits the optical signal. The light is diffused into a wide solid angle so that the beacon does not need to be pointed directly at the ground station.} \label{figelectronics} \end{figure} \begin{figure}[p!] \centering \includegraphics[width=\textwidth]{ground-block-diagram.eps} \caption{Components required for the ground station. A tracking system drives a telescope equipped with a photon-counting sensor. If a single- or few-pixel detector is used, tracking requirements are more stringent than for an imaging sensor, and a separate tracking camera may be required. A time-tagger records the time of each photon detection. (For a single-photon camera, the location of each detection is also recorded.) The signal is decoded by acquiring the carrier frequency, rejecting out-of-phase background photons, and assigning a value to each bit based on the number of photons detected. The resulting bit values are compared to the entries in a registry to find the best match to an existing ID.} \label{figgroundstation} \end{figure} The ground station, shown in Figure \ref{figgroundstation}, uses a telescope to concentrate the light from the satellite and reject light from other parts of the sky. This telescope must be mounted so that it can track satellites, based on supplied orbital elements. The collected light is further filtered by wavelength, using a narrow-band spectral filter at the beacon wavelength. This rejects the majority of background light at other wavelengths (from the sky and from sunlight reflecting from the satellite). The filtered light is then recorded with a ``photon-counting detector.'' This is a detector that, minimally, emits a pulse that triggers a time-recording circuit every time it detects a photon. The detector may also have some position sensitivity or imaging capability, which, as we shall see in \S\ref{subsect:mountdetectortradeoff}, affects the requirements on the telescope and its tracking system.
The data from the photon-counting detector---a list of photon times, including both beacon photons and background photons---is then analyzed to determine the ID of the beacon, as described in Appendix A. \subsection{Operations} The ELROI beacon requires a targeted observation to read an ID. It is intended as an adjunct to persistent satellite tracking operations by radar and optical methods that maintain and update a self-consistent catalog. It will typically be used for objects where the identity is unknown or questionable, such as immediately after launch when multiple satellites must be matched to multiple object detections. The existence of an ELROI beacon on an object relaxes the demands on the persistent tracking systems, because the consequences of a lost track are diminished if the identity can be recovered. Due to sky background, the typical ground station can only operate at night. Observations will require clear skies, at least over a significant portion of the apparent path of the satellite, but atmospheric turbulence (seeing) is unlikely to have much of an effect on receiver sensitivity. The narrow field of view of the telescope will require that the satellite's orbit be known in advance. The required orbit prediction accuracy is determined by the field of view of the photon-counting detector, and by whether sunlight on the satellite allows closed-loop tracking. For larger satellites, observations in eclipse will reduce the background and allow for faster identification, at the cost of requiring accurate ``blind'' pointing without seeing the satellite itself in reflected sunlight. Each ground station is expected to observe many different satellites during their respective passes above the horizon. The cost of an individual ground station (likely to be in the range of 0.1--1 million dollars), and the skills required to operate it, will be beyond the practical capability of most individual CubeSat operators. It would be wasteful to develop a ground station and use it only for a single satellite (which might be visible from a given ground station for only a few minutes a day), and that restricted usage would require that the satellite's identity already be known. Thus ELROI identification is probably best implemented by specialized ground station operators who contact the operator of each satellite as it is identified, as either a commercial or governmental service. A full implementation of ELROI for global Space Traffic Control (STC) will require a worldwide network of ground stations to provide prompt identifications for a variety of orbital geometries, regardless of the weather and other conditions at individual ground stations. Different locations, or different ground stations at a single location, can make different design choices for the telescope, mount, and detector to cover the diversity of expected situations. \section{Design and Trade-offs} \label{sect:design} \subsection{Beacon Design} \label{subsect:beacondesign} The optimal ELROI beacon signal characteristics---wavelength, peak power, pulse width and separation, etc.---will depend on trade-offs among the resources that will be devoted to each beacon (\$SWaP), the resources for each ground system (primarily cost), the characteristics of the satellite, and the required reliability of identification. Some of these characteristics must be standard for all ELROI units, while others can vary from satellite to satellite.
The strictest technical requirements for an ELROI standard are merely that the beacon produce sufficient light at an agreed-upon wavelength, so that a telescope with a narrow-band filter and a photon-counting detector can record the photons. Variation in the details of the timing and coding scheme can be drawn from a small set of standards, which can evolve with time as more experience is gained and more capabilities are required. (The computational power required to process a list of photon time-tags is minimal, allowing the data to be tested against each of the standards in use to find the correct one.) Semiconductor laser diodes at the \si{watt} scale are readily available in rugged packages for a variety of technologies, and are relatively resistant to radiation at the levels expected in a LEO environment \citep{Johnston2000,Phifer2004,Ott2006}. The planned flight tests (\S \ref{pc104}--\ref{elroi1}) include a variety of laser diodes and, along with additional environmental tests, will help determine final part selection. Because this application does not require coherence, multiple independent emitters can be combined to provide the desired optical power. This allows $P_{peak}$ to be scaled to arbitrary levels by merely adding more emitters. Increasing $P_{peak}$ while keeping $P_{ave}$ constant (a beacon with shorter, brighter pulses) will reduce the number of in-phase background counts while keeping the source counts the same. Thus, for larger (brighter) satellites, higher $P_{peak}$ laser diodes may be optimal if the package size is determined by the solar cell and battery (and hence $P_{ave}$). For smaller satellites, reduction of the already low in-phase background gives less advantage and may not be worth the increased circuitry required for higher $P_{peak}$. Different laser technologies will have different characteristics, including power efficiency, availability of components, robustness, radiation hardness, and wavelength variability. They will also have different wavelengths, which affects both the propagation characteristics and the availability and efficiency of detectors. (The choice of $\lambda =$ 638 nm and 450 nm laser diodes for the initial flight units is largely determined by the wavelength-dependent sensitivity of the pre-existing LANL ground station and camera.) If the beacon is a single unit mounted on a flat surface, this restricts emission to a hemisphere ($\Omega = 2\pi$ \si{\steradian}) which for a randomly-oriented craft leads to a 50\% probability of being observable from a ground station at any given time. This can be addressed in a number of ways: the emitter may be mounted on a corner of the satellite or at the end of a solar panel to provide a wider view; multiple emitters or multiple beacons can be mounted around the spacecraft; or the restricted emission can be accepted as a factor that reduces but does not eliminate the probability of a valid ID. If, for example, the satellite is expected to be freely tumbling so that its emitter is visible to the ground station for half of a typical pass, then the aspect and attitude variation of a satellite during a given pass may be enough to provide a valid ID acceptably often. Although Fig. \ref{figelectronics} shows the components of an autonomous beacon, some of the functions can be integrated with other spacecraft systems. 
A system that taps a small amount of power from the existing spacecraft bus and is controlled by surplus capacity of a processor or FPGA could be incorporated into a spacecraft during the design stage, requiring nothing more than the laser diodes, drivers, and diffusers to be added to the spacecraft hardware. However, an autonomous beacon as a separate package can be added to a spacecraft much later in the design cycle with a much lower engineering expense. The design of an error-correcting coding scheme that selects the ID numbers is beyond the scope of this paper. One possibility is a Bose-Chaudhuri-Hocquenghem (BCH) code, which for a 127-bit ID length would provide 4 million different IDs while being robust to up to 21 erroneous bits. Another possibility is a 32-of-128 code, which has less error correction capability but (due to the lower number of '1' bits) requires less power to transmit. \subsection{Ground Station Design} \label{subsect:mountdetectortradeoff} Because clear, dark skies with a line of sight to the satellite are required for a ground station to read the ELROI beacon, a global network of ground stations will be required to support the need for timely identification. It is likely that there will be diversity in the types of detectors and other characteristics of these ground systems. The major design trade-offs are among the capabilities and costs of the detector, the telescope, and its tracking system. The sensitivity of the ELROI ID measurement relies on the use of a ``photon-counting detector''. That is, each photon registered by the detector must produce a discrete signal whose timing can be measured with resolution better than the pulse width $\tau$. The detector will have a quantum efficiency, $\varepsilon_{\textrm{DQE}}$, with which it registers the photons that hit it, and a detector dark count rate, $R_{dark}$, which is the rate of false photon detections. These detectors can be single pixels, registering the time for a photon anywhere on a sensitive area, or position sensitive detectors that provide both the time and a location in the field of view for each photon. Popular single-pixel detectors include single-photon avalanche diodes (SPADs) and photomultiplier tubes (PMTs). Arrays of SPADs, position-sensitive readouts for PMTs, and other technologies such as microchannel plates (MCPs) can provide position resolution, from simple quadrant detectors up to full imaging detectors with hundreds or thousands of pixels on a side. (More familiar imaging sensors such as conventional CCDs, electron-multiplying CCDs, and CMOS imagers do not have the combination of time resolution and low read noise needed for reading an ELROI beacon.) A single-pixel detector requires that the light from the satellite be concentrated on the detector, without too much additional background light. This requires a telescope and mount that can precisely follow the track of a satellite across the sky. If the satellite is illuminated by the Sun, a conventional camera can detect the satellite and provide pointing corrections to the telescope mount, while a dichroic mirror directs photons at the beacon wavelength to a single-photon detector. If the satellite is not illuminated, the same conventional camera can provide detections of stars as they pass through the field of view, and keep the mount pointing at the satellite's \emph{predicted} location, based on orbital elements provided by previous observations. 
Some existing facilities could serve as suitable ground stations with the addition of a small amount of extra hardware and software---for example, satellite laser ranging (SLR) stations have the equipment and expertise to track LEO satellites and observe them with single-pixel detectors. These stations are widely spaced around the globe and are used for geodesy \cite{Pearlman2002}. A position-sensitive detector allows a less-accurate tracking system to keep the satellite in a larger field of view, with the satellite photons separated from the star and sky background in later processing stages. However, such detectors are substantially more complex and expensive than single-pixel detectors. The ground station we will use for initial testing employs a photon-counting camera developed at LANL \cite{Priedhorsky2005}, but equivalent systems are available commercially \cite{photonis, roentdek}. A larger ground station aperture and a higher $\varepsilon_{\textrm{DQE}}$ increase both source and background count rates. These cause a corresponding decrease in the time required to ID the beacon. The trade-offs discussed above may drive the system to different designs in different operating regimes; for example, LEO vs. GEO satellites. For observing LEO satellites, which have apparent speeds of order one degree per second across the sky, the optimum may be a relatively small telescope on a rapidly slewing mount, with an imaging detector used to reduce the required precision of the high-speed motion. For satellites at or near geosynchronous orbit, a larger telescope can be mounted on a slower mount and locked on to a slowly-moving target to feed a single-pixel detector. The larger telescope aperture and the higher $\varepsilon_{\textrm{DQE}}$ available in single-pixel detectors help to compensate for the inverse-square loss at the much longer range to GEO. Changes in technology will also change optimal design choices. Large imaging SPAD arrays are under active development (driven by the desire for flash-LIDAR systems for self-driving cars \cite{Hecht2017,McManamon2017}) and may make high-performance photon-counting cameras cheap. Superconducting nanowire single-photon detectors (SNSPDs) are becoming commercially available and may provide higher performance than other single-pixel detectors at an acceptable cost \cite{quantum-opus,photon-spot,id-quantique}. \subsection{Additional capabilities} The ID signal consists of low-duty-cycle periodic flashes confined to an in-phase time slice of width $\tau$ in each period $T$. This permits additional signals to be interleaved as extra flashes out of phase with the ID bits. The additional signal must have low enough spectral power at the period of the ID signal to avoid interference with the clock recovery, but this can be achieved by a careful choice of coding scheme. One possible use for this auxiliary data channel is to provide ``black box'' information for diagnosing and recovering from spacecraft anomalies. The operation of an autonomous ELROI unit will be unaffected by most failure modes of its host satellite, and so this can provide an independent low-bandwidth channel for small amounts of status information when other information is not available. The ELROI unit may have its own sensors, such as MEMS accelerometers, gyroscopes, and shock sensors. Even with no additional hardware, variations in the solar cell voltage can be analyzed to detect sudden changes in satellite attitude or spin state.
The cost in \$SWaP for implementing this optional capability is relatively low, and includes an increase in complexity of the pattern generator, any interfaces or internal sensors that are required for the status information, and an increase in overall power proportional to the additional light generated to transmit the additional bits. \section{Current results and future development} \label{sect:status} \subsection{Horizontal range test}\label{hrt} \begin{figure}[p!] \centering \includegraphics[width=0.75\textwidth]{carrier_detect.eps} \caption{Time required to detect the signal carrier period and phase. The left panel covers a range of apertures (f/2.8 to f/22) for both transmitter units (solid: Unit 1, dotted: Unit 2) in the HRT conditions, covering signal count rates of 0.1--40 counts/second. The right panel is the same data, scaled to 3.3 signal counts/s to simulate the case of a LEO CubeSat. The collapse of the scaled data to a common band gives us confidence in our link budget model and indicates that, after all the background rejection steps are implemented, the total number of signal counts is the dominant parameter in our ability to track the signal, at least in the CubeSat-equivalent case.} \label{figcarrier} \end{figure} \begin{figure}[p!] \centering \includegraphics[width=0.75\textwidth]{Bit_Error_Ratio.eps} \caption{Time required to achieve a confident ID read. The bit error ratio (BER) is shown as a function of observation duration in the HRT conditions (left), and scaled to simulate the LEO case (right). The BERs corresponding to a range of codeword error ratios (CERs, horizontal lines) are marked. As in Figure \ref{figcarrier}, the agreement among the scaled data measurements indicates that the dominant requirement for accurately reading the ID codeword is the number of signal counts.} \label{figcoderror} \end{figure} To validate link budget calculations and determine whether the ID can be read in conditions similar to a LEO system, the ELROI concept was demonstrated with a horizontal range test (HRT) over approximately 15 km. The bit error ratio (BER) and resulting codeword error ratio (CER, the probability of incorrectly identifying the ID) were measured under a range of observing and analysis conditions to simulate different observation durations and signal strengths. The results of the HRT show that the link budget calculations are accurate when applied to the test conditions, and that the transmitted ID number can be read with data rates comparable to a LEO system. \subsubsection{Horizontal range test conditions} The HRT used a receiver stationed 15 km away from two transmitter units (at altitudes of $\sim$2500 m for the transmitters and $\sim$2000 m for the receiver). Each transmitter consisted of a laser diode (638~nm), optical diffuser, and driver electronics. The lasers were driven in a 64-of-128 pattern ID with a $\tau=\SI{2}{\micro\second}$ pulse duration, a $T=\SI{500}{\micro\second}$ pulse interval, which repeats the ID every \SI{64}{\milli\second} (recall Figure \ref{figwaveform}). The two transmitter units encoded different IDs. The time-averaged effective isotropic radiated power (EIRP) in the direction of the receiver was measured to be 0.3 mW, corresponding to a peak EIRP of 150 mW after adjusting for the 1:500 duty fraction. Unit 2 was attenuated by a 13\%-transmission neutral density filter, while Unit 1 was left unfiltered. 
The receiver consisted of a LANL-developed photon-counting camera \cite{Priedhorsky2005,Thompson2013} with an $f = 400$~mm lens and adjustable aperture (up to f/2.8 = 143 mm diameter). The camera has a quantum efficiency of about 3.9\% at 638~nm and sub-nanosecond timing resolution. The receiver was equipped with a 10-nm bandpass filter centered on the transmitter laser wavelength (638 nm) to reduce background. The observations were made on a moonless night under variable clouds, beginning about 90 minutes after sunset. Observations were interrupted by rain and fog at times, but the data discussed here were taken while conditions were clear with no obvious obscuration. The optical path was within Los Alamos County, NM, USA (population 18,000), and light pollution was minimal. \subsubsection{Link budget verification} Link budget calculations (see Appendix B) predict that a $P_{ave}=\SI{2}{\milli\watt}$ ELROI beacon on a sunlit CubeSat in LEO, observed at 1000-km range with the Fenton Hill Observatory 14-inch telescope and LANL sensor, would produce 3.3 signal counts and 91 background counts per second. Applied to the horizontal range test, the same calculation predicts that Unit 1 at f/8 will produce 4.7 counts per second, and the attenuated Unit 2 will produce the same count rate at f/2.8. As discussed in Appendix B, the variable transmission of the atmosphere is not accounted for in these estimates; however, the HRT test range of 15 km was chosen to provide more than the equivalent atmospheric attenuation of a sea-level site observing at the zenith. The measured values from the HRT are 8.3 counts/s for Unit 1 at f/8 and 4.5 counts/s for Unit 2 at f/2.8. Measurements covered a range of apertures yielding count rates from about 0.1 counts/s (attenuated Unit 2 at f/22) to 40 counts/s (Unit 1 at f/2.8). The measured count rates for Unit 2 agree well with predictions, while count rates for Unit 1 are somewhat higher than predicted. This may be due to the effect of indirect light from the sides of Unit 1 illuminating the area around the transmitter and then reaching the receiver. In contrast, Unit 2 was housed in a box with a window covered by the neutral density filter. \subsubsection{Signal extraction and error rates} To decode the ID, the photons from the location of the transmitter are first selected from the camera field of view. In the analysis of the HRT this is done manually with a radius cut around each transmitter location. As discussed in Appendix A, an algorithm is used to detect the carrier frequency of 2 kHz (see Figure \ref{figcarrier}), and out-of-phase photons are rejected. The remaining photon detections are then stacked by the 128-bit pattern length to form a histogram of counts in the cyclic time bins corresponding to each bit. A threshold level is set to distinguish between 0 and 1 values. Due to errors from statistical fluctuations and background, this analysis will give some fraction of incorrect bit values (the bit error ratio, or BER), so identifying the correct ID requires searching the registry for the number that has the fewest discrepancies from the measured sequence. For the HRT, an ID is considered correct if it contains 12 or fewer bit errors; an ID with 13 or more bit errors is counted as a codeword error. Optimal coding schemes can correct more than 12 erroneous bits out of 128, and using a simple threshold cut decreases statistical power compared to more sophisticated methods, so this is a conservative criterion.
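As a rough consistency check of this criterion, the codeword error ratio implied by a given BER can be estimated from the binomial tail of the bit errors, treating the 128 bit decisions as independent with a common error probability (a simplification that ignores the registry search and any correlations):
\begin{verbatim}
from math import comb

def cer_from_ber(ber, m=128, max_errors=12):
    """Probability of more than max_errors wrong bits out of m,
    for independent bit errors at rate ber (illustrative model)."""
    return sum(comb(m, k) * ber**k * (1 - ber)**(m - k)
               for k in range(max_errors + 1, m + 1))

for ber in (0.037, 0.01):
    print(f"BER = {ber:5.3f}  ->  CER ~ {cer_from_ber(ber):.1e}")
\end{verbatim}
The values printed by this simple model are consistent with the BER-to-CER correspondence quoted in the next paragraph.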
Figure \ref{figcoderror} shows the bit error ratio (BER) as a function of observation time for the HRT conditions, and scaled by signal count rate to the predicted LEO system. The codeword error ratio (CER) is a function of the BER; as a result of using error-correcting codes, a BER of 3.7\% gives a CER of just one in a thousand while a BER of 1\% gives a CER below one in a billion. In the data of Figure \ref{figcoderror}, 55--95 seconds at the predicted LEO rate is required to achieve CER~$ = 10^{-3}$, and 105--157 seconds reduces the CER to $10^{-9}$. These observation times are reasonable for a single pass of a LEO satellite; therefore, we expect to be able to read the ELROI ID with high confidence in the predicted LEO system. The measured count rates in the HRT conditions also confirm that the link budget calculation methods are accurate. The HRT therefore successfully demonstrates the ELROI concept and lays the groundwork for the first on-orbit test, ELROI-PC104, described in the next section. \subsection{ELROI-PC104}\label{pc104} ELROI has been developed in a PC-104 form factor for integration into a CubeSat (Figure \ref{fig:prototypes}a). It has four laser diode emitters projecting through two diffusers on opposite sides of the CubeSat. One diffuser has two $\lambda=$ \SI{638}{\nano\meter} red laser diodes, at $P_{peak} = $ \SI{1}{\watt} and \SI{0.7}{\watt}, from two different manufacturers (Mitsubishi ML501P73 and Oclaro HL63193MG, respectively). The other diffuser has an identical \SI{1}{\watt} $\lambda = $ \SI{638}{\nano\meter} laser diode and a $\lambda = $ \SI{450}{\nano\meter} \SI{1.6}{\watt} blue laser diode (Thorlabs L450P1600MM). This unit relies on the spacecraft bus for power, and can be controlled by the spacecraft computer over an Inter-Integrated Circuit (I$^{2}$C) interface. Each laser emitter can be independently controlled with its own combination of pulse width $\tau$, pulse interval $T$, and ID number (up to 128 bits). The relative phases of the diodes can be controlled so that they do not flash simultaneously, even if they have the same $T$. In the absence of any commands from the spacecraft, the unit waits for 45 minutes after power is applied, then automatically powers up the three red laser diodes, each with its own ID and phase. The 45-minute delay before transmission is required for 1--3U CubeSats by the CubeSat standard \cite{CalPoly2009}. Only the red diodes are automatically activated because the blue laser requires a higher operating voltage and more power from the spacecraft bus. In this three-diode, red-only mode, the entire ELROI unit draws only \SI{56}{\milli\watt}. Many CubeSats fail after launch before providing their first telemetry \cite{Swartwout2013}. Although the failure rate is declining as the community gains greater experience, the lack of feedback can prevent anomalies from being diagnosed, making recovery or learning from the experience very difficult. The autonomous start-up of ELROI-PC104 will improve this situation. If the ELROI signal is detected, it will demonstrate that the spacecraft power systems are active. As the spacecraft processor boots up and starts activating systems, it can update the transmitted ID, allowing progress to be monitored. Any failures detected by the processor can also be transmitted by modifying the ELROI ID. Other status information, such as the number of radio commands received from the ground, can be transmitted in the same way.
The unit shown in Figure \ref{fig:prototypes}a has been integrated into the New Mexico Institute of Technology's NMTSat, a 3U CubeSat with launch expected in 2018 \cite{nmtsat}. \subsection{ELROI-UP}\label{elroiup} A unit that can be attached to any satellite, the ELROI Universal Prototype (ELROI-UP), is currently being designed and built (Figure \ref{fig:prototypes}b). This model includes its own solar cell and battery, and is capable of fully autonomous operation. It can accept commands or power from the host spacecraft, but does not require them. The unit can also be mounted in a passive mechanical structure and launched as a free-flying CubeSat in a 1/3U, 1/2U, or larger form factor. ELROI-UP supports up to four laser diodes. The first test flight units will be populated with four $\lambda = $ \SI{638}{\nano\meter} red laser diodes, at $P_{peak} = $ \SI{2.5}{\watt} (Mitsubishi ML562G84). Multiple codes and timings with different combinations of the emitters will be pre-programmed to allow testing at up to $P_{peak} = $ \SI{10}{\watt}. The first production run will be of six units at a marginal cost of \$4,000 each. The size is $9.8\times9.2\times\SI{3.1}{\centi\meter} = \SI{280}{\centi\meter^3}$, the mass is \SI{300}{g}, and the power (if externally supplied instead of provided by the solar cell) is \SI{50}{\milli\watt}. This \$SWaP qualifies it as a Low Resource Optical Identifier. We are willing to provide these beacons to appropriate launch opportunities. \subsection{ELROI-1.0}\label{elroi1} Test results from ELROI-PC104 and ELROI-UP will allow ELROI to be standardized and adopted by the entire space community. CubeSats are the class of space objects with the most pressing need for an identification system, and are also the most constrained by \$SWaP. Thus we need to develop the lightest, smallest, cheapest ELROI beacon that can be mass-produced and attached to any CubeSat with minimal integration cost. The size of the beacon will be dominated by its largest components, which in this case will be the power system. A given system power mandates the solar cell area needed to supply it, and the battery volume required to maintain power both through the orbital night and through extended periods when a randomly-oriented inert or non-attitude-controlled satellite might keep the solar cell pointed away from the Sun. A satellite that maintains attitudes that prevent good solar illumination (\emph{e.g.}, if the beacon is on a nadir-pointing face of the spacecraft) may require additional, separated, or re-oriented solar cells. For the LEO CubeSat beacon used as an example in the link budget calculation of Appendix B, $P_{ave}$ is \SI{2}{\milli\watt} of optical power. Assuming a 50\% conversion efficiency from electricity to light and allocating an additional \SI{1}{\milli\watt} for other components such as the pulse pattern generator, this gives a total system power of \SI{5}{\milli\watt}. We assume a photovoltaic cell that is 30\% efficient, is randomly oriented on the relevant timescales (50\% of the time it is pointed away from the Sun, and the remaining time has a slant-adjusted area that averages to 50\%), and is in eclipse for 50\% of its orbit. These benchmark values combined with a \SI{1.36}{\kilo\watt\per\m^2} solar constant give an average yield of \SI{5}{\milli\watt\per\cm^2} of photovoltaic cell area. LiFePO$_4$ cells store $\sim$\SI{0.22}{\watt\hour\per\cm^3} of energy (e.g., \cite{lifepo4}), so \SI{0.55}{\centi\meter^3} of this chemistry could power the beacon for an entire day on a full charge.
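A quick numerical restatement of this sizing arithmetic, using only the benchmark assumptions just listed (the variable names are ours and purely illustrative):
\begin{verbatim}
# Benchmark power-budget arithmetic for a minimal LEO CubeSat beacon.
solar_constant  = 136.0   # mW/cm^2  (1.36 kW/m^2)
pv_efficiency   = 0.30
facing_sun      = 0.5     # fraction of time the cell faces the Sun at all
slant_factor    = 0.5     # average projected-area factor when it does
sunlit_fraction = 0.5     # the orbit is in eclipse the other 50% of the time

yield_mW_per_cm2 = (solar_constant * pv_efficiency *
                    facing_sun * slant_factor * sunlit_fraction)

system_power_mW = 5.0     # 2 mW optical at 50% efficiency plus 1 mW overhead
cell_area_cm2   = system_power_mW / yield_mW_per_cm2
battery_cm3     = (system_power_mW / 1000 * 24) / 0.22   # one day at 0.22 Wh/cm^3

print(f"{yield_mW_per_cm2:.1f} mW/cm^2 average yield, "
      f"{cell_area_cm2:.1f} cm^2 of cell, {battery_cm3:.2f} cm^3 of battery")
\end{verbatim}
This reproduces the $\sim$\SI{5}{\milli\watt\per\cm^2} yield and the $\sim$\SI{0.55}{\centi\meter^3} battery volume quoted above, and implies roughly \SI{1}{\centi\meter^2} of photovoltaic cell.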
Thus, if the size of the beacon is dominated by the power system, this implies a size scale of $\sim$\SI{1}{\cm^3}. Although this minimal-size system would provide adequate power and signal under typical conditions, increasing the overall signal would allow the ID to be read over a wider range of conditions, such as shorter-duration passes at lower horizon angles and longer distances from the ground station. Under optimal conditions, an increased signal would allow the ground station to receive the ID code more rapidly, so that it can service more satellites per night. Allowing a larger beacon would also simplify the engineering challenges of manufacture. Experience gained from the early flights of ELROI-PC104 and ELROI-UP will help to refine the practical trade-offs between the cost and benefits of a larger, more powerful beacon. Figure \ref{fig:prototypes}c shows a mock-up of what such a beacon could look like in a $2 \times 2 \times \SI{0.5}{\centi\meter}$ package with $\SI{3}{\centi\meter^2}$ of photovoltaic cell. Attaching one or more of these to every CubeSat would be technically and economically feasible and, given the benefits that it provides to the spacecraft operator, likely to be widely adopted. \section{Conclusion} \label{sect:conclusion} ELROI is a simple, low-cost optical beacon that allows a modest ground station to provide unambiguous Space Object Identification. The beacon is cheap and small enough for use on CubeSats, and ground stations can be built around the world using commercially available technology. It is suitable for use on everything that goes into space to simplify the important task of Space Traffic Control. The ELROI concept has been verified in long-range ground tests that simulate the link budget and atmospheric path-length of a LEO system. A beacon has been integrated into a spacecraft for launch in the near future, and will be observed with an existing ground station. Multiple copies of an autonomous beacon that can be attached to other spacecraft with minimal integration are being built. There are no technical barriers to miniaturizing ELROI to the size of a thick postage stamp for easy application to even small satellites. We encourage other groups to develop their own ground systems or use existing facilities to track the ELROI beacons after launch. We can also provide prototype beacons to spacecraft developers for integration into their satellites. Based on the data from these initial flights, international standards can be developed to allow any ground station to identify an unknown satellite that has a beacon. Producing an optimal set of requirements on the beacon and ground stations will also require extensive physical modeling and simulation, vendor surveys, technology forecasts, economic and policy considerations, and value judgements to balance the competing interests that will eventually arise. The advantages that ELROI provides to spacecraft operators, combined with the low expenditure required to incorporate the beacons in satellites, mean that they can become widely used even in the absence of international norms or mandates. \section*{Appendix} \subsection{Data extraction and analysis} \label{subsect:datareading} This is a simplified description of the process of determining an ELROI beacon's ID from observations. A more detailed discussion of statistical analysis and computational optimization techniques is beyond the scope of this paper. The telescope and detector acquire the data from the beacon as a list of time-tagged photon detection events.
To go from that data to an ID is a multi-step process: \begin{enumerate} \item Detect and recover clock phase and period. \item Extract photon count values for each ID bit. \item Determine the ID that best fits the data, and an estimate of its reliability. \end{enumerate} If the precise clock period of the beacon, $T$, is known, then the fractional phase of each photon, in the range $[0,1)$ cycles, is $$\phi_{i} = \textrm{frac}(t^{\prime}_{i} / T)$$ where $t^{\prime}_{i}$ is the time at which photon $i$ is detected, adjusted for light-travel time from the satellite's location, and $\textrm{frac}(x) = x - \lfloor {x} \rfloor$ is the fractional part function. The true clock phase may be determined by generating a histogram of $\phi_{i}$ and finding the maximum at a value $\phi_{\textrm{peak}}$, with a width $\tau / T$. In practice, even if a nominal value of $T_{\textrm{nom}}$ is standardized (so that it can be predicted for an unidentified satellite), the true value of $T$ will be in an uncertainty range $T_{\textrm{nom}} (1-e_T) < T < T_{\textrm{nom}} (1+e_T)$ where $e_T$, the fractional error tolerance in $T$, is due to tolerances in the beacon's clock. For a crystal oscillator, $e_T \sim 50 \times 10^{-6}$ or 50 ppm is a common value. In addition, there may be drift in $T$ during the course of the observation, primarily due to temperature variation. This variation in $T$ can be handled by searching over the expected range for the value that produces the best peak in the histogram of $\phi_{i}$. To prevent loss of sensitivity, this search will require a granularity of $\delta T \lesssim \tau T / { \Delta t}$ where $\Delta t$ is the total duration of the data analyzed. For a \SI{10}{\second} observation with $\tau =$ \SI{1}{\micro\second}, $T_{\textrm{nom}} =$ \SI{1}{\milli\second}, a $\delta T = \SI{1e-10}{\second}$ search over a $\pm 50$ ppm range ($T_{\textrm{nom}} \pm \SI{5e-8}{\second}$) will require 1000 trial histograms. This can be handled with good computational efficiency by techniques such as the Fast Folding Algorithm \cite{Staelin1969} (FFA), which allows a specified frequency range to be searched with a specified frequency spacing to look for features that are narrow in phase space. The FFA can be extended to allow efficient searches over slow changes in frequency during the observation to compensate for temperature drift. The FFA is fast enough that detecting and recovering the clock phase will take an insignificant amount of computer time compared to the duration of the observation. When a trial histogram shows a significant peak, small adjustments to $T$, drift in $T$, and the phase offset can often improve the peak value if the beacon photons were originally distributed among adjacent phase bins. This `peaking up' is an acceptable way of refining the clock solution, with the understanding that the significance of the result reflects a larger number of trials. Note that the basic version of the much more familiar Fast Fourier Transform (FFT) is not computationally efficient for this search. This is because the spectral energy of the ELROI signal is spread out over many harmonics (from below a \si{\kilo\hertz} to above a \si{\mega\hertz}), the FFT must be calculated over all frequencies from zero to the highest frequency of interest, the FFT's frequency spacing is fixed and inadequate, and the FFT cannot easily handle slowly-varying frequencies.
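A brute-force version of this folding search can be sketched as follows (illustrative only; the FFA and the drift handling described above are what would be used in practice, and the function and variable names are ours):
\begin{verbatim}
import numpy as np

def find_clock(t, T_nom=1e-3, e_T=50e-6, tau=1e-6):
    """Scan trial periods around T_nom and return the one whose phase
    histogram has the sharpest peak.  t is the array of light-travel-
    corrected photon arrival times in seconds.  Illustrative sketch."""
    span = t.max() - t.min()
    dT = tau * T_nom / span                   # required period granularity
    n_trials = int(np.ceil(2 * e_T * T_nom / dT)) + 1
    n_bins = int(round(T_nom / tau))
    best = (None, None, -1)                   # (T, phase of peak, peak counts)
    for T in np.linspace(T_nom * (1 - e_T), T_nom * (1 + e_T), n_trials):
        hist, edges = np.histogram(np.mod(t / T, 1.0),
                                   bins=n_bins, range=(0.0, 1.0))
        k = int(hist.argmax())
        if hist[k] > best[2]:
            best = (T, edges[k], int(hist[k]))
    return best

# Synthetic check: photons from a beacon whose true period is 20 ppm high.
rng = np.random.default_rng(1)
pulses = rng.integers(0, 10_000, size=200) * (1e-3 * (1 + 20e-6))
t = pulses + rng.uniform(0.0, 1e-6, size=200)   # photons inside 1-us pulses
print(find_clock(t))
\end{verbatim}
For the 10-second example above this scan amounts to roughly the 1000 trial histograms mentioned in the text; the FFA reaches the same result far more efficiently and can also follow slow drifts in $T$.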
After the clock characteristics are determined and refined, the photon detections can be divided into those whose detection times are in phase and those that are out of phase with the beacon's emission. The out-of-phase photons are used to determine the background rate, but can be otherwise discarded. The in-phase photons, which include both beacon and background photons, are used in further analysis. Each in-phase photon detection is given a bit index $0 \leq j^{\prime} < m $, which indicates which bit it corresponds to in the repeating $m$-bit ID, with the prime ($^{\prime}$) to indicate that the `first' bit of the ID as found in the registry does not necessarily correspond to $j^{\prime} = 0$. $C_{j^\prime}$, the number of detected photons for bit index $j^{\prime}$, is therefore a measurement of whether the corresponding bit of the ID is a one or a zero. This will tend to be bimodally distributed, with the expectation value for bits that are zero, $\langle^{0}N\rangle$, being estimable from the background rate, and the expectation value for bits that are one having an additional signal: $\langle^{1}N\rangle = \langle^{0}N\rangle + \langle^{1}S\rangle$. The simplest way to match the data to an ID is to assign a threshold cut: \[ b_{j^\prime} = \begin{cases} 1, & \text{if } C_{j^\prime} \geq C_{\textrm{thresh}}\\ 0, & \text{otherwise} \end{cases} \] with $C_{\textrm{thresh}}$ chosen to split the $b_{j^\prime}$ values into the expected number of zeros and ones. The set of $b_{j^\prime}$ values is then matched to each individual ID in the catalog of all currently flying ELROI units, with each of the $m$ bit positions being used as $j^{\prime} = 0$. The ID that matches the set of $b_{j^\prime}$ with the fewest discrepancies is the most likely ID to correspond to the observed ELROI beacon. If all other IDs have substantially more discrepancies, then that indicates with high confidence that the identification is correct. There are many possible refinements to this algorithm. Improved clock recovery algorithms that take advantage of the sparse nature of the data (the relatively small number of detected photons) can be developed. Photon detections can be weighted based on the measured variation in the background and expected signal level. The threshold cut may be replaced by a probabilistic analysis with a continuous rather than binary 0 vs 1 determination. The confidence level of an identification can be rigorously calculated based on the counting statistics of the photon detections. These refinements significantly improve the beacon detectability and are straightforward to implement, but are beyond the scope of this paper. \subsection{Link budget calculation for a sample design} \label{subsect:linkbudget} The link budget gives the rate of signal photon detections (\si{photons\per\second}) expected under various conditions. This rate depends on the optical power emitted by the beacon, the distance from the beacon to the ground station receiver, the area of the collecting optics, the transmission of the atmosphere and optical components, and the quantum efficiency of the photon-counting sensor. The values in the following example calculation are based on the LANL ground station at Fenton Hill, but would be applicable to most mid-sized telescopes and many different types of single-photon detectors. An estimate of background count rates in several scenarios is also given.
The count rate in \si{photons\per\second} at the receiver is $$ R = P_{\textrm{avg}} \times 1/\Omega \times A/r^2 \times T_{\textrm{tot}} \times \varepsilon_{\textrm{DQE}}/E_{\gamma} $$ $P_{\textrm{avg}}$ is the average power of the beacon, which depends on the peak power $P_{\textrm{peak}}$ and the duty cycle $d$. $\Omega$ is the solid angle of the beacon emission, and can be as large as 2$\pi$. $A$ is the collecting area of the receiver optics, and $r$ is the distance from the beacon to the receiver. $T_{\textrm{tot}}$ is the total optical transmission, including the spectral filter transmission $T_{\textrm{f}}$ and the atmosphere transmission $T_{\textrm{atm}}$. Atmospheric transmission can change rapidly and depends on so many factors (atmospheric density, humidity, pollution, zenith angle, wavelength, etc.) that a detailed model is beyond the scope of this work. Thus, as in \S \ref{hrt}, $T_{\textrm{atm}}$ is ignored for the purposes of a link budget estimate, but it can be expected to contribute 1-3 dB of additional loss depending on conditions. $\varepsilon_{\textrm{DQE}}$ is the quantum efficiency of the photon-counting detector, which ranges from 1\% (some PMTs) to 75\% (some SPADs), and has been measured to be 3.9\% for the LANL sensor. $E_{\gamma}$ is the energy per photon at the beacon wavelength. Typical values for the link budget parameters are shown in Table \ref{table:link-budget-parameters}, and the calculation is worked out in Table \ref{table:link-budget-calc}, giving a nominal rate of 3.3 \si{photons\per\second} from LEO. \input{parameters-table.tex} \input{beacon-table.tex} When the host satellite is in sunlight, reflected light will be the dominant source of background photons \footnotemark . The amount of reflected sunlight depends on a combination of the satellite's size and albedo, and will generally be higher for larger satellites. This background also depends on the spacecraft attitude and the angle between the Sun and the observer, so for simplicity the spacecraft is approximated by an equivalent sphere at a 90\textdegree~(``half moon'') phase angle. The background rate for a 10-\si{\centi\meter} sunlit CubeSat is worked out in Table \ref{table:background}, giving a nominal rate of 0.36 background photons/second after filtering. \footnotetext{The contribution from sky background is usually small, but depends on the local sky surface brightness and the field of view necessary for tracking. Assuming a field of view of 1 arcminute and the other parameters given in Table \ref{table:link-budget-parameters}, the sky background count rate ranges from about 40 \si{photons\per\second} with no moonlight (sky surface brightness = 21.6 \si{magnitude\per}arcsecond$^2$) to about 1000 \si{photon\per\second} at the full moon (sky surface brightness = 18 \si{magnitude\per}arcsecond$^2$), before the phase cut \cite{Spoelstra2009}. (Note that sky surface brightness values are measured in the V spectral band; scattering of light in the atmosphere is biased toward the blue end of the visible spectrum, so count rates at 638 nm will be lower.) After the phase cut, these rates would be reduced to about 0.2 \si{photons\per\second} and 4 \si{photons\per\second}, respectively. 
The background count rate may increase dramatically if bright stars enter the field of view, but these events are isolated in time and can be removed from the data.} \input{background-table.tex} The predicted signal rate of 3.3 \si{photons\per\second} and background rate of 0.36 \si{photons\per\second} give a sufficient link budget to read the ID reliably in a single pass---as discussed in \S\ref{hrt} and shown in Figure \ref{figcoderror}, 55 to 95 \si{\second} of observation at the predicted LEO rate is required to reduce the CER (Codeword Error Ratio, the probability of incorrectly identifying the code) to one in a thousand, and 105 to 157 \si{\second} reduces the CER to one in a billion. \section*{Acknowledgments} Initial work on this project was supported by the US Department of Energy through the Los Alamos National Laboratory (LANL) Laboratory Directed Research and Development program as part of the IMPACT (Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking) project. Further work was supported by the Richard P. Feynman Center for Innovation, LANL. ELROI hardware and software were developed at LANL by Louis Borges, Richard Dutch, David Hemsing, Joellen Lansford, and Charles Weaver, with thermal analysis by Alexandra Hickey, Lee Holguin, and Zachary Kennison. The Horizontal Range Test was assisted by Amanda Graff, David Graff, Mike Rabin, and David Thompson. We thank the New Mexico Tech CubeSat team, including Anders Jorgensen, Sawyer Gill, Samantha Young, and Aaron Zucherman, for providing the first opportunity to fly an ELROI unit. We thank Roberta Ewart of the Space and Missiles Systems Center for her encouragement and advocacy. \section*{References}
\chapter{Introduction} Thirteen is the number of independent parameters required to describe the fermion sector of the standard model: nine masses, three mixing angles, and a CP-violating phase. Although such a plethora of arbitrary parameters is usually regarded as a weakness, we could instead view the situation as an opportunity to reach beyond the standard model. Because the fermion parameters can take on arbitrary values in the standard model, any prediction of these parameters can only come from beyond the standard model. Conversely, their observed experimental values could provide a clue to new physics. \REF\rrSUfive{ H.~Georgi and S.~Glashow, \sl Phys. Rev. Lett. \bf 32 \rm (1974) 438; \nextline M.~S.~Chanowitz, J.~Ellis, and M.~K.~Gaillard, \sl Nucl. Phys. \bf B 128 \rm (1977) 506; \nextline A.~Buras, J.~Ellis, M.~K.~Gaillard, and D.~Nanopoulos, \sl Nucl. Phys. \bf B 135 \rm (1978) 66. } \REF\rrGJ{ H.~Georgi and C.~Jarlskog, \sl Phys. Lett. \bf B 86 \rm (1979) 297. } \REF\rrHRR{ J.~Harvey, P.~Ramond, and D.~Reiss, \sl Phys. Lett. \bf B 92 \rm (1980) 309; \sl Nucl. Phys. \bf B 199 \rm (1982) 223. } \REF\rrDisc{ S.~Weinberg, \sl Trans. N. Y. Acad. Sci. \bf 38 \rm (1977) 185; \nextline F.~Wilczek and A.~Zee, \sl Phys. Lett. \bf B 70 \rm (1977) 418. } \REF\rrFri{ H.~Fritzsch, \sl Phys. Lett. \bf B 70 \rm (1977) 436; \bf 73 \rm (1978) 317; \bf 166 \rm (1986) 423. } A step in this direction has been taken with the discovery of various phenomenological relations among fermion masses and mixing angles, which could be viewed as modern-day Balmer formulae. Two types of relations among fermion parameters have been explored. The first type links the masses of fermions within the same generation to one another. Such relations result naturally from grand unified theories (GUTs) when the fermions belong to a common grand unified representation and couple to a single Higgs field; group theory then dictates a relation between their masses [\rrSUfive--\rrHRR]. The second type of relation connects Cabibbo-Kobayashi-Maskawa (CKM) matrix elements with ratios of masses of fermions in different generations. These relations arise naturally when certain entries of the Yukawa matrices vanish, perhaps as a result of discrete symmetries [\rrDisc, \rrFri]. These various relations among Yukawa couplings are presumably a consequence of new physics, and therefore hold at the energy scale of the new physics, \eg, the grand unification scale. But Yukawa couplings evolve in accord with the renormalization group (RG) equations, so relations among them that apply at one scale will not necessarily hold at another scale. Therefore, before they can be compared with low-energy data, GUT-scale relations must be corrected to account for RG~ running. \REF\rrCEL{ T.~P.~Cheng, E.~Eichten, and L.-F.~Li, \sl Phys. Rev. \bf D 9 \rm (1974) 2259. } \REF\rrIKKT{ K.~Inoue, A.~Kakuto, H.~Komatsu, and S.~Takeshita, \sl Prog. Theor. Phys. \bf 67 \rm (1982) 1889. } \REF\rrIL{ L.~Iba\~nez and C.~Lopez, \sl Nucl. Phys. \bf B 233 \rm (1984) 511. } \REF\rrMP{ E.~Ma and S.~Pakvasa, \sl Phys. Lett. \bf B 86 \rm (1979) 43; \sl Phys. Rev. \bf D 20 \rm (1979) 2899; \nextline K.~Sasaki, \sl Zeit. Phys. \bf C 32 \rm (1986) 149; \nextline K.~S.~Babu, \sl Zeit. Phys. \bf C 35 \rm (1987) 69. } \REF\rrFla{ H.~Arason, D.~Casta\~no, B.~Keszthelyi, S.~Mikaelian, E.~Piard, P.~Ramond, and B.~Wright, \sl Phys. Rev. Lett. 
\bf 67 \rm (1991) 2933; \nextline P.~Ramond, Florida preprint UFIFT-92-4; \nextline H.~Arason, D.~Casta\~no, E.~Piard, and P.~Ramond, Florida preprint UFIFT-92-8. } \REF\rrKLN{ S.~Kelley, J.~Lopez, and D.~Nanopoulos, \sl Phys. Lett. \bf B 274 \rm (1992) 387. } \REF\rrDHR{ S.~Dimopoulos, L.~J.~Hall, and S.~Raby, \sl Phys. Rev. Lett. \bf 68 \rm (1992) 1984; \sl Phys. Rev. \bf D 45 \rm (1992) 4192. } \REF\rrBBHZ{ V.~Barger, M.~S.~Berger, T.~Han, and M.~Zralek, \sl Phys. Rev. Lett. \bf 68 \rm (1992) 3394. } \REF\rrGiu{ G.~Giudice, \sl Mod. Phys. Lett. \bf A 7 \rm (1992) 2429. } Most of the running of the Yukawa couplings is induced by the gauge couplings. For example, group-theoretic relations between quark and lepton masses at the GUT scale are greatly modified at low-energy scales because quarks and leptons have different gauge couplings and their masses therefore run differently [\rrSUfive--\rrHRR]. If they are sufficiently large, however, Yukawa couplings themselves induce further running [\rrCEL], and therefore further modifications of relations between masses [\rrIKKT, \rrIL]. Yukawa couplings also induce running of the CKM matrix elements [\rrMP], which are invariant under gauge coupling-induced running. The effect of the top quark Yukawa coupling on fermion mass and mixing angle relations in supersymmetric theories was recently investigated in refs.~[\rrFla--\rrGiu]. \REF\rrBBO{ V.~Barger, M.~S.~Berger, and P.~Ohmann, Madison preprint MAD/PH/711. } \REF\rrADHR{ G.~Anderson, S.~Dimopoulos, L.~J.~Hall, and S.~Raby, preprint OHSTPY-HEP-92-018. } \REF\rrBS{ K.~S.~Babu and Q.~Shafi, Bartol preprint BA-92-70. } In the standard model, the Yukawa couplings of fermions other than the top quark are too small to cause significant running. In supersymmetric theories, however, the Yukawa couplings of the other third generation fermions, the $b$ quark and $\tau$ lepton, may be comparable to the $t$ quark Yukawa coupling, even though their masses are much smaller. This occurs when the expectation value of the Higgs field to which $b$ and $\tau$ are coupled is much less than that of the Higgs field to which $t$ is coupled; that is, when $ \tan\beta$, the ratio of expectation values of the two Higgs fields, is large. In this regime, all three third generation fermions may cause significant running. This case has also been investigated recently by several authors [\rrFla, \rrKLN, \rrBBO--\rrBS], who solved the RG~ equations numerically. In this paper, we would like to calculate the effect of RG~ running induced by the entire third generation of fermions {\it analytically}. An analytic result would have several advantages over a numerical solution. In addition to enhancing intuition about the effects of Yukawa coupling-induced running, it would allow one to see transparently how changes in the input parameters affect the predictions, without having to re-run the numerical routines for each new set of data. It would also simplify the error analysis. Although it is not possible to solve the RG~ equations exactly, we introduce an approximation (good to within a few percent) that includes the running induced by all three third generation fermions and that allows an analytic solution. We then use this approximate solution to determine the effects of RG~ running on several different sets of mass and mixing angle relations, and on the predictions that follow from those relations. 
The RG~ running of the Yukawa couplings is logically independent of the fermion mass and mixing angle relations because the latter arise from new physics at the GUT scale whereas the running from the GUT scale to the low-energy scale only depends on the particle spectrum below the GUT scale. {}From the point of view of the RG~ equations, the only role of the fermion relations is to provide boundary conditions at the GUT scale. Because of this, we will be able to analyze the RG~ running of the Yukawa couplings independently of any particular set of fermion relations. One approach is to evolve the matrices of Yukawa couplings down to the low-energy scale, and then diagonalize them to find the fermion masses and mixing angles. Many degrees of freedom of the Yukawa matrices are not physical, however, because of the freedom to perform unitary redefinitions of the fermion bases. Therefore, we instead begin with the RG~ equations for the smaller set of physical parameters, {\it viz.}, the fermion masses, mixing angles, and CP-violating phase. This simplifies the computational task by reducing the number of equations, and allows us to deal only with physical quantities throughout. In sect.~2, we review the one-loop RG~ equations for fermion masses and CKM matrix elements in the minimal supersymmetric standard model. Adopting a non-standard parametrization of the CKM matrix, we obtain explicit RG~ equations for the mixing angles and CP-violating phase. In sect.~3, we introduce an approximation that allows us to solve the RG~ equations analytically, including the effects of the entire third generation of fermions. We then analyze in sect.~4 the RG~ effects on three different sets of fermion mass and mixing angle relations that might result from new physics at the GUT scale. Sect.~5 contains our conclusions. \chapter{Running Masses and Mixing Angles} In this section, we review the renormalization group running of fermion masses and CKM matrix elements. We also include a discussion of phase choices for the CKM matrix so that we can obtain explicit RG~ equations for the mixing angles which parametrize it. \REF\rrSUSY{ J.~Ellis, S.~Kelley, and D.~Nanopoulos, \sl Phys. Lett. \bf B 249 \rm (1990) 441; \nextline U.~Amaldi, W.~de Boer, and H.~F\"urstenau, \sl Phys. Lett. \bf B 260 \rm (1991) 447; \nextline P.~Langacker and M.~Luo, \sl Phys. Rev. \bf D 44 \rm (1991) 817. } \REF\rrPR{ B.~Pendleton and G.~G.~Ross, \sl Phys. Lett. \bf B 98 \rm (1981) 291. } As noted in the introduction, the renormalization group analysis is independent of the new physics responsible for relations between fermion masses and mixing angles, and can therefore be applied to different sets of relations. We assume, however, that the relations result from some grand unified theory, and therefore that the three gauge couplings meet at a single scale $ {\overline\scale} $. This can be achieved in the context of the minimal supersymmetric standard model, with the supersymmetry breaking scale $\scale_{\rm susy} $ between 100 GeV and 10 TeV [\rrSUSY]. We will assume that this framework describes physics up to the GUT scale $ {\overline\scale} $. The one-loop RG~ equations for the gauge couplings are $$ 16\pi^2 {\d \over \d t} \ln g_i = b_i g_i^2 , \qquad\qquad t = \ln \mu, \eqn\eeGaugeRGE $$ and have solutions $$ {1 \over g^2_i (\mu)} = {1 \over \overline{g} ^2} - {b_i\over 8 \pi^2} \ln \left( \mu \over {\overline\scale} \right). 
\eqn\eeGaugeSoln $$ Between $\scale_{\rm susy} $ and the grand unified scale $ {\overline\scale} $, the coefficients are given by $$ (b_1, b_2, b_3) = \left( {\textstyle {33\over 5}, 1, -3 } \right). \eqn\eeGaugeRGECoeff $$ We choose $\scale_{\rm susy} = 170$ GeV for convenience (close to the top quark mass); our results will be rather insensitive to the exact value of $\scale_{\rm susy} $. Using [\rrBBO] $$ {g_1^2 (\scale_{\rm susy} ) \over 4\pi} = {1\over 58.5}, \qquad\qquad {g_2^2 (\scale_{\rm susy} ) \over 4\pi} = {1\over 30.1}, \qquad\qquad {\rm for~} \scale_{\rm susy} = 170 {\rm~GeV} , \eqn\eeGaugeBC $$ we obtain $$ { \overline{g} ^2 \over 4\pi } = {1\over 25.0}, \qquad\qquad {\overline\scale} = 1.2 \times 10^{16} {\rm~GeV} . \eqn\eeGaugeGUT $$ Throughout this paper, an overline denotes quantities evaluated at the GUT scale. In the minimal supersymmetric standard model, the charge ${2\over 3}$ quarks couple to a Higgs field with expectation value $ (v / \sqrt 2) \sin \beta$, where $ v = 246$ GeV and $\beta$ is arbitrary; the charged leptons and charge $-{1\over 3}$ quarks couple to a Higgs field with expectation value $ (v / \sqrt 2) \cos \beta$. The fermion masses come from the Yukawa couplings $$ L_{\rm Yuk} = \left( {v\over\sqrt 2} \sin \beta\right) \overline{U}_L Y_U U_R + \left( {v\over\sqrt 2} \cos \beta\right) \overline{D}_L Y_D D_R + \left( {v\over\sqrt 2} \cos \beta\right) \overline{E}_L Y_E E_R + {\it h.c.} , \eqn\eeYukLag $$ where $$ U = \pmatrix{ u \cr c \cr t \cr }, \qquad \qquad D = \pmatrix{d \cr s \cr b \cr }, \qquad \qquad E = \pmatrix{e \cr \mu \cr \tau \cr }, \eqn\eeFermVec $$ are the fermion fields, and $ Y_U $, $ Y_D $, and $ Y_E $ are arbitrary complex $ 3 \times 3 $ matrices. (In this paper, we take the neutrinos to be massless.) These Yukawa matrices obey the one-loop supersymmetric RG~ equations [\rrIKKT] $$ \eqalign{ 16 \pi^2 {\d \over \d t} Y_U & ~=~\left[ -~c^u_i g_i^2 + \Tr ( 3 Y_U { Y_U }^ \dagger ) + 3 Y_U { Y_U }^ \dagger + Y_D { Y_D }^ \dagger \right] Y_U , \cr 16 \pi^2 {\d \over \d t} Y_D & ~=~\left[ -~c^d_i g_i^2 + \Tr ( 3 Y_D { Y_D }^ \dagger + Y_E { Y_E }^ \dagger ) + 3 Y_D { Y_D }^ \dagger + Y_U { Y_U }^ \dagger \right] Y_D , \cr 16 \pi^2 {\d \over \d t} Y_E & ~=~\left[-~c^e_i g_i^2 + \Tr ( 3 Y_D { Y_D }^ \dagger + Y_E { Y_E }^ \dagger ) + 3 Y_E { Y_E }^ \dagger \right] Y_E , \cr } \eqn\eeYukRGE $$ where $$ (c^u_1, c^u_2, c^u_3) = \left( {\textstyle {13\over 15}, 3, {16\over 3} } \right), \qquad (c^d_1, c^d_2, c^d_3) = \left( {\textstyle {7\over 15}, 3, {16\over 3} } \right), \qquad (c^e_1, c^e_2, c^e_3) = \left( {\textstyle {27\over 15}, 3, 0} \right) , \eqn\eeYukRGECoeff $$ between $\scale_{\rm susy} $ and $ {\overline\scale} $. Not all the parameters of the Yukawa matrices are physical. Under an arbitrary unitary transformation on the fermion bases, $ F_L \to {L_F} F_L $, $ F_R \to {R_F} F_R $ (where $F = U$, $D$, $E$), the Yukawa matrix undergoes a bi-unitary transformation, $ Y_F \to {L_F} ^ \dagger Y_F {R_F} $, and the charged current becomes off-diagonal, with mixing matrix $ {L_U} ^ \dagger {L_D} $. We may perform scale-dependent unitary transformations $ {L_F} (\mu)$ and $ {R_F} (\mu)$ on the fermion bases so as to diagonalize the Yukawa matrices at each scale. 
Thus $$ \Yhat_F (\mu) = {L_F} ^ \dagger (\mu) Y_F (\mu) {R_F} (\mu), \qquad\qquad F = U, D, E, \eqn\eeYukDiag $$ where $ \Yhat_F $ denotes the diagonalized Yukawa matrix, and $$ V (\mu) = {L_U} ^ \dagger (\mu) {L_D} (\mu) \eqn\eeCKMDef $$ is the corresponding scale-dependent CKM matrix. We now derive RG~ equations for the physically relevant quantities: the Yukawa eigenvalues $ \Yhat_F (\mu)$ and the CKM matrix $V(\mu)$ [\rrMP]. The transformations on the right-handed fields are irrelevant to the CKM matrix, so we begin by writing $ \Yhat_F $ in terms of $ {L_F} $ only $$ \Yhat_F ^2 (\mu) = {L_F} ^ \dagger (\mu) Y_F (\mu) { Y_F }^ \dagger (\mu) {L_F} (\mu), \qquad\qquad F = U, D, E. \eqn\eeYukSqDiag $$ Differentiating eq.~\eeYukSqDiag, and using eqs.~\eeYukRGE, we obtain $$ \eqalign{ {\d \over \d t} \left( \Yhat_U ^2 \right) & = [ \Yhat_U ^2, {L_U} ^ \dagger \dot{L}_U ] + {1\over 16\pi^2} \left[ -~2 c^u_i g_i^2 \Yhat_U ^2 + 6 \Tr ( \Yhat_U ^2) \Yhat_U ^2 + 6 \Yhat_U ^4 + V \Yhat_D ^2 V^ \dagger \Yhat_U ^2 + \Yhat_U ^2 V \Yhat_D ^2 V^ \dagger \right] , \crr {\d \over \d t} \left( \Yhat_D ^2 \right) & = [ \Yhat_D ^2, {L_D} ^ \dagger \dot{L}_D ] + {1\over 16\pi^2} \Big[ -~2 c^d_i g_i^2 \Yhat_D ^2 + 2 \Tr ( 3 \Yhat_D ^2 + \Yhat_E ^2 ) \Yhat_D ^2 + 6 \Yhat_D ^4 + V^ \dagger \Yhat_U ^2 V \Yhat_D ^2 + \Yhat_D ^2 V^ \dagger \Yhat_U ^2 V \Big] , \crr {\d \over \d t} \left( \Yhat_E ^2 \right) & = [ \Yhat_E ^2, {L_E} ^ \dagger \dot{L}_E ] + {1\over 16\pi^2} \left[ -~2 c^e_i g_i^2 \Yhat_E ^2 + 2 \Tr ( 3 \Yhat_D ^2 + \Yhat_E ^2 ) \Yhat_E ^2 + 6 \Yhat_E ^4 \right], \cr } \eqn\eeYukSqRGE $$ where $ \dot{L}_F = ( {\rm d} {L_F} / {\rm d} t )$. The commutator $ [ \Yhat_F ^2, {L_F} ^ \dagger \dot{L}_F ] $ has vanishing diagonal elements because $ \Yhat_F ^2$ is diagonal. Thus the RG~ equations for the Yukawa eigenvalues $ y _\alpha$ follow immediately from the diagonal entries of eqs.~\eeYukSqRGE. The remaining entries of eqs.~\eeYukSqRGE~yield equations for the {\it off-diagonal} elements of $ {L_U} ^ \dagger \dot{L}_U $ and $ {L_D} ^ \dagger \dot{L}_D $, as long as there are no degeneracies among the quark masses: $$ \eqalign{ ( {L_U} ^ \dagger \dot{L}_U )_{\alpha\beta} &= {1\over 16\pi^2} \sum_{\gamma=d,s,b} { y _\beta^2 + y _\alpha^2 \over y _\beta^2 - y _\alpha^2} V_{\alpha\gamma} \; y _\gamma^2 \; {V^\dag} _{\gamma\beta}, \qquad \alpha \neq \beta , \qquad \alpha, \beta = u,c,t, \cr ( {L_D} ^ \dagger \dot{L}_D )_{\alpha\beta} & = {1\over 16\pi^2} \sum_{\gamma=u,c,t} { y _\beta^2 + y _\alpha^2 \over y _\beta^2 - y _\alpha^2 } {V^\dag} _{\alpha\gamma} \; y _\gamma^2 \; V_{\gamma\beta} , \qquad \alpha \neq \beta, \qquad \alpha, \beta = d,s,b. \cr } \eqn\eeOffdiagUnitRGE $$ The {\it diagonal} elements of $ {L_U} ^ \dagger \dot{L}_U $ and $ {L_D} ^ \dagger \dot{L}_D $ are {\it not} determined by eqs.~\eeYukSqRGE. It is easy to see why. Equation \eeYukSqDiag~determines $ {L_U} $ and $ {L_D} $ only up to right multiplication by a diagonal matrix of (scale-dependent) phases. These undetermined phases contribute arbitrary imaginary functions to the diagonal elements of $ {L_U} ^ \dagger \dot{L}_U $ and $ {L_D} ^ \dagger \dot{L}_D $. (The off-diagonal elements are unambiguously determined because they receive no contribution from the phases.)
We can, however, {\it choose} the phases to make the diagonal elements of $ {L_U} ^ \dagger \dot{L}_U $ and $ {L_D} ^ \dagger \dot{L}_D $, which are manifestly imaginary, vanish: $$ ( {L_U} ^ \dagger \dot{L}_U )_{\alpha\alpha} = ( {L_D} ^ \dagger \dot{L}_D )_{\alpha\alpha} = 0 \qquad {\rm ~by~an~appropriate~choice~of~phases.} \eqn\eeDiagUnitRGE $$ The RG~ equations for the CKM matrix elements \eeCKMDef~are then [\rrMP] $$ \eqalign{ 16 \pi^2 {\d \over \d t} V_{\alpha\beta} & = 16 \pi^2 \left( V {L_D} ^ \dagger \dot{L}_D - {L_U} ^ \dagger \dot{L}_U V \right)_{\alpha\beta} \cr & = \sum_{\gamma=u,c,t} \sum_{\delta=d,s,b \atop \delta \neq \beta} { y _\beta^2 + y _\delta^2 \over y _\beta^2 - y _\delta^2 } V_{\alpha\delta} {V^\dag} _{\delta\gamma} \; y _\gamma^2 \; V_{\gamma\beta} + \sum_{\gamma=u,c,t \atop \gamma \neq \alpha} \sum_{\delta=d,s,b } { y _\alpha^2 + y _\gamma^2 \over y _\alpha^2 - y _\gamma^2 } V_{\alpha\delta} \; y _\delta^2 \; {V^\dag} _{\delta\gamma} V_{\gamma\beta}, \cr } \eqn\eeCKMRGEone $$ as long as we choose the phases to guarantee eqs. \eeDiagUnitRGE. We now neglect the contributions to the running caused by the first and second generation Yukawa couplings. If we further assume $ y _u^2 \ll y _c^2 \ll y _t^2$ and $ y _d^2 \ll y _s^2 \ll y _b^2$, then the CKM matrix RG~ equations \eeCKMRGEone~reduce to $$ 16 \pi^2 {\d \over \d t} V_{\alpha\beta} = y _t^2 \sum_{\delta=d,s,b} \varepsilon _{\delta\beta} V_{\alpha\delta} {V^\dag} _{\delta t} V_{t\beta} ~+~ y _b^2 \sum_{\gamma=u,c,t} \varepsilon _{\gamma\alpha} V_{\alpha b} {V^\dag} _{b \gamma} V_{\gamma \beta} , \qquad \varepsilon _{\alpha\beta} = \cases{1 & if $ y _\alpha < y _\beta$,\cr 0 & if $ y _\alpha = y _\beta$,\cr -1 & if $ y _\alpha > y _\beta$.\cr} \eqn\eeCKMRGEtwo $$ The CKM matrix elements $V_{\alpha\beta}$ are not all independent because of the constraint of unitarity. We prefer to go a step further than previous treatments by deriving RG~ equations for a set of {\it independent} quantities parametrizing the unitary CKM matrix: the mixing angles and CP-violating phase (but see ref.~[\rrPR]). To do so, however, we must squarely face the issue of phases multiplying the matrix. We adopt the following (nonstandard) parametrization of the CKM matrix $$ V (\mu) = \pmatrix{ {\rm e} ^{i\phi_u} & 0 & 0 \cr 0 & {\rm e} ^{i\phi_c} & 0 \cr 0 & 0 & {\rm e} ^{i\phi_t} \cr } \pmatrix{ \s_1 \s_2 \c_3 + \c_1 \c_2 \e^{i\phi} & \c_1 \s_2 \c_3 - \s_1 \c_2 \e^{i\phi} & \s_2 \s_3 \cr \s_1 \c_2 \c_3 - \c_1 \s_2 \e^{i\phi} & \c_1 \c_2 \c_3 + \s_1 \s_2 \e^{i\phi} & \c_2 \s_3 \cr - \s_1 \s_3 & - \c_1 \s_3 & \c_3 \cr} \pmatrix{ {\rm e} ^{i\phi_d} & 0 & 0 \cr 0 & {\rm e} ^{i\phi_s} & 0 \cr 0 & 0 & {\rm e} ^{i\phi_b} \cr }, \eqn\eeCKMParam $$ where $ {\rm c} _i = \cos \theta_i$ and $ {\rm s} _i = \sin \theta_i$, and all the parameters are functions of the scale $\mu$. The middle matrix is chosen to have real elements in the third row and column. We cannot automatically eliminate the left and right phase matrices by rephasing the quark fields; we have already used that freedom to ensure eqs.~\eeDiagUnitRGE, and those equations implicitly determine the functions $\dot{\phi}_\alpha (\mu)$. We can, however, impose the boundary condition $ \phi_\alpha ( {\overline\scale} ) = 0 $; \ie, the initial values for the five matrix elements $V_{\alpha\beta}$ in the third row or column may be chosen to be real. One can show, using $ V^ \dagger V = 1$, that the RG~ equations \eeCKMRGEtwo~for this set of five matrix elements close on themselves. 
Since their initial values are real and the coefficients in the equations are real, these five matrix elements remain real (\ie, $ \phi_\alpha (\mu) = 0 $) for all $\mu$. In other words, the choice of quark phases that guarantees eqs.~\eeDiagUnitRGE~also implies that the phases $\phi_\alpha (\mu)$ in the CKM parametrization \eeCKMParam~vanish (in the approximation that eq.~\eeCKMRGEtwo~is valid). We then obtain the RG~ equations for the angles $\theta_i$ and the CP-violating phase $\phi$ by substituting eq.~\eeCKMParam~into eq.~\eeCKMRGEtwo $$ \eqalign{ 16 \pi^2 {\d \over \d t} \ln \tan \theta_1 & = - y _t^2 \sin^2 \theta_3 , \qquad \qquad 16 \pi^2 {\d \over \d t} \ln \tan \theta_3 = - y _t^2 - y _b^2 , \crr 16 \pi^2 {\d \over \d t} \ln \tan \theta_2 & = - y _b^2 \sin^2 \theta_3 , \qquad \qquad 16 \pi^2 {\d \over \d t} \phi = 0. \cr } \eqn\eeAngleRGE $$ The RG~ equations for the Yukawa eigenvalues [\rrIKKT, \rrMP], $$ \eqalign{ 16\pi^2 {\d \over \d t} \ln y _u & = -c^u_i g_i^2 + 3 y _t^2 + y _b^2 \cos^2 \theta_2 \sin^2 \theta_3 ,\cr 16\pi^2 {\d \over \d t} \ln y _c & = -c^u_i g_i^2 + 3 y _t^2 + y _b^2 \sin^2 \theta_2 \sin^2 \theta_3 ,\cr 16\pi^2 {\d \over \d t} \ln y _t & = -c^u_i g_i^2 + 6 y _t^2 + y _b^2 \cos^2 \theta_3 ,\cr 16\pi^2 {\d \over \d t} \ln y _d & = -c^d_i g_i^2 + y _t^2 \sin^2 \theta_1 \sin^2 \theta_3 + 3 y _b^2 + y _\tau^2 ,\cr 16\pi^2 {\d \over \d t} \ln y _s & = -c^d_i g_i^2 + y _t^2 \cos^2 \theta_1 \sin^2 \theta_3 + 3 y _b^2 + y _\tau^2 ,\cr 16\pi^2 {\d \over \d t} \ln y _b & = -c^d_i g_i^2 + y _t^2 \cos^2 \theta_3 + 6 y _b^2 + y _\tau^2 ,\cr 16\pi^2 {\d \over \d t} \ln y _e & = -c^e_i g_i^2 + 3 y _b^2 + y _\tau^2 ,\cr 16\pi^2 {\d \over \d t} \ln y _\mu & = -c^e_i g_i^2 + 3 y _b^2 + y _\tau^2 ,\cr 16\pi^2 {\d \over \d t} \ln y _\tau & = -c^e_i g_i^2 + 3 y _b^2 + 4 y _\tau^2 ,\cr } \eqn\eeEigvalRGEone $$ are obtained by substituting eq.~\eeCKMParam~into eqs.~\eeYukSqRGE, again neglecting first and second generation Yukawa coupling contributions to the running. Although we assumed $ y _u^2 \ll y _c^2 \ll y _t^2$ and $ y _d^2 \ll y _s^2 \ll y _b^2$ in deriving eqs.~\eeAngleRGE~and \eeEigvalRGEone, we did {\it not} assume that the mixing angles $\theta_i$ were small. Because the third generation mixes with the first two, third generation quarks induce some running of the mixing angles $\theta_1$ and $\theta_2$ and the ratios of first and second generation quarks, $ y _u / y _c $ and $ y _d / y _s $. The amount of running of these quantities, however, is typically quite small because of the smallness of $\theta_3 \sim 0.05 $; they change by less than 0.1\% from the GUT scale to the electroweak scale if $ y _b$, $ y _t \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 1.5$. We will therefore be justified in the following in neglecting terms proportional to $\sin^2 \theta_3$ on the r.h.s. of the RG~ equations \eeAngleRGE~and \eeEigvalRGEone. \chapter{Approximate Solutions to the RG~ Equations} In this section, we find explicit solutions to the renormalization group equations for the fermion masses and CKM matrix elements. To do so, we need to make several approximations. In deriving the RG equations \eeAngleRGE~and \eeEigvalRGEone~in the last section, we neglected the running caused by the first and second generations of Yukawa couplings. We now make the further assumption that the mixing angles are small. 
In this approximation, the RG~ equations for the CKM matrix elements simplify to $$ 16 \pi^2 {\d \over \d t} \ln V_{\alpha\beta} = \cases{ - y _t^2 - y _b^2 & for $\alpha\beta = ub, cb, td,$ and $ts$, \cr ~~0 & for $\alpha\beta = ud, us, cd, cs,$ and $tb$, \cr} \eqn\eeCKMRGEthree $$ and the Yukawa eigenvalues satisfy $$ \eqalign{ 16\pi^2 {\d \over \d t} \ln y _{u,c} & = -c^u_i g_i^2 + 3 y _t^2 ,\cr 16\pi^2 {\d \over \d t} \ln y _{t~~} & = -c^u_i g_i^2 + 6 y _t^2 + y _b^2 ,\cr 16\pi^2 {\d \over \d t} \ln y _{d,s} & = -c^d_i g_i^2 + 3 y _b^2 + y _\tau^2 ,\cr 16\pi^2 {\d \over \d t} \ln y _{b~~} & = -c^d_i g_i^2 + y _t^2 + 6 y _b^2 + y _\tau^2 ,\cr 16\pi^2 {\d \over \d t} \ln y _{e,\mu} & = -c^e_i g_i^2 + 3 y _b^2 + y _\tau^2 ,\cr 16\pi^2 {\d \over \d t} \ln y _{\tau~~} & = -c^e_i g_i^2 + 3 y _b^2 + 4 y _\tau^2 .\cr } \eqn\eeEigvalRGEtwo $$ If we define the scaling factors $$ A_\alpha (\mu) = \exp \left[ \textstyle{ {1 \over 16\pi^2} \int_{\ln \mu}^{\ln {\overline\scale} } c_i^\alpha g_i^2 ( \scale^\prime ) {\rm d} (\ln \scale^\prime ) } \right] \eqn\eeADef $$ and $$ B_\alpha (\mu) = \exp \left[ \textstyle{ -~{1 \over 16\pi^2} \int_{\ln \mu}^{\ln {\overline\scale} } y _\alpha^2 ( \scale^\prime ) {\rm d} (\ln \scale^\prime ) } \right], \eqn\eeBDef $$ then the solutions to eqs.~\eeCKMRGEthree~and \eeEigvalRGEtwo~are given by $$ V_{\alpha\beta} (\mu) = \cases{ \overline{V}_{\alpha\beta} B_t^{-1} B_b^{-1} & for $\alpha\beta = ub, cb, td,$ and $ts$, \cr \overline{V}_{\alpha\beta} & for $\alpha\beta = ud, us, cd, cs,$ and $tb$, \cr} \eqn\eeCKMSoln $$ and $$ \eqalign{ & y _{u} (\mu) = \overline{\lam} _{u} A_u B_t^3 , \quad\qquad y _{d} (\mu) = \overline{\lam} _{d} A_d B_b^3 B_{\tau} , \quad\qquad y _{e} (\mu) = \overline{\lam} _{e} A_e B_b^3 B_{\tau} , \cr & y _{c} (\mu) = \overline{\lam} _{c} A_u B_t^3 , \quad\qquad y _{s} (\mu) = \overline{\lam} _{s} A_d B_b^3 B_{\tau} , \quad\qquad y _{\mu} (\mu) = \overline{\lam} _{\mu} A_e B_b^3 B_{\tau} , \cr & y _{t} (\mu) = \overline{\lam} _{t} A_u B_t^6 B_b , \qquad y _{b} (\mu) = \overline{\lam} _{b} A_d B_t B_b^6 B_{\tau} , \qquad y _{\tau} (\mu) = \overline{\lam} _{\tau} A_e B_b^3 B_{\tau} ^4 , \cr } \eqn\eeEigvalSoln $$ where the overline denotes quantities evaluated at the GUT scale. The $A_\alpha$ factors encapsulate the running induced by the gauge couplings; the $B_\alpha$ factors that induced by the Yukawa couplings. \REF\rrOP{ M.~Olechowski and S.~Pokorski, \sl Phys. Lett. \bf B 257 \rm (1991) 388.} \REF\rrUni{ V.~Barger, M.~S.~Berger, and P.~Ohmann, Madison preprint MAD/PH/722.} In the approximation of small mixing angles, the four off-diagonal CKM matrix elements involving the third generation all run with the same scale factor $ B_t^{-1} B_b^{-1}$, while the remaining five matrix elements do not run at all, as has been observed previously [\rrBS, \rrOP, \rrUni]. Also in this approximation, the ratios of first and second generation Yukawa couplings, $ y _u / y _c $, $ y _d / y _s $, and $ y _e / y _\mu $, are invariant under running induced by third generation Yukawa couplings, as well as under running induced by gauge couplings [\rrBS, \rrOP, \rrUni]. These results hold to all orders in perturbation theory [\rrUni]. In order to use eqs.~\eeCKMSoln~and \eeEigvalSoln~to scale fermion mass and mixing angle relations from the GUT scale to the supersymmetry breaking scale, we must know the values of $A_\alpha (\scale_{\rm susy} )$ and $B_\alpha (\scale_{\rm susy} )$.
The scaling factors due to the gauge couplings $A_\alpha (\mu)$ obey $$ 16 \pi^2 {\d \over \d t} \ln A_\alpha = -c_i^\alpha g_i^2 , \eqn\eeARGE $$ and are easily calculated using eqs.~\eeGaugeRGE~and \eeGaugeSoln: $$ A_\alpha (\mu) = \prod_{i=1}^3 \left[ g_i (\mu) \over \overline{g} \right]^{-c_i^\alpha / b_i} = \prod_{i=1}^3 \left[ 1 - {b_i \overline{g} ^2 \over 8 \pi^2 } \ln \left( \mu \over {\overline\scale} \right) \right]^{c_i^\alpha / 2b_i} . \eqn\eeASoln $$ These factors are given at the supersymmetry breaking scale by $$ A_u (\scale_{\rm susy} )= 3.21, \qquad A_d (\scale_{\rm susy} )= 3.13, \qquad A_e (\scale_{\rm susy} )= 1.48, \qquad {\rm for~} \scale_{\rm susy} = 170 {\rm~GeV} , \eqn\eeANumer $$ using eqs.~\eeGaugeRGECoeff~and \eeGaugeGUT. The scaling factors due to the Yukawa couplings $B_\alpha (\mu)$ obey $$ \eqalign{ 16 \pi^2 {\d \over \d t} \ln B_t & = y _t^2 = \overline{\lam} _t^2 A_u^2 B_t^{12} \left[ B_b^2 \right] ,\cr 16 \pi^2 {\d \over \d t} \ln B_b & = y _b^2 = \overline{\lam} _b^2 A_d^2 B_b^{12} \left[ B_t^2 B_{\tau} ^2 \right] ,\cr 16 \pi^2 {\d \over \d t} \ln B_{\tau} & = y _\tau^2 = \overline{\lam} _\tau^2 A_e^2 B_{\tau} ^{12} \left[ ( B_b / B_{\tau} )^4 B_b^2 \right] .\cr } \eqn\eeBRGE $$ The $B_\alpha$ are equal to 1 at the GUT scale, and decrease monotonically as one lowers the scale. The equations \eeBRGE~do not have an analytic solution. We can obtain an approximate solution by setting the factors in brackets equal to 1. The equations then decouple from one another, and have the solutions $$ \eqalign{ B_t (\mu) & \approx \left[ 1 + \overline{\lam} _t^2 K_u (\mu) \right]^{-1/12} , \crr B_b (\mu) & \approx \left[ 1 + \overline{\lam} _b^2 K_d (\mu) \right]^{-1/12} , \qquad\qquad K_\alpha (\mu) = {3 \over 4\pi^2} \int_{\ln \mu}^{\ln {\overline\scale} } A_\alpha^2 ( \scale^\prime ) {\rm d} (\ln \scale^\prime ). \crr B_{\tau} (\mu) & \approx \left[ 1 + \overline{\lam} _\tau^2 K_e (\mu) \right]^{-1/12} , \cr } \eqn\eeBSolnone $$ We numerically integrate $K_\alpha (\mu)$ to find $$ K_u (\scale_{\rm susy} ) = 8.65, \qquad K_d (\scale_{\rm susy} )= 8.33, \qquad K_e (\scale_{\rm susy} )= 3.77, \qquad {\rm for~} \scale_{\rm susy} = 170 {\rm~GeV} . \eqn\eeINumer $$ The terms in the brackets in eq.~\eeBRGE~are all less than 1, so omitting them tends to increase the running of the factors $B_\alpha$. Consequently, the expressions in eq.~\eeBSolnone~are smaller than the exact values of $B_\alpha$ at $\mu = \scale_{\rm susy} $ by about 1 or 2\%. As we will see in sect.~4, this approximation tends to exaggerate the effect of the running induced by the Yukawa couplings. In the limit $ y _b$, $ y _\tau \ll y _t$, the approximate solutions \eeBSolnone~reduce to the exact result for the running induced by the top quark alone [\rrDHR, \rrBBHZ] $$ B_t (\mu) = \left[ 1 + \overline{\lam} _t^2 K_u (\mu) \right]^{-1/12} , \qquad\qquad B_b (\mu) = B_{\tau} (\mu) = 1. \eqn\eeBExactSoln $$ \chapter{Running Relations} We examine in this section several different sets of fermion mass and mixing angle relations, and the effect of renormalization group running on these relations. We assume that physics at the GUT scale dictates certain forms, or textures, for the matrices of Yukawa couplings. Different physics leads to different textures. In this section, we focus on three different textures: the Georgi-Jarlskog texture, the Giudice texture, and the Fritzsch texture. 
We will not be concerned so much with the physics behind these textures, but simply take them as given, and examine the relations among fermion masses and mixing angles to which they give rise. The relations derived from these various textures hold at the GUT scale, and need to be scaled down to low energy to yield predictions for measured parameters. We use the spectrum of the minimal supersymmetric standard model to run the relations from the GUT scale down to the scale at which supersymmetry is broken, $\scale_{\rm susy} =170$ GeV. Below $\scale_{\rm susy} $, the CKM matrix elements do not evolve much, \foot{The $t$, $b$, and $\tau$ Yukawa couplings continue to induce running down to the scale of their masses, but the amount of running is much less than that from $ {\overline\scale} $ to $\scale_{\rm susy} $.} but the Yukawa eigenvalues continue to run due to QED and QCD effects. This additional running is incorporated in the factors $\eta_\alpha$, defined by $$ \eta_\alpha = { y _\alpha (m_\alpha) \over y _\alpha (\scale_{\rm susy} ) }. \eqn\eeEtaDef $$ In this paper, $m_\alpha$ denotes not the physical mass but rather the running mass of the fermion, defined by $$ m_\alpha = y _\alpha (m_\alpha) {v \over \sqrt{2} } \cases{ \sin \beta & for $\alpha=u$, $c$, and $t$, \cr \cos \beta & for $\alpha=d$, $s$, $b$, $e$, $\mu$, and $\tau$. \cr} \eqn\eeMassDef $$ The physical (pole) mass of the top quark is then related to its running mass by $$ m_t^{\rm phys} = \left[ 1 + {4 \over 3 \pi} \alpha_3 (m_t) + O (\alpha_3^2) \right] m_t. \eqn\eeMtopPole $$ In eqs.~\eeEtaDef~and \eeMassDef, $ y _\alpha (m_\alpha)$ should be replaced by $ y _\alpha (1 {\rm~GeV} )$ for the three lightest quarks, $\alpha = u$, $d$, and $s$. \REF\rrGL{ J.~Gasser and H.~Leutwyler, \sl Phys. Rep. \bf 87 \rm (1982) 77. } When specific numerical values are required in the following, we will use [\rrGL] $$ m_\tau = 1.7841^{+0.0027}_{-0.0036} {\rm~GeV} , \qquad m_c = 1.27 \pm 0.05 {\rm~GeV} , \qquad m_b = 4.25 \pm 0.10 {\rm~GeV} , \eqn\eeMassNumer $$ for the masses, and [\rrBBHZ] $$ \eta_u = 2.17, \qquad \eta_s = 2.16, \qquad \eta_c = 1.89, \qquad \eta_b = 1.47, \qquad \eta_\tau = 1.02, \eqn\eeEtaNumer $$ for the QCD/QED scaling factors, corresponding to $\alpha_3 (M_Z) = 0.111$. We will generally assume that $m_t$ is close enough to $\scale_{\rm susy} = 170 {\rm~GeV} $ that running between the two scales is small, $$ \eta_t \approx 1. \eqn\eeEtatNumer $$ There is considerable uncertainty in the values of the scaling factors \eeEtaNumer~due to the uncertainty in $\alpha_3 (M_Z)$ [\rrBBO, \rrADHR]. Since our results are analytic, it is easy to determine the effects of choosing other values for $m_\alpha$ and $\eta_\alpha$. \section{The Georgi-Jarlskog Texture} The first texture for the Yukawa matrices we consider is $$ Y_U = \pmatrix{ 0 & C & 0 \cr C & 0 & B \cr 0 & B & A \cr}, \qquad Y_D = \pmatrix{ 0 & F {\rm e} ^{i\phi} & 0 \cr F {\rm e} ^{-i\phi} & E & 0 \cr 0 & 0 & D \cr}, \qquad Y_E = \pmatrix{ 0 & F & 0 \cr F & -3E & 0 \cr 0 & 0 & D \cr}, \eqn\eeGJTexture $$ assumed to hold at the grand unification scale. Georgi and Jarlskog [\rrGJ] originally posited this form for the Yukawa matrices in an SU(5) grand unified theory, and Harvey, Ramond, and Reiss [\rrHRR] used it in an SO(10) model. Recently, a number of authors [\rrFla, \rrDHR, \rrBBHZ, \rrBBO, \rrADHR] have re-examined this texture in a supersymmetric context.
The relations between the $ Y_D $ and $ Y_E $ matrix elements follow if the charged leptons and charge $-{1\over 3}$ quarks belong to the same grand unified representation. Entries of the two matrices that are equal in magnitude result from Yukawa couplings to a Higgs field in the 5 of SU(5) or the 10 of SO(10). Entries differing by a factor of $-3$ result from Yukawa couplings to a Higgs field in the 45 of SU(5) or the 126 of SO(10). The zero entries of $ Y_U $, $ Y_D $, and $ Y_E $ are due to discrete symmetries [\rrDisc, \rrFri] at the grand unified scale. The Georgi-Jarlskog texture \eeGJTexture~leads to six relations among fermion masses and mixing angles. The eigenvalues of the Yukawa matrices \eeGJTexture~obey the SU(5) relation [\rrSUfive] $$ { \overline{\lam} _b \over \overline{\lam} _\tau} = 1 \eqn\eeGJSUGUT $$ and the Georgi-Jarlskog relations [\rrGJ] $$ { \overline{\lam} _\mu - \overline{\lam} _e \over \overline{\lam} _s - \overline{\lam} _d} = 3 , \qquad\qquad { \overline{\lam} _e \overline{\lam} _\mu \over \overline{\lam} _d \overline{\lam} _s} = 1. \eqn\eeGJRoneGUT $$ The latter two equations can be combined into $$ { \left( \overline{\lam} _d / \overline{\lam} _s \right) \over \left[ 1 - \left( \overline{\lam} _d / \overline{\lam} _s \right)\right]^2 } = { 9 \left( \overline{\lam} _e / \overline{\lam} _\mu \right) \over \left[ 1 - \left( \overline{\lam} _e / \overline{\lam} _\mu \right) \right]^2 }. \eqn\eeGJRtwoGUT $$ The quark Yukawa matrices $ Y_U $ and $ Y_D $ are diagonalized by bi-unitary transformations $ \Yhat_F = {L_F} ^ \dagger Y_F {R_F} $, with $$ {L_U} = \pmatrix{ 1 & 0 & 0 \cr 0 & \bc_3 & - \bs_3 \cr 0 & \bs_3 & \bc_3 \cr} \pmatrix{ \bc_2 & - \bs_2 & 0 \cr \bs_2 & \bc_2 & 0 \cr 0 & 0 & 1 \cr}, \qquad {L_D} = \pmatrix{ \e^{i\phi} & 0 & 0 \cr 0 & 1 & 0 \cr 0 & 0 & 1 \cr} \pmatrix{ \bc_1 & - \bs_1 & 0 \cr \bs_1 & \bc_1 & 0 \cr 0 & 0 & 1 \cr}, \eqn\eeGJUnit $$ and with $ {R_U} $ and $ {R_D} $ equal to $ {L_U} $ and $ {L_D} $ respectively, modulo diagonal matrices of signs to make the eigenvalues $ \overline{\lam} _\alpha$ positive. In the unitary matrices \eeGJUnit, $ \overline{\rm c} _i = \cos \overline{\theta} _i$ and $ \overline{\rm s} _i = \sin \overline{\theta} _i$, with $$ \tan^2 \overline{\theta} _1 = { \overline{\lam} _d \over \overline{\lam} _s} , \qquad\qquad \tan^2 \overline{\theta} _2 = { \overline{\lam} _u \over \overline{\lam} _c} , \qquad\qquad \tan^2 \overline{\theta} _3 = { \overline{\lam} _c - \overline{\lam} _u \over \overline{\lam} _t} \approx { \overline{\lam} _c \over \overline{\lam} _t} . \eqn\eeGJAngles $$ The unitary transformations \eeGJUnit~result in a CKM matrix of exactly the form \eeCKMParam $$ \overline{V} = {L_U} ^ \dagger {L_D} = \pmatrix{ \bs_1 \bs_2 \bc_3 + \bc_1 \bc_2 \e^{i\phi} & \bc_1 \bs_2 \bc_3 - \bs_1 \bc_2 \e^{i\phi} & \bs_2 \bs_3 \cr \bs_1 \bc_2 \bc_3 - \bc_1 \bs_2 \e^{i\phi} & \bc_1 \bc_2 \bc_3 + \bs_1 \bs_2 \e^{i\phi} & \bc_2 \bs_3 \cr - \bs_1 \bs_3 & - \bc_1 \bs_3 & \bc_3 \cr}. 
\eqn\eeGJCKMMat $$ Therefore, in the approximation that the mixing angles are small, the CKM matrix elements satisfy the Harvey-Ramond-Reiss (HRR) relation [\rrHRR] $$ \left| \overline{V}_{cb} \right| \approx \sqrt{ \overline{\lam} _c \over \overline{\lam} _t} , \eqn\eeGJVcbGUT $$ which depends on the top quark Yukawa coupling, as well as the relations $$ \left| \overline{V}_{us} \right| \approx \left| \sqrt{ \overline{\lam} _d \over \overline{\lam} _s} - {\rm e} ^{-i\phi} \sqrt{ \overline{\lam} _u \over \overline{\lam} _c} \right|, \qquad\qquad { \left| \overline{V}_{ub} \right| \over \left| \overline{V}_{cb} \right| } \approx \sqrt{ \overline{\lam} _u \over \overline{\lam} _c} , \eqn\eeGJCKMGUT $$ which depend on only the first and second generation Yukawa couplings. The six relations \eeGJSUGUT--\eeGJRtwoGUT, \eeGJVcbGUT, and \eeGJCKMGUT~hold at the GUT scale. The Yukawa matrices do not retain the form \eeGJTexture~below the GUT scale because of RG~ running induced by large Yukawa couplings. Rather than follow the evolution of the Yukawa matrices, however, we will determine the effect of the running on the relations \eeGJSUGUT--\eeGJRtwoGUT, \eeGJVcbGUT, and \eeGJCKMGUT. We use eqs.~\eeCKMSoln~and \eeEigvalSoln~to scale the relations from the GUT scale $ {\overline\scale} $ to the supersymmetry breaking scale $\scale_{\rm susy} $. (Whenever the scaling factors $A_\alpha$ and $B_\alpha$ are written without an explicit scale $\mu$ throughout this section, $\mu = \scale_{\rm susy} $ is understood.) The further running of the Yukawa couplings from $\scale_{\rm susy} $ to the scale of the fermion masses is included in the factors $\eta_\alpha$ defined in eq.~\eeEtaDef. Thus, the SU(5) relation \eeGJSUGUT~leads to $$ {m_b \over m_\tau} = { A_d \eta_b \over A_e \eta_\tau } { B_t B_b^3 \over B_{\tau} ^3} , \eqn\eeGJSUPhys $$ and the Georgi-Jarlskog relations \eeGJRoneGUT~and \eeGJRtwoGUT~imply $$ {m_e m_\mu \over m_d m_s} = { A_e^2 \eta_e \eta_\mu \over A_d^2 \eta_d \eta_s } , \qquad\qquad { \left( m_d / m_s \right) \over \left[ 1 - \left( m_d / m_s \right)\right]^2 } = { 9 \left( \eta_\mu m_e / \eta_e m_\mu \right) \over \left[ 1 - \left( \eta_\mu m_e / \eta_e m_\mu \right) \right]^2 }, \eqn\eeGJRPhys $$ since $\eta_d = \eta_s$. The HRR relation \eeGJVcbGUT~implies $$ \left| V_{cb} \right| \approx \sqrt{\eta_t m_c \over \eta_c m_t} \sqrt{ B_t \over B_b}, \eqn\eeGJVcbPhys $$ and the CKM relations \eeGJCKMGUT~lead to $$ \left| V_{us} \right| \approx \left| \sqrt{m_d \over m_s} - {\rm e} ^{-i\phi} \sqrt{\eta_c m_u \over \eta_u m_c } \right|, \qquad\qquad { \left| V_{ub} \right| \over \left| V_{cb} \right| } \approx \sqrt{\eta_c m_u \over \eta_u m_c } . \eqn\eeGJCKMPhys $$ The running induced by the third generation of fermions is contained in the factors $ B_t$, $ B_b$, and $ B_{\tau} $. Dimopoulos, Hall, and Raby [\rrDHR] found that four of the six relations implied by the Georgi-Jarlskog texture, {\it viz.}, the two Georgi-Jarlskog relations \eeGJRoneGUT~and the two CKM matrix element relations \eeGJCKMGUT, are unaffected by the running induced by the top quark, in the limit that the mixing angles are small. We see from eqs.~\eeGJRPhys~and \eeGJCKMPhys~that, not surprisingly, these four relations are also insensitive to the running induced by the bottom quark and $\tau$ lepton. Therefore, the predictions of ref.~[\rrDHR] for $m_s$, $m_d$, $\phi$, and $\left| V_{ub} / V_{cb} \right|$ remain unchanged even when all three third generation Yukawa couplings contribute significantly to the running. 
The SU(5) and HRR relations, however, are modified by the running induced by the third generation of fermions. The scaling factors $ B_t$, $ B_b$, and $ B_{\tau} $ are determined by the RG~ equations \eeBRGE, which have the approximate solutions $$ B_t \approx \left[ 1 + \overline{\lam} _t^2 K_u \right]^{-1/12} , \qquad B_b \approx \left[ 1 + \overline{\lam} _b^2 K_d \right]^{-1/12} , \qquad B_{\tau} \approx \left[ 1 + \overline{\lam} _\tau^2 K_e \right]^{-1/12} , \eqn\eeBSolntwo $$ where $K_\alpha = K_\alpha (\scale_{\rm susy} )$ are given in eq.~\eeINumer. The scaling factors $B_\alpha$ depend on the GUT scale Yukawa couplings $ \overline{\lam} _t$, $ \overline{\lam} _b$, and $ \overline{\lam} _\tau$. These couplings are related to the fermion masses by $$ \eqalignno{ m_\tau & = {v A_e \eta_\tau \over \sqrt{2} } \; \overline{\lam} _\tau B_b^3 B_{\tau} ^4 (\cos\beta), & \eqname \eeMtauDef \cr m_b & = {v A_d \eta_b \over \sqrt{2} } \; \overline{\lam} _b B_t B_b^6 B_{\tau} (\cos \beta), & \eqname \eeMbottomDef \cr m_t & = {v A_u \eta_t \over\sqrt{2}} \; \overline{\lam} _t B_t^6 B_b (\sin \beta) \approx {v A_u \eta_t \over\sqrt{2K_u}} B_b \sqrt{ 1 - B_t^{12} } ~(\sin \beta), & \eqname \eeMtopDef \cr } $$ using eqs.~\eeEigvalSoln, \eeEtaDef, and \eeMassDef, where the last equality in eq.~\eeMtopDef~depends on the approximation~\eeBSolntwo. The three GUT scale Yukawa couplings $ \overline{\lam} _t$, $ \overline{\lam} _b$, and $ \overline{\lam} _\tau$ are not independent, however. First, the Georgi-Jarlskog texture dictates that $ \overline{\lam} _\tau = \overline{\lam} _b$. Second, from the SU(5) relation \eeGJSUPhys, we have $$ B_t = k \left( B_{\tau} \over B_b \right)^3 , \qquad\qquad k \equiv {A_e \eta_\tau m_b \over A_d \eta_b m_\tau } \approx 0.78. \eqn\eeGJBtop $$ (The deviation of $k$ from unity shows that significant running must be induced by the Yukawa couplings for the SU(5) relation to be valid.) Equation \eeGJBtop, together with eq.~\eeBSolntwo, may be used to determine $ \overline{\lam} _t$ in terms of $ \overline{\lam} _b$. Hence, $ B_t$, $ B_b$, and $ B_{\tau} $, as well as the fermion masses, may be written in terms of a single GUT scale parameter $ \overline{\lam} _b$. This implies a relation between $\beta$ and $m_t$. The parameter $\beta$ may be expressed in terms of $ \overline{\lam} _b$ as $$ \sec \beta = {v A_e \eta_\tau \over \sqrt{2} m_\tau} ( \overline{\lam} _b B_b^3 B_{\tau} ^4 ) , \eqn\eeGJBeta $$ using eq.~\eeMtauDef~and $ \overline{\lam} _\tau = \overline{\lam} _b$. The top quark mass is given by $$ m_t \approx {v A_u \eta_t \over \sqrt{2 K_u} } B_b \sqrt{ 1 - k^{12} \left( B_{\tau} / B_b \right)^{36} } {}~(\sin \beta), \eqn\eeGJMtop $$ using eqs.~\eeMtopDef~and \eeGJBtop. Plotting \eeGJBeta~and \eeGJMtop~parametrically as functions of $ \overline{\lam} _b$, we obtain the relation between $m_t$ and $ \tan\beta$ shown by the solid line in fig.~1. To see more explicitly the dependence of $m_t$ on $\beta$, we neglect terms of $O( \overline{\lam} _b^3)$ on the r.h.s. of eq.~\eeGJBeta~to obtain $$ \overline{\lam} _b \approx {\sec \beta \over 150 } , \eqn\eeGJBetaApprox $$ then expand eq.~\eeGJMtop~in terms of $ \overline{\lam} _b$ to obtain the approximate relation $$ m_t \approx (185 {\rm~GeV} ) (\sin \beta) [ 1 - (5 \times 10^{-5}) \sec^2 \beta + \ldots ]. 
\eqn\eeGJMtopApprox $$ The Georgi-Jarlskog texture implies an upper bound on the top quark mass, $m_t \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 185 {\rm~GeV} $, which is saturated for $ \tan\beta \sim 10$. If the running induced by the $b$ quark and $\tau$ lepton is neglected, $m_t$ increases monotonically with $\beta$, as shown by the dotted line in fig.~1, and eq.~\eeGJMtopApprox~reduces to the linear relation between $ m_t $ and $ \sin \beta $ found in refs. [\rrDHR, \rrBBHZ]. The inclusion of the effects of all three third generation Yukawa couplings is responsible for the last factor in eq.~\eeGJMtopApprox, and causes $m_t$ to decrease for large values of $ \tan\beta$. Thus, a given value of $m_t$ may correspond to two possible values of $\beta$ [\rrBBO, \rrADHR]. The relation between $m_t$ and $ \tan\beta$ obtained by numerically integrating the RG~ equations \eeBRGE~without approximation is shown by the dashed line in fig.~1, and is similar to plots shown in refs.~[\rrFla, \rrKLN, \rrBBO, \rrADHR]. The solid line, based on the approximation~\eeBSolntwo, has the same qualitative behavior as the dashed line, but the effects of the Yukawa-induced running for large $ \tan\beta$ are exaggerated, as anticipated in sect.~3. The HRR relation \eeGJVcbPhys~determines the dependence of $ \left| V_{cb} \right| $ on $\beta$. Using eq.~\eeGJVcbPhys~together with eq.~\eeGJBtop, we find $$ \left| V_{cb} \right| \approx \sqrt{ A_e \eta_\tau \eta_t m_b m_c \over A_d \eta_b \eta_c m_\tau m_t } \sqrt{ B_{\tau} ^3 \over B_b^4}. \eqn\eeGJVcbPhystwo $$ (When running induced by the $b$ quark and $\tau$ lepton are neglected, this reduces to eq.~(22) of ref.~[\rrDHR].) Using eqs.~\eeGJBeta~and \eeGJVcbPhystwo, we plot $ \left| V_{cb} \right| $ as a function of $ \tan\beta$ in fig.~2 (solid line). As before, the dashed line indicates the result based on numerical integration of the RG~ equations (similar to plots in ref.~[\rrADHR]). Expanding eq.~\eeGJVcbPhystwo~in terms of $ \overline{\lam} _b$, we obtain the approximate relation $$ \left| V_{cb} \right| \approx { 0.053 \over \sqrt{ \sin \beta } } [ 1 + (7 \times 10^{-5} ) \sec^2 \beta + \ldots ]. \eqn\eeGJVcbApprox $$ The Georgi-Jarlskog texture implies that $ \left| V_{cb} \right| $ must be $ \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 0.053 $. In principle, given $ \left| V_{cb} \right| $, we could use eq.~\eeGJVcbPhystwo~to determine $\beta$ (up to a two-fold ambiguity) and therefore $m_t$. The uncertainty in $ \left| V_{cb} \right| $, however, allows us only to set bounds on $\beta$. For example, requiring $ \left| V_{cb} \right| < 0.058 $ leads to a lower bound on $ \tan\beta$ [\rrDHR, \rrBBHZ] $$ \left| V_{cb} \right| < 0.058 \quad \Rightarrow \quad \tan\beta \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 1.5 \quad \Rightarrow \quad m_t \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 155 {\rm~GeV} , \eqn\eeGJBetaLow $$ as well as an upper bound [\rrBBO, \rrADHR] $$ \left| V_{cb} \right| < 0.058 \quad \Rightarrow \quad \tan\beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 40 \quad \Rightarrow \quad m_t \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 170 {\rm~GeV} . 
\eqn\eeGJBetaHigh $$ It is the running induced by the $b$ quark and $\tau$ lepton that causes $ \left| V_{cb} \right| $ to increase for large $ \tan\beta$ \eeGJVcbApprox, and therefore allows us to derive this upper bound on $ \tan\beta$; neglecting this running leads to the relation shown by the dotted line in fig.~2, which obeys $ \left| V_{cb} \right| < 0.058$ for all $ \tan\beta \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 1.5$. Finally, we plot $ \left| V_{cb} \right| $ as a function of $m_t$ in fig.~3. Here our analytic approximation (solid line) almost exactly coincides with the numerical solution (dashed line). Each value of $ \left| V_{cb} \right| \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 0.053$ corresponds to two different values of $m_t$ [\rrADHR]. The lower branch of the curve is approximately described by the relation $$ { \left| V_{cb} \right| \over 0.053 } \approx \sqrt { 185 {\rm~GeV} \over m_t }, \eqn\eeGJLowerBranch $$ and holds for $ \tan\beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 8$. The upper branch of the curve holds for $ \tan\beta \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 8$; here the relation between $ \left| V_{cb} \right| $ and $m_t$ is approximately linear $$ \left| V_{cb} \right| \approx 0.053 + 0.075 \left( 1 - {m_t\over 185 {\rm~GeV} } \right), \eqn\eeGJUpperBranch $$ which we obtain by combining eqs.~\eeGJMtopApprox~and \eeGJVcbApprox. \section{The Giudice Texture } Next we consider the Giudice texture [\rrGiu] for the Yukawa matrices, $$ Y_U = \pmatrix{ 0 & 0 & B \cr 0 & B & 0 \cr B & 0 & A \cr}, \qquad Y_D = \pmatrix{ 0 & F {\rm e} ^{i\phi} & 0 \cr F {\rm e} ^{-i\phi} & E & D \cr 0 & D & C \cr}, \qquad Y_E = \pmatrix{ 0 & F & 0 \cr F & -3E & D \cr 0 & D & C \cr}, \eqn\eeGiuTexture $$ at the GUT scale. (Giudice additionally imposes the {\it ad hoc} relation $D=2E$, but we leave these matrix elements unrelated.) This texture leads to six relations among fermion masses and mixing angles. The eigenvalues of the Yukawa matrices obey not only the Georgi-Jarlskog and SU(5) relations $$ { \overline{\lam} _e \over \overline{\lam} _d} \approx {1 \over 3}, \qquad\qquad { \overline{\lam} _\mu \over \overline{\lam} _s} \approx 3, \qquad\qquad { \overline{\lam} _b \over \overline{\lam} _\tau} = 1, \eqn\eeGiuEigvalGUT $$ but also the geometric mean relation $$ \overline{\lam} _u \overline{\lam} _t = \overline{\lam} _c^2 . \eqn\eeGeomMeanGUT $$ The quark Yukawa matrices are diagonalized by $$ {L_U} = \pmatrix{ \bc_2 & 0 & - \bs_2 \cr 0 & 1 & 0 \cr \bs_2 & 0 & \bc_2 \cr}, \qquad {L_D} = \pmatrix{ \e^{i\phi} & 0 & 0 \cr 0 & \bc_3 & \bs_3 \cr 0 & - \bs_3 & \bc_3 \cr} \pmatrix{ \bc_1 & - \bs_1 & 0 \cr \bs_1 & \bc_1 & 0 \cr 0 & 0 & 1 \cr}, \eqn\eeGiuUnit $$ with $$ \tan^2 \overline{\theta} _1 = { \overline{\lam} _d \over \overline{\lam} _s} , \qquad\qquad \tan^2 \overline{\theta} _2 = { \overline{\lam} _u \over \overline{\lam} _t} , \qquad\qquad \overline{\theta} _3 \approx {D \over \overline{\lam} _b} . \eqn\eeGiuAngles $$ The unitary transformations \eeGiuUnit~lead to a CKM matrix of the form $$ \overline{V} = {L_U} ^ \dagger {L_D} = \pmatrix{ - \bs_1 \bs_2 \bs_3 + \bc_1 \bc_2 \e^{i\phi} & - \bc_1 \bs_2 \bs_3 - \bs_1 \bc_2 \e^{i\phi} & \bs_2 \bc_3 \cr \bs_1 \bc_3 & \bc_1 \bc_3 & \bs_3 \cr - \bs_1 \bc_2 \bs_3 - \bc_1 \bs_2 \e^{i\phi} & - \bc_1 \bc_2 \bs_3 + \bs_1 \bs_2 \e^{i\phi} & \bc_2 \bc_3 \cr }. 
\eqn\eeGiuCKMMat $$ Thus, for small mixing angles, the CKM matrix elements obey the relations $$ \left| \overline{V}_{us} \right| \approx \sqrt{ \overline{\lam} _d \over \overline{\lam} _s}, \qquad\qquad \left| \overline{V}_{ub} \right| \approx \sqrt{ \overline{\lam} _u \over \overline{\lam} _t}. \eqn\eeGiuCKMGUT $$ The matrix element $ \left| \overline{V}_{cb} \right| = \sin \overline{\theta} _3$ remains arbitrary unless additional constraints [\rrGiu] are imposed on the parameters of the texture \eeGiuTexture. The six GUT scale relations \eeGiuEigvalGUT, \eeGeomMeanGUT~and \eeGiuCKMGUT~are modified at low energies by RG~ running. Using eqs.~\eeCKMSoln, \eeEigvalSoln, and \eeEtaDef, we obtain the relations $$ { m_e \over m_d} \approx {1\over 3} {A_e \eta_e \over A_d \eta_d} , \qquad\qquad {m_\mu \over m_s} \approx 3 { A_e \eta_\mu \over A_d \eta_s} , \qquad\qquad \left| V_{us} \right| \approx \sqrt{m_d \over m_s} , \eqn\eeGiuPhysOne $$ which are not affected by the running induced by third generation fermions, and the relations $$ \eqalignno{ {m_b \over m_\tau} & = {A_d \eta_b \over A_e \eta_\tau} { B_t B_b^3 \over B_{\tau} ^3} , & \eqname\eeGiuSUPhys \cr m_u & = { \eta_t \eta_u m_c^2 \over \eta_c^2 m_t } { B_t^3 B_b }, & \eqname\eeGeomMeanPhys \cr \left| V_{ub} \right| & \approx \sqrt{\eta_t m_u \over \eta_u m_t} \sqrt{ B_t \over B_b}, & \eqname\eeGiuVubPhys \cr } $$ which are. These relations reduce to those given in ref.~[\rrGiu] if we set $B_b = B_\tau = \eta_\tau = 1$. The relation between $m_t$ and $ \tan\beta$ shown in fig.~1 depends only on the SU(5) relation \eeGiuSUPhys, and therefore holds for the Giudice texture as well as for the Georgi-Jarlskog texture. For the same reason, eqs.~\eeGJBtop--\eeGJMtopApprox~also continue to hold. The up quark mass is determined by $\beta$ in the Giudice texture. The geometric mean relation \eeGeomMeanPhys~together with eq.~\eeGJBtop~yields $$ m_u = { \eta_u \eta_t m_c^2 k^3 \over \eta_c^2 m_t } \left( B_{\tau} ^9 \over B_b^8 \right) , \eqn\eeGiuMup $$ which we plot as a function of $ \tan\beta$ in fig.~4. Expanding eq.~\eeGiuMup~in terms of $ \overline{\lam} _b$, we obtain $$ m_u = { 2.5 {\rm~MeV} \over \sin \beta } [ 1 + (1.7 \times 10^{-4}) \sec^2 \beta + \ldots], \eqn\eeGiuMupApprox $$ implying a lower bound on the up quark mass, $m_u \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 2.5 {\rm~MeV} $. We plot the relation between $m_u$ and $m_t$ in fig.~5. Using eqs.~\eeGJMtopApprox~and \eeGiuMupApprox, we find that $$ { m_u\over 2.5 {\rm~MeV} } \approx { 185 {\rm~GeV} \over m_t } \eqn\eeGiuLowerBranch $$ for $ \tan\beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 8$, and $$ \left( {m_u\over 2.5 {\rm~MeV} } - 1\right) \approx 3.6 \left( 1 - {m_t\over 185 {\rm~GeV} } \right), \eqn\eeGiuUpperBranch $$ for $ \tan\beta \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 8$. The central value of the up quark mass given in ref.~[\rrGL], $m_u = 5.1 \pm 1.5 {\rm~MeV} $, could correspond either to a small value of $ \tan\beta$ ($\sim 0.56$) with $m_t \sim 90 {\rm~GeV} $ [\rrGiu], or to a much larger value of $ \tan\beta $ ($\sim 50 $) with $m_t \sim 130 {\rm~GeV} $ [\rrBBO]. 
An up quark in the range $3.6 {\rm~MeV} < m_u < 6.6 {\rm~MeV} $ requires either $ 0.4 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \tan\beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 1.0 $ with $ 70 {\rm~GeV} \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} m_t \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 130 {\rm~GeV} $, or $ 50 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \tan\beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 60 $ with $ 100 {\rm~GeV} \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} m_t \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 160 {\rm~GeV} $. The Giudice texture also determines $ \left| V_{ub} \right| $ as a function of $\beta$. {}From eqs.~\eeGJBtop, \eeGeomMeanPhys, and \eeGiuVubPhys, we have $$ \left| V_{ub} \right| \approx { \eta_t m_c k^2 \over \eta_c m_t} { B_{\tau} ^6 \over B_b^6}, \eqn\eeGiuVubPhystwo $$ which is plotted as a function of $ \tan\beta$ in fig.~6. Expanding eq.~\eeGiuVubPhystwo~in terms of $ \overline{\lam} _b$, we find $$ \left| V_{ub} \right| \approx { 0.0022 \over \sin \beta} [ 1 + (1.5 \times 10^{-4}) \sec^2 \beta + \ldots ], \eqn\eeGiuVubApprox $$ implying the lower bound $ \left| V_{ub} \right| \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 0.0022$. The rather weak constraint $ \left| V_{ub} \right| < 0.007$ requires $ \tan\beta \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 0.3 $. \section{The Fritzsch Texture } \REF\rrGN{ F.~Gilman and Y.~Nir, \sl Annu. Rev. Nucl. Part. Sci. \bf 40 \rm (1990) 213. } Finally, we consider the Fritzsch texture [\rrFri, \rrGN] for the quark Yukawa matrices, $$ Y_U = \pmatrix{ 0 & C & 0 \cr C & 0 & B \cr 0 & B & A \cr}, \qquad Y_D = \pmatrix{ 0 & F {\rm e} ^{i\phi_1} & 0 \cr F {\rm e} ^{-i\phi_1} & 0 & E {\rm e} ^{i\phi_2} \cr 0 & E {\rm e} ^{-i\phi_2} & D \cr}, \eqn\eeFriTexture $$ at the GUT scale. There is no ansatz for the lepton Yukawa matrix. The matrices \eeFriTexture~are diagonalized by $$ \eqalign{ {L_U} & = \pmatrix{ 1 & 0 & 0 \cr 0 & \bc^\prime_3 & - \bs^\prime_3 \cr 0 & \bs^\prime_3 & \bc^\prime_3 \cr} \pmatrix{ \bc_2 & - \bs_2 & 0 \cr \bs_2 & \bc_2 & 0 \cr 0 & 0 & 1 \cr}, \crr {L_D} & = \pmatrix{ {\rm e} ^{i\phi_1} & 0 & 0 \cr 0 & 1 & 0 \cr 0 & 0 & {\rm e} ^{-i\phi_2} \cr} \pmatrix{ 1 & 0 & 0 \cr 0 & \bc^{\prime\prime}_3 & - \bs^{\prime\prime}_3 \cr 0 & \bs^{\prime\prime}_3 & \bc^{\prime\prime}_3 \cr} \pmatrix{ \bc_1 & - \bs_1 & 0 \cr \bs_1 & \bc_1 & 0 \cr 0 & 0 & 1 \cr}, \cr } \eqn\eeFriUnit $$ with $$ \eqalign{ \tan^2 \overline{\theta} _1 & = { \overline{\lam} _d \over \overline{\lam} _s} , \qquad \qquad \tan^2 \overline{\theta} ^{\prime\prime}_3 = { \overline{\lam} _s - \overline{\lam} _d \over \overline{\lam} _b} \approx { \overline{\lam} _s \over \overline{\lam} _b} , \crr \tan^2 \overline{\theta} _2 & = { \overline{\lam} _u \over \overline{\lam} _c} , \qquad\qquad \tan^2 \overline{\theta} ^\prime_3 = { \overline{\lam} _c - \overline{\lam} _u \over \overline{\lam} _t} \approx { \overline{\lam} _c \over \overline{\lam} _t} . 
\cr } \eqn\eeFriAngles $$ When the mixing angles are small, the unitary matrices \eeFriUnit~lead to a CKM matrix whose elements are given by $$ \left| \overline{V}_{us} \right| \approx \left| \sqrt{ \overline{\lam} _d \over \overline{\lam} _s} - {\rm e} ^{-i\phi_1} \sqrt{ \overline{\lam} _u \over \overline{\lam} _c} \right|, \qquad\qquad { \left| \overline{V}_{ub} \right| \over \left| \overline{V}_{cb} \right| } \approx \sqrt{ \overline{\lam} _u \over \overline{\lam} _c} , \qquad\qquad \left| \overline{V}_{cb} \right| \approx \left| \sqrt{ \overline{\lam} _s \over \overline{\lam} _b } - {\rm e} ^{-i\phi_2} \sqrt{ \overline{\lam} _c \over \overline{\lam} _t } \right| \eqn\eeFriCKMGUT $$ at the GUT scale. There are no relations among the Yukawa couplings. We scale down the relations \eeFriCKMGUT~using eqs.~\eeCKMSoln~and \eeEigvalSoln. The relations involving only the first and second generation Yukawa couplings $$ \left| V_{us} \right| \approx \left| \sqrt{m_d \over m_s} - {\rm e} ^{-i\phi_1} \sqrt{\eta_c m_u \over \eta_u m_c } \right|, \qquad\qquad { \left| V_{ub} \right| \over \left| V_{cb} \right| } \approx \sqrt{\eta_c m_u \over \eta_u m_c } , \eqn\eeFriCKMPhys $$ are the same as in the Georgi-Jarlskog texture, and are not affected by the running induced by large Yukawa couplings. The relation for $ \left| V_{cb} \right| $, however, is given by $$ \left| V_{cb} \right| \approx \left| \sqrt{ B_b \over B_t} \sqrt{\eta_b m_s \over \eta_s m_b } - {\rm e} ^{-i\phi_2} \sqrt{ B_t \over B_b} \sqrt{\eta_t m_c \over \eta_c m_t } \right| \eqn\eeFriVcbPhys $$ in the Fritzsch texture. The relation \eeFriVcbPhys~yields a connection between the top quark mass and $ \left| V_{cb} \right| $. The small experimental value for $ \left| V_{cb} \right| $ requires a large amount of cancellation between the two terms in eq.~\eeFriVcbPhys, which seems to imply both $\phi_2 \approx 0$ and a relatively light top quark [\rrGN]. If we neglect the running induced by Yukawa couplings and require $m_s > 120 {\rm~MeV} $, $m_b < 4.35 {\rm~GeV} $ and $m_c < 1.32 {\rm~GeV} $ [\rrGL], we obtain the upper bound for the top quark mass $$ \left| V_{cb} \right| < 0.058 \quad \Rightarrow \quad m_t < 110 {\rm~GeV} , \qquad\qquad {\rm if~} B_t= B_b=1. \eqn\eeFriTopBound $$ The inclusion of Yukawa coupling-induced running implies an even lower upper bound for $m_t$ for small or moderate values of $ \tan\beta$, since typically $B_t < B_b$. If $ \tan\beta$ is very large, however, then it is possible that $ B_b < B_t$, loosening the bound on the top quark, as Babu and Shafi have pointed out [\rrBS]. They demonstrated this numerically for a specific value of $ \tan\beta$. We will derive a modified bound for $m_t$ using the analytic approximation \eeBSolntwo. In this approximation, the top quark mass is given by $$ m_t \approx {v A_u \eta_t \over\sqrt{2 K_u} } B_b \sqrt{ 1 - B_t^{12} } \qquad \qquad {\rm for~} \tan\beta \gg 1. \eqn\eeMtopDeftwo $$ The largest amount of cancellation will occur in eq.~\eeFriVcbPhys~when $ B_b/ B_t$ is minimized. Holding $m_t$ fixed in eq.~\eeMtopDeftwo, we find that $ B_b/ B_t$ is minimized when $ B_t = \left( 1\over 7 \right)^{1/12} \approx 0.85 $, and therefore $$ \left( B_b \over B_t \right)_{\rm min} \approx { m_t \over 150 {\rm~GeV} }. 
\eqn\eeFriBbBt $$ Substituting this into eq.~\eeFriVcbPhys, we obtain $$ \left| V_{cb} \right| < 0.058 \quad \Rightarrow \quad m_t \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 140 {\rm~GeV} , \eqn\eeFriTopBoundtwo $$ a considerably higher upper bound than eq.~\eeFriTopBound. This high value of the top quark mass requires a large value of $ \tan\beta$. {}From eq.~\eeMbottomDef, and using the approximation \eeBSolntwo, we have $$ \sec \beta \approx {v A_d \eta_b \over m_b \sqrt{2 K_d}} B_t B_{\tau} \sqrt{ 1 - B_b^{12} }, \eqn\eeFriSecant $$ so that $$ m_t = 140 {\rm~GeV} \quad \Rightarrow \quad \sec \beta \sim 50 B_{\tau} . \eqn\eeFriTangent $$ In the Fritzsch texture, $ \overline{\lam} _\tau$ is unrelated to $ \overline{\lam} _b$ so $ B_{\tau} $ is undetermined, but $0.8 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} B_{\tau} \le 1$ for $0 \le \overline{\lam} _\tau < 2$. \chapter{Conclusions} Relations among fermion masses and mixing angles at the scale of grand unification are modified at lower energies by renormalization group running induced by gauge and Yukawa couplings. In supersymmetric theories, the $b$ quark and $\tau$ lepton Yukawa couplings, as well as the $t$ quark coupling, may cause significant running if $ \tan\beta$ is large. In this paper, we have analyzed the running of fermion masses and mixing angles caused by the entire third generation of Yukawa couplings. We made several approximations along the way. First, we assumed a hierarchy of masses between generations. In this approximation, we derived explicit RG~ equations for the mixing angles \eeAngleRGE. Next, we assumed that the mixing angles in the RG~ equations were small; the error from this approximation was shown to be less than 0.1\%. Finally, we made the approximation that the RG~ equations \eeBRGE~for the scaling factors $B_\alpha$ decoupled from one another. This allowed us to obtain the analytic expressions \eeBSolnone~for these scaling factors, which differ from the exact values by no more than 1 or 2\%. We then used the approximate analytic expressions for the scaling factors to determine how running induced by the third generation of fermions affects the predictions arising from the GUT scale Yukawa matrices. We summarize here the results for each of the three textures considered in this paper, using the input data \eeMassNumer~and \eeEtaNumer. It is easy to determine the effect of choosing different values for the input parameters because our results are analytic. The Georgi-Jarlskog texture incorporates the SU(5) relation \eeGJSUPhys, which implies the relation between $m_t$ and $ \tan\beta$ displayed in fig.~1 and given approximately by \eeGJMtopApprox. The top quark mass is bounded above by $\sim 185 {\rm~GeV} $, the bound being saturated for $ \tan\beta \sim 10$. This texture also implies the HRR relation \eeGJVcbPhys; this leads to the relation between $ \left| V_{cb} \right| $ and $ \tan\beta$ shown in fig.~2 and given approximately by eq.~\eeGJVcbApprox. Consequently, $ \left| V_{cb} \right| $ must be greater than $\sim 0.053$. Requiring $ \left| V_{cb} \right| \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 0.058$ implies that $ 1.5 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \tan\beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 40$, and therefore $ m_t \mathrel{\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}}} 155 {\rm~GeV} $. 
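For concreteness, the solid curves of figs.~1--3 summarized above are easy to retrace from the analytic expressions. The following is a minimal sketch (ours, not part of the original analysis): it scans the single GUT-scale parameter $ \overline{\lam} _b = \overline{\lam} _\tau$ and evaluates eqs.~\eeBSolntwo, \eeGJBtop, \eeGJBeta, \eeGJMtop, and \eeGJVcbPhystwo, taking the inputs from eqs.~\eeANumer, \eeINumer, \eeMassNumer, and \eeEtaNumer; the electroweak vacuum expectation value $v = 246 {\rm~GeV} $ and $\eta_t = 1$ are assumed.

```python
# Minimal sketch: trace m_t and |V_cb| versus tan(beta) for the Georgi-Jarlskog
# texture by scanning the GUT-scale coupling lambda_b, using the approximate
# solutions (eeBSolntwo) and eqs. (eeGJBtop), (eeGJBeta), (eeGJMtop), (eeGJVcbPhystwo).
# v = 246 GeV and eta_t = 1 are assumed; the other inputs are quoted in the text.
import numpy as np

A_u, A_d, A_e = 3.21, 3.13, 1.48               # eq. (eeANumer)
K_u, K_d, K_e = 8.65, 8.33, 3.77               # eq. (eeINumer)
eta_b, eta_tau, eta_c, eta_t = 1.47, 1.02, 1.89, 1.0
m_tau, m_b, m_c, v = 1.7841, 4.25, 1.27, 246.0
k = A_e * eta_tau * m_b / (A_d * eta_b * m_tau)          # ~ 0.78, eq. (eeGJBtop)

for lam_b in np.linspace(1e-3, 0.6, 200):                # GUT-scale lambda_b = lambda_tau
    B_b  = (1.0 + lam_b ** 2 * K_d) ** (-1.0 / 12.0)
    B_ta = (1.0 + lam_b ** 2 * K_e) ** (-1.0 / 12.0)
    sec_beta = v * A_e * eta_tau / (np.sqrt(2.0) * m_tau) * lam_b * B_b ** 3 * B_ta ** 4
    if sec_beta <= 1.0:                                  # need cos(beta) <= 1
        continue
    beta = np.arccos(1.0 / sec_beta)
    m_t = (v * A_u * eta_t / np.sqrt(2.0 * K_u) * B_b
           * np.sqrt(1.0 - k ** 12 * (B_ta / B_b) ** 36) * np.sin(beta))
    V_cb = (np.sqrt(A_e * eta_tau * eta_t * m_b * m_c
                    / (A_d * eta_b * eta_c * m_tau * m_t))
            * np.sqrt(B_ta ** 3 / B_b ** 4))
    print(f"tan(beta) = {np.tan(beta):6.1f}   m_t = {m_t:5.0f} GeV   |V_cb| = {V_cb:.3f}")
```

The scan reproduces the behavior described above: $m_t$ rises to a maximum near $185 {\rm~GeV} $ around $ \tan\beta \sim 10$ and then decreases, while $ \left| V_{cb} \right| $ stays above $\sim 0.053$ and grows for large $ \tan\beta$.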
The Giudice texture also incorporates the SU(5) relation and therefore the relation between $m_t$ and $ \tan\beta$ shown in fig.~1. This texture also implies the geometric mean relation \eeGeomMeanPhys, which leads to the relation between $m_u$ and $ \tan\beta$ displayed in fig.~4 and given approximately by eq.~\eeGiuMupApprox. The up quark mass is bounded below by $\sim 2.5 {\rm~MeV} $. If the up quark lies in the range $3.6 {\rm~MeV} < m_u < 6.6 {\rm~MeV} $ [\rrGL], then the Giudice texture implies that either $ 0.4 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \tan\beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 1.0 $ with $ 70 {\rm~GeV} \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} m_t \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 130 {\rm~GeV} $, or $ 50 \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} \tan\beta \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 60 $ with $ 100 {\rm~GeV} \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} m_t \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 160 {\rm~GeV} $. Finally, the Giudice texture implies the relation between $ \left| V_{ub} \right| $ and $ \tan\beta$ shown in fig.~6 and given approximately by eq.~\eeGiuVubApprox. The Fritzsch texture relates $ \left| V_{cb} \right| $ to $m_t$ through eq.~\eeFriVcbPhys. For small or moderate values of $ \tan\beta$, this relation leads to an upper bound $m_t \mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 110 {\rm~GeV} $, but for large $ \tan\beta$ ($\sim 50$), the top quark could be as heavy as $140 {\rm~GeV} $. All of these textures seem able to accommodate a top quark mass in the range preferred by the analysis of electroweak radiative corrections, at least for some value of $ \tan\beta$. Which one, or whether any of them, accurately describes reality remains an open question. There also remains the more fundamental question: what underlying mechanism determines the form of the Yukawa matrices at the GUT scale? \FIG\fig{ The relation between $m_t$ and $ \tan\beta$ implied by the SU(5) relation \eeGJSUPhys. Solid line: approximate analytic solution. Dashed line: numerical solution. Dotted line: $b$ and $\tau$ induced running neglected. } \FIG\fig{ The relation between $ \left| V_{cb} \right| $ and $ \tan\beta$ implied by the Georgi-Jarlskog texture. Solid line: approximate analytic solution. Dashed line: numerical solution. Dotted line: $b$ and $\tau$ induced running neglected. } \FIG\fig{ The relation between $ \left| V_{cb} \right| $ and $m_t$ implied by the Georgi-Jarlskog texture. Solid line: approximate analytic solution. Dashed line: numerical solution. } \FIG\fig{ The relation between $m_u$ and $ \tan\beta$ implied by the Giudice texture. Solid line: approximate analytic solution. Dashed line: numerical solution. Dotted line: $b$ and $\tau$ induced running neglected. } \FIG\fig{ The relation between $m_u$ and $m_t$ implied by the Giudice texture. Solid line: approximate analytic solution. Dashed line: numerical solution. } \FIG\fig{ The relation between $ \left| V_{ub} \right| $ and $ \tan\beta$ implied by the Giudice texture. Solid line: approximate analytic solution. Dashed line: numerical solution. Dotted line: $b$ and $\tau$ induced running neglected. } \endpage \refout \figout \end
\section{Introduction} Multi-layer cellular neural networks (MCNNs) are large aggregates of analogue circuits presenting themselves as arrays of identical cells which are locally coupled. MCNNs have been widely applied in studying the signal propagation between neurons, and in image processing, pattern recognition and information technology \cite{ABFM-ITCSIFTA1998,CR-2002,CY-ITCS1988a,CC-ITCS1995,CRC-ITCS1993,Li-NC2009,Mur-IJCM2010,PZL-FI2009,XYSV-ITCSIRP2004,YNU-ITF2002}. A one-dimensional MCNN is realized as \begin{equation}\label{eq-general-system} \frac{d x^{(\ell)}_i}{dt} = - x^{(\ell)}_i + \sum_{|k| \leq d} a^{(\ell)}_k y^{(\ell)}_{i+k} + \sum_{|k| \leq d} b^{(\ell)}_k u^{(\ell)}_{i+k} + z^{(\ell)}, \end{equation} for some $d \in \mathbb{N}, 1 \leq \ell \leq n \in \mathbb{N}, i \in \mathbb{Z}$, where \begin{equation} u^{(\ell)}_i = y^{(\ell-1)}_i \text{ for } 2 \leq \ell \leq n, \quad u^{(1)}_i = u_i, \quad x_i^{(\ell)}(0) = x_{i,0}^{(\ell)}, \end{equation} and \begin{equation} y=f(x) = \frac{1}{2} (|x+1|-|x-1|) \end{equation} is the output function. The stationary solutions $\bar{x} = (\bar{x}^{(\ell)}_i)$ of (\ref{eq-general-system}) are essential for understanding the system, and their outputs $\bar{y}^{(\ell)}_i = f(\bar{x}^{(\ell)}_i)$ are called \emph{output patterns}. A \emph{mosaic solution} $(\bar{x}^{(\ell)}_i)$ is a stationary solution satisfying $|\bar{x}^{(\ell)}_i| > 1$ for all $i, \ell$ and the output of a mosaic solution is called a mosaic output pattern. Mosaic solutions are crucial for studying the complexity of \eqref{eq-general-system} due to their asymptotical stability \cite{CM-ITCSIFTaA1995,CMV-IJBCASE1996,CMV-RCD1996,CS-SJAM1995,CCTS-IJBCASE1996,GBC-ITCSIRP2004,IC-IJBCASE2004,JL-SJAM2000,KM-IJBCASE2010,LH-AMAS2003,LS-IJBCASE1999,Shi-SJAM2000}. In a MCNN system, the ``status'' of each cell is taken as an input for a cell in the next layer except for those cells in the last layer. The results that can be recorded are the outputs of the cells in the last layer. Since the phenomena that can be observed are only the output patterns of the $n$th layer, the $n$th layer of \eqref{eq-general-system} is called the \emph{output layer}, while the other $n-1$ layers are called \emph{hidden layers}. We remark that, aside from exhibiting key features of MCNNs, mosaic solutions are constrained by the so-called ``separation property'' (cf.~\cite{BC-IJBCASE2009, BC-NPL2014}). This makes the investigation more difficult. Furthermore, the output patterns of mosaic solutions of a MCNN can be treated as a cellular automaton. For the discussion of systems satisfying constraints and cellular automata, readers are referred to Wolfram's celebrated book \cite{Wol-2002}. (Constrained systems are discussed in chapter 5 therein.) Suppose $\mathbf{Y}$ is the solution space of a MCNN. For $\ell = 1, 2, \ldots, n$, let $$ \mathbf{Y}^{(\ell)} = \{\cdots y_{-1}^{(\ell)} y_0^{(\ell)} y_1^{(\ell)} \cdots\} $$ be the space which consists of patterns in the $\ell$th layer of $\mathbf{Y}$, and let $\phi^{(\ell)}: \mathbf{Y} \to \mathbf{Y}^{(\ell)}$ be the projection map. Then $\mathbf{Y}^{(n)}$ is called the \emph{output space} and $\mathbf{Y}^{(\ell)}$ is called the ($\ell$th) \emph{hidden space} for $\ell = 1, 2, \ldots, n-1$. It is natural to ask whether there exists a relation between $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$ for $1 \leq i \neq j \leq n$. 
Take $n = 2$ for instance; the existence of a map connecting $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$ that commutes with $\phi^{(1)}$ and $\phi^{(2)}$ means the \emph{decoupling} of the solution space $\mathbf{Y}$. More precisely, if there exists $\pi_{12}: \mathbf{Y}^{(1)} \to \mathbf{Y}^{(2)}$ such that $\pi_{12} \circ \phi^{(1)} = \phi^{(2)}$, then $\pi_{12}$ enables the investigation of structures between the output space and hidden space. A series of works has been devoted to this purpose. At the very beginning, Ban \emph{et al.}~\cite{BCLL-JDE2009} demonstrated that the output space $\mathbf{Y}^{(n)}$ is topologically conjugated to a one-dimensional sofic shift. This result differs from earlier research, which indicated that the output space of a $1$-layer CNN without input is topologically conjugated to a Markov shift (also known as a shift of finite type). Some unsolved open problems, either on the mathematical or on the engineering side, have drawn interest since then. An analogous argument asserts that every hidden space $\mathbf{Y}^{(\ell)}$ is also topologically conjugated to a sofic shift for $1 \leq \ell \leq n-1$, and the solution space $\mathbf{Y}$ is topologically conjugated to a subshift of finite type. More than that, the topological entropy and dynamical zeta function of $\mathbf{Y}^{(\ell)}$ and $\mathbf{Y}$ can be calculated explicitly. A novel phenomenon, the asymmetry of topological entropy, was also revealed therein. It is known that a nonempty insertive and extractive language $\mathcal{L}$ is regular if and only if $\mathcal{L}$ is the language of a sofic shift; namely, $\mathcal{L} \subseteq \bigcup_{i \geq 0} B_i(X)$ for some sofic shift $X$, where $$ B_i(X) = \{x_1 x_2 \cdots x_i: (x_k)_{k \in \mathbb{Z}} \in X\}. $$ Therefore, elucidating sofic shifts is equivalent to the investigation of regular languages. Readers are referred to \cite{BP-2011} and the references therein for more details about languages and sofic shifts. In the subsequent work \cite{BCL-JDE2012}, the classification of the hidden and output spaces is revealed for those spaces reaching the same topological entropy. Notably, the study of the existence of $\pi_{ij}: \mathbf{Y}^{(i)} \to \mathbf{Y}^{(j)}$ for some $i,j$ is equivalent to determining whether there is a map connecting two sofic shifts. In general, it is difficult to demonstrate the existence of such maps. The authors have provided a systematic strategy for determining whether there exists a map between $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$. More than that, the explicit expression of $\pi_{ij}$ is unveiled whenever there is a factor-like matrix $E$ (defined later). The present paper, as a continuation of \cite{BCL-JDE2012, BCLL-JDE2009}, is devoted to investigating the Hausdorff dimension of the output and hidden spaces. We emphasize that, in this elucidation, those spaces need not attain the same topological entropy. In addition to examining the existence of maps between $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$ (for the case where the topological entropies of the two spaces are distinct), the complexity of the geometrical structure is discussed. The Hausdorff dimension of a given space reveals its geometrical structure and helps to describe its complexity; this is the target of the present study. Furthermore, aside from the existence of factor maps between $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$, the correspondence of the Hausdorff dimensions is of interest to this study. 
Suppose there exists a factor map $\pi_{ij}: \mathbf{Y}^{(i)} \to \mathbf{Y}^{(j)}$; then the Hausdorff dimensions of $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$ are related under some additional conditions (see Theorems \ref{main-thm-FSE} and \ref{main-thm-ITO}). More explicitly, it is now known that in many examples the calculation of the Hausdorff dimension of a set is closely related to the maximal measures (defined later) of its corresponding symbolic dynamical system (cf.~\cite[Theorem 13.1]{P-1997} for instance). Theorems \ref{main-thm-FSE} and \ref{main-thm-ITO} also indicate that the Hausdorff dimension of $\mathbf{Y}^{(j)}$ is the quotient of the measure-theoretic entropy $h_{\pi_{ij} \nu^{(i)}}(\mathbf{Y}^{(j)})$ and the metric of $\mathbf{Y}^{(j)}$, where $\nu^{(i)}$ is the maximal measure of $\mathbf{Y}^{(i)}$. Notably, such a result relies on whether the push-forward measure $\pi_{ij} \nu^{(i)}$ of $\nu^{(i)}$ under the factor map $\pi_{ij}$ remains a maximal measure. We propose a methodology so that all the conditions are checkable, and the Hausdorff dimension $\dim \mathbf{Y}^{(\ell)}$ can be formulated accurately for $1 \leq \ell \leq n$. \begin{figure} \begin{center} \includegraphics[scale=0.7]{551428_2d} \end{center} \caption{The fractal sets of the hidden and output spaces of a MCNN with templates $[a^{(1)}, a_r^{(1)}, z^{(1)}] = [2.9, 1.7, 0.1]$ and $[a^{(2)}, a_r^{(2)}, b^{(2)}, b_r^{(2)}, z^{(2)}] = [-0.3, -1.2, 0.7, 2.3, 0.9]$. Each fractal set is a subspace of $[0, 1] \times [0, 1]$, and is derived from the expansion $\Phi^{(i)}(x) = (\Sigma_{k \geq 0} \frac{x_k}{m_i^{k+1}}, \Sigma_{k \leq 0} \frac{x_k}{m_i^{|k|+1}})$ for $x \in \mathbf{Y}^{(i)}$, $m_i = |\mathcal{A}(\mathbf{Y}^{(i)})|$, and $i = 1, 2$. It is seen that the expansion map $\Phi^{(i)}: \mathbf{Y}^{(i)} \to [0, 1] \times [0, 1]$ is one-to-one almost everywhere, and hence does not affect the discussion of the Hausdorff dimension. The figures are obtained from $9$ iterations based on the basic set of admissible local patterns. See Example \ref{eg-Y1Y2-SFT} for more discussion.} \label{fig-551429} \end{figure} Figure \ref{fig-551429} illustrates the fractal sets of the hidden and output spaces (namely, $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$) of a two-layer CNN. It is seen that $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$ are two entirely different spaces. Aside from calculating the Hausdorff dimension of these spaces, it is interesting to investigate whether there is a map connecting $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$, and how $\dim \mathbf{Y}^{(1)}$ is related to $\dim \mathbf{Y}^{(2)}$. See Example \ref{eg-Y1Y2-SFT} for more details. Meanwhile, we want to mention some further issues that are related to our elucidation and have attracted widespread attention recently. One of them is the investigation of the so-called \emph{sofic measure} or \emph{hidden Markov measure}. Let $\mu$ be a Markov measure on $\mathbf{Y}$. The push-forward measure $\phi^{(\ell)} \mu$, defined by $(\phi^{(\ell)} \mu) (\mathcal{O}) = \mu ((\phi^{(\ell)})^{-1} \mathcal{O})$ for every Borel set $\mathcal{O}$ in $\mathbf{Y}^{(\ell)}$, is called a sofic measure or a hidden Markov measure. Many papers about sofic measures have been written in the past decades. A central question is under what conditions the push-forward measure of a Markov measure remains a Markov measure. To be more specific, we are interested in which properties a sofic measure satisfies. 
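To see the issue in the simplest possible setting, the following sketch (ours, not taken from the works cited above) pushes a Markov measure forward through a one-block factor map and compares the resulting measure of $3$-blocks with what the Markov property would predict; the stochastic matrix and the letter map are arbitrary choices made only for illustration.

```python
# Minimal sketch: push a Markov measure through a one-block factor map and test
# whether the image (hidden Markov / sofic) measure is still Markov on 3-blocks.
# The chain (p, P) and the letter map are arbitrary illustrative choices.
import itertools
import numpy as np

P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])            # irreducible stochastic matrix on {0, 1, 2}
eigvals, eigvecs = np.linalg.eig(P.T)
p = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
p = p / p.sum()                            # stationary vector, p P = p

letter = {0: "a", 1: "a", 2: "b"}          # one-block factor map onto {a, b}

def mu(word):
    """Markov measure of the cylinder [word] in the domain shift."""
    value = p[word[0]]
    for x, y in zip(word, word[1:]):
        value *= P[x, y]
    return value

def nu(image_word):
    """Push-forward (sofic) measure: sum mu over all preimage words."""
    return sum(mu(w) for w in itertools.product(range(3), repeat=len(image_word))
               if all(letter[s] == t for s, t in zip(w, image_word)))

for block in itertools.product("ab", repeat=3):
    word = "".join(block)
    markov_guess = nu(word[:2]) * nu(word[1:]) / nu(word[1])
    print(word, round(nu(word), 4), round(markov_guess, 4))
```

The two columns generically disagree, which illustrates that the push-forward of a Markov measure need not be Markov; characterizing when it is Markov is precisely the question addressed in the sequel.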
This elucidation focuses on the study of the measures on the hidden/output space. Recalling that the hidden/output space is a factor of the solution space, it follows that the investigation of the measures on the hidden/output space is equivalent to the investigation of sofic measures. We propose a methodology to verify when a sofic measure reduces to a Markov measure. In this case, the explicit form of a maximal measure and the Hausdorff dimension of the hidden/output space are formulated. For more discussion of the hidden Markov measures, the reader is referred to \cite{BCC-2012,BP-2011,KT-MAMS1985} and the references therein. It is known that the tiling problem is undecidable. As an application, it is of interest to investigate the decidability of the language of a sofic shift which can be realized as a hidden or output space of a MCNN. The related work is in preparation. The rest of this investigation is organized as follows. A brief recall of \cite{BCL-JDE2012, BCLL-JDE2009} and some definitions and notations are given in Section 2. The main theorems (Theorems \ref{main-thm-FSE} and \ref{main-thm-ITO}) for $2$-layer CNNs are also stated therein. Section 3 analyzes the existence of factor maps that connect two spaces and the hidden Markov measures. The proofs of the main theorems are presented there. Some examples are given in Section 4. We generalize Theorems \ref{main-thm-FSE} and \ref{main-thm-ITO} to general MCNNs in Section 5. Figure \ref{fig-flow-chart} provides the flow chart of the present investigation. Section 6 is devoted to the conclusion and further problems. \section{Main Results and Preliminaries} Since this paper is a continuation of \cite{BCL-JDE2012}, the upcoming section gives a brief review of \cite{BCL-JDE2012} and states the main results of our study. To keep the present investigation self-contained, we recall some definitions and known results for symbolic dynamical systems and MCNNs. The reader is referred to \cite{BCL-JDE2012, BCLL-JDE2009, LM-1995} and the references therein for more details. \subsection{Multi-layer Cellular Neural Networks} Since an elucidation of two-layer CNNs is essential for the study of MCNNs, we use the term MCNN to refer to a two-layer CNN and focus on this case in the rest of this paper unless otherwise stated. A two-layer cellular neural network is realized as \begin{equation}\label{eq-2layer-mcnn} \left\{ \begin{split} \frac{d x_i^{(1)}}{dt} &= - x_i^{(1)} + \sum_{|k| \leq d} a^{(1)}_k y_{i+k}^{(1)} + \sum_{|\ell| \leq d} b^{(1)}_{\ell} u_{i+\ell}^{(1)} + z^{(1)}, \\ \frac{d x_i^{(2)}}{dt} &= - x_i^{(2)} + \sum_{|k| \leq d} a^{(2)}_k y_{i+k}^{(2)} + \sum_{|\ell| \leq d} b^{(2)}_{\ell} u_{i+\ell}^{(2)} + z^{(2)}, \end{split} \right. \end{equation} for some $d \in \mathbb{N}$, and $u_{i}^{(2)} = y_{i}^{(1)}$ for $i \in \mathbb{Z}$; $\mathbb{N}$ denotes the positive integers and $\mathbb{Z}$ denotes the integers. The prototype of \eqref{eq-2layer-mcnn} is $$ \frac{d x_i}{dt} = -x_i + \sum_{|k| \leq d} a_k y_{i+k} + \sum_{|\ell| \leq d} b_{\ell} u_{i+\ell} + z. $$ Here $A = [a_{-d}, \cdots, a_d], B = [b_{-d}, \cdots, b_d]$ are the \emph{feedback} and \emph{controlling templates}, respectively. $z$ is the \emph{threshold}, and $y_i = f(x_i) = \frac{1}{2} (|x_i+1| - |x_i-1|)$ is the output of $x_i$. The quantity $x_i$ represents the state of the cell at $i$ for $i \in \mathbb{Z}$. The output of a stationary solution $\bar{x} = (\bar{x}_i)_{i \in \mathbb{Z}}$ is called an output pattern. 
A \emph{mosaic solution} $\bar{x}$ satisfies $|\bar{x}_i| > 1$ and its corresponding pattern $\bar{y}$ is called a \emph{mosaic output pattern}. Consider a mosaic solution $\bar{x}$; the necessary and sufficient condition for state ``$+$'' at cell $C_i$, i.e., $\bar{y}_i = 1$, is \begin{equation}\label{eq-cnn-state+} a - 1 + z > -(\sum_{0 < |k| \leq d} a_k \bar{y}_{i+k} + \sum_{|\ell| \leq d} b_{\ell} u_{i+\ell}), \end{equation} where $a = a_0$. Similarly, the necessary and sufficient condition for state ``$-$'' at cell $C_i$, i.e., $\bar{y}_i = -1$, is \begin{equation}\label{eq-cnn-state-} a - 1 - z > \sum_{0 < |k| \leq d} a_k \bar{y}_{i+k} + \sum_{|\ell| \leq d} b_{\ell} u_{i+\ell}. \end{equation} For simplicity, denoting $\bar{y}_i$ by $y_i$ and rewriting the output patterns $y_{-d} \cdots y_0 \cdots y_d$ coupled with input $u_{-d} \cdots u_0 \cdots u_d$ as \begin{equation} \boxed{y_{-d} \cdots y_{-1} y_0 y_1 \cdots y_d \atop \displaystyle u_{-d} \cdots u_{-1} u_0 u_1 \cdots u_d} \equiv y_{-d} \cdots y_d \diamond u_{-d} \cdots u_d \in \{-1, 1\}^{\mathbb{Z}_{(2d+1)\times 2}}. \end{equation} Let $$ V^n = \{ v \in \mathbb{R}^n : v = (v_1, v_2, \cdots, v_n), \text{ and } |v_i| = 1, 1 \leq i \leq n \}, $$ where $n = 4d+1$. Then \eqref{eq-cnn-state+} and \eqref{eq-cnn-state-} can be rewritten in a compact form by introducing the following notation. Denote $\alpha = (a_{-d}, \cdots, a_{-1}, a_1, \cdots, a_d)$, $\beta = (b_{-d}, \cdots, b_d)$. Then, $\alpha$ can be used to represent $A'$, the surrounding template of $A$ without center, and $\beta$ can be used to represent the template $B$. The basic set of admissible local patterns with ``$+$'' state in the center is defined as \begin{equation} \mathcal{B}(+, A, B, z) = \{v \diamond w \in V^n : a - 1 + z > -(\alpha \cdot v + \beta \cdot w) \}, \end{equation} where ``$\cdot$'' is the inner product in Euclidean space. Similarly, the basic set of admissible local patterns with ``$-$'' state in the center is defined as \begin{equation} \mathcal{B}(-, A, B, z) = \{v \diamond w \in V^n : a - 1 - z > \alpha \cdot v + \beta \cdot w \}. \end{equation} Furthermore, the admissible local patterns induced by $(A, B, z)$ can be denoted by \begin{equation} \mathcal{B}(A, B, z) = (\mathcal{B}(+, A, B, z), \mathcal{B}(-, A, B, z)). \end{equation} It is shown that the parameter space can be partitioned into finitely many equivalent subregions, that is, two sets of parameters induce identical basic sets of admissible local patterns if they belong to the same partition in the parameter space. Moreover, the parameter space of a MCNN is also partitioned into finitely many equivalent subregions \cite{BCLL-JDE2009}. Suppose a partition of the parameter space is determined, that is, the templates $$ A^{(\ell)} = [a^{(\ell)}_{-d}, \cdots, a^{(\ell)}_{d}], \quad B^{(\ell)} = [b^{(\ell)}_{-d}, \cdots, b^{(\ell)}_{d}], \quad z^{(\ell)} \qquad \ell = 1, 2 $$ are given. A stationary solution $ \mathbf{x} = \mathbf{x}^{(2)} \diamond \mathbf{x}^{(1)} = \begin{pmatrix} x^{(2)}_i \\ x^{(1)}_i \\ \end{pmatrix}_{i \in \mathbb{Z}} $ is called mosaic if $|x^{(\ell)}_i| > 1$ for $\ell = 1, 2$ and $i \in \mathbb{Z}$. The output $ \mathbf{y} = \mathbf{y}^{(2)} \diamond \mathbf{y}^{(1)} = \begin{pmatrix} y^{(2)}_i \\ y^{(1)}_i \\ \end{pmatrix}_{i \in \mathbb{Z}} $ of a mosaic solution $\mathbf{x}$ is called a mosaic pattern. 
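To make the construction of $\mathcal{B}(A, B, z)$ concrete, the following sketch (ours, and not taken from \cite{BCLL-JDE2009}) enumerates the admissible local patterns of a one-layer CNN with $d = 1$ directly from \eqref{eq-cnn-state+} and \eqref{eq-cnn-state-}; the template values are arbitrary choices made only for illustration.

```python
# Minimal sketch: enumerate B(+, A, B, z) and B(-, A, B, z) for a one-layer CNN
# with d = 1.  The template values are arbitrary illustrative choices.
from itertools import product

a_m1, a, a_p1 = 1.7, 2.9, 0.0        # feedback template A = [a_{-1}, a, a_1] (assumed)
b_m1, b_0, b_p1 = 0.0, 1.0, 0.0      # controlling template B = [b_{-1}, b_0, b_1] (assumed)
z = 0.1                              # threshold (assumed)

alpha = (a_m1, a_p1)                 # surrounding template A' (center removed)
beta = (b_m1, b_0, b_p1)

B_plus, B_minus = [], []
for v in product((-1, 1), repeat=2):            # outputs of the two neighbouring cells
    for w in product((-1, 1), repeat=3):        # inputs u_{i-1}, u_i, u_{i+1}
        coupling = (sum(s * t for s, t in zip(alpha, v))
                    + sum(s * t for s, t in zip(beta, w)))
        if a - 1 + z > -coupling:               # centre cell may rest in state "+"
            B_plus.append((v, w))
        if a - 1 - z > coupling:                # centre cell may rest in state "-"
            B_minus.append((v, w))

print(len(B_plus), "admissible local patterns with '+' centre")
print(len(B_minus), "admissible local patterns with '-' centre")
```

Roughly speaking, for the two-layer system \eqref{eq-2layer-mcnn} the same enumeration is performed for each layer, with the outputs of the first layer playing the role of the inputs of the second; the basic set $\mathcal{B}$ referred to below is assembled from these per-layer patterns.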
Suppose $\mathcal{B}$ is the basic set of admissible local patterns of a MCNN. Since \eqref{eq-2layer-mcnn} is spatially homogeneous, that is, the templates of \eqref{eq-2layer-mcnn} are the same for each cell, the \emph{solution space} $\mathbf{Y} \subseteq \{-1, 1\}^{\mathbb{Z}_{\infty \times 2}}$ is determined by $\mathcal{B}$ as $$ \mathbf{Y}= \left\{\mathbf{y}^{(2)} \diamond \mathbf{y}^{(1)}: y^{(2)}_{i-d} \cdots y^{(2)}_i \cdots y^{(2)}_{i+d} \diamond y^{(1)}_{i-d} \cdots y^{(1)}_i \cdots y^{(1)}_{i+d} \in \mathcal{B} \text{ for } i \in \mathbb{Z}\right\}. $$ Moreover, the \emph{output space} $\mathbf{Y}^{(2)}$ and the \emph{hidden space} $\mathbf{Y}^{(1)}$ are defined by $$ \mathbf{Y}^{(2)} = \left\{\mathbf{y} \in \{-1, 1\}^{\mathbb{Z}} : \mathbf{y} \diamond \mathbf{u} \in \mathbf{Y} \text{ for some } \mathbf{u}\right\} $$ and $$ \mathbf{Y}^{(1)} = \left\{\mathbf{u} \in \{-1, 1\}^{\mathbb{Z}} : \mathbf{y} \diamond \mathbf{u} \in \mathbf{Y} \text{ for some } \mathbf{y}\right\} $$ respectively. In \cite{BCL-JDE2012, BCLL-JDE2009}, the authors demonstrated that $\mathbf{Y}$ is a \emph{shift of finite type} (SFT) and $\mathbf{Y}^{(1)}, \mathbf{Y}^{(2)}$ are both sofic shifts. In general, for $i = 1, 2$, the factor $\phi^{(i)}: \mathbf{Y} \to \mathbf{Y}^{(i)}$ is not even finite-to-one. Furthermore, for $i = 1, 2$, there is a \emph{covering space} $W^{(i)}$ of $\mathbf{Y}^{(i)}$ and a finite-to-one factor $\phi^{(i)}: W^{(i)} \to \mathbf{Y}^{(i)}$ with $W^{(i)}$ being a SFT. (We abuse notation and write $\phi^{(i)}$ rather than $\widehat{\phi}^{(i)}$ for this finite-to-one factor.) For a topological space $Y$, we say that $X$ is a covering space of $Y$ if there exists a continuous onto map $\phi: X \to Y$ which is locally homeomorphic. A quantity that describes the complexity of a system is \emph{topological entropy}. Suppose $X$ is a shift space. Denote by $\Gamma_k(X)$ the cardinality of the collection of words of length $k$. The topological entropy of $X$ is then defined by $$ h(X) = \lim_{k \to \infty} \frac{\log \Gamma_k(X)}{k}. $$ Whenever the hidden space $\mathbf{Y}^{(1)}$ and the output space $\mathbf{Y}^{(2)}$ reach the same topological entropy, $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$ are \emph{finite shift equivalent} (FSE) \cite{BCL-JDE2012}. Here two spaces $X$ and $Y$ are FSE if there is a triple $(Z, \phi_X, \phi_Y)$ such that $Z$ is a SFT and $\phi_X: Z \to X, \phi_Y: Z \to Y$ are both finite-to-one factors. Ban et al.\ \cite{BCL-JDE2012} asserted that the existence of a \emph{factor-like} matrix helps in determining whether or not there is a map between $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$. A nonnegative $m \times n$ integral matrix $E$ is called factor-like if, for each fixed row, the summation of all entries is equal to $1$. \begin{figure} \begin{center} \begin{pspicture}(3,4) \psset{nodesep=0.1cm} \rput(1.5,4){\rnode{A}{$W$}} \rput(0,2){\rnode{B}{$W^{(1)}$}}\rput(3,2){\rnode{C}{$W^{(2)}$}} \rput(0,0){\rnode{D}{$Y^{(1)}$}}\rput(3,0){\rnode{E}{$Y^{(2)}$}} \ncline{->}{A}{B}\Bput{$\varphi_{W^{(1)}}$}\ncline{->}{A}{C}\Aput{$\varphi_{W^{(2)}}$}\ncline[linestyle=dashed]{<->}{B}{C} \ncline{->}{B}{D}\Bput{$\varphi^{(1)}$}\ncline{->}{C}{E}\Aput{$\varphi^{(2)}$} \ncline[linestyle=dashed]{<->}{D}{E} \end{pspicture} \end{center} \caption{For the case that $h(\mathbf{Y}^{(1)}) = h(\mathbf{Y}^{(2)})$, we get a triangular structure. 
Whether the dashed lines can be replaced by solid lines indicates whether the structures of the hidden and output spaces are related.} \label{fig-Y1-Y2-triangle} \end{figure} \begin{proposition}[See {\cite[Proposition 3.15, Theorem 3.17]{BCL-JDE2012}}]\label{prop-factor-like-embed} Let $T^{(i)}$ be the transition matrix of $W^{(i)}$ for $i = 1, 2$. Suppose $E$ is a factor-like matrix such that $T^{(i)}E = E T^{(\overline{i})}$; then there is a map $\pi: W^{(i)} \to W^{(\overline{i})}$ which preserves topological entropy, where $i + \overline{i} = 3$. Furthermore, if $\phi^{(i)}$ is a conjugacy, then there is a map $\overline{\pi}: \mathbf{Y}^{(i)} \to \mathbf{Y}^{(\overline{i})}$ which preserves topological entropy. \end{proposition} Proposition \ref{prop-factor-like-embed} provides a criterion for the existence of maps between $W^{(1)}, W^{(2)}$ and $\mathbf{Y}^{(1)}, \mathbf{Y}^{(2)}$ when $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$ are FSE. Figure \ref{fig-Y1-Y2-triangle} illustrates a triangular structure between $W^{(i)}$ and $\mathbf{Y}^{(i)}$. The structures of the hidden and output spaces are related if the dashed lines can be replaced by solid lines. Some natural questions follow immediately. \begin{problem}\label{prob-dim-W1W2} Suppose $\pi: W^{(i)} \to W^{(\overline{i})}$ exists. \begin{enumerate}[a.] \item Let $\mu$ be a Markov measure on $W^{(i)}$. Is $\pi \mu$ a Markov measure on $W^{(\overline{i})}$, where $\pi \mu := \mu \circ \pi^{-1}$ is the push-forward measure of $\mu$? \item Suppose $\pi$ is surjective. For each Markov measure $\mu'$ on $W^{(\overline{i})}$, does there exist a $\mu$ on $W^{(i)}$ such that $\pi \mu = \mu'$? \item How is the Hausdorff dimension $\dim W^{(\overline{i})}$ related to the Hausdorff dimension $\dim W^{(i)}$? \end{enumerate} \end{problem} Problem \ref{prob-dim-W1W2} considers whether or not a topological map relates the measures and the Hausdorff dimensions of two spaces. Notably, $W^{(1)}, W^{(2)}$ are topological Markov chains. The situation becomes more complicated when investigating the hidden and output spaces. \begin{problem}\label{prob-dim-Y1Y2} Suppose $\overline{\pi}: \mathbf{Y}^{(i)} \to \mathbf{Y}^{(\overline{i})}$ exists. \begin{enumerate}[a.] \item Let $\nu$ be a maximal measure on $\mathbf{Y}^{(i)}$. Is $\overline{\pi} \nu$ a maximal measure on $\mathbf{Y}^{(\overline{i})}$? \item Suppose $\overline{\pi}$ is surjective. For each Markov measure $\nu'$ on $\mathbf{Y}^{(\overline{i})}$, does there exist a $\nu$ on $\mathbf{Y}^{(i)}$ such that $\overline{\pi} \nu = \nu'$? \item Is the Hausdorff dimension $\dim \mathbf{Y}^{(\overline{i})}$ related to the Hausdorff dimension $\dim \mathbf{Y}^{(i)}$? \end{enumerate} \end{problem} \subsection{Shift Spaces and Hausdorff Dimension} In this subsection, we recall some definitions and properties of shift spaces and the Hausdorff dimension for the reader's convenience. Detailed information can be found in \cite{LM-1995, P-1997}. Let $\mathcal{A}$ be a finite set with cardinality $|\mathcal{A}| = n$, which we consider to be an alphabet of symbols. Without loss of generality, we usually take $\mathcal{A} = \{0, 1, \ldots, n-1\}$. The full $\mathcal{A}$-shift $\mathcal{A}^{\mathbb{Z}}$ is the collection of all bi-infinite sequences with entries from $\mathcal{A}$. More precisely, $$ \mathcal{A}^{\mathbb{Z}} = \{\alpha = (\alpha_i)_{i \in \mathbb{Z}}: \alpha_i \in \mathcal{A} \text{ for all } i \in \mathbb{Z}\}. 
$$ The shift map $\sigma$ on the full shift $\mathcal{A}^{\mathbb{Z}}$ is defined by $$ \sigma(\alpha)_i = \alpha_{i+1} \quad \text{for} \quad i \in \mathbb{Z}. $$ A \emph{shift space} $X$ is a subset of $\mathcal{A}^{\mathbb{Z}}$ such that $\sigma(X) \subseteq X$. $\mathcal{A}^{\mathbb{Z}}$ is a compact metric space endowed with the metric $$ d(x, y) = \sum_{i \in \mathbb{Z}} \frac{|x_i - y_i|}{n^{|i|+1}}, \quad x, y \in \mathcal{A}^{\mathbb{Z}}. $$ Two specific types of shift spaces that are related to our investigation are subshifts of finite type and sofic shifts. First we introduce the former. For each $k \in \mathbb{N}$, let $$ \mathcal{A}_k = \{w_0 w_1 \cdots w_{k-1}: w_i \in \mathcal{A}, 0 \leq i \leq k-1\} $$ denote the collection of words of length $k$ and let $\mathcal{A}_0$ denote the set consisting of the empty word. A \emph{cylinder} $I \subset \mathcal{A}^{\mathbb{Z}}$ is $$ I = \{x \in \mathcal{A}^{\mathbb{Z}}: x_i x_{i+1} \cdots x_{i+k-1} = \omega_0 \omega_1 \cdots \omega_{k-1}\}, $$ for some $i \in \mathbb{Z}, k \in \mathbb{N}$, and $\omega_0 \omega_1 \cdots \omega_{k-1} \in \mathcal{A}_k$. (Sometimes we also write $I = [\omega_0, \omega_1, \cdots, \omega_{k-1}]$.) If $X$ is a shift space and there exist $L \geq 0$ and $\mathcal{F} \subseteq \cup_{0 \leq k \leq L} \mathcal{A}_k$ such that $$ X = \{(\alpha_i)_{i \in \mathbb{Z}}: \alpha_i \alpha_{i+1} \cdots \alpha_{i+k-1} \notin \mathcal{F} \text{ for } k \leq L, i \in \mathbb{Z}\} $$ then we say that $X$ is a SFT. The SFT is \emph{$L$-step} if words in $\mathcal{F}$ have length at most $L+1$. Notably, it is known that, without loss of generality, SFTs can be defined by $0$-$1$ transition matrices. For instance, let $T$ be an $n \times n$ matrix with rows and columns indexed by $\mathcal{A}$ and entries from $\{0, 1\}$. Then $$ X = \{x \in \mathcal{A}^{\mathbb{Z}}: T(x_i, x_{i+1}) = 1 \text{ for all } i \in \mathbb{Z}\} $$ is a one-step SFT. (Following Parry, it is also known as a \emph{topological Markov chain}.) A topological Markov chain is called irreducible/mixing if its transition matrix is irreducible/mixing. A notion extending SFTs is that of \emph{sofic shifts}. A sofic shift is a subshift which is the image of a SFT under a factor map. Suppose $X$ and $Y$ are two shift spaces. A \emph{factor map} is a continuous onto map $\pi: X \to Y$ such that $\pi \circ \sigma_{X} = \sigma_{Y} \circ \pi$. A one-to-one factor map is called a topological conjugacy. A sofic shift is \emph{irreducible} if it is the image of an irreducible SFT. In the previous subsection we mentioned that the topological entropy illustrates the complexity of the topological behavior of a system. Aside from the topological entropy, the Hausdorff dimension characterizes its geometrical structure. The concept of the Hausdorff dimension generalizes the notion of the dimension of a real vector space and helps to distinguish between sets of measure zero. We recall the definition of the Hausdorff dimension for the reader's convenience. Given $\epsilon > 0$, an $\epsilon$-cover $\{U_i\}$ of $X$ is a cover such that the diameter of $U_i$ is less than $\epsilon$ for each $i$. Put \begin{equation} \mathcal{H}^s(X) = \lim_{\epsilon \to 0} \inf \left\{ \sum_{i=1}^{\infty} \delta(U_i)^s : \{U_i\} \text{ is an $\epsilon$-cover of } X \right\}, \end{equation} where $\delta(U_i)$ denotes the diameter of $U_i$. The Hausdorff dimension of $X$ is defined by \begin{equation} \dim X = \inf \{s: \mathcal{H}^s(X) = 0\}.
\end{equation} For subsets that are invariant under a dynamical system we can pose the problem of the Hausdorff dimension of an invariant measure. To be precise let us consider a map $g: X \to X$ with invariant probability measure $\mu$. The stochastic properties of $g$ are related to the topological structure of $X$. A relevant quantitative characteristic, which can be used to describe the complexity of the topological structure of $X$, is the Hausdorff dimension of the measure $\mu$. The Hausdorff dimension of a probability measure $\mu$ on $X$ is defined by $$ \dim \mu = \inf \{\dim Z: Z \subset X \text{ and } \mu(Z) = 1\}. $$ $\mu$ is called a \emph{measure of full Hausdorff dimension} (MFHD) if $\dim \mu = \dim X$. A MFHD is used for the investigation of the Hausdorff dimension $\dim X$, and the computation of the Hausdorff dimension of a MFHD corresponds to the computation of the \emph{measure-theoretic entropy}, an analogous quantity as the topological entropy that illustrates the complexity of a physical system, of $X$ with respect to the MFHD \cite{Bow-PMI1979, P-1997}. This causes the discussion of measure-theoretic entropy to play an important role in this elucidation. Given a shift space $(X, \sigma)$, we denote by $\mathcal{M}(X)$ the set of $\sigma$-invariant Borel Probability measures on $X$. Suppose $P$ is a irreducible stochastic matrix and $p$ a stochastic row vector such that $pP = p$, that is, the summation of entries in each row of $P$ is $1$, and the summation of the entries of $p$ is $1$. Notably such $p$ is unique due to the irreducibility of $P$. Define a $0, 1$ matrix $T$ by $T(i, j) = 1$ if and only if $P(i, j) > 0$. (The matrix $T$ is sometimes known as the \emph{incidence matrix} of $P$.) Denote the space of \emph{right-sided} SFT $X_T^+$ by $$ X_T^+ = \{x \in \mathcal{A}^{\mathbb{Z}^+}: T(x_i, x_{i+1}) = 1 \text{ for all } i \geq 0\}. $$ It is seen that $X_T^+$ is embedded as a subspace of $X_T$. The metric on $X_T^+$ is endowed with $$ d^+(x, y) = \sum_{i \geq 0} \frac{|x_i - y_i|}{n^{i+1}}, \quad x, y \in X_T^+. $$ Then $(p, P)$ defines an invariant measure $\mu^+$ on $X_T^+$ as $$ \mu^+([\omega_0, \omega_1, \cdots, \omega_{k-1}]) = p(\omega_0) P(\omega_0, \omega_1) \cdots P(\omega_{k-2}, \omega_{k-1}) $$ for each cylinder set $I^+ = [\omega_0, \omega_1, \cdots, \omega_{k-1}] \subset X_T^+$ by the Kolmogorov Extension Theorem. Moreover, a measure $\mu^+$ on $X_T^+$ is Markov if and only if it is determined by a pair $(p, P)$ as above. Similar to the above, we define the \emph{left-sided} SFT $X_T^-$ by $$ X_T^- = \{x \in \mathcal{A}^{\mathbb{Z}^-}: T(x_{-i}, x_{-i+1}) = 1 \text{ for all } i \in \mathbb{N}\}. $$ Then $X_T^-$ is a subspace of $X_T$, and the metric on $X_T^-$ is endowed with $$ d^-(x, y) = \sum_{i \leq 0} \frac{|x_i - y_i|}{n^{-i+1}}, \quad x, y \in X_T^-. $$ Let $Q$ be the transpose of $P$ and $q$ is the stochastic row vector such that $qQ = q$. Then $(q, Q)$ defines an invariant measure $\mu^-$ on $X_T^-$ as $$ \mu^-([\omega_{-k+1}, \cdots, \omega_{-1}, \omega_0]) = q(\omega_0) Q(\omega_0, \omega_{-1}) \cdots Q(\omega_{-k+2}, \omega_{-k+1}) $$ for each cylinder set $I^- = [\omega_{-k+1}, \cdots, \omega_{-1}, \omega_0] \subset X_T^-$. Notice that given a cylinder $I = [\omega_{-\ell+1}, \ldots, \omega_0, \ldots, \omega_{k-1}] \subset X_T$, $\ell, k \geq 1$, $I$ can be identified with the direct product $I^+ \times I^-$, where $I^+ = [\omega_0, \cdots, \omega_{k-1}] \subset X_T^+$ and $I^- = [\omega_{-\ell+1}, \cdots, \omega_0] \subset X_T^-$. 
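For concreteness, the cylinder measures determined by a pair $(p, P)$ as above are easy to evaluate directly. The following minimal Python sketch (the function name and the numerical values are ours and serve only as an illustration of the formula for $\mu^+$) computes $\mu^+([\omega_0, \ldots, \omega_{k-1}])$.

\begin{verbatim}
import numpy as np

def cylinder_measure(p, P, word):
    # mu^+([w_0,...,w_{k-1}]) = p(w_0) P(w_0,w_1) ... P(w_{k-2},w_{k-1})
    value = p[word[0]]
    for a, b in zip(word, word[1:]):
        value *= P[a, b]
    return value

# Illustrative pair (p, P): a 2-state irreducible stochastic matrix and its
# stationary row vector (p P = p, entries of p sum to 1).
P = np.array([[0.0, 1.0],
              [0.382, 0.618]])
p = np.array([0.2764, 0.7236])
print(cylinder_measure(p, P, [1, 0, 1]))   # = p(1) P(1,0) P(0,1)
\end{verbatim}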
Furthermore, $(p, P)$ defines an invariant measure $\mu$ on $X_T$ as $\mu(I) \approx \mu^+(I^+) \mu^-(I^-)$ for any cylinder $I \subset X_T$. To be precise, there exist positive constants $A_1$ and $A_2$ such that for integers $k, \ell \geq 0$, and any cylinder $I = [\omega_{-\ell+1}, \ldots, \omega_0, \ldots, \omega_{k-1}] \subset X_T$, \begin{equation}\label{eq-mu-equal-mu+mu-} A_1 \leq \dfrac{\mu(I)}{\mu^+(I^+) \mu^-(I^-)} \leq A_2. \end{equation} Combining \eqref{eq-mu-equal-mu+mu-} with the fact that every cylinder $I \in X_T$ is identified with $I^+ \times I^-$ infers that the study of the measure-theoretic entropy of one-sided subspace $X_T^+$/$X_T^-$ is significant for investigating the measure-theoretic entropy of $X_T$. What is more, the computation of the Hausdorff dimension of $X$ is closely related to the computation of the Hausdorff dimension of $X_T^+$/$X_T^-$. The reader is referred to \cite{Kit-1998} for more details. Now we are ready to introduce the general definition of the measure-theoretic entropy. Given a shift space $X \subseteq \mathcal{A}^{\mathbb{Z}}$ and an invariant probability measure $\mu$ on $X$, the measure-theoretic entropy of $X$ with respect to $\mu$ is given by $$ h_{\mu}(X) = - \lim_{n \to \infty} \frac{1}{n} \sum_{I \in X_n} \mu(I) \log \mu(I), $$ where $X_n$ denotes the collection of cylinders of length $n$ in $X$. The concepts of the measure-theoretic and topological entropies are connected by the \emph{Variational Principle}: $$ h(X) = \sup \{h_{\mu}(X): \mu \in \mathcal{M}(X)\}. $$ $\mu$ is called a \emph{measure of maximal entropy} (also known as maximal measure) if $h_{\mu}(X) = h(X)$. Notably, suppose $X_T^+$/$X_T^-$/$X_T$ is a SFT determined by $T$, which is the incidence matrix of an irreducible stochastic matrix $P$. It is well-known that the Markov measure $\mu^+$/$\mu^-$/$\mu$, derived from the pair $(p, P)$, is the \emph{unique} measure of maximal entropy. \subsection{Results} This subsection is devoted to illustrating the main results of the present elucidation. First we recall a well-known result. \begin{theorem}[See {\cite[Theorem 4.1.7]{Kit-1998}}]\label{thm-diamond-inf2one} Suppose $\phi: X \to Y$ is a one-block factor map between mixing SFTs, and $X$ has positive entropy. Then either \begin{enumerate} \item $\phi$ is uniformly bounded-to-one, \item $\phi$ has no diamond, \item $h(X)=h(Y)$ \end{enumerate} or \begin{enumerate}\setcounter{enumi}{3} \item $\phi$ is uncountable-to-one on some point, \item $\phi$ has diamond, \item $h(X)>h(Y)$. 
\end{enumerate} \end{theorem} \begin{figure} \begin{center} \begin{pspicture}(13,4)\psset{nodesep=2pt} \rput(1.5,2){\ovalnode{A}{\Tn}}\rput(9.5,2){\ovalnode{B}{\Tn}} \rput(2.5,3){\ovalnode{C}{\Tn}}\rput(2.5,1){\ovalnode{D}{\Tn}} \rput(4,3){\ovalnode{E}{\Tn}}\rput(4,1){\ovalnode{F}{\Tn}} \rput(5.5,3){\ovalnode{G}{\Tn}}\rput(5.5,1){\ovalnode{H}{\Tn}} \rput(7,3){\rnode{I}{\Tn{$\cdots$}}}\rput(7,1){\rnode{J}{\Tn{$\cdots$}}} \rput(8.5,3){\ovalnode{K}{\Tn}}\rput(8.5,1){\ovalnode{L}{\Tn}} \ncline{->}{A}{C}\naput{$a_1$}\ncline{->}{C}{E}\naput{$a_2$}\ncline{->}{E}{G}\naput{$a_3$} \ncline{->}{G}{I}\ncline{->}{I}{K}\ncline{->}{K}{B}\naput{$a_n$} \ncline{->}{A}{D}\nbput{$a_1$}\ncline{->}{D}{F}\nbput{$a_2$}\ncline{->}{F}{H}\nbput{$a_3$} \ncline{->}{H}{J}\ncline{->}{J}{L}\ncline{->}{L}{B}\nbput{$a_n$} \end{pspicture} \end{center} \caption{A factor map $\phi: X \to Y$ has a diamond if there exists a pair of distinct points in $X$ differing in only finitely many coordinates with the same image under $\phi$. It is named after the shape of its labeled graph representation.} \label{fig-diamond} \end{figure} A \emph{diamond} for $\phi: X \to Y$ is a pair of distinct points in $X$ differing in only a finite number of coordinates with the same image under $\phi$ (cf.~Figure \ref{fig-diamond}). Theorem \ref{thm-diamond-inf2one} reveals that the investigation of the existence of diamonds is equivalent to the study of infinite-to-one factor maps. Without loss of generality, we may assume that every factor map $\phi$ is a one-block code. That is, there exists $\Phi: \mathcal{A}(X) \to \mathcal{A}(Y)$ such that $\phi(x)_i = \Phi(x_i)$ for $i \in \mathbb{Z}$. Theorem \ref{thm-diamond-inf2one}, in other words, indicates that every factor map is either finite-to-one or infinite-to-one. In \cite{BCL-JDE2012}, the authors investigated those finite-to-one factor maps. The infinite-to-one factor maps are examined in this study. Once a factor map exists, we can use it to formulate the Hausdorff dimension of these spaces. We start by considering the case that $\mathbf{Y}^{(1)}$ is finitely shift equivalent to $\mathbf{Y}^{(2)}$. If two spaces are FSE, then a factor map between them, if it exists, is finite-to-one. Let $X$ be a shift space. A point $x \in X$ is said to be \emph{doubly transitive} if, for every $k \in \mathbb{N}$ and word $w$ in $X$, there exist $\overline{\ell} < 0 < \ell$ with $|\overline{\ell}|, \ell > k$ such that $$ x_{\overline{\ell}-|w|+1} \cdots x_{\overline{\ell}} = w \quad \text{and} \quad x_{\ell} \cdots x_{\ell + |w|-1} = w. $$ Suppose $\phi: X \to Y$ is a factor map. If there is a positive integer $K$ such that every doubly transitive point of $Y$ has exactly $K$ preimages under $\phi$, then $K$ is called the \emph{degree} of $\phi$ and we define $d_{\phi} = K$ \cite{LM-1995}. Let $w = w_1 \cdots w_n$ be a word in $Y$. For $1 \leq i \leq n$, define $d^*_{\phi}(w, i)$ to be the number of symbols seen at coordinate $i$ among the preimages of $w$. In other words, $$ d^*_{\phi}(w, i) = \# \{u_i \in \mathcal{A}(X): u = u_1 \cdots u_n \in X_n, \Phi(u_j) = w_j \text{ for } 1 \leq j \leq n\}. $$ Denote $$ d^*_{\phi} = \min \{d^*_{\phi}(w, i): w \in B(Y), 1 \leq i \leq |w|\}, $$ where $B(Y)$ indicates the collection of words in $Y$. \begin{definition} We say that $w \in B(Y)$ is a \emph{magic word} if $d^*_{\phi}(w, i) = d^*_{\phi}$ for some $i$. Such an index $i$ is called a \emph{magic coordinate}.
\end{definition} We say a factor map $\phi$ has a \emph{synchronizing word} if there is a finite block $y_1 y_2 \cdots y_n \in \mathcal{A}_n(Y)$ such that each element in $\phi^{-1}(y_1 y_2 \cdots y_n)$ has the same terminal entry. If a finite-to-one factor map $\phi$ has a synchronizing word, then the push-forward of a measure of maximal entropy under $\phi$ is still a measure of maximal entropy. The following is our first main result. \begin{theorem}\label{main-thm-FSE} Suppose the hidden space $\mathbf{Y}^{(1)}$ and the output space $\mathbf{Y}^{(2)}$ are FSE. Let $W^{(i)}$ be irreducible with finite-to-one factor map $\phi^{(i)}: W^{(i)} \to \mathbf{Y}^{(i)}$ for $i = 1, 2$. If $\phi^{(i)}$ has a synchronizing word, then \begin{enumerate}[\bf i)] \item There is a one-to-one correspondence between $\mathcal{M}_{\max}(W^{(i)})$ and $\mathcal{M}_{\max}(\mathbf{Y}^{(i)})$, where $\mathcal{M}_{\max}(X)$ indicates the set of measures of maximal entropy. \item Let $m_i = |\mathcal{A}(W^{(i)})|, n_i = |\mathcal{A}(\mathbf{Y}^{(i)})|$, and $\mu^{(i)}$ be a maximal measure of $W^{(i)}$. Then \begin{align*} \dim W^{(i)} &= \frac{h_{\mu^{(i)}}(W^{(i)})}{\log m_i} = 2 \frac{h_{\mu^{(i),+}}(W^{(i),+})}{\log m_i} \\ \intertext{and} \dim \mathbf{Y}^{(i)} &= \frac{h_{\phi^{(i)}\mu^{(i)}}(\mathbf{Y}^{(i)})}{\log n_i} = \frac{h_{\mu^{(i)}}(W^{(i)})}{\log n_i} = 2 \frac{h_{\mu^{(i),+}}(W^{(i),+})}{\log n_i}, \end{align*} where $\mu^{(i),+}$ is the maximal measure of the right-sided subspace $W^{(i),+}$ of $W^{(i)}$. \item Suppose $\pi: W^{(i)} \to W^{(\overline{i})}$ is a factor map and $\nu^{(i)} = \phi^{(i)} \mu^{(i)}$ for $i = 1, 2$, where $i + \overline{i} = 3$. If $$ \dim \mathbf{Y}^{(i)} = \frac{h_{\nu^{(i)}}(\mathbf{Y}^{(i)})}{\log n_i}, $$ then $$ \dim \mathbf{Y}^{(\overline{i})} = \frac{h_{\overline{\pi}\nu^{(i)}}(\mathbf{Y}^{(\overline{i})})}{\log n_{\overline{i}}} = \frac{h_{\nu^{(\overline{i})}}(\mathbf{Y}^{(\overline{i})})}{\log n_{\overline{i}}} $$ for some $\overline{\pi}$. \end{enumerate} \end{theorem} In contrast with the case $h(\mathbf{Y}^{(1)}) = h(\mathbf{Y}^{(2)})$, in which the connecting map $\overline{\pi}: \mathbf{Y}^{(i)} \to \mathbf{Y}^{(\overline{i})}$, if it exists, is finite-to-one, Theorem \ref{thm-diamond-inf2one} indicates that $\overline{\pi}$ must be infinite-to-one in the case where $h(\mathbf{Y}^{(1)}) \neq h(\mathbf{Y}^{(2)})$. Intuitively, the number of infinite-to-one factor maps is much larger than the number of finite-to-one factor maps. Computer-assisted examination provides affirmative results for MCNNs \cite{BC-AMC2013}. Suppose $\phi: X \to Y$ is a factor map and $h(X) \neq h(Y)$. Intuitively, there is a maximal measure on $Y$ with infinitely many preimage measures. It is natural to ask whether these preimages are isomorphic to one another. If two measures are isomorphic, then their measure-theoretic entropies coincide. In \cite{BT-TAMS1984}, Boyle and Tuncel indicated that any two Markov measures associated with the same image are isomorphic to each other if $\phi$ is a \emph{uniform factor}. We say that $\phi$ is uniform if $\phi \mu \in \mathcal{M}_{\max}(Y)$ for every $\mu \in \mathcal{M}_{\max}(X)$. If $\phi$ is a uniform factor, then $\dim Y$ is related to $\dim X$. \begin{theorem}\label{main-thm-ITO} Assume that $h(\mathbf{Y}^{(1)}) \neq h(\mathbf{Y}^{(2)})$. Let $W^{(i)}$ be irreducible with finite-to-one factor map $\phi^{(i)}: W^{(i)} \to \mathbf{Y}^{(i)}$ for $i = 1, 2$.
\begin{enumerate}[\bf i)] \item Suppose $\pi: W^{(i)} \to W^{(\overline{i})}$ is a uniform factor. Let $m_i = |\mathcal{A}(W^{(i)})|, n_i = |\mathcal{A}(\mathbf{Y}^{(i)})|$, and $\mu^{(i)}$ be a maximal measure of $W^{(i)}$. If $$ \dim W^{(i)} = \frac{h_{\mu^{(i)}}(W^{(i)})}{\log m_i}, $$ then $$ \dim W^{(\overline{i})} = \frac{h_{\mu^{(\overline{i})}}(W^{(\overline{i})})}{\log m_{\overline{i}}} = \frac{h_{\pi\mu^{(i)}}(W^{(\overline{i})})}{\log m_{\overline{i}}}. $$ \end{enumerate} Furthermore, if $h(\mathbf{Y}^{(i)}) > h(\mathbf{Y}^{(\overline{i})})$ and $\phi^{(i)}$ has a synchronizing word, then \begin{enumerate}[\bf i)]\setcounter{enumi}{1} \item There exists a factor map $\overline{\pi}: \mathcal{M}_{\max}(\mathbf{Y}^{(i)}) \to \mathcal{M}_{\max}(\mathbf{Y}^{(\overline{i})})$. \item If $$ \dim \mathbf{Y}^{(i)} = \dfrac{h_{\nu^{(i)}}(\mathbf{Y}^{(i)})}{\log n_i}, $$ then $$ \dim \mathbf{Y}^{(\overline{i})} = \dfrac{h_{\overline{\pi} \nu^{(i)}}(\mathbf{Y}^{(\overline{i})})}{\log n_{\overline{i}}}. $$ \end{enumerate} \end{theorem} We postpone the proof of Theorems \ref{main-thm-FSE} and \ref{main-thm-ITO} to the following section. In the meantime, we will introduce the factor maps between the solution, hidden, and output spaces. \section{Existence of Factors} The existence of factor maps plays an important role in the proof of Theorems \ref{main-thm-FSE} and \ref{main-thm-ITO}. First we focus on whether or not a factor map between two spaces exists, and, if it exists, the possibility of finding an explicit form. \subsection{Classification of Solution Spaces} To clarify the discussion, we consider a simplified case. A simplified MCNN (SMCNN) is given by \begin{equation}\label{eq-2layer-mcnn-aar} \left\{ \begin{split} \frac{d x_i^{(1)}}{dt} &= - x_i^{(1)} + a^{(1)} y_i^{(1)} + a^{(1)}_r y_{i+1}^{(1)} + z^{(1)}, \\ \frac{d x_i^{(2)}}{dt} &= - x_i^{(2)} + a^{(2)} y_i^{(2)} + a^{(2)}_r y_{i+1}^{(2)} + b^{(2)} u_i^{(2)} + b^{(2)}_r u_{i+1}^{(2)} + z^{(2)}. \end{split} \right. \end{equation} Suppose $\mathbf{y} = \left({\cdots y_{-1}^{(2)} y_0^{(2)} y_1^{(2)} \cdots \atop \cdots y_{-1}^{(1)} y_0^{(1)} y_1^{(1)} \cdots}\right)$ is a mosaic pattern. For $i \in \mathbb{Z}$, $y_i^{(1)} = 1$ if and only if \begin{equation} a^{(1)} + z^{(1)} - 1 > -a^{(1)}_r y^{(1)}_{i+1}.\label{eq-2layer-ineq1} \end{equation} Similarly, $y_i^{(1)} = -1$ if and only if \begin{equation} a^{(1)} - z^{(1)} - 1 > a^{(1)}_r y^{(1)}_{i+1}.\label{eq-2layer-ineq2} \end{equation} The same argument asserts \begin{align} a^{(2)} + z^{(2)} - 1 &> -a^{(2)}_r y^{(2)}_{i+1} - (b^{(2)} u^{(2)}_i + b^{(2)}_r u^{(2)}_{i+1}),\label{eq-2layer-ineq3} \\ \intertext{and} a^{(2)} - z^{(2)} - 1 &> a^{(2)}_r y^{(2)}_{i+1} + (b^{(2)} u^{(2)}_i + b^{(2)}_r u^{(2)}_{i+1})\label{eq-2layer-ineq4} \end{align} are the necessary and sufficient conditions for $y_i^{(2)} = 1$ and $y_i^{(2)} = -1$, respectively. Note that the quantity $u^{(2)}_i$ in \eqref{eq-2layer-ineq3} and \eqref{eq-2layer-ineq4} satisfies $|u^{(2)}_i| = 1$ for each $i$. Define $\xi_1: \{-1, 1\} \to \mathbb{R}$ and $\xi_2: \{-1, 1\}^{\mathbb{Z}_{3 \times 1}} \to \mathbb{R}$ by $$ \xi_1(w) = a^{(1)}_r w, \qquad \xi_2(w_1, w_2, w_3) = a^{(2)}_r w_1 + b^{(2)} w_2 + b^{(2)}_r w_3.
$$ Set \begin{align*} \mathcal{B}^{(1)} &= \left\{\boxed{y^{(1)} y_r^{(1)}}: y^{(1)}, y_r^{(1)} \in \{-1, 1\} \text{ satisfy } \eqref{eq-2layer-ineq1}, \eqref{eq-2layer-ineq2}\right\}, \\ \mathcal{B}^{(2)} &= \left\{\boxed{y^{(2)} y_r^{(2)} \atop \displaystyle u^{(2)} u_r^{(2)}}: y^{(2)}, y_r^{(2)}, u^{(2)}, u_r^{(2)} \in \{-1, 1\} \text{ satisfy } \eqref{eq-2layer-ineq3}, \eqref{eq-2layer-ineq4}\right\}. \end{align*} That is, \begin{align*} \boxed{y^{(1)} y_r^{(1)}} \in \mathcal{B}^{(1)} &\Leftrightarrow \left\{ \begin{array}{ll} a^{(1)} + z^{(1)} - 1 > - \xi_1(y_r^{(1)}), & \hbox{if $y^{(1)}=1$;} \\ a^{(1)} - z^{(1)} - 1 > \xi_1(y_r^{(1)}), & \hbox{if $y^{(1)}=-1$.} \end{array} \right. \\ \boxed{y^{(2)} y_r^{(2)} \atop \displaystyle u^{(2)} u_r^{(2)}} \in \mathcal{B}^{(2)} &\Leftrightarrow \left\{ \begin{array}{ll} a^{(2)} + z^{(2)} - 1 > - \xi_2(y_r^{(2)}, u^{(2)}, u_r^{(2)}), & \hbox{if $y^{(2)}=1$;} \\ a^{(2)} - z^{(2)} - 1 > \xi_2(y_r^{(2)}, u^{(2)}, u_r^{(2)}), & \hbox{if $y^{(2)}=-1$.} \end{array} \right. \end{align*} The set of admissible local patterns $\mathcal{B}$ of (\ref{eq-2layer-mcnn-aar}) is then $$ \mathcal{B} = \left\{\boxed{y y_r \atop \displaystyle u u_r}: \boxed{y y_r \atop \displaystyle u u_r} \in \mathcal{B}^{(2)} \text{ and } \boxed{u u_r} \in \mathcal{B}^{(1)}\right\}. $$ The authors indicated in \cite{BCL-JDE2012} that there exist $139,968$ regions in the parameter space $\mathcal{P}$ of SMCNNs such that any two sets of templates that are located in the same region induce the same solution space. The partition of the parameter space is determined as follows. Since $y^{(1)}, y^{(2)}, y_r^{(1)}, y_r^{(2)}, u^{(2)}, u_r^{(2)} \in \{-1, 1\}$, the lines $a^{(1)} + z^{(1)} - 1 = - \xi_1(y_r^{(1)})$ and $a^{(1)} - z^{(1)} - 1 = \xi_1(y_r^{(1)})$ partition the $a^{(1)}\ z^{(1)}$-plane into $9$ regions, and the ``order" of the parallel lines in each family is determined by the sign of $a_r^{(1)}$. Thus the parameter space $\{(a^{(1)}, a^{(1)}_r, z^{(1)})\}$ is partitioned into $2 \times 9 = 18$ regions. Similarly, the lines $a^{(2)} + z^{(2)} - 1 = - \xi_2(y_r^{(2)}, u^{(2)}, u_r^{(2)})$ and $a^{(2)} - z^{(2)} - 1 = \xi_2(y_r^{(2)}, u^{(2)}, u_r^{(2)})$ partition the $a^{(2)}\ z^{(2)}$-plane into $81$ regions. The order of the lines in each family can be uniquely determined according to the following data. \begin{enumerate}[(i)] \item The signs of $a_r^{(2)}, b^{(2)}, b_r^{(2)}$. \item The magnitudes of $a_r^{(2)}, b^{(2)}, b_r^{(2)}$. \item The competition between the parameter with the largest magnitude and the others. In other words, suppose $m_1 > m_2 > m_3$ represent $|a_r^{(2)}|, |b^{(2)}|, |b_r^{(2)}|$. We need to determine whether $m_1 > m_2 + m_3$ or $m_1 < m_2 + m_3$. \end{enumerate} This partitions the parameter space $\{(a^{(2)}, a^{(2)}_r, b^{(2)}, b^{(2)}_r, z^{(2)})\}$ into $8 \times 6 \times 2 \times 81 = 7776$ regions. Hence the parameter space $\mathcal{P}$ is partitioned into $18 \times 7776 = 139,968$ equivalent subregions. Since the solution space $\mathbf{Y}$ is determined by the basic set of admissible local patterns, these local patterns play an essential role in investigating SMCNNs. In what follows, we represent the mosaic states $-1$ and $1$ by the symbols $-$ and $+$, respectively.
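The sets $\mathcal{B}^{(1)}$, $\mathcal{B}^{(2)}$, and $\mathcal{B}$ can be enumerated mechanically from a given choice of templates by checking the inequalities above. The following Python sketch is only an illustration of \eqref{eq-2layer-ineq1}--\eqref{eq-2layer-ineq4}; the function names are ours and it is not part of the classification in \cite{BCL-JDE2012}.

\begin{verbatim}
from itertools import product

def first_layer_ok(u, u_r, a1, a1r, z1):
    # u plays the role of y^(1); compare with the first-layer inequalities
    xi = a1r * u_r
    return (a1 + z1 - 1 > -xi) if u == 1 else (a1 - z1 - 1 > xi)

def second_layer_ok(y, y_r, u, u_r, a2, a2r, b2, b2r, z2):
    # compare with the second-layer inequalities
    xi = a2r * y_r + b2 * u + b2r * u_r
    return (a2 + z2 - 1 > -xi) if y == 1 else (a2 - z2 - 1 > xi)

def admissible_patterns(a1, a1r, z1, a2, a2r, b2, b2r, z2):
    """Admissible 2x2 local patterns (y, y_r, u, u_r) of the SMCNN."""
    return [(y, y_r, u, u_r)
            for y, y_r, u, u_r in product((-1, 1), repeat=4)
            if first_layer_ok(u, u_r, a1, a1r, z1)
            and second_layer_ok(y, y_r, u, u_r, a2, a2r, b2, b2r, z2)]
\end{verbatim}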
Define the ordering matrix of $\{-, +\}^{\mathbb{Z}_{2 \times 2}}$ by $$ \mathbb{X} = \bordermatrix{ & \fbox{$\overset{\displaystyle-}{-}$} & \fbox{$\overset{\displaystyle-}{+}$} & \fbox{$\overset{\displaystyle+}{-}$} & \fbox{$\overset{\displaystyle+}{+}$} \vspace{1mm}\cr \fbox{$\overset{\displaystyle-}{-}$} & \fbox{$\overset{\displaystyle--}{--}$} & \fbox{$\overset{\displaystyle--}{-+}$} & \fbox{$\overset{\displaystyle-+}{--}$} & \fbox{$\overset{\displaystyle-+}{-+}$} \vspace{1mm}\cr \fbox{$\overset{\displaystyle-}{+}$} &\fbox{$\overset{\displaystyle--}{+-}$} & \fbox{$\overset{\displaystyle--}{++}$} & \fbox{$\overset{\displaystyle-+}{+-}$} & \fbox{$\overset{\displaystyle-+}{++}$} \vspace{1mm}\cr \fbox{$\overset{\displaystyle+}{-}$} &\fbox{$\overset{\displaystyle+-}{--}$} & \fbox{$\overset{\displaystyle+-}{-+}$} & \fbox{$\overset{\displaystyle++}{--}$} & \fbox{$\overset{\displaystyle++}{-+}$} \vspace{1mm}\cr \fbox{$\overset{\displaystyle+}{+}$} &\fbox{$\overset{\displaystyle+-}{+-}$} & \fbox{$\overset{\displaystyle+-}{++}$} & \fbox{$\overset{\displaystyle++}{+-}$} & \fbox{$\overset{\displaystyle++}{++}$}} = (x_{pq})_{1 \leq p, q \leq 4}. $$ Notably each entry in $\mathbb{X}$ is a $2 \times 2$ pattern since $\mathcal{B}$ consists of $2 \times 2$ local patterns. Suppose that $\mathcal{B}$ is given. The transition matrix $T \equiv T(\mathcal{B}) \in \mathcal{M}_4(\{0, 1\})$ is defined by $$ T(p,q) = \left\{ \begin{array}{ll} 1, & \hbox{if $x_{pq} \in \mathcal{B}$;} \\ 0, & \hbox{otherwise.} \end{array} \right. $$ Let $\mathcal{A} = \{\alpha_0, \alpha_1, \alpha_2, \alpha_3\}$, where $$ \alpha_0 = --, \quad \alpha_1 = -+, \quad \alpha_2 = +- \quad \text{and} \quad \alpha_3 = ++. $$ Define $\mathcal{L}^{(1)}, \mathcal{L}^{(2)}$ by $$ \mathcal{L}^{(1)}(y_0 y_1 \diamond u_0 u_1) = u_0 u_1 \quad \text{and} \quad \mathcal{L}^{(2)}(y_0 y_1 \diamond u_0 u_1) = y_0 y_1 $$ respectively. It is known that $T$ determines a graph while $(T, \mathcal{L}^{(i)})$ determines a labeled graph for $i = 1, 2$. As mentioned in the last section, the transition matrix $T$ determines the solution space $\mathbf{Y}$; however, $T$ does not describe the hidden and output spaces $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$. Instead, $\mathbf{Y}^{(1)}, \mathbf{Y}^{(2)}$ are described by symbolic transition matrices. The symbolic transition matrix $S^{(i)}$ is defined by \begin{equation}\label{eq-symbolic-matrix} S^{(i)}(p,q) = \left\{ \begin{array}{ll} \alpha_j, & \hbox{if $T(p,q)=1$ and $\mathcal{L}^{(i)}(x_{pq}) = \alpha_j$ for some $j$;} \\ \varnothing, & \hbox{otherwise.} \end{array} \right. \end{equation} Herein $\varnothing$ means that no local pattern in $\mathcal{B}$ corresponds to the respective entry of the ordering matrix. A labeled graph is called \emph{right-resolving} if, for every fixed row of its symbolic transition matrix, the multiplicity of each symbol is $1$. With a slight abuse of notation, $\mathbf{Y}^{(i)}$ can be described by $S^{(i)}$ which is right-resolving for $i = 1, 2$. Let $T^{(i)}$ be the incidence matrix of $S^{(i)}$, that is, $T^{(i)}$ is of the same size as $S^{(i)}$ and is defined by $$ T^{(i)}(p,q) = \left\{ \begin{array}{ll} 1, & \hbox{if $S^{(i)}(p,q) \neq \varnothing$;} \\ 0, & \hbox{otherwise.} \end{array} \right. $$ Then $W^{(i)}$ is determined by $T^{(i)}$ for $i = 1, 2$. The reader is referred to \cite{BCL-JDE2012, BCLL-JDE2009} for more details.
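The construction of this subsection translates directly into a small computation: from $\mathcal{B}$ one assembles $T$ (and, analogously, the incidence matrices $T^{(i)}$), and the topological entropy of the corresponding SFT is the logarithm of the spectral radius. The sketch below is for illustration only; it assumes \texttt{numpy}, the pattern list produced by the previous sketch, and our reading of the vertical $1 \times 2$ indices of $\mathbb{X}$ as pairs (top entry $y$, bottom entry $u$).

\begin{verbatim}
import numpy as np

# Vertical 1x2 patterns (y, u), ordered as the indices of the ordering matrix.
COLS = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def transition_matrix(B):
    """T(p, q) = 1 iff the 2x2 pattern with left column COLS[p] and right
    column COLS[q], i.e. the tuple (y, y_r, u, u_r), belongs to B."""
    T = np.zeros((4, 4), dtype=int)
    for p, (y, u) in enumerate(COLS):
        for q, (y_r, u_r) in enumerate(COLS):
            if (y, y_r, u, u_r) in B:
                T[p, q] = 1
    return T

def topological_entropy(T):
    # h = log of the spectral radius of the incidence matrix
    return float(np.log(np.max(np.abs(np.linalg.eigvals(T)))))
\end{verbatim}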
\subsection{Sofic Measures and Linear Representable Measures} Theorems \ref{main-thm-FSE} and \ref{main-thm-ITO} investigate the Hausdorff dimension of $W^{(i)}$ and $\mathbf{Y}^{(i)}$ and see if they are related. The proof relies on two essential ingredients: the existence of maximal measures and factor maps. The upcoming subsection involves the former while the latter is discussed in the next two subsections separately. Let $X$ and $Y$ be subshifts and $\phi: X \rightarrow Y$ be a factor map. Suppose $\mu$ is a Markov measure on $X$, then $\phi \mu $ is called a \emph{sofic measure} (also known as a \emph{hidden Markov measure}, cf.~\cite{BP-2011}). Let $B \in \mathbb{R}^{m \times m}$ be an irreducible matrix with spectral radius $\rho_{B}$ and positive right eigenvector $r$; the \emph{stochasticization} of $B$ is the stochastic matrix \begin{equation*} \mathbb{B}:= stoch(B)=\frac{1}{\rho_B }D^{-1}BD, \end{equation*} where $D$ is the diagonal matrix with diagonal entries $D(i,i)=r(i)$. A measure $\mu$ on $X$ is called \emph{linear representable} with dimension $m$ if there exists a triple $(x,P,y)$ with $x$ being a $1\times m$ row vector, $y$ being a $m \times 1$ column vector and $P=(P_{i})_{i\in \mathcal{A}(X)}$, where $P_{i} \in \mathbb{R}^{m \times m}$ such that for all $I=[i_{0},\ldots ,i_{n-1}] \in X_{n} $, the measure $\mu$ can be characterized as the following form: \begin{equation*} \mu (\left[ I\right] )=xP_{I}y = x P_{i_{0}}P_{i_{1}} \cdots P_{i_{n-1}} y. \end{equation*}% The triple $(x,P,y)$ is called the \emph{linear representation} of the measure $\mu$. The reader is referred to \cite{BP-2011} for more details. \begin{proposition}[See {\cite[Theorem 4.20]{BP-2011}}]\label{prop-LR-pushforward-LR} Let $X$ be an irreducible SFT with transition matrix $T \in \mathbb{R}^{m \times m}$ and $\phi: X \rightarrow Y$ be a one-block factor map. Let $\mathbb{T}=stoch(T)$ and $l$ be the probability left eigenvector of $\mathbb{T}$. Then \begin{enumerate}[(i)] \item The Markov measure $\mu$ on $X$ is the linear representable measure with respect to the triple $(l, P, \mathbf{1}_m)$, where $\mathbf{1}_m$ is the column vector with each entry being $1$ and $P = (\mathbb{T}_{i}) _{i\in \mathcal{A}(X)}$ for which \begin{equation*} P_{I} = \mathbb{T}_{i_{0}}\cdots \mathbb{T}_{i_{n-1}}, \quad \text{for all} \quad I=[i_{0},\ldots ,i_{n-1}] \in X_{n} \end{equation*}% here $\mathbb{T}_{k}(i,j)=\mathbb{T}( i,j) $ if $j=k$ and $\mathbb{T}_{k}(i,j)=0$ otherwise. \item The push-forward measure $\nu =\phi \mu$ is linear representable with respect to the triple $(l,Q,\mathbf{1}_m)$, where $Q$ is generated by $(Q_{j}) _{j\in \mathcal{A}(Y)}=\left( \mathbb{T}_{j}\right) _{j\in \mathcal{A}(Y)}$ for which $\mathbb{T}_{k}(u,v)=\mathbb{T}(u,v) $ if $\phi(v)=k$ and $\mathbb{T}_{k}(u,v)=0$ otherwise. \end{enumerate} \end{proposition} In the following we propose a criterion to determine whether a sofic measure is actually a Markov measure. The procedure of the criterion is systematic and is checkable which makes our method practical. Suppose the factor map $\phi: X \to Y$ is a one-block code. For $j \in \mathcal{A}(Y)$, define $$ E_{j} = \{i: \phi(i) = j\} \quad \text{and} \quad e_j = \# E_j $$ For each $j_1 j_2 \in Y_2$, let $N_{j_1 j_2} \in \mathbb{R}^{e_{j_1} \times e_{j_2}}$ be defined by $$ N_{j_1 j_2}(p, q) = \left\{ \begin{array}{ll} 1, & p q \in X_2 \hbox{;} \\ 0, & \hbox{otherwise.} \end{array}\right. $$ where $p \in E_{j_1}, q \in E_{j_2}$. 
Set $N = (N_{j_1 j_2})$ if $e_{j_1} = e_{j_2}$ for all $j_1 j_2 \in Y_2$. Otherwise, we enlarge the dimension of $N_{j_1 j_2}$ by inserting ``pseudo vertices" so that $N_{j_1 j_2}$ is a square matrix. We say that $N$ satisfies the \textit{Markov condition of order $k$} if there exists a family of nonzero row vectors $\{V_J\}_{J \in Y_{k}}$ such that, for each $J \in Y_{k+1}$, there exists $m_{J(0, k-1), J(1, k)}$ such that $V_{J(0, k-1)} N_J = m_{J(0, k-1), J(1, k)} V_{J(1, k)}$; here $J(i_1, i_2) = j_{i_1} j_{i_1+1} \cdots j_{i_2}$ with $J = j_0 j_1 \cdots j_{k}$. For simplicity, we say that $N$ satisfies the Markov condition if $N$ satisfies the Markov condition of order $k$ for some $k \in \mathbb{N}$. At this point, a further question arises: \begin{quote} \it Suppose $N$ satisfies the Markov condition. What kind of Markov measure is $\nu$? \end{quote} To answer this question, let $m(J(0, k-1), J(1, k)) \in \mathbb{R}$ be such that \begin{equation}\label{eq:coeff-when-N-markov-condition} V_{J(0, k-1)} N_{J(0, k)} = m(J(0, k-1), J(1, k)) V_{J(1, k)}. \end{equation} In \cite{BCC-2012}, the authors described what kind of Markov measure $\nu$ is. \begin{theorem}[See {\cite[Theorem 4]{BCC-2012}}]\label{thm-sofic-measure-THM4} If $N$ satisfies the Markov condition of order $k$, then $Y$ is a SFT. Furthermore, $\nu$ is the unique maximal measure of $Y$ with transition matrix $M = [m(J,J')]_{J,J' \in Y_k}$. \end{theorem} To clarify the construction of $N$ and Theorem \ref{thm-sofic-measure-THM4}, we introduce an example which was initiated by Blackwell. \begin{example}[Blackwell {\cite{Bla-1957}}] Let $\mathcal{A}(X)=\{1,2,3\}, \mathcal{A}(Y)=\{1,2\}$ and the one-block map $\Phi: \mathcal{A}(X) \rightarrow \mathcal{A}(Y)$ be defined by $\Phi(1)=1,$ $\Phi(2) = \Phi(3) =2$. Let $\phi: X \rightarrow Y$ be the factor induced from $\Phi$, and the transition matrix of $X$ be \begin{equation*} A=\left( \begin{array}{ccc} 0 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{array} \right) \mbox{.} \end{equation*} This factor has been proven ({\cite[Example 2.7]{BP-2011}}) to be Markovian. Here we use Theorem \ref{thm-sofic-measure-THM4} to give a criterion for this property. Since $E_{1}=\left\{ 1\right\} $ and $E_{2}=\left\{ 2,3\right\}$, we see that $\mathbf{m}=e_{2}=2$, and an extra pseudo vertex is needed for $E_1$. For this reason we introduce new symbols; the corresponding sets $\widehat{E}_{1}$ and $\widehat{E}_{2}$ are as follows. \begin{eqnarray*} D &=&\left\{ 1,2\right\} \times \left\{ 1,2\right\} =\left\{ \left( 1,1\right) ,\left( 1,2\right) ,\left( 2,1\right) ,\left( 2,2\right) \right\} , \\ \mbox{ }\widehat{E}_{1} &=&\left\{ 1=\left( 1,1\right) ,\left( 2,1\right) \right\} \mbox{, }\widehat{E}_{2}=\left\{ 2=\left( 1,2\right) ,3=\left( 2,2\right) \right\} . \end{eqnarray*} Therefore, \begin{equation*} B = \left( \begin{array}{cc} N_{11} & N_{12} \\ N_{21} & N_{22} \end{array} \right) =\left( \begin{array}{cccc} 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{array} \right) .
\end{equation*}% \begin{equation*} N_{11}=\left( \begin{array}{cc} 0 & 0 \\ 0 & 0% \end{array}% \right) \mbox{, }N_{12}=\left( \begin{array}{cc} 1 & 1 \\ 0 & 0% \end{array}% \right) \mbox{, }N_{21}=\left( \begin{array}{cc} 1 & 0 \\ 1 & 0% \end{array}% \right) \mbox{, }N_{22}=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1% \end{array}% \right) \mbox{.} \end{equation*}% Taking $V_{1}=(1\ 0) $ and $V_{2}=(1\ 1) $, one can easily check that $N=(N_{ij})_{i,j=1}^{2}$ satisfies the Markov condition of order $1$. Thus Theorem \ref{thm-sofic-measure-THM4} is applied to show that the factor is a Markov map. \end{example} \subsection{Proof of Theorem \ref{main-thm-FSE}} Proposition \ref{prop-factor-like-embed} asserts that the existence of a factor-like matrix for $T^{(1)}, T^{(2)}$ together with the topological conjugacy of $\phi^{(i)}$ infers there is a map $\overline{\pi}: \mathbf{Y}^{(i)} \to \mathbf{Y}^{(\overline{i})}$ that preserves topological entropy, where $i = 1, 2$, and $i + \overline{i} = 3$. A natural question is whether or not we can find a map connecting $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$ under the condition neither $\phi^{(1)}$ nor $\phi^{(2)}$ is topological conjugacy. The answer is affirmative. First we define the product of scalar and alphabet. \begin{definition} Suppose $\mathcal{A}$ is an alphabet set. Let $\mathbf{A}$ be the free abelian additive group generated by $\mathcal{A} \cup \{\varnothing\}$, here $\varnothing$ is the identity element. For $\mathbf{k} \in \mathbb{Z}, \mathbf{a} \in \mathcal{A} \cup \{\varnothing\}$, we define an commutative operator $*$ by $$ \mathbf{a} * \mathbf{k} = \mathbf{k} * \mathbf{a} = \left\{ \begin{array}{ll} \mathbf{k}\mathbf{a}, & \hbox{if $\mathbf{a} \neq \varnothing$ and $\mathbf{k} \neq 0$;} \\ \varnothing, & \hbox{otherwise.} \end{array} \right. $$ \end{definition} Suppose $S$ is an $m \times n$ symbolic matrix and $A$ is an $n \times k$ integral matrix. The product $S*A$ is defined by $(S*A)(p, q) = \sum_{i=1}^n S(p, i) * A(i, q)$ for $1 \leq p \leq m, 1 \leq q \leq k$. For simplicity we denote $S*A$ by $SA$. Similarly, we can define $A*S$ and denote by $AS$ for $m \times n$ integral matrix $A$ and $n \times k$ symbolic matrix $S$. The following proposition, which is an extension of Proposition \ref{prop-factor-like-embed}, can be verified with a little modification of the proof of Proposition 3.15 in \cite{BCL-JDE2012}. Hence we omit the detail. \begin{proposition}\label{prop-factor-like-Y-W} Let $S^{(i)}$ be the symbolic transition matrix of $\mathbf{Y}^{(i)}$ for $i = 1, 2$. Suppose $E$ is a factor-like matrix such that $S^{(i)}E = E S^{(\overline{i})}$, then there exist maps $\pi: W^{(i)} \to W^{(\overline{i})}$ and $\overline{\pi}: \mathbf{Y}^{(i)} \to \mathbf{Y}^{(\overline{i})}$ that both preserve topological entropy, where $i + \overline{i} = 3$. \end{proposition} A factor map $\phi$ is \emph{almost invertible} if every doubly transitive point has exactly one preimage. Lemma \ref{lem-ai-sync-word} shows that the existence of a synchronizing word is a necessary and sufficient criterion whether $\phi$ is almost invertible. \begin{lemma}\label{lem-ai-sync-word} Suppose $\phi : X \to Y$ is a one-block factor map. Then $\phi$ is almost invertible if and only if $\phi$ has a synchronizing word. \end{lemma} \begin{proof} If $\phi$ is almost invertible, then $d_{\phi}^* = 1$. Let $w$ be a magic word and $i$ be a magic coordinate. In other words, $d_{\phi}(w, i) = 1$. 
The fact that $\phi$ is right-resolving infers that $d_{\phi}(w, |w|) = 1$. Hence $w$ is a synchronizing word. On the other hand, suppose $w$ is a synchronizing word. $\phi$ is right-resolving indicates $d_{\phi}(wa, |wa|) = 1$ for some $a$ such that $wa \in B(Y)$. That is, $wa$ is a magic word and $d_{\phi}^* = 1$. Therefore, $\phi$ is almost invertible. \end{proof} The proof of the first statement of Theorem \ref{main-thm-FSE} is done by Lemma \ref{lem-ai-sync-word} and the following theorem. \begin{theorem}[See {\cite[Theorem 3.4]{MS-PJM2001}}]\label{thm-MS-max2max} Suppose $\phi: X \to Y$ is a factor map and $X$ is an irreducible SFT. If $\phi$ is almost invertible, then $\phi: \mathcal{M}_{\max}(X) \to \mathcal{M}_{\max}(Y)$ is a bijection. Moreover, $h_{\mu}(X) = h_{\phi \mu}(Y)$ for $\mu \in \mathcal{M}_{\max}(X)$. \end{theorem} Next we continue the proof of Theorem \ref{main-thm-FSE}. Fix $i \in \{1, 2\}$. Recall that the metric $d^{(i)}: W^{(i)} \times W^{(i)} \to \mathbb{R}$ is given by $$ d^{(i)}(x, y) = \sum_{j \in \mathbb{Z}} \frac{|x_j - y_j|}{m_i^{|j|+1}}, $$ for $x, y \in W^{(i)}$, where $m_i = |\mathcal{A}(W^{(i)})|$. To formulate the explicit form of the Hausdorff dimension of the hidden and output spaces, we introduce the following from Pesin's well-known work \begin{theorem}[See {\cite[Theorems 13.1 and 22.2]{P-1997}}]\label{thm-pesin-dim} Let $(X, \sigma)$ be a shift space with $|\mathcal{A}(X)| = m$, and $0 < \lambda_1, \ldots, \lambda_m < 1$. Suppose $d$ is a metric defined on $X$. If there exist $K_1, K_2 > 0$ such that \begin{align*} K_1 \prod_{j=0}^{n_2} \lambda_{i_j} < &\mathrm{diam} [i_0, \ldots, i_{n_2}] < K_2 \prod_{j=0}^{n_2} \lambda_{i_j}, \\ K_1 \prod_{j=0}^{n_1} \lambda_{i_j} < &\mathrm{diam} [i_{-n_1}, \ldots, i_0] < K_2 \prod_{j=0}^{n_1} \lambda_{i_j}, \end{align*} for any cylinder $I = [i_{-n_1}, \ldots, i_{n_2}], n_1, n_2 \geq 0$, then $$ \dim X = - \frac{h_{\mu_{\lambda}}(X)}{\int_X \log \lambda_{i_0} d \mu_{\lambda}} = - 2 \frac{h_{\mu^{\pm}_{\lambda}}(X)}{\int_X \log \lambda_{i_0} d \mu^{\pm}_{\lambda}}, $$ where $\mu_{\lambda}$ is a maximal measure on $X$ and $\mu^{\pm}_{\lambda}$ is a maximal measure on the right-sided subspace $X^+$/left-sided subspace $X^-$. \end{theorem} Suppose $\mu^{(i)}$ is a maximal measure of $W^{(i)}$. For any cylinder $I=[i_{-n_1}, \ldots, i_{n_2}]$, the diameter of $[i_0, \ldots, i_{n_2}]$ and $[i_{-n_1}, \ldots, i_0]$ are $\dfrac{1}{m_i^{n_2+1}}$ and $\dfrac{1}{m_i^{n_1+1}}$ respectively. Let $K_1=1, K_2=3$, and $\lambda_1 = \cdots = \lambda_{m_i} = \dfrac{1}{m_i}$, apply Theorem \ref{thm-pesin-dim} we have $$ \dim W^{(i)} = - \frac{h_{\mu^{(i)}}(W^{(i)})}{\displaystyle\int_{W^{(i)}} \log \dfrac{1}{m_i} d \mu^{(i)}} = \frac{h_{\mu^{(i)}}(W^{(i)})}{\log m_i} = 2 \frac{h_{\mu^{(i),\pm}}(W^{(i)})}{\log m_i}. $$ Moreover, the one-to-one correspondence between $\mathcal{M}_{\max}(W^{(i)})$ and $\mathcal{M}_{\max}(\mathbf{Y}^{(i)})$ demonstrates that \begin{align*} \dim \mathbf{Y}^{(i)} &= \sup \{\frac{h_{\nu}(\mathbf{Y}^{(i)})}{\log n_i}: \nu \text{ is invariant on } \mathbf{Y}^{(i)}\} \\ &= \frac{h_{\phi^{(i)}\mu^{(i)}}(\mathbf{Y}^{(i)})}{\log n_i} = \frac{h_{\mu^{(i)}}(W^{(i)})}{\log n_i}. \end{align*} The last equality comes from Theorem \ref{thm-MS-max2max}. This completes the proof of Theorem \ref{main-thm-FSE} part (ii). Observe that $$ \dim \mathbf{Y}^{(i)} = \frac{h_{\nu^{(i)}}(\mathbf{Y}^{(i)})}{\log n_i} $$ indicates $\nu^{(i)}$ is a maximal measure on $\mathbf{Y}^{(i)}$. 
Since $W^{(i)}$ is irreducible, the maximal measure $\mu^{(i)}$ is unique. Hence there is a bijection $\overline{\pi}: \mathcal{M}_{\max}(\mathbf{Y}^{(i)}) \to \mathcal{M}_{\max}(\mathbf{Y}^{(\overline{i})})$ such that $\overline{\pi} \nu^{(i)} = \nu^{(\overline{i})}$. Therefore, $$ \dim \mathbf{Y}^{(\overline{i})} = \frac{h_{\overline{\pi}\nu^{(i)}}(\mathbf{Y}^{(\overline{i})})}{\log n_{\overline{i}}} = \frac{h_{\nu^{(\overline{i})}}(\mathbf{Y}^{(\overline{i})})}{\log n_{\overline{i}}}. $$ This completes the proof of Theorem \ref{main-thm-FSE}. \subsection{Proof of Theorem \ref{main-thm-ITO}} Whether there exists a factor map connecting two spaces is always a central issue. In general, it is difficult to construct such factor maps, or even to decide whether they exist, for a given pair of spaces. Proposition \ref{prop-factor-like-Y-W} proposes a methodology for constructing a connection between two spaces. Notably, a map constructed via Proposition \ref{prop-factor-like-Y-W} preserves topological entropy. In other words, this approach only works for spaces attaining the same topological entropy. In this subsection, we turn our attention to the factor maps connecting spaces with non-equal topological entropies. Similar to the proof of Theorem \ref{main-thm-FSE}, demonstrating Theorem \ref{main-thm-ITO} relies mainly on the existence of a factor map. Instead of $\mathbf{Y}^{(1)}, \mathbf{Y}^{(2)}$, we start by examining whether there is a factor map from $W^{(i)}$ to $W^{(\overline{i})}$; note here that $i + \overline{i} = 3$. \begin{theorem}\label{thm-inf-to-one-on-W} Suppose $W^{(1)}$ and $W^{(2)}$ are irreducible with $h(W^{(1)}) \neq h(W^{(2)})$. Suppose $h(W^{(i)}) > h(W^{(\overline{i})})$, where $i + \overline{i} = 3$. Then there exists an infinite-to-one map $\pi: W^{(i)} \to W^{(\overline{i})}$ if one of the following is satisfied. \begin{enumerate}[\bf a)] \item $h(W^{(i)}) = h(\mathbf{Y})$ and there is a factor-like matrix $F$ such that $T^{(i)} F = F T$, where $T$ is the transition matrix of $\mathbf{Y}$. \item $h(W^{(i)}) < h(\mathbf{Y})$. \end{enumerate} \end{theorem} \begin{remark}\label{rmk-for-thm-inf-to-one-on-W} \begin{enumerate}[\bf (i)] \item Suppose $X, Y$ are two irreducible SFTs with $h(X) > h(Y)$. In \cite{Kit-1998}, Kitchens showed that if there is an infinite-to-one factor map from $X^+$ to $Y^+$, then there exists an infinite-to-one factor map $\pi: X \to Y$. This reduces the investigation of Theorem \ref{thm-inf-to-one-on-W} to the existence of an infinite-to-one map between the right-sided subspaces of $W^{(1)}$ and $W^{(2)}$. \item Theorem \ref{thm-inf-to-one-on-W} reveals the existence of an infinite-to-one map between the hidden and output spaces whenever these two spaces attain different topological entropies; however, there are an infinite number of such maps in general. In addition, it is difficult to find the explicit form of an infinite-to-one map. This is an important issue and is still open in the field of symbolic dynamical systems. It would help the investigation of MCNNs if one could propose a methodology to find a concrete expression of an infinite-to-one map. \end{enumerate} \end{remark} The following corollary follows immediately from Theorem \ref{thm-inf-to-one-on-W}. \begin{corollary} Under the same assumptions as in Theorem \ref{thm-inf-to-one-on-W}, suppose furthermore that $|\mathcal{A}(W^{(i)})| \geq |\mathcal{A}(\mathbf{Y})|$ and \textbf{a)} is satisfied. Then $\pi: W^{(i)} \to W^{(\overline{i})}$ is an infinite-to-one factor map.
\end{corollary} Suppose $X$ is a shift space. Let $P(X)$ denote the collection of periodic points in $X$ and let $P_n(X)$ be the set of periodic points with period $n$. Given two shifts $X$ and $Y$, let $q_n(X)$ and $q_n(Y)$ be the cardinality of $\cup_{k \geq n} P_k(X)$ and $\cup_{k \geq n} P_k(Y)$, respectively. If $q_n(X) \leq q_n(Y)$ for $n \geq 1$, then we say that the \emph{embedding periodic point condition} holds, and write $P(X) \hookrightarrow P(Y)$. The Embedding Theorem asserts a necessary and sufficient condition for the existence of an embedding of $X$ into $Y$. \begin{theorem}[Embedding Theorem] Suppose $X$ and $Y$ are irreducible SFTs. There is an embedding map $\phi: X \to Y$ if and only if $h(X) < h(Y)$ and $P(X) \hookrightarrow P(Y)$. \end{theorem} A natural subsequent question is the existence of a factor map between $X$ and $Y$. Like the embedding periodic point condition, the \emph{factor periodic point condition} requires that, for every $x \in P_n(X)$, there exists a $y \in P_m(Y)$ such that $m$ divides $n$; it is denoted by $P(X) \searrow P(Y)$. \begin{theorem}[See {\cite[Theorem 4.4.5]{Kit-1998}}]\label{thm-inf-to-1-SFT} Suppose $X$ and $Y$ are irreducible SFTs. There exists an infinite-to-one factor code $\phi: X \to Y$ if and only if $h(X) > h(Y)$ and $P(X) \searrow P(Y)$. \end{theorem} \begin{proof}[Proof of Theorem \ref{thm-inf-to-one-on-W}] Without loss of generality, we may assume that $h(W^{(1)}) < h(W^{(2)})$. It suffices to demonstrate there is an infinite-to-one map from $W^{(2), +}$ to $W^{(1), +}$ due to the observation in Remark \ref{rmk-for-thm-inf-to-one-on-W} (i). For ease of notation, the spaces in the upcoming proof are referred to as right-sided subspaces. Suppose that condition \textbf{a)} is satisfied. The existence of a factor-like matrix $F$ such that $T^{(2)} F = F T$ implies there is a map $\Phi^{(2)}: W^{(2)} \to \mathbf{Y}$. Recall that the graph representation $G^{(1)}$ of $W^{(1)}$ is obtained by applying subset construction to $(G, \mathcal{L}^{(1)})$. Without loss of generality, we assume that $G^{(1)}$ is essential. That is, every vertex in $G^{(1)}$ is the initial state of some edge and the terminal state of another. Suppose $w = w_1 \cdots w_n$ is a cycle in $G$. If the initial state $i(w_k)$ of $w_k$ is a vertex in $G^{(1)}$ for $k = 1, \ldots, n$, then $w$ is also a cycle in $G^{(1)}$. Assume $k$ is the only index such that either $i(w_k)$ or $t(w_k)$ is not a vertex in $G^{(1)}$, where $t(e)$ denotes the terminal vertex of the edge $e$. First we consider the case that only one of these two vertices is not in $G^{(1)}$. For the case that $i(w_k)$ is not a vertex in $G^{(1)}$, there is a vertex, say $v_k$, in $G^{(1)}$ so that $v_k$ is a grouping vertex which contains $i(w_k)$.\footnote{In fact, each vertex in $G^{(1)}$ is the grouping of one or more vertices in $G$, and so is $G^{(2)}$. The reader is referred to \cite{BCLL-JDE2009} for more details.} Hence there is an edge $\overline{w}_{k-1}$ in $G^{(1)}$ such that $i(\overline{w}_{k-1}) = i(w_{k-1})$ and $t(\overline{w}_{k-1}) = v_k$. In other words, there is an edge in $G^{(1)}$ that can be related to $w_{k-1}$. Moreover, there is an edge $(v_k, t(w_k))$ in $G^{(1)}$ if $t(w_k)$ is a vertex in $G^{(1)}$. Hence there is a cycle in $G^{(1)}$ that corresponds to $w$. The case that $t(w_k)$ is not a vertex in $G^{(1)}$ can be handled by an analogous argument.
For the case that both the initial and terminal states of $w_k$ are not in $G^{(1)}$, combining the above arguments shows that there are a new vertex $v_{k+1}$ and two new edges $e_k=(v_k, v_{k+1}), e_{k+1} = (v_{k+1}, t(w_{k+1}))$ in $G^{(1)}$. That is, there is still a cycle in $G^{(1)}$ that corresponds to $w$. Repeating the above process if necessary, it is seen that, for every cyclic path in $G$ with length $n$, there is an associated cyclic path in $G^{(1)}$ with length $m$ and $m$ divides $n$. Theorem \ref{thm-inf-to-1-SFT} asserts there exists an infinite-to-one factor $\Phi^{(1)}: \mathbf{Y} \to W^{(1)}$. Let $\pi = \Phi^{(1)} \circ \Phi^{(2)}$. Then $\pi$ is an infinite-to-one map from $W^{(2)}$ to $W^{(1)}$ by Theorem \ref{thm-diamond-inf2one}. Next, suppose that condition \textbf{b)} is satisfied. It suffices to demonstrate the existence of an embedding map from $W^{(2)}$ to $\mathbf{Y}$. The existence of a map from $W^{(2)}$ to $\mathbf{Y}$ can be established via an argument similar to, but in the converse direction of, the discussion of $\Phi^{(1)}$; hence we omit the details. Since the graph representation $G^{(2)}$ of $W^{(2)}$ comes from applying subset construction to $(G, \mathcal{L}^{(2)})$, it can be verified that every periodic point in $W^{(2)}$ corresponds to a cyclic path in $G^{(2)}$, and, for every cyclic path in $G^{(2)}$, we can exhibit a corresponding cyclic path in $G$. The Embedding Theorem demonstrates the existence of an embedding map $\overline{\Phi}^{(2)}: W^{(2)} \to \mathbf{Y}$. This completes the proof. \end{proof} Once we demonstrate the existence of a factor map $\pi: W^{(i)} \to W^{(\overline{i})}$, the proof of Theorem \ref{main-thm-ITO} can be carried out via a method analogous to that of the proof of Theorem \ref{main-thm-FSE}. Hence we skip the proof. Instead, we ask whether there is a criterion to determine whether $\pi$ is uniform. \begin{theorem}\label{thm-uniform-iff-W} Suppose $N$, obtained from $\pi$ as defined in the previous subsection, satisfies the Markov condition of order $k$. Define $$ M = [m_{J(0, k-1), J(1, k)}]_{J \in W^{(1)}_{k+1}}, $$ where $m_{J(0, k-1), J(1, k)}$ is defined by \eqref{eq:coeff-when-N-markov-condition}. Then $\pi$ is uniform if and only if \begin{equation}\label{eq-uniform-cond-on-W} \rho_M = \dfrac{\rho^{(2)}}{\rho^{(1)}}, \end{equation} where $\rho^{(i)}$ is the spectral radius of the transition matrix $T^{(i)}$ of $W^{(i)}$ for $i = 1, 2$. \end{theorem} Theorem \ref{thm-uniform-iff-W} is obtained with a slight modification of the proof of Proposition 6.1 in \cite{BT-TAMS1984}; thus we omit the proof here. The following corollary follows immediately from Theorem \ref{thm-uniform-iff-W}. \begin{corollary}\label{cor-dim-inf2one} Let $N$ be defined as above. Suppose $N$ satisfies the Markov condition and \eqref{eq-uniform-cond-on-W} holds. Then $\overline{\pi} \nu^{(i)} \in \mathcal{M}_{\max}(\mathbf{Y}^{(\overline{i})})$ if $\nu^{(i)} \in \mathcal{M}_{\max}(\mathbf{Y}^{(i)})$. Furthermore, if $$ \dim \mathbf{Y}^{(i)} = \dfrac{h_{\nu^{(i)}}(\mathbf{Y}^{(i)})}{\log n_i}, $$ then $$ \dim \mathbf{Y}^{(\overline{i})} = \dfrac{h_{\overline{\pi} \nu^{(i)}}(\mathbf{Y}^{(\overline{i})})}{\log n_{\overline{i}}}.
$$ \end{corollary} \section{Examples} \begin{example}\label{eg-Y1Y2-SFT} Suppose the templates of a SMCNN are given by the following: \begin{align*} [a^{(1)}, a_r^{(1)}, z^{(1)}] &= [2.9, 1.7, 0.1] \\ [a^{(2)}, a_r^{(2)}, b^{(2)}, b_r^{(2)}, z^{(2)}] &= [-0.3, -1.2, 0.7, 2.3, 0.9] \end{align*} Then the basic set of admissible local patterns is $$ \mathcal{B} = \left\{ \boxed{-+ \atop \displaystyle --}\,, \boxed{-+ \atop \displaystyle +-}\,, \boxed{+- \atop \displaystyle -+}\,, \boxed{+- \atop \displaystyle ++}\,, \boxed{++ \atop \displaystyle -+}\,, \boxed{++ \atop \displaystyle ++}\right\}. $$ The transition matrix $T$ of the solution space $\mathbf{Y}$ is $$ T = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ \end{pmatrix}, $$ and the symbolic transition matrices of the hidden and output spaces are $$ S^{(1)} = \begin{pmatrix} \varnothing & \alpha_1 \\ \alpha_2 & \alpha_3 \\ \end{pmatrix} \quad and \quad S^{(2)} = \begin{pmatrix} \varnothing & \alpha_1 & \varnothing \\ \alpha_2 & \varnothing & \alpha_3 \\ \alpha_2 & \varnothing & \alpha_3 \\ \end{pmatrix} $$ respectively. Figure \ref{fig-551429} shows that $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$ are two different spaces. The topological entropy of $\mathbf{Y}^{(i)}$ is related to the spectral radius of the incidence of $S^{(i)}$. An easy computation infers $h(\mathbf{Y}^{(1)}) = h(\mathbf{Y}^{(2)}) = \log g$, where $g = (1+\sqrt{5})/2$ is the golden mean. Let $$ E = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 1 \\ \end{pmatrix}. $$ Then $S^{(2)} E = E S^{(1)}$. Proposition \ref{prop-factor-like-Y-W} indicates that there exist factor maps $\pi: W^{(2)} \to W^{(1)}$ and $\overline{\pi}: \mathbf{Y}^{(2)} \to \mathbf{Y}^{(1)}$. More precisely, let \begin{align*} &\mathcal{A}(W^{(1)}) = \{x_1, x_2\}, \quad \mathcal{A}(W^{(2)}) = \{x'_1, x'_2, x'_3\}; \\ &\mathcal{A}(\mathbf{Y}^{(1)}) = \{y_1, y_2\}, \quad \mathcal{A}(\mathbf{Y}^{(2)}) = \{y'_1, y'_2, y'_3\}. \end{align*} Then $$ \pi(x'_1) = x_1, \quad \pi(x'_2) = \pi(x'_3) = x_2, \quad \overline{\pi}(y'_1) = y_1, \quad \overline{\pi}(y'_2) = \overline{\pi}(y'_3) = y_2. $$ See Figure \ref{fig-eg-Y1Y2-SFT}. \begin{figure} \begin{center} \psset{unit=0.8cm} \begin{pspicture}(13,3) \psset{nodesep=0.1cm} \rput(1,2){\ovalnode{A}{$x_1$}} \rput(5,2){\ovalnode{B}{$x_2$}} \rput(9,1){\ovalnode{C}{$x_1'$}} \rput(13,1){\ovalnode{D}{$x_3'$}} \rput(11,3){\ovalnode{E}{$x_2'$}} \ncarc[arcangle=20]{->}{A}{B}\Aput{$1$} \ncarc[arcangle=20]{->}{B}{A}\Aput{$0.382$} \nccurve[angleA=30,angleB=-30,ncurv=5.5]{->}{B}{B}\Aput{$0.618$} \ncarc[arcangle=20]{->}{C}{E}\Aput{$1$} \ncarc[arcangle=20]{->}{E}{C}\Aput{$0.382$} \ncline{->}{E}{D}\Aput{$0.618$} \nccurve[angleA=30,angleB=-30,ncurv=5.5]{->}{D}{D}\Aput{$0.618$} \ncline{->}{D}{C}\Aput{$0.382$} \end{pspicture} \end{center} \caption{The graph representation of the hidden and output spaces of Example \ref{eg-Y1Y2-SFT}. The number on the edge is the transition probability. The left one represents $\mathbf{Y}^{(1)}$ and the right one represents $\mathbf{Y}^{(2)}$.} \label{fig-eg-Y1Y2-SFT} \end{figure} Suppose $\mathcal{A}(\mathbf{Y}) = \{z_1, z_2, z_3, z_4\}$, the factor map $\psi: \mathbf{Y} \to \mathbf{Y}^{(1)}$ is given by $$ \psi(z_1) = \psi(z_3) = y_1, \quad \psi(z_2) = \psi(z_4) = y_2 $$ Set $N = (N_{ij})_{i \leq i, j \leq 2}$ and $L_1, L_2$ by $$ N_{11} = N_{21} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ \end{pmatrix}, \quad N_{12} = N_{22} = \begin{pmatrix} 0 & 0 \\ 1 & 1 \\ \end{pmatrix} $$ and $L_1 = (0\ 1), L_2 = (g\ g)$, respectively. 
A straightforward calculation demonstrates that $$ L_1 N_{11} = 0 \cdot L_1, L_1 N_{12} = g^{-1} \cdot L_2, L_2 N_{11} = g \cdot L_1, L_2 N_{22} = 1 \cdot L_2. $$ That is, $N$ satisfies the Markov condition of order $1$. Theorem \ref{thm-sofic-measure-THM4} indicates that $\mathbf{Y}^{(1)}$ is a SFT with the unique maximal measure of entropy $\nu^{(1)}$, and $\nu^{(1),+} = (p_{\mathbf{Y}^{(1)}}, P_{\mathbf{Y}^{(1)}})$, where $p_{\mathbf{Y}^{(1)}} = (\dfrac{2-g}{3-g}, \dfrac{1}{3-g})$ and $$ P_{\mathbf{Y}^{(1)}} = \begin{pmatrix} 0 & 1 \\ 2-g & g-1 \\ \end{pmatrix} = stoch(M), \quad M = \begin{pmatrix} 0 & 1/g \\ g & 1 \\ \end{pmatrix}. $$ Applying Theorem \ref{main-thm-FSE}, we have \begin{align*} \dim W^{(1)} &= \dim \mathbf{Y}^{(1)}= 2 \frac{h_{\nu^{(1),+}}(\mathbf{Y}^{(1)})}{\log 2} \\ &= \frac{2}{(g-3)\log 2} ((2-g) \log (2-g) + (g-1) \log (g-1)) = 2 \frac{\log g}{\log 2}. \end{align*} On the other hand, $$ S^{(2)}S^{(2)} = \begin{pmatrix} \alpha_1 \alpha_2 & \varnothing & \alpha_1 \alpha_3 \\ \alpha_3 \alpha_2 & \alpha_2 \alpha_1 & \alpha_3 \alpha_3 \\ \alpha_3 \alpha_2 & \alpha_2 \alpha_1 & \alpha_3 \alpha_3 \\ \end{pmatrix} $$ infers that every word of length $3$ in $\mathbf{Y}^{(2)}$ is a synchronizing word. That is, $\mathbf{Y}^{(2)}$ is topological conjugate to $W^{(2)}$. Since the unique maximal measure of $W^{(2)}$ is $\mu^{(2)}$ with $\mu^{(2),+} = (p_{W^{(2)}}, P_{W^{(2)}})$, where $p_{W^{(2)}} = (\dfrac{2-g}{3-g}, \dfrac{2-g}{3-g}, \dfrac{g-1}{3-g})$, $$ P_{W^{(2)}} = \begin{pmatrix} 0 & 1 & 0 \\ 2-g & 0 & g-1 \\ 2-g & 0 & g-1 \\ \end{pmatrix}. $$ Theorem \ref{main-thm-FSE} suggests that \begin{align*} \dim W^{(2)} &= 2 \frac{h_{\mu^{(2),+}}(W^{(2)})}{\log 3} = 2 \frac{\log g}{\log 3}, \\ \intertext{and} \dim \mathbf{Y}^{(2)} &= 2 \frac{h_{\phi^{(2)}\mu^{(2),+}}(\mathbf{Y}^{(2)})}{\log 2} = 2 \frac{h_{\mu^{(2),+}}(W^{(2)})}{\log 2} = 2 \frac{\log g}{\log 2}. \end{align*} $\pi\mu^{(2)} = \mu^{(1)}$ can be verified without difficulty, thus we omit the details. \end{example} \begin{figure} \begin{center} \includegraphics[scale=0.7]{642427_2d} \caption{The fractal sets of the hidden and output spaces of Example \ref{eg-FSE-AI}. The templates are given by $[a^{(1)}, a_r^{(1)}, z^{(1)}] = [2.9, 1.7, 0.1]$ and $[a^{(2)}, a_r^{(2)}, b^{(2)}, b_r^{(2)}, z^{(2)}] = [-0.1, -1.1, 2.1, -1.4, 0.9]$. The output space $\mathbf{Y}^{(2)}$ is a strict sofic shift rather than a SFT. Meanwhile, the hidden space $\mathbf{Y}^{(1)}$ is a SFT.} \label{fig-642428} \end{center} \end{figure} \begin{example}\label{eg-FSE-AI} Suppose the template of the first layer is the same as in Example \ref{eg-Y1Y2-SFT}, and $$ [a^{(2)}, a_r^{(2)}, b^{(2)}, b_r^{(2)}, z^{(2)}] = [-0.1, -1.1, 2.1, -1.4, 0.9]. $$ The basic set of admissible local patterns of the solution space $\mathbf{Y}$ is $$ \mathcal{B} = \left\{ \boxed{-+ \atop \displaystyle -+}\,, \boxed{-- \atop \displaystyle -+}\,, \boxed{+- \atop \displaystyle +-}\,, \boxed{++ \atop \displaystyle +-}\,, \boxed{+- \atop \displaystyle ++}\,, \boxed{+- \atop \displaystyle --}\right\}. $$ The transition matrix $T$ of the solution space $\mathbf{Y}$ is $$ T = \begin{pmatrix} 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ \end{pmatrix}. 
$$ After careful examination, the hidden and output spaces are both mixing with symbolic transition matrices $$ S^{(1)} = \begin{pmatrix} \varnothing & \varnothing & \alpha_1 \\ \alpha_0 & \varnothing & \alpha_1 \\ \varnothing & \alpha_2 & \varnothing \\ \end{pmatrix}, \quad S^{(2)} = \begin{pmatrix} \varnothing & \varnothing & \alpha_1 & \varnothing \\ \alpha_2 & \varnothing & \varnothing & \varnothing \\ \varnothing & \alpha_3 & \varnothing & \alpha_2 \\ \varnothing & \varnothing & \alpha_1 & \varnothing \\ \end{pmatrix}. $$ See Figure \ref{fig-642428}. $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$ are FSE since $h(\mathbf{Y}^{(1)}) = h(\mathbf{Y}^{(2)}) = \log \rho$, where $\rho \approx 1.3247$ satisfies $\rho^3 - \rho - 1 = 0$. Let $$ E = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}. $$ Notably, $T^{(2)} E = E T^{(1)}$ and there exists no factor-like matrix $F$ such that $S^{(2)} F = F S^{(1)}$ or $S^{(1)} F = F S^{(2)}$. It follows from $S^{(1)}$ that every word of length $2$ in $\mathbf{Y}^{(1)}$ is a synchronizing word. Hence $\mathbf{Y}^{(1)} \cong W^{(1)}$. The unique maximal measure of entropy for $W^{(1),+}$ is $\mu^{(1),+} = (p_{W^{(1)}}, P_{W^{(1)}})$, where $p_{W^{(1)}} = (0.1770, 0.4115, 0.4115)$ and $$ P_{W^{(1)}} = \begin{pmatrix} 0 & 0 & 1 \\ 0.4302 & 0 & 0.5698 \\ 0 & 1 & 0 \\ \end{pmatrix}. $$ Hence \begin{align*} \dim W^{(1)} &= 2 \frac{h_{\mu^{(1),+}}(W^{(1)})}{\log 3} \approx 0.5119, \\ \intertext{and} \dim \mathbf{Y}^{(1)} &= 2 \frac{h_{\phi^{(1)}\mu^{(1),+}}(\mathbf{Y}^{(1)})}{\log 2} = 2 \frac{h_{\mu^{(1),+}}(W^{(1)})}{\log 2} \approx 0.8114. \end{align*} Unlike Example \ref{eg-Y1Y2-SFT}, it can be checked (with or without computer assistance) that $\mathbf{Y}^{(2)}$, rather than a SFT, is a strict sofic shift since there exists no $k \in \mathbb{N}$ such that every word of length $k$ is a synchronizing word in $\mathbf{Y}^{(2)}$. Nevertheless, there is a synchronizing word of length $2$ (that is, $\alpha_3 = ++$). Theorem \ref{main-thm-FSE} (i) indicates that there is a one-to-one correspondence between $\mathcal{M}_{\max}(W^{(2)})$ and $\mathcal{M}_{\max}(\mathbf{Y}^{(2)})$. Since the unique maximal measure of $W^{(2),+}$ is $\mu^{(2),+} = (p_{W^{(2)}}, P_{W^{(2)}})$, where $p_{W^{(2)}} = (0.1770, 0.1770, 0.4115, 0.2345)$ and $$ P_{W^{(2)}} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0.4302 & 0 & 0.5698 \\ 0 & 0 & 1 & 0 \\ \end{pmatrix}, $$ we have \begin{align*} \dim W^{(2)} &= 2 \frac{h_{\mu^{(2),+}}(W^{(2)})}{\log 4} \approx 0.4057, \\ \intertext{and} \dim \mathbf{Y}^{(2)} &= 2 \frac{h_{\phi^{(2)}\mu^{(2),+}}(\mathbf{Y}^{(2)})}{\log 2} = 2 \frac{h_{\mu^{(2),+}}(W^{(2)})}{\log 2} \approx 0.8114. \end{align*} \end{example} \begin{figure} \begin{center} \includegraphics[scale=0.7]{551539_2d} \caption{The fractal sets of the hidden and output spaces of Example \ref{eg-ITO-Y1-SFT}. The templates are given by $[a^{(1)}, a_r^{(1)}, z^{(1)}] = [2.9, 1.7, 0.1]$ and $[a^{(2)}, a_r^{(2)}, b^{(2)}, b_r^{(2)}, z^{(2)}] = [1.3, -1.2, 0.7, 2.3, 0.8]$. It is seen that the hidden space $\mathbf{Y}^{(1)}$ is the unit square $[0, 1] \times [0, 1]$, and so is $W^{(1)}$.
Moreover, there are infinite-to-one factor maps $\pi: W^{(1)} \to W^{(2)}$ and $\pi: \mathbf{Y}^{(1)} \to \mathbf{Y}^{(2)}$.} \label{fig-551539} \end{center} \end{figure} \begin{example}\label{eg-ITO-Y1-SFT} Suppose the template of the first layer is the same as in Example \ref{eg-Y1Y2-SFT}, and \begin{align*} [a^{(2)}, a_r^{(2)}, b^{(2)}, b_r^{(2)}, z^{(2)}] = [1.3, -1.2, 0.7, 2.3, 0.8]. \end{align*} Then the basic set of admissible local patterns is $$ \mathcal{B} = \left\{ \boxed{-- \atop \displaystyle --}\,, \boxed{-+ \atop \displaystyle --}\,, \boxed{-+ \atop \displaystyle +-}\,, \boxed{+- \atop \displaystyle -+}\,, \boxed{+- \atop \displaystyle ++}\,, \boxed{++ \atop \displaystyle --}\,, \boxed{++ \atop \displaystyle -+}\,, \boxed{++ \atop \displaystyle ++}\right\}. $$ The transition matrix $T$ of the solution space $\mathbf{Y}$, $$ T = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 1 \\ \end{pmatrix} $$ suggests that $\mathbf{Y}$ is mixing. It is not difficult to see that the symbolic transition matrices of the hidden and output spaces are $$ S^{(1)} = \begin{pmatrix} \alpha_0 & \alpha_1 \\ \alpha_2 & \alpha_3 \\ \end{pmatrix} \quad and \quad S^{(2)} = \begin{pmatrix} \alpha_0 & \varnothing & \alpha_1 & \varnothing & \varnothing \\ \varnothing & \varnothing & \alpha_1 & \varnothing & \varnothing \\ \varnothing & \alpha_2 & \varnothing & \alpha_3 & \varnothing \\ \varnothing & \varnothing & \varnothing & \alpha_3 & \alpha_2 \\ \alpha_0 & \varnothing & \alpha_1 & \varnothing & \varnothing \\ \end{pmatrix} $$ respectively. See Figure \ref{fig-551539} for the fractal sets of $\mathbf{Y}^{(1)}$ and $\mathbf{Y}^{(2)}$. Obviously $\mathbf{Y}^{(1)}$ is a full $2$-shift. It is remarkable that $\phi^{(1)} \mu^{(1)}$ is not a Markov measure. The unique maximal measure for $W^{(1),+}$ (also for $\mathbf{Y}^{(1),+}$) is the uniform Bernoulli measure $\mu^{(1),+} = (1/2, 1/2)$. Therefore, $$ \dim W^{(1)} = \dim \mathbf{Y}^{(1)} = 2 \frac{h_{\mu^{(1),+}}(W^{(1)})}{\log 2} = 2. $$ Since $h(W^{(2)}) = \log \rho$, where $\rho \approx 1.8668$ satisfies $\rho^4 - 2 \rho^3 + \rho - 1 = 0$, the factor map $\pi: W^{(1)} \to W^{(2)}$ must be infinite-to-one if it exists. The fact $W^{(2)}$ has two fixed points, which can be seen from $T^{(2)}$, asserts that there exists an infinite-to-one factor map $\pi: W^{(1)} \to W^{(2)}$ by Theorem \ref{thm-inf-to-1-SFT}. However, it is difficult to find the explicit form of $\pi$. Since the unique maximal measure of $W^{(2),+}$ is $\mu^{(2),+} = (p_{W^{(2)}}, P_{W^{(2)}})$ with $p_{W^{(2)}} = (0.1888, 0.0658, 0.2294, 0.3524, 0.1636)$ and $$ P_{W^{(2)}} = \begin{pmatrix} 0.5357 & 0 & 0.4643 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0.2870 & 0 & 0.7130 & 0 \\ 0 & 0 & 0 & 0.5357 & 0.4643 \\ 0.5357 & 0 & 0.4643 & 0 & 0 \\ \end{pmatrix}, $$ the Hausdorff dimension of $W^{(2)}$ is $$ \dim W^{(2)} = 2 \frac{h_{\mu^{(2),+}}(W^{(2)})}{\log 5} \approx 0.7758. $$ Since $W^{(2)}$ is mixing, we have $$ \dim \mathbf{Y}^{(2)} = 2 \frac{h_{\nu^{(2),+}}(\mathbf{Y}^{(2)})}{\log 2} = 2 \frac{h_{\phi^{(2)}\mu^{(2),+}}(\mathbf{Y}^{(2)})}{\log 2} = 2 \frac{h_{\mu^{(2),+}}(W^{(2)})}{\log 2} \approx 1.8012. $$ As a conclusion, in the present example, an infinite-to-one factor map is associated with a different Hausdorff dimension. \end{example} \begin{figure} \begin{center} \includegraphics{642529_2d} \caption{The fractal sets of the hidden and output spaces of Example \ref{eg-ITO-sofic}. 
The templates are given by $[a^{(1)}, a_r^{(1)}, z^{(1)}] = [2.9, 1.7, 0.1]$ and $[a^{(2)}, a_r^{(2)}, b^{(2)}, b_r^{(2)}, z^{(2)}] = [0.7, -1.1, 2.1, -1.4, 1.7]$. It is demonstrated that there is an infinite-to-one factor map $\pi: W^{(1)} \to W^{(2)}$, and $\mathbf{Y}^{(1)}, \mathbf{Y}^{(2)}$ are strictly sofic.} \label{fig-642529} \end{center} \end{figure} \begin{example}\label{eg-ITO-sofic} Suppose the template of the first layer is the same as in Example \ref{eg-Y1Y2-SFT}, and $$ [a^{(2)}, a_r^{(2)}, b^{(2)}, b_r^{(2)}, z^{(2)}] = [0.7, -1.1, 2.1, -1.4, 1.7]. $$ The basic set of admissible local patterns of the solution space $\mathbf{Y}$ is $$ \mathcal{B} = \left\{ \boxed{-+ \atop \displaystyle -+}\,, \boxed{-- \atop \displaystyle -+}\,, \boxed{+- \atop \displaystyle +-}\,, \boxed{++ \atop \displaystyle +-}\,, \boxed{+- \atop \displaystyle ++}\,, \boxed{+- \atop \displaystyle --}\,, \boxed{++ \atop \displaystyle ++}\right\}. $$ The transition matrix $T$ of the solution space $\mathbf{Y}$ is $$ T = \begin{pmatrix} 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ \end{pmatrix}. $$ A straightforward examination shows that the hidden and output spaces are both mixing with symbolic transition matrices $$ S^{(1)} = \begin{pmatrix} \varnothing & \varnothing & \alpha_1 \\ \alpha_0 & \varnothing & \alpha_1 \\ \varnothing & \alpha_2 & \alpha_3 \\ \end{pmatrix}, \quad S^{(2)} = \begin{pmatrix} \varnothing & \alpha_2 & \alpha_3 \\ \alpha_1 & \varnothing & \varnothing \\ \varnothing & \alpha_0 & \alpha_3 \\ \end{pmatrix}. $$ $h(\mathbf{Y}^{(1)}) = \log \rho$ and $h(\mathbf{Y}^{(2)}) = \log g$, where $\rho \approx 1.8393$ satisfies $\rho^3 - \rho^2 - \rho - 1 = 0$. See Figure \ref{fig-642529}. Since $W^{(2)}$ has a fixed point, Theorem \ref{thm-inf-to-1-SFT} implies that there is an infinite-to-one factor map $\pi: W^{(1)} \to W^{(2)}$. The unique maximal measure of $W^{(1),+}$ is $\mu^{(1),+} = (p_{W^{(1)}}, P_{W^{(1)}})$ with $p_{W^{(1)}} = (0.0994, 0.2822, 0.6184)$ and $$ P_{W^{(1)}} = \begin{pmatrix} 0 & 0 & 1 \\ 0.3522 & 0 & 0.6478 \\ 0 & 0.4563 & 0.5437 \\ \end{pmatrix}. $$ This suggests $$ \dim W^{(1)} = 2 \frac{h_{\mu^{(1),+}}(W^{(1)})}{\log 3} \approx 1.1094. $$ The symbolic transition matrix $S^{(1)}$ shows that every word of length $2$ in $\mathbf{Y}^{(1)}$ is a synchronizing word, hence $\mathbf{Y}^{(1)}$ is topologically conjugate to $W^{(1)}$ and $$ \dim \mathbf{Y}^{(1)} = 2 \frac{h_{\nu^{(1),+}}(\mathbf{Y}^{(1)})}{\log 2} = 2 \frac{h_{\phi^{(1)} \mu^{(1),+}}(\mathbf{Y}^{(1)})}{\log 2} \approx 1.7582. $$ On the other hand, it is verified that the unique maximal measure of $W^{(2),+}$ is $\mu^{(2),+} = (p_{W^{(2)}}, P_{W^{(2)}})$ with $p_{W^{(2)}} = (\dfrac{2-g}{3-g}, \dfrac{2-g}{3-g}, \dfrac{g-1}{3-g})$ and $$ P_{W^{(2)}} = \begin{pmatrix} 0 & 2-g & g-1 \\ 1 & 0 & 0 \\ 0 & 2-g & g-1 \\ \end{pmatrix}. $$ Since every word of length $2$ in $\mathbf{Y}^{(2)}$ is a synchronizing word, we have $$ \dim W^{(2)} = 2 \frac{h_{\mu^{(2),+}}(W^{(2)})}{\log 3} = 2 \frac{\log g}{\log 3} \approx 0.8760, $$ and $$ \dim \mathbf{Y}^{(2)} = 2 \frac{h_{\nu^{(2),+}}(\mathbf{Y}^{(2)})}{\log 2} = 2 \frac{h_{\phi^{(2)} \mu^{(2),+}}(\mathbf{Y}^{(2)})}{\log 2} \approx 1.3884. $$ \end{example} \section{Relation Between the Hausdorff Dimension of Two Hidden Spaces} Theorems \ref{main-thm-FSE} and \ref{main-thm-ITO} can be extended to two spaces that are induced from a general $n$-layer cellular neural network \eqref{eq-general-system} via a discussion analogous to that in the previous sections.
Hence we illustrate the results without providing a detailed argument. The solution space $\mathbf{Y}$ of \eqref{eq-general-system} is determined by \begin{align*} \mathcal{B} &\equiv \mathcal{B}(A^{(1)}, \ldots, A^{(n)}, B^{(1)}, \ldots, B^{(n)}, z^{(1)}, \ldots, z^{(n)}) \\ &= \left\{ \boxed{y_{-d}^{(n)} \cdots y_{-1}^{(n)} y_0^{(n)} y_1^{(n)} \cdots y_d^{(n)} \atop {\displaystyle \vdots \atop {\displaystyle y_{-d}^{(2)} \cdots y_{-1}^{(2)} y_0^{(2)} y_1^{(2)} \cdots y_d^{(2)} \atop {\displaystyle y_{-d}^{(1)} \cdots y_{-1}^{(1)} y_0^{(1)} y_1^{(1)} \cdots y_d^{(1)}}}}}\right\} \subseteq \{-1, 1\}^{\mathbb{Z}_{(2d+1) \times n}}. \end{align*} For $1 \leq \ell \leq n$, set $$ \mathcal{L}^{(\ell)}(y_{-d}^{(n)} \cdots y_d^{(n)} \diamond \cdots \diamond y_{-d}^{(1)} \cdots y_d^{(1)}) = y_{-d}^{(\ell)} \cdots y_d^{(\ell)}. $$ The hidden space $\mathbf{Y}^{(\ell)}$ is then defined by $\mathcal{L}^{(\ell)}$ as before. (For simplicity, we also call $\mathbf{Y}^{(n)}$ a hidden space instead of the output space.) Similarly, $\mathbf{Y}^{(\ell)}$ is a sofic shift with respect to a right-resolving finite-to-one factor map $\phi^{(\ell)}: W^{(\ell)} \to \mathbf{Y}^{(\ell)}$ and a SFT $W^{(\ell)}$. Furthermore, $W^{(\ell)}$ can be described by the transition matrix $T^{(\ell)}$ while $\mathbf{Y}^{(\ell)}$ can be completely described by the symbolic transition matrix $S^{(\ell)}$. For $1 \leq i, j \leq n$, without the loss of generality, we assume that $h(\mathbf{Y}^{(i)}) \geq h(\mathbf{Y}^{(j)})$ and $\mathcal{A}(\mathbf{Y}^{(i)}) \geq \mathcal{A}(\mathbf{Y}^{(j)})$. \begin{proposition} Suppose $h(\mathbf{Y}^{(i)}) = h(\mathbf{Y}^{(j)})$. If there exists a factor-like matrix $E$ such that $S^{(i)} E = E S^{(j)}$, then there are finite-to-one factor maps $\pi_{ij}: W^{(i)} \to W^{(j)}$ and $\overline{\pi}_{ij}: \mathbf{Y}^{(i)} \to \mathbf{Y}^{(j)}$. For the case where $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$ attain distinct topological entropies, there is an infinite-to-one factor map $\pi_{ij}: W^{(i)} \to W^{(j)}$ if $|\mathcal{A}(W^{(i)})| > |\mathcal{A}(\mathbf{Y})|$ and there exists a factor-like matrix $F$ such that $T^{(i)} F = F T$. \end{proposition} The relation of the Hausdorff dimension of $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$, if it exists, is organized as follows. \begin{theorem}\label{main-thm-general} Suppose $W^{(i)}$ and $W^{(j)}$ are irreducible SFTs, and there exists a factor map $\pi_{ij}: W^{(i)} \to W^{(j)}$. \begin{enumerate}[\bf {Case} I.] \item $\mathbf{Y}^{(i)}, \mathbf{Y}^{(j)}$ share the same topological entropy. \begin{enumerate}[\bf a)] \item There is a one-to-one correspondence between $\mathcal{M}_{\max}(W^{(\ell)})$ and $\mathcal{M}_{\max}(\mathbf{Y}^{(\ell)})$, where $\ell = i, j$. \item Let $m_{\ell} = |\mathcal{A}(W^{(\ell)})|, n_{\ell} = |\mathcal{A}(\mathbf{Y}^{(\ell)})|$, and $\mu^{(\ell)}$ be a maximal measure of $W^{(\ell)}$. If $\phi^{(i)}$ has a synchronizing word, then $$ \dim W^{(\ell)} = \frac{h_{\mu^{(\ell)}}(W^{(\ell)})}{\log m_{\ell}} \quad \text{and} \quad \dim \mathbf{Y}^{(\ell)} = \frac{h_{\mu^{(\ell)}}(W^{(\ell)})}{\log n_{\ell}}. $$ \item Suppose $\nu^{(\ell)} = \phi^{(\ell)} \mu^{(\ell)}$. If $$ \dim \mathbf{Y}^{(i)} = \frac{h_{\nu^{(i)}}(\mathbf{Y}^{(i)})}{\log n_i}, $$ then $$ \dim \mathbf{Y}^{(j)} = \frac{h_{\overline{\pi}\nu^{(i)}}(\mathbf{Y}^{(j)})}{\log n_j} = \frac{h_{\nu^{(j)}}(\mathbf{Y}^{(j)})}{\log n_j} $$ for some $\overline{\pi}$. 
\end{enumerate} \item $\mathbf{Y}^{(i)}, \mathbf{Y}^{(j)}$ are associated with distinct topological entropies. \begin{enumerate}[\bf a)] \item Suppose $\pi_{ij}: W^{(i)} \to W^{(j)}$ is a uniform factor. If $$ \dim W^{(i)} = \frac{h_{\mu^{(i)}}(W^{(i)})}{\log m_i}, $$ then $$ \dim W^{(j)} = \frac{h_{\mu^{(j)}}(W^{(j)})}{\log m_{j}} = \frac{h_{\pi\mu^{(i)}}(W^{(j)})}{\log m_j}. $$ \item If $\phi^{(i)}$ has a synchronizing word, then there exists a factor map $\overline{\pi}: \mathcal{M}_{\max}(\mathbf{Y}^{(i)}) \to \mathcal{M}_{\max}(\mathbf{Y}^{(j)})$. \item If $$ \dim \mathbf{Y}^{(i)} = \dfrac{h_{\nu^{(i)}}(\mathbf{Y}^{(i)})}{\log n_i}, $$ then $$ \dim \mathbf{Y}^{(j)} = \dfrac{h_{\overline{\pi} \nu^{(i)}}(\mathbf{Y}^{(j)})}{\log n_j}. $$ \end{enumerate} \end{enumerate} \end{theorem} We conclude this section via the flow chart (cf.~Figure \ref{fig-flow-chart}), which explains Theorem \ref{main-thm-general} more clearly. \begin{figure} \begin{center} \begin{pspicture}(9,12) \rput(5,12){\ovalnode{A}{\shortstack{MCNN \\ $\mathbf{Y}^{(i)}, \mathbf{Y}^{(j)}$}}} \rput(1,10){\ovalnode{B}{$h(\mathbf{Y}^{(i)}) \neq h(\mathbf{Y}^{(j)})$}} \rput(9,10){\ovalnode{C}{$h(\mathbf{Y}^{(i)}) = h(\mathbf{Y}^{(j)})$}} \rput(1,8){\ovalnode{D}{\shortstack{Infinite-To-One \\ $\pi_{ij}: W^{(i)} \to W^{(j)}$}}} \rput(9,8){\ovalnode{E}{\shortstack{Finite-To-One \\ $\pi_{ij}: W^{(i)} \to W^{(j)}$}}} \rput(1,6){\ovalnode{F}{$\overline{\pi}_{ij}: \mathbf{Y}^{(i)} \to \mathbf{Y}^{(j)}$}} \rput(9,6){\ovalnode{G}{$\overline{\pi}_{ij}: \mathbf{Y}^{(i)} \to \mathbf{Y}^{(j)}$}} \rput(1,4){\ovalnode{H}{Uniform Factor}} \rput(9,4){\ovalnode{I}{Almost Invertible}} \rput(5,2){\ovalnode{J}{\shortstack{Hausdorff Dimension \\ Related}}} \ncline{->}{A}{B} \ncline{->}{A}{C} \ncline{->}{B}{D} \ncline{->}{C}{E}\Aput{Factor-Like for $T^{(i)}, T^{(j)}$} \ncline{->}{D}{F}\Bput{$\phi^{(i)}$ conjugacy} \ncline{->}{E}{G}\Aput{$\phi^{(i)}$ conjugacy} \ncarc[arcangle=290,ncurv=1.5]{->}{C}{G}\Bput{\shortstack{Factor-Like \\ for $S^{(i)}, S^{(j)}$}} \ncline{->}{F}{H}\Bput{Markov Condition} \ncline{->}{G}{I}\Aput{Synchronizing Word} \ncline{->}{H}{J} \ncline{->}{I}{J} \end{pspicture} \end{center} \caption{The flow chart of the existence of factor maps for any two hidden spaces.} \label{fig-flow-chart} \end{figure} \section{Conclusion and Further Discussion} This investigation elucidates whether there is a factor map $\pi$ (respectively $\overline{\pi}$) connecting $W^{(i)}$ and $W^{(j)}$ (respectively $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$). If a factor map does exist, the push-forward measure of a maximal measure is also a maximal measure provided the factor map is either finite-to-one or uniform. Moreover, the Hausdorff dimensions of the two spaces are thus related. Topological entropy provides a medium that makes the discussion clearer. When $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$ are FSE, the existence of a factor-like matrix implies the existence of a factor map $\overline{\pi}$. With the assistance of computer programs we can rapidly determine whether there exists a factor-like matrix for a given MCNN. Moreover, the factor map $\overline{\pi}$ can be expressed in an explicit form. In most cases, however, there is no factor-like matrix for $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$. \begin{problem} Suppose there is a factor map between $\mathbf{Y}^{(i)}$ and $\mathbf{Y}^{(j)}$. Is $\dim \mathbf{Y}^{(i)}$ related to $\dim \mathbf{Y}^{(j)}$?
Or, equivalently, is there a one-to-one correspondence between $\mathcal{M}_{\max}(\mathbf{Y}^{(i)})$ and $\mathcal{M}_{\max}(\mathbf{Y}^{(j)})$? \end{problem} A partial answer to the above problem comes from the existence of synchronizing words. Lemma \ref{lem-ai-sync-word} demonstrates that, if $\phi^{(i)}$/$\phi^{(j)}$ has a synchronizing word, then $\phi^{(i)}$/$\phi^{(j)}$ is almost invertible. This yields a one-to-one correspondence between $\mathcal{M}_{\max}(\mathbf{Y}^{(i)})$ and $\mathcal{M}_{\max}(\mathbf{Y}^{(j)})$. \begin{problem} How large is the portion of almost invertible maps in the collection of factor maps? \end{problem} If $h(\mathbf{Y}^{(i)}) \neq h(\mathbf{Y}^{(j)})$, on the other hand, we propose a criterion for the existence of factor maps, but we do not obtain the explicit form of the factor map. \begin{problem} Can we find some methodology so that we can write down the explicit form of a factor map if it exists? \end{problem} For the case where $h(\mathbf{Y}^{(i)}) \neq h(\mathbf{Y}^{(j)})$, a uniform factor provides the one-to-one correspondence between the maximal measures of the two spaces. When the Markov condition is satisfied, Theorem \ref{thm-uniform-iff-W} provides an if-and-only-if criterion. Notably, we can use Theorem \ref{thm-uniform-iff-W} only if the explicit form of the factor map is known. Therefore, the most difficult part is the determination of a uniform factor. \begin{problem} How does one find, in general, a uniform factor? \end{problem} \bibliographystyle{amsplain}
2,869,038,156,156
arxiv
\section{Introduction} Decomposition, the observation that a local quantum field theory is sometimes a disjoint union of other local quantum field theories, has by now been extensively studied since its initial observation in \cite{Hellerman:2006zs} in two-dimensional gauge theories with trivially-acting subgroups, see for example \cite{ajt1,ajt2,ajt3,t1,gt1,xt1,Caldararu:2007tc,Hellerman:2010fv, Anderson:2013sia,Sharpe:2014tca,Sharpe:2019ddn,Tanizaki:2019rbk,Cherman:2020cvw,Cherman:2021nox,Robbins:2020msp,Robbins:2021ibx, Robbins:2021lry,Robbins:2021xce,Eager:2020rra,Komargodski:2020mxz,Yu:2021zmu, Nguyen:2021yld,Nguyen:2021naa,Honda:2021ovk,Huang:2021zvu}. A few reviews can be found in \cite{Sharpe:2006vd,Sharpe:2010zz,Sharpe:2010iv,Sharpe:2019yag,Sharpe:2022ene}. Although decomposition was originally observed in two dimensional theories, it has also been observed in four-dimensional theories, see for example \cite{Tanizaki:2019rbk,Cherman:2020cvw}. The purpose of this paper is to discuss examples in three dimensions, where it has not previously been studied. Globally, decomposition is expected to take place in any theory in $d$ spacetime dimensions with a global $(d-1)$-form symmetry (possibly realized noninvertibly) \cite{Tanizaki:2019rbk,Cherman:2020cvw}. One way to produce such a symmetry is via a suitable gauging. In broad brushstrokes, gauging a trivially-acting $n$-form symmetry results in a theory with a global $(n+1)$-form symmetry (distinct from the quantum symmetry), so one can hope to produce a $d$-dimensional theories with a decomposition by gauging a trivially-acting $(d-2)$-form symmetry. For example, ordinary gauge theories with trivially-acting subgroups are a source of examples in two dimensions, as mentioned above, because such theories have a global one-form symmetry (distinct from the quantum symmetry, and tied specifically to the fact that the group acts trivially). In this paper, we study three-dimensional gauge theories with gauged trivially-acting one-form symmetries. Gauging the trivially-acting one-form symmetry leads to a global two-form symmetry, hence, in three dimensions, a decomposition. Specifically, in this paper we describe orbifolds of three-dimensional effective\footnote{ We emphasize that because we often discuss orbifolds of three-dimensional sigma models, we understand those sigma models as effective field theories, not necessarily renormalizable theories. Our methods also apply to more general three-dimensional theories, such as, for example, Chern-Simons theories. } field theories by 2-groups, which are extensions of ordinary (here, finite) groups by one-form symmetries. (See for example \cite{baezlauda} for a mathematical introduction to 2-groups. These structures have a long history in both math and physics, see for example \cite{yetter,br,mackaay,porter,fmp,fhlt,Schommer-Pries:2011vyj,Baez:2005sn,Nikolaus:2011zg,Freed:1994ad,Baez:2010ya,Sati:2008eg,Pfeiffer:2003je,Frohlich:2009gb,Carqueville:2012dk,Carqueville:2013mpa,Brunner:2013ota,Brunner:2014lua,Carqueville:2015pra,Schreiber:2008uk,Sati:2009ic,Fiorenza:2010mh,sw,Kim:2019owc,Fiorenza:2012tb,Fiorenza:2013jz} for a few older instances, and \cite{Sharpe:2015mja,Cordova:2018cvg,Benini:2018reh,Cordova:2020tij,DelZotto:2020sop,Iqbal:2020lrt,Lee:2021crt,Fiorenza:2020iax,Sati:2020nob} for some more recent physics descriptions and applications of 2-groups.) 
When those one-form symmetries act trivially, gauging them results in a global two-form symmetry, hence a decomposition as above, which we will check explicitly. We begin in section~\ref{sect:ordinary-case} by reviewing two-dimensional orbifolds by central extensions of $G$ by trivially-acting $K$, and how a decomposition arises in such orbifolds. In particular, decomposition implements a restriction on nonperturbative sectors. In an orbifold the nonperturbative sectors are the twisted sectors, and in these orbifolds those twisted sectors are restricted to those describing $G$ bundles satisfying a condition. The restriction is implemented physically by a sum over $G$ orbifolds, namely the decomposition, realizing a `multiverse interference effect' between the constituent $G$ orbifolds (`universes'). An important role in that decomposition is played by discrete torsion, so in section~\ref{sect:3ddt} we review three-dimensional analogues of discrete torsion, counted by $H^3(G,U(1))$. In section~\ref{sect:2gp-orb} we turn to the main content of this paper: we define and study orbifolds by 2-group extensions of ordinary (finite) groups $G$ by trivially-acting one-form symmetry groups $BK$. Just as in two-dimensional cases, the nonperturbative sectors correspond to $G$ bundles satisfying a condition. We argue that, also just as in two-dimensional cases, that restriction implies (and is implemented by) a decomposition of the three-dimensional theory, with universes indexed by irreducible representations of $K$, which we study explicitly in several examples. In section~\ref{sect:sigma-2gerbe} we interpret this structure formally in terms of a sigma model whose target is a 2-gerbe. In section~\ref{sect:higher} we outline higher-dimensional analogues and their interpretations. In section~\ref{sect:cs} we briefly outline analogous decompositions in Chern-Simons theories with gauged one-form symmetry group actions, which will be further addressed in other work to appear. In appendix~\ref{app:homotopy}, we give mathematically rigorous derivations of statements about bundles of 2-groups. In appendix~\ref{app:duality} we formally discuss decomposition as a duality transform, as a type of Fourier transform. Finally, in appendix~\ref{app:gpcohom} we collect some results on group cohomology that are used in computations in the main text. Higher-dimensional orbifolds have also been discussed in e.g.~\cite{Freed:1991bn,Carqueville:2017aoe}. We believe our observations in this paper are novel. \section{Review: decomposition in ordinary orbifolds} \label{sect:ordinary-case} In this section we will review decomposition of two-dimensional orbifolds in which a central subgroup of the orbifold group acts trivially. The fact that such orbifolds are equivalent to (`decompose into') disjoint unions of other theories was worked out in \cite{Hellerman:2006zs}; however, our presentation of the phenomenon here has not been previously published, and is the prototype for our discussion of decomposition in 2-group orbifolds later. Let $X$ be a space, and $G$ a finite group acting on $X$. Let $\Gamma$ be a central extension of $G$ by a finite abelian group $K$: \begin{equation} 1 \: \longrightarrow \: K \: \longrightarrow \: \Gamma \: \longrightarrow \: G \: \longrightarrow \: 1. \end{equation} Such extensions are classified by elements of $H^2(G,K)$. 
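As a simple illustration of this classification, take $G = {\mathbb Z}_2$, generated by $a$, and $K = {\mathbb Z}_2$. Then $H^2(G,K) = {\mathbb Z}_2$: the trivial class corresponds to the split extension $\Gamma = {\mathbb Z}_2 \times {\mathbb Z}_2$, while the nontrivial class, represented by a normalized cocycle with $\omega(a,a)$ equal to the generator of $K$, corresponds to $\Gamma = {\mathbb Z}_4$, since the lift $(a,1) \in \Gamma$ then squares to the generator of $K$ rather than to the identity. (We include this standard example only for orientation; compare the remark on $G = {\mathbb Z}_2 = K$ below.)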
Briefly, the statement of decomposition here is that \cite{Hellerman:2006zs} \begin{equation} \label{eq:decomp:basic} {\rm QFT}\left( [X/\Gamma] \right) \: = \: \coprod_{\rho \in \hat{K}} {\rm QFT}\left( [X/G]_{\rho(\omega)} \right), \end{equation} where $\hat{K}$ denotes irreducible representations of $K$, and $\rho(\omega) \in H^2(G,U(1))$ is the image $\rho \circ \omega$ of the extension class $\omega$ under $\rho \in \hat{K}$. (Decomposition is also defined for more general orbifolds \cite{Hellerman:2006zs,Robbins:2020msp,Robbins:2021ibx}, but for our purposes in this paper, the special case of central extensions above will suffice.) Next, we establish this decomposition by computing partition functions. First, recall that the extension $\Gamma$ can be described set-wise as a product $G \times K$, with product deformed by an element $[\omega] \in H^2(G,K)$. Write elements $\gamma_i = (g_i, k_i) \in \Gamma$ set-wise as pairs in the product $G \times K$; then the product in $\Gamma$ is defined by \begin{equation} \gamma_1 \gamma_2 \: = \: (g_1,k_1)\,(g_2,k_2) \: = \: \left( g_1 g_2, k_1 k_2 \omega(g_1, g_2) \right). \end{equation} In the partition function of a two-dimensional orbifold $[X/\Gamma]$ on $T^2$, we sum over commuting pairs of group elements in $\Gamma$, but clearly the condition for $\gamma_1$ and $\gamma_2$ to commute is equivalent to $g_1$ commuting with $g_2$ and \begin{equation} \frac{ \omega(g_1, g_2) }{ \omega(g_2, g_1) } \: = \: 1. \end{equation} Define \begin{equation} \epsilon(g_1,g_2) \: = \: \frac{ \omega(g_1, g_2) }{ \omega(g_2, g_1) }, \end{equation} then it is straightforward to demonstrate that \begin{equation} \epsilon(a,bc) \: = \: \epsilon(a,b) \epsilon(a,c), \end{equation} (and symmetrically,) so as a consequence, $\epsilon$ is invariant under conjugation\footnote{ We restrict to the same $h$ on each input because $\epsilon$ is only defined on commuting pairs. }: \begin{eqnarray} \epsilon(h a h^{-1}, h b h^{-1}) & = & \epsilon(h a h^{-1}, h) \, \epsilon(h a h^{-1}, b) \, \epsilon(h a h^{-1}, h^{-1}), \nonumber\\ & = & \epsilon(h a h^{-1}, b), \nonumber\\ & = & \epsilon(h, b) \, \epsilon(a,b) \, \epsilon(h^{-1}, b), \nonumber\\ & = & \epsilon(a,b). \end{eqnarray} In particular, this descends to isomorphism classes of $G$ bundles, which on $T^2$ are classified by Hom$(\pi_1(T^2),G)/G$. We can view $\epsilon$ as assigning a phase to each such bundle. Thus, the partition function of a two-dimensional $[X/\Gamma]$ orbifold looks like the partition function of a $[X/G]$ orbifold but with a restriction on the allowed sectors. We can implement that projection on allowed sectors by inserting an operator \begin{equation} \Pi(g_1, g_2) \: = \: \frac{1}{|K|} \sum_{\rho \in \hat{K}} \epsilon_{\rho}(g_1, g_2), \end{equation} where $\epsilon_{\rho}$ is the image of $\omega(g_1, g_2)/\omega(g_2, g_1)$ under $\rho: K \rightarrow U(1)$. This is the origin of decomposition \cite{Hellerman:2006zs}. Assembling these pieces, and taking into account numerical factors, we then have \begin{eqnarray} Z_{T^2}\left( [X/\Gamma] \right) & = & \frac{|K|^2}{|\Gamma|} \sum_{gh=hg, \epsilon=1} Z(g,h), \nonumber\\ & = & \frac{|K|^2}{|\Gamma|} \frac{|G|}{|K|} \sum_{\rho \in \hat{K}} Z\left( [X/G]_{\rho(\omega)} \right), \nonumber\\ & = & \sum_{\rho \in \hat{K}} Z\left( [X/G]_{\rho(\omega)} \right). \end{eqnarray} Thus, we see that partition functions are consistent with the prediction of decomposition~(\ref{eq:decomp:basic}).
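As a concrete sanity check of this counting, consider the special case in which the target $X$ is a point, so that every twisted-sector contribution satisfies $Z(g,h) = 1$, and take $\Gamma$ to be the quaternion group, realized as a central extension of $G = {\mathbb Z}_2 \times {\mathbb Z}_2$ by $K = {\mathbb Z}_2$. The short script below is a minimal illustrative sketch in Python (the particular cocycle used is one standard choice, made purely for the illustration); it verifies numerically that $Z_{T^2}([X/\Gamma])$ agrees with $\sum_{\rho \in \hat{K}} Z([X/G]_{\rho(\omega)})$, both sides equalling $5$.
\begin{verbatim}
# T^2 decomposition check for X = a point (so Z(g,h) = 1 in every sector).
# Gamma = G x_omega K with G = Z2 x Z2, K = Z2; this cocycle gives Gamma = Q8.
from itertools import product

G = list(product([0, 1], repeat=2))       # Z2 x Z2, written additively
K = [0, 1]                                # Z2, written additively

def omega(g1, g2):
    # A bilinear (hence cocycle) choice of 2-cocycle on G with values in K.
    (a1, b1), (a2, b2) = g1, g2
    return (a2 * b1 + a1 * a2 + b1 * b2) % 2

def g_mult(g1, g2):
    return ((g1[0] + g2[0]) % 2, (g1[1] + g2[1]) % 2)

def gamma_mult(x, y):
    (g1, k1), (g2, k2) = x, y
    return (g_mult(g1, g2), (k1 + k2 + omega(g1, g2)) % 2)

# Cocycle condition = associativity of the twisted product on Gamma.
assert all((omega(g1, g2) + omega(g_mult(g1, g2), g3)) % 2
           == (omega(g2, g3) + omega(g1, g_mult(g2, g3))) % 2
           for g1 in G for g2 in G for g3 in G)

Gamma = list(product(G, K))

# Left-hand side: Z_{T^2}([pt/Gamma]) = (1/|Gamma|) #{commuting pairs}.
lhs = sum(gamma_mult(x, y) == gamma_mult(y, x)
          for x in Gamma for y in Gamma) / len(Gamma)

# Right-hand side: sum over irreps rho of K of Z([pt/G]_{rho(omega)}),
# with the discrete torsion phase rho applied to omega(g,h)/omega(h,g).
def Z_G(rho):                             # rho = +1 (trivial) or -1 (sign)
    return sum(rho ** ((omega(g, h) - omega(h, g)) % 2)
               for g in G for h in G) / len(G)

rhs = Z_G(+1) + Z_G(-1)
assert lhs == rhs == 5
\end{verbatim}
Replacing $\omega$ above by the trivial cocycle gives $\Gamma = ({\mathbb Z}_2)^3$, and both sides then equal $8$, again as expected.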
In passing, note that in the case $G = {\mathbb Z}_2 = K$, $H^2(G,K) = {\mathbb Z}_2$ (and hence has nontrivial elements), but for all $[\omega] \in H^2(G,K)$, and all commuting pairs, \begin{equation} \frac{ \omega(g_1,g_2) }{ \omega(g_2,g_1) } \: = \: 1. \end{equation} Thus, triviality of the ratio of cocycles can happen even if $\omega$ is a nontrivial cohomology class. Our analysis above was specific to the case that the worldsheet is $T^2$, but it generalizes easily to other genus. Before considering general genus, let us next walk through the case of genus $2$. Let $\gamma_i = (a_i, k_i) \in \Gamma$, $\lambda_i = (b_i, z_i) \in \Gamma$, $i \in \{1, 2 \}$, obeying the condition \begin{equation} \label{eq:genus2-condition} [ \gamma_1, \lambda_1] \, [\gamma_2, \lambda_2] \: = \: 1, \end{equation} for \begin{equation} [g,h] \: = \: g h g^{-1} h^{-1}, \end{equation} and define \begin{equation} \xi_1 \: = \: [a_1,b_1] \: = \: a_1 b_1 a_1^{-1} b_1^{-1}. \end{equation} Then, using the fact that \begin{equation} \gamma_i^{-1} \: = \: \left(a_i^{-1}, k_i^{-1} \omega(a_i,a_i^{-1})^{-1} \right), \: \: \: \lambda_i^{-1} \: = \: \left( b_i^{-1}, z_i^{-1} \omega(b_i, b_i^{-1})^{-1} \right), \end{equation} it is straightforward to compute that \begin{eqnarray} [\gamma_1, \lambda_1] & = & \left( [a_1, b_1], \omega(a_1,b_1) \, \omega(a_1 b_1, a_1^{-1}) \, \omega(a_1 b_1 a_1^{-1}, b_1^{-1}) \, \right. \nonumber \\ & & \hspace*{1.1in} \left. \cdot \omega(a_1,a_1^{-1})^{-1} \, \omega(b_1, b_1^{-1})^{-1} \right), \end{eqnarray} \begin{eqnarray} [\gamma_1,\lambda_1] \, [\gamma_2, \lambda_2] & = & \left( [a_1, b_1] [a_2, b_2], \omega(a_1,b_1) \, \omega(a_1 b_1, a_1^{-1}) \, \omega(a_1 b_1 a_1^{-1}, b_1^{-1}) \, \right. \nonumber \\ & & \hspace*{1.1in} \left. \cdot \omega(\xi_1,a_2) \, \omega(\xi_1 a_2, b_2) \, \omega(\xi_1 a_2 b_2, a_2^{-1}) \, \right. \nonumber \\ & & \hspace*{1.1in} \left. \cdot \omega(\xi_1 a_2 b_2 a_2^{-1}, b_2^{-1}) \right. \nonumber \\ & & \hspace*{1.1in} \left. \cdot \omega(a_1, a_1^{-1})^{-1} \, \omega(a_2, a_2^{-1})^{-1} \, \right. \nonumber \\ & & \hspace*{1.1in} \left. \cdot \omega(b_1, b_1^{-1})^{-1} \, \omega(b_2, b_2^{-1})^{-1} \right), \end{eqnarray} so we see that the closure condition~(\ref{eq:genus2-condition}) holds if and only if both \begin{equation} [a_1, b_1] \, [a_2, b_2] \: = \: 1 \end{equation} and \begin{eqnarray} 1 & = & \omega(a_1,b_1) \, \omega(a_1 b_1, a_1^{-1}) \, \omega(a_1 b_1 a_1^{-1}, b_1^{-1}) \, \nonumber \\ & & \hspace*{0.5in} \cdot \omega(\xi_1,a_2) \, \omega(\xi_1 a_2, b_2) \, \omega(\xi_1 a_2 b_2, a_2^{-1}) \, \omega(\xi_1 a_2 b_2 a_2^{-1}, b_2^{-1}) \nonumber \\ & & \hspace*{0.5in} \cdot \omega(a_1, a_1^{-1})^{-1} \, \omega(a_2, a_2^{-1})^{-1} \, \omega(b_1, b_1^{-1})^{-1} \, \omega(b_2, b_2^{-1})^{-1}. \end{eqnarray} Next, we generalize to arbitrary genus. Consider a Riemann surface of genus $g$, with boundary conditions determined by $\gamma_i = (a_i, k_i) \in \Gamma$, $\lambda_i = (b_i, z_i) \in \Gamma$, $i \in \{1, \cdots, g\}$. Define $\xi_i = [a_i, b_i]$, and \begin{equation} X \: = \: \left[ \prod_i \omega(a_i, a_i^{-1}) \prod_i \omega(b_i,b_i^{-1}) \right]^{-1}. 
\end{equation} The condition that the group elements must obey to define boundary conditions on the Riemann surface is that \begin{equation} [ \gamma_1, \lambda_1] \, [\gamma_2, \lambda_2] \cdots [\gamma_g, \lambda_g] \: = \: 1, \end{equation} which implies that \begin{equation} [ a_1, b_1] \, [a_2, b_2] \cdots [a_g, b_g] \: = \: 1 \end{equation} (which are required for $a_i, b_i \in G$ to close on the Riemann surface) as well as \begin{equation} \epsilon(a_i, b_i) \: = \: 1 \end{equation} for \begin{eqnarray} \epsilon(a_i, b_i) & \equiv & X \omega(a_1, b_1) \omega(a_1 b_1, a_1^{-1}) \omega(a_1 b_1 a_1^{-1}, b_1^{-1}) \omega(\xi_1, a_2) \omega(\xi_1 a_2, b_2) \omega(\xi_1 a_2 b_2, a_2^{-1}) \nonumber \\ & & \hspace*{1in} \cdot \omega(\xi_1 a_2 b_2 a_2^{-1}, b_2^{-1}) \omega(\xi_1 \xi_2, a_3) \cdots \omega(\xi_1 \cdots \xi_{g-1} a_g b_g a_g^{-1}, b_g^{-1}). \nonumber \end{eqnarray} (This can be obtained either by direct multiplication or by triangulating the Riemann surface into simplices and associating a factor of $\omega$ with each simplex, as in \cite{Aspinwall:2000xv}.) Thus, as before, the data required to define a $\Gamma$ orbifold on a genus $g$ Riemann surface is a restriction on the combinatorial data used to define a $G$ orbifold on the same Riemann surface, a restriction of the form $\epsilon(a_i, b_i) = 1$. As for $T^2$, we can implement that restriction by inserting a projection operator $\Pi$, of the same form as before, with $\epsilon_{\rho}$ that are the image of the genus-$g$ $\epsilon$ under an irreducible representation $\rho$. The resulting phases are the same as the phases defining discrete torsion on a genus $g$ Riemann surface (see \cite[equ'n (15)]{Aspinwall:2000xv}, \cite{Bantay:2000eq}), again for discrete torsion given by the image of $H^2(G,K)$ under the irreducible representation $\rho: K \rightarrow U(1)$. Thus, we see the story for $T^2$ generalizes immediately to other Riemann surfaces. For later use, we note that the discrete torsion here can equivalently be understood as a coupling to a discrete theta angle, defined by a characteristic class $x^* \omega$, for $x: \Sigma \rightarrow BG$ a map defining the twisted sector, in the notation of appendix~\ref{app:homotopy}. One can rewrite such a discrete theta angle coupling \begin{equation} \int_{\Sigma} \langle \rho, x^* \omega \rangle \end{equation} as a discrete torsion phase by triangulating the Riemann surface $\Sigma$ and associating phases to each simplex as reviewed above and in \cite{Aspinwall:2000xv,Dijkgraaf:1989pz}. So far we have considered central extensions. Decomposition also exists for orbifolds by non-central extensions, see e.g.~\cite{Hellerman:2006zs,Robbins:2020msp}; however, its form is more complex. In this paper we focus on (analogues of) central extensions. \section{Three-dimensional analogues of discrete torsion} \label{sect:3ddt} We have seen that two-dimensional orbifolds with trivially-acting subgroups decompose into disjoint unions of orbifolds with discrete torsion, a modular-invariant phase factor \cite{Hellerman:2006zs,Robbins:2020msp}. Similarly, the three-dimensional version of decomposition will also generate theories twisted by a three-dimensional version of discrete torsion. Such analogues of discrete torsion were studied in \cite{Dijkgraaf:1989pz} in the special case of orbifolds of points (forming Dijkgraaf-Witten theory), and more generally in \cite{Sharpe:2000qt}. 
In this section, we briefly review those constructions, in both ordinary orbifolds and in orientifolds, to set up their appearance in three-dimensional versions of decomposition. \subsection{Ordinary orbifolds} First, recall that in two dimensions, discrete torsion in a $G$ orbifold is classified by group cohomology, specifically $H^2(G,U(1))$ with a trivial action on the coefficients. Similarly, in three dimensions \cite{Dijkgraaf:1989pz,Sharpe:2000qt}, the analogue of discrete torsion in a $G$ orbifold is classified by $H^3(G,U(1))$, again with a trivial action on the coefficients. Furthermore, given $[\omega] \in H^2(G,U(1))$, one can derive coboundary-invariant phases that weight Riemann surfaces. For example, on $T^2$, a twisted sector is defined by two commuting elements $g, h \in G$, and the corresponding coboundary-invariant phase is \begin{equation} \frac{ \omega(g,h) }{ \omega(h,g) }. \end{equation} Analogous expressions on higher-genus Riemann surfaces can be found in \cite{Aspinwall:2000xv}. Analogous constructions exist in three dimensions, which use $[\omega] \in H^3(G,U(1))$ to assign a coboundary-invariant phase to three-manifolds. One construction \cite{Dijkgraaf:1989pz} proceeds as follows. Given a three-manifold $Y$, we pick a triangulation by simplices, and associate to each simplex a cocycle. We then take an alternating product of those associated cocycles (with exponent determined by orientation) to form a coboundary-invariant phase. For example, the triangulation of a cube into six simplices can be visualized by viewing the cube along a line through two corners, as \begin{center} \begin{picture}(40,80)(0,0) \Line(20,40)(20,80) \Line(20,40)(0,20) \Line(20,40)(40,20) \DashLine(20,40)(20,0){5} \DashLine(20,40)(0,60){5} \DashLine(20,40)(40,60){5} \Line(0,20)(0,60) \Line(40,20)(40,60) \Line(0,60)(20,80) \Line(40,60)(20,80) \Line(20,0)(0,20) \Line(20,0)(40,20) \end{picture} \end{center} and then taking the tetrahedra cut out by the six interior lines projected through the cube in the figure above. See also \cite{Sharpe:2000qt} for an alternative construction in terms of $C$ field holonomies. For example, on $T^3$, a twisted sector is defined by three commuting group elements $g_1, g_2, g_3$, \begin{center} \begin{picture}(70,70)(0,0) \Line(0,0)(40,0) \Line(0,0)(0,40) \Line(40,0)(40,40) \Line(0,40)(40,40) \Line(0,40)(30,70) \Line(30,70)(70,70) \Line(40,40)(70,70) \Line(40,0)(70,30) \Line(70,30)(70,70) \Text(20,18)[l]{$g_1$} \Text(30,55)[l]{$g_3$} \Text(50,35)[l]{$g_2$} \end{picture} \end{center} and here one multiplies $Z(g_1, g_2, g_3)$ by the phase \cite[equ'n (6.35)]{Dijkgraaf:1989pz}, \cite{Sharpe:2000qt} \begin{equation} \label{eq:dw-phases} \epsilon_3(g_1,g_2,g_3) \: = \: \frac{ \omega(g_1, g_2, g_3) }{ \omega(g_2, g_1, g_3) } \frac{ \omega(g_3, g_1, g_2) }{ \omega(g_3, g_2, g_1) } \frac{ \omega(g_2, g_3, g_1) }{ \omega(g_1, g_3, g_2) }, \end{equation} corresponding to $[\omega] \in H^3(G,U(1))$. As noted in \cite[footnote 5]{Dijkgraaf:1989pz}, perhaps the simplest example in which this phase is nontrivial is the group $G = ( {\mathbb Z}_2 )^3$. As discussed in \cite{Dijkgraaf:1989pz,Sharpe:2000qt}, this phase factor is invariant under both coboundaries and $SL(3,{\mathbb Z})$ transformations of $T^3$, just as the discrete torsion phase factor is invariant under both coboundaries and $SL(2,{\mathbb Z})$ transformations of $T^2$. For another example, consider $S^1 \times \Sigma$ for $\Sigma$ a genus-two surface.
Here, the associated phase is \begin{eqnarray} \xi_2 & = & \frac{\omega(a_1, b_1, g) }{ \omega(\gamma b_1, a_1, g) \, \omega(\gamma, b_1, g) } \frac{ \omega(\gamma, a_2, g) \, \omega(\gamma a_2, b_2, g) }{ \omega(b_2, a_2, g)} \nonumber \\ & & \hspace*{0.25in} \cdot \frac{ \omega(\gamma b_1, g, a_1) \, \omega(\gamma, g, b_1) }{ \omega(a_1, g, b_1)} \frac{\omega(b_2, g, a_2)}{\omega(\gamma, g, a_2) \, \omega(\gamma a_2, g, b_2)} \nonumber \\ & & \hspace*{0.25in} \cdot \frac{\omega(g, a_1, b_1)}{\omega(g, \gamma b_1, a_1) \, \omega(g,\gamma, b_1)} \frac{\omega(g,\gamma,a_2) \, \omega(g,\gamma a_2, b_2)}{\omega(g,b_2,a_2)} \end{eqnarray} where \begin{equation} \gamma \: = \: a_1 b_1 a_1^{-1} b_1^{-1}, \: \: \: \gamma a_2 b_2 a_2^{-1} b_2^{-1} \: = \: 1. \end{equation} and $g$ commutes with all $a_i$, $b_i$. It can be shown that this expression is invariant under coboundaries. This expression is motivated by the two-dimensional genus-two phase \cite[equ'n (15)]{Aspinwall:2000xv} \begin{equation} \frac{\omega(a_1,b_1)}{\omega(\gamma b_1, a_1) \, \omega(\gamma, b_1)} \frac{\omega(\gamma, a_2) \, \omega(\gamma a_2, b_2)}{\omega(b_2, a_2)}. \end{equation} Also, in the special case that $\gamma = 1$, it correctly factorizes into the product of two $T^3$ phases: \begin{equation} \xi_2 \: = \: \frac{\omega(a_1, b_1, g)}{\omega(b_1, a_1, g)} \frac{\omega(b_1, g,a_1)}{\omega(a_1, g, b_1)} \frac{\omega(g, a_1, b_1)}{\omega(g, b_1, a_1)} \cdot \frac{\omega(a_2,b_2,g)}{\omega(b_2,a_2,g)} \frac{\omega(b_2,g,a_2)}{\omega(a_2,g,b_2)} \frac{\omega(g,a_2,b_2)}{\omega(g,b_2,a_2)}, \end{equation} where without loss of generality we assume tha the cocycle $\omega$ is normalized (so that $\omega = 1$ if any of its arguments is the identity). In fact, it is also straightforward to conjecture the corresponding phase factor for $S^1 \times \Sigma_h$ for $\Sigma_h$ a genus-$h$ Riemann surface. 
Following \cite{Aspinwall:2000xv}, define \begin{equation} \gamma_i \: = \: a_i b_i a_i^{-1} b_i^{-1}, \: \: \: \zeta_i \: = \: \gamma_1 \gamma_2 \cdots \gamma_{i-1}, \end{equation} then the two-dimensional discrete torsion phase is \cite[equ'n (15)]{Aspinwall:2000xv} \begin{equation} \xi_h \: = \: \frac{ \omega(a_1,b_1) }{ \omega(\gamma_1 b_1, a_1) \, \omega(\gamma_1,b_1) } \left( \prod_{i=2}^{h-1} \frac{ \omega(\zeta_i, a_i) \, \omega(\zeta_i a_i, b_i) }{ \omega(\zeta_i \gamma_i b_i, a_i) \, \omega(\zeta_i \gamma_i, b_i) } \right) \frac{ \omega(\zeta_h, a_h) \, \omega(\zeta_h a_h, b_h) }{ \omega(b_h, a_h) } \end{equation} and we conjecture that the analogous three-dimensional phase on $S^1 \times \Sigma_h$ is \begin{eqnarray} \xi_h & = & \frac{ \omega(a_1,b_1,g) }{ \omega(\gamma_1 b_1, a_1,g) \, \omega(\gamma_1,b_1,g) } \frac{ \omega(\gamma_1 b_1, g, a_1) \, \omega(\gamma_1, g, b_1) }{ \omega(a_1,g,b_1) } \frac{ \omega(g, a_1,b_1) }{ \omega(g, \gamma_1 b_1, a_1) \, \omega(g, \gamma_1,b_1) } \nonumber \\ & & \hspace*{0.25in} \cdot \Biggl( \prod_{i=2}^{h-1} \frac{ \omega(\zeta_i, a_i,g) \, \omega(\zeta_i a_i, b_i,g) }{ \omega(\zeta_i \gamma_i b_i, a_i,g) \, \omega(\zeta_i \gamma_i, b_i,g) } \frac{ \omega(\zeta_i \gamma_i b_i, g, a_i) \, \omega(\zeta_i \gamma_i, g, b_i) } { \omega(\zeta_i, g, a_i) \, \omega(\zeta_i a_i, g, b_i) } \nonumber \\ & & \hspace*{2.0in} \cdot \frac{ \omega(g, \zeta_i, a_i) \, \omega(g, \zeta_i a_i, b_i) }{ \omega(g, \zeta_i \gamma_i b_i, a_i) \, \omega(g, \zeta_i \gamma_i, b_i) } \Biggr) \nonumber \\ & & \hspace*{0.25in} \cdot \frac{ \omega(\zeta_h, a_h,g) \, \omega(\zeta_h a_h, b_h,g) }{ \omega(b_h, a_h,g) } \frac{ \omega(b_h, g, a_h) }{ \omega(\zeta_h, g, a_h) \, \omega(\zeta_h a_h, g, b_h) } \nonumber \\ & & \hspace*{2.5in} \cdot \frac{ \omega(g, \zeta_h, a_h) \, \omega(g, \zeta_h a_h, b_h) }{ \omega(g, b_h, a_h) } \end{eqnarray} where $g \in G$ commutes with all $a_i$, $b_i$. In two dimensions, discrete torsion phases obey multiloop factorization (target space unitarity), which is the following constraint. If $\Sigma$ is any Riemann surface, corresponding to a twisted sector of some orbifold, and $\Sigma$ can degenerate into a product of $\Sigma_1$ and $\Sigma_2$ connected at one point (compatibly with the orbifold structure, in the sense that there are no twist fields at the connection), then the phase associated to $\Sigma$ must equal the product of the phases associated to $\Sigma_1$ and $\Sigma_2$. In two dimensions, for the genus-one phase \begin{equation} \epsilon_2(g,h) \: = \: \frac{\omega(g,h)}{\omega(h,g)}, \end{equation} this is the property \cite[equ'n (42)]{Vafa:1986wx} \begin{equation} \label{eq:eps2-hom} \epsilon_2(x,ab) \: = \: \epsilon_2(x,a) \, \epsilon_2(x,b), \end{equation} which can be demonstrated simply using \begin{equation} \frac{ (d \omega)(x,a,b) \, (d \omega)(a,b,x) }{ (d \omega)(a,x,b) } \: = \: \frac{ \epsilon_2(x,ab) }{ \epsilon_2(x,a) \, \epsilon_2(x,b) } \end{equation} for $x$, $a$, $b$ all mutually commuting. When combined with the fact that $\epsilon_2(1,-) = \epsilon_2(-,1) = 1$, we see this means that $\epsilon_2$ is a bihomomorphism from commuting pairs in $G$ to $U(1)$. In three dimensions, there is a simple analogue of multiloop factorization: if a three-manifold $S^1 \times \Sigma$ can degenerate into $S^1 \times ( \Sigma_1 \coprod \Sigma_2)$, the the phase assigned to $S^1 \times \Sigma$ must match the product of the phases assigned to $S^1 \times \Sigma_1$, $S^1 \times \Sigma_2$. 
On such grounds, one then expects \begin{equation} \label{eq:eps3-hom} \epsilon_3(x,y,ab) \: = \: \epsilon_3(x,y,a) \epsilon_3(x,y,b). \end{equation} In fact, it is straightforward to check that this is a consequence of the identity \begin{equation} \frac{ (d\omega)(y,x,a,b) }{ (d\omega)(x,y,a,b) } \frac{ (d\omega)(a,b,y,x) }{ (d\omega)(a,b,x,y) } \frac{ (d\omega)(y,a,b,x) }{ (d\omega)(x,a,b,y) } \frac{ (d\omega)(x,a,y,b) }{ (d\omega)(y,a,x,b) } \frac{ (d\omega)(a,x,b,y) }{ (d\omega)(a,y,b,x) } \frac{ (d\omega)(a,y,x,b) }{ (d\omega)(a,x,y,b) } \: = \: 1. \end{equation} (See also \cite[section 6]{Dijkgraaf:1989pz}, where a different argument is given for the same result.) One can use multiloop factorization to argue that discrete torsion(-like) phases descend to conjugacy classes. For example, in the case of the genus-one phase $\epsilon_2$, from~(\ref{eq:eps2-hom}), it is easy to show that\footnote{In fact, formally both this expression and its three-dimensional analogue appear to generalize to independent conjugatiosn on the parameters, as \begin{equation} \epsilon_2(a g a^{-1}, b h b^{-1}) \: = \: \epsilon_2(g,h). \end{equation} However, $\epsilon_2(g,h)$ is only defined for commuting $g$, $h$, so we restrict to the case $a=b$. Identical remarks apply to $\epsilon_3$. } \begin{eqnarray} \epsilon_2(a g a^{-1}, a h a^{-1}) & = & \epsilon_2(a g a^{-1}, a) \, \epsilon_2(a g a^{-1}, h)\, \epsilon_2(a g a^{-1}, a^{-1}) \: = \: \epsilon_2(a g a^{-1}, h), \nonumber\\ & = & \epsilon_2(a,h) \, \epsilon_2(g, h) \, \epsilon_2(a^{-1}, h), \nonumber\\ & = & \epsilon_2(g,h). \end{eqnarray} Computing in exactly the same fashion, one can use~(\ref{eq:eps3-hom}) to show that \begin{equation} \epsilon_3(a g a^{-1}, a h a^{-1}, a k a^{-1}) \: = \: \epsilon_3(g,h,k). \end{equation} \subsection{Manifolds with boundaries} For completeness, let us also quickly outline the case of manifolds with boundary, that will be of use in our subsequent works. Let's begin with an overview of how this works in two-dimensional theories and ordinary discrete torsion, before describing an example in three dimensions. Consider a genus-one correlation function in an orbifold $[X/G]$ with a single insertion of an operator associated to $g \in G$. In effect, we have a $T^2$ with a puncture corresponding to $g$. If we let $a, b \in G$ denote group elements corresponding to the usual $T^2$ boundary conditions, then we can sketch the construction of the punctured torus as in the diagram below: \begin{center} \begin{picture}(80,80)(0,0) \ArrowLine(0,0)(80,0) \ArrowLine(0,0)(0,80) \ArrowLine(80,0)(80,80) \ArrowLine(0,80)(80,80) \Oval(65,65)(7,20)(45) \Text(40,5)[b]{$a$} \Text(5,40)[l]{$b$} \Text(40,75)[t]{$a$} \Text(75,45)[r]{$b$} \Text(60,60)[l]{$g$} \end{picture} \end{center} with a hole cut out in the upper right corner, or equivalently, \begin{center} \begin{picture}(80,80)(0,0) \ArrowLine(0,0)(80,0) \ArrowLine(0,0)(0,80) \ArrowLine(80,0)(80,50) \ArrowLine(0,80)(50,80) \Line(50,80)(80,50) \Text(40,5)[b]{$a$} \Text(5,40)[l]{$b$} \Text(25,75)[t]{$a$} \Text(75,25)[r]{$b$} \Vertex(0,0){2} \Text(-5,0)[r]{$x$} \Vertex(80,0){2} \Text(85,0)[l]{$bx$} \Vertex(0,80){2} \Text(-5,80)[r]{$ax$} \Vertex(50,80){2} \Text(48,75)[t]{$bax$} \Vertex(80,50){2} \Text(85,50)[l]{$abx$} \Text(72,72)[l]{$g$} \DashArrowArc(65,65)(21,-45,135){5} \end{picture} \end{center} In the presence of the puncture, $a$ and $b$ no longer commute, but instead obey \begin{equation} \label{eq:t2-punc} abg \: = \: ba. 
\end{equation} Alternatively, we can say that if $\Sigma$ is a punctured $T^2$, then to specify an element of Hom$(\pi_1(\Sigma),G)$ we can first assign a group element $g$ to the loop circling the puncture, and then the group elements $a$ and $b$ assigned to the non-contractible cycles of the torus will need to satisfy~(\ref{eq:t2-punc}), since the cycle associated to $a^{-1}b^{-1}ab$ is homotopic to the cycle circling the puncture. Then bundles on $\Sigma$ are classified by Hom$(\pi_1(\Sigma),G)/G$, as usual. One more perspective comes from consideration of topological defect lines. The $a$ and $b$ twists on the $T^2$ are implemented by wrapping $a$ and $b$ lines around the cycles. Saying that our inserted operator is associated to $g$ is equivalent to saying that it sits at the end of a defect line labeled by $g$. The other end of that line must terminate somewhere on the first two defect lines. The simplest possibility is to connect everything at a single junction of degree five. In order for that junction to remain topological (i.e.~to avoid an extra non-topological insertion), we need the cyclic product of lines coming in to give the identity, which again leads to~(\ref{eq:t2-punc}). In any event, the discrete torsion phase assigned to a punctured $T^2$ is not the same as that assigned to $T^2$ itself -- the contribution to the boundary conditions from the puncture modifies the phase. Applying methods of \cite{Aspinwall:2000xv}, we see that the phase associated to this diagram is \begin{equation} \xi_{1,1} \: \equiv \: \frac{ \omega(a,b) }{ \omega(b,a) } \, \omega(ab,g). \end{equation} Now, if we add a coboundary $\alpha$, this phase changes: \begin{equation} \xi_{1,1} \: \mapsto \: \xi_{1,1} \, \alpha(g). \end{equation} This is not quite invariant under coboundaries; however, the coboundary $\alpha(g)$ can be absorbed into the operator at the puncture, so taking that into account, the phase is well-defined. Proceeding in this fashion, one is led to correlation functions, see e.g.~\cite{Ramgoolam:2022xfk} for examples in the case of orbifolds of a point (Dijkgraaf-Witten theory). Now, let us turn to three-dimensional analogues. We consider a $T^3$ with a hole, of boundary $T^2$. If we let the three primary sides be related by $g_1$, $g_2$, $g_3$, and the new edge defining the hole by $k$, then graphically, \begin{center} \begin{picture}(70,70)(0,0) \Line(0,0)(40,0) \Line(0,0)(0,40) \Line(40,0)(40,40) \Line(0,40)(40,40) \Line(0,40)(40,70) \Line(40,70)(60,70) \Line(60,70)(70,60) \Line(40,40)(70,60) \Line(40,0)(70,20) \Line(70,20)(70,60) \Text(15,20)[l]{$g_1$} \Text(33,53)[l]{$g_3$} \Text(50,27)[l]{$g_2$} \DashCArc(65,65)(7,-45,135){3} \Text(75,70)[l]{$k$} \end{picture} \end{center} The cut-out corner, seen edge-on, is the square \begin{center} \begin{picture}(80,80)(0,0) \ArrowLine(0,0)(80,0) \ArrowLine(0,0)(0,80) \ArrowLine(80,0)(80,80) \ArrowLine(0,80)(80,80) \Text(40,5)[b]{$g_3$} \Text(5,40)[l]{$k$} \Text(40,75)[t]{$g_3$} \Text(75,45)[r]{$k$} \Vertex(0,0){2} \Text(-5,0)[r]{$x$} \Vertex(0,80){2} \Text(-5,80)[r]{$g_3 x$} \Vertex(80,0){2} \Text(85,0)[l]{$kx$} \Vertex(80,80){2} \Text(85,80)[l]{$g_3 k x = k g_3 x$} \end{picture} \end{center} In order for the diagram to close, the four group elements $g_1, g_2, g_3, k \in G$ obey \begin{equation} g_1 g_3 \: = \: g_3 g_1, \: \: \: g_2 g_3 \: = \: g_3 g_2, \: \: \: g_1 g_2 k \: = \: g_2 g_1, \: \: \: g_3 k \: = \: k g_3. 
\end{equation} Applying the same methods as \cite{Sharpe:2000qt}, we find that the phase factor associated with this diagram is \begin{equation} \frac{ \omega(g_1, g_2, g_3) }{ \omega(g_2, g_1, g_3) } \frac{ \omega(g_3, g_1, g_2) }{ \omega(g_1, g_3, g_2) } \frac{ \omega(g_2, g_3, g_1) }{ \omega(g_3, g_2, g_1) } \frac{ \omega(g_1 g_2, k, g_3) \, \omega(g_3, g_1 g_2, k) }{ \omega(g_1 g_2, g_3, k) }. \end{equation} As for the $T^2$ with boundary, this is not quite coboundary-invariant, but rather picks up a phase \begin{equation} \frac{ \alpha(k,g_3) }{ \alpha(g_3, k) }, \end{equation} which has the same appearance as the phase one would assign to a $T^2$ with the same boundary conditions. We interpret this as before, as a contribution that would be absorbed by a defect inserted at the puncture, precisely in the spirit of anomaly inflow (see e.g.~\cite{Callan:1984sa}). It is also extremely reminiscent of the relationship between three-dimensional Chern-Simons theories and WZW models on boundaries, see e.g.~\cite{Moore:1989yh}. \subsection{Orientifolds} Now, consider the case that a subgroup of the orbifold group $G$ acts, in part, by reversing orientations, to form an orientifold. $C$ fields on orientifolds were analyzed in \cite[section 6]{Sharpe:2009hr}, in the same pattern as in \cite{Sharpe:2000qt} for $C$ fields on ordinary orbifolds and \cite{Sharpe:2009hr} for $B$ fields on orientifolds. Briefly, the conclusion was that the analogue of $C$ field discrete torsion on orientifolds is counted by $H^3(G,U(1))$ with a nontrivial action on the coefficients, encoded in a homomorphism $\epsilon: G \rightarrow {\mathbb Z}_2$ expressing whether a given element acts trivially. One example discussed in \cite[section 6.2]{Sharpe:2009hr} is a cube, with sides identified by three group elements $g_1, g_2, g_3 \in G$, in which one of the group elements reverses the orientation. The three group elements must be related by \begin{equation} g_2 g_3 \: = \: g_3 g_2, \: \: \: g_1 g_3 \: = \: g_3 g_1, \: \: \: g_1 \: = \: g_2 g_1 g_2, \end{equation} where of the three, $g_1$ reverses orientation, but the other two do not. It was argued there that the corresponding partition function phase factor is \begin{equation} \frac{ \omega(g_1, g_2^{-1}, g_3) \, \omega(g_2, g_3, g_1) \, \omega(g_3, g_1, g_2^{-1}) }{ \omega(g_2, g_1, g_3) \, \omega(g_1, g_3, g_2^{-1}) \, \omega(g_3, g_2, g_1) } \frac{ \omega(g_3, g_2, g_2^{-1}) \, \omega(g_2, g_2^{-1}) }{ \omega(g_2, g_3, g_2^{-1}) }, \end{equation} which is invariant under coboundaries. As another example, consider $S^1 \times {\mathbb R} {\mathbb P}^2$. Let $g \in G$ be orientation-reversing, with $g^2 = 1$, and $h \in G$ any other element that commutes with $g$. It was argued in \cite[section 5.2]{Sharpe:2009hr} that on the real projective plane ${\mathbb R} {\mathbb P}^2$, with sides identified by $g$, the discrete torsion phase is $\omega(g,g)$, which for $g^2 = 1$ is easily checked to be coboundary-invariant. For $S^1 \times {\mathbb R} {\mathbb P}^2$, where the real projective plane is again constructed with $g$, the $C$ field discrete torsion phase can be shown to be \begin{equation} \frac{ \omega(g,g,h) \, \omega(h,g,g) }{ \omega(g,h,g) }, \end{equation} which is easily checked to be coboundary-invariant. \section{Orbifolds by 2-groups} \label{sect:2gp-orb} In this section we will discuss three-dimensional\footnote{ As also noted in the introduction, throughout we have in mind effective field theories as prototypes, though our methods also apply more generally. 
} orbifolds by 2-group extensions. We saw in section~\ref{sect:ordinary-case} that an ordinary orbifold by a central group extension of $G$ by trivially-acting $K$ involves a restriction on permitted $G$ bundles, which is implemented by the sum over universes. We shall see an analogous structure here: the 2-group orbifold will involve a restriction on permitted $G$ bundles, which is implemented by a sum over universes. In this fashion we will derive a decomposition, which we will check in examples. In passing, we should mention that just as Dijkgraaf-Witten topological field theory \cite{Dijkgraaf:1989pz} can be interpreted as an orbifold of a point, at least naively the Yetter model \cite{yetter,br,mackaay,porter,fmp,fhlt} appears to be interpretable as a 2-group (or higher group) orbifold of a point. We will not pursue that in this paper, however. \subsection{General aspects} \label{sect:orb-2gp-genl} \subsubsection{Notions of 2-groups and their gauging} A 2-group is, roughly, a group in which associativity holds only up to isomorphisms. In this section we will outline orbifolds by 2-groups, and their decomposition. Briefly, from \cite[section 8.3]{baezlauda}, given a group $G$ and an abelian group $K$, to specify a (coherent) 2-group one specifies an action $\alpha: G \rightarrow {\rm Aut}(K)$ plus an element of $H^3(G,K)$, where the group cohomology is defined with the action of $G$ on $K$ given by $\alpha$. In this section we will restrict to the analogue of a central extension, for which the map $\alpha$ is trivial, and for which $H^3(G,K)$ is defined with trivial action on the coefficients. We will describe 2-groups as extensions of the form \begin{equation} 1 \: \longrightarrow \: BK \: \longrightarrow \: \tilde{\Gamma} \: \longrightarrow \: G \: \longrightarrow \: 1, \end{equation} for finite abelian $K$. These are classified by $[\omega] \in H^3(G,K)$. In broad brushstrokes, to gauge a 2-group $\tilde{\Gamma}$ means that the path integral \begin{itemize} \item sums over $K$ gerbes, and within that, for each $K$ gerbe, \item sums over $G$ bundles twisted by the action of the $K$ gerbe, in the sense of e.g.~\cite{Witten:1998cd}. \end{itemize} (In general there may also be other mutual twistings, as in e.g.~\cite[equ'ns (1.10), (1.14)]{Cordova:2020tij}, implementing a Green-Schwarz mechanism, in which case one would not have for example precisely a path integral over ordinary $K$ gerbes, but rather over slightly different objects forming a torsor under $K$ gerbes.) Examples in which the $K$ gerbe acts nontrivially (via the action of $BK$ on line operators, for example) include the gauging that arose in \cite{Sharpe:2019ddn}, and also arise in discussions of gauging $BK$ in Chern-Simons theories for $K$ the center of the gauge group. In this paper, we will be focused on the case in which the one-form symmetry group being gauged acts completely trivially on the three-dimensional theory, meaning that line operators are invariant\footnote{ For example, consider $SU(2)$ Chern-Simons theory in three dimensions. This has a $B {\mathbb Z}_2$ one-form symmetry, inherited from the center of $SU(2)$. However, that one-form symmetry multiplies Wilson lines by phases, and so we would not characterize $SU(2)$ Chern-Simons as invariant under this $B {\mathbb Z}_2$. One could in principle consider a different $B {\mathbb Z}_2$, unrelated to the central ${\mathbb Z}_2$, which leaves all Wilson lines invariant. In that case, that $B {\mathbb Z}_2$ could be said to act trivially.
} under $BK$, meaning for example that associated line operators have no braiding with one another or with any of the line operators in the theory being gauged. In this case, relevant for us in this paper, we will see that gauging a 2-group $\tilde{\Gamma}$ means that the path integral (modulo mutual twisting subtleties as above), \begin{itemize} \item sums over $K$ gerbes, and for each $K$ gerbe, \item sums over ordinary $G$ bundles -- no longer twisted by $K$, as $BK$ now acts trivially, but with a more subtle restriction on allowed $G$ bundles, a shadow of the fact that we are gauging a nontrivial extension of $G$ by $BK$. \end{itemize} These two cases can be subsumed into a more general picture, which is most conveniently described by presenting the 2-groups differently, in terms of what are called crossed modules. In any event, in this paper we will gauge finite 2-group extensions involving trivially-acting $BK$, for which the second notion of gauging is a more apt description. We will study more general cases in upcoming work. \subsubsection{Decomposition conjecture} Consider, as above, gauging a 2-group $\tilde{\Gamma}$ described formally as an extension of a finite group $G$ by $BK$ for $K$ finite and abelian, \begin{equation} 1 \: \longrightarrow \: BK \: \longrightarrow \: \tilde{\Gamma} \: \longrightarrow \: G \: \longrightarrow \: 1. \end{equation} This extension determines an element $[\omega] \in H^3(G,K)$. Because we are gauging a trivially-acting $BK$, one expects that the theory should possess a global two-form symmetry (distinct from the quantum symmetry), and so should decompose. We conjecture that such three-dimensional theories decompose in the form \begin{equation} \label{eq:3d-2gp-decomp} {\rm QFT}\left( [X/\tilde{\Gamma}] \right) \: = \: {\rm QFT}\left( \coprod_{\rho \in \hat{K}} [X/G]_{\rho(\omega)} \right), \end{equation} where $\rho(\omega) \in H^3(G,U(1))$ represents a discrete theta angle, formally involving a term in the action of the form \begin{equation} \int_M \langle \rho, x^* \omega \rangle, \end{equation} for $x^* \omega$ as defined in appendix~\ref{sect:homotopy:2-group}. As we will discuss later, at least on Seifert fibered three-manifolds, this can be rewritten as a discrete-torsion-like phase (of the form discussed in section~\ref{sect:3ddt}) given by the image of $\omega$ under the map \begin{equation} H^3(G,K) \: \stackrel{\rho}{\longrightarrow} \: H^3(G,U(1)). \end{equation} This is a three-dimensional version of decomposition \cite{Hellerman:2006zs}, whose existence reflects the fact that $[X/\tilde{\Gamma}]$ has a 2-form symmetry, due to the trivially-acting $BK$. Next, we will justify this decomposition conjecture by computing partition functions for gauged finite 2-groups, and also studying operator spectra. In subsequent sections we will check the details in examples. \subsubsection{Partition functions} In this section we will compute partition functions for $\tilde{\Gamma}$ orbifolds in three dimensions (for $\tilde{\Gamma}$ a 2-group extension of a finite group $G$ by a trivially-acting $BK$). These are (weighted) sums over $G$ bundles restricted so that an invariant vanishes (see appendix~\ref{sect:homotopy:2-group}). 
We will see that the resulting partition functions are equivalent to sums of partition functions of ordinary $G$ orbifolds, weighted by $C$ field analogues of discrete torsion, \begin{equation} Z\left( [X/\tilde{\Gamma}] \right) \: = \: \sum_{\rho \in \hat{K}} Z\left( [X/G]_{\rho(\omega)} \right), \end{equation} in accordance with decomposition~(\ref{eq:3d-2gp-decomp}). In general terms, this is a consequence of the fact, explained in appendix~\ref{sect:homotopy:2-group}, that $\tilde{\Gamma}$ bundles on three-manifolds $M$ map to $G$ bundles obeying the constraint $x^* \omega = 1 \in H^3(M,K)$, where $\omega \in H^3(G,K)$ determines the extension $\tilde{\Gamma}$, and $x: M \rightarrow BG$ determines the $G$ bundle. Such a constraint is implemented by a projector, proportional to \begin{equation} \sum_{\rho \in \hat{K}} \exp\left( \int_M \langle \rho, x^* \omega \rangle \right). \end{equation} Summing over $\rho \in \hat{K}$ effectively cancels out contributions from any $G$ bundle for which $x^* \omega \neq 1$. As we saw for ordinary central extensions in section~\ref{sect:ordinary-case}, inserting such a projection operator in a path integral is equivalent to working with a sum of theories, one for each $\rho \in \hat{K}$, each of which is modified by a discrete theta angle defined by $\rho \in \hat{K}$ and coupling to $x^* \omega \in H^3(M,K)$. This gives rise to the present version of decomposition~(\ref{eq:3d-2gp-decomp}). At least for Seifert fibered three-manifolds, it is straightforward to give this construction a much more concrete description, by describing $x^* \omega$ explicitly in terms of phases derived from the group cocycle $\omega$. To do so, we follow the same\footnote{ Our notations differ, but the procedure is identical. Specifically, the $\gamma: M \rightarrow BG$ used in \cite{Dijkgraaf:1989pz} is the same as $x: M \rightarrow BG$ here, and the $\alpha \in H^3(G,U(1))$ used there coincides with $\omega \in H^3(G,K)$ here. Their analysis is done for $U(1)$ coefficients, but because $K$ is abelian and, in both cases, the group action on the coefficients is trivial, the argument is essentially the same. } procedure used in \cite[section 6.5]{Dijkgraaf:1989pz}. Briefly, given a triangulation of the three-manifold $M$, associate a phase $\omega(g_1, g_2, g_3)$ to each simplex, and use an ordering to determine whether to multiply or divide the phase. (We specialize to Seifert fibered manifolds solely because of potential practical difficulties in explicitly constructing a triangulation. Given a triangulation, the method of \cite{Dijkgraaf:1989pz} is otherwise general.) The result is that $\langle \rho, x^* \omega \rangle$ can be identified with a discrete-torsion-like phase \cite{Sharpe:2000qt}, as described in section~\ref{sect:3ddt}, for a class in $H^3(G,U(1))$ given by the image of $\omega \in H^3(G,K)$ under $\rho$, or schematically, \begin{eqnarray} H^3(G,K) & \stackrel{\rho}{\longrightarrow} & H^3(G,U(1)) , \nonumber\\ \omega & \mapsto & \rho \circ \omega = \rho(\omega). \end{eqnarray} We have that on a connected three-manifold $M$, \begin{equation} Z_M\left( [X/\tilde{\Gamma}] \right) \: = \: \sum_{\rho \in \hat{K}} Z_M\left( [X/G]_{\rho(\omega)} \right), \end{equation} matching the prediction of decomposition~(\ref{eq:3d-2gp-decomp}), with the sum over universes implementing the restriction to $G$ bundles such that $x^* \omega = 1$. Next, we specialize to the case of $M = T^3$. 
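(As a quick sanity check of the projector just described, before specializing to $T^3$: the mechanism is simply character orthogonality for the finite abelian group $K$, namely that the average of $\rho(x)$ over $\rho \in \hat{K}$ equals $1$ when $x$ is trivial and $0$ otherwise. The short script below is a minimal numerical sketch of this fact; the particular choice $K = {\mathbb Z}_2 \times {\mathbb Z}_4$ is an arbitrary illustration and plays no role in the argument.)
\begin{verbatim}
# Minimal check of character orthogonality for a finite abelian group K,
# here (arbitrarily) K = Z_2 x Z_4.  Characters are labelled by elements
# of the dual group: rho_m(x) = exp(2*pi*i*(m_1 x_1/2 + m_2 x_2/4)).
import itertools, cmath

orders = (2, 4)                                  # K = Z_2 x Z_4
K = list(itertools.product(*[range(n) for n in orders]))

def rho(m, x):
    """Value of the character labelled by m on the element x."""
    return cmath.exp(2j * cmath.pi * sum(mi * xi / ni
                                         for mi, xi, ni in zip(m, x, orders)))

for x in K:
    avg = sum(rho(m, x) for m in K) / len(K)     # (1/|K|) sum over K-hat
    expected = 1.0 if x == (0, 0) else 0.0
    assert abs(avg - expected) < 1e-12
print("character orthogonality verified for K = Z_2 x Z_4")
\end{verbatim}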
As everything can be computed explicitly in this case, we will walk through all the details in order to better explain the idea. Ordinarily, in a $G$ orbifold on $T^3$, one would sum over commuting triples $g_1, g_2, g_3 \in G$. Here, however, because of the 2-group extension, only some triples are consistent, much as we saw in the case of ordinary central extensions in section~\ref{sect:ordinary-case}. As mentioned above, and as described in detail in appendix~\ref{sect:homotopy:2-group}, the constraint on $G$ bundles is that $x^* \omega = 1 \in H^3(T^3,K)$. To understand the result, we outline here a slightly sloppy computation for the special case of $T^3$, which will reproduce the $T^3$ result derived rigorously in appendix~\ref{app:tori}. To make the 2-group $\tilde{\Gamma}$ more concrete, we imagine associating $K$-valued wavefunctions $\psi_g$ to $g \in G$, which can then be multiplied by $K$-valued cocycles, where associativity holds up to the cocycle $\omega$ as \begin{equation} \psi_{g_1 g_2} \psi_{g_3} \: = \: \omega(g_1, g_2, g_3) \, \psi_{g_1} \psi_{g_2 g_3}. \end{equation} (Note that adding coboundaries to $\omega$ merely multiplies the products by phases.) Then, we can derive a consistency condition on commuting triples, as follows. \begin{eqnarray} \psi_{g_1 g_2} \psi_{g_3} & = & \psi_{g_2 g_1} \psi_{g_3} , \nonumber\\ & = & \omega(g_2, g_1, g_3) \psi_{g_2} \psi_{g_1 g_3} , \nonumber\\ & = & \omega(g_2, g_1, g_3) \psi_{g_2}\psi_{ g_3 g_1} , \nonumber\\ & = & \frac{ \omega(g_2, g_1, g_3) }{ \omega(g_2, g_3, g_1)} \psi_{g_2 g_3} \psi_{g_1} , \nonumber\\ & = & \frac{ \omega(g_2, g_1, g_3) }{ \omega(g_2, g_3, g_1)} \psi_{g_3 g_2} \psi_{g_1} . \end{eqnarray} It also equals \begin{eqnarray} \psi_{g_1 g_2} \psi_{g_3} & = & \omega(g_1, g_2, g_3) \psi_{g_1} \psi_{g_2 g_3}, \nonumber\\ & = & \omega(g_1, g_2, g_3) \psi_{g_1} \psi_{g_3 g_2} , \nonumber\\ & = & \frac{ \omega(g_1, g_2, g_3) }{ \omega(g_1, g_3, g_2) } \psi_{g_1 g_3} \psi_{g_2} , \nonumber\\ & = & \frac{ \omega(g_1, g_2, g_3) }{ \omega(g_1, g_3, g_2) } \psi_{g_3 g_1} \psi_{g_2} , \nonumber\\ & = & \frac{ \omega(g_1, g_2, g_3) }{ \omega(g_1, g_3, g_2) } \omega(g_3, g_1, g_2) \psi_{g_3} \psi_{g_1 g_2} , \nonumber\\ & = & \frac{ \omega(g_1, g_2, g_3) }{ \omega(g_1, g_3, g_2) } \omega(g_3, g_1, g_2) \psi_{g_3} \psi_{g_2 g_1} , \nonumber\\ & = & \frac{ \omega(g_1, g_2, g_3) }{ \omega(g_1, g_3, g_2) } \frac{ \omega(g_3, g_1, g_2) }{ \omega(g_3, g_2, g_1) } \psi_{g_3 g_2} \psi_{g_1} . \end{eqnarray} In order for these two expressions to match, we must require \begin{equation} \label{eq:3dconstr1} \frac{ \omega(g_1, g_2, g_3) }{ \omega(g_1, g_3, g_2) } \frac{ \omega(g_3, g_1, g_2) }{ \omega(g_3, g_2, g_1) } \frac{ \omega(g_2, g_3, g_1) }{ \omega(g_2, g_1, g_3) } \: = \: 1 \end{equation} as an element of $K$, which is the same condition derived mathematically in appendix~\ref{app:tori}. (We suspect it may also be possible to use topological defect lines to give a simple argument, but we leave that for future work.) We can therefore understand a $\tilde{\Gamma}$ bundle as a collection of $K$ gerbes and $G$ bundles on $T^3$ defined by commuting triples $(g_1, g_2, g_3)$ subject to the constraint \begin{equation} \epsilon(g_1, g_2, g_3) \: = \: 1 \end{equation} for \begin{equation} \epsilon(g_1, g_2, g_3) \: = \: \frac{ \omega(g_1, g_2, g_3) }{ \omega(g_1, g_3, g_2) } \frac{ \omega(g_3, g_1, g_2) }{ \omega(g_3, g_2, g_1) } \frac{ \omega(g_2, g_3, g_1) }{ \omega(g_2, g_1, g_3) }. 
\end{equation} For the same reasons as discussed for $H^3(G,U(1))$ in section~\ref{sect:3ddt}, it is straightforward to demonstrate that \begin{equation} \epsilon(g_1, g_2, g_3 g_4) \: = \: \epsilon(g_1, g_2, g_3) \epsilon(g_1, g_2, g_4) \end{equation} (and symmetrically), hence using the same argument as in the two-dimensional case, $\epsilon$ is invariant under simultaneous conjugation\footnote{ We restrict to the same $h$ on each factor because $\epsilon$ is only defined on commuting triples, meaning each pair obeys $g_i g_j = g_j g_i$. }, \begin{equation} \epsilon(h g_1 h^{-1}, h g_2 h^{-1}, h g_3 h^{-1}) \: = \: \epsilon(g_1, g_2, g_3). \end{equation} The partition function of the $\tilde{\Gamma}$ orbifold on $T^3$ then takes the form\footnote{ The overall factor of $1/|G|$ is standard in orbifolds and ultimately reflects the fact that the sum is counting bundles with automorphisms, see e.g.~\cite[equ'n (5.14)]{Freed:1991bn}. The factors involving $K$ can be found in e.g.~\cite[equ'ns (2.31), (2.32)]{Benini:2022hzx}. } \cite{yujipriv} \begin{eqnarray} \label{eq:genl-part-fn} Z_{T^3}\left( [X/\tilde{\Gamma} ] \right) & = & \frac{ | H^0(T^3, K) | }{ | H^1(T^3, K) | } \frac{1}{|H^0(T^3,G)|} \sum_{z_1, z_2, z_3 \in K} {\sum_{g_1, g_2, g_3 \in G}}^{\!\!\!\!\!\prime} \: Z(g_1, g_2, g_3), \nonumber\\ & = & \frac{1}{|K|^2 |G|} \sum_{z_1, z_2, z_3 \in K} {\sum_{g_1, g_2, g_3 \in G}}^{\!\!\!\!\!\prime}\: Z(g_1, g_2, g_3), \end{eqnarray} where the prime indicates that the sum over triples in $G$ is constrained to commuting triples such that $\epsilon(g_1, g_2, g_3) = 1$. Now, we can enforce the condition that $\epsilon = 1$ by inserting a projector \begin{equation} \frac{1}{|K|} \sum_{\rho \in \hat{K} } \epsilon_{\rho}(g_1, g_2, g_3) \end{equation} where $\epsilon_{\rho}$ is the image of $\epsilon$ under $\rho: K \rightarrow U(1)$. The partition function then has the form \begin{eqnarray} Z_{T^3}\left( [X/\tilde{\Gamma} ] \right) & = & \frac{1}{|K|^2 |G|} |K|^3 \sum_{g_1, g_2, g_3 \in G} \frac{1}{|K|} \sum_{\rho \in \hat{K} } \epsilon_{\rho}(g_1, g_2, g_3) Z(g_1, g_2, g_3), \nonumber\\ & = & \sum_{\rho \in \hat{K}} Z_{T^3}\left( [X/G]_{\epsilon_{\rho}} \right), \end{eqnarray} where \begin{equation} Z_{T^3}\left( [X/G]_{\epsilon_{\rho}} \right) \: = \: \frac{1}{|G|} \sum_{g_1, g_2, g_3 \in G} \epsilon_{\rho}(g_1, g_2, g_3) Z(g_1, g_2, g_3), \end{equation} using a standard normalization (compare e.g.~\cite[equ'n (5.14)]{Freed:1991bn}). Each factor $\epsilon_{\rho}$ is precisely a $C$ field analogue of discrete torsion, as reviewed in section~\ref{sect:3ddt}, and coincides with the quantity we earlier labelled $\rho(\omega)$. Thus, we see that for the special case of $T^3$, partition functions are consistent with the decomposition conjecture~(\ref{eq:3d-2gp-decomp}). As outlined at the beginning, the same argument applies for any three-manifold. The only real difference on other three-manifolds is that there may be dilaton-type Euler counterterm shifts, as discussed in e.g.~\cite{Hellerman:2006zs}, which vanish on $T^3$ as $\chi(T^3) = 0$. Modulo such trivial counterterms, on any connected three-manifold, \begin{equation} Z\left( [X/\tilde{\Gamma}] \right) \: = \: \sum_{\rho \in \hat{K}} Z\left( [X/G]_{\epsilon_{\rho}} \right). \end{equation} This is precisely the statement of decomposition~(\ref{eq:3d-2gp-decomp}), at the level of partition functions. 
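As a small arithmetic cross-check of the manipulation just performed (purely illustrative, and anticipating the $G = ({\mathbb Z}_2)^3$, $K = {\mathbb Z}_2$ example discussed below), one can verify directly that weighting the unconstrained sum over triples by the projector $\frac{1}{|K|}\sum_{\rho} \epsilon_{\rho}$ reproduces the sum restricted to triples with $\epsilon = 1$. In the sketch below the `twisted-sector' values $Z(g_1,g_2,g_3)$ are random placeholders, and the cocycle used is $\omega(g_1,g_2,g_3) = (-1)^{a_1 b_2 c_3}$.
\begin{verbatim}
# Toy check that inserting the projector (1/|K|) sum_rho epsilon_rho
# reproduces the constrained sum over triples with epsilon = 1.
# Illustration: G = (Z_2)^3 (so all triples commute), K = Z_2,
# omega(g1,g2,g3) = (-1)^(a_1 b_2 c_3); the Z-values are random stand-ins.
import itertools, random

G = list(itertools.product((0, 1), repeat=3))    # g = (a, b, c) components

def omega(g1, g2, g3):
    return (-1) ** (g1[0] * g2[1] * g3[2])

def epsilon(g1, g2, g3):
    # antisymmetrized product of omega over S_3; the signs sgn(sigma)
    # are immaterial here since all values are +-1
    val = 1
    for p in itertools.permutations((g1, g2, g3)):
        val *= omega(*p)
    return val

random.seed(0)
Z = {t: random.random() for t in itertools.product(G, repeat=3)}

constrained = sum(Z[t] for t in Z if epsilon(*t) == 1)
projected = 0.0
for rho in (0, 1):                               # K-hat = {trivial, sign}
    for t in Z:
        eps_rho = 1 if rho == 0 else epsilon(*t)
        projected += eps_rho * Z[t]
projected /= 2                                   # divide by |K|

assert abs(constrained - projected) < 1e-9
print("projector insertion reproduces the constrained sum")
\end{verbatim}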
To summarize, we see that inserting a projection operator to enforce the constraint on $G$-twisted sectors makes manifest the statement that the partition function of the 2-group orbifold equals the partition function for a sum of three-dimensional orbifolds, each twisted by an $\epsilon_{\rho}$ which is \cite{Sharpe:2000qt} a three-dimensional analogue of discrete torsion. In this fashion, we recover decomposition~(\ref{eq:3d-2gp-decomp}), at the level of partition functions, in close analogy with the description in section~\ref{sect:ordinary-case} of decomposition in two-dimensional orbifolds. As an aside, previously in two-dimensional theories with a one-form symmetry given by a trivially-acting $K$, we saw universes enumerated by irreducible representations of $K$, see e.g.~\cite{Hellerman:2006zs}. Here, since we have a 2-form symmetry and trivially-acting $BK$, one might have naively guessed that universes would be enumerated by representations of $BK$, at variance with the conjecture above. However, we examine decomposition for both 1-form and 2-form symmetries formally in appendix~\ref{app:duality}, and observe there that in both cases, universes appear to be enumerated by representations of $K$, so the form of the conjecture above is consistent. \subsubsection{Local operators} So far we have given a general justification of the decomposition conjecture for gauged 2-groups using partition functions. Let us briefly outline an analogous argument using local operators. In two-dimensional orbifolds with trivially-acting subgroups, the twist fields associated to trivially-acting group elements form dimension-zero operators, and the projectors (onto universes) are constructed from linear combinations of those twist fields. In three dimensions, when gauging a one-form symmetry, from the general theory of topological defect lines, the theory contains monopole operators, which play an analogous role. Briefly, the monopole operators are endpoints of real codimension two lines corresponding to the gauged one-form symmetry, just as gauging an ordinary (zero-form) symmetry results in real codimension one walls. Two-spheres surrounding the monopole operators have $K$ gerbes, just as circles surrounding two-dimensional twist fields carry bundles. In any event, given a trivially-acting gauged $BK$ symmetry, the resulting three-dimensional theory will contain monopole operators, which are closely analogous to two-dimensional twist fields, and can be used to build projectors. For example, in a gauged $B {\mathbb Z}_k$, the monopole operators will generate ${\mathbb Z}_k$ gerbes on $S^2$, which are classified by $H^2(S^2,{\mathbb Z}_k) = {\mathbb Z}_k$. As those gerbes on $S^2$ are all generated by powers of one gerbe, there will be one monopole operator which generates the others, call it $\hat{z}$, and which obeys $\hat{z}^k = 1$. Given such operators, one can build projectors, as linear combinations of the form \begin{equation} \Pi_m \: = \: \frac{1}{k} \sum_{j=0}^{k-1} \xi^{jm} \hat{z}^j, \end{equation} for $\xi = \exp(2 \pi i/k)$, which from $\hat{z}^k = 1$ are easily checked to obey \begin{equation} \Pi_m \Pi_n \: = \: \Pi_m \delta_{m,n}, \: \: \: \sum_{m=0}^{k-1} \Pi_m \: = \: 1. \end{equation} \subsection{Example: $G = 1$, $K = {\mathbb Z}_2$} Let us consider the orbifold $[X/B {\mathbb Z}_2]$ for a moment, where the $B {\mathbb Z}_2$ acts trivially, in the sense that all line operators in the theory are invariant under the $B {\mathbb Z}_2$. 
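(As an aside, the projector algebra written above is easy to verify concretely: any realization of $\hat{z}$ as an operator obeying $\hat{z}^k = 1$, for instance the cyclic shift matrix used in the sketch below, satisfies the stated relations. The matrix model is only an illustration of the algebra, not a claim about the actual monopole operators.)
\begin{verbatim}
# Check Pi_m Pi_n = delta_{mn} Pi_m and sum_m Pi_m = 1, modelling z-hat
# as the k x k cyclic shift matrix S, which satisfies S^k = identity.
# The value of k is an arbitrary choice for illustration.
import numpy as np

k = 5
xi = np.exp(2j * np.pi / k)
S = np.roll(np.eye(k), 1, axis=0)                 # cyclic shift, S^k = I

def Pi(m):
    return sum(xi ** (j * m) * np.linalg.matrix_power(S, j)
               for j in range(k)) / k

for m in range(k):
    for n in range(k):
        target = Pi(m) if m == n else np.zeros((k, k))
        assert np.allclose(Pi(m) @ Pi(n), target)
assert np.allclose(sum(Pi(m) for m in range(k)), np.eye(k))
print("projector relations verified for k =", k)
\end{verbatim}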
Then, at a path integral level, the orbifold $[X/B {\mathbb Z}_2]$ involves a sum over ${\mathbb Z}_2$ gerbes, but each of the gerbe sectors is identical, much as in a two-dimensional orbifold by a group that acts completely trivially. At the level of operators, gauging the $B {\mathbb Z}_2$ results in monopole operators, which generate ${\mathbb Z}_2$ gerbes on spheres surrounding the operators, much as twist fields generate branch cuts and hence bundles on surrounding circles in two-dimensional theories. Since the $B {\mathbb Z}_2$ acts trivially, the monopole operators commute with all local operators present in the original theory, and we see that the full set of operators in the gauged theory is just two copies of the operators of the original theory. Furthermore, since the monopole operators generate ${\mathbb Z}_2$ gerbes on surrounding $S^2$'s, and the product of a nontrivial ${\mathbb Z}_2$ gerbe with itself is trivial, we see that if $\hat{z}$ denotes a monopole operator, then $\hat{z}^2 = 1$, and so we can build projection operators \begin{equation} \Pi_{\pm} \: = \: \frac{1}{2} \left( 1 \pm \hat{z} \right), \end{equation} which implement a decomposition. In particular, in these circumstances, \begin{equation} [X / B {\mathbb Z}_2 ] \: = \: X \, \coprod \, X, \end{equation} as expected from decomposition~(\ref{eq:3d-2gp-decomp}). (Here, we use the fact that $\rho(\omega) = 1$ for all $\rho \in \hat{K}$, as $\omega$ itself is trivial.) \subsection{Example: $G = {\mathbb Z}_2 = K$} Let us begin with a very simple example. Consider the case of a 2-group extension of the form \begin{equation} 1 \: \longrightarrow \: B {\mathbb Z}_2 \: \longrightarrow \: \tilde{\Gamma} \: \longrightarrow \: {\mathbb Z}_2 \: \longrightarrow \: 1. \end{equation} As discussed in appendix~\ref{app:z2}, $H^3({\mathbb Z}_2, {\mathbb Z}_2) = {\mathbb Z}_2$, so there is a nontrivial 2-group extension $\tilde{\Gamma}$ of this form. In this case, it is straightforward to check that $\epsilon(g_1, g_2, g_3)$ is the identity in ${\mathbb Z}_2$ for all triples $g_{1-3} \in {\mathbb Z}_2$, so there is no additional constraint on $G$ bundles on $T^3$ (beyond pairwise commutativity) to lift to a $\tilde{\Gamma}$ bundle. It is then straightforward to compute the $T^3$ partition function from~(\ref{eq:genl-part-fn}), yielding \begin{eqnarray} Z_{T^3}\left( [X/\tilde{\Gamma} ] \right) & = & \frac{1}{|K|^2 |G|} \sum_{z_1, z_2, z_3 \in K} \sum_{g_1, g_2, g_3 \in G} Z(g_1, g_2, g_3), \nonumber\\ & = & \frac{|K|}{|G|} \sum_{g_1, g_2, g_3 \in G} Z(g_1, g_2, g_3), \nonumber\\ & = & Z_{T^3} \left( \coprod_{\hat{K}} [X/G] \right), \end{eqnarray} as expected from decomposition~(\ref{eq:3d-2gp-decomp}). In this case, $\rho(\omega) = 1$ for all $\rho \in \hat{K}$. Although $H^3(G,U(1)) = {\mathbb Z}_2$ for $G = {\mathbb Z}_2$, the group $G = {\mathbb Z}_2$ is in some sense too small to have any nontrivial phases resulting from analogues of discrete torsion. The reader should also note that we get this decomposition for both 2-group extensions $\tilde{\Gamma}$ indexed by $H^3({\mathbb Z}_2,{\mathbb Z}_2) = {\mathbb Z}_2$, implying that they are physically equivalent to one another. (Analogous relations were seen in decomposition of two-dimensional theories with one-form symmetries in \cite{Hellerman:2006zs}, in which different gerbes are described by the same physical theory.) \subsection{Example: $G = ({\mathbb Z}_2)^3$, $K = {\mathbb Z}_2$} Write $G = ({\mathbb Z}_2)^3 = \langle a, b, c \rangle$. 
Let us pick an extension of $G$ by $BK$ corresponding to the element of $H^3(G,K)$ given by $(-)^{a_1 b_2 c_3}$ in appendix~\ref{app:z23}. Then, the commuting triples $g_{1-3}$ for which $\epsilon(g_1, g_2, g_3) \neq 1 \in K$ include, for example, $(ax,by,cz)$ and their permutations, where \begin{equation} \label{eq:ex:omitted} x \in \{1, b, c, bc\}, \: \: \: y \in \{1, a, c, ac\}, \: \: \: z \in \{1, a, b, ab\}. \end{equation} The partition function of $[X/\tilde{\Gamma}]$ then has the form \begin{equation} Z_{T^3}\left( [X/\tilde{\Gamma}] \right) \: = \: \frac{1}{|K|^2 |G|} \sum_{z_{1-3}\in K} {\sum_{g_{1-3}\in G}}^{\!\!\!\prime} \: Z(g_1, g_2, g_3) \: = \: \frac{|K|}{|G|} {\sum_{g_{1-3}\in G}}^{\!\!\!\prime} \: Z(g_1, g_2, g_3), \end{equation} where the prime indicates that some of the $G$-twisted sectors are omitted. For the trivial representation $1 \in \hat{K}$, $\epsilon_1(g_1, g_2, g_3) = 1$, but for the nontrivial representation $\rho \in \hat{K}$, $\epsilon_{\rho}(g_1, g_2, g_3)$ corresponds to the discrete-torsion-like phase~(\ref{eq:dw-phases}) associated to the cocycle $\omega_4 \in H^3(G,U(1))$ listed in appendix~\ref{app:z23}, essentially because the $\omega_4$ cocycle has the same form as the chosen element of $H^3(G,K)$ above: $\omega_4(g_1, g_2, g_3) = (-)^{a_1 b_2 c_3}$ also. That discrete-torsion-like phase equals $-1$ on precisely the triples that are omitted from the $[X/\tilde{\Gamma}]$ orbifold, namely sectors of the form $(ax, by, cz)$ and their permutations, for $x$, $y$, $z$ as in~(\ref{eq:ex:omitted}). Sectors that are not omitted include $(g,g,g)$ for $g$ any element of $( {\mathbb Z}_2 )^3$. Putting this together, we see \begin{equation} Z_{T^3}\left( [X/\tilde{\Gamma}] \right) \: = \: Z_{T^3}\left( [X/G] \, \coprod \, [X/G]_{\omega_4} \right), \end{equation} matching the prediction of decomposition~(\ref{eq:3d-2gp-decomp}) for this case. The sectors that are omitted in the $\tilde{\Gamma}$ orbifold cancel out between the two $G$ orbifolds, realizing a `multiverse interference effect' as usual. \subsection{Example: $G = ({\mathbb Z}_2)^2 = K$} In this case, it is straightforward to check that the discrete-torsion-like phase factors $\rho(\omega)$ are all trivial for any extension class in $H^3(G,K)$ and any $\rho \in \hat{K}$, hence in this case our conjecture~(\ref{eq:3d-2gp-decomp}) predicts \begin{equation} {\rm QFT}\left( [X/\tilde{\Gamma}] \right) \: = \: {\rm QFT}\left( \coprod_{\rho \in \hat{K}} [X/G] \right). \end{equation} We can check this by computing the $T^3$ partition function. In this case, for $G=K=({\mathbb Z}_2)^2$, it is straightforward to check that $\epsilon = 1$ holds automatically for every $[\omega] \in H^3( G, K )$, so there is no constraint on commuting triples $(g_1, g_2, g_3)$. Then, from the general formula~(\ref{eq:genl-part-fn}), \begin{eqnarray} Z_{T^3}\left( [X/\tilde{\Gamma} ] \right) & = & \frac{1}{|K|^2 |G|} \sum_{z_1, z_2, z_3 \in K} \sum_{g_1, g_2, g_3 \in G} Z(g_1, g_2, g_3) , \nonumber\\ & = & \frac{ |K| }{ |G| } \sum_{g_1, g_2, g_3 \in G} Z(g_1, g_2, g_3) , \nonumber\\ & = & |K| Z_{T^3}\left( [X/G] \right), \end{eqnarray} which is consistent with the prediction of decomposition. \section{Interpretation: sigma models on 2-gerbes} \label{sect:sigma-2gerbe} These orbifolds by 2-groups have a more formal description as realizations of sigma models on 2-gerbes, closely analogous to sigma models on gerbes as described in \cite{Pantev:2005rh,Pantev:2005zs,Pantev:2005wj}. 
Briefly, gerbes are closely analogous to principal bundles. An $n$-($G$-)gerbe is essentially a fiber bundle whose fibers are `groups' $B^n G$ of higher-form symmetries. As a result, a sensibly-defined sigma model with target such a gerbe should admit a global $B^n G$ symmetry, corresponding to translations along the fibers of the gerbe. Because the `group' $BG = [{\rm point}/G]$, a $G$-gerbe -- a fiber bundle with fiber $BG$ -- can be locally presented as a quotient in which a subgroup acts trivially. This was utilized in the previous work \cite{Pantev:2005rh,Pantev:2005zs,Pantev:2005wj} to construct sigma models on gerbes, presented as orbifolds and gauge theories with trivially-acting subgroups. Now, this glosses over a number of subtleties, including questions about non-uniqueness of presentations (dealt with by identifying a sigma model on a stack or gerbe with a universality class of RG flow), potential modular invariance and unitarity issues in orbifolds, seeming moduli mismatches, and most important for decomposition, violations of the cluster decomposition axiom, which were discussed in \cite{Pantev:2005rh,Pantev:2005zs,Pantev:2005wj,Hellerman:2006zs}. In any event, from the same reasoning, orbifolds by 2-groups with trivially-acting one-form symmetries appear to be presentations of sigma models on 2-gerbes, just as sigma models on ordinary gerbes are realized in terms of gauge theories with trivially-acting (ordinary) subgroups \cite{Pantev:2005rh,Pantev:2005zs,Pantev:2005wj}. As discussed in \cite[section 2]{Hellerman:2010fv}, a map $f: Y \rightarrow {\cal G}$, for ${\cal G}$ a (banded) $G$-gerbe over $M$ ($G$ assumed finite), defines\footnote{ In fact, the map $f$ is equivalent to the map $\tilde{f}$ plus a specific choice of trivialization of $\tilde{f}^* {\cal G}$. } a map $\tilde{f}: Y \rightarrow M$ with a trivialization of $\tilde{f}^* {\cal G}$. If $\dim Y = 2$, this gives a restriction on the degree of $\tilde{f}$. Explicitly, let $\pi: {\cal G} \rightarrow M$ be projection, then $\tilde{f} = \pi \circ f$, and $\tilde{f}^* {\cal G}$ has a canonical trivialization. This trivialization may be clearer to the reader in the closely related case of bundles. Given a map $g: Y \rightarrow E$ for some bundle $\pi: E \rightarrow M$, we can define $\tilde{g} = \pi \circ g$, and then as \begin{equation} \tilde{g}^* E \: = \: \{(y,e) \in Y \times E \, | \, \tilde{g}(y) = \pi(e) \}, \end{equation} there is a trivialization $Y \rightarrow \tilde{g}^* E$ given by $y \mapsto (y,g(y))$. The same analysis applies to gerbes. So, we have that a map $f: Y \rightarrow {\cal G}$ defines a map $\tilde{f}: Y \rightarrow M$ such that $\tilde{f}^* {\cal G}$ is trivializable. As discussed in \cite[section 2]{Hellerman:2010fv}, if $\dim Y = 2$, this implies a restriction on degrees. If the characteristic class of ${\cal G}$ is $\omega \in H^2( M, G)$ ($G$ finite), then $\tilde{f}^* \omega = 0 \in H^2(Y, G)$. For example, if $Y = {\mathbb P}^1$ and $M = {\mathbb P}^N$, with $\tilde{f}: {\mathbb P}^1 \rightarrow {\mathbb P}^N$ of degree $d$, and $G = {\mathbb Z}_k$, then $\tilde{f}^* \omega = d \omega$, and $d \omega = 0 \in H^2( {\mathbb P}^1, {\mathbb Z}_k)$ means $d \omega \equiv 0 \mod k$, i.e.~that the product of $d$ and the characteristic class is divisible by $k$. 
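To make the restriction on degrees concrete, the following small script (an illustrative sketch only; the sample values of $k$ and of the gerbe characteristic class are arbitrary choices) lists which degrees $d$ of maps ${\mathbb P}^1 \rightarrow {\mathbb P}^N$ are compatible with a ${\mathbb Z}_k$-gerbe whose characteristic class is $c$ times the generator of $H^2({\mathbb P}^N, {\mathbb Z}_k) \cong {\mathbb Z}_k$; only those with $d c \equiv 0 \bmod k$ survive.
\begin{verbatim}
# Which degrees d of maps P^1 -> P^N lift to maps into a Z_k-gerbe with
# characteristic class c * (generator of H^2(P^N, Z_k) = Z_k)?
# The restriction is d*c = 0 mod k.  The (k, c) pairs are arbitrary samples.
def allowed_degrees(k, c, max_degree=12):
    return [d for d in range(max_degree + 1) if (d * c) % k == 0]

for k, c in [(4, 1), (4, 2), (6, 3), (6, 4)]:
    print(f"k = {k}, class = {c}:", allowed_degrees(k, c))
# e.g. k = 4, c = 1 allows only multiples of 4, while k = 4, c = 2
# allows all even degrees.
\end{verbatim}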
If the dimension of $Y$ is not two, then one still has a constraint that $\tilde{f}^* {\cal G}$ is trivializable, which does restrict the possible maps $\tilde{f}$; however, that restriction will not be describable as simply as a restriction on map degrees. Briefly, the same formal arguments apply to (banded analogues of) 2-gerbes. Just as for ordinary gerbes, given a map $f: Y \rightarrow {\cal G}$, for ${\cal G}$ a 2-($G$-)gerbe over $M$, essentially the same argument as before gives a map $\tilde{f}: Y \rightarrow M$ with a restriction on degrees, following from the statement that $\tilde{f}^* {\cal G}$ is trivializable (and so has vanishing characteristic class in $H^3(Y,G)$). \section{Analogues in other dimensions and other degrees} \label{sect:higher} \subsection{Decomposition in higher-dimensional orbifolds} In this section, we make some conjectures for how this program could be continued into higher dimensions, by observing that the arguments we have applied to ordinary central extensions and 2-group extensions also apply, with only minor modifications, to higher-group extensions. Consider orbifolds in $d$ dimensions. Specifically, consider gauging a higher-group extension \begin{equation} 1 \: \longrightarrow \: B^{d-2} K \: \longrightarrow \: \tilde{\Gamma} \: \longrightarrow \: G \: \longrightarrow \: 1, \end{equation} for $K$ a finite abelian group, classified by an element $[\omega] \in H^{d}(G,K)$. The orbifold $[X/\tilde{\Gamma}]$ has the structure of an $[X/G]$ orbifold but with a restriction on the $G$ sectors, namely that they trivialize a coboundary-invariant constructed from $\omega$, or explicitly $x^* \omega = 1$ in the notation of appendix~\ref{app:homotopy}. For example, on $T^d$, we require that commuting $d$-tuples $g_1, \cdots, g_d$ also obey \begin{equation} \label{eq:dtuple:restr} \epsilon(g_1, \cdots, g_d) \: = \: 1 \: \in \: K, \end{equation} for \begin{equation} \epsilon(g_1, \cdots, g_d) \: = \: \prod_{{\rm perm's} \: \sigma} \omega(g_{\sigma(1)}, \cdots, g_{\sigma(d)} )^{{\rm sgn}\, \sigma}, \end{equation} as outlined in appendix~\ref{app:tori}. The reader should note in passing that the phase $\epsilon$ above, for coefficients in any abelian group, obeys standard properties of discrete-torsion-like phases, specifically, \begin{itemize} \item the phase $\epsilon$ is invariant under coboundaries, and so is well-defined on cohomology $H^d(G,U(1))$, \item the phase $\epsilon$ is a homomorphism in the sense that \begin{equation} \epsilon(ab,g_3,\cdots,g_{d+1}) \: = \: \epsilon(a,g_3, \cdots, g_{d+1}) \, \epsilon(b,g_3,\cdots,g_{d+1}), \end{equation} (and similarly for products in other positions, from the antisymmetry of $\epsilon$), as can be verified from the identity \begin{equation} {\prod_{ {\rm perm's}\: \sigma}}^{\!\!\!\prime} \: (d\omega)\left(g_{\sigma(1)}, \cdots, g_{\sigma(d+1)} \right)^{ {\rm sgn}\:\sigma} \: = \: 1, \end{equation} for permutations of the $(d+1)$-tuple $(a, b, g_3, \cdots, g_{d+1})$, where the prime indicates that we restrict to permutations preserving the order of $a$, $b$, \item the phase $\epsilon(g_1,\cdots,g_d)$ is invariant under $SL(d,{\mathbb Z})$ actions on the group elements, as is straightforward to verify from the homomorphism property. 
\end{itemize} Returning to partition functions, the restriction above on $G$ bundles can be implemented by inserting a projector, which (as discussed previously) is equivalent to a decomposition into universes $[X/G]$ weighted by a discrete theta angle coupling to $x^* \omega$, in the notation of appendix~\ref{app:homotopy}. In the special case of $T^d$, the restriction above to $d$-tuples obeying~(\ref{eq:dtuple:restr}) is equivalent to inserting a projection operator in an ordinary $[X/G]$ orbifold, with a projector which on $T^d$ takes the form \begin{equation} \frac{1}{|K|} \sum_{\rho \in \hat{K}} \epsilon_{\rho}(g_1, \cdots, g_d), \end{equation} where $\epsilon_{\rho} \in U(1)$ is the image of $\epsilon$ under $\rho: K \rightarrow U(1)$. The resulting $T^d$ partition function is the same as that of a sum of partition functions of $[X/G]$ orbifolds, each with a discrete-torsion-like phase factor defined by $\epsilon_{\rho}$. Thus, in higher dimensions, based on the partition function analysis above, we expect that the $[X/\tilde{\Gamma}]$ orbifold decomposes: \begin{equation} {\rm QFT}\left( [X/\tilde{\Gamma}] \right) \: = \: {\rm QFT}\left( \coprod_{\rho \in \hat{K}} [X/G]_{\rho(\omega)} \right), \end{equation} (for $\rho(\omega)$ indicating a discrete theta angle $\rho$ coupled to $x^* \omega$), which at least in special cases can be expressed in the form \begin{equation} {\rm QFT}\left( [X/\tilde{\Gamma}] \right) \: = \: {\rm QFT}\left( \coprod_{\rho \in \hat{K}} [X/G]_{\rho(C)} \right), \end{equation} for $\rho(C)$ expressing elements of higher-dimensional analogues of discrete torsion. (Interpreted literally as a sigma model, this theory should only be understood as a low-energy effective action, of course, though this should also be a prototype for theories in $d$ dimensions.) It is also straightforward to outline the origin of projectors in this language. In two-dimensional orbifolds, the projectors onto the universes are constructed as linear combinations of the twist fields associated to trivially-acting group elements. Now, in a $d$-dimensional theory, if we gauge a $p$-form symmetry, then in the language of topological defect lines (see e.g.~\cite{Chang:2018iay}), one gets a real codimension $(p+1)$ object that generalizes the branch cuts of an orbifold, and which terminates on a real codimension $(p+2)$ object, which is the analogue of a twist field. So, work in $d$ dimensions, and gauge a (trivially-acting) $(d-2)$-form symmetry. In principle, this should result in a theory with a global $(d-1)$-form symmetry, and hence a decomposition. Because we have gauged a $(d-2)$-form symmetry, we get a real codimension $(d-1)$ object, an analogue of the two-dimensional branch cut, which terminates at a real codimension $d$ object (an analogue of a twist field), which in $d$ dimensions is pointlike. Those pointlike objects, those analogues of twist fields, could then be used to construct projectors. \subsection{Interpretation: higher-dimensional sigma models} \label{sect:higher-sigma} In this paper we have discussed how maps from 2-manifolds into ordinary gerbes and maps from 3-manifolds into 2-gerbes define maps into spaces with restrictions on degrees (following from the constraint that the pullback of the gerbe be trivial). There is a very closely analogous story for higher gerbes, which we outline in this section (slightly generalizing \cite[section 2]{Hellerman:2010fv}). Maps into ($m$-)$G$-gerbes are closely related to maps into underlying spaces with restrictions on degrees. 
Consider a map $f$ from a space $Y$ into an ($m$-)gerbe ${\cal G} \rightarrow M$. Composing with the projection gives a map $\tilde{f}: Y \rightarrow M$. The map $f$ defines a section of $\tilde{f}^* {\cal G}$, almost by definition, hence it trivializes $\tilde{f}^* {\cal G}$. As a consequence, the map $\tilde{f}$ induces \begin{equation} \tilde{f}^*: \: H^{m+1}(M, G) \: \longrightarrow \: H^{m+1}(Y, G). \end{equation} The characteristic class of the $m$-gerbe ${\cal G}$ must be in the kernel of that map, hence there is a restriction on possible maps $\tilde{f}$. In particular, a map $f: Y \rightarrow {\cal G}$ is equivalent to a map $\tilde{f}: Y \rightarrow M$, trivializing the characteristic class of the gerbe, together with a specific choice of trivialization of the $m$-gerbe $\tilde{f}^* {\cal G}$, which is an $(m-1)$-gerbe over $Y$. Depending upon the circumstances, this may imply a restriction on the map $\tilde{f}$. For example, if ${\cal G}$ is an $m$-gerbe and $\dim Y \leq m$, the map $\tilde{f}$ is unconstrained, since the pullback of the characteristic class is an element of $H^{m+1}(Y,G) = 0$, so all maps are in the kernel. On the other hand, suppose we have an $m$-gerbe and $\dim Y > m$. (For example, a four-dimensional low-energy effective sigma model mapping into a 1-gerbe, 2-gerbe, or 3-gerbe.) In this case, the map $\tilde{f}$ is constrained, but depending upon the relative values of $m$ and $\dim Y$, the restriction may be on e.g.~lower homotopy. \section{Analogues in Chern-Simons theories in three dimensions} \label{sect:cs} It is well-known that gauging the $B {\mathbb Z}_2$ central symmetry of $SU(2)$ Chern-Simons theory in three dimensions results in an $SO(3)$ Chern-Simons theory. Briefly, the path integral sums over ${\mathbb Z}_2$ gerbes and gerbe-twisted $SU(2)$ bundles with connection, for which bundle transition functions only close up to gerbe transition functions on triple overlaps; the resulting path integral is precisely a path integral over $SO(3)$ bundles with connection, for which the second Stiefel-Whitney class $w_2$ coincides with the gerbe characteristic class, and the third Stiefel-Whitney class is determined by a Steenrod square as $w_3 = {\rm Sq}^1(w_2)$. In that case, the $B {\mathbb Z}_2$ acted nontrivially on line operators, specifically as phases determined by the $n$-ality of the representation (partially) defining the Wilson line. We could consider more general situations, in which the one-form symmetry group maps to an action on the center, but with a nonzero kernel. In general, consider a 2-group $\Gamma$ defined by a crossed module $\{d: A \rightarrow H\}$, where $A$ is abelian and the image of $d$ is contained within the center of the group $H$. If we let $K$ denote the kernel of $d$, and $G = H / {\rm im}\, d$, then there is an exact sequence \begin{equation} 1 \: \longrightarrow \: K \: \longrightarrow \: A \: \longrightarrow \: H \: \longrightarrow \: G \: \longrightarrow \: 1, \end{equation} which defines an element $\omega \in H^3(G,K)$. In principle, if $G$ is, for example, a Lie group, but we are only concerned with flat bundles, then the same homotopy computations of appendix~\ref{sect:homotopy:2-group} imply that (flat) $\Gamma$ bundles map to (flat) $G$ bundles obeying the constraint that $\phi^* \omega = 0$. 
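As a toy finite illustration of this four-term sequence (a sketch with arbitrarily chosen small groups, unrelated to the $SU(2)$ example below): take $A = H = {\mathbb Z}_4$ with $d$ given by multiplication by $2$, whose image lies in the center; then $K = \ker d \cong {\mathbb Z}_2$ and $G = H/{\rm im}\, d \cong {\mathbb Z}_2$, as the following enumeration confirms.
\begin{verbatim}
# Toy computation of the four-term sequence 1 -> K -> A -> H -> G -> 1
# attached to a crossed module d: A -> H with central image.
# Arbitrary finite example: A = H = Z_4, d(a) = 2a mod 4.
A = H = list(range(4))
d = lambda a: (2 * a) % 4

K = [a for a in A if d(a) == 0]                  # kernel of d
image = sorted({d(a) for a in A})                # image of d (central in H)
cosets = {tuple(sorted((h + i) % 4 for i in image)) for h in H}
G_order = len(cosets)                            # |H / im d|

print("K =", K, " im d =", image, " |G| =", G_order)
# Output: K = [0, 2] (a copy of Z_2), im d = [0, 2], |G| = 2,
# so the sequence reads 1 -> Z_2 -> Z_4 -> Z_4 -> Z_2 -> 1.
\end{verbatim}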
Such a constraint can be implemented via a decomposition, and flat bundles arise in Chern-Simons theories, so we have a prediction: \begin{equation} \mbox{Chern-Simons}(H) / BA \: = \: \coprod_{\theta \in \hat{K}} \mbox{Chern-Simons}(G)_{\theta}, \end{equation} where the $\theta$ are discrete theta angles coupling to $\phi^* \omega$, for levels such that the Chern-Simons theories are defined. For example, consider an $SU(2)$ Chern-Simons theory with an action of $B {\mathbb Z}_4$, which maps to the central one-form symmetry of $SU(2)$, with a $B {\mathbb Z}_2$ kernel which leaves all line operators invariant. In this case, we predict \begin{equation} \mbox{Chern-Simons}(SU(2)) / B {\mathbb Z}_4 \: = \: \mbox{Chern-Simons}(SO(3))_+ \: \coprod \: \mbox{Chern-Simons}(SO(3))_-. \end{equation} This form of decomposition will be discussed in detail in upcoming work. \section{Conclusions} In this paper we have discussed 2-group orbifolds and their decomposition. Because these theories involve the gauging of a trivially-acting one-form symmetry, they possess a global two-form symmetry, implying a decomposition. The pattern followed is very similar to two dimensions: the twisted sectors of the 2-group orbifolds look like twisted sectors of ordinary orbifolds obeying a constraint, and that constraint is implemented by the decomposition. In our analysis, we specialized to 2-groups that were analogues of central extensions, defined in part by trivial group actions of $G$ on $K$. It would be interesting to consider more general cases; such analyses are left for future work. One direction that would be interesting to pursue would be to deform 2-group orbifolds by turning on $C$ field flux, in the same way that one can turn on discrete torsion to deform ordinary two-dimensional orbifolds. Decomposition in orbifolds with discrete torsion was discussed in \cite{Robbins:2020msp}. A related direction that would be interesting to pursue would be analogues of quantum symmetries in 2-group orbifolds, generalizing the results of \cite{Robbins:2021ibx}. \section*{Acknowledgements} We would like to thank S.~Gukov, U.~Schreiber, Y.~Tachikawa, B.~T\"oen, and M.~Yu for useful discussions. T.P. was partially supported by NSF/BSF grant DMS-2200914, NSF grant DMS-1901876, and Simons Collaboration grant number 347070. D.R. and T.V. were partially supported by NSF grant PHY-1820867. E.S. was partially supported by NSF grant PHY-2014086.
2,869,038,156,157
arxiv
\section{Introduction} The nonabelian Hodge theory established by Hitchin and Simpson associates representations of the topological fundamental group of an algebraic variety $X$ over $\mathbb C$ to a holomorphic object on $X$ called a Higgs bundle. Later, Ogus and Vologodsky established the nonabelian Hodge theory in positive characteristic in their fundamental work~\cite{OgVo07}. They constructed the Cartier functor and the inverse Cartier functor, which give an equivalence of categories between the category of nilpotent Higgs modules of exponent $\leq p-1$ and the category of nilpotent flat modules of exponent $\leq p-1$ over a smooth proper and $W_2(k)$-liftable variety. This equivalence generalizes the classical Cartier descent theorem. Fontaine-Laffaille~\cite{FoLa82}, for $\mathcal X= \Spec\,W(k)$, and Faltings, in the general case, introduced the category $\mathcal{MF}_{[a,b]}^{\nabla}(\mathcal X/W)$. The objects in $\mathcal{MF}_{[a,b]}^{\nabla}(\mathcal X/W)$ are the so-called \emph{Fontaine-Faltings modules}; each consists of a quadruple $(V,\nabla,\Fil,\varphi)$, where $(V,\nabla,\Fil)$ is a filtered de Rham bundle over $\mathcal X$ and $\varphi$ is a relative Frobenius which is horizontal with respect to $\nabla$ and satisfies the strong $p$-divisibility condition. The latter condition is a $p$-adic analogue of the Riemann-Hodge bilinear relations. Then the Fontaine-Laffaille-Faltings correspondence gives a fully faithful functor from $\mathcal{MF}_{[0,w]}^{\nabla}(\mathcal X/W)$ $(w \leq p-2)$ to the category of \emph{crystalline representations} of $\pi^\text{\'et}_1(X_K)$, where $X_K$ is the generic fiber of $\mathcal X$. This can be regarded as a $p$-adic version of the Riemann-Hilbert correspondence.\\[.1cm] Faltings \cite{Fal05} has established an equivalence of categories between the category of generalized representations of the geometric fundamental group and the category of Higgs bundles over a $p$-adic curve, which generalized the earlier work of Deninger-Werner \cite{DW} on a partial $p$-adic analogue of Narasimhan-Seshadri theory. \\[.1cm] Lan, Sheng and Zuo have established a $p$-adic analogue of the Hitchin-Simpson correspondence between the category of $\mathrm{GL}_r(W_n(\mathbb F_q))$-crystalline representations and the category of graded periodic Higgs bundles by introducing the notion of \emph{Higgs-de Rham flow}. It is a sequence of graded Higgs bundles and filtered de Rham bundles, connected by the inverse Cartier transform defined by Ogus and Vologodsky \cite{OgVo07} and by the grading functor given by the attached Hodge filtrations on the de Rham bundles (for details see Section $3$ in \cite{LSZ13a} or Section \ref{section HDF} in this paper). \\[.1cm] A periodic Higgs bundle must have trivial Chern classes. This fact limits the application of the $p$-adic Hitchin-Simpson correspondence. For instance, Simpson constructed a canonical Hodge bundle $\Omega^1_X \oplus \mathcal{O}_X$ on $X$ in his proof of the Miyaoka-Yau inequality (Proposition 9.8 and Proposition 9.9 in~\cite{Simpson}), which has nontrivial Chern classes in general. In fact, the classical nonabelian Hodge theorem tells us that the Yang-Mills-Higgs equation is still solvable for a polystable Higgs bundle with nontrivial Chern classes. Instead of getting a flat connection, one gets a \emph{projective flat connection} in this case, whose monodromy gives a $\mathrm{PGL}_r$-representation of the fundamental group. 
This motivates us to find a $p$-adic Hitchin-Simpson correspondence for graded Higgs bundles with nontrivial Chern classes.\\[.1cm] As the first main result of this paper we introduce the $1$-periodic \emph{twisted Higgs-de Rham flow} over $X_1$ as follows: \[ \xymatrix{ & (V,\nabla,\Fil)_0\ar[dr]^{\mathrm{Gr}(\cdot)\otimes (L,0)} & \\ (E,\theta)_0 \ar[ur]^{C_1^{-1}} & & (E,\theta)_1\otimes(L,0) \ar@/^1pc/[ll]^{\phi_L}_\sim } \] Here $L$ is called a twisting line bundle on $X_1$, and $\phi_L : (E_1,\theta_1) \otimes (L,0) \cong (E_0,\theta_0)$ is called the twisted $\phi$-structure.\\ On the Fontaine module side, we also introduce the \emph{twisted Fontaine-Faltings module} over $X_1$. The latter consists of the following data: a filtered de Rham bundle $(V,\nabla,\Fil)$ together with an isomorphism between de Rham bundles: \[ \varphi_L: (C^{-1}_{1} \circ \mathrm{Gr}_{\Fil}(V,\nabla)) \otimes (L^{\otimes p},\nabla_{\mathrm{can}}) \cong (V,\nabla). \] We will refer to the isomorphism $\varphi_L$ as the twisted $\varphi$-structure. The general construction of twisted Fontaine-Faltings modules and twisted periodic Higgs-de Rham flows is given in Section~\ref{section TFFMES} and Section~\ref{section TPHDF} (over $X_n/W_n(k)$, and in the multi-periodic case). \begin{thm}[Theorem~\ref{equiv:TFF&THDF}]\label{thm:second_thm} Let $\mathcal X$ be a smooth proper scheme over $W$. For each integer $0 \leq a \leq p-2$ and each $f \in \mathbb{N}$, there is an equivalence of categories between the category of all twisted $f$-periodic Higgs-de Rham flows over $X_n$ of level $\leq a$ and the category of strict $p^n$-torsion twisted Fontaine-Faltings modules over $X_n$ of Hodge-Tate weight $\leq a$ with an endomorphism structure of $W_n(\mathbb{F}_{p^f})$. \end{thm} Theorem \ref{thm:second_thm} can be generalized to the logarithmic case (Theorem~\ref{equiv:logTFF&THDF}).\\[.2cm] The next goal is to associate a $\mathrm{PGL}_n$-representation of $\pi^\text{\'et}_1$ to a twisted (logarithmic) Fontaine-Faltings module. To do so, we need to generalize Faltings' work. Following Faltings \cite{Fal89}, we construct a functor $\mathbb{D}^P$ in section~\ref{section FDP}, which associates to a twisted (logarithmic) Fontaine-Faltings module a $\mathrm{PGL}_n$-representation of the \'etale fundamental group. \begin{thm}[Theorem~\ref{ConsFunc:D^P}] Let $\mathcal X$ be a smooth proper geometrically connected scheme over $W$ with a simple normal crossing divisor $\D\subset \mathcal X$ relative to $W$. Suppose $\mathbb F_{p^f}\subset k$. Let $M$ be a twisted logarithmic Fontaine-Faltings module over $\mathcal X$ (with pole along $\D$) with endomorphism structure of $W(\mathbb{F}_{p^f})$. Applying the $\mathbb D^P$-functor, one gets a projective representation \[\rho : \pi^\text{\'et}_1(X_K^o) \to \mathrm{PGL}(\mathbb{D}^P(M)),\] where $X_K^o$ is the generic fiber of $\mathcal X^o=\mathcal X\setminus \D$. \end{thm} In Section~\ref{section SRSPHdRF}, we study several properties of this functor $\mathbb{D}^P$. For instance, we prove that a projective subrepresentation of $\mathbb{D}^P(M)$ corresponds to a sub-object $N \subset M$ such that $\mathbb{D}^P(M/N)$ is isomorphic to this subrepresentation. 
Combining this with Theorem~\ref{equiv:TFF&THDF}, we infer that a projective representation coming from a stable twisted periodic Higgs bundle $(E,\theta)$ with $(\mathrm{rank}(E),\deg_H(E))=1$ must be irreducible.\\ The next theorem gives a $p$-adic analogue of the existence of a projective flat Yang-Mills-Higgs connection in terms of semistability of Higgs bundles and triviality of the discriminant. \begin{thm}[Theorem~\ref{Main: preperiod}] A Higgs bundle over $X_1$ initiates a twisted preperiodic Higgs-de Rham flow if and only if it is semistable and has trivial discriminant.\end{thm} Consequently we obtain the existence of non-trivial representations of the \'etale fundamental group in terms of the existence of semistable graded Higgs bundles.\\[.2cm] \begin{defi} A representation $\pi^\text{\'et}_1(X^o_K)\to \PGL_r(\mathbb{F}_q)$ is called geometrically absolutely irreducible if its pull-back to the geometric fundamental group $$\bar\rho: \pi^\text{\'et}_1( {X^o}_{\bar {{\mathbb Q}}_p})\to \PGL_r({\mathbb F}_{q})$$ is absolutely irreducible, i.e. it is irreducible as a $\PGL_r(\bar{{\mathbb F}}_p)$-representation. \end{defi} \begin{thm}[Theorem~\ref{Mainthm}]Let $k$ be a finite field of characteristic $p$. Let $\mathcal X$ be a smooth proper geometrically connected scheme over $W(k)$ together with a smooth log structure $\D/W(k)$ and let $\mathcal X^o=\mathcal X\setminus \D.$ Assume that there exists a semistable graded logarithmic Higgs bundle $(E,\theta)/(\mathcal X,\D)_1$ with $r:=\mathrm{rank}(E) \leq p-1$, discriminant $\Delta_H(E)=0$, and such that $r$ and $\deg_H(E)$ are coprime. Then there exists a positive integer $f$ and a geometrically absolutely irreducible $\PGL_r(\mathbb{F}_{p^f})$-representation $\rho$ of $\pi^\text{\'et}_1(X^o_{K'})$, where $\mathcal X^o=\mathcal X\setminus \D$ and $K'=W(k\cdot\mathbb{F}_{p^f})[1/p]$. \end{thm} The proof of Theorem $0.5$ will be divided into two parts. We first show the existence of the irreducible projective representation of $\pi^\text{\'et}_1(X^o_{K'})$ in Section $3$ (see Theorem~\ref{Mainthm}). The proof for the geometric irreducibility of $\rho$ will be postponed to Section 5. \\[.1cm] The second main result of this paper, the so-called \emph{base change of the projective Fontaine-Faltings module and twisted Higgs-de Rham flow over a very ramified valuation ring $V$}, is introduced in Section 5. We show that there exists an equivalent functor from the category of twisted periodic Higgs-de Rham flows over $ {\mathscr X}_{\pi,1}$ to the category of twisted Fontaine-Faltings modules over $ {\mathscr X}_{\pi,1},$ where $ {\mathscr X}_{\pi,1}$ is the closed fiber of the formal completion of the base change of $\mathcal{X}$ to the PD-hull of $V$. As a consequence, we prove the second statement of Theorem 0.5 on the geometric absolute irreducibility of $\rho$ in Subsection 5.4 (see Theorem~\ref{ramified_Thm}).\\[.1cm] We would like to emphasize that the Fontaine-Faltings module and Higgs-de Rham flow over a very ramified valuation ring $V$ introduced here should be a crucial step toward constructing a $p$-adic Hitchin-Simpson correspondence between the category of de Rham representations of $\pi_1^\text{\'et}(X_{V[1/p]})$ and the category of periodic Higgs bundles over a potentially semistable reduction $\mathcal{X}_{V}$.\\[.2cm] As the third ingredient of this paper, we investigate the dynamics of Higgs-de Rham flows on the projective line with marked points in Section~\ref{section CCREFGpCHB}. 
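(Since periodicity under a self-map over a finite field is the central dynamical notion in what follows, the toy script below illustrates how periodic points of a self-map of $\mathbb{P}^1(\mathbb{F}_q)$ are detected by brute-force iteration. It is purely illustrative: the rational map used is an arbitrary stand-in, not the self-map induced by the Higgs-de Rham flow.)
\begin{verbatim}
# Toy illustration: find the points of P^1(F_q) that are periodic under a
# self-map, by brute-force iteration.  The map phi below is an arbitrary
# stand-in, NOT the self-map induced by the Higgs-de Rham flow.
q = 13
INF = "inf"                                      # the point at infinity

def phi(x):
    # arbitrary example map x -> x^2 + 1 on P^1(F_q), fixing infinity
    if x == INF:
        return INF
    return (x * x + 1) % q

points = [INF] + list(range(q))
periodic = []
for x in points:
    y, n = phi(x), 1
    while y != x and n < len(points):            # iterate at most |P^1| steps
        y, n = phi(y), n + 1
    if y == x:
        periodic.append((x, n))                  # (point, exact period)

print("periodic points and their periods:", periodic)
\end{verbatim}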
Taking the moduli space $M$ of graded stable Higgs bundles of rank $2$ and degree $1$ over $\mathbb{P}^1$ with logarithmic structure on $m\geq 4$ marked points, we show that the self-map induced by the Higgs-de Rham flow stabilizes the component $M(1,0)$ of $M$ of maximal dimension $m-3$ as a rational and dominant map. Hence by Hrushovski's theorem \cite{Hru} the subset of periodic Higgs bundles is Zariski dense in $M(1,0)$. In this way, we produce infinitely many geometrically absolutely irreducible $\mathrm{PGL}_2(\mathbb{F}_{p^f})$-crystalline representations. By Theorem~\ref{Mainthm}, all these representations lift to $\mathrm{PGL}_2(\mathbb Z_p^{ur})$-crystalline representations. In Proposition~\ref{strong_irred} we show that all those lifted representations are strongly irreducible.\\ For the case of four marked points $\{0,1,\infty,\lambda\} $ we state an explicit formula for the self-map and use it to study the dynamics of Higgs-de Rham flows for $p=3$ and several values of $\lambda$. \\[.1cm] Much more excitingly, we conjecture (Conjecture \ref{conj-1}) that the self-map on the moduli space $M(1,0)$ induced by the Higgs-de Rham flow for $\mathbb{P}^1\supset \{0,\,1,\infty,\lambda\}$ coincides with the multiplication by $p$ map on the associated elliptic curve defined as the double cover $\pi: \mathcal{C}_\lambda\to\mathbb{P}^1$ ramified over $\{0,\,1,\infty,\lambda\}$. We have checked that this conjecture holds true for $p\leq 50.$ It looks really surprising that the self-map coming from nonabelian $p$-adic Hodge theory has something to do with the addition law on an elliptic curve. \\[.1cm] For $\ell$-adic representations Kontsevich has observed a relation between the set of isomorphism classes of $\text{GL}_2(\bar{\mathbb Q}_\ell)$-local systems over $\mathbb{P}^1\setminus \{0,\,1,\infty,\lambda\}$ over $\mathbb{F}_q$ and the set of rational points on $\mathcal{C}_\lambda$ over $\mathbb{F}_q$ via the work of Drinfeld on the Langlands program over function fields. It looks quite mysterious. There should exist a relation between periodic Higgs bundles in the $p$-adic world and the Hecke-eigenforms in the $\ell$-adic world via Abe's solution of Deligne's conjecture on $\ell$-to-$p$ companions. We plan to carry out this program in a forthcoming paper joint with J.~Lu and X.~Lu \cite{preparation}.\\[.2cm] In the last subsection \ref{proj F-units}, we consider a smooth projective curve $\mathcal X$ over $W(k)$ of genus $g\geq2$. In the Appendix of \cite{Osserman}, de Jong and Osserman have shown that the subset of twisted periodic vector bundles over $X_1$ in the moduli space of semistable vector bundles over $X_1$ of any rank and any degree is always Zariski dense. By applying our main theorem to twisted periodic Higgs-de Rham flows with zero Higgs fields (which should be regarded as projective \'etale trivializable vector bundles, in the sense of a projective version of Lange-Stuhler's theorem, see~\cite{LangeStuhe}), they all correspond to $\mathrm{PGL}_r(\mathbb{F}_{p^f})$-representations of $\pi^\text{\'et}_1(X_1)$. Once again we show that they all lift to $\mathrm{PGL}_r(\mathbb Z_p^\mathrm{ur})$-representations of $\pi^\text{\'et}_1(X_1)$. It should be very interesting to make a comparison between the lifting theorem obtained here, lifting $\mathrm{GL}_r(\mathbb{F}_{p^f})$-representations of $\pi^\text{\'et}_1(X_1)$ to $\mathrm{GL}_r(\mathbb Z_p^\mathrm{ur})$-representations of $\pi^\text{\'et}_1({X_1}_{\bar{\mathbb{F}}_p})$, and the lifting theorem developed by Deninger-Werner~\cite{DW}. 
In their paper, they have shown that any vector bundle over $\mathcal X/W$ which is \'etale trivializable over $X_1$ gives rise to a $\mathrm{GL}_r(\mathbb{C}_p)$-representation of $\pi^\text{\'et}_1(X_{\overline K})$.\\ \section{Twisted Fontaine-Faltings modules}\label{section TFFM} In this section, we will recall the definition of Fontaine-Faltings modules in~\cite{Fal89} and generalize it to the twisted version. \subsection{Fontaine-Faltings modules}\label{section FFM} Let $X_n$ be a smooth and proper variety over $W_n(k)$, and let $(V,\nabla)$ be a \emph{de Rham sheaf} (i.e. a sheaf with an integrable connection) over $X_n$. In this paper, a filtration $\Fil$ on $(V,\nabla)$ will be called a \emph{Hodge filtration of level in $[a,b]$} if the following conditions hold: \begin{itemize} \item[-] $\Fil^i V$'s are locally split sub-sheaves of $V$, with \[V=\Fil^aV\supset \Fil^{a+1}V \supset\cdots \supset \Fil^bV\supset \Fil^{b+1}V=0,\] and locally on all open subsets $U\subset X_n$, the graded factors $\Fil^i V(U)/\Fil^{i+1} V(U)$ are finite direct sums of $\mathcal O_{X_n}(U)$-modules of the form $\mathcal O_{X_n}(U)/p^e$. \item[-] $\Fil$ satisfies Griffiths transversality with respect to the connection $\nabla$. \end{itemize} In this case, the triple $(V,\nabla,\Fil)$ is called a \emph{filtered de Rham sheaf}. One defines \emph{(filtered) de Rham modules over a $W$-algebra} similarly. \subsubsection{Fontaine-Faltings modules over a small affine base.} Let $\mathcal U=\mathrm{Spec}R$ be a small affine scheme (which means that there exists an \'etale map $$W_n[T_1^{\pm1},T_2^{\pm1},\cdots, T_{d}^{\pm1}]\rightarrow \mathcal{O}_{X_n}(U),$$ see \cite{Fal89}) over $W$ and $\Phi:\widehat{R}\rightarrow\widehat{R}$ be a lifting of the absolute Frobenius on $R/pR$, where $\widehat{R}$ is the $p$-adic completion of $R$. A Fontaine-Faltings module over $\mathcal U$ of Hodge-Tate weight in $[a,b]$ is a quadruple $(V,\nabla,\Fil,\varphi)$, where \begin{itemize} \item[-] $(V,\nabla)$ is a de Rham $R$-module; \item[-] $\Fil$ is a Hodge filtration on $(V,\nabla)$ of level in $[a,b]$; \item[-] $\varphi$ is an $R$-linear isomorphism \[\varphi:F^*_{\widehat{\mathcal U},\Phi}\widetilde{V}=\widetilde{V}\otimes_{\Phi}\widehat{R} \longrightarrow V,\] where $F_{\widehat{\mathcal U},\Phi}=\mathrm{Spec}(\Phi)$, and $\widetilde{V}$ is the quotient $\bigoplus\limits_{i=a}^b\Fil^i/\sim$ with $x\sim py$ for any $x\in\Fil^iV$, where $y$ is the image of $x$ under the natural inclusion $\Fil^iV\hookrightarrow\Fil^{i-1}V$. \item[-] The relative Frobenius $\varphi$ is horizontal with respect to the connections $F^*_{\widehat{\mathcal U},\Phi}\widetilde{\nabla}$ on $F^*_{\widehat{\mathcal U},\Phi}\widetilde{V}$ and $\nabla$ on $V$, i.e. the following diagram commutes: \begin{equation*} \xymatrix{F^*_{\widehat{\mathcal U},\Phi}\widetilde{V} \ar[r]^{\varphi} \ar[d]^{F^*_{\widehat{\mathcal U},\Phi}\widetilde{\nabla}} & V\ar[d]^{\nabla} \\ F^*_{\widehat{\mathcal U},\Phi}\widetilde{V}\otimes \Omega_{\mathcal U/W}^1 \ar[r]^{\quad \varphi\otimes \mathrm{id}} & V\otimes \Omega_{\mathcal U/W}^1} \end{equation*} \end{itemize} Let $M_1=(V_1,\nabla_1,\Fil_1,\varphi_1)$ and $M_2=(V_2,\nabla_2,\Fil_2,\varphi_2)$ be two Fontaine-Faltings modules over $\mathcal U$ of Hodge-Tate weight in $[a,b]$. The set of homomorphisms between $M_1$ and $M_2$ consists of those morphisms $f:V_1\rightarrow V_2$ of $R$-modules satisfying: \begin{itemize} \item[-] $f$ is strict for the filtrations, i.e. $f^{-1}(\Fil^iV_2)=\Fil^iV_1$. 
\item[-] $f$ is a morphism of de Rham modules, i.e. $(f\otimes \mathrm{id})\circ\nabla_1=\nabla_2\circ f$; \item[-] $f$ commutes with the $\varphi$-structures, i.e. $f\circ\varphi_1=\varphi_2\circ(\widetilde{f}\otimes \mathrm{id})$, where $\widetilde{f}$ is the image of $f$ under Faltings' tilde functor. \end{itemize} Denote by $\mathcal {MF}_{[a,b]}^{\nabla,\Phi}(\mathcal U/W)$ the category of all Fontaine-Faltings modules over $\mathcal U$ of Hodge-Tate weight in $[a,b]$. \paragraph{\emph{The gluing functor.}} In the following, we recall Faltings' gluing functor, which shows that, up to a canonical equivalence of categories, the category $\mathcal {MF}_{[a,b]}^{\nabla,\Phi}(\mathcal U/W)$ does not depend on the choice of $\Phi$. More explicitly, the equivalence is given as follows. Let $\Psi$ be another lifting of the absolute Frobenius. For any filtered de Rham module $(V,\nabla,\Fil)$, Faltings~\cite[Theorem~2.3]{Fal89} shows, via the Taylor formula, that there is a canonical isomorphism \[\alpha_{\Phi,\Psi}: F^*_{\widehat{\mathcal U},\Phi}\widetilde{V}\simeq F^*_{\widehat{\mathcal U},\Psi}\widetilde{V},\] which is parallel with respect to the connections, satisfies the cocycle condition and induces an equivalence of categories \begin{equation} \xymatrix@R=0mm{ \mathcal {MF}_{[a,b]}^{\nabla,\Psi}(\mathcal U/W)\ar[r] & \mathcal {MF}_{[a,b]}^{\nabla,\Phi}(\mathcal U/W)\\ (V,\nabla,\Fil,\varphi)\ar@{|->}[r] & (V,\nabla,\Fil,\varphi\circ\alpha_{\Phi,\Psi})\\} \end{equation} % \subsubsection{Fontaine-Faltings modules over a global base.} Let $I$ be the index set of all pairs $(\mathcal U_i,\Phi_i)$, where $\mathcal U_i$ is a small affine open subset of $\mathcal X$ and $\Phi_i$ is a lifting of the absolute Frobenius on $\mathcal O_\mathcal X(\mathcal U_i)\otimes_W k$. Recall that the category $\mathcal {MF}_{[a,b]}^{\nabla}(\mathcal X/W)$ is constructed by gluing the categories $\mathcal {MF}_{[a,b]}^{\nabla,\Phi_i}(\mathcal U_i/W)$. Actually, its objects can be described more precisely as below. A Fontaine-Faltings module over $\mathcal X$ of Hodge-Tate weight in $[a,b]$ is a tuple $(V,\nabla,\Fil,\{\varphi_i\}_{i\in I})$ over $\mathcal X$, i.e. a filtered de Rham sheaf $(V,\nabla,\Fil)$ together with $\varphi_i: \widetilde{V}(\mathcal U_i)\otimes_{\Phi_i} \widehat{\mathcal O_\mathcal X(\mathcal U_i)}\rightarrow V(\mathcal U_i)$ such that \begin{itemize} \item[-] $M_i:=(V(\mathcal U_i),\nabla,\Fil,\varphi_i)\in \mathcal {MF}_{[a,b]}^{\nabla,\Phi_i}(\mathcal U_i/W)$; \item[-] for all $i,j\in I$, on the overlap $\mathcal U_i\cap \mathcal U_j$, the local Fontaine-Faltings modules $M_i\mid_{\mathcal U_{i}\cap \mathcal U_j}$ and $M_j\mid_{\mathcal U_{i}\cap \mathcal U_j}$ correspond to each other under the equivalence functor attached to the two liftings $\Phi_i$ and $\Phi_j$. In other words, the following diagram commutes \begin{equation} \xymatrix@C=2cm{ \widetilde{V}(\mathcal U_{ij})\otimes_{\Phi_i} \widehat{\mathcal O_\mathcal X(\mathcal U_{ij})} \ar[r]^{\alpha_{\Phi_i,\Phi_j}} \ar[d]^{\varphi_i} & \widetilde{V}(\mathcal U_{ij})\otimes_{\Phi_j} \widehat{\mathcal O_\mathcal X(\mathcal U_{ij})} \ar[d]^{\varphi_j} \\ V(\mathcal U_{ij})\ar[r]^{\mathrm{id}} & V(\mathcal U_{ij})\\} \end{equation} \end{itemize} Morphisms between Fontaine-Faltings modules are morphisms between the underlying sheaves which locally are morphisms of local Fontaine-Faltings modules. 
More precisely, a morphism $f$ of the underlying sheaves of two Fontaine-Faltings modules over $\mathcal X$ is called a morphism of Fontaine-Faltings modules if and only if $f(\mathcal U_i) \in \mathrm{Mor}\left(\mathcal {MF}_{[a,b]}^{\nabla,\Phi_i}(\mathcal U_i/W)\right)$ for all $i\in I$. Denote by $\mathcal {MF}_{[a,b]}^{\nabla}(\mathcal X/W)$ the category of all Fontaine-Faltings modules over $\mathcal X$ of Hodge-Tate weight in $[a,b]$, and denote by $\mathcal {MF}_{[a,b]}^{\nabla}(X_{n+1}/W_{n+1})$ the subcategory of $\mathcal {MF}_{[a,b]}^{\nabla}(\mathcal X/W)$ consisting of strict $p^n$-torsion Fontaine-Faltings modules over $\mathcal X$ of Hodge-Tate weight in $[a,b]$. \subsection{Inverse Cartier functor}\label{section ICF} For a Fontaine-Faltings module $(V,\nabla,\Fil,\{\varphi_i\}_{i\in I})$, we call $\{\varphi_i\}_i$ the $\varphi$-structure of the Fontaine-Faltings module. In this section, we first recall a global description of the $\varphi$-structure via the inverse Cartier functor over truncated Witt rings constructed by Lan, Sheng, and Zuo~\cite{LSZ13a}. Note that the inverse Cartier functor $C_1^{-1}$ (in the characteristic $p$ case) was introduced in the seminal work of Ogus-Vologodsky \cite{OgVo07}. Here we sketch the explicit construction of $C_1^{-1}$ presented in \cite{LSZ13a}. Let $(E,\theta)$ be a nilpotent Higgs bundle over $X_1$ of exponent $\leq p-1$. Locally we have \begin{itemize} \item[-] $V_i=F_{\mathcal U_{i}}^*(E\mid_{\mathcal U_i})$, \item[-] $\nabla_i = \mathrm{d} + \frac{\mathrm{d} \tilde{F}_{\tilde{\mathcal U}_i}} {[p]} (F_{\mathcal U_{i}}^*\theta\mid_{\mathcal U_i}): V_i\rightarrow V_i\otimes \Omega_{\mathcal U_i}^1$, \item[-] $G_{ij} = \mathrm{exp}(h_{ij}(F_{\mathcal U_{i}}^*\theta\mid_{\mathcal U_i})): V_i\mid_{\mathcal U_{ij}}\rightarrow V_j\mid_{\mathcal U_{ij}}$, \end{itemize} where $F_{\mathcal U_i}$ is the absolute Frobenius on $\mathcal U_i$ and $h_{ij}:F_{\mathcal U_{i,1}}^*\Omega^1_{\mathcal U_{ij}}\rightarrow \mathcal{O}_{\mathcal U_{ij}}$ is the homomorphism given by Deligne-Illusie's lemma~\cite{JJIC02}. These local data $(V_i,\nabla_i)$ can be glued into a global sheaf $V$ with an integrable connection $\nabla$ via the transition maps $\{ G_{ij} \}$ (Theorem 3 in \cite{LSZ12a}). The inverse Cartier functor on $(E,\theta)$ is \[C^{-1}_1(E,\theta):=(V,\nabla).\] \begin{rmk} Note that the inverse Cartier transform $C^{-1}_1$ also has a logarithmic version. When the log structure is given by a simple normal crossing divisor, an explicit construction of the log inverse Cartier functor is given in the Appendix of \cite{LSYZ14}. \end{rmk} As mentioned in the introduction, we need to generalize $C^{-1}_1$ to the inverse Cartier transform over the truncated Witt ring for Higgs bundles over $X_n/W_m(k)$. We briefly recall the construction in Section 4 of \cite{LSZ13a}. \subsubsection{Inverse Cartier functor over truncated Witt rings} Let $S=\mathrm{Spec}(W(k))$ and let $F_S$ be the Frobenius map on $S$. Let $X_{n+1}\supset X_n$ be a $W_{n+1}$-lifting of the smooth proper variety $X_n$. Recall that the functor $C^{-1}_n$ is defined as the composition of $\mathcal C^{-1}_n$ with the base change along $F_S:X_n'=X_n\times_{F_S} S \rightarrow X_n$ (by abuse of notation, we still denote this map by $F_S$). The functor $\mathcal C^{-1}_n$ is defined as the composition of two functors $\mathcal{T}_n$ and $\mathcal{F}_n$. In general, we have the following diagram, whose commutativity follows easily from the construction of these functors. 
\begin{equation}\label{diag:C^{-1}} \xymatrix{ & \mathcal{H}(X_n)\ar[drr]_(0.3){C_n^{-1}}|!{[r];[ddr]}\hole \ar[r]^{F_S^*}\ar[dd]^{\mathcal T_n} & \mathcal{H}(X'_n) \ar[dr]^{\mathcal C_n^{-1}} \ar[dd]_{\mathcal T_n} &\\ \mathrm{MCF}_{p-2}(X_n)\ar[dr]_{\widetilde{(\cdot)}}\ar[ur]^{\overline{\mathrm{Gr}}} &&& \mathrm{MIC}(X_n)\\ &\widetilde{\mathrm{MIC}}(X_n) \ar[r]_{F_S^*} \ar@{..>}[urr]^(0.3){\{F_\mathcal U^*\}_\mathcal U}|!{[r];[uur]}\hole & \widetilde{\mathrm{MIC}}(X'_n)\ar[ur]_{\quad\mathcal F_n=\{F_{\mathcal U/S}^*\}_\mathcal U} &\\ } \end{equation} The categories appearing in the diagram are explained as follows: \begin{itemize} \item $\mathrm{MCF}_{a}(X_n)$ is the category of filtered de Rham sheaves over $X_n$ of level in $[0,a]$. \item $\mathcal{H}(X_n)$ (resp. $\mathcal{H}(X'_n)$) is the category of tuples $(E,\theta,\bar{V},\bar{\nabla},\overline{Fil},\overline{\psi})$, where \begin{itemize} \item[-] $(E,\theta)$ is a graded Higgs module \footnote{A Higgs bundle $(E,\theta)$ is called graded if $E$ can be written as a direct sum of sub-bundles $E^{i}$ with $\theta(E^i)\subset E^{i-1}\otimes\Omega^1$. Obviously, a graded Higgs bundle is also nilpotent.} over $X_n$ (resp. $X_n'=X_n\otimes_\sigma W$) of exponent $\leq p-2$; \item[-] $(\bar{V},\bar{\nabla},\overline{Fil})$ is a filtered de Rham sheaf over $X_{n-1}$ (resp. over $X_{n-1}'$); \item[-] and $\overline{\psi}: \mathrm{Gr}_{\overline{Fil}}(\bar{V},\bar{\nabla}) \simeq (E, \theta)\otimes\mathbb Z/p^{n-1}\mathbb Z$ is an isomorphism of Higgs sheaves over $X_{n-1}$ (resp. $X_{n-1}'$). \end{itemize} \item $\widetilde{\mathrm{MIC}}(X_n)$ (resp. $\widetilde{\mathrm{MIC}}(X'_n)$) is the category of sheaves over $X_n$ (resp. $X'_n$) with an integrable $p$-connection. \item $\mathrm{MIC}(X_n)$ (resp. $\mathrm{MIC}(X'_n)$) is the category of de Rham sheaves over $X_n$ (resp. $X'_n$). \end{itemize} \paragraph{\emph{Functor $\overline{\mathrm{Gr}}$.}} For an object $(V,\nabla,\Fil)$ in $\mathrm{MCF}_{p-2}(X_n)$, the functor $\overline{\mathrm{Gr}}$ is given by \[\overline{\mathrm{Gr}}(V,\nabla,\Fil)=(E,\theta,\overline{V},\overline{\nabla},\overline{Fil},\overline{\psi}),\] where $(E,\theta)=\mathrm{Gr}(V,\nabla,\Fil)$ is the associated graded Higgs sheaf, $(\overline{V},\overline{\nabla},\overline{Fil})$ is the modulo $p^{n-1}$ reduction of $(V,\nabla,\Fil)$ and $\overline{\psi}$ is the canonical identification $\mathrm{Gr}(\overline{V})\cong E\otimes \mathbb Z/p^{n-1}\mathbb Z$.\\ \paragraph{\emph{Faltings' tilde functor $\widetilde{(\cdot)}$.}} For an object $(V,\nabla,\Fil)$ in $\mathrm{MCF}_{p-2}(X_n)$, we denote by $\widetilde{(V,\nabla,\Fil)}$ the quotient $\bigoplus\limits_{i=0}^{p-2}\Fil^iV/\sim$, where $x\sim py$ for any $x\in\Fil^iV$, with $y$ the image of $x$ under the natural inclusion $\Fil^iV\hookrightarrow\Fil^{i-1}V$. \paragraph{\emph{The construction of the functor $\mathcal{T}_n$.}} Let $(E,\theta,\bar{V},\bar{\nabla},\overline{Fil},\psi)$ be an object in $\mathcal{H}(X_n)$ (resp. $\mathcal{H}(X'_n)$). Locally on an affine open subset $\mathcal U\subset \mathcal X$ (resp. $\mathcal U\subset \mathcal X'$), there exists a filtered de Rham sheaf $(V_\mathcal U,\nabla_\mathcal U,\Fil_\mathcal U)$ (Lemma 4.6 in~\cite{LSZ13a}) such that \begin{itemize} \item[-] $(\bar{V},\bar{\nabla},\overline{Fil})\mid_\mathcal U \simeq (V_\mathcal U,\nabla_\mathcal U,\Fil_\mathcal U) \otimes \mathbb Z/p^{n-1}\mathbb Z$; \item[-] $(E,\theta)\mid_\mathcal U \simeq \mathrm{Gr}(V_\mathcal U,\nabla_\mathcal U,\Fil_\mathcal U)$. 
\end{itemize} The tilde functor associates to $(V_\mathcal U,\nabla_\mathcal U,\Fil_\mathcal U)$ a sheaf with $p$-connection over $\mathcal U$. By gluing these sheaves with $p$-connection over all $\mathcal U$ (Lemma 4.10 in~\cite{LSZ13a}), one gets a global sheaf with $p$-connection over $X_n$ (resp. $X'_n$). Denote it by \[\mathcal T_n(E,\theta,\bar{V},\bar{\nabla},\overline{Fil},\psi).\] \paragraph{\emph{The construction of the functor $\mathcal F_n$.}} For a small affine open subset $\mathcal U$ of $\mathcal X$, there exists an endomorphism $F_\mathcal U$ of $\mathcal U$ which lifts the absolute Frobenius on $\mathcal U_k$ and is compatible with the Frobenius map $F_S$ on $S=\mathrm{Spec}(W(k))$. Thus there is a map $F_{\mathcal U/S}:\mathcal U\rightarrow \mathcal U'=\mathcal U\times_{F_S} S$ satisfying $F_\mathcal U= F_S\circ F_{\mathcal U/S}$. \begin{equation} \xymatrix{ \mathcal U \ar@/^15pt/[drr]^{F_\mathcal U} \ar[dr]^{F_{\mathcal U/S}} \ar@/_15pt/[ddr] &&\\ & \mathcal U' \ar[r]^{F_S}\ar[d] & \mathcal U\ar[d]\\ & S\ar[r]^{F_S} & S\\} \end{equation} Let $(\widetilde{V}',\widetilde{\nabla}')$ be an object in $\widetilde{\mathrm{MIC}}(X'_n)$. Locally on $\mathcal U$, applying the functor $F_{\mathcal U/S}^*$, we get a de Rham sheaf over $\mathcal U$ \[F_{\mathcal U/S}^*(\widetilde{V}'\mid_{\mathcal U'},\widetilde{\nabla}'\mid_{\mathcal U'}).\] By the Taylor formula, up to a canonical isomorphism, it does not depend on the choice of $F_\mathcal U$. In particular, on the overlap of two small affine open subsets, there is a canonical isomorphism between the two de Rham sheaves. By gluing these isomorphisms, one gets a de Rham sheaf over $X_n$, which we denote by \[\mathcal F_n(\widetilde{V}',\widetilde{\nabla}').\] \subsection{Global description of the $\varphi$-structure in Fontaine-Faltings modules (via the inverse Cartier functor).}\label{section GDphiFFM} Let $(V,\nabla,\Fil)\in \mathrm{MCF}_{p-2}(X_n)$ be a filtered de Rham sheaf over $X_n$ of level in $[0,p-2]$. From the commutativity of diagram~(\ref{diag:C^{-1}}), for any $i\in I$, one has \begin{equation} \begin{split} C_n^{-1}\circ\overline{\mathrm{Gr}}(V,\nabla,\Fil)\mid_{\mathcal U_i} &= \mathcal F_n\circ\mathcal T_n\circ F_S^*\circ \overline{\mathrm{Gr}}(V,\nabla,\Fil)\mid_{\mathcal U_i}\\ &=\mathcal F_n\circ F_S^* (\widetilde{V},\widetilde{\nabla})\mid_{\mathcal U_i}\\ &\simeq F_{\mathcal U_i}^*(\widetilde{V}\mid_{\mathcal U_i},\widetilde{\nabla}\mid_{\mathcal U_i}). \end{split} \end{equation} Here $F_{\mathcal U_i}=\mathrm{Spec}(\Phi_i): \mathcal U_i\rightarrow \mathcal U_i$ is the lifting of the absolute Frobenius on $\mathcal U_{i,k}$. 
Since $\mathcal F_n$ is glued using the Taylor formula, for any $i,j\in I$ one has the following commutative diagram \begin{equation} \xymatrix{ F_{\mathcal U_i}^*(\widetilde{V}\mid_{\mathcal U_i\cap \mathcal U_j},\widetilde{\nabla}\mid_{\mathcal U_i\cap \mathcal U_j}) \ar[r]^(0.47){\sim} \ar[d]^{\alpha_{\Phi_i,\Phi_j}} & C^{-1}_n\circ \overline{\mathrm{Gr}}(V,\nabla,\Fil) \mid_{\mathcal U_i\cap \mathcal U_j} \ar@{=}[d]\\ F_{\mathcal U_j}^*(\widetilde{V}\mid_{\mathcal U_i\cap \mathcal U_j},\widetilde{\nabla}\mid_{\mathcal U_i\cap \mathcal U_j}) \ar[r]^(0.47){\sim} & C^{-1}_n\circ \overline{\mathrm{Gr}}(V,\nabla,\Fil)\mid_{\mathcal U_i\cap \mathcal U_j}\\ } \end{equation} Thus giving a system of compatible $\varphi$-structures (for all $i\in I$) \[\varphi_{i}: F_{\mathcal U_i}^*(\widetilde{V}\mid_{\mathcal U_i},\widetilde{\nabla}\mid_{\mathcal U_i}) \rightarrow (V\mid_{\mathcal U_i},\nabla\mid_{\mathcal U_i})\] is equivalent to giving an isomorphism \[\varphi:C^{-1}_n\circ \overline{\mathrm{Gr}}(V,\nabla,\Fil) \rightarrow (V,\nabla).\] In particular, we have the following result. \begin{lem}[Lemma 5.6 in~\cite{LSZ13a}] \label{lem:anotherDiscribtionFM} Giving a Fontaine-Faltings module in $\mathcal{MF}_{[0,p-2]}^\nabla(\mathcal X/W)$ is equivalent to giving a tuple $(V,\nabla,\Fil,\varphi)$, where \begin{itemize} \item[-] $(V,\nabla,\Fil)\in \mathrm{MCF}_{p-2}(X_n)$ is a filtered de Rham sheaf over $X_n$ of level in $[0,p-2]$, for some positive integer $n$; \item[-] $\varphi: C_n^{-1}\circ \overline{\mathrm{Gr}} (V,\nabla,\Fil)\rightarrow (V,\nabla)$ is an isomorphism of de Rham sheaves. \end{itemize} \end{lem} \subsection{Fontaine-Faltings modules with endomorphism structure.}\label{section FFMES} Let $f$ be a positive integer. We call $(V,\nabla,\Fil,\varphi,\iota)$ a Fontaine-Faltings module over $\mathcal X$ with endomorphism structure of $W(\mathbb{F}_{p^f})$ whose Hodge-Tate weights lie in $[a,b]$, if $(V,\nabla,\Fil,\varphi)$ is an object in $\mathcal {MF}_{[a,b]}^{\nabla}(\mathcal X/W)$ and \[\iota: W(\mathbb{F}_{p^f})\rightarrow \mathrm{End}_{\mathcal{MF}}(V,\nabla,\Fil,\varphi)\] is a continuous ring homomorphism. We call $\iota$ an endomorphism structure of $W(\mathbb{F}_{p^f})$ on $(V,\nabla,\Fil,\varphi)$. Denote by $\mathcal {MF}_{[a,b],f}^{\nabla}(\mathcal X/W)$ the category of Fontaine-Faltings modules with endomorphism structure of $W(\mathbb{F}_{p^f})$ whose Hodge-Tate weights lie in $[a,b]$, and denote by $\mathcal {MF}_{[0,p-2],f}^{\nabla}(X_{n+1}/W_{n+1})$ the subcategory of $\mathcal {MF}_{[0,p-2],f}^{\nabla}(\mathcal X/W)$ consisting of strict $p^n$-torsion objects. \begin{lem}\label{lem:anotherDiscribtionFMwithEnd} Assume $f$ is a positive integer with $\mathbb{F}_{p^f}\subset k$. Then giving an object in $\mathcal {MF}_{[0,p-2],f}^{\nabla}(\mathcal X/W)$ is equivalent to giving $f$ ordered objects \[ (V_i,\nabla_i,\Fil_i)\in \mathrm{MCF}_{p-2}(X_n), \qquad i=0,1,\cdots,f-1\] (for some $n\in \mathbb N$) together with isomorphisms of de Rham sheaves \[\varphi_i:C_n^{-1}\circ\overline{\mathrm{Gr}}(V_i,\nabla_i,\Fil_i) \rightarrow (V_{i+1},\nabla_{i+1}), \quad \text{ for } 0\leq i\leq f-2,\] and \[\varphi_{f-1}:C_n^{-1}\circ\overline{\mathrm{Gr}}(V_{f-1},\nabla_{f-1},\Fil_{f-1}) \rightarrow (V_{0},\nabla_{0}).\] \end{lem} \begin{proof} Let $(V,\nabla,\Fil,\varphi,\iota)$ be an object in $\mathcal {MF}_{[0,p-2],f}^{\nabla}(\mathcal X/W)$. 
Let $\sigma$ be the Frobenius map on $W(\mathbb{F}_{p^f})$ and let $\xi$ be a generator of $W(\mathbb{F}_{p^f})$ as a $\mathbb Z_p$-algebra. Then $\iota(\xi)$ is an endomorphism of the Fontaine-Faltings module $(V,\nabla,\Fil,\varphi)$. Since $\mathbb{F}_{p^f}\subset k$, all conjugates of $\xi$ are of the form $\sigma^i(\xi)$ and are contained in $W=W(k)$. The filtered de Rham sheaf $(V,\nabla,\Fil)$ can be decomposed into eigenspaces \[(V,\nabla,\Fil)=\bigoplus_{i=0}^{f-1}(V_i,\nabla_i,\Fil_i),\] where $(V_i,\nabla_i,\Fil_i)=(V,\nabla,\Fil)^{\iota(\xi)=\sigma^i(\xi)}$ is the $\sigma^i(\xi)$-eigenspace of $\iota(\xi)$. Applying $C_n^{-1}\circ\overline{\mathrm{Gr}}$ to both sides, we get \[C_n^{-1}\circ\overline{\mathrm{Gr}}(V,\nabla,\Fil)=\bigoplus_{i=0}^{f-1}C_n^{-1}\circ\overline{\mathrm{Gr}}(V_i,\nabla_i,\Fil_i).\] Comparing the $\sigma^{i+1}(\xi)$-eigenspaces of $\iota(\xi)$ on both sides of \[\varphi:C_n^{-1}\circ\overline{\mathrm{Gr}}(V,\nabla,\Fil)\simeq (V,\nabla),\] one gets the restricted isomorphisms \[\varphi_i:C_n^{-1}\circ\overline{\mathrm{Gr}}(V_i,\nabla_i,\Fil_i) \rightarrow (V_{i+1},\nabla_{i+1}), \quad \text{for all } 0\leq i\leq f-2,\] and \[\varphi_{f-1}:C_n^{-1}\circ\overline{\mathrm{Gr}}(V_{f-1},\nabla_{f-1},\Fil_{f-1}) \rightarrow (V_{0},\nabla_{0}).\] Conversely, we can construct the Fontaine-Faltings module with endomorphism structure in the obvious way. \end{proof} \subsection{Twisted Fontaine-Faltings modules with endomorphism structure.}\label{section TFFMES} Let $L_n$ be a line bundle over $X_n$. Then there is a natural connection $\nabla_{\mathrm{can}}$ on $L_n^{p^{n}}$, by 5.1.1 in~\cite{KaNi70}. Tensoring with $(L_n^{p^{n}}, \nabla_{\mathrm{can}})$ induces a self-equivalence of the category of de Rham bundles over $X_n$. \begin{defi} An $L_n$-twisted Fontaine-Faltings module over $X_n$ with endomorphism structure of $W_n(\mathbb F_{p^f})$ whose Hodge-Tate weights lie in $[a,b]$ is a tuple consisting of the following data: \begin{itemize} \item[-] for $0\leq i\leq f-1$, a filtered de Rham bundle $(V_i,\nabla_i,\Fil_i)$ over $X_n$ of level in $[a,b]$; \item[-] for $0\leq i\leq f-2$, an isomorphism of de Rham sheaves \[\varphi_i: \mathcal C^{-1}_n\circ \overline{\mathrm{Gr}} (V_i,\nabla_i,\Fil_i)\rightarrow (V_{i+1},\nabla_{i+1});\] \item[-] an isomorphism of de Rham sheaves \[\varphi_{f-1}: \left(\mathcal C^{-1}_n\circ \overline{\mathrm{Gr}}(V_{f-1},\nabla_{f-1},\Fil_{f-1})\right) \otimes(L_n^{p^{n}},\nabla_{\mathrm{can}}) \rightarrow (V_0,\nabla_0).\] \end{itemize} We use $(V_i,\nabla_i,\Fil_i,\varphi_i)_{0\leq i<f}$ to denote the $L_n$-twisted Fontaine-Faltings module and use $\mathcal {TMF}_{[a,b],f}^{\nabla}(X_{n+1}/W_{n+1})$ to denote the category of all twisted Fontaine-Faltings modules over $X_n$ with endomorphism structure of $W_n(\mathbb F_{p^f})$ whose Hodge-Tate weights lie in $[a,b]$. \end{defi} A morphism between two objects $(V_i,\nabla_i,\Fil_i,\varphi_i)_{0\leq i<f}$ and $(V'_i,\nabla'_i,\Fil'_i,\varphi'_i)_{0\leq i<f}$ is an $f$-tuple $(g_0,g_1,\cdots,g_{f-1})$ of morphisms of filtered de Rham sheaves \[g_i:(V_i,\nabla_i,\Fil_i)\rightarrow (V'_i,\nabla'_i,\Fil'_i), \quad i=0,1,\cdots,f-1\] satisfying \[g_{i+1}\circ \varphi_i=\varphi_i'\circ \left(\mathcal C^{-1}_n\circ\mathrm{Gr}(g_i)\right), \qquad \text{for } 0\leq i\leq f-2,\] and \[g_{0}\circ \varphi_{f-1}=\varphi_{f-1}'\circ \left(\left(\mathcal C^{-1}_n\circ\mathrm{Gr}(g_{f-1})\right)\otimes \mathrm{id}_{L_n^{p^n}}\right).\] \begin{rmk} $i)$. 
By Lemma~\ref{lem:anotherDiscribtionFM}, giving an object in $\mathcal{TMF}_{[a,b],1}^{\nabla}(X_{n}/W_{n})$ with $L_n=\mathcal O_{X_n}$ is equivalent to giving a strict $p^n$-torsion Fontaine-Faltings module over $X_n$ whose Hodge-Tate weights lie in $[a,b]$.\\ $ii)$. Suppose $\mathbb F_{p^f}\subset k$. By Lemma~\ref{lem:anotherDiscribtionFMwithEnd}, giving an object in $\mathcal{TMF}_{[a,b],f}^{\nabla}(X_{n}/W_{n})$ with $L_n=\mathcal O_{X_n}$ is equivalent to giving a strict $p^n$-torsion Fontaine-Faltings module over $X_n$ with endomorphism structure of $W_n(\mathbb F_{p^f})$ whose Hodge-Tate weights lie in $[a,b]$.\\ \end{rmk} \paragraph{\emph{Local trivialization.}} Let $j\in I$. Locally on the small affine open set $\mathcal U_j$ (with $R_j=\mathcal O_\mathcal X(\mathcal U_j)$), we choose and fix a lifting $\Phi_j:\widehat{R}_j\rightarrow \widehat{R}_j$ and a trivialization of the line bundle $L_n$ \[ \tau_j:\mathcal O_{X_n}(\mathcal U_j) \simeq L_n(\mathcal U_j).\] It induces a trivialization of de Rham bundles $\tau_j^{p^n}: (\mathcal O_{X_n}(\mathcal U_j),\mathrm{d}) \simeq (L_n^{p^{n}}(\mathcal U_j),\nabla_{\mathrm{can}})$. Let $M=(V_i,\nabla_i,\Fil_i,\varphi_i)_{0\leq i<f}\in \mathcal {TMF}_{[a,b],f}^{\nabla}(X_{n+1}/W_{n+1})$ be an $L_n$-twisted Fontaine-Faltings module over $X_n$ with endomorphism structure of $W_n(\mathbb F_{p^f})$ whose Hodge-Tate weights lie in $[a,b]$. Then one gets a local Fontaine-Faltings module over $R_j$ with endomorphism structure of $W_n(\mathbb F_{p^f})$ whose Hodge-Tate weights lie in $[a,b]$ \[M(\tau_j)=\Big(\oplus V_i(\mathcal U_j), \oplus \nabla_i,\oplus \Fil_i, \sum_{i=0}^{f-2}\varphi_i+\varphi_{f-1}\circ(\mathrm{id}\otimes\tau_j^{p^n})\Big).\] We call $M(\tau_j)$ the \emph{trivialization of $M$ on $\mathcal U_j$ via $\tau_j$}. \paragraph{\emph{Logarithmic version.}} Finally, let us mention that everything in this section extends to the logarithmic context. Let $\mathcal X$ be a smooth and proper scheme over $W$ and let $\mathcal X^o$ be the complement of a simple normal crossing divisor $\D\subset \mathcal X$ relative to $W$. Similarly, one constructs the category $\mathcal {TMF}_{[a,b],f}^{\nabla}(X^o_{n+1}/W_{n+1})$ of strict $p^n$-torsion twisted logarithmic Fontaine-Faltings modules (with pole along $\D\times W_n\subset \mathcal X\times W_n$) with endomorphism structure of $W_n(\mathbb{F}_{p^f})$ whose Hodge-Tate weights lie in $[a,b]$. \section{Projective Fontaine-Laffaille-Faltings functor}\label{section PFLFF} \subsection{The Fontaine-Laffaille-Faltings $\mathbb D$-functor}\label{section FDF} \paragraph{\emph{The functor $\mathbb{D}_\Phi$.}} Let $R$ be a small affine algebra over $W=W(k)$ with a $\sigma$-linear map $\Phi:\widehat{R}\rightarrow\widehat{R}$ which lifts the absolute Frobenius of $R/pR$. If $\Phi$ happens to be \'etale in characteristic $0$, Faltings (page 36 of \cite{Fal89}) constructed a map $\kappa_\Phi:\widehat{R}\rightarrow B^+(\widehat{R})$ which respects the Frobenius lifts. Thus the following diagram commutes \begin{equation} \xymatrix@C=2cm{\widehat{R} \ar[r]^{\kappa_\Phi} \ar[d]^{\Phi} & B^+(\widehat{R}) \ar[d]^{\Phi_B}\\ \widehat{R} \ar[r]^{\kappa_{\Phi}} & B^+(\widehat{R}).\\ } \end{equation} Here $\Phi_B$ is the Frobenius on $B^+(\widehat{R})$. Denote $D=B^+(\widehat R)[1/p]/B^+(\widehat R)$, which is equipped with the natural $\varphi$-structure and filtration. Let $M=(V,\nabla,\Fil,\varphi)$ be an object in $\mathcal {MF}_{[a,b]}^{\nabla,\Phi}(\mathcal U/W)$. 
Faltings constructed a functor $\mathbb{D}_{\Phi}$ by \[\mathbb{D}_{\Phi}(M)=\mathrm{Hom}(V\otimes_{\kappa_{\Phi}} B^+(\widehat R),D),\] where the homomorphisms are $B^+(\widehat R)$-linear and respect the filtrations and the $\varphi$-structures. The action of $\mathrm{Gal}(\widehat{\overline{R}}/\widehat{R})$ on the tensor product $V\otimes_{\kappa_\Phi} B^+(\widehat{R})$ is defined via the connection on $V$, which commutes with the $\varphi$'s and hence induces an action of $\mathrm{Gal}(\widehat{\overline{R}}/\widehat{R})$ on $\mathbb{D}_{\Phi}(M)$. Since $V$ is a $p$-power torsion finitely generated $R$-module, $\mathbb{D}_{\Phi}(M)$ is a finite $\mathbb Z_p$-module. Faltings shows that the functor $\mathbb{D}_\Phi$ from $\mathcal {MF}_{[a,b]}^{\nabla,\Phi}(\mathcal U/W)$ to $\mathrm{Rep}_{\mathbb Z_p}^{\mathrm{finite}}\left(\mathrm{Gal}\left(\widehat{\overline{R}}/\widehat{R}\right)\right)$, the category of finite $\mathbb Z_p$-representations of $\mathrm{Gal}\left(\widehat{\overline{R}}/\widehat{R}\right)$, is fully faithful and its image is closed under subobjects and quotients. \paragraph{\emph{The functor $\mathbb{D}$.}} Recall that $I$ is the index set of all pairs $(\mathcal U_i,\Phi_i)$ of a small affine open subset $\mathcal U_i$ of $\mathcal X$ and a lifting $\Phi_i$ of the absolute Frobenius on $\mathcal O_\mathcal X(\mathcal U_i)\otimes_W k$. For each $i\in I$, the functor $\mathbb D_{\Phi_i}$ associates to any Fontaine-Faltings module over $\mathcal X$ a compatible system of \'etale sheaves on $\widehat{\mathcal U}_{i,K}$ (the generic fiber of $\widehat{\mathcal U}_i$). By gluing and using the results in EGA3, one obtains a locally constant sheaf on $X_K$ and a globally defined functor $\mathbb D$. In the following, we give a slightly different way to construct the functor $\mathbb D$. Let $J$ be a finite subset of the index set $I$ such that $\{\mathcal U_j\}_{j\in J}$ forms a covering of $\mathcal X$. Denote $U_j=(\mathcal U_j)_K$ and choose a geometric point $\overline{x}$ of $X_K$ contained in $\bigcap\limits_{j\in J} U_j$. Let $(V,\nabla,\Fil,\{\varphi_i\}_{i\in I})$ be a Fontaine-Faltings module over $\mathcal X$. For each $j\in J$, the functor $\mathbb D_{\Phi_j}$ gives us a finite $\mathbb Z_p$-representation of $\pi^\text{\'et}_1(\widehat{U}_j,\overline{x})$. Recall that, up to a canonical isomorphism, the functor $\mathbb D_\Phi$ does not depend on the choice of $\Phi$. In particular, for all $j_1, j_2\in J$, there is a natural isomorphism of $\pi^\text{\'et}_1(\widehat{U}_{j_1}\cap \widehat{U}_{j_2},\overline{x})$-representations \[\mathbb D(V(\mathcal U_{j_1}\cap \mathcal U_{j_2}),\nabla,\Fil,\varphi_{j_1})\simeq \mathbb D(V(\mathcal U_{j_1}\cap \mathcal U_{j_2}),\nabla,\Fil,\varphi_{j_2}).\] By Theorem~\ref{Mainthm:rep}, the representations $\mathbb D(V(\mathcal U_{j}),\nabla,\Fil,\varphi_{j})$ descend to a $\mathbb Z_p$-representation of $\pi^\text{\'et}_1(X_K,\overline{x})$. Up to a canonical isomorphism, this representation does not depend on the choice of $J$ and $\overline{x}$. This representation is $\mathbb D(V,\nabla,\Fil,\{\varphi_i\}_{i\in I})$, and this is our construction of the Fontaine-Laffaille-Faltings $\mathbb D$-functor. \begin{thm}[Faltings]\label{faltingslocal} The functor $\mathbb D$ induces an equivalence of the category $\mathcal {MF}_{[0,p-2]}^\nabla(\mathcal X/W)$ with the full subcategory of finite $\mathbb Z_p[[\pi^\text{\'et}_1(X_K)]]$-modules whose objects are dual-crystalline representations. This subcategory is closed under sub-objects and quotients. 
\end{thm} \paragraph{\emph{The extra $W(\mathbb F_{p^f})$-structure.}} Suppose $\mathbb{F}_{p^f}\subset k$. Let $(V,\nabla,\Fil,\varphi,\iota)$ be an object in $\mathcal {MF}_{[0,p-2],f}^{\nabla}(X_{n+1}/W_{n+1})$. Since the functor $\mathbb D$ is fully faithful, we get an extra $W(\mathbb F_{p^f})$-structure on $\mathbb D(V,\nabla,\Fil,\varphi)$ via the composition \[W(\mathbb{F}_{p^f})\overset{\iota}{\longrightarrow} \mathrm{End}_{\mathcal{MF}} (V,\nabla,\Fil,\varphi) \overset{\sim}{\longrightarrow} \mathrm{End}(\mathbb{D}(V,\nabla,\Fil,\varphi)).\] Since $V$ is strictly $p^n$-torsion, the $W_n(\mathbb F_{p^f})$-module $\mathbb{D}(V,\nabla,\Fil,\varphi)$ is free with a linear action of $\pi^\text{\'et}_1(X_K)$. We write this $W_n(\mathbb{F}_{p^f})$-representation as \[\mathbb D(V,\nabla,\Fil,\varphi,\iota).\] \subsection{The category of projective representations}\label{section CPR} \paragraph{\emph{The categories $\mathrm{Rep}_{\mathcal O}(G)$ and $\mathrm{Rep}^{\mathrm{free}}_{\mathcal O}(G)$.}} Let $\mathcal O$ be a commutative topological ring with identity and let $G$ be a topological group. In this section, all morphisms of topological groups and all group actions are assumed to be continuous. Denote by $\mathrm{Rep}_{\mathcal O}(G)$ the category of all finitely generated $\mathcal O$-modules with an action of $G$ and denote by $\mathrm{Rep}^{\mathrm{free}}_{\mathcal O}(G)$ the subcategory of all free $\mathcal O$-modules of finite rank with an action of $G$. \paragraph{\emph{The categories $\mathrm{PRep}_{\mathcal O}(G)$ and $\mathrm{PRep}^{\mathrm{free}}_{\mathcal O}(G)$.}} For a finitely generated $\mathcal O$-module ${\mathbb V}$, we denote by $\mathrm{PGL}_{\mathcal O}({\mathbb V})$ the quotient group $\mathrm{GL}({\mathbb V})/\mathcal O^\times$. If $\rho: G\rightarrow \mathrm{PGL}_{\mathcal O}({\mathbb V})$ is a group morphism, then there exists a group action of $G$ on the quotient set ${\mathbb V}/\mathcal O^\times$ defined by $g([v]):=[\rho(g)v]$ for any $g\in G$ and $v\in \mathbb V$. In this case, we call the pair $({\mathbb V},\rho)$ a \emph{projective $\mathcal O$-representation of $G$}. A morphism of projective $\mathcal O$-representations from $({\mathbb V}_1,\rho_1)$ to $({\mathbb V}_2,\rho_2)$ is an $\mathcal O$-linear morphism $f:{\mathbb V}_1\rightarrow {\mathbb V}_2$ such that the quotient map from ${\mathbb V_1}/{\mathcal O}^\times$ to ${\mathbb V_2}/{\mathcal O}^\times$ induced by $f$ is a morphism of $G$-sets. Denote by $\mathrm{PRep}_{\mathcal O}(G)$ the category of finite projective $\mathcal O$-representations of $G$, and denote by $\mathrm{PRep}^{\mathrm{free}}_{\mathcal O}(G)$ the subcategory of those with $\mathbb V$ a free $\mathcal O$-module. \subsection{Gluing representations and projective representations}\label{section GRPR} Let $S$ be an irreducible scheme. We fix a geometric point $s$ of $S$. In this section, ${U}$ is an open subset of $S$ containing $s$. \begin{prop}[SGA$1$ \cite{SGA1}, see also Proposition 5.5.4 in~\cite{Sza09}] \label{surjectivity} The open immersion $U\to S$ induces a surjective morphism of fundamental groups \[\rho^S_{U}:\pi^\text{\'et}_1({U}, {s}) \twoheadrightarrow \pi^\text{\'et}_1(S, {s}).\] \end{prop} Thus, there is a natural restriction functor $\mathrm{res}$ from the category of $\pi^\text{\'et}_1(S, {s})$-sets to the category of $\pi^\text{\'et}_1({U}, {s})$-sets, which is given by \[\mathrm{res}(\rho)=\rho\circ \rho^S_{U}.\] \begin{cor}\label{Cor:compHom} The restriction functor $\mathrm{res}$ is fully faithful. 
\end{cor} The proof of this corollary follows directly from the surjectivity proved in Proposition \ref{surjectivity} and Lemma 52.4.1 in \cite[Tag 0BN6]{stacks-project}. Let $\widetilde{S}$ be a finite \'etale covering of $S$. Then there is a natural action of $\pi^\text{\'et}_1(S,s)$ on the fiber $F_s(\widetilde{S})$. \begin{prop}\label{main:prop} $i).$ The fiber functor $F_s$ induces an equivalence from the category of finite \'etale coverings of $S$ to the category of finite $\pi^\text{\'et}_1(S,{s})$-sets. $ii).$ The functor $F_{s}$ is compatible with the restriction of coverings to an open subset ${U}\subset S$ and the restriction of $\pi^\text{\'et}_1(S, {s})$-sets to $\pi^\text{\'et}_1({U},s)$-sets via $\rho_{U}^S$. \end{prop} See Proposition 52.3.10 in \cite[Tag 0BN6]{stacks-project} for a proof of the first statement. The second one follows from the very definitions; a proof can be found in 5.1 of~\cite{Mur67}. As a consequence, one has the following result, which should be well known to experts. \begin{cor}[Rigidity]\label{lem:rigid} The restriction functor $(\cdot)\mid_{U}$ from the category of finite \'etale coverings of $S$ to the category of finite \'etale coverings of $U$ is fully faithful. Suppose that there is an isomorphism $f_{U}: \widetilde{S'}\mid_{U} \rightarrow \widetilde{S}\mid_{U}$ of finite \'etale coverings of ${U}$, for some finite \'etale coverings $\widetilde{S}$ and $\widetilde{S'}$ of $S$. Then there is a unique isomorphism $f_S: \widetilde{S'} \rightarrow \widetilde{S}$ of finite \'etale coverings of $S$, such that $f_{U}=f_S\mid_{U}$. \end{cor} In the following, we fix a finite index set $J$ and an open covering $\{{U}_j\}_{j\in J}$ of $S$ with $s\in \bigcap\limits_j {U}_j$. Then for any $j\in J$, the inclusion map ${U}_j\rightarrow S$ induces a surjective group morphism of fundamental groups \[\tau_j:\pi^\text{\'et}_1({U}_j,s)\twoheadrightarrow \pi^\text{\'et}_1(S,s).\] Denote ${U}_{J_1}:={U}_{j_1 j_2\cdots j_r}:={U}_{j_1}\cap {U}_{j_2}\cap\cdots \cap {U}_{j_r}$ for any $J_1=\{j_1,j_2,\cdots,j_r\}\subset J$. Similarly, for any $J_1 \subset J_2 \subset J$, we have a surjective group morphism of fundamental groups \[\tau_{J_2}^{J_1}:\pi^\text{\'et}_1({U}_{J_2},s)\twoheadrightarrow \pi^\text{\'et}_1({U}_{J_1},s).\] Now we can view every $\pi^\text{\'et}_1({U}_{J_1},s)$-set as a $\pi^\text{\'et}_1({U}_{J_2},s)$-set through this group morphism. Since we already have the rigidity of finite \'etale coverings, we can use it to glue these local $\pi_1$-sets together. \begin{thm}\label{Mainthm:rep} Let $(\Sigma_j,\rho_j)$ be a finite $\pi^\text{\'et}_1({U}_j,s)$-set for each $j\in J$. Suppose that for each pair $i,j\in J$ there exists an isomorphism of $\pi^\text{\'et}_1({U}_{ij},s)$-sets $\eta_{ij}:\Sigma_i\simeq \Sigma_j$. Then every $\Sigma_j$ descends to a $\pi^\text{\'et}_1(S,s)$-set $(\Sigma_j,\widehat{\rho_j})$ uniquely. Moreover, the image of $\rho_j$ equals that of $\widehat{\rho_j}$. \end{thm} \subsection{Comparing representations associated to local Fontaine-Faltings modules underlying isomorphic filtered de Rham sheaves}\label{section LFFMHDRF} In this section, we compare several representations associated to local Fontaine-Faltings modules underlying isomorphic filtered de Rham sheaves. To do so, we first introduce a local Fontaine-Faltings module, which corresponds to a $W_n(\mathbb{F}_{p^f})$-character of the local fundamental group. We will then use this character to measure the difference between the associated representations. 
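To fix ideas, we record the simplest instance of the construction given below (the case $f=1$, with the notation $R_n=R/p^nR$ and the lifting $\Phi$ introduced in the next paragraph): a unit $r\in\widehat{R}^\times$ gives the rank-one Fontaine-Faltings module \[M_r=\big(R_n e_0,\ \nabla(e_0)=0,\ \Fil\ \text{trivial},\ \varphi(\widetilde{e}_0\otimes_\Phi 1)=r^{p^n}e_0\big),\] whose image under $\mathbb D_\Phi$ is free of rank one over $\mathbb Z/p^n\mathbb Z$, i.e. a character of $\mathrm{Gal}(\widehat{\overline{R}}/\widehat{R})$; this is the character alluded to above, and the general case replaces $\mathbb Z/p^n\mathbb Z$ by $W_n(\mathbb{F}_{p^f})$.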
Let $R$ be a small affine algebra over $W(k)$ and denote $R_n=R/p^nR$ for all $n\geq 1$. Fix a lifting $\Phi:\widehat{R}\rightarrow \widehat{R}$ of the absolute Frobenius on $R/pR$. Recall that $\kappa_\Phi:\widehat{R}\rightarrow B^+(\widehat{R})$ is the lifting, with respect to $\Phi$, of the isomorphism $\widehat{R}\simeq B^+(\widehat{R})/F^1B^+(\widehat{R})$. Under such a lifting, the Frobenius $\Phi_B$ on $B^+(\widehat{R})$ extends $\Phi$ on $\widehat{R}$. \paragraph{\emph{The element $a_{n,r}$.}} Let $f$ be a positive integer. For any $r\in \widehat{R}^\times$, we construct a Fontaine-Faltings module of rank $f$ as follows. Let \[V=R_n e_0\oplus R_n e_1\oplus \cdots \oplus R_n e_{f-1}\] be a free $R_n$-module of rank $f$. The integrable connection $\nabla$ on $V$ is defined by the formula \[\nabla(e_i)=0,\] and the filtration $\Fil$ on $V$ is the trivial one. Applying the tilde functor and twisting by the map $\Phi$, one gets \[\widetilde{V}\otimes_\Phi \widehat{R}= R_n \cdot (\widetilde{e}_0\otimes_\Phi 1) \oplus R_n \cdot (\widetilde{e}_1\otimes_\Phi 1) \oplus \cdots \oplus R_n \cdot (\widetilde{e}_{f-1}\otimes_\Phi 1),\] where the connection on $\widetilde{V}\otimes_\Phi \widehat{R}$ is determined by \[\nabla(\widetilde{e}_i\otimes_\Phi 1)=0.\] Denote by $\varphi$ the $R_n$-linear map from $(\widetilde{V}\otimes_\Phi \widehat{R},\nabla)$ to $(V,\nabla)$ \[\varphi(\widetilde{e}_0\otimes_\Phi 1,\widetilde{e}_1\otimes_\Phi 1,\cdots,\widetilde{e}_{f-1}\otimes_\Phi 1)=(e_0,e_1,\cdots,e_{f-1}) \left(\begin{array}{ccccc} 0 & & & & r^{p^n}\\ 1 & 0 & & & \\ & 1 & 0 & & \\ & & \ddots & \ddots & \\ & & & 1 & 0 \\ \end{array}\right).\] The map $\varphi$ is parallel since $\mathrm{d}(r^{p^n})\equiv 0\pmod{p^n}$. By Lemma~\ref{lem:anotherDiscribtionFM}, the tuple $(V,\nabla,\Fil,\varphi)$ forms a Fontaine-Faltings module. Applying the Fontaine-Laffaille-Faltings functor $\mathbb D_\Phi$, one gets a finite $\mathbb Z_p$-representation of $\mathrm{Gal}(\widehat{\overline{R}}/\widehat{R})$, which is a free $\mathbb Z/p^n\mathbb Z$-module of rank $f$. \begin{lem}\label{a_n,r} Let $n$ and $f$ be two positive integers and let $r$ be an invertible element in $R$. Then there exists an $a_{n,r} \in B^+(\widehat{R})^\times$ such that \begin{equation}\label{a_nr} \Phi_B^f(a_{n,r})\equiv \kappa_{\Phi}(r)^{p^n}\cdot a_{n,r} \pmod{p^n}. \end{equation} \end{lem} \begin{proof} Since $\mathbb{D}_{\Phi}(V,\nabla,\Fil,\varphi)$ is free over $\mathbb Z/p^n\mathbb Z$ of rank $f$, one can find an element $g$ of order $p^n$. Recall that $\mathbb{D}_{\Phi}(V,\nabla,\Fil,\varphi)$ is the sub-$\mathbb Z_p$-module of $\mathrm{Hom}_{B^+(\widehat{R})}(V\otimes_{\kappa_{\Phi}} B^+(\widehat{R}),D)$ consisting of elements respecting the filtration and $\varphi$. 
In particular, the following diagram commutes \begin{equation} \xymatrix@C=1.5cm{ \left(\widetilde{V}\otimes_{\kappa_\Phi} B^+(\widehat{R})\right)\otimes_{\Phi} B^+(\widehat{R}) \ar[r]^(0.6){g\otimes_{\Phi} \mathrm{id}}\ar@{=}[d] &D\otimes_{\Phi} B^+(\widehat{R})\ar[dd]^{\simeq }\\ \left(\widetilde{V}\otimes_{\Phi} \widehat{R}\right)\otimes_{\kappa_\Phi} B^+(\widehat{R}) \ar[d]^{ \varphi\otimes \mathrm{id}} &\\ V\otimes_{\kappa_\Phi}B^+(\widehat{R})\ar[r]^g &D\\ } \end{equation} Comparing the images of $(\widetilde{e}_i\otimes_{\kappa_\Phi} 1)\otimes_\Phi 1$ under the diagram, we have \[\Phi_B(g(e_{i}\otimes_{\kappa_\Phi} 1))=g(e_{i+1}\otimes_{\kappa_\Phi} 1), \quad \text{ for all } 0\leq i\leq f-2;\] and \[\Phi_B(g(e_{f-1}\otimes_{\kappa_\Phi} 1))=\kappa_\Phi(r)^{p^n}\cdot g(e_{0}\otimes_{\kappa_\Phi} 1).\] So we have \[\Phi_B^f(g(e_{0}\otimes_{\kappa_\Phi} 1))=\kappa_\Phi(r)^{p^n}\cdot g(e_{0}\otimes_{\kappa_\Phi} 1).\] Since the image of $g$ is $p^n$-torsion, $\mathrm{Im}(g)$ is contained in $D[p^n]=\frac{1}{p^n}B^+(\widehat{R})/B^+(\widehat{R})$, the $p^n$-torsion part of $D$. Choose a lifting $a_{n,r}$ of $g(e_0\otimes_{\kappa_\Phi} 1)$ under the surjective map $B^+(\widehat{R}) \overset{\frac{1}{p^n}}{\longrightarrow} D[p^n]$. Then the equation~(\ref{a_nr}) follows. Similarly, one can define $a_{n,r^{-1}}$ for $r^{-1}$. By equation~(\ref{a_nr}), we have \[\Phi_B^f(a_{n,r}\cdot a_{n,r^{-1}})=a_{n,r}\cdot a_{n,r^{-1}}.\] Thus $a_{n,r}\cdot a_{n,r^{-1}}\in W(\mathbb{F}_{p^f})$. Since neither $a_{n,r}$ nor $a_{n,r^{-1}}$ is divisible by $p$ (by the choice of $g$), we know that $a_{n,r}\cdot a_{n,r^{-1}}\in W(\mathbb{F}_{p^f})^\times$. The invertibility of $a_{n,r}$ follows. \end{proof} \paragraph{\emph{Comparing representations.}} Let $n$ and $f$ be two positive integers. For all $0\leq i\leq f-1$, let $(V_i,\nabla_i,\Fil_i)$ be filtered de Rham $R_n=R/p^nR$-modules of level in $[0,a]$ (with $a\leq p-1$). We write $V=\bigoplus\limits_{i=0}^{f-1} V_i$, $\nabla=\bigoplus\limits_{i=0}^{f-1} \nabla_i$ and $\Fil=\bigoplus\limits_{i=0}^{f-1} \Fil_i$ for short. Let \begin{equation} \begin{split} \varphi_i: &\ C^{-1}\circ \overline{\mathrm{Gr}}(V_i,\nabla_i,\Fil_i)\simeq (V_{i+1},\nabla_{i+1}), \quad 0\leq i\leq f-2\\ \varphi_{f-1}: &\ C^{-1}\circ \overline{\mathrm{Gr}}(V_{f-1},\nabla_{f-1},\Fil_{f-1})\simeq (V_{0},\nabla_{0})\\ \end{split} \end{equation} be isomorphisms of de Rham $R_n$-modules. Let $r$ be an element in $R^\times$. Since $\mathrm{d}(r^{p^n})\equiv 0\pmod{p^n}$, the map $r^{p^n}\varphi_{f-1}$ is also an isomorphism of de Rham $R_n$-modules. Thus \[M=(V,\nabla,\Fil,\varphi) \text{ and } M'=(V,\nabla,\Fil,\varphi')\] are Fontaine-Faltings modules over $R_n$, where $\varphi=\sum\limits_{i=0}^{f-1}\varphi_i$ and $\varphi'=\sum\limits_{i=0}^{f-2}\varphi_i+r^{p^n}\varphi_{f-1}$. \begin{prop} $i)$. There are $W_n(\mathbb{F}_{p^f})$-module structures on $\mathbb D_{\Phi}(M)$ and $\mathbb D_{\Phi}(M')$, and the actions of $\mathrm{Gal}(\widehat{\overline{R}}/\widehat{R})$ are semi-linear.\\ $ii)$. Multiplication by $a_{n,r}$ on $\Hom_{B^+(\widehat{R})}\left(V \otimes_{\kappa_\Phi} B^+(\widehat{R}),D\right)$ induces a $W_n(\mathbb{F}_{p^f})$-linear isomorphism between these two submodules \[\mathbb D_{\Phi}(M)\overset{\sim}{\longrightarrow} \mathbb D_{\Phi}(M').\] \end{prop} \begin{proof} $i)$. Let $g : \bigoplus\limits_{i=0}^{f-1} V_i \otimes_{\kappa_\Phi} B^+(\widehat{R})\rightarrow D$ be an element in $\mathbb D_{\Phi}(M)$. 
For any $a\in W_n(\mathbb{F}_{p^f})$, denote \begin{equation}\label{action of W_n(F_p^f)} a\star g:=\sum_{i=0}^{f-1} \sigma^{i}(a) g_i \end{equation} where $g_i$ is the restriction of $g$ to the $i$-th component $V_i \otimes_{\kappa_{\Phi}} B^+(\widehat{R})$. One checks that $a\star g$ is also contained in $\mathbb D_{\Phi}(M)$. Thus $\star$ defines a $W_n(\mathbb{F}_{p^f})$-module structure on $\mathbb D_{\Phi}(M)$. Let $\delta$ be an element in $\mathrm{Gal}(\widehat{\overline{R}}/\widehat{R})$. Then \begin{equation} \begin{split} \delta(a\star g) & = \delta\circ\left(\sum_{i=0}^{f-1} \sigma^{i}(a) g_i\right)\circ \delta^{-1}\\ &=\sum_{i=0}^{f-1} \sigma^{i}(\delta(a)) \delta\circ g_i \circ \delta^{-1}\\ &=\delta(a)\star\delta(g) \end{split} \end{equation} In this way, $\mathbb D_{\Phi}(M)$ forms a $W_n(\mathbb{F}_{p^f})$-module with a continuous semi-linear action of $\pi^\text{\'et}_1(U_K)$. The $W_n(\mathbb{F}_{p^f})$-module structure on $\mathbb D_{\Phi}(M')$ is defined in the same manner as in (\ref{action of W_n(F_p^f)}). \\ $ii)$. Recall that $\mathbb{D}_\Phi(M)$ (resp. $\mathbb{D}_\Phi(M')$) is defined to be the set of all morphisms in $\Hom_{B^+(\widehat{R})}\left(V \otimes_{\kappa_\Phi} B^+(\widehat{R}),D\right)$ compatible with the filtration and $\varphi$ (resp. $\varphi'$). Since $\mathbb{D}_\Phi(M)$ and $\mathbb{D}_\Phi(M')$ have the same rank and the multiplication-by-$a_{n,r}$ map on $\Hom_{B^+(\widehat{R})}\left(V \otimes_{\kappa_\Phi} B^+(\widehat{R}),D\right)$ is injective, we only need to show that $a_{n,r}\cdot f\in \mathbb D_{\Phi}(M')$ for all $f\in \mathbb D_{\Phi}(M)$. Suppose $f:V\otimes_{\kappa_\Phi} B^+(\widehat{R})\rightarrow D$ is an element in $\mathbb D_{\Phi}(M)$, which means that $f$ satisfies the following two conditions: 1). $f$ is strict for the filtrations, i.e. \[\sum_{\ell_1+\ell_2=\ell}\Fil^{\ell_1} V\otimes_{\kappa_\Phi} \Fil^{\ell_2} B^+(\widehat{R})=f^{-1}(\Fil^{\ell} D).\] 2). $f \otimes _\Phi \mathrm{id} =f\circ (\varphi\otimes_{\kappa_\Phi} \mathrm{id})$, i.e. the following diagram commutes \begin{equation} \xymatrix@C=1.5cm{ \left(\widetilde{V}\otimes_{\kappa_\Phi} B^+(\widehat{R})\right)\otimes_{\Phi} B^+(\widehat{R}) \ar[r]^(0.6){ f\otimes_{\Phi} \mathrm{id}}\ar@{=}[d] & D\otimes_{\Phi} B^+(\widehat{R})\ar[dd]^{\simeq }\\ \left(\widetilde{V}\otimes_{\Phi} \widehat{R}\right)\otimes_{\kappa_\Phi} B^+(\widehat{R}) \ar[d]^{ \varphi\otimes \mathrm{id}} &\\ V\otimes_{\kappa_\Phi}B^+(\widehat{R})\ar[r]^f &D\\ } \end{equation} Since $a_{n,r}\in B^+(\widehat{R})^\times\subset\Fil^0 B^+(\widehat{R})\setminus \Fil^1 B^+(\widehat{R})$, we have $a_{n,r}\cdot \Fil^\ell D=\Fil^\ell D$, and thus \[\sum_{\ell_1+\ell_2=\ell}\Fil^{\ell_1} V\otimes_{\kappa_\Phi} \Fil^{\ell_2} B^+(\widehat{R})=f^{-1}(\Fil^{\ell} D)=(a_{n,r}\cdot f)^{-1}(\Fil^{\ell} D).\] At the same time, we have \begin{equation*} \begin{split} (a_{n,r}\cdot f)\otimes _\Phi \mathrm{id} =& f\otimes_\Phi \Phi(a_{n,r})\cdot \mathrm{id}\\ =& f\otimes_\Phi a_{n,r}\cdot \kappa_\Phi(r)^{p^n}\cdot \mathrm{id}=( a_{n,r}\cdot \kappa_\Phi(r)^{p^n})\cdot \Big(f\otimes_\Phi\mathrm{id}\Big) \\ =& a_{n,r}\cdot \kappa_\Phi(r)^{p^n}\cdot\Big(f\circ (\varphi\otimes_{\kappa_\Phi} \mathrm{id})\Big) \\ =& (a_{n,r}\cdot f)\circ (r^{p^n}\varphi\otimes_{\kappa_\Phi} \mathrm{id})\\ \end{split} \end{equation*} So by definition $a_{n,r}\cdot f\in \mathbb D_{\Phi}(M')$. 
\end{proof} \begin{cor}\label{cor_compare_proj} Suppose that $\mathbb{F}_{p^f}\subset k$. The map from $\mathbb D_{\Phi}(M)$ to $\mathbb D_{\Phi}(M')$ is an isomorphism of projective $W_n(\mathbb{F}_{p^f})$-representations of $\mathrm{Gal}(\overline{\widehat{R}}/\widehat{R})$. In particular, we have a bijection of $\mathrm{Gal}(\overline{\widehat{R}}/\widehat{R})$-sets \[ \mathbb D_{\Phi}(M)/W_n(\mathbb{F}_{p^f})^\times\rightarrow \mathbb D_{\Phi}(M')/W_n(\mathbb{F}_{p^f})^\times.\] \end{cor} \subsection{The functor $\mathbb D^P$}\label{section FDP} In this section, we assume $f$ to be a positive integer with $\mathbb F_{p^f}\subset k$. Let $\{\mathcal U_j\}_{j\in J}$ be a finite small affine open covering of $\mathcal X$. Let ${U}_j=(\mathcal U_j)_K$. For every $j\in J$, fix $\Phi_j$ a lifting of the absolute Frobenius on $\mathcal U_j\otimes_W k$. Fix $\overline{x}$ a geometric point in ${U}_J=\bigcap\limits_{j\in J} U_j$ and fix an element $j_0\in J$. Let $(V,\nabla,\Fil,\varphi,\iota)$ be a Fontaine-Faltings module over $X_n$ with an endomorphism structure of $W(\mathbb{F}_{p^f})$ whose Hodge-Tate weights lie in $[0,p-2]$. Locally, applying the Fontaine-Laffaille-Faltings functor $\mathbb D_{\Phi_j}$, one gets a finite $W_n(\mathbb F_{p^f})$-representation $\varrho_j$ of $\pi^\text{\'et}_1(U_j,\overline{x})$. Faltings shows that there is an isomorphism $\varrho_{j_1}\simeq \varrho_{j_2}$ of $\mathbb Z/p^n\mathbb Z$-representations of $\pi^\text{\'et}_1(U_{j_1j_2},\overline{x})$. By Lan-Sheng-Zuo~\cite{LSZ13a}, this isomorphism is $W_n(\mathbb F_{p^f})$-linear. By Theorem~\ref{Mainthm:rep}, these $\varrho_j$ descend uniquely to a $W_n(\mathbb F_{p^f})$-representation of $\pi^\text{\'et}_1(X_K,\overline{x})$. In this way one reconstructs the $W_n(\mathbb F_{p^f})$-representation $\mathbb D(V,\nabla,\Fil,\varphi,\iota)$. Now we construct the functor $\mathbb D^P$ for twisted Fontaine-Faltings modules in a similar way. Let $(V_i,\nabla_i,\Fil_i,\varphi_i)_{0\leq i<f}\in\mathcal {TMF}_{[0,p-2],f}^{\nabla}(X_{n+1}/W_{n+1})$ be an $L_n$-twisted Fontaine-Faltings module over $X_n$ with endomorphism structure of $W_n(\mathbb F_{p^f})$ whose Hodge-Tate weights lie in $[0,p-2]$. For each $j\in J$, choosing a trivialization $M(\tau_j)$ and applying the Fontaine-Laffaille-Faltings functor $\mathbb D_{\Phi_j}$, we get a $W_n(\mathbb F_{p^f})$-module together with a linear action of $\pi^\text{\'et}_1(U_j,\overline{x})$; denote the resulting representation by $\varrho_j$. By Corollary~\ref{cor_compare_proj}, there is an isomorphism $\varrho_{j_1}\simeq \varrho_{j_2}$ of projective $W_n(\mathbb F_{p^f})$-representations of $\pi^\text{\'et}_1(U_{j_1j_2},\overline{x})$. In what follows, we will show that these $\varrho_j$ descend uniquely to a projective $W_n(\mathbb F_{p^f})$-representation of $\pi^\text{\'et}_1(X_K,\overline{x})$ by using Theorem~\ref{Mainthm:rep}. 
In order to use Theorem~\ref{Mainthm:rep}, set $\Sigma_j$ to be the quotient $\pi^\text{\'et}_1(U_j,\overline{x})$-set $$\mathbb{D}_{\Phi_j}(M(\tau_j))/W_n(\mathbb F_{p^f})^\times.$$ Obviously the kernel of the canonical group morphism \[\mathrm{GL}\big(\mathbb{ D}_{\Phi_{j}}(M(\tau_j))\big)\rightarrow \mathrm{Aut}(\Sigma_j)\] is just $W_n(\mathbb F_{p^f})^\times$, so we identify the image of this morphism with \[\mathrm{PGL}\big(\mathbb{D}_{\Phi_{j}}(M(\tau_j))\big)=\mathrm{GL}\big(\mathbb{D}_{\Phi_{j}}(M(\tau_j))\big)/W_n(\mathbb F_{p^f})^\times.\] For all $j\in J$, denote by $\rho_j$ the composition of $\varrho_j$ with $\mathrm{GL}\big(\mathbb{ D}_{\Phi_{j}}(M(\tau_j))\big)\rightarrow \mathrm{Aut}(\Sigma_j)$. \begin{equation*} \xymatrix{ \pi^\text{\'et}_1(U_{j_0},\overline{x}) \ar@{->>}[d] \ar[r]^(0.3){\varrho_{j_0}} \ar[rrd]_(0.3){\rho_{j_0}}|!{[r];[dr]}\hole & \mathrm{GL}\big(\mathbb{ D}_{\Phi_{j_0}}(M(\tau_{j_0}))\big) \ar@{->>}[d] \ar[dr] & \\ \pi^\text{\'et}_1(X_K,\overline{x})\ar@{.>}[r]^(0.3){\widehat{\rho}_{j_0}} \ar@/_15pt/@{.>}[rr]_{\widehat{\rho}_{j_0}}& \mathrm{PGL}\big(\mathbb{D}_{\Phi_{j_0}}(M(\tau_{j_0}))\big) \ar@{>->}[r] & \mathrm{Aut}(\Sigma_{j_0})\\ } \end{equation*} By Corollary~\ref{cor_compare_proj}, the restrictions of $(\Sigma_{j_1},\rho_{j_1})$ and $(\Sigma_{j_2},\rho_{j_2})$ to $\pi^\text{\'et}_1(U_{j_1j_2},\overline{x})$ are isomorphic for all $j_1,j_2\in J$. Hence by Theorem~\ref{Mainthm:rep}, the map $\rho_{j_0}$ descends to some $\widehat{\rho}_{j_0}$ whose image is contained in $\mathrm{PGL}\big(\mathbb{D}_{\Phi_{j_0}}(M(\tau_{j_0}))\big)$. So the projective $W_n(\mathbb{F}_{p^f})$-representation $(\mathbb{D}_{\Phi_{j_0}}(M(\tau_{j_0})),\rho_{j_0})$ of $\pi^\text{\'et}_1(U_{j_0},\overline{x})$ descends to a projective representation $(\mathbb{D}_{\Phi_{j_0}}(M(\tau_{j_0})),\widehat\rho_{j_0})$ of $\pi^\text{\'et}_1(X_K,\overline{x})$. Up to a canonical isomorphism, this projective representation does not depend on the choices of the covering $\{\mathcal U_{j}\}_{{j}\in J}$, the liftings $\Phi_j$ and the element $j_0$. We denote this projective $W_n(\mathbb F_{p^f})$-representation of $\pi^\text{\'et}_1(X_K,\overline{x})$ by \[\mathbb D^P\Big((V_i,\nabla_i,\Fil_i,\varphi_i)_{0\leq i<f}\Big).\] As for Faltings' functor $\mathbb{D}$ in \cite{Fal89}, our construction of the functor $\mathbb D^P$ can also be extended to the logarithmic setting. More precisely, let $\mathcal X$ be a smooth and proper scheme over $W$ and let $\mathcal X^o$ be the complement of a simple normal crossing divisor $\D\subset \mathcal X$ relative to $W$. By replacing $X_K$ and $U_{j}$ with $X_K^o$ and $U_{j}^o$, we construct the functor \begin{equation} \mathbb D^P: \mathcal {TMF}_{[0,p-2],f}^{\nabla}(X^o_{n+1}/W_{n+1}) \rightarrow \mathrm{PRep}_{W_n(\mathbb F_{p^f})}^{\mathrm{free}}(\pi^\text{\'et}_1(X_K^o)) \end{equation} from the category of strict $p^n$-torsion twisted logarithmic Fontaine-Faltings modules (with pole along $\D\times W_n\subset \mathcal X\times W_n$) with endomorphism structure of $W_n(\mathbb{F}_{p^f})$ whose Hodge-Tate weights lie in $[0,p-2]$ to the category of free $W_n(\mathbb{F}_{p^f})$-modules with a projective action of $\pi^\text{\'et}_1(X_K^o)$. Summarizing this section, we get the following result. \begin{thm}\label{ConsFunc:D^P} Let $M$ be a twisted logarithmic Fontaine-Faltings module over $\mathcal X$ (with pole along $\D$) with endomorphism structure of $W(\mathbb{F}_{p^f})$. 
The $\mathbb D^P$-functor associates to $M$ and its endomorphism structure a projective representation \[\rho : \pi^\text{\'et}_1(X_K^o) \to \mathrm{PGL}(\mathbb{D}^P(M)),\] where $X_K^o$ is the generic fiber of $\mathcal X^o=\mathcal X\setminus \D$. \end{thm} \section{Twisted periodic Higgs-de Rham flows}\label{section (T)PHDF} In this section, we will recall the definition of periodic Higgs-de Rham flows and generalize it to the twisted version. \subsection{Higgs-de Rham flow over $X_n\subset X_{n+1}$}\label{section HDF} Recall~\cite{LSZ13a} that a \emph{Higgs-de Rham flow} over $X_n\subset X_{n+1}$ is a sequence consisting of infinitely many alternating terms of filtered de Rham bundles and Higgs bundles \[\left\{ (V,\nabla,\Fil)^{(n-1)}_{-1}, (E,\theta)_{0}^{(n)}, (V,\nabla,\Fil)_{0}^{(n)}, (E,\theta)_{1}^{(n)}, (V,\nabla,\Fil)_{1}^{(n)}, \cdots\right\},\] which are related to each other inductively by the following diagram \begin{equation*}\tiny \xymatrix@W=10mm@C=-3mm@R=5mm{ &&&& (V,\nabla,\Fil)_{0}^{(n)} \ar[dr]^{\mathrm{Gr}} && (V,\nabla,\Fil)_{1}^{(n)} \ar[dr]^{\mathrm{Gr}} &\\ &&& (E,\theta)_{0}^{(n)} \ar[ur]^{\mathcal C^{-1}_n} \ar@{..>}[dd] && (E,\theta)_{1}^{(n)} \ar[ur]^{\mathcal C^{-1}_n} && \cdots\\ (V,\nabla,\Fil)^{(n-1)}_{-1} \ar[dr]^{\mathrm{Gr}} &&&&&&&\\ &\mathrm{Gr}\left((V,\nabla,\Fil)^{(n-1)}_{-1}\right)\ar[rr]^{\sim}_{\psi} &&(E,\theta)_{0}^{(n)}\pmod{p^{n-1}} &&&&\\ } \end{equation*} where \begin{itemize} \item[-] $(V,\nabla,\Fil)^{(n-1)}_{-1}$ is a filtered de Rham bundle over $X_{n-1}$ of level in $[0,p-2]$; \item[-] $(E,\theta)_0^{(n)}$ is a lifting of the graded Higgs bundle $\mathrm{Gr}\left((V,\nabla,\Fil)^{(n-1)}_{-1}\right)$ over $X_n$, $(V,\nabla)_0^{(n)}:=C^{-1}_n ((E,\theta)_0^{(n)},(V,\nabla,\Fil)^{(n-1)}_{-1} ,\psi)$ and $\Fil^{(n)}_0$ is a Hodge filtration on $(V,\nabla)_0^{(n)}$ of level in $[0,p-2]$; \item[-] inductively, for $m\geq1$, $(E,\theta)_m^{(n)}:=\mathrm{Gr}\left((V,\nabla,\Fil)_{m-1}^{(n)}\right)$ and $(V,\nabla)_m^{(n)}:=C^{-1}_n \left( (E,\theta)_{m}^{(n)}, (V,\nabla,\Fil)^{(n-1)}_{m-1}, \mathrm{id} \right)$. Here $(V,\nabla,\Fil)^{(n-1)}_{m-1}$ is the reduction of $(V,\nabla,\Fil)^{(n)}_{m-1}$ to $X_{n-1}$, and $\Fil^{(n)}_m$ is a Hodge filtration on $(V,\nabla)_m^{(n)}$. \end{itemize} \begin{rmk} In the case $n=1$, the datum $(V,\nabla,\Fil)_{-1}^{(n-1)}$ is empty. The Higgs-de Rham flow can then be rewritten in the form \[\left\{ (E,\theta)_{0}^{(1)}, (V,\nabla,\Fil)_{0}^{(1)}, (E,\theta)_{1}^{(1)}, (V,\nabla,\Fil)_{1}^{(1)}, \cdots\right\}.\] In this way, the diagram becomes \begin{equation*}\tiny \xymatrix@W=5mm@C=2mm{ &(V,\nabla,\Fil)_{0}^{(1)} \ar[dr]^{\mathrm{Gr}} && (V,\nabla,\Fil)_{1}^{(1)} \ar[dr]^{\mathrm{Gr}} &\\ (E,\theta)_{0}^{(1)} \ar[ur]^{\mathcal C^{-1}_1} && (E,\theta)_{1}^{(1)} \ar[ur]^{\mathcal C^{-1}_1} && \cdots\\ } \end{equation*} \end{rmk} In the rest of this section, we will give the definition of twisted periodic Higgs-de Rham flows (Section~\ref{section TPHDF}), which generalizes the periodic Higgs-de Rham flows of~\cite{LSZ13a}. \subsection{Twisted periodic Higgs-de Rham flows and equivalence of categories}\label{section TPHDF} Let $L_n$ be a line bundle over $X_n$. For all $1\leq \ell <n$, denote by $L_\ell=L_n\otimes_{\mathcal O_{X_n}}\mathcal O_{X_\ell}$ the reduction of $L_n$ to $X_\ell$. In this subsection, let $a\leq p-2$ be a positive integer. We will give the definition of $L_n$-twisted Higgs-de Rham flows of level in $[0,a]$. 
\subsubsection{Twisted periodic Higgs-de Rham flows over $X_1$.} \begin{defi} Let $f$ be a positive integer. An \emph{$f$-periodic $L_1$-twisted Higgs-de Rham flow over $X_1\subset X_2$} of level in $[0,a]$ is a Higgs-de Rham flow over $X_1$ \[\left\{ (E,\theta)_{0}^{(1)}, (V,\nabla,\Fil)_{0}^{(1)}, (E,\theta)_{1}^{(1)}, (V,\nabla,\Fil)_{1}^{(1)}, \cdots\right\}\] together with isomorphisms $\phi^{(1)}_{f+i}:(E,\theta)_{f+i}^{(1)}\otimes (L_1^{p^i},0)\rightarrow (E,\theta)_i^{(1)}$ of Higgs bundles for all $i\geq0$, as in the following diagram: \begin{equation*} \tiny\xymatrix@W=2cm@C=-13mm{ &\left(V,\nabla,\Fil\right)_{0}^{(1)} \ar[dr]^{\mathrm{Gr}} &&\left(V,\nabla,\Fil\right)_{1}^{(1)}\ar[dr]^{\mathrm{Gr}} &&\cdots \ar[dr]^{\mathrm{Gr}} &&\left(V,\nabla,\Fil\right)_{f}^{(1)}\ar[dr]^{\mathrm{Gr}} &&\left(V,\nabla,\Fil\right)_{f+1}^{(1)}\ar[dr]^{\mathrm{Gr}} &&\cdots\\ \left(E,\theta\right)_{0}^{(1)}\ar[ur]_{\mathcal C^{-1}_{1}} &&\left(E,\theta\right)_{1}^{(1)}\ar[ur]_{\mathcal C^{-1}_{1}} && \cdots \ar[ur]_{\mathcal C^{-1}_{1}} &&\left(E,\theta\right)_{f}^{(1)}\ar[ur]_{\mathcal C^{-1}_{1}} \ar@/^20pt/[llllll]^{\phi_f^{(1)}}|(0.33)\hole &&\left(E,\theta\right)_{f+1}^{(1)}\ar[ur]_{\mathcal C^{-1}_{1}} \ar@/^20pt/[llllll]^{\phi_{f+1}^{(1)}}|(0.33)\hole && \cdots \ar@/^20pt/[llllll]^{\cdots}\ar[ur]_{\mathcal C^{-1}_{1}}\\} \end{equation*} Moreover, for any $i\geq 0$, the isomorphism \begin{equation*} C^{-1}_1(\phi^{(1)}_{f+i}): (V,\nabla)_{f+i}^{(1)}\otimes (L_1^{p^{i+1}},\nabla_{\mathrm{can}})\rightarrow (V,\nabla)_{i}^{(1)} \end{equation*} is required to strictly respect the filtrations $\Fil_{f+i}^{(1)}$ and $\Fil_{i}^{(1)}$. These $\phi^{(1)}_{f+i}$ are related to each other by the formula \[\phi^{(1)}_{f+i+1}=\mathrm{Gr}\circ C^{-1}_1(\phi^{(1)}_{f+i}).\] \end{defi} Denote the category of all twisted $f$-periodic Higgs-de Rham flows over $X_1$ of level in $[0,a]$ by $\mathcal{THDF}_{a,f}(X_{2}/W_2)$. \subsubsection{Twisted periodic Higgs-de Rham flows over $X_n\subset X_{n+1}$.} Let $n\geq2$ be an integer, let $f$ be a positive integer and let $L_n$ be a line bundle over $X_n$. Denote by $L_{\ell}$ the reduction of $L_n$ modulo $p^\ell$. We define the category $\mathcal{THDF}_{a,f}(X_{n+1}/W_{n+1})$ of all $f$-periodic twisted Higgs-de Rham flows over $X_n\subset X_{n+1}$ of level in $[0,a]$ in the following inductive way. 
\begin{defi} An $L_n$-twisted $f$-periodic Higgs-de Rham flow over $X_n\subset X_{n+1}$ is a Higgs-de Rham flow \[\Big\{(V,\nabla,\Fil)_{n-2}^{(n-1)},(E,\theta)_{n-1}^{(n)},(V,\nabla,\Fil)_{n-1}^{(n)},(E,\theta)_{n}^{(n)},\cdots\Big\}_{/X_n\subset X_{n+1}}\] which is a lifting of an $L_{1}$-twisted $f$-periodic Higgs-de Rham flow \[\Big\{(E,\theta)_{0}^{(1)},(V,\nabla,\Fil)_{0}^{(1)},(E,\theta)_{1}^{(1)},(V,\nabla,\Fil)_{1}^{(1)},\cdots;\phi^{(1)}_{\bullet}\Big\}_{/X_{1}\subset X_{2}}.\] It is constructed inductively via the following diagram, for $2\leq \ell \leq n$: \begin{equation*} \tiny \xymatrix@W=2cm@C=-8mm{ &&\Big(V,\nabla,\Fil\Big)_{\ell-1}^{(\ell)} \ar[dr]^{\mathrm{Gr}}\ar@{.>}[ddd] &&\cdots \ar[dr]^{\mathrm{Gr}}\ar@{.>}[ddd] &&\Big(V,\nabla,\Fil\Big)_{\ell+f-2}^{(\ell)} \ar[dr]^{\mathrm{Gr}}\ar@{.>}[ddd] \\ &\left(E,\theta\right)_{\ell-1}^{(\ell)} \ar[ur]_{\mathcal C^{-1}_n}\ar@{.>}[ddd]_(0.3){\mod p^{\ell-1}} &&\left(E,\theta\right)_{\ell}^{(\ell)} \ar[ur]_{\mathcal C^{-1}_n}\ar@{.>}[ddd] &&\cdots \ar[ur]_{\mathcal C^{-1}_n}\ar@{.>}[ddd] &&\left(E,\theta\right)_{\ell+f-1}^{(\ell)} \ar@{.>}[ddd] \ar@/^20pt/[llllll]^(0.44){\phi_{\ell+f-1}^{(\ell)}} \\ &&&&&&&\\ \Big(V,\nabla,\Fil\Big)_{\ell-2}^{(\ell-1)} \ar[dr]^{\mathrm{Gr}} &&\Big(V,\nabla,\Fil\Big)_{\ell-1}^{(\ell-1)} \ar[dr]^{\mathrm{Gr}} &&\cdots \ar[dr]^{\mathrm{Gr}} &&\Big(V,\nabla,\Fil\Big)_{\ell+f-2}^{(\ell-1)} \ar[dr]^{\mathrm{Gr}} \\ &\quad\left(E,\theta\right)_{\ell-1}^{(\ell-1)}\quad \ar[ur]_{\mathcal C^{-1}_{\ell-1}} &&\left(E,\theta\right)_{\ell}^{(\ell-1)} \ar[ur]_{\mathcal C^{-1}_{\ell-1}} &&\cdots \ar[ur]_{\mathcal C^{-1}_{\ell-1}} &&\left(E,\theta\right)_{\ell+f-1}^{(\ell-1)} \ar@/^20pt/[llllll]^(0.44){ \phi_{\ell+f-1}^{(\ell-1)}} \\ } \end{equation*} Here\\ $\bullet$ $(E,\theta)_{\ell-1}^{(\ell)}/X_\ell$ is a lifting of $(E,\theta)_{\ell-1}^{(\ell-1)}/X_{\ell-1}$, which automatically implies that $(V,\nabla)_{\ell-1}^{(\ell)}:=C^{-1}_\ell \left( (E,\theta)_{\ell-1}^{(\ell)}, (V,\nabla,\Fil)^{(\ell-1)}_{\ell-2}, \mathrm{id}\right)$ is a lifting of $(V,\nabla)_{\ell-1}^{(\ell-1)}$, since $C_\ell^{-1}$ is a lifting of $C_{\ell-1}^{-1}$.\\ $\bullet$ $\Fil_{\ell-1}^{(\ell)}\subset (V,\nabla)_{\ell-1}^{(\ell)}$ is a lifting of the Hodge filtration $\Fil_{\ell-1}^{(\ell-1)}\subset (V,\nabla)_{\ell-1}^{(\ell-1)}$, which implies that $(E,\theta)_{\ell}^{(\ell)}=\mathrm{Gr}\left((V,\nabla,\Fil)_{\ell-1}^{(\ell)}\right)/X_\ell$ is a lifting of $(E,\theta)_{\ell}^{(\ell-1)}/X_{\ell-1}$ and $(V,\nabla)_{\ell}^{(\ell)}:=C^{-1}_\ell \left( (E,\theta)_\ell^{(\ell)}, (V,\nabla,\Fil)^{(\ell-1)}_{\ell-1}, \mathrm{id} \right)$.\\ $\bullet$ Repeating the process above, one gets the data $\Fil_i^{(\ell)}$, $(E,\theta)_{i+1}^{(\ell)}$ and $(V,\nabla)_{i+1}^{(\ell)}$ for all $i\geq \ell$.\\ $\bullet$ Finally, for all $i\geq \ell-1$, $\phi^{(\ell)}_{i+f}:(E,\theta)_{i+f}^{(\ell)}\otimes (L_\ell^{p^i},0) \rightarrow (E,\theta)_{i}^{(\ell)} $ is a lifting of $\phi^{(\ell-1)}_{i+f}$, and these morphisms are related to each other by the formula $\phi_{i+f+1}^{(\ell)}=\mathrm{Gr}\circ C^{-1}_\ell(\phi_{i+f}^{(\ell)})$. Denote the twisted periodic Higgs-de Rham flow by \[\Big\{(V,\nabla,\Fil)_{n-2}^{(n-1)},(E,\theta)_{n-1}^{(n)},(V,\nabla,\Fil)_{n-1}^{(n)},(E,\theta)_{n}^{(n)},\cdots;\phi^{(n)}_{\bullet}\Big\}_{/X_n\subset X_{n+1}}.\] The category of all periodic twisted Higgs-de Rham flows over $X_n\subset X_{n+1}$ of level in $[0,a]$ is denoted by $\mathcal{THDF}_{a,f}(X_{n+1}/W_{n+1})$. 
\end{defi} \begin{rmk} For the trivial line bundle $L_n$, the definition above is equivalent to the original definition of periodic Higgs-de Rham flow in~\cite{LSZ13a} by using the identification $\phi: (E,\theta)_0=(E,\theta)_f$. \end{rmk} Note that we can also define the logarithmic version of the twisted periodic Higgs-de Rham flow since we already have the log version of inverse Cartier transform. $\mathcal X$ is a smooth proper scheme over $W$ and $\mathcal X^o$ is the complement of a simple normal crossing divisor $\D\subset \mathcal X$ relative to $W$. Similarly, one constructs the category $\mathcal{THDF}_{a,f}(X^o_{n+1}/W_{n+1})$ of twisted $f$-periodic logarithmic Higgs-de Rham flows (with pole along $\D\times W_n\subset \mathcal X\times W_n$) over $\mathcal X\times W_n$ whose nilpotent exponents are $\leq p-2$ . \subsubsection{Equivalence of categories.} We establish an equivalence of categories between $\mathcal{THDF}_{a,f}(X_{n+1}/W_{n+1})$ and $\mathcal {TMF}_{[0,a],f}^{\nabla}(X_{n+1}/W_{n+1})$. \begin{thm}\label{equiv:TFF&THDF} Let $a \leq p-1$ be a natural number and $f$ be an positive integer. Then there is an equivalence of categories between $\mathcal{THDF}_{a,f}(X_{n+1}/W_{n+1})$ and $\mathcal {TMF}_{[0,a],f}^{\nabla}(X_{n+1}/W_{n+1})$. \end{thm} \begin{proof} Let \[\mathscr E=\Big\{(V,\nabla,\Fil)_{n-2}^{(n-1)},(E,\theta)_{n-1}^{(n)},(V,\nabla,\Fil)_{n-1}^{(n)},(E,\theta)_{n}^{(n)},\cdots;\phi^{(n)}_{\bullet}\Big\}_{/X_n\subset X_{n+1}}\] be an $f$-periodic $L_n$-twisted Higgs-de Rham flow over $X_n$ with level in $[0,a]$. Taking out $f$ terms of filtered de Rham bundles \[(V,\nabla,\Fil)_0^{(n)}, (V,\nabla,\Fil)_1^{(n)},\cdots,(V,\nabla,\Fil)_{f-1}^{(n)}\] together with $f-1$ terms of identities maps \[\varphi_i: C^{-1}_n\circ \mathrm{Gr}\left( (V,\nabla,\Fil)_i^{(n)} \right)=(V,\nabla)_{i+1}^{(n)}, \quad i=0,1,\cdots,f-2,\] and $\varphi_{f-1}:= C^{-1}_n\left(\phi_{f}^{(n)}\right)$, one gets a tuple \[ \mathcal{IC}(\mathscr E):=\left(V_i^{(n)},\nabla_i^{(n)},\Fil_i^{(n)},\varphi_i\right)_{0\leq i<f},\] This tuple forms an $L_n$-twisted Fontaine-Faltings module by definition. It gives us the functor $\mathcal{IC}$ from $\mathcal{THDF}_{a,f}(X_{n+1}/W_{n+1})$ to $\mathcal {TMF}_{[0,a],f}^{\nabla}(X_{n+1}/W_{n+1})$. Conversely, let $(V_i,\nabla_i,\Fil_i,\varphi_i)_{0\leq i<f}$ be an $L_n$-twisted Fontaine-Faltings module. For $0\leq i\leq f-2$, we identify $(V_{i+1},\nabla_{i+1})$ with $C^{-1}_n\circ\mathrm{Gr}(V_i,\nabla_i,\Fil_i)$ via $\varphi_i$. We construct the corresponding flow by induction on $n$. In case $n=1$, we already have following diagram \begin{equation}\tiny \xymatrix@W=2cm@C=-13mm{ (V,\nabla,\Fil)_0\ar[dr]|{Gr} && (V,\nabla,\Fil)_1\ar[dr]|{Gr} && \cdots \ar[dr]|{Gr} && (V,\nabla,\Fil)_{f-1} \ar[dr]|{Gr} && (V,\nabla)_f \ar@/_15pt/[llllllll]|{\varphi_{f-1}}\\ & (E,\theta)_1 \ar[ur]|{C_1^{-1}} &&\cdots \ar[ur]|{C_1^{-1}} && (E,\theta)_{f-1} \ar[ur]|{C_1^{-1}} && (E,\theta)_{f} \ar[ur]|{C_1^{-1}} } \end{equation} Denote $(E,\theta)_0=(E,\theta)_f\otimes(L_1,0)$. Then \[C_1^{-1}(E_0,\theta_0)\simeq(V_f,\nabla_f)\otimes(L_1^p,\nabla_{\mathrm{can}})\simeq (V_0,\nabla_0).\] By this isomorphism, we identify $(V_0,\nabla_0)$ with $C_1^{-1}(E_0,\theta_0)$. Under this isomorphism, the Hodge filtration $\Fil_0$ induces a Hodge filtration $\Fil_f$ on $(V_f,\nabla_f)$. Take Grading and denote \[(E_{f+1},\theta_{f+1}):=\mathrm{Gr}(V_f,\nabla_f,\Fil_f).\] Inductively, for $i>f$, we denote $(V_i,\nabla_i)=C^{-1}_1(E_i,\theta_i)$. 
By the isomorphism \[\left(C^{-1}_1\circ \mathrm{Gr}\right)^{i-f}(\varphi_{f-1}):(V_i,\nabla_i)\otimes (L_1^{p^{i+1-f}},\nabla_{\mathrm{can}})\rightarrow (V_{i-f},\nabla_{i-f}),\] the Hodge filtration $\Fil_{i-f}$ induces a Hodge filtration $\Fil_i$ on $(V_i,\nabla_i)$. Denote \[(E_{i+1},\theta_{i+1}):=\mathrm{Gr}(V_i,\nabla_i,\Fil_i).\] Then we extend the above diagram into the following twisted periodic Higgs-de Rham flow over $X_1$ \begin{equation}\tiny \xymatrix@W=2cm@C=-13mm{ & (V,\nabla,\Fil)_0\ar[dr]|{Gr} && (V,\nabla,\Fil)_1\ar[dr]|{Gr} && \cdots \ar[dr]|{Gr} && (V,\nabla,\Fil)_{i} \ar[dr]|{Gr} && (V,\nabla)_{i+1} \ar[dr]|{Gr} \\ (E,\theta)_0 \ar[ur]|{C_1^{-1}} && (E,\theta)_1 \ar[ur]|{C_1^{-1}} &&\cdots \ar[ur]|{C_1^{-1}} && (E,\theta)_{i} \ar[ur]|{C_1^{-1}} && (E,\theta)_{i+1} \ar[ur]|{C_1^{-1}} && \cdots } \end{equation} For $n\geq2$, denote \[(\overline{V}_{-1},\overline{\nabla}_{-1},\overline{\Fil}_{-1}):=(\overline{V}_{f-1}\otimes L_{n-1}^{p^{n-1}},\overline\nabla_{f-1}\otimes \nabla_{can},\overline\Fil_{f-1}\otimes \Fil_{\mathrm{tri}}),\] where $(\overline{V}_{f-1},\overline{\nabla}_{f-1},\overline{\Fil}_{f-1})$ is the modulo $p^{n-1}$ reduction of $({V}_{f-1}, {\nabla}_{f-1}, {\Fil}_{f-1})$. Those $\varphi_i$ reduce to a $\varphi$-structure on $(\overline{V}_{i},\overline{\nabla}_{i},\overline{\Fil}_{i})_{-1\leq i<f-1}$. This gives us an $L_{n-1}$-twisted Fontaine-Faltings module over $X_{n-1}$ \[(\overline{V}_{i},\overline{\nabla}_{i},\overline{\Fil}_{i},\overline{\varphi}_i)_{-1\leq i<f-1}.\] By induction, we have a twisted periodic Higgs-de Rham flow over $X_{n-1}$ \begin{equation*} \tiny \xymatrix@W=2cm@C=-13mm{ &\Big(\overline{V},\overline{\nabla},\overline{\Fil}\Big)_{-1} \ar[dr]|{\mathrm{Gr}} &&\Big(\overline{V},\overline{\nabla},\overline{\Fil}\Big)_{0} \ar[dr]|{\mathrm{Gr}} &&\cdots \ar[dr]|{\mathrm{Gr}} &&\Big(\overline{V},\overline{\nabla},\overline{\Fil}\Big)_{f-1} \ar[dr]|{\mathrm{Gr}} &&\cdots\\ \left(\overline{E},\overline{\theta}\right)_{-1} \ar[ur]|{\mathcal C^{-1}_{n-1}} &&\left(\overline{E},\overline{\theta}\right)_{0} \ar[ur]|{\mathcal C^{-1}_{n-1}} &&\cdots\ar[ur]|{\mathcal C^{-1}_{n-1}} &&\left(\overline{E},\overline{\theta}\right)_{f-1} \ar[ur]|{\mathcal C^{-1}_{n-1}} &&\left(\overline{E},\overline{\theta}\right)_{f} \ar[ur]|{\mathcal C^{-1}_{n-1}}\\} \end{equation*} where the first $f$ terms of filtered de Rham bundles over $X_{n-1}$ are those appearing in the twisted Fontaine-Faltings module over $X_{n-1}$. Based on this flow over $X_{n-1}$, we extend the diagram similarly to the $n=1$ case, \begin{equation}\tiny \xymatrix@W=2cm@C=-13mm{ (V,\nabla,\Fil)_0\ar[dr]|{Gr} && (V,\nabla,\Fil)_1\ar[dr]|{Gr} && \cdots \ar[dr]|{Gr} && (V,\nabla,\Fil)_{f-1} \ar[dr]|{Gr} && (V,\nabla)_f \ar@/_15pt/[llllllll]|{\varphi_{f-1}}\\ & (E,\theta)_1 \ar[ur]|{C_n^{-1}} &&\cdots \ar[ur]|{C_n^{-1}} && (E,\theta)_{f-1} \ar[ur]|{C_n^{-1}} && (E,\theta)_{f} \ar[ur]|{C_n^{-1}} } \end{equation} Now it is a twisted periodic Higgs-de Rham flow over $X_n$. Denote this flow by \[\mathcal{GR}\big((V_i,\nabla_i,\Fil_i,\varphi_i)_{0\leq i<f}\big).\] It is straightforward to verify $\mathcal{GR}\circ \mathcal{IC}\simeq \mathrm{id}$ and $\mathcal{IC} \circ \mathcal{GR} \simeq \mathrm{id}$. \end{proof} This theorem can be straightforwardly generalized to the logarithmic case, and the proof is similar to that of Theorem~\ref{equiv:TFF&THDF}.
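Before stating the logarithmic variant, it may be helpful to unwind the correspondence in the simplest case $n=1$ and $f=1$. A $1$-periodic $L_1$-twisted Higgs-de Rham flow over $X_1$ is determined by the single filtered de Rham bundle $(V,\nabla,\Fil)_0$ together with the isomorphism $\phi_1\colon(E,\theta)_1\otimes(L_1,0)\rightarrow (E,\theta)_0$, and the functor $\mathcal{IC}$ produces the one-term twisted Fontaine-Faltings module $(V_0,\nabla_0,\Fil_0,\varphi_0)$ with \[\varphi_0=C^{-1}_1(\phi_1)\colon \big(C^{-1}_1\circ\mathrm{Gr}\big)(V,\nabla,\Fil)_0\otimes(L_1^{p},\nabla_{\mathrm{can}})\longrightarrow (V,\nabla)_0.\] For the trivial line bundle $L_1$ this is just the usual relation defining a $1$-periodic (untwisted) Fontaine-Faltings module.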
\begin{thm}\label{equiv:logTFF&THDF} Let $\mathcal X$ be a smooth proper scheme over $W$ with a simple normal crossing divisor $\D\subset \mathcal X$ relative to $W$. Then for each natural number $f\in \mathbb N$, there is an equivalence of categories between $\mathcal{THDF}_{a,f}(X^o_{n+1}/W_{n+1})$ and $\mathcal {TMF}_{[0,a],f}^{\nabla}(X^o_{n+1}/W_{n+1})$ \end{thm} \subsubsection{A sufficient condition for lifting the twisted periodic Higgs-de Rham flow} We suppose that the field $k$ is finite in this section. Let $\mathcal X$ be a smooth proper variety over $W(k)$ and denote $X_n=X\times_{W(k)} W_n(k)$. Let $D_1\subset X_1$ be a $W(k)$-liftable normal crossing divisor over $k$. Let $\D\subset \mathcal X$ be a lifting of $D_1$. \begin{prop}\label{Lifting_PHDF} Let $n$ be an positive integer and let $L_{n+1}$ be a line bundle over $X_{n+1}$. Denote by $L_\ell$ the reduction of $L_{n+1}$ on $X_{\ell}$. Let \[\Big\{(V,\nabla,\Fil)_{n-2}^{(n-1)},(E,\theta)_{n-1}^{(n)},(V,\nabla,\Fil)_{n-1}^{(n)},(E,\theta)_{n}^{(n)},\cdots;\phi^{(n)}_{\bullet}\Big\}_{/X_n\subset X_{n+1}}\] be an $L_{n}$-twisted periodic Higgs-de Rham flow over $X_{n}\subset X_{n+1}$. Suppose \begin{itemize} \item[-] Lifting of the graded Higgs bundle $(E,\theta)_i^{(n)}$ is unobstructed. i.e. there exist a logarithmic graded Higgs bundle $(E,\theta)_i^{(n+1)}$ over $X_{n+1}$, whose reduction on $X_{n}$ is isomorphic to $(E,\theta)_i^{(n)}$. \item[-] Lifting of the Hodge filtration $\Fil_i^{(n)}$ is unobstructed. i.e. for any lifting $(V,\nabla)_i^{(n+1)}$ of $(V,\nabla)_i^{(n)}$ over $X_{n+1}$, there exists a Hodge filtration $\Fil_i^{(n+1)}$ on $(V,\nabla)_i^{(n+1)}$, whose reduction on $X_{n}$ is $\Fil_i^{(n)}$. \end{itemize} Then every twisted periodic Higgs-de Rham flow over $X_{n}$ can be lifted to a twisted periodic Higgs-de Rham flow over $X_{n+1}$. \end{prop} \begin{proof} By assumption, we choose $(E',\theta')_{n}^{(n+1)}$ a lifting of $(E',\theta')_n^{(n)}$. Inductively, for all $i\geq n$, we construct $(V',\nabla',\Fil')_i^{(n+1)}$ and $(E',\theta')_{i+1}^{(n+1)}$ as follows. Denote \[(V',\nabla')_i^{(n+1)}=\mathcal C^{-1}_{n+1}\left((E',\theta')_i^{(n+1)}\right),\] which is a lifting of $(V,\nabla)_i^{(n)}$. By assumption, we choose a lifting $\Fil_{i}^{'(n+1)}$ on $(V',\nabla')_i^{(n+1)}$ of the Hodge filtration $\Fil_i^{(n)}$ and denote \[(E',\theta')_{i+1}^{(n+1)}=\mathrm{Gr}(V',\nabla',\Fil')_{i}^{(n+1)},\] which is a lifting of $(E,\theta)_{i+1}^{(n)}$. From the $\phi$-structure of the Higgs-de Rham flow, for all $m\geq0$ there is an isomorphism \[(E,\theta)_{n}^{(n)}\simeq (E,\theta)_{n+mf}^{(n)}\otimes L_n^{p^{n-1}+p^{n}+\cdots+p^{n+mf-2}}.\] Twisting $(E',\theta')_{n+mf}^{(n+1)}$ with $L_{n+1}^{p^{n-1}+p^{n}+\cdots+p^{n+mf-2}}$, one gets a lifting of $(E,\theta)_{n}^{(n)}$.\\ By deformation theory, the lifting space of $(E,\theta)_{n}^{(n)}$ is a torsor space modeled by $H^1_{Hig}\left(X_1,\mathrm{End}\left((E,\theta)_{n}^{(1)}\right)\right)$. Therefore, the torsor space of lifting $(E,\theta)_{n}^{(n)}$ as a \emph{graded} Higgs bundle should be modeled by a subspace of $H^1_{Hig}$. We give a description of this subspace as follows. For simplicity of notations, we shall replace $(E,\theta)_{n}^{(1)}$ by $(E,\theta)$ in this paragraph. 
The decomposition of $E = \bigoplus_{p+q=n}E^{p,q}$ induces a decomposition of $\mathrm{End}(E)$: \[ (\mathrm{End}(E))^{k,-k} := \bigoplus_{p+q=n} (E^{p,q})^{\vee} \otimes E^{p+k,q-k} \] Furthermore, it also induces a decomposition of the Higgs complex $\mathrm{End}(E,\theta)$. One can prove that the hypercohomology of the following Higgs subcomplex \begin{equation}\label{gr-lifting space} \mathbb{H}^1 ((\mathrm{End}(E))^{0,0} \stackrel{\theta^{\mathrm{End}}}{\longrightarrow} (\mathrm{End}(E))^{-1,1} \otimes \Omega^1 \stackrel{\theta^{\mathrm{End}}}{\longrightarrow} \cdots) \end{equation} gives the subspace corresponding to the lifting space of graded Higgs bundles.\\ Thus, by the finiteness of the torsor space, there are two integers $m>m'\geq 0$ such that \begin{equation}\label{equ_twisting_1} (E',\theta')_{n+mf}^{(n+1)} \otimes L_{n+1}^{p^{n-1}+p^{n}+\cdots+p^{n+mf-2}} \simeq (E',\theta')_{n+m'f}^{(n+1)} \otimes L_{n+1}^{p^{n-1}+p^{n}+\cdots+p^{n+m'f-2}}. \end{equation} By twisting by a suitable power of the line bundle $L_{n+1}$ we may assume $m'=0$. By replacing the period $f$ with $mf$, we may assume $m=1$. For integers $i\in[n,n+f-1]$ we denote \[(E,\theta,V,\nabla,\Fil)_i^{(n+1)}:=(E',\theta',V',\nabla',\Fil')_i^{(n+1)}.\] Then (\ref{equ_twisting_1}) can be rewritten as \begin{equation}\label{equ_twisting_2} \phi_{n+f}^{(n+1)}:(E,\theta)_{n+f}^{(n+1)}\otimes L_{n+1}^{p^{n-1}+p^{n}+\cdots+p^{n+f-2}}\rightarrow (E,\theta)_n^{(n+1)} \end{equation} where $(E,\theta)^{(n+1)}_{n+f}=(E',\theta')^{(n+1)}_{n+f}=\mathrm{Gr}\left((V,\nabla,\Fil)_{n+f-1}^{(n+1)}\right)$. Inductively, for all $i\geq n+f$, we construct $(V,\nabla,\Fil)_i^{(n+1)}$, $(E,\theta)_{i+1}^{(n+1)}$ and $\phi_{i+1}^{(n+1)}$ as follows. Denote \[(V,\nabla)_i^{(n+1)}=\mathcal C^{-1}_{n+1}\left((E,\theta)_i^{(n+1)}\right).\] According to the isomorphism \begin{equation}\label{equ_twisting_3} \mathcal C^{-1}_{n+1}(\phi_{i}^{(n+1)}):(V,\nabla)_i^{(n+1)}\otimes L_{n+1}^{p^{i-f}+p^{i-f+1}+\cdots+p^{i-1}}\rightarrow (V,\nabla)_{i-f}^{(n+1)}, \end{equation} the Hodge filtration $\Fil_{i-f}^{(n+1)}$ on $(V,\nabla)_{i-f}^{(n+1)}$ induces a Hodge filtration $\Fil_{i}^{(n+1)}$ on $(V,\nabla)_{i}^{(n+1)}$. Denote \[(E,\theta)_{i+1}^{(n+1)}=\mathrm{Gr}\left((V,\nabla,\Fil)_{i}^{(n+1)}\right).\] Taking the associated graded objects in equation~(\ref{equ_twisting_3}), one gets a lifting $\phi_{i+1}^{(n+1)}$ of $\phi_{i+1}^{(n)}$, \begin{equation}\label{equ_twisting_4} \phi_{i+1}^{(n+1)}:(E,\theta)_{i+1}^{(n+1)}\otimes L_{n+1}^{p^{i-f}+p^{i-f+1}+\cdots+p^{i-1}}\rightarrow (E,\theta)_{i+1-f}^{(n+1)} \end{equation} and a twisted Higgs-de Rham flow over $X_{n+1}\subset X_{n+2}$ \[\Big\{(V,\nabla,\Fil)_{n-1}^{(n)},(E,\theta)_{n}^{(n+1)},(V,\nabla,\Fil)_{n}^{(n+1)},(E,\theta)_{n+1}^{(n+1)},\cdots;\phi^{(n+1)}_{\bullet}\Big\}_{/X_{n+1}\subset X_{n+2}}\] which lifts the given twisted periodic flow over $X_n\subset X_{n+1}$. \end{proof} \begin{rmk} In the proof, we see that one may need to enlarge the period when lifting the twisted periodic Higgs-de Rham flow. \end{rmk} \subsection{Twisted Higgs-de Rham self-map on moduli schemes of semi-stable Higgs bundles with trivial discriminant}\label{section CTLBSSHBTD} Let $X_1$ be a smooth proper $W_2$-liftable variety over $k$, with $\mathrm{dim} \, X_1=n$. Let $H$ be a polarization of $X_1$. Let $r<p$ be a positive integer and let $(E,\theta)_0$ be a semistable graded Higgs bundle over $X_1$ of rank $r$ with vanishing discriminant (with respect to $H$).
\begin{thm}\label{thm:existence_of_HiggsdeRham} There is a Higgs-de Rham flow of Higgs bundles and de Rham bundles over $X_1$ with initial term $(E,\theta)_0$. \end{thm} The construction of the Higgs-de Rham flow given by Theorem \ref{thm:existence_of_HiggsdeRham} proceeds in two steps. \\ {\bf Step 1.} There exists a Simpson's graded semistable Hodge filtration $\Fil$ (Theorem A.4 in~\cite{LSZ13a} and Theorem $5.12$ in~\cite{Langer14}), which is the coarsest Griffiths transverse filtration on a semi-stable de Rham module such that the associated graded Higgs sheaf is torsion free and still semi-stable. Denote $(V,\nabla)_0:=C^{-1}_1(E_0,\theta_0)$ and by $\Fil_0$ the Simpson's graded semistable Hodge filtration on $(V,\nabla)_0$. Denote $(V,\nabla)_1:=C^{-1}_1(E_1,\theta_1)$ and by $\Fil_1$ the Simpson's graded semistable Hodge filtration on $(V,\nabla)_1$. Repeating this process, we construct a Higgs-de Rham flow of torsion free Higgs sheaves and de Rham sheaves over $X_1$ with initial term $(E,\theta)_0$ \begin{equation}\label{HDF_ass(E,Theta)}\tiny \xymatrix@W=10mm@C=0mm{ &(V,\nabla,\Fil)_{0} \ar[dr]^{\mathrm{Gr}} && \cdots \ar[dr]^{\mathrm{Gr}} && (V,\nabla,\Fil)_{f-1} \ar[dr]^{\mathrm{Gr}} && \cdots \\ (E,\theta)_{0} \ar[ur]^{\mathcal C^{-1}_1} && (E,\theta)_{1}\ar[ur]^{\mathcal C^{-1}_1} && \cdots \ar[ur]^{\mathcal C^{-1}_1} && (E,\theta)_{f}\ar[ur]^{\mathcal C^{-1}_1} \\ } \end{equation} Since the Simpson's graded semistable Hodge filtration is unique, this flow is also uniquely determined by $(E,\theta)_0$.\\ {\bf Step 2.} The Higgs sheaves and de Rham sheaves appearing in the Higgs-de Rham flow are in fact locally free, thanks to the recent paper by A. Langer~\cite{Langer19}: the local freeness follows from Theorem 2.1 and Corollary 2.9 of that paper.\\[.2cm] The purpose of this subsection is to find a canonical choice of the twisting line bundle $L$ such that this Higgs-de Rham flow is twisted preperiodic. Firstly, we want to find a positive integer $f_1$ and a suitable twisting line bundle $L_1$ such that $(E'_{f_1},\theta'_{f_1}):=(E_{f_1},\theta_{f_1})\otimes (L_1,0)$ satisfies the following conditions \begin{equation}\label{chernclass} \begin{split} \text{(i).} & \quad c_1(E'_{f_1}) = c_1(E_0),\\ \text{(ii).} & \quad c_2(E'_{f_1}) \cdot [H]^{n-2} = c_2(E_0) \cdot [H]^{n-2}.\\ \end{split} \end{equation} Under these two conditions, both $(E,\theta)_0$ and $(E',\theta')_{f_1}$ are contained in the moduli scheme $M^{ss}_{Hig}(X_1/k,r,a_1,a_2)$ constructed by Langer in~\cite{Langer04} classifying all semistable Higgs bundles over $X_1$ with some fixed topological invariants (which will be explained later). Following~\cite{Langer04}, we introduce $\mathcal{S}'_{X_1/k}(d;r,a_1,a_2,\mu_{max})$, the family of Higgs bundles $(E,\theta)$ over $X_1$ such that $E$ is of rank $d$, $\mu_{max}(E,\theta) \leq \mu_{max}$, $a_0(E)=r$, $a_1(E)=a_1$ and $a_2(E) \geq a_2$. Here $\mu_{max}(E,\theta)$ is the slope of the maximal destabilizing subsheaf of $(E,\theta)$, and the $a_i(E)$ are defined by \[ \chi (X_{1,\bar{k}},E(m)) = \sum^d_{i=0} a_i(E) {m+d-i \choose d-i}. \] By the results of Langer, the family $\mathcal{S}'_{X_1/k}(d;r,a_1,a_2,\mu_{max})$ is bounded (see Theorem $4.4$ of \cite{Langer04}). So $M^{ss}_{Hig}(X_1/k,r,a_1,a_2)$ is the moduli scheme which corepresents this family. Note that $a_i(E) = \chi(E|_{\bigcap_{j \leq d-i} H_j})$ where $H_1,\dots,H_d \in |\mathcal{O}(H)|$ is an $E$-regular sequence (see \cite{HL}).
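To illustrate why only $c_1(E)$ and $c_2(E)\cdot[H]^{n-2}$ enter (say for $n\geq2$, reading the above formula for $a_i(E)$ with $d=n$, as holds for a locally free sheaf on the $n$-dimensional $X_1$, and assuming the $H_j$ can be chosen so that the complete intersections below are smooth), write $C=H_1\cap\cdots\cap H_{n-1}$ and $S=H_1\cap\cdots\cap H_{n-2}$. Riemann-Roch on the curve $C$ gives \[a_1(E)=\chi(E|_{C})=c_1(E)\cdot[H]^{n-1}+r\,\chi(\mathcal O_{C}),\] and the Hirzebruch-Riemann-Roch theorem on the surface $S$ gives \[a_2(E)=\chi(E|_{S})=\frac{c_1(E|_S)\cdot\big(c_1(E|_S)-K_{S}\big)}{2}-c_2(E)\cdot[H]^{n-2}+r\,\chi(\mathcal O_{S}),\] where $c_1(E|_S)^2=c_1(E)^2\cdot[H]^{n-2}$ and $c_1(E|_S)\cdot K_S$ are determined by $c_1(E)$ and the fixed pair $(X_1,H)$. So $a_1(E)$ and $a_2(E)$ depend on $E$ only through $c_1(E)$ and $c_2(E)\cdot[H]^{n-2}$.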
Using the Hirzebruch-Riemann-Roch theorem, one finds that $a_1(E)$ and $a_2(E)$ are fixed once $c_1(E)$ and $c_2(E) \cdot [H]^{n-2}$ are fixed. \begin{prop}\label{prop:find L1} Assume that the discriminant of $E_0$ with respect to the polarization $H$, $\Delta(E_0):= \big(c_2(E_0)-\frac{r-1}{2r}c_1(E_0)^2\big)\cdot[H]^{n-2}$, equals zero. Let $f_1$ be the minimal positive integer with $r\mid p^{f_1}-1$, and let $L_1=\det(E_0)^{\frac{1-p^{f_1}}{r}}$. Then the two conditions in (\ref{chernclass}) are satisfied. \end{prop} \begin{proof} Since $c_1(C^{-1}_1(E_0,\theta_0)) = pc_1(E_0)$ and $c_1(L_1) = \frac{1-p^{f_1}}{r}\cdot c_1(E_0)$, we have \[c_1(E'_{f_1}) = rc_1(L_1) + c_1\left((Gr\circ C^{-1}_1)^{f_1}(E_0,\theta_0)\right)=\left(r\cdot\frac{1-p^{f_1}}{r} +p^{f_1}\right)c_1(E_0)=c_1(E_0).\] This gives Condition (i). Since the discriminant $\Delta$ is invariant under twisting by line bundles and $\Delta(C^{-1}_1(E_0,\theta_0)) = p^2\Delta(E_0)$, one gets \[\Delta(E'_{f_1}) = \Delta\left((Gr\circ C^{-1}_1)^{f_1}(E_0,\theta_0)\right) = p^{2f_1}\Delta(E_0) =0.\] So we have $c_2(E_0) \cdot [H]^{n-2} = \frac{r-1}{2r}c_1(E_0)^2\cdot [H]^{n-2}$ and $c_2(E'_{f_1}) \cdot [H]^{n-2} = \frac{r-1}{2r}c_1(E'_{f_1})^2\cdot [H]^{n-2}$. Since $c_1(E'_{f_1}) = c_1(E_0)$, we get Condition (ii). \end{proof} \begin{cor-defi}\label{def:selfmap} There is a self-map $\varphi$ on the set of $k$-points of $M^{ss}_{Hig}(X_1/k,r,a_1,a_2)$ defined by the twisted Higgs-de Rham flow, which sends a Higgs bundle $(E,\theta)_0$ to the Higgs bundle $(E,\theta)_{f_1}\otimes (\det(E_0)^\frac{1-p^{f_1}}{r},0)$. Here $f_1$ is the minimal positive integer with $r\mid p^{f_1}-1$. \end{cor-defi} \begin{rmk} In fact one can show that the self-map is a constructible map, i.e.\ there is a stratification of the moduli scheme such that there is a Simpson graded semistable Hodge filtration attached to the universal de Rham bundle restricted to each constructible subset. Taking the associated graded objects and twisting by suitable line bundles, one gets a morphism from each constructible subset to the moduli scheme itself. For a more explicit construction one can see the special case in \autoref{section SMMSHBPMP}. \end{rmk} \begin{prop}\label{Self-map} Suppose that the discriminant of $E_0$ equals zero and that there exists a positive integer $f_2$ with $\varphi^{f_2}(E_0,\theta_0) \simeq (E_0,\theta_0)$. Then the Higgs-de Rham flow~(\ref{HDF_ass(E,Theta)}) is $\det(E_0)^{\frac{1-p^f}{r}}$-twisted $f$-periodic, where $f=f_1f_2$. \end{prop} \begin{proof} Inductively, one shows that \begin{equation} \varphi^{m} (E_0,\theta_0) = (E,\theta)_{mf_1} \otimes (\det(E_0)^\frac{1-p^{mf_1}}{r},0). \end{equation} Since $\varphi^{f_2} (E_0,\theta_0)\simeq (E_0,\theta_0)$, there is an isomorphism of Higgs bundles \[\phi_f:(E_f,\theta_f)\otimes (\det(E_0)^{\frac{1-p^f}{r}},0)\rightarrow (E_0,\theta_0).\] By the formula $\phi_i=\big(\mathrm{Gr}\circ\mathcal C^{-1}_1\big)^{i-f}(\phi_f)$ for all $i\geq f$, we construct the twisted $\phi$-structure. Under this $\phi$-structure the Higgs-de Rham flow is $\det(E_0)^{\frac{1-p^f}{r}}$-twisted $f$-periodic. \end{proof} \begin{thm} \label{Main: preperiod} A semistable Higgs bundle over $X_1$ with trivial discriminant is preperiodic after twisting. Conversely, a twisted preperiodic Higgs bundle is semistable with trivial discriminant. \end{thm} \begin{proof} For a Higgs bundle $(E,\theta)$ in $M^{ss}_{Hig}(X_1/k,r,a_1,a_2)$, we consider the iteration of the self-map $\varphi$.
Since $M^{ss}_{Hig}(X_1/k,r,a_1,a_2)$ is of finite type over $k$ and has only finitely many $k$-points, there must exist a pair of integers $(e,f_2)$ such that $\varphi^e(E,\theta) \cong \varphi^{e+f_2}(E,\theta)$. By Proposition \ref{Self-map}, we know that $(E,\theta)$ is preperiodic after twisting.\\ Conversely, let $(E,\theta)$ be the initial term of a twisted $f$-preperiodic Higgs-de Rham flow. We show that it is semistable. Let $(F,\theta) \subset (E,\theta)$ be a proper Higgs subbundle. Denote by $(F^{(1)}_i,\theta^{(1)}_i)$ and $(E^{(1)}_i,\theta^{(1)}_i)$ the terms appearing in the Higgs-de Rham flows. By the preperiodicity, there exists a line bundle $L$ and an isomorphism $\phi : (E_e,\theta_e) \cong (E_{e+f},\theta_{e+f}) \otimes (L,0)$. Calculating the slope on both sides, one gets $\mu(L)=(1-p^f)\mu(E_e)$. Iterating this isomorphism $\phi$ $m$ times, one gets \[\phi^m : (E_e,\theta_e) \cong (E_{e+mf},\theta_{e+mf}) \otimes (L^{1+p^f+\cdots+p^{(m-1)f}},0).\] So $\left(\phi^m\right)^{-1} \left(F_{e+mf}\otimes L^{1+p^f+\cdots+p^{(m-1)f}} \right)$ forms a subsheaf of $E_e$ of slope \[p^{mf}\mu(F_e)+(1+p^f+\cdots+p^{(m-1)f})\mu(L)=p^{mf}\big(\mu(F_e)-\mu(E_e)\big)+\mu(E_e).\] So $\mu(F_e)\leq \mu(E_e)$ (otherwise, letting $m$ grow, one would obtain subsheaves of $E_e$ with unbounded slopes, which is impossible). So we have \[\mu(F)=\frac{1}{p^e}\mu(F_e)\leq\frac{1}{p^e}\mu(E_e)=\mu(E).\] This shows that $(E,\theta)$ is semistable. That the discriminant equals zero follows from the fact that $\Delta(C^{-1}_1(E,\theta)) = p^2 \Delta(E)$, which together with the twisted preperiodicity forces $p^{2e}\Delta(E)=p^{2(e+f)}\Delta(E)$ and hence $\Delta(E)=0$. \end{proof} \begin{cor}\label{slop sub THDF} Let $(E,\theta)\supset (F,\theta)$ be the initial terms of a twisted periodic Higgs-de Rham flow and a sub twisted periodic Higgs-de Rham flow. Then \[\mu(F)=\mu(E).\] \end{cor} \subsection{Sub-representations and sub periodic Higgs-de Rham flows}\label{section SRSPHdRF} In this section, we assume that $\mathbb{F}_{p^f}$ is contained in $k$. Recall that the functor $\mathbb D^P$ is contravariant and sends quotient objects to subobjects, i.e.\ for any sub twisted Fontaine-Faltings module $N\subset M$ with endomorphism structure, the projective representation $\mathbb D^P(M/N)$ is a projective subrepresentation of $\mathbb D^P(M)$. Conversely, we will show that every projective subrepresentation arises in this way. By the equivalence of the category of twisted Fontaine-Faltings modules and the category of twisted periodic Higgs-de Rham flows, we construct a twisted periodic sub Higgs-de Rham flow for each projective subrepresentation. Let $\mathcal X$ be a smooth proper $W(k)$-variety. Denote by $X_{n}$ the reduction of $\mathcal X$ on $W_n(k)$. Let $\{\mathcal U_i\}_{i\in I}$ be a finite covering by small affine open subsets and choose a geometric point $x$ in $\bigcap\limits_{i\in I} U_{i,\overline{K}}$. \begin{prop}\label{subrep-1} Let $M$ be an object in $\mathcal{TMF} _{[a,b],f} ^{\nabla}(X_{2}/W_{2})$. Suppose we have a projective $\mathbb F_{p^f}$-subrepresentation ${\mathbb V} \subset \mathbb{D}^P(M)$ of $\pi^\text{\'et}_1(X_K)$. Then there exists a subobject $N$ of $M$ such that ${\mathbb V}$ equals $\mathbb{D}^P(M/N)$. \end{prop} \begin{proof} Recall that the functor $\mathbb D^P$ is defined by gluing representations of $\Delta_i=\pi^\text{\'et}_1(U_{i,K},x)$ into a projective representation of $\Delta=\pi^\text{\'et}_1(X_K,x)$. Firstly, we show that the projective subrepresentation $\mathbb V$ actually corresponds to a collection of local subrepresentations.
Secondly, since the Fontaine-Laffaille-Faltings' functor $\mathbb D$ is fully faithful, there exist local Fontaine-Faltings modules corresponding to those subrepresentations. Thirdly, we glue those local Fontaine-Faltings modules into a global twisted Fontaine-Faltings module. For $i\in I$, we choose a trivialization $M_i=M(\tau_i)$ of $M$ on $\mathcal U_i$, which gives a local Fontaine-Faltings module with endomorphism structure on $\mathcal U_i$. By definition of $\mathbb D^P$, those representations $\mathbb{D}_{\mathcal U_i}(M_i)$ of $\Delta_i$ are glued into the projective representation $\mathbb{D}^P(M)$. In other words, we have the following commutative diagram of $\Delta_{ij}=\pi^\text{\'et}_1(U_{i,K}\cap U_{j,K},x)$-sets \begin{equation} \xymatrix@W=14mm@C=0mm@R=5mm{ & \mathbb{D}_{\mathcal U_i}(M_i)/\mathbb F_{p^f}^\times \ar[dd]^{a_{1,r}} \\ \mathbb{D}^P(M)/\mathbb F_{p^f}^\times \ar[ur]\ar[dr] &\\ & \mathbb{D}_{\mathcal U_j}(M_j)/\mathbb F_{p^f}^\times\\ } \end{equation} Here $r$ is the difference of the trivializations of the twisting line bundle on $\mathcal U_i$ and $\mathcal U_j$. And $a_{1,r}$ is the elements given in Lemma~\ref{a_n,r}. Assume that $\mathbb V$ is a projective $\mathbb F_{p^f}$-subrepresentation of $\mathbb{D}^P(M)$ of $\pi^\text{\'et}_1(X_K,x)$, i.e. $\mathbb V/\mathbb F_{p^f}^\times$ is a $\pi^\text{\'et}_1(X_K)$-subset of $\mathbb D^P(M)/\mathbb F_{p^f}^\times$. Then ${\mathbb V}_i$, the image of $\mathbb V$ under the map $\mathbb{D}^P(M) \to \mathbb{D}_{\mathcal U_i}(M_i)$, is a projective $\mathbb F_{p^f}$-subrepresentation of $\mathbb{D}_{\mathcal U_i}(M_i)$. So we have the following commutative diagram of $\Delta_{ij}$-sets \begin{equation}\label{diag:5.2} \xymatrix@W=14mm@C=0mm@R=5mm{ & \mathbb V_i/\mathbb F_{p^f}^\times \ar@{>->}[rr] \ar[dd]^(0.3){a_{1,r}}|(0.5)\hole && \mathbb{D}_{\mathcal U_i}(M_i)/\mathbb F_{p^f}^\times \ar[dd]^(0.3){a_{1,r}} \\ \mathbb V/\mathbb F_{p^f}^\times \ar@{>->}[rr] \ar[ur]\ar[dr] && \mathbb{D}^P(M)/\mathbb F_{p^f}^\times \ar[ur]\ar[dr] &\\ & \mathbb V_j/\mathbb F_{p^f}^\times \ar@{>->}[rr] && \mathbb{D}_{\mathcal U_j}(M_j)/\mathbb F_{p^f}^\times\\ } \end{equation} Notice that $\mathbb{D}_{\mathcal U_i}(M_i) /\mathbb F_{p^f}^\times$ is the projectification of the $\mathbb F_{p^f}$-representation $\mathbb{D}_{\mathcal U_i}(M_i)$ of $\Delta_i$. So $\mathbb V_i\subset \mathbb D_{\mathcal U_i}(M_i)$ is actually a $\mathbb F_{p^f}$-subrepresentation of $\Delta_i$. Since the image of the contravariant functor $\mathbb D_{\mathcal U_i}$ is closed under subobjects, there exists $N_i\subset M_i$ as a sub Fontaine-Faltings module with endomorphism structure of $\mathbb{F}_{p^f}$, such that \[\mathbb V_i=\mathbb D_{\mathcal U_i}(M_i/N_i).\] On the overlap $\mathcal U_i\cap \mathcal U_j$, those two Fontaine-Faltings module $M_i$ and $M_j$ have the same underlying filtered de Rham sheaf. We can twist the $\varphi$-structure of $M_i$ to get $M_j$ by the element $r$. Doing the same twisting on $N_i$, we get a sub-Fontaine-Faltings module $N_i'$ of $M_j$. By the functoriality of $\mathbb{D}$, one has the following commutative diagram \begin{equation} \xymatrix{ \mathbb D(M_i/N_i) \ar@{>->}[r]\ar@{..>}[d]^{\exists} & \mathbb D(M_i) \ar@{->>}[r] \ar[d]^{a_{1,r}} & \mathbb D(N_i) \ar[d]^{a_{1,r}} \\ \mathbb D(M_j/N_i') \ar@{>->}[r] & \mathbb D(M_j) \ar@{->>}[r] & \mathbb D(N_i') \\ } \end{equation} So we have $\mathbb D(M_j/N_i')= a_{1,r} \mathbb D(M_i/N_i)=a_{1,r}\mathbb V_i$. 
On the other hand, one has $\mathbb D(M_j/N_j)=\mathbb V_j=a_{1,r}\mathbb V_i$ by diagram~(\ref{diag:5.2}). Thus one has $\mathbb D(M_j/N_i')=\mathbb D(M_j/N_j)$. Since $\mathbb{D}$ is fully faithful and contravariant, $N_i'=N_j$. In particular, on the overlap $\mathcal U_i\cap \mathcal U_j$ the local Fontaine-Faltings modules $N_i$ and $N_j$ have the same underlying subbundle. By gluing those local subbundles together, we get a subbundle of the underlying bundle $M$. The connection, filtration and the $\varphi$-structure can be restricted locally on this subbundle, so does it globally. And we get the desired sub-Fontaine-Faltings module. \end{proof} Let $\mathscr E$ be a twisted $f$-periodic Higgs-de Rham flow. Denote by $M=\mathcal{IC}(E)$ the Fontaine module with the endomorphism structure corresponding to $\mathscr E$. By the equivalence of the category of twisted Fontaine-Faltings modules and the category of periodic Higgs-de Rham flow, one get the following result. \begin{cor}\label{subHDF} Suppose $\mathbb V\subset \mathbb D^P(M)$ is a non-trivial projective $\mathbb F_{p^f}$-subrepresentation. Then there exists a non-trivial sub-twisted periodic Higgs-de Rham flow of $\mathscr E$ which corresponds to $\mathbb{D}^P(M)/\mathbb{V}$. \end{cor} After Corollary~\ref{subHDF} we arrive at the main theorem 0.6 stated in the introduction. However we prove a weaker form of Theorem 0.6 in the below. The proof of the stronger form will be postponed in the section 5. \begin{thm}\label{Mainthm} Let $k$ be a finite field of characteristic $p$. Let $\mathcal X$ be a smooth proper scheme over $W(k)$ together with a smooth log structure $\D/W(k)$. Assume that there exists a semistable graded logarithmic Higgs bundle $(E,\theta)/(\mathcal X,\D)_1$ with discriminant $\Delta_H(E)=0$, $\mathrm{rank}(E)<p$ and $(\mathrm{rank}(E),\deg_H(E))=1$. Then there exists a positive integer $f$ and an absolutely irreducible projective $\mathbb{F}_{p^f}$-representation $\rho$ of $\pi^\text{\'et}_1(X^o_{K'})$, where $\mathcal X^o=\mathcal X\setminus \D$ and $K'=W(k\cdot\mathbb{F}_{p^f})[1/p]$. \end{thm} \begin{proof} We only show the result for $\D=\emptyset$, as the proof of the general case is similar. By Theorem~\ref{Main: preperiod}, there is a twisted preperiodic Higgs-de Rham flow with initial term $(E,\theta)$. Removing finitely many terms if necessary, we may assume that it is twisted $f$-periodic, for some positive integer $f$. By using Theorem~\ref{equiv:TFF&THDF} and applying functor $\mathbb D^P$, one gets a $\mathrm{PGL}_{\mathrm{rank}(E)}(\mathbb F_{p^f})$-representation $\rho$ of $\pi^\text{\'et}_1(X^o_{K'})$. Since $(\mathrm{rank}(E),\deg_H(E))=1$, the semi-stable bundle $E$ is actually stable. According to Corollary~\ref{slop sub THDF}, there is no non-trivial sub twisted periodic Higgs-de Rham flow. By Corollary~\ref{subHDF}, there is no non-trivial projective subrepresentation of $\rho$, so that $\rho$ is irreducible. \end{proof} \begin{rmk} For simplicity, we only consider results on $X_1$. Actually, all results in this section can be extended to the truncated level. \end{rmk} \section{Constructing crystalline representations of \'etale fundamental groups of $p$-adic curves via Higgs bundles}\label{section CCREFGpCHB} As an application of the main theorem (Theorem~\ref{Mainthm}), we construct irreducible $\mathrm{PGL}_2$ crystalline representations of $\pi^\text{\'et}_1$ of the projective line removing $m$ ($m\geq 4$) marked points. 
Let $M$ be the moduli space of semistable graded Higgs bundles of rank $2$ degree $1$ over $\mathbb{P}^1/W(k)$, with logarithmic Higgs fields which have $m$ poles $\{x_1,x_2, \dots,x_m\}$ (actually stable, since the rank and degree are coprime to each other). The main object of this section is to study the self-map $\varphi$ (Corollary-Definition~\ref{def:selfmap}) on $M$. In section~\ref{section DMS}, we decompose $M$ into connected components. In section~\ref{section SMMSHBPMP}, we show that the self-map is rational and dominant on the component of $M$ with maximal dimension. In section~\ref{section EFSMFMP}, we give the explicit formula in case of $m=4$. \subsection{Connected components of the moduli space $M$}\label{section DMS} First, let's investigate the geometry of $M$. For any $[(E,\theta)] \in M$, $E \cong \mathcal{O}(d_2)\oplus \mathcal{O}(d_1)$ with $d_1 +d_2=1$($d_2<d_1$). And the graded semi-stable Higgs bundle with nilpotent non-zero Higgs field \[\theta:\mathcal{O}(d_1) \longrightarrow \mathcal{O}(d_2) \otimes \Omega^1_{\mathbb{P}^1}(m)\] By the condition $\theta \neq 0$ in $\Hom_{\mathcal{O}_{\mathbb{P}^1}} (\mathcal{O}(d_1),\mathcal{O}(d_2+m-2))$, we have $d_1 \leq d_2 +m-2$. Combining with the assumption $d_1 +d_2=1$($d_2<d_1$), one get $m \geq 3$ and \[(d_1,d_2)=(1,0),(2,-1),\cdots, \text{ or } (\left[\frac{m-1}{2}\right],\left[\frac{4-m}{2}\right]),\] where $\left[\cdot\right]$ is the greatest integer function. Therefore, $M$ admits a decomposition \[ M= \coprod_{(d_2,d_1)} M(d_2,d_1) \] where $M(d_2,d_1)$ is isomorphic to \[\mathbb{P}\left(\mathrm{Hom}_{\mathcal O_{\mathbb{P}^1}}\left(\mathcal O(d_1),\mathcal O(d_2)\otimes \Omega_{\mathbb{P}^1}^1(\log \D) \right)\right)\simeq \mathbb{P} \Big(\mathrm{H}^0(\mathbb{P}^1,\mathcal O(d_2-d_1+m-2))\Big)\] (note that in this case two Higgs bundles are isomorphic if the Higgs fields differ by a scalar). For $m=3,4$, the decomposition is trivial because $(d_2,d_1)=(0,1)$ is the only choice. But for $m \geq 5$, there are more choices. The following table presents the information of $M(d_2,d_1)$: \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c} \hline \diagbox{$(d_1,d_2)$}{$M(d_2,d_1)$}{$m$} &3&4&5&6&7&8&9& $\cdots$ \\ \hline $(1,0)$ & $\mathbb P^0$ & $\mathbb P^1$ & $\mathbb P^2$ &\ $\mathbb P^3$ & $\mathbb P^4$ &$\mathbb P^5$ & $\mathbb P^6$ & $\cdots$\\ \hline $(2,-1)$ &&& $\mathbb P^0$ & $\mathbb P^1$ & $\mathbb P^2$ & $\mathbb P^3$ & $\mathbb P^4$ & $\cdots$ \\ \hline $(3,-2)$ &&&&& $\mathbb P^0$ & $\mathbb P^1$ & $\mathbb P^2$ & $\cdots$ \\ \hline $\vdots$ &&&&&&& $\ddots$ & $\ddots$ \\ \end{tabular} \end{center} \subsection{Self-maps on moduli spaces of Higgs bundles on $\mathbb P^1$ with marked points}\label{section SMMSHBPMP} Let $p$ be an odd prime number. Since the rank $r=2$ for any element in $M$, by Corollary-Definition~\ref{def:selfmap} we know that $f_1=1$ and $L_1=\mathcal O_{\mathbb{P}^1}(\frac{1-p}{2})$. In other words, the self-map is given by \[\varphi: (E,\theta)\mapsto \left(\mathrm{Gr}\circ C^{-1}_1(E,\theta)\right)\otimes \mathcal O_{\mathbb{P}^1}(\frac{1-p}{2}),\] where the filtration on $C^{-1}_1(E,\theta)$ is the Simpson's graded semistable Hodge filtration. Let's denote $(V, \nabla)=C^{-1}_1(E,\theta)$, which is a rank $2$ degree $p$ stable de Rham bundle over $\mathbb{P}^1$. Using Grothendieck's theorem, one gets $V \cong \mathcal{O}(l_1) \oplus \mathcal{O}(l_2)$ with $l_1 + l_2 = p$ (assume $l_1 <l_2$). 
In this case, the Simpson's graded semistable Hodge filtration is just the natural filtration $(\mathcal{O}(l_2) \subset V)$. Since $(V,\nabla)$ is stable, $\mathcal{O}(l_2)$ cannot be $\nabla$-invariant, which means the Higgs field \[ \mathrm{Gr}\, \nabla : \mathcal{O}(l_2) \longrightarrow \mathcal{O}(l_1) \otimes \Omega^1_{\mathbb{P}^1}(m) \cong \mathcal{O}(l_1+m-2) \] is nontrivial. Thus, $l_2 \leq l_1+m-2$. Combining this with the fact $l_1 + l_2 = p$ and $l_1<l_2$, one gets \[(l_1,l_2) = ({\frac{p-1}{2}},{\frac{p+1}{2}}),({\frac{p-3}{2}},{\frac{p+3}{2}}),\cdots, \text{ or } (\left[\frac{p-m+3}{2}\right],\left[\frac{p+m-2}{2}\right]).\] For $m \geq 5$, the jumping phenomenon appears, i.e.\ there exists $[(E,\theta)] \in M(d_2,d_1)$ such that the type of $\left(\mathrm{Gr}\circ C^{-1}_{1}(E,\theta)\right)\otimes \mathcal O(\frac{1-p}{2})$ is different from $(d_2,d_1)$. Next we shall characterize the jumping locus on $M(d_2,d_1)$. Define a $\mathbb Z$-valued function $l$ on $M(d_2,d_1)$: for each $[(E,\theta)] \in M(d_2,d_1)$, set $l([(E,\theta)])=l([\theta]) := l_2$. \begin{lem} The function $l$ on $M(d_2,d_1)$ is upper semicontinuous. \end{lem} \begin{proof} Define $\mathcal{U}_n := \{ [\theta] \in \mathbb{P} H^0(\mathcal{O}(d_2-d_1+m-2))\, |\, l([\theta]) \leq n\}$. One only needs to prove that $\mathcal{U}_n$ is Zariski open in $\mathbb{P}^{d_2-d_1+m-2}$ for all $n \in \mathbb Z$. Recalling the proof of Grothendieck's theorem, for $(V_{\theta},\nabla) := C^{-1}_{1,2}(\mathcal{O}(d_2) \oplus \mathcal{O}(d_1),\theta)$ one defines \[ m_0 := \mathrm{min} \{\lambda \in \mathbb Z \, | \, H^0(\mathbb{P}^1 , V_{\theta}(\lambda)) \neq 0\} \] and gets the splitting $V_{\theta} \cong \mathcal{O}(-m_0) \oplus \mathcal{O}(p+m_0)$ ($p+m_0 \leq -m_0$). Therefore, $l([\theta])=-m_0$. Since $[\theta] \in \mathcal{U}_n$, one gets $-m_0 \leq n$. But this means $-n-1 < -n \leq m_0$. Thus $H^0(\mathbb{P}^1 , V_{\theta}(-n-1))=0$.\\ By the semicontinuity of the rank of the direct image sheaf, we know that $H^0(\mathbb{P}^1 , V_{\theta'}(-n-1))=0$ for $\theta'$ in a neighborhood of $\theta$. This means $l([\theta']) \leq n$ in that neighborhood. Therefore, $\mathcal{U}_n$ is Zariski open for each $n \in \mathbb Z$. \end{proof} \subsubsection*{Construction of the universal Simpson graded semistable Hodge filtration and the rational self-map} Now we consider the first component $M(1,0)$ of the moduli scheme and the universal Higgs bundle $(E^u,\theta^u)$ on $\mathbb{P}^1 \times \mathcal{U}_{\frac{p+1}{2}}$: \[ E^u= (\mathcal{O}_{\mathbb{P}^1} \oplus \mathcal{O}_{\mathbb{P}^1}(1)) \otimes \mathcal{O}_{\mathbb{P}^1 \times \mathcal{U}_{\frac{p+1}{2}}}, \,\,\,\theta^u|_{\mathbb{P}^1 \times \{x\}}=\theta_x \in \mathrm{Hom}_{\mathcal O_{\mathbb{P}^1}}\left(\mathcal O(1),\mathcal O \otimes \Omega_{\mathbb{P}^1}^1(\log \D) \right) \] for $x \in \mathcal{U}_{\frac{p+1}{2}}$. Applying the inverse Cartier functor, we get the universal de Rham bundle $(V^u,\nabla^u)$ on $\mathbb{P}^1 \times M_{dR}({\frac{p-1}{2}},{\frac{p+1}{2}})$. Here $M_{dR}({\frac{p-1}{2}},{\frac{p+1}{2}})$ is the corresponding component of the moduli space of semistable de Rham bundles of rank $2$ and degree $p$, i.e.\ of those $[(V,\nabla)]$ with $V \cong \mathcal{O}({\frac{p-1}{2}})\oplus \mathcal{O}({\frac{p+1}{2}})$. For each $s \in M_{dR}({\frac{p-1}{2}},{\frac{p+1}{2}})$, we know that $\mathcal{O}({\frac{p+1}{2}}) \hookrightarrow V_s$ gives a Hodge filtration.
In order to find a Hodge filtration on $(V^u,\nabla^u)$, we shall construct a subsheaf ${\mathcal F} \subset V^u$ such that ${\mathcal F}_s \cong \mathcal{O}({\frac{p+1}{2}})$. We have the following diagram \[ \xymatrix{ & \mathbb{P}^1 \times M_{dR}({\frac{p-1}{2}},{\frac{p+1}{2}}) \ar[ld]_{p_1} \ar[rd]^{p_2} & \\ \mathbb{P}^1 & & M_{dR}({\frac{p-1}{2}},{\frac{p+1}{2}}) } \] Define ${\mathcal L}:= {p_2}_*({p_1}^*\mathcal{O}_{\mathbb{P}^1}(-\frac{p+1}{2}) \otimes V^u)$. For each $s \in M_{dR}({\frac{p-1}{2}},{\frac{p+1}{2}})$, ${\mathcal L}_s = H^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(-\frac{p+1}{2}) \otimes V^u_s)$. By the definition of $\mathcal{U}_{\frac{p+1}{2}}$ and $M_{dR}({\frac{p-1}{2}},{\frac{p+1}{2}})$, we know that $V^u_s \cong \mathcal{O}({\frac{p-1}{2}})\oplus \mathcal{O}({\frac{p+1}{2}})$. Thus ${\mathcal L}$ is a line bundle on $M_{dR}({\frac{p-1}{2}},{\frac{p+1}{2}})$ by Grauert's theorem (see {\cite[Corollary~12.9]{hartshorne}}). Now we define ${\mathcal F}:= {p_1}^*\mathcal{O}({\frac{p+1}{2}}) \otimes {p_2}^*{\mathcal L}$ as a line bundle on $\mathbb{P}^1 \times M_{dR}({\frac{p-1}{2}},{\frac{p+1}{2}})$.\\ Then there is a canonical nonzero morphism from ${\mathcal F}$ to $V^u$: \[ {\mathcal F}= {p_1}^*\mathcal{O}({\frac{p+1}{2}}) \otimes {p_2}^*{p_2}_*({p_1}^*\mathcal{O}(-\frac{p+1}{2}) \otimes V^u) \stackrel{\neq0}{\longrightarrow} {p_1}^*\mathcal{O}({\frac{p+1}{2}}) \otimes {p_1}^*\mathcal{O}(-\frac{p+1}{2}) \otimes V^u \cong V^u. \] Thus the image $\text{Im}({\mathcal F} \to V^u)$ is a sub line bundle of $V^u$ on a Zariski dense open subset $W$ of $M_{dR}({\frac{p-1}{2}},{\frac{p+1}{2}})$, which gives the Hodge filtration of $V^u$ on $W$.\\[.2cm] By the discussion above, $U:=C(W)$ is a Zariski open set of $M(1,0)$, where $C$ is the morphism induced by the Cartier functor. All Higgs bundles $(E,\theta)$ in $U$ will be sent back to $M(1,0)$ by applying the inverse Cartier transform, taking the quotient of ${\mathcal F}_{[C^{-1}(E,\theta)]} \cong \mathcal{O}(\frac{p+1}{2})$ and tensoring with $\mathcal O(\frac{1-p}{2})$. This process actually gives us a functor, which we denote as $\mathrm{Gr_{{\frac{p+1}{2}}}}\circ C_1^{-1}(\cdot)\otimes \mathcal O(\frac{1-p}{2})$.\\ Then we want to represent this functor as a rational self-map on the moduli scheme $M(1,0)$. \begin{lem}\label{lem:rational map} The functor $\mathrm{Gr_{{\frac{p+1}{2}}}}\circ C_1^{-1}(\cdot)\otimes \mathcal O(\frac{1-p}{2})$ induces a rational map \[ \varphi: M(1,0) \dashrightarrow M(1,0). \] \end{lem} \begin{proof} Let $\underline{M}(1,0)$ denote the moduli functor of semistable graded Higgs bundles of type $(1,0)$ (see \autoref{section DMS} for details), which is represented by the scheme $M(1,0)$. And $\underline{U}$ denotes the subfunctor corresponding to $U$.\\ Note that the functor $\mathrm{Gr_{{\frac{p+1}{2}}}}\circ C_1^{-1}(\cdot)\otimes \mathcal O(\frac{1-p}{2})$ gives a natural transform between these two moduli functors $\underline{U}$ and $\underline{M}(1,0)$. Since $\underline{M}(1,0)$ is represented by $M(1,0)$, one gets the following diagram \[ \xymatrix{ \underline{U} \ar[d] \ar@{-->}[dr] & \\ \underline{M}(1,0) \ar[r] & \Hom_k (\cdot, M(1,0)) } \] By the universal property of the coarse moduli scheme, one get a natural transform \[ \Hom_k (\cdot , U) \longrightarrow \Hom_k (\cdot ,M(1,0)) \] Take $Id \in \Hom_k (U,U)$, the natural transform will give the $k$-morphism \[ U \longrightarrow M(1,0) \] One can easily check that this map is induced by the self-map. 
\end{proof} \begin{rmk} We only deal with the first stratum $\mathcal{U}_{\frac{p+1}{2}}$ here. Actually the argument above can be applied to each stratum $\mathcal{U}_{k+1}\setminus\mathcal{U}_k$, for $k= {\frac{p+1}{2}}, {\frac{p+3}{2}}, {\frac{p+5}{2}}, \cdots$. The restriction of the self-map to each stratum is a rational map from $\mathcal{U}_{k+1}\setminus\mathcal{U}_k$ to $M(k-{\frac{p-1}{2}}, {\frac{p+1}{2}}-k)$. Therefore, the self-map is a constructible map. \end{rmk} Now we want to prove: \begin{lem} \label{dominant} The rational map $\varphi$ is dominant. \end{lem} \begin{proof} We prove this lemma by induction on the number $m$ of the marked points. For $m=3$, the lemma trivially holds since $M$ is just a point. Now suppose the statement is true for the case of $m-1$ marked points. We want to prove that $\varphi$ is dominant for the case of $m$ marked points. Set $Z := \overline{\mathrm{Im}(\varphi)} \subset M(1,0)$; we want to prove $Z = M(1,0)$. Suppose $Z$ is a proper subscheme of $M(1,0) \cong \mathbb{P}^{m-3}$. Then $\mathrm{dim} \, Z \leq m-4$. Denote by $M(\hat{x_i})$ the moduli space of semistable graded Higgs bundles of rank $2$ and degree $1$ over $\mathbb{P}^1$, with nilpotent logarithmic Higgs fields which have the $m-1$ poles $\{x_1, \dots,\hat{x_i},\dots, x_m\}$. Then one can define a natural embedding $M(\hat{x_i}) \hookrightarrow M$ by forgetting the marked point $x_i$. Therefore, \[ \bigcup_i \varphi(M(\hat{x_i};1,0)) \subset Z \] where $M(\hat{x_i};1,0)$ is the component of $M(\hat{x_i})$ with maximal dimension. Then we know that $M(\hat{x_i};1,0) \cong \mathbb{P}^{m-4}$. So $\mathrm{dim} \, Z = m-4$ by the induction hypothesis that $\varphi$ is dominant in the case of $m-1$ marked points, and $Z$ has more than one irreducible component. But this is impossible, since $Z$ is the Zariski closure of the image of the irreducible variety $M(1,0) \cong \mathbb{P}^{m-3}$ under $\varphi$, hence irreducible. \end{proof} Now we can state and prove the main result of this section: \begin{thm} \label{Periodic} The set of periodic points of $\varphi$ is Zariski dense in $M(1,0)$. Combining this with Proposition \ref{Self-map}, one gets infinitely many irreducible crystalline projective representations of the fundamental group. \end{thm} To prove this we need a theorem of Hrushovski: \begin{thm}[Hrushovski~\cite{Hru}, see also Theorem $3.7$ in \cite{ES}] \label{Hrushovski} Let $Y$ be an affine variety over $\mathbb{F}_q$, and let $\Gamma \subset (Y \times_{\mathbb{F}_q}Y) \otimes_{\mathbb{F}_q} \bar{\mathbb{F}}_q$ be an irreducible subvariety over $\bar{\mathbb{F}}_q$. Assume the two projections $\Gamma \to Y$ are dominant. Then, for any closed subvariety $W \subsetneq Y$, there exists $x \in Y(\bar{\mathbb{F}}_q)$ such that $(x, x^{q^m}) \in \Gamma$ and $x \notin W$, for some large enough natural number $m$. \end{thm} \begin{proof}[Proof of Theorem \ref{Periodic}] For each nonempty Zariski open subset $\mathcal U \subset M(1,0)$, we need to find a periodic point $x$ of $\varphi$ such that $x \in \mathcal U$. We take $Y$ to be an affine open subset of $M(1,0)$. And $\Gamma$ is the intersection of $\Gamma_{\varphi}\otimes_{\mathbb{F}_q} \bar{\mathbb{F}}_q$ and $(Y \times_{\mathbb{F}_q}Y) \otimes_{\mathbb{F}_q} \bar{\mathbb{F}}_q$, where $\Gamma_\varphi$ denotes the closure of the graph of $\varphi$. $W$ is defined to be the union of $(M(1,0) \setminus \mathcal U) \cap Y$ and the indeterminacy locus of $\varphi$. By Lemma \ref{dominant}, the projections $\Gamma \to Y$ are dominant. So we can apply Theorem \ref{Hrushovski} and find a point $x \in Y(\bar{\mathbb{F}}_q)$ such that $(x, x^{q^m}) \in \Gamma$ and $x \notin W$ for some $m$.
Therefore, $x \in \mathcal U$, $\varphi$ is well-defined at $x$ and $\varphi(x) = x^{q^m}$ ($Y \subset \mathbb{A}^r$ , so $x$ can be written as $(x_1,\dots x_r) \in \mathbb{A}^r(\bar{\mathbb{F}}_q)$ and $x^{q^m} := (x^{q^m}_1,\dots x^{q^m}_r)$). The rational map $\varphi$ is well-defined at $x$ means that $\varphi$ is also well-defined at $x^{q^N}$ for any $N \in \mathbb{N}$. Then we have \[ \varphi (\varphi(x)) = \varphi (x^{q^m}) = \varphi(x)^{q^m}=x^{q^{2m}} \] Thus $\varphi^N(x) = x^{q^{Nm}}=x$ for $N$ large enough. That means, $x$ is a periodic point of $\varphi$. \end{proof} \subsection{An explicit formula of the self-map in the case of four marked points.}\label{section EFSMFMP} \def\varphi_{\lambda,p}{\varphi_{\lambda,p}} In this section, we given an explicit formula of the self-map in case of $m=4$ marked point. Using M\"obius transformation on $\mathbb P^1$, we may assume these $4$ points are of form $\{0,1,\infty,\lambda\}$. By section~\ref{section DMS}, the moduli space $M$ is connected and isomorphic to $\mathbb{P}^1$, where the isomorphism is given by sending $(E,\theta)$ to the zero locus $(\theta)_0\in \mathbb P^1$. To emphasize the dependence of the self-map on $\lambda$ and $p$, we rewrite the self-map by $\varphi_{\lambda,p}$. By calculation, details are given in the appendix section~\ref{Calculate_Selfmap}, we get \begin{equation} \varphi_{\lambda,p}(z)=\frac{z^p}{\lambda^{p-1}}\cdot\left( \frac{f_\lambda(z^p)}{g_\lambda(z^p)}\right)^2, \end{equation} where $f_\lambda(z^p)$ is the determinant of matrix \begin{equation*}\small \left(\begin{array}{ccccc} \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{2}}{2}& \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{3}}{3} &\cdots& \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{(p+1)/2}}{(p+1)/2} \\ \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{3}}{3} & \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{4}}{4} &\cdots& \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{(p+3)/2}}{(p+3)/2} \\ \vdots&\vdots&\ddots&\vdots\\ \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{(p+1)/2}}{(p+1)/2} & \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{(p+3)/2}}{(p+3)/2} &\cdots& \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{p-1}}{p-1} \\ \end{array} \right) \end{equation*} and $g_\lambda(z^p)$ is the determinant of matrix \begin{equation*}\small \left(\begin{array}{ccccc} \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^1}{1}& \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{2}}{2} &\cdots& \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{(p-1)/2}}{(p-1)/2} \\ \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{2}}{2} & \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{3}}{3} &\cdots& \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{(p+1)/2}}{(p+1)/2} \\ \vdots&\vdots&\ddots&\vdots\\ \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{(p-1)/2}}{(p-1)/2} & \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{(p+1)/2}}{(p+1)/2} &\cdots& \frac{\lambda^p(1-z^p)-(\lambda^p-z^p)\lambda^{p-2}}{p-2} \\ \end{array} \right). 
\end{equation*} By calculation, for $p=3$ one has \[\varphi_{\lambda,3}(z)= z^3 \left(\frac{z^3+\lambda(\lambda+1)}{(\lambda+1)z^3+\lambda^2}\right)^2 \] and $\varphi_{\lambda,3}(z)=z^{3^2}$ if and only if $\lambda=-1$; for $p=5$, one has \[\varphi_{\lambda,5}(z)= z^5\left(\frac{z^{10}-\lambda(\lambda+1)(\lambda^2-\lambda+1)z^5+\lambda^4(\lambda^2-\lambda+1)}{(\lambda^2-\lambda+1)z^{10}-\lambda^2(\lambda+1)(\lambda^2-\lambda+1)z^5+\lambda^6 }\right)^2,\] and $\varphi_{\lambda,5}(z)=z^{5^2}$ if and only if $\lambda$ is a $6$-th primitive root of unit; for $p=7$ one has \begin{equation*} \begin{split} \varphi_{\lambda,7}(z)=& z^7\left( \begin{array}{l} z^{21}+2\lambda(\lambda+1)(\lambda^2+\lambda+1)(\lambda^2+3\lambda+1)z^{14}\\ \ + \lambda^4(\lambda+1)^2(\lambda^2+\lambda+1)(\lambda^2+1)z^{7} + \lambda^9(\lambda+1)(\lambda^2+\lambda+1)\\\hline (\lambda+1)(\lambda^2+\lambda+1)z^{21} +\lambda^2(\lambda+1)^2(\lambda^2+\lambda+1)(\lambda^2+1) z^{14}\\ \ + 2\lambda^6(\lambda+1)(\lambda^2+\lambda+1)(\lambda^2+3\lambda+1)z^{7} +\lambda^{12}\\ \end{array} \right)^2 \end{split} \end{equation*} and $\varphi_{\lambda,7}(z)= z^{7^2}$ if and only if $(\lambda+1)(\lambda^2+\lambda+1)=0$. We regard $\varphi_{\lambda,p}$ as a self-map on $\mathbb{P}^1$, which is rational and of degree $p^2\neq 1$. Thus it has $p^2+1$ fixed $\overline{k}$-points counting with multiplicity. Suppose the conjecture~\ref{conj-1} holds, then the multiplicity of each fixed point equals to $1$. Let $(E,\theta)/\mathbb{P}_{k'}^1$ be a fixed point of $\varphi_{\lambda,p}$ defined over some extension field $k'$ of $k$. Then in the language of Higgs-de Rham flow, $(E,\theta)$ is the initial term of a twisted $1$-periodic Higgs-de Rham flow over $\mathbb{P}^1_{k'}$. {\subsection {Lifting of twisted periodic logarithmic Higgs-de Rham flow on the projective line with marked points and strong irreducibility} Here we just consider $1$-periodic case, for the higher-periodic case the treatment is similar. First of all, Higgs bundles considered here are given by logarithmic 1-forms on the punctured projective line vanishing at one point. They lift unobstructed to $W_n(k)$. Secondly, the obstruction group of lifting Hodge filtration in this case is $H^1({\mathbb P}^1_k, {\mathcal O}(-1))=0$. Hence those two conditions required in Proposition~\ref{Lifting_PHDF} hold true and one lifts $(E,\theta)$ to a twist periodic Higgs bundle over ${\mathbb P}^1_{W_2}$. Recall the proof of Proposition~\ref{Lifting_PHDF}, one constructs a self-map on the torsor space of all liftings of $(E,\theta)$, and the fixed points of this self-map correspond to those liftings of the twisted $1$-periodic Higgs-de Rham flow. Fix a point $x_0$ in the torsor space, we identify the torsor space (\ref{gr-lifting space}) with $k$. Let $x$ be any point in the torsor space, denote $z=x-x_0\in k$, by Corollary~\ref{InvCar_Torsor_map} and Proposition~\ref{prop:torsor_Grading}, there exists an element $a\in k$ such that $az^p=\mathrm{Gr}\circ C^{-1}(x)-\mathrm{Gr}\circ C^{-1}(x_0)$. Denote $b=\mathrm{Gr}\circ C^{-1}(x_0)-x_0\in k$, then the self-map on this torsor space is of form \[z\mapsto az^p+b,\] where $a,b\in k$. \textbf{Case 1: $a=0$.} Then $z=b$ is the unique fixed point of the self-map. In other words, there is a unique twisted periodic lifting of the given twisted $1$-periodic Higgs-de Rham flow over ${\mathbb P}^1_{W_2(k)}$. \textbf{Case 2: $a\neq 0$.} Let $z_0\in \overline{k}$ be a solution of $z=az^p+b$. 
Then $\Sigma=\{i\cdot a^{-\frac{1}{p-1}}+z_0 \mid i\in {\mathbb F}_p\}$ is the set of all solutions. If $a\neq 0$ is not a $(p-1)$-th power of any element in $k^\times$, then $\#(\Sigma\cap k)\leq 1$. In other words there is at most one twisted $1$-periodic lifting over ${\mathbb P}^1_{W_2(k)}$ of the given twisted $1$-periodic Higgs-de Rham flow. If $a\neq 0$ is a $(p-1)$-th power of some element in $k^\times$, then $\#(\Sigma\cap k)=0$ or $p$. In other words, if the twisted $1$-periodic Higgs-de Rham flow is liftable then there are exactly $p$ liftings over ${\mathbb P}^1_{W_2(k)}$. If we consider the lifting problem over an extension $k'$ of $k$, which contains $\Sigma$, then there are exactly $p$ liftings of the twisted $1$-periodic Higgs-de Rham flow over ${\mathbb P}^1_{W_2(k')}$. Repeating the same argument for lifting over truncated Witt ring of higher order we lift twisted periodic Higgs-de Rham flows over $W(\bar {{\mathbb F}}_p)$. We prove \begin{thm}\label{lifting-thm} Any periodic Higgs bundle in $ M(1,0)_{\mathbf{F}_q}$ lifts to a periodic Higgs bundle in $M(1,0)_{\mathbb{Z}_p^\text{ur}}.$ \end{thm} We recall the notion of strong irreducibility of representations, which is introduced in \cite{LSYZ14}, Proposition 1.4. Let $\rho: \pi^\text{\'et}_1({\mathcal X}^0_K)\to \mathrm{PGL_r}(\mathbb{Z}_p^{ur})$ be a representation and $\bar{\rho}$ be the restriction of $\rho$ to the geometric fundamental group $\pi^\text{\'et}_1({\mathcal X}^0_{\bar K}).$ We say $\rho$ is strongly irreducible if for any surjective and generically finite logarithmic morphism $$f: \mathcal Y_{\bar K}\to {\mathcal X}_{\bar K},$$ the pull-back representation $f^*(\bar \rho)$ is irreducible.\\[.2cm] \begin{prop}\label{strong_irred} Let $$\rho :\pi^\text{\'et}_1(({\mathbb P}^1\setminus \mathcal D)_{\mathbb Q_p^{ur}}) \to \mathrm{PGL_2}(\mathbb Z_p^{ur})$$ be a representation corresponding to a lifted twisted periodic logarithmic Higgs bundle $(E,\theta)=(\mathcal O(1)\oplus \mathcal O, \theta)$ over $({\mathbb P}^1, \mathcal D)_{\mathbb Z_p^{ur}}$. Then $\rho$ is strongly irreducible. \end{prop} \begin{proof} Denote ${\mathcal Y}={\mathbb P}^1={\mathcal X}$. We take the double cover of ${\mathbb P}^1$ \[\sigma:{\mathcal Y} \to {\mathcal X}\] defined by $z\mapsto z^2$, which is ramified on $\{0,\infty\} \subset \mathcal D$. Taking the logarithmic structure $\mathcal D':=\sigma^*(\mathcal D)$ on ${\mathcal Y}$, then $\sigma$ is a logarithmic \'etale morphism with respect to the logarithmic structures $\mathcal D$ and $\mathcal D'$. The logarithmic inverse Cartier transforms on both logarithmic curves are compatible with respect to $\sigma^*$. We may choose compatible local Frobenius liftings on both logarithmic curves. Let ${\mathcal U}$ be a small affine open subset of ${\mathcal X}$ and denote ${\mathcal V}=\sigma^{-1}{\mathcal U}$. According to the following commutative diagram of logarithmic schemes \begin{equation} \xymatrix@C=2cm{(V_1,{\mathcal D}'\mid_{V_1})\ar[d]_{\text{close embedding}} \ar[r]^{\Phi_{V_1}} & ({\mathcal V},{\mathcal D}'\mid_{{\mathcal V}}) \ar[d]^{\sigma}\\ ({\mathcal V},{\mathcal D}'\mid_{{\mathcal V}}) \ar@{..>}[ur]^{\exists \Phi_V} \ar[r]_{\Phi_{\mathcal U}\circ\sigma} & ({\mathcal U},{\mathcal D}\mid_{{\mathcal U}})\\} \end{equation} and the logarithmic \'etaleness of $\sigma$, the Proposition~3.12 in~\cite{Kato88} implies that there exists a Frobenius lifting $\Phi_{{\mathcal V}}$ on ${\mathcal V}$ fitting into the commutative diagram. 
Since the inverse Cartier transforms is constructed by using the pullback via local Frobenius liftings, the local inverse Cartier transforms \footnote{The original inverse Cartier transform is defined by Ogus and Vologodsky~\cite{OgVo07} for characteristic $p$. And lately, it was generalized to the truncated version by Lan-Sheng-Zuo~\cite{LSZ13a} and to the logarithmic version by Schepler~\cite{Sch08} and Lan-Sheng-Yang-Zuo~\cite{LSYZ14}. Here we need a truncated logarithmic version. In this case, the inverse Cartier transform is defined in the same manner as in~\cite{LSZ13a} except that there are some restrictions on the choices of local liftings of the Frobenius map. The existence of such kind liftings is given by proposition~9.7 in~\cite{EV-92}. It is routine to give an explicit definition. We left it to the readers.} on both logarithmic curves are compatible with respect to $\sigma^*$. After the gluing process, one gets global compatibility. According to the compatibility of logarithmic inverse Cartier transforms on both logarithmic curves, the periodicity is preserved by the pull-back $\sigma^*$, i.e. if $(E,\theta)^{(1)}$ is a logarithmic $\mathcal O(1)^{\otimes{ p-1\over 2}}$-twisted periodic Higgs bundle over $({\mathcal X}, {\mathcal D})_1$, then $\sigma^*(E,\theta)^{(1)}\otimes {\mathcal O}(1)^{-1}$ is a logarithmic periodic Higgs bundle over $({\mathcal Y}, {\mathcal D}')_1$ and $\sigma^*\theta\not=0$. Furthermore if $(E,\theta)^{(l)}$ is a lifting of $(E,\theta)^{(1)}$ to $({\mathcal X}, {\mathcal D})_l$ as a twisted periodic Higgs bundle, then $\sigma^*(E,\theta)^{(l)}\otimes {\mathcal O}(1)^{-1}$ is a lifting of $\sigma^*(E,\theta)^{(1)}\otimes{\mathcal O}(1)^{-1}$ to $({\mathcal X},{\mathcal D})_\ell$ as a periodic Higgs bundle. In this way we show that the projective representation $\sigma^*\rho$ lifts to a $\mathrm{GL_2}$-crystalline representation $$\rho':\pi^\text{\'et}_1(({\mathcal Y}\setminus{\mathcal D}')_{\mathbb Q_p^{ur}})\to \mathrm{GL_2}((\mathbb Z_p^{ur}))$$ corresponding to the Higgs bundle $\sigma^*((E,\theta))\otimes {\mathcal O}(-1)=:(E,\theta)'$ over $({\mathcal Y}, {\mathcal D}')_{\bZ_p^{ur}}$ of the form $$({\mathcal O}(1)\oplus {\mathcal O}(-1),\sigma^*\theta_{\not=0}: {\mathcal O}(1)\to {\mathcal O}(-1)\otimes \Omega^1_{{\mathcal Y}}(\log {\mathcal D}')).$$ By the same argument used in the proof of Proposition 1.4 in \cite{LSYZ14}, we are going to show that $\rho'$ is strongly irreducible. Hence $\rho$ is strongly irreducible. Let $f: {\mathcal Z}\to {\mathcal Y}$ be a surjective logarithmic morphism between logarithmic curves. By the example in page 861 of \cite{Fal05}, one can see that the generalized representation associated to $(E,\theta)'_{\mathbb C_p}:=(E,\theta)' \otimes\mathbb C_p$ is compatible with $\bar \rho'$ by tensoring with $\mathbb C_p$. We can find a finite extension field $K'$ of $\mathbb Q_p^{ur}$ with its integral ring $\mathcal O_{K'}$, such that $ {\mathcal Z}_{\bar K}$ has an integral model ${\mathcal Z}_{\mathcal O_{K'}}$ over $ \mathcal O_{K'}$ and with toroidal singularity. By the construction of the correspondence (\cite{Fal05}, Theorem 6), the twisted pullback of the graded Higgs bundle $f^{\circ}(E,\theta)'_{\mathbb C_p}$ corresponds to the pullback representation of $\bar \rho'\otimes \mathbb C_p$ to $\pi^\text{\'et}_1({\mathcal Z}^o_{\mathbb C_p})$. 
By the construction of the twisted pullback, one has a short exact sequence $$ 0\to (f^*\mathcal O(-1), 0)_{\mathbb C_p}\to f^\circ(E,\theta)'_{\mathbb C_p}\to (f^*\mathcal O(1),0)_{\mathbb C_p}\to 0,$$ and the Higgs field of $f^\circ(E,\theta)'_{\mathbb C_p}$ is nonzero. Assume, by way of contradiction, that $f^*\bar \rho'\otimes \mathbb C_p$ is not irreducible. Then it contains a one-dimensional $\mathbb C_p$-subrepresentation. By the last paragraph on page 860 of \cite{Fal05}, it follows that $f^\circ(E,\theta)'_{\mathbb C_p}$ contains a rank-$1$ Higgs subbundle $(N,0)$ with $\deg N=0.$ Since the Higgs field of $f^\circ(E,\theta)'_{\mathbb C_p}$ is nonzero, $(f^*\mathcal O(-1), 0)_{\mathbb C_p}$ in the above short exact sequence is the unique rank-$1$ Higgs subbundle. Hence one obtains a nonzero map $(N,0)\to (f^*\mathcal O(-1), 0)_{\mathbb C_p}.$ But this is impossible, since $\deg N>\deg f^*\mathcal O(-1).$ This completes the proof. \end{proof} \begin{rmk} The inverse Cartier functor over truncated Witt rings was defined by Lan-Sheng-Zuo~\cite{LSZ13a}. \end{rmk} \subsection{Examples of dynamics of Higgs-de Rham flow on $\mathbb{P}^1$ with four marked points} In the following, we give some examples in the case $k=\mathbb{F}_{3^4}$. For any $\lambda\in k\setminus\{0,1\}$, the map $\varphi_{\lambda,3}$ is a $k$-morphism from $\mathbb{P}^1_k$ to itself. So it restricts to a self-map of the set of $k$-points \[\varphi_{\lambda,3}:k\cup \{\infty\}\rightarrow k\cup \{\infty\}.\] Since $\alpha=\sqrt{1+\sqrt{-1}}$ is a generator of $k=\mathbb{F}_{3^4}$ over $\mathbb{F}_3$, every element of $k$ can be uniquely expressed in the form $a_3\alpha^3+a_2\alpha^2+a_1\alpha+a_0$, where $a_3,a_2,a_1,a_0\in\{0,1,2\}$. We use the integer $27a_3+9a_2+3a_1+a_0\in [0,80]$ to stand for the element $a_3\alpha^3+a_2\alpha^2+a_1\alpha+a_0$. By identifying the set $k\cup\{\infty\}$ with $\{0,1,2,\cdots,80,\infty\}$ in this way, we get, for each $\lambda\in k\setminus\{0,1\}$, a self-map \[\varphi_{\lambda,3}:\{0,1,2,\cdots,80,\infty\}\rightarrow \{0,1,2,\cdots,80,\infty\}.\] In the following diagrams, an arrow \textcircled{$\beta$} $\rightarrow$ \textcircled{$\gamma$} means $\gamma=\varphi_{\lambda,3}(\beta)$, and a loop of length $m$ stands for a twisted $m$-periodic Higgs-de Rham flow, which corresponds to a $\mathrm{PGL}_2(\mathbb{F}_{3^m})$-representation by Theorem~\ref{equiv:logTFF&THDF} and Theorem~\ref{Mainthm}. $\bullet$ For $\lambda=2\sqrt{1+\sqrt{-1}}$, we have \begin{center} \begin{tikzpicture} [L1Node/.style={circle,draw=black!50, very thick, minimum size=7mm}] \node[L1Node] (n1) at (-2, 1){$21$}; \draw [thick,->](-1.62,0.8) -- (-0.4,0.2); \node[L1Node] (n1) at (-2.2, 0){$43$}; \draw [thick,->](-1.8,0) -- (-0.45,0); \node[L1Node] (n1) at (-2, -1){$54$}; \draw [thick,->](-1.62,-0.8) -- (-0.4,-0.2); \node[L1Node] (n1) at (0, 0){$27$}; \draw [thick,->](0.4,0) -- (1.6,0); \node[L1Node] (n1) at (2, 0){$~6\,$}; \draw [thick,->](2.3,0.3) .. controls (4,2) and (4,-2) ..
(2.35,-0.35); \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture} [L1Node/.style={circle,draw=black!50, very thick, minimum size=7mm}] \node[L1Node] (n1) at (-2, 2.5){$34$}; \node[L1Node] (n1) at (-2.2, 1.5){$61$}; \node[L1Node] (n1) at (-2, 0.5){$62$}; \node[L1Node] (n1) at (0, 1.5){$15$}; \node[L1Node] (n1) at (-2, -2.5){$38$}; \draw [thick,->](-1.62,-2.3) -- (-0.4,-1.7); \node[L1Node] (n1) at (-2.2, -1.5){$47$}; \draw [thick,->](-1.8,-1.5) -- (-0.45,-1.5); \node[L1Node] (n1) at (-2, -0.5){$25$}; \node[L1Node] (n1) at (0, -1.5){$35$}; \node[L1Node] (n1) at (2, 0){$65$}; \draw [thick,->](-1.62,-0.7) -- (-0.4,-1.3); \draw [thick,->](-1.8,1.5) -- (-0.45,1.5); \draw [thick,->](-1.62,2.3) -- (-0.4,1.7); \draw [thick,->](-1.62,0.7) -- (-0.4,1.3); \draw [thick,->](0.38,1.25) -- (1.6,0.25); \draw [thick,->](0.38,-1.25) -- (1.6,-0.25); \draw [thick,->](2.3,0.3) .. controls (4,2) and (4,-2) .. (2.35,-0.35); \end{tikzpicture} \end{center} The $1$-length loops $\xymatrix{\textcircled{\tiny{6}}\ar@(ur,dr)}$ \quad and $\xymatrix{\textcircled{\tiny{65}}\ar@(ur,dr)}$ \quad in the diagrams above correspond to projective representations of form \[\pi^\text{\'et}_1\left(\mathbb{P}_{W(\mathbb{F}_{3^4})[1/3]}^1\setminus\left\{0,1,\infty,2\sqrt{1+\sqrt{-1}}\right\}\right)\longrightarrow \mathrm{PGL}_2(\mathbb{F}_3),\] here $W(\mathbb{F}_{3^4})[1/3]$ is the unique unramified extension of $\mathbb Q_3$ of degree $4$. $\bullet$ For $\lambda=\sqrt{-1}$, we have \begin{center} \begin{tikzpicture} [L1Node/.style={circle,draw=black!50, very thick, minimum size=7mm}] \node[L1Node] (n1) at (0, 1){$47$}; \draw [thick,->](0.38,0.8) -- (1.6,0.2); \node[L1Node] (n1) at (0, -1){$60$}; \draw [thick,->](0.38,-0.8) -- (1.6,-0.2); \node[L1Node] (n1) at (2, 0){$31$}; \draw [thick,->](2.3,0.3) .. controls (3,1) and (4,1) .. (4.65,0.35); \node[L1Node] (n1) at (7, 1){$35$}; \draw [thick,->](6.62,0.8) -- (5.4,0.2); \node[L1Node] (n1) at (7, -1){$57$}; \draw [thick,->](6.62,-0.8) -- (5.4,-0.2); \node[L1Node] (n1) at (5, 0){$15$}; \draw [thick,->](4.7,-0.3) .. controls (4,-1) and (3,-1) .. 
(2.35,-0.35); \end{tikzpicture} \end{center} The $2$-length loop $\xymatrix{\textcircled{\tiny{31}}\ar@/^/[r] &\textcircled{\tiny{15}}\ar@/^/[l]}$ corresponds to a projective representation of form \[\pi^\text{\'et}_1\left(\mathbb{P}_{W(\mathbb{F}_{3^4})[1/3]}^1\setminus\left\{0,1,\infty,\sqrt{-1}\right\}\right)\longrightarrow \mathrm{PGL}_2(\mathbb{F}_{3^2}).\] We also have diagram \begin{center} \begin{tikzpicture} [L1Node/.style={circle,draw=black!50, very thick, minimum size=7mm}] \node[L1Node] (n1) at (-1,-2.4){$21$}; \draw [thick,->](-0.6,-2.4) -- (0.5,-2.4); \node[L1Node] (n1) at (1,-2.4){$64$}; \draw [thick,->](1.3,-2.1) -- (2.05,-1.35); \node[L1Node] (n1) at (2.4,-1){$48$}; \draw [thick,->](2.4,-0.6) -- (2.4,0.5); \node[L1Node] (n1) at (2.4,1){$53$}; \draw [thick,->](2.1,1.3) -- (1.35,2.05); \node[L1Node] (n1) at (1,2.4){$24$}; \draw [thick,->](0.6,2.4) -- (-0.5,2.4); \node[L1Node] (n1) at (-1,2.4){$37$}; \draw [thick,->](-1.3,2.1) -- (-2.05,1.35); \node[L1Node] (n1) at (-2.4,1){$78$}; \draw [thick,->](-2.4,0.6) -- (-2.4,-0.5); \node[L1Node] (n1) at (-2.4,-1){$77$}; \draw [thick,->](-2.1,-1.3) -- (-1.35,-2.05); \end{tikzpicture} \end{center} which is an $8$-length loop and corresponds to a projective representation of form \[\pi^\text{\'et}_1\left(\mathbb{P}_{W(\mathbb{F}_{3^8})[1/3]}^1\setminus\left\{0,1,\infty,\sqrt{-1}\right\}\right)\longrightarrow \mathrm{PGL}_2(\mathbb{F}_{3^8}).\] $\bullet$ For $\lambda=2+\sqrt{1+\sqrt{-1}}$, one has \begin{center} \begin{tikzpicture} [L1Node/.style={circle,draw=black!50, very thick, minimum size=7mm}] \node[L1Node] (n1) at (-1,3.4){$33$}; \draw [thick,->](-0.78,3.03) -- (-0.24,2.1); \node[L1Node] (n1) at (1,3.4){$34$}; \draw [thick,->](0.78,3.03) -- (0.24,2.1); \node[L1Node] (n1) at (0,1.7){$32$}; \draw [thick,->](0.22,1.33) -- (0.76,0.4); \node[L1Node] (n1) at (-3,0){$65$}; \draw [thick,->](-2.57,0) -- (-1.45,0); \node[L1Node] (n1) at (-1,0){$35$}; \draw [thick,->](-0.78,0.37) -- (-0.24,1.3); \node[L1Node] (n1) at (1,0){$59$}; \draw [thick,->](0.57,0) -- (-0.55,0); \node[L1Node] (n1) at (3,0){$60$}; \draw [thick,->](2.57,0) -- (1.45,0); \node[L1Node] (n1) at (-2,-1.7){$74$}; \draw [thick,->](-1.78,-1.33) -- (-1.24,-0.4); \node[L1Node] (n1) at (2,-1.7){$61$}; \draw [thick,->](1.78,-1.33) -- (1.24,-0.4); \end{tikzpicture} \end{center} and the $3$-length loop in this diagram corresponds to a projective representation of form \[\pi^\text{\'et}_1\left(\mathbb{P}_{W(\mathbb{F}_{3^{12}})[1/3]}^1\setminus\left\{0,1,\infty,2+\sqrt{1+\sqrt{-1}}\right\}\right)\longrightarrow \mathrm{PGL}_2(\mathbb{F}_{3^3}).\] We also have \begin{center} \begin{tikzpicture} [L1Node/.style={circle,draw=black!50, very thick, minimum size=7mm}] \node[L1Node] (n1) at (1.5,1.5){$15$}; \draw [thick,->](-1.07,-1.5) -- (1.05,-1.5); \node[L1Node] (n1) at (-1.5,1.5){$58$}; \draw [thick,->](1.5,-1.07) -- (1.5,1.05); \node[L1Node] (n1) at (1.5,-1.5){$38$}; \draw [thick,->](1.07,1.5) -- (-1.05,1.5); \node[L1Node] (n1) at (-1.5,-1.5){$31$}; \draw [thick,->](-1.5,1.07) -- (-1.5,-1.05); \end{tikzpicture} \end{center} which is a $4$-length loop and corresponds to a projective representation of form \[\pi^\text{\'et}_1\left(\mathbb{P}_{W(\mathbb{F}_{3^4})[1/3]}^1\setminus\left\{0,1,\infty,2+\sqrt{1+\sqrt{-1}}\right\}\right)\longrightarrow \mathrm{PGL}_2(\mathbb{F}_{3^4}).\] \subsection{Question on periodic Higgs bundles and torsion points on the associated elliptic curve.} For $\mathbb{P}^1_{W(\mathbb{F}_q)}$ with 4 marked points $\{0,\,1,\,\infty,\,\lambda\}$ we denote the associated 
elliptic curve by $\mathcal{C}_\lambda$, namely the double cover $\pi: \mathcal{C}_\lambda\to \mathbb{P}^1$ ramified on $\{0,\,1,\,\infty,\,\lambda\}$, and we write $[p]: \mathcal{C}_\lambda \to \mathcal{C}_\lambda$ for the multiplication-by-$p$ map. \begin{conj}\label{conj-1} The following diagram commutes: \[ \xymatrix{ \mathcal{C}_\lambda \ar[r]^{[p]} \ar[d]^{\pi} & \mathcal{C}_\lambda \ar[d]^{\pi} \\ \mathbb{P}^1 \ar[r]^{\varphi_{\lambda,p}} & \mathbb{P}^1 \\ } \] where $\varphi_{\lambda,p}$ is the self-map induced by the Higgs-de Rham flow. \end{conj} Conjecture \ref{conj-1} has been checked to be true for all primes $p\leq 50.$\\[.2cm] \begin{cor} Assume Conjecture \ref{conj-1} holds. Then a Higgs bundle $(E,\theta)\in M(1,0)_{\mathbb{F}_q}$ is periodic if and only if the zero $(\theta)_0=\pi(x)$, where $x$ is a torsion point in $ \mathcal{C}_\lambda$ of order coprime to $p$. \end{cor} In Theorem \ref{lifting-thm} we have shown that any periodic Higgs bundle in $M(1,0)_{\mathbb{F}_q}$ lifts to a periodic Higgs bundle over $\mathbb{Z}_p^\text{ur}$. In fact, there are infinitely many liftings if $\varphi_{\lambda, p}\not= z^{p^2}.$ \\[.2cm] \begin{conj}\label{conj-2} A periodic Higgs bundle $(E,\theta)$ in $M(1,0)_{\mathbb{F}_q}$ lifts to a periodic Higgs bundle $(\mathcal{E}, \mathcal{\theta})$ in $M(1,0)_{W(\mathbb{F}_{q'})}$ if and only if $(\mathcal{\theta})_0=\pi(x)$, where $x$ is a torsion point in $ \mathcal{C}_\lambda$ of order coprime to $p$. \end{conj} \subsection{Question on $\ell$-adic representations and $\ell$-to-$p$ companions.} Kontsevich has observed a relation between the set of isomorphism classes of $\text{GL}_2(\bar{\mathbb Q}_\ell)$-local systems over $\mathbb{P}^1\setminus \{0,\,1,\infty,\lambda\}$ over $\mathbb{F}_q$ and the set of rational points on $C_\lambda$ over $\mathbb{F}_q$ (see {\cite[section 0.1]{kontsevich}}) via the work of Drinfeld on the Langlands program over function fields. It looks quite mysterious, as the elliptic curve appears in the $p$-adic as well as in the $\ell$-adic case. There should exist a relation between periodic Higgs bundles in the $p$-adic world and Hecke eigenforms in the $\ell$-adic world via Abe's solution of Deligne's conjecture on $\ell$-to-$p$ companions. To make the analogy, we first lift the $\mathrm{PGL}_2$-representations to $\mathrm{GL}_2$-representations.\\ In this paragraph, we keep the notation of the proof of Proposition~\ref{strong_irred}. Now we want to descend the $\mathrm{GL}_2$-representation \[\rho':\pi^\text{\'et}_1(({\mathcal Y}\setminus{\mathcal D}')_{{\mathbb Q}_{q'}})\to \mathrm{GL_2}(\bZ_q)\] to a $\mathrm{GL}_2$-representation of $\pi^\text{\'et}_1(({\mathcal X}\setminus{\mathcal D})_{{\mathbb Q}_{q'}})$ whose projectivization is just the projective representation $\rho$. There is a natural action of the deck transformation group $G=\mathrm{Gal}({\mathcal Y}/{\mathcal X})$ on ${\mathcal O}_{\mathcal Y}=\sigma^*{\mathcal O}_{\mathcal X}$, with ${\mathcal O}_{\mathcal X}={\mathcal O}_{\mathcal Y}^G$. Since $0$ and $\infty$ are fixed by $G$, the action of $G$ on ${\mathcal O}_{\mathcal Y}$ can be extended to ${\mathcal O}_{\mathcal Y}((0))$ and ${\mathcal O}_{\mathcal Y}((\infty))$. On the other hand, both ${\mathcal O}_{\mathcal Y}((0))$ and ${\mathcal O}_{\mathcal Y}((\infty))$ are isomorphic to ${\mathcal O}_{\mathcal Y}(1)$. We can thus endow the periodic Higgs bundle $\sigma^*(E,\theta)\otimes{\mathcal O}_{\mathcal Y}(1)^{-1}$ with two actions of $G$.
This is equivalent to endowing the Higgs bundle $(E,\theta)$ with a parabolic structure\footnote{ Recall that to give a Higgs bundle with a parabolic structure is equivalent to giving a Higgs bundle over some Galois covering with an action of the deck transformation group. In our case, we can view $\sigma^*(E,\theta)\otimes{\mathcal O}_{\mathcal Y}(1)^{-1}$ with a $G$-action as the Higgs bundle $(E,\theta)$ with a parabolic structure. }. We extend the actions to the periodic graded Higgs-de Rham flow with initial term $\sigma^*(E,\theta)\otimes{\mathcal O}_{\mathcal Y}(1)^{-1}$. Via Faltings' functor $\mathbb D$, the actions of $G$ on the flow induce actions of $G$ on the sections of the associated locally constant sheaf on $({\mathcal Y}\setminus{\mathcal D}')_{\mathbb Q_{q'}}$. The $G$-invariant sections then form a locally constant sheaf on $({\mathcal X}\setminus{\mathcal D})_{\mathbb Q_{q'}}$. This gives us a $\mathrm{GL_2}(\mathbb Z_q)$-representation of $\pi^\text{\'et}_1(({\mathcal X}\setminus{\mathcal D})_{\mathbb Q_{q'}})$. For example, if we choose the $G$-action on ${\mathcal O}_{\mathcal Y}(1)$ via the isomorphism ${\mathcal O}_{\mathcal Y}(1)\simeq {\mathcal O}_{\mathcal Y}((\infty))$, then one can lift the $\text{PGL}_2( \mathbb{Z}_q )$-representation to \[\rho: \pi^\text{\'et}_1(X\setminus \{0,\,1,\infty,\lambda\})\to \text{GL}_2(\mathbb{Z}_q)\] such that the local monodromies around $\{0,\,1,\,\lambda\}$ are unipotent and the local monodromy around $\infty$ is quasi-unipotent with eigenvalue $-1$.\\[.1cm] Let $(V,\nabla, Fil^\bullet,\Phi)$ be the Fontaine-Faltings module corresponding to $\rho$. Forgetting the Hodge filtration on $V$, one obtains a logarithmic $F$-isocrystal $(V,\nabla, \Phi)/{(\mathbb{P}^1,\{0,\,1,\,\lambda,\infty\})}_{\mathbb{Q}_{q'}}$, which should correspond to an $\ell$-adic representation $\rho_{\ell} :\pi^\text{\'et}_1(\mathbb{P}^1_{\mathbb {F}_{q'}}\setminus \{0,\,1,\,\lambda,\infty\} )\to \text{GL}_2( \bar{\mathbb{Q}}_\ell)$ by applying Abe's solution of Deligne's conjecture on $\ell$-to-$p$ companions (see {\cite[Theorem 4.4.1]{Abe-13}} or {\cite[Theorem 7.4.1]{Drinfeld-18}}). However, in order to apply Abe's theorem one has to check that the determinant of the $F$-isocrystal $(V,\nabla, \Phi_p)/(\mathbb{P}^1,\{0,\,1,\,\lambda,\infty\})$ is of finite order (note that the category of $F$-isocrystals is a tensor category, and that $\text{det}(V,\nabla, \Phi_p)$ being of finite order just means that some tensor power of it becomes the trivial $F$-isocrystal $\mathcal{O}_{\mathbb{P}^1}$). \begin{conj}\label{conj-3} There exists an element $ u\in \mathbb{Z}_{q'}^*$ such that $(\det V,\det\nabla, u\det \Phi)$ corresponds to a finite character of $ \pi^\text{\'et}_1(\mathbb{P}^1_{\mathbb{Q}_{q'}}\setminus \{0,\,1,\infty,\lambda\}).$ \end{conj} \subsection{Projective $F$-unit crystals on smooth projective curves}\label{proj F-units} Let $\mathcal X$ be a smooth proper scheme over $W(k)$. In~\cite{LSZ13a} an equivalence between the category of $f$-periodic vector bundles $(E,0)$ of rank $r$ over $X_n$ (i.e.\ $(E,0)$ initiates an $f$-periodic Higgs-de Rham flow with zero Higgs fields in all Higgs terms) and the category of $\mathrm{GL}_r(W_n(\mathbb{F}_{p^f}))$-representations of $\pi^\text{\'et}_1(X_1)$ has been established. This result generalizes Katz's original theorem, which treats the case where $\mathcal X$ is an affine variety.
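To fix ideas, we record the simplest instance of this correspondence (rank $r=1$ and $n=1$), stated informally and only for orientation. An $f$-periodic line bundle $(L,0)$ over $X_1$ amounts to a line bundle together with an isomorphism \[ \bigl(F_{X_1}^*\bigr)^{f}L\;\simeq\; L,\qquad\text{equivalently}\qquad L^{\otimes(p^f-1)}\;\simeq\;{\mathcal O}_{X_1}, \] and the corresponding representation is a character $\pi^\text{\'et}_1(X_1)\to \mathbb{F}_{p^f}^\times=\mathrm{GL}_1(\mathbb{F}_{p^f})$, cut out by the finite \'etale covering of $X_1$ which trivializes $L$ together with this isomorphism; this is exactly the rank-one case of Lange-Stuhler's theorem cited below.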
As an application of our main theorem, we show the following. \begin{thm} The functor $\mathbb D^P$ is faithful from the category of rank-$r$ twisted $f$-periodic vector bundles $(E,0)$ over $X_n$ to the category of projective $W_n(\mathbb{F}_{p^f})$-representations of $\pi^\text{\'et}_1(X_{1,k'})$ of rank $r$, where $k'$ is the minimal extension of $k$ containing $\mathbb{F}_{p^f}$. \end{thm} \begin{rmk} For $n=1$ the above theorem is just a projective version of Lange-Stuhler's theorem. \end{rmk} \begin{thm}[lifting twisted periodic vector bundles] Let $(E,0)/X_1$ be a twisted $f$-periodic vector bundle. Assume $H^2(X_1, End(E))=0$. Then for any $n\in \mathbb N$ there exists some positive integer $f_n$ with $f\mid f_n$ such that $(E,0)$ lifts to a twisted $f_n$-periodic vector bundle over $X_n$. \end{thm} Translating the above theorem into the language of representations, we obtain: \begin{thm}[lifting projective representations of $\pi^\text{\'et}_1(X_1)$] Let $\rho$ be a projective $\mathbb{F}_{p^f}$-representation of $\pi^\text{\'et}_1(X_1)$ and assume $H^2(X_1, End(\rho))=0$. Then for any $n\in\mathbb N$ there exists a positive integer $f_n$ divisible by $f$ such that $\rho$ lifts to a projective $W_n(\mathbb{F}_{p^{f_n}})$-representation of $\pi^\text{\'et}_1(X_{1,k'})$, where $k'$ is the minimal extension of $k$ containing $\mathbb{F}_{p^{f_n}}$. \end{thm} Assume $\mathcal X$ is a smooth proper curve over $W(k)$. De Jong and Osserman (see Appendix A in~\cite{Osserman}) have shown that the subset of periodic vector bundles over $X_{1,\overline{k}}$ is Zariski dense in the moduli space of semistable vector bundles over $X_1$ (Laszlo and Pauly have also studied some special cases, see~\cite{LP}). Hence by Lange-Stuhler's theorem (see~\cite{LangeStuhe}) every periodic vector bundle corresponds to a $(P)\mathrm{GL}_r(\mathbb{F}_{p^f})$-representation of $\pi^\text{\'et}_1(X_{1, k'})$, where $f$ is the period and $k'$ is a field of definition of the periodic vector bundle containing $\mathbb{F}_{p^f}$. \begin{cor}Every $(\mathrm{P})\mathrm{GL}_r(\mathbb{F}_{p^f})$-representation of $\pi^\text{\'et}_1(X_{1,k'})$ lifts to a $\mathrm{(P)GL}_r(W_n(\mathbb{F}_{p^{f_n}}))$-representation of $\pi^\text{\'et}_1(X_{1,k''})$ for some positive integer $f_n$ divisible by $f$, where $k''$ is a field of definition of the periodic vector bundle containing $\mathbb{F}_{p^{f_n}}$. \end{cor} \begin{rmk} It should be very interesting to compare this result with Deninger-Werner's theorem (see~\cite{DW}). Let $\mathscr E$ be a vector bundle over $\mathcal X$; we view it as a graded Higgs bundle with trivial filtration and trivial Higgs field. Suppose it is preperiodic over $X_1$. Then it has strongly semistable reduction of degree zero. According to Deninger-Werner's theorem, this vector bundle induces a $\mathrm{GL}_r(\mathbb C_p)$-representation of $\pi^\text{\'et}_1(X_{\bar K})$. \end{rmk} \section{Base change of twisted Fontaine-Faltings modules and twisted Higgs-de Rham flows over very ramified valuation rings} Let $k$ be a finite field of characteristic $p$ containing $\mathbb F_{p^f}$. Denote $K_0=W(k)[\frac1p]$. Let $\mathcal X$ be a smooth proper scheme over $W(k)$ together with a smooth log structure $\D/W(k)$. For any finite extension $K$ of $K_0$, denote $X_K^o=(\mathcal X\times_{W(k)} \Spec K)\setminus (\D\times_{W(k)} \Spec K)$. Recall that Theorem~\ref{Mainthm} guarantees the existence of non-trivial representations of the \'etale fundamental group in terms of the existence of semistable graded Higgs bundles.
Then the $\mathrm{PGL}_r(\mathbb{F}_{p^f})$-crystalline representation of $\pi^\text{\'et}_1(X_{K_0}^o)$ corresponding to the stable Higgs bundle should have a stronger property: its restriction to the geometric fundamental group $\pi^\text{\'et}_1((\mathcal{X} \setminus \mathcal{D})_{\bar{\mathbb{Q}}_p})$ is absolutely irreducible.\\ We outline the proof as follows. Fixing a $K_0$-point in $X_{K_0}$, one can pull back the representation $\rho$ to a representation of the Galois group, whose image is finite. This finite quotient gives us a field extension $K/K_0$ such that the restriction of $\rho$ to $\mathrm{Gal}(\bar{K_0}/K)$ is trivial. That means $\rho(\pi^\text{\'et}_1(X_{K}^o))=\rho(\pi^\text{\'et}_1(X^o_{\overline{K}_0}))$. So it suffices to prove the irreducibility of $\rho$ on $\pi^\text{\'et}_1(X_{K}^o)$, which gives us the chance to apply the method of twisted periodic Higgs-de Rham flows as before. But the field extension $K/K_0$ is usually ramified, so we have to extend the constructions of the previous sections to the very ramified case. Note that this section is deeply inspired by Faltings' work~\cite{Fal99}. \subsection{Notations in the case of $\Spec \,k$.} In what follows, $k$ will always be a perfect field of characteristic $p>0$. Let $\pi$ be a root of an Eisenstein polynomial \[f(T)=T^e+\sum_{i=0}^{e-1}a_i T^i\] of degree $e$ over the Witt ring $W=W(k)$. Denote $K_0=\Frac(W)=W[\frac1p]$ and $K=K_0[\pi]$, which is a totally ramified extension of $K_0$ of degree $e$. Denote by $W_\pi=W[\pi]$ the ring of integers of $K$, which is a complete discrete valuation ring with maximal ideal $\pi W_\pi$ and residue field $W_\pi/\pi W_\pi=k$. Denote by $W[[T]]$ the ring of formal power series over $W$. Then \[W_\pi=W[[T]]/fW[[T]].\] The PD-hull ${\mathcal B}_{W_\pi}$ of $W_\pi$ is the PD-completion of the ring obtained by adjoining to $W[[T]]$ the divided powers $\frac{f^n}{n!}$. More precisely, \[{\mathcal B}_{W_\pi}=\left\{\left.\sum_{n=0}^{\infty}a_nT^n\in K_0[[T]] \right| a_n[n/e]!\in W \text{ and }a_n[n/e]!\rightarrow 0\right\}.\] A decreasing filtration is defined on ${\mathcal B}_{W_\pi}$ by the rule that $F^q({\mathcal B}_{W_\pi})$ is the closure of the ideal generated by the divided powers $\frac{f^n}{n!}$ with $n\geq q$. Note that the ring ${\mathcal B}_{W_\pi}$ only depends on the degree $e$, while this filtration depends on $W_\pi$ and $e$. One has \[{\mathcal B}_{W_\pi}/\Fil^1{\mathcal B}_{W_\pi}\simeq W_\pi. \] There is a unique continuous homomorphism of $W$-algebras ${\mathcal B}_{W_\pi}\rightarrow B^+(W_\pi)$ which sends $T$ to $[\underline{\pi}]$. Here $\underline{\pi} = (\pi,\pi^{\frac{1}{p}},\pi^{\frac{1}{p^2}},\dots) \in \varprojlim \bar{R}$. We denote \[{\widetilde{\mB}}_{W_\pi}={\mathcal B}_{W_\pi}[\frac{f}{p}],\] which is a subring of $K_0[[T]]$. The ideal $(\frac{f}{p})$ induces a decreasing filtration $\Fil^\cdot {\widetilde{\mB}}_{W_\pi}$ such that \[{\widetilde{\mB}}_{W_\pi}/\Fil^1{\widetilde{\mB}}_{W_\pi}\simeq W_\pi. \] The Frobenius endomorphism on $W$ can be extended to an endomorphism $\varphi$ of $K_0[[T]]$ with $\varphi(T)=T^p$. Since $\varphi(f)$ is divisible by $p$, we have $\varphi({\widetilde{\mB}}_{W_\pi})\subset {\mathcal B}_{W_\pi}$.
Thus one gets two restrictions \[\varphi:{\widetilde{\mB}}_{W_\pi}\rightarrow {\mathcal B}_{W_\pi} \text{ and } \varphi:{\mathcal B}_{W_\pi}\rightarrow {\mathcal B}_{W_\pi} .\] Note that the ideal of ${\mathcal B}_{W_\pi}$, generated by $\Fil^1{\mathcal B}_{W_\pi}$ and $T$, is stable under $\varphi$. Then we have the following commutative diagram \begin{equation}\label{diag:FrobLift} \xymatrix{ {\mathcal B}_{W_\pi}\ar[d]^{\varphi} \ar@{->>}[r] &{\mathcal B}_{W_\pi}/(\Fil^1{\mathcal B}_{W_\pi},T)=k \ar[d]^{(\cdot)^p}\\ {\mathcal B}_{W_\pi} \ar@{->>}[r] &{\mathcal B}_{W_\pi}/(\Fil^1{\mathcal B}_{W_\pi},T)=k \\ }. \end{equation} \subsection{Base change in the small affine case.} For a smooth and small $W$-algebra $R$ (which means there exists an \'etale map \[W[T_1^{\pm1},T_2^{\pm1},\cdots, T_{d}^{\pm1}]\rightarrow R,\] see \cite{Fal89}), Lan-Sheng-Zuo constructed categories $\mathrm{MIC}(R/pR)$, $\tMIC(R/pR)$, $\MCF(R/pR)$ and $\MF(R/pR)$. A Fontaine-Faltings module over $R/pR$ is an object $(V,\nabla,\Fil)$ in $\MCF(R/pR)$ together with an isomorphism $\varphi: \widetilde{(V,\nabla,\Fil)}\otimes_{\Phi}\hR\rightarrow (V,\nabla)$ in $\mathrm{MIC}(R/pR)$, where $\widetilde{(\cdot)}$ is the Faltings' tilde functor. We generalize those categories over the $W_\pi$-algebra $R_\pi=R\otimes_W{W_\pi}$. In general, there does not exist Frobenius lifting on the p-adic completion of $\hR_\pi$. We lift the absolute Frobenius map on $R_\pi/\pi R_\pi$ to a map $\Phi:\mBR\rightarrow \mBR$ \begin{equation}\label{diag:FrobLift} \xymatrix{ \mBR \ar[d]^{\Phi} \ar@{->>}[r] & R_\pi/\pi R_\pi=R/pR \ar[d]^{(\cdot)^p}\\ \mBR \ar@{->>}[r] & R_\pi/\pi R_\pi=R/pR\\ } \end{equation} where $\mBR$ is the $p$-adic completion of ${\mathcal B}_{W_\pi}\otimes_W R$. This lifting is compatible with $\varphi:{\mathcal B}_{W_\pi}\rightarrow {\mathcal B}_{W_\pi}$. Denote ${\widetilde{\mB}_{R_\pi}}=\mBR[\frac{f}{p}]$. Then $\Phi$ can be extended to \[\Phi:{\widetilde{\mB}_{R_\pi}}\rightarrow \mBR\] uniquely, which is compatible with $\varphi:{\widetilde{\mB}}_{W_\pi}\rightarrow {\mathcal B}_{W_\pi}$. The filtrations on ${\mathcal B}_{W_\pi}$ and ${\widetilde{\mB}}_{W_\pi}$ induce filtrations on $\mBR$ and ${\widetilde{\mB}_{R_\pi}}$ respectively, which satisfy \[\mBR/\Fil^1\mBR\simeq \hR_\pi\simeq {\widetilde{\mB}_{R_\pi}}/\Fil^1{\widetilde{\mB}_{R_\pi}}.\] \begin{lem}\label{lem:F^iB&F^itB} Let $n<p$ be a natural number and let $b$ be an element in $F^n\mBR$. Then $\frac{b}{p^n}$ is an element in $F^n{\widetilde{\mB}_{R_\pi}}$. \end{lem} \begin{proof} Since the filtrations on $\mBR$ and ${\widetilde{\mB}_{R_\pi}}$ are induced by those on ${\mathcal B}_{W_\pi}$ and ${\widetilde{\mB}}_{W_\pi}$ respectively, we have \begin{equation}\label{FnBR} F^n\mBR=\left\{\left.\sum_{i\geq n}^{\infty}a_i\frac{f^i}{i!}\right| a_i\in \hR[[T]] \text{ and }a_i \rightarrow 0\right\}, \end{equation} and \begin{equation}\label{FntBR} \begin{split} F^n{\widetilde{\mB}_{R_\pi}} &=\left\{\left.\sum_{i\geq n}^{\infty}a_i\frac{f^i}{p^i}\right| a_i\in \mBR \text{ and }a_i=0 \text{ for } i\gg0\right\}\\ &=\left\{\left.\sum_{i\geq n}^{\infty}a_i\frac{f^i}{p^i}\right| a_i\in \hR[[T]] \text{ and }\frac{i!}{p^i}a_i\rightarrow 0 \right\}.\\ \end{split} \end{equation} Assume $b=\sum_{i\geq n}a_i\frac{f^i}{i!}$ with $a_i\in \hR[[T]]$ and $a_i\rightarrow 0$, then \[\frac{b}{p^n}=\sum_{i\geq n} \frac{p^i a_i }{p^n i!}\cdot \frac{f^i}{p^i},\] and the lemma follows. 
\end{proof} Recall that $\mBR$ and ${\widetilde{\mB}_{R_\pi}}$ are $\hR$-subalgebras of $\hR(\frac{1}{p})[[T]]$. We denote by $\Omega^1_{\mBR}$ (resp. $\Omega^1_{{\widetilde{\mB}_{R_\pi}}}$) the $\mBR$-submodule (resp. ${\widetilde{\mB}_{R_\pi}}$-submodule) of $\Omega^1_{\hR(\frac{1}{p})[[T]]/W}$ generated by elements $\rmd b$, where $b\in \mBR$ (resp. $b\in {\widetilde{\mB}_{R_\pi}}$). There is a filtration on $\Omega^1_{\mBR}$ (resp. $\Omega^1_{{\widetilde{\mB}_{R_\pi}}}$) given by $F^n \Omega^1_{\mBR}=F^n \mBR \cdot \Omega^1_{\mBR}$ (resp. $F^n \Omega^1_{{\widetilde{\mB}_{R_\pi}}}=F^n {\widetilde{\mB}_{R_\pi}} \cdot \Omega^1_{{\widetilde{\mB}_{R_\pi}}}$). One gets the following result directly by Lemma~\ref{lem:F^iB&F^itB}. \begin{cor} Let $n<p$ be a natural number. Then $\frac{1}{p^n}F^n \Omega^1_{\mBR}\subset F^n \Omega^1_{{\widetilde{\mB}_{R_\pi}}}$. \end{cor} \begin{lem} The graded pieces $\Gr^n {\widetilde{\mB}_{R_\pi}}$ and $\Gr^n \mBR$ are free $\hR_\pi$-modules of rank $1$ generated by $\frac{f^n}{p^n}$ and $\frac{f^n}{n!}$ respectively. \end{lem} \begin{proof}By equation~(\ref{FnBR}), one has $\hR[[T]]\cdot \frac{f^{n+1}}{n!} \subset \hR[[T]]\cdot \frac{f^n}{n!} \bigcap F^{n+1}\mBR$ and \begin{equation*} \Gr^n \mBR=\frac{\hR[[T]]\cdot \frac{f^n}{n!}}{\hR[[T]]\cdot \frac{f^n}{n!} \bigcap F^{n+1}\mBR}. \end{equation*} On the other hand, \begin{equation*} \begin{split} \hR[[T]]\cdot \frac{f^n}{n!} \bigcap F^{n+1}\mBR & \subset \hR[[T]]\cdot \frac{f^n}{n!}\bigcap \hR[\frac1p][[T]]\cdot f^{n+1}\\ & \subset \left(\hR[[T]]\bigcap \hR[\frac1p][[T]]\cdot f\right)\cdot \frac{f^n}{n!}\\ & =\hR[[T]]\cdot \frac{f^{n+1}}{n!}.\\ \end{split} \end{equation*} Then the result for $\Gr^n \mBR$ follows that $\hR_\pi\simeq\hR[[T]]/(f)$. The proof of the result for $\Gr^n {\widetilde{\mB}_{R_\pi}}$ is similar, one only need to replace $n!$ by $p^n$ and to use equation~(\ref{FntBR}). \end{proof} We have the following categories \begin{itemize} \item $\mathrm{MIC}(\mBR/p\mBR)$: the category of free $\mBR/p\mBR$-modules with integrable connections. \item $\tMIC({\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}})$: the category of free ${\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}$-modules with integrable nilpotent $p$-connection. \item $\MCFa(\mBR/p\mBR)$: the category of filtered free $\mBR/p\mBR$-modules equipped with integrable connections, which satisfy the Griffiths transversality, and each of these $\mBR/p\mBR$-modules admits a filtered basis $v_i$ of degree $q_i$, $0\leq q_i\leq a$. \end{itemize} A Fontaine-Faltings module over $\mBR/p\mBR$ of weight $a$ ($0\leq a\leq p-2$) is an object $(V,\nabla,\Fil)$ in the category $\MCFa(\mBR/p\mBR)$ together with an isomorphism in $\mathrm{MIC}(\mBR/p\mBR)$ \[\varphi: \widetilde{(V,\nabla,\Fil)}\otimes_{\Phi}\mBR\rightarrow (V,\nabla),\] where $\widetilde{(\cdot)}:\MCF(\mBR/p\mBR) \rightarrow \tMIC({\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}})$ is an analogue of the Faltings' tilde functor. For an object $(V,\nabla,\Fil)$ in $\MCF(\mBR/p\mBR)$ with filtered basis $v_i$ (of degree $q_i$, $0\leq q_i\leq a$), ${\widetilde V}$ is defined as a filtered free ${\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}$-module \[{\widetilde V}=\bigoplus_i {\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}\cdot [v_i]_{q_i}\] with filtered basis $[v_i]_{q_i}$ (of degree $q_i$, $0\leq q_i\leq a$). Informally one can view $[v_i]_{q_i}$ as ``$\frac{v_i}{p^{q_i}}$". 
Since $\nabla$ satisfies the Griffiths transversality, there are $\omega_{ij}\in F^{q_j-1-q_i}\Omega^1_\mBR$ satisfying \[\nabla(v_j)=\sum_{i} v_i\otimes \omega_{ij}.\] Since $q_j-1-q_i<a\leq p-2$, $\frac{\omega_{ij}}{p^{q_j-1-q_i}}\in F^{q_j-1-q_i}\Omega^1_{\widetilde{\mB}_{R_\pi}}$. We define a $p$-connection ${\widetilde{\nabla}}$ on ${\widetilde V}$ via \[{\widetilde{\nabla}}([v_j]_{q_j})=\sum_{i} [v_i]_{q_i}\otimes \frac{\omega_{ij}}{p^{q_j-1-q_i}}.\] \begin{lem} The ${\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}$-module ${\widetilde V}$ equipped with the $p$-connection ${\widetilde{\nabla}}$ is independent of the choice of the filtered basis $v_i$ up to a canonical isomorphism. \end{lem} \begin{proof} We write $v=(v_1,v_2,\cdots)$ and $\omega=(\omega_{ij})_{i,j}$. Then \[\nabla(v)=v\otimes \omega \text{ and } {\widetilde{\nabla}}([v])=[v]\otimes (pQ\omega Q^{-1}),\] where $Q=\mathrm{diag}(p^{q_1},p^{q_2},\cdots)$ is a diagonal matrix. Assume that $v_i'$ is another filtered basis (of degree $q_i$, $0\leq q_i\leq a$) and $({\widetilde V}',{\widetilde{\nabla}}')$ is the corresponding ${\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}$-module equipped with the $p$-connection. Similarly, we have \[\nabla(v')=v'\otimes \omega' \text{ and } {\widetilde{\nabla}}([v'])=[v']\otimes (pQ\omega' Q^{-1}),\] Assume $v_j'=\sum_i a_{ij}v_i$ ($a_{ij}\in F^{q_j-q_i}\mBR$). Then $A=(a_{ij})_{i,j}\in \GL_{\rank(V)}(\mBR)$ and $QAQ^{-1}=\left(\frac{a_{ij}}{p^{q_j-q_i}}\right)_{i,j}\in \GL_{\rank(V)}({\widetilde{\mB}_{R_\pi}})$. We construct an isomorphism from ${\widetilde V}'$ to ${\widetilde V}$ by \[\tau([v'])=[v] \cdot (QAQ^{-1}),\] where $[v]=([v_1]_{q_1},[v_2]_{q_2},\cdots)$ and $[v']=([v_1']_{q_1},[v_2']_{q_2},\cdots)$. Now we only need to check that $\tau$ preserve the $p$-connections. Indeed, \begin{equation} \tau\circ{\widetilde{\nabla}}'([v']) =[v]\otimes (QAQ^{-1}\cdot p Q\omega' Q^{-1})=[v]\otimes (pQ\cdot A\omega'\cdot Q^{-1}) \end{equation} and \begin{equation} \begin{split} {\widetilde{\nabla}}\circ\tau([v']) & ={\widetilde{\nabla}}([v]QAQ^{-1})\\ & =[v]\otimes (pQ\omega Q^{-1}\cdot QAQ^{-1} +p\cdot Q \rmd A Q^{-1})\\ &=[v]\otimes (pQ\cdot (\omega A + \rmd A)\cdot Q^{-1})\\ \end{split} \end{equation} Since $\nabla(v')=\nabla(vA)=v\otimes \rmd A + v\otimes \omega A=v'\otimes (A^{-1}\rmd A+ A^{-1}\omega A)$, we have $\omega'=A^{-1}\rmd A+ A^{-1}\omega A$ by definition. Thus $\tau\circ{\widetilde{\nabla}}'={\widetilde{\nabla}}\circ\tau$. If there are third filtered bases $v''$, one has the following commutative diagram under the isomorphisms constructed: \begin{equation} \xymatrix{ ({\widetilde V},{\widetilde{\nabla}}) \ar[r] \ar[rd] &({\widetilde V}',{\widetilde{\nabla}}') \ar[d]\\ & ({\widetilde V}'',{\widetilde{\nabla}}'')\\ } \end{equation} This can be checked easily by definition. In this sense, the isomorphism constructed is canonical. \end{proof} The functor \[-\otimes_\Phi\mBR : \tMIC({\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}) \rightarrow \mathrm{MIC}(\mBR/p\mBR)\] is induced by base change under $\Phi$. Note that the connection on $({\widetilde V},{\widetilde{\nabla}})\otimes_\Phi\mBR$ is given by \[\rmd + \frac{\rmd \Phi}{p}(\Phi^*{\widetilde{\nabla}})\] Denote by $\MFa(\mBR/p\mBR)$ the category of all Fontaine-Faltings module over $\mBR/p\mBR$ of weight $a$. Let $(M,\nabla,\Fil,\Psi)$ be an object defined in Definition 2 of \cite{Fal99}. Then we can construct an Fontaine-Faltings module over $B_{W_\pi}/pB_{W_\pi}$ as follows. 
Denote \begin{itemize} \item $V:=M/pM$; \item $\overline{\nabla}=\nabla\pmod{p}$; \item $\overline{\Fil}^iV=(\Fil^iM+pM)/pM$. \end{itemize} By i) and ii) in Definition 2 of \cite{Fal99}, one gets \[(V,\overline{\nabla},\overline{\Fil})\in \MCFa({\mathcal B}_{W\pi}/p{\mathcal B}_{W_\pi}).\] Assume $\{m_i\}$ is a filtered basis of $M$ with filtered degree $q_i$. Then $v_i=m_i+pM\in V$ forms a filtered basis of $V$ with filtered degree $q_i$. By the definition of tilde functor we have \[{\widetilde V}=\bigoplus_i {\widetilde{\mB}}_{W_\pi}/p{\widetilde{\mB}}_{W_\pi}\cdot [v_i]_{q_i}.\] Now we can construct a ${\mathcal B}_{W_{\pi}}$-morphism \[\varphi:{\widetilde V}\otimes_{\Phi}{\mathcal B}_{W_{\pi}} \rightarrow V,\] by giving \[\varphi([v_i]_{q_i}\otimes_{\Phi}1)=\frac{\Psi(m_i)}{p^{q_i}}\pmod{p}.\] Since $\Psi$ is a $\nabla$-horizontal semilinear endomorphism and $\frac{\Psi(m_i)}{p^{q_i}}$ forms a new $R_{W_{\pi}}$-basis of $M$, the morphism $\varphi$ is actually an isomorphism of modules with connections. Thus we get a Fontaine-Faltings module \[(V,\overline{\nabla},\overline{\Fil},\varphi)\in \MFa({\mathcal B}_{W_\pi}/p{\mathcal B}_{W_\pi}).\] Replacing every $W_\pi$ by $R_\pi$, one gets a functor from the category of Fontaine modules defined in \cite{Fal99} to the category $\MFa(\mBR/p\mBR)$. In this sense the Fontaine-Faltings modules we defined above is compatible with the Fontaine modules defined in \cite{Fal99}. \begin{lem}\label{lem:coeffExt} We have the following commutative diagram by extending the coefficient ring from $R$ to $\mBR$ (or ${\widetilde{\mB}_{R_\pi}}$) \begin{equation*} \xymatrix@C=1cm{ \MCF(R/pR) \ar[r]^{\widetilde{(\cdot)}} \ar[d]^{-\otimes_R \mBR} & \tMIC(R/pR)\ar[r]^{-\otimes_{\Phi} {R}} \ar[d]^{-\otimes_R {\widetilde{\mB}_{R_\pi}}} & \mathrm{MIC}(R/pR) \ar[d]^{-\otimes_R \mBR} \\ \MCF(\mBR/p\mBR) \ar[r]^{\widetilde{(\cdot)}} & \tMIC({\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}})\ar[r]^{-\otimes_\Phi\mBR} & \mathrm{MIC}(\mBR/p\mBR) \\ } \end{equation*} In particular, we get a functor from the category of Fontaine-Faltings modules over $R/pR$ to that over $\mBR/p\mBR$ \[\MFa(R/pR)\rightarrow \MFa(\mBR/p\mBR).\] \end{lem} Those categories of Fontaine-Faltings modules are independent of the choice of the Frobenius lifting by the Taylor formula. \begin{thm}\label{thm:gluingFFmod} For any two choices of $\Phi_{\mBR}$ there is an equivalence between the corresponding categories $\MFa(\mBR/p\mBR)$ with different $\Phi_{\mBR}$. These equivalences satisfy the obvious cocycle condition. Therefore, $\MFa(\mBR/p\mBR)$ is independent of the choice of $\Phi_{\mBR}$ up to a canonical isomorphism. \end{thm} \begin{defi}\label{lem:defiFunctorD} For an object $(V,\nabla,\Fil,\varphi)$ in $\MFa(\mBR/p\mBR)$, denote \[{\mathbb D}(V,\nabla,\Fil,\varphi)=\Hom_{B^+(R),\Fil,\varphi}(V\otimes_{\mBR} B^+(R),B^+(R)/pB^+(R)).\] \end{defi} The proof of Theorem~2.6 in~\cite{Fal89} works in this context. we can define an adjoint functor ${\mathbb E}$ of ${\mathbb D}$ as \[{\mathbb E}(L)= \varinjlim \{V\in \MFa(\mBR/p\mBR)\mid L\rightarrow {\mathbb D}(V)\}.\] The proof in page~41 of~\cite{Fal89} still works. 
Thus we obtain the following. \begin{thm} $i).$ The homomorphism set ${\mathbb D}(V,\nabla,\Fil,\varphi)$ is an ${\mathbb F}_p$-vector space with a linear $\mathrm{Gal}(\overline{R}_K/R_K)$-action, whose rank equals the rank of $V$.\\ $ii).$ The functor ${\mathbb D}$ from $\MFa(\mBR/p\mBR)$ to the category of $W_n({\mathbb F}_p)$-$\mathrm{Gal}(\overline{R}_K/R_K)$-modules is fully faithful and its image on objects is closed under subobjects and quotients. \end{thm} \subsection{Categories and functors on a proper smooth variety over the very ramified valuation ring $W_\pi$} Let ${\mathcal X}$ be a smooth proper $W$-scheme and ${\mathcal X}_\pi={\mathcal X}\otimes_{W}W_\pi$. Let ${\mathscr X}_\pi$ be the formal completion of ${\mathcal X}\otimes_{W}{\mathcal B}_{W_\pi}$ and ${\widetilde{\sX}}_\pi$ be the formal completion of ${\mathcal X}\otimes_W{{\widetilde{\mB}}_{W_\pi}}$. Then ${\mathscr X}_\pi$ is an infinitesimal thickening of ${\mathcal X}_\pi$ and the ideal defining ${\mathcal X}_\pi$ in ${\mathscr X}_\pi$ has a nilpotent PD-structure which is compatible with that on $F^1({\mathcal B}_{W_\pi})$ and $(p)$: \begin{equation} \xymatrix@R=2mm{ {\mathcal X}_\pi \ar[rr] \ar[dd] \ar[dr] && {\mathscr X}_\pi \ar[dd]|(0.5)\hole \ar[dr]&\\ & \widetilde{{\mathscr X}_\pi} \ar[rr] \ar[dd] && {\mathcal X} \ar[dd]\\ \Spec W_\pi \ar[rr]|(0.5)\hole \ar[dr] && \Spec {\mathcal B}_{W_\pi} \ar[dr] &\\ & \Spec\widetilde{{\mathcal B}}_\pi \ar[rr] && \Spec W \ .\\ } \end{equation} Let $\{{\mathcal U}_{i}\}_i$ be a covering of small affine open subsets of ${\mathcal X}$. By base change, we get a covering $\{\mathcal{U}_i={\mathcal U}_{i}\times_{\mathcal X} {\mathscr X}_\pi\}_i$ of ${\mathscr X}_\pi$ and a covering $\{\widetilde{\mathcal{U}}_i={\mathcal U}_{i}\times_{\mathcal X} {\widetilde{\sX}}_\pi\}_i$ of $\widetilde{{\mathscr X}_\pi}$. For each $i$, we denote $R_{i}={\mathcal O}_{{\mathcal X}_\pi}({\mathcal U}_{i}\times_{\mathcal X} {\mathcal X}_\pi)$. Then ${\mathcal B}_{R_i}=\mathcal O_{{\mathscr X}_\pi}(\mathcal{U}_i)$ and ${\widetilde{\mB}}_{R_i}=\mathcal O_{\widetilde{{\mathscr X}_\pi}}(\widetilde{\mathcal{U}}_i)$ are the coordinate rings. Fixing a Frobenius lifting $\Phi_i:{\widetilde{\mB}}_{R_i}\rightarrow {\mathcal B}_{R_i}$, one gets the categories of Fontaine-Faltings modules \[\MFa({\mathcal B}_{R_i}/p{\mathcal B}_{R_i}).\] By Theorem~\ref{thm:gluingFFmod}, these categories glue into one category. Moreover, the underlying modules glue into bundles over ${\mathscr X}_{\pi,1}={\mathscr X}_\pi\otimes_{\bZ_p}{\mathbb F}_p$. We denote this category by $\MFa({\mathscr X}_{\pi,1})$. \subsubsection{Inverse Cartier functor and a description of $\MFa({\mathscr X}_{\pi,1})$ via the inverse Cartier functor} Let $\overline{\Phi}: \mBR/p\mBR \rightarrow \mBR/p\mBR$ be the $p$-th power map. Then we get the following lemma directly. \begin{lem}\label{lem:FrobLift} Let $\Phi:\mBR\rightarrow \mBR$ and $\Psi:\mBR\rightarrow \mBR$ be two liftings of $\overline{\Phi}$ which are both compatible with the Frobenius map on ${\mathcal B}_{W_\pi}$. \begin{itemize} \item[i).] Since $\varphi(f)$ is divisible by $p$, we extend $\Phi$ and $\Psi$ uniquely to maps on ${\widetilde{\mB}_{R_\pi}}$ via $\frac{f^n}{p^n}\mapsto \left(\frac{\varphi(f)}{p}\right)^n$; \item[ii).] the difference $\Phi-\Psi$ on ${\widetilde{\mB}_{R_\pi}}$ is still divisible by $p$; \item[iii).]
the differentials $\rmd \Phi : \Omega^1_{{\widetilde{\mB}_{R_\pi}}}\rightarrow \Omega^1_{\mBR}$ and $\rmd \Psi : \Omega^1_{{\widetilde{\mB}_{R_\pi}}}\rightarrow \Omega^1_{\mBR}$ are divisible by $p$. \end{itemize} \end{lem} From now on, we call the extensions given by i) of Lemma~\ref{lem:FrobLift} the \emph{Frobenius liftings} of $\overline{\Phi}$ on ${\widetilde{\mB}_{R_\pi}}$. \begin{lem}Let $\Phi:{\widetilde{\mB}_{R_\pi}}\rightarrow \mBR$ and $\Psi:{\widetilde{\mB}_{R_\pi}}\rightarrow \mBR$ be two Frobenius liftings of $\overline{\Phi}$ on ${\widetilde{\mB}_{R_\pi}}$. Then there exists a $\mBR/p\mBR$-linear morphism \[h_{\Phi,\Psi}:\Omega^1_{{\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}}\otimes_{\overline\Phi}\mBR/p\mBR \rightarrow \mBR/p\mBR\] such that: \begin{itemize} \item[i).] $\frac{\rmd \Phi}{p}-\frac{\rmd \Psi}{p}=\rmd h_{\Phi,\Psi}$ holds on $\Omega^1_{{\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}}\otimes_{\overline\Phi}1$; \item[ii).] the cocycle condition holds. \end{itemize} \end{lem} \begin{proof} The $\mBR/p\mBR$-module $\Omega^1_{{\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}}\otimes_{\overline\Phi}\mBR/p\mBR$ is generated by elements of the form $\rmd g\otimes 1$ ($g\in{\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}$), subject to the relations $\rmd(g_1+g_2)\otimes 1-\rmd g_1 \otimes 1-\rmd g_2 \otimes 1$ and $\rmd(g_1g_2)\otimes 1-\rmd g_1 \otimes\overline{\Phi}(g_2)-\rmd g_2 \otimes \overline{\Phi}(g_1)$. Since $\Phi-\Psi$ is divisible by $p$, we can set $h_{\Phi,\Psi}({\rmd g}\otimes 1)=\frac{\Phi(\hat{g})-\Psi(\hat{g})}{p}\pmod{p}\in \mBR/p\mBR$ for any element $g\in {\widetilde{\mB}_{R_\pi}}/p{\widetilde{\mB}_{R_\pi}}$ (the definition does not depend on the choice of the lifting $\hat{g}$ of $g$ in ${\widetilde{\mB}_{R_\pi}}$). By direct calculation, we have \[h_{\Phi,\Psi}(\rmd(g_1+g_2)\otimes 1)=h_{\Phi,\Psi}(\rmd g_1 \otimes 1)+h_{\Phi,\Psi}(\rmd g_2 \otimes 1)\] and \[h_{\Phi,\Psi}(\rmd(g_1g_2)\otimes 1)=\overline{\Phi}(g_2)\cdot h_{\Phi,\Psi}(\rmd g_1 \otimes 1)+\overline{\Phi}(g_1)\cdot h_{\Phi,\Psi}(\rmd g_2 \otimes 1).\] Thus $h_{\Phi,\Psi}$ can be extended $\mBR/p\mBR$-linearly. One checks i) and ii) directly from the definition. \end{proof} Let $({\widetilde V},{\widetilde{\nabla}})$ be a locally filtered free sheaf over ${\widetilde{\sX}}_{\pi,1}={\widetilde{\sX}}_\pi\otimes_{\bZ_p}{\mathbb F}_p$ with an integrable $p$-connection. Here a ``filtered free'' module over a filtered ring $R$ is a direct sum of copies of $R$ with the filtration shifted by a constant amount. The associated graded then has a basis over $gr_F(R)$ consisting of homogeneous elements (see \cite{Fal99}). Let $({\widetilde V}_i,{\widetilde{\nabla}}_i)=({\widetilde V},{\widetilde{\nabla}})\mid_{\widetilde{\mathcal{U}}_{i,1}}$ be its restriction to the open subset $\widetilde{\mathcal{U}}_{i,1}=\widetilde{\mathcal{U}}_{i}\otimes_{\bZ_p}{\mathbb F}_p$. By applying the functor $\Phi_i^*$, we get bundles with integrable connections over $\mathcal{U}_{i,1}=\mathcal{U}_i\otimes_{\bZ_p}{\mathbb F}_p$ \[\left(\Phi_i^*{\widetilde V}_i, \rmd + \frac{\rmd \Phi_i}{p}(\Phi_i^*{\widetilde{\nabla}})\right).\] \begin{lem}\label{gluingFlatBundle} Let $({\widetilde V},{\widetilde{\nabla}})$ be a locally filtered free sheaf over ${\widetilde{\sX}}_{\pi,1}$ with an integrable $p$-connection.
Then these local bundles with connections \[\left(\Phi_i^*{\widetilde V}_i, \rmd + \frac{\rmd \Phi_i}{p}(\Phi_i^*{\widetilde{\nabla}})\right)\] can be glued into a global bundle with a connection on ${\mathscr X}_{\pi,1}$ via transition functions \[G_{ij}=\exp\left(h_{\Phi_i,\Phi_j}(\overline{\Phi}^*{\widetilde{\nabla}})\right):\Phi_{i}^*({\widetilde V}_{ij}) \rightarrow \Phi_{j}^*({\widetilde V}_{ij}).\] Denote this global bundle with connection by $C^{-1}_{{\mathscr X}_{\pi,1}}({\widetilde V},{\widetilde{\nabla}})$. Then we can construct a functor \[C^{-1}_{{\mathscr X}_{\pi,1}}: \tMIC({\widetilde{\sX}}_{\pi,1})\rightarrow \mathrm{MIC}({\mathscr X}_{\pi,1}).\] \end{lem} \begin{proof}The cocycle condition easily follows from the integrability of the Higgs field. We show that the local connections coincide on the overlaps, that is \[\left(G_{ij}\otimes \id\right) \circ \left(\rmd + \frac{\rmd \Phi_i}{p}(\Phi_i^*{\widetilde{\nabla}})\right) =\left(\rmd + \frac{\rmd \Phi_j}{p}(\Phi_j^*{\widetilde{\nabla}})\right) \circ G_{ij}. \] It suffices to show \[\frac{\rmd \Phi_i}{p}(\Phi_i^*{\widetilde{\nabla}}) = G^{-1}_{ij}\circ \rmd G_{ij} + G^{-1}_{ij}\circ \frac{\rmd \Phi_j}{p}(\Phi_j^*{\widetilde{\nabla}}) \circ G_{ij}.\] Since $G^{-1}_{ij}\circ \rmd G_{ij}=\rmd h_{\Phi_i,\Phi_j}(\overline{\Phi}^*{\widetilde{\nabla}})$ and $G_{ij}$ commutes with $\frac{\rmd \Phi_j}{p}(\Phi_j^*{\widetilde{\nabla}})$ we have \begin{equation*} \begin{split} G^{-1}_{ij}\circ \rmd G_{ij}+G^{-1}_{ij}\circ \frac{\rmd \Phi_j}{p}(\Phi_j^*{\widetilde{\nabla}}) \circ G_{ij} & =\rmd h_{\Phi_i,\Phi_j}(\overline{\Phi}^*{\widetilde{\nabla}}) +\frac{\rmd \Phi_j}{p}(\Phi_j^*{\widetilde{\nabla}})\\ & =\frac{\rmd \Phi_i}{p}(\Phi_i^*{\widetilde{\nabla}}) \end{split} \end{equation*} by the integrability of the Higgs field. Thus we glue those local bundles with connections into a global bundle with connection via $G_{ij}$. \end{proof} \begin{lem} To give an object in the category $\MF({\mathscr X}_{\pi,1})$ is equivalent to give a tuple $(V,\nabla,\Fil,\phi)$ satisfying \begin{itemize} \item[i).] $V$ is filtered local free sheaf over ${\mathscr X}_{\pi,1}$ with local basis having filtration degrees contained in $[0,a]$; \item[ii).] $\nabla:V\rightarrow V\otimes_{{\mathcal O}_{{\mathscr X}_{\pi,1}}} \Omega^1_{{\mathscr X}_{\pi,1}}$ is an integrable connection satisfying the Griffiths transversality; \item[iii).] $ \varphi:C_{{\mathscr X}_{\pi,1}}^{-1}\widetilde{(V,\nabla,\Fil)}\simeq (V,\nabla)$ is an isomorphism of sheaves with connections over ${\mathscr X}_{\pi,1}$. \end{itemize} \end{lem} \subsubsection{The functors ${\mathbb D}$ and ${\mathbb D}^P$} For an object in $\MFa({\mathscr X}_{\pi,1})$, we get locally constant sheaves on ${\mathcal U}_K$ by applying the local ${\mathbb D}$-functors. These locally constant sheaves can be expressed in terms of certain finite \'etale coverings. They can be glued into a finite covering of ${\mathcal X}_{\pi,K}={\mathcal X}_K$. We have the following result. \begin{thm} Suppose that $X$ is a proper smooth and geometrically connected scheme over $W$. Then there exists a fully faithful contravariant functor ${\mathbb D}$ from $\MFa({\mathscr X}_{\pi,1})$ to the category of ${\mathbb F}_p$-representations of $\pi^\text{\'et}_1({\mathcal X}_K)$. The image of ${\mathbb D}$ on objects is closed under subobjects and quotients. Locally ${\mathbb D}$ is given by the same as in Lemma~\ref{lem:defiFunctorD}. 
\end{thm} Again one can define the category $\MFa({\mathscr X}_{\pi,1}^o)$ in the logarithmic case, if one replaces all "connections" by "logarithmic connections" and "Frobenius lifting" by "logarithmic Frobenius lifting". We also have the version of $\MF_{[0,a],f}({\mathscr X}_{\pi,1}^o)$ with endomorphism structures of ${\mathbb F}_{p^f}$, which is similar as the \emph{Variant 2} discussed in section $2$ of\cite{LSZ13a}. And the twisted versions $\TMF_{[0,a],f}({\mathscr X}_{\pi,1}^o)$ can also be defined on ${\mathscr X}_{\pi,1}$ in a similar way as before. More precisely, let ${\mathcal L}$ be a line bundle over ${\mathscr X}_{\pi,1}$. The ${\mathcal L}$-twisted Fontaine-Faltings module is defined as follows. \begin{defi} An ${\mathcal L}$-twisted Fontaine-Faltings module over ${\mathscr X}_{\pi,1}$ with endomorphism structure is a tuple \[((V,\nabla,\Fil)_0,(V,\nabla,\Fil)_1,\cdots,(V,\nabla,\Fil)_{f-1},\varphi_\cdot)\] where $(V,\nabla,\Fil)_i$ are objects in $\MCF({\mathscr X}_{\pi,1}^o)$ equipped with isomorphisms in $\mathrm{MIC}({\mathscr X}_{\pi,1}^o)$ \[\varphi_i:C_{{\mathscr X}_{\pi,1}}^{-1}\widetilde{(V,\nabla,\Fil)}_i\simeq (V,\nabla)_{i+1} \text{ for } i=0,1,\cdots,f-2;\] and \[\varphi_{f-1}:C_{{\mathscr X}_{\pi,1}}^{-1}\widetilde{(V,\nabla,\Fil)}_{f-1} \otimes ({\mathcal L}^{p},\nabla_{\mathrm {can}})\simeq (V,\nabla)_0.\] \end{defi} The proof of Theorem~\ref{ConsFunc:D^P} works in this context. Thus we obtain the following result. \begin{thm}\label{thm:functorD^P} Suppose that ${\mathcal X}$ is a proper smooth and geometrically connected scheme over $W$ equipped with a smooth log structure ${\mathcal D}/W(k)$. Suppose that the residue field $k$ contains ${\mathbb F}_{p^f}$. Then there exists an exact and fully faithful contravariant functor ${\mathbb D}^P$ from $\TMF_{a,f}({\mathscr X}_{\pi,1}^o)$ to the category of projective ${\mathbb F}_{p^f}$-representations of $\pi^\text{\'et}_1({\mathcal X}_{K}^o)$. The image of ${\mathbb D}^p$ is closed under subobjects and quotients. \end{thm} Recall that $\{\mathcal{U}_i\}_i$ is an open covering of ${\mathscr X}$. A line bundle on ${\mathscr X}$ can be expressed by the transition functions on $\mathcal{U}_{ij}$. \begin{lem} Let ${\mathcal L}$ be a line bundle on ${\mathscr X}_{\pi,1}$ expressed by $(g_{ij})$. Denote by $\widetilde{{\mathcal L}}$ the line bundle on ${\widetilde{\sX}}_{\pi,1}$ defined by the same transition functions $(g_{ij})$. Then one has \[C^{-1}_{{\mathscr X}_{\pi,1}}(\widetilde{{\mathcal L}},0)={\mathcal L}^p.\] \end{lem} \begin{proof} Since $g_{ij}$ is an element in ${\mathcal B}_{R_{ij}} \subset {\widetilde{\mB}}_{R_{ij}}$, by diagram~(\ref{diag:FrobLift}), one has \[\Phi(g_{ij})\equiv g_{ij}^p \pmod{p}.\] On the other hand, since the $p$-connection is trivial, one has \[C^{-1}_{{\mathscr X}_{\pi,1}}(\widetilde{{\mathcal L}},0)=(\Phi\,\mathrm{mod}\,p)^*(\widetilde{{\mathcal L}}).\] Thus one has $C^{-1}_{{\mathscr X}_{\pi,1}}(\widetilde{{\mathcal L}},0)=({\mathcal O}_{\widetilde{\mathcal{U}}_{i,1}},g_{ij}^p)={\mathcal L}^p$. 
\end{proof} In a similar way, one can define the Higgs-de Rham flow on ${\mathscr X}_{\pi,1}$ as a sequence consisting of infinitely many alternating terms of Higgs bundles over ${\widetilde{\sX}}_{\pi,1}$ and filtered de Rham bundles over ${\mathscr X}_{\pi,1}$ \[\left\{ (E,\theta)_{0}, (V,\nabla,\Fil)_{0}, (E,\theta)_{1}, (V,\nabla,\Fil)_{1}, \cdots\right\}\] with $(V,\nabla)_i=C^{-1}_{{\mathscr X}_{\pi,1}}((E,\theta)_i)$ and $(E,\theta)_{i+1}=\widetilde{(V,\nabla,\Fil)_i}$ for all $i\geq 0$. \emph{$f$-periodic ${\mathcal L}$-twisted Higgs-de Rham flow} over ${\mathscr X}_{\pi,1}$ of level in $[0,a]$ is a Higgs-de Rham flow over ${\mathscr X}_{\pi,1}$ \[\left\{ (E,\theta)_{0}, (V,\nabla,\Fil)_{0}, (E,\theta)_{1}, (V,\nabla,\Fil)_{1}, \cdots\right\}\] equipped with isomorphisms $\phi_{f+i}:(E,\theta)_{f+i}\otimes (\widetilde{{\mathcal L}}^{p^i},0)\rightarrow (E,\theta)_i$ of Higgs bundles for all $i\geq0$ \begin{equation*} \tiny\xymatrix@W=2cm@C=-13mm{ &\left(V,\nabla,\Fil\right)_{0} \ar[dr]^{\widetilde{(\cdot)}} &&\left(V,\nabla,\Fil\right)_{1}\ar[dr]^{\widetilde{(\cdot)}} &&\cdots \ar[dr]^{\widetilde{(\cdot)}} &&\left(V,\nabla,\Fil\right)_{f}\ar[dr]^{\widetilde{(\cdot)}} &&\left(V,\nabla,\Fil\right)_{f+1}\ar[dr]^{\widetilde{(\cdot)}} &&\cdots\\%\ar@/_20pt/[llllll]_{\cdots}\\ \left(E,\theta\right)_{0}\ar[ur]_{\mathcal C^{-1}_{{\mathscr X}_{\pi,1}}} &&\left(E,\theta\right)_{1}\ar[ur]_{\mathcal C^{-1}_{{\mathscr X}_{\pi,1}}} && \cdots \ar[ur]_{\mathcal C^{-1}_{{\mathscr X}_{\pi,1}}} &&\left(E,\theta\right)_{f}\ar[ur]_{\mathcal C^{-1}_{{\mathscr X}_{\pi,1}}} \ar@/^20pt/[llllll]^{\phi_f}|(0.33)\hole &&\left(E,\theta\right)_{f+1}\ar[ur]_{\mathcal C^{-1}_{{\mathscr X}_{\pi,1}}} \ar@/^20pt/[llllll]^{\phi_{f+1}}|(0.33)\hole && \cdots \ar@/^20pt/[llllll]^{\cdots}\ar[ur]_{\mathcal C^{-1}_{{\mathscr X}_{\pi,1}}}\\} \end{equation*} And for any $i\geq 0$ the isomorphism \begin{equation*} C^{-1}_{{\mathscr X}_{\pi,1}}(\phi_{f+i}): (V,\nabla)_{f+i}\otimes ({\mathcal L}^{p^{i+1}},\nabla_{\mathrm{can}})\rightarrow (V,\nabla)_{i}, \end{equation*} strictly respects filtrations $\Fil_{f+i}$ and $\Fil_{i}$. Those $\phi_{f+i}$'s are related to each other by formula \[\phi_{f+i+1}=\mathrm{Gr}\circ C^{-1}_{{\mathscr X}_{\pi,1}}(\phi_{f+i}).\] Just taking the same construction as before, we obtain the following result. \begin{thm}\label{thm:equivalent} There exists an equivalent functor $\IC_{{\mathscr X}_{\pi,1}}$ from the category of twisted periodic Higgs-de Rham flows over ${\mathscr X}_{\pi,1}$ to the category of twisted Fontaine-Faltings modules over ${\mathscr X}_{\pi,1}$ with a commutative diagram \begin{equation} \xymatrix{ \THDF(X_1) \ar[r]^{\IC_{X_1 }} \ar[d]_{-\otimes_{{\mathcal O}_{X_{1}}}{\mathcal O}_{{\mathscr X}_{\pi,1}}} & \TMF(X_1) \ar[d]^{-\otimes_{{\mathcal O}_{X_{1}}}{\mathcal O}_{{\mathscr X}_{\pi,1}} }\\ \THDF({\mathscr X}_{\pi,1}) \ar[r]^{\IC_{{\mathscr X}_{\pi,1}}} & \TMF({\mathscr X}_{\pi,1})\ .\\ } \end{equation} \end{thm} \subsection{degree and slope} Recall that ${\mathscr X}_\pi$ is a smooth formal scheme over ${\mathcal B}_{W_\pi}$. Then ${\mathscr X}_{\pi,1}$ and $X_1$ are the modulo-$p$ reductions of ${\mathscr X}_\pi$ and $X$ respectively. Also note that $X_1$ is the closed fiber of $Y_1={\mathcal X}_\pi\otimes_{\bZ_p}{\mathbb F}_p$, ${\mathscr X}_{\pi,1}={\mathscr X}_\pi\otimes_{\bZ_p}{\mathbb F}_p$, ${\mathcal X}_\pi$ and ${\mathscr X}_\pi$. 
\begin{equation} \xymatrix@R=2mm{ X_1\ar[r] \ar[dd] &{\mathcal X}_{\pi,1} \ar[rr] \ar[dd] \ar[dr] && {\mathscr X}_{\pi,1} \ar[dd]|(0.5)\hole \ar[dr]&\\ && \widetilde{{\mathscr X}_\pi}_1 \ar[rr] \ar[dd] && X_1 \ar[dd]\\ X_1\ar[r] \ar[dd] & {\mathcal X}_\pi \ar[dr] \ar[dd] \ar[rr]|(0.5)\hole && {\mathscr X}_\pi \ar[dr]\ar[dd]|(0.5)\hole &\\ && \widetilde{{\mathscr X}_\pi} \ar[rr] \ar[dd] && {\mathcal X} \ar[dd] \\ \Spec k \ar[r] & \Spec W_\pi \ar[drrr]|(0.34)\hole \ar[rr]|(0.5)\hole \ar[dr] && \Spec {\mathcal B}_{W_\pi} \ar[dr] &\\ && \Spec\widetilde{{\mathcal B}}_\pi \ar[rr] && \Spec W\\ } \end{equation} For a line bundle $V$ on ${\mathscr X}_{\pi,1}$ (resp. ${\widetilde{\sX}}_{\pi,1}$), $V\otimes_{{\mathcal O}_{{\mathscr X}_{\pi,1}}}{\mathcal O}_{X_1}$ (resp. $V\otimes_{{\mathcal O}_{{\widetilde{\sX}}_{\pi,1}}}{\mathcal O}_{X_k}$) forms a line bundle on the special fiber $X_1$ of ${\mathcal X}$. We denote \[\deg(V):=\deg(V\otimes_{{\mathcal O}_{{\mathscr X}_{\pi,1}}}{\mathcal O}_{X_1}).\] For any bundle $V$ on ${\mathscr X}_{\pi,1}$ (resp. ${\widetilde{\sX}}_{\pi,1}$) of rank $r>1$, we denote \[\deg(V):=\deg(\bigwedge_{i=1}^{r}V).\] By Lemma~\ref{lem:FrobLift}, the modulo-$p$ reduction of the Frobenius lifting is globally well-defined. We denote it by $\Phi_1:{\widetilde{\sX}}_{\pi,1}\rightarrow {\mathscr X}_{\pi,1}$. Since ${\widetilde{\sX}}_{\pi,1}$ and ${\mathscr X}_{\pi,1}$ have the same closed subset $X_1$, we have the following diagram \begin{equation} \xymatrix{ X_1 \ar[r]^{\widetilde{\tau}} \ar[d]^{\Phi_{X_1}} & {\widetilde{\sX}}_{\pi,1} \ar[d]^{\Phi_1}\\ X_1 \ar[r]^{\tau} & {\mathscr X}_{\pi,1}\\ } \end{equation} Here $\tau$ and $\widetilde{\tau}$ are closed embeddings and $\Phi_{X_1}$ is the absolute Frobenius lifting on $X_1$. We should remark that the diagram above is not commutative, because $\Phi_1$ does not preserve the defining ideal of $X_1$. \begin{lem}\label{lem:IsoPullbacks} Let $(V,\nabla,\Fil)$ be an object in $\MCF({\mathscr X}_{\pi,1})$ of rank $1$. Then there is an isomorphism \[\Phi_{X_1}^*\circ\widetilde{\tau}^*({\widetilde V})\overset{\sim}{\longrightarrow} \tau^*\circ\Phi_1^*({\widetilde V}).\] \end{lem} \begin{proof}Recall that $\{\mathcal{U}_i\}_i$ is an open covering of ${\mathscr X}$. We express the line bundle $V$ by the transition functions $(g_{ij})$, where $g_{ij}\in \left({\mathcal B}_{R_{ij}}/p{\mathcal B}_{R_{ij}}\right)^\times$. Since $V$ is of rank $1$, the filtration $\Fil$ is trivial. Then by definition ${\widetilde V}$ can also be expressed by $(g_{ij})$. Since $g_{ij}\in{\mathcal B}_{R_{ij}}/p{\mathcal B}_{R_{ij}}$, one has \[(\Phi_{X_1}\mid_{U_{i,1}})^*\circ(\widetilde{\tau}\mid_{\widetilde{\mathcal{U}}_{i,1}})^*(g_{ij}) = (\tau\mid_{\mathcal{U}_{i,1}})^*\circ(\Phi_1\mid_{\mathcal{U}_{i,1}})^*(g_{ij}),\] by diagram~(\ref{diag:FrobLift}). This gives us the isomorphism $\Phi_{X_1}^*\circ\widetilde{\tau}^*({\widetilde V})\overset{\sim}{\longrightarrow} \tau^*\circ\Phi_1^*({\widetilde V})$. \end{proof} \begin{lem}\label{lem:deg&C^-1} Let $(V,\nabla,\Fil)$ be an object in $\MCF({\mathscr X}_{\pi,1})$. Then we have \[\deg({\widetilde V})=\deg(V) \text{ and } \deg(C^{-1}_{{\mathscr X}_{\pi,1}}({\widetilde V}))=p\deg({\widetilde V}).\] \end{lem} \begin{proof} Since the tilde functor and inverse Cartier functor preserve the wedge product and the degree of a bundle is defined to be that of its determinant, we only need to consider the rank $1$ case. Now let $(V,\nabla,\Fil)$ be of rank $1$. 
The reductions of $V$ and ${\widetilde V}$ on the closed fiber $X_1$ are the same, by the proof of Lemma~\ref{lem:IsoPullbacks}. Then we have \[\deg({\widetilde V})=\deg(V).\] Since the filtration is trivial, the $p$-connection ${\widetilde{\nabla}}$ is also trivial. In this case, the transition functions $G_{ij}$ in Lemma~\ref{gluingFlatBundle} are identities. Thus \[C^{-1}_{{\mathscr X}_{\pi,1}}({\widetilde V})=\Phi_1^*({\widetilde V}).\] Recall that $\deg(\Phi_1^*({\widetilde V}))=\deg(\tau^*\circ\Phi_1^*({\widetilde V}))$ and $\deg({\widetilde V})=\deg(\widetilde{\tau}^*({\widetilde V}))$. Lemma~\ref{lem:IsoPullbacks} implies $\deg(\tau^*\circ\Phi_1^*({\widetilde V}))=\deg(\Phi_{X_1}^*\circ\widetilde{\tau}^*({\widetilde V}))$. Since $\Phi_{X_1}$ is the absolute Frobenius, one has $\deg(\Phi_{X_1}^*\circ\widetilde{\tau}^*({\widetilde V}))=p\deg(\widetilde{\tau}^*({\widetilde V}))$. Composing above equalities, we get $\deg(C^{-1}_{{\mathscr X}_{\pi,1}}({\widetilde V}))=p\deg({\widetilde V})$. \end{proof} \begin{thm}\label{ramified_Thm} Let ${\mathcal E}=\left\{ (E,\theta)_{0}, (V,\nabla,\Fil)_{0}, (E,\theta)_{1}, (V,\nabla,\Fil)_{1}, \cdots\right\}$ be an $L$-twisted $f$-periodic Higgs-de Rham flow with endomorphism structure and log structure over $X_{1}$. Suppose that the degree and rank of the initial term $E_0$ are coprime. Then the projective representation ${\mathbb D}^P\circ\IC_{{\mathscr X}_{\pi,1}}({\mathcal E})$ of $\pi^\text{\'et}_1(X_{K_0}^o)$ is still irreducible after restricting to the geometric fundamental group $\pi^\text{\'et}_1(X^o_{\overline{K}_0})$, where $K_0=W[\frac1p]$. \end{thm} \begin{proof} Let $\rho:\pi^\text{\'et}_1(X_{K_0}^o)\rightarrow \mathrm{PGL}({\mathbb D}^P\circ\IC_{{\mathscr X}_{\pi,1}}({\mathcal E}))$ be the projective representation. Fix a $K_0$-point in $X_{K_0}$, which induces a section $s$ of the surjective map $\pi^\text{\'et}_1(X_{K_0}^o)\rightarrow \mathrm{Gal}(\overline{K}_0/K_0)$. We restrict $\rho$ on $\mathrm{Gal}(\overline{K}_0/K_0)$ by this section $s$. Since the module ${\mathbb D}^P\circ\IC_{{\mathscr X}_{\pi,1}}({\mathcal E})$ is finite, the image of this restriction is finite. And there is a finite field extension $K/K_0$ such that the restriction of $\rho$ by $s$ on $\mathrm{Gal}(\overline{K}_0/K)$ is trivial. Thus \[\rho(\pi^\text{\'et}_1(X_{K}^o))=\rho(\pi^\text{\'et}_1(X^o_{\overline{K}_0})).\] It is sufficient to show that the restriction of $\rho$ on $\pi^\text{\'et}_1(X_{K}^o)$ is irreducible. Suppose that the restriction of ${\mathbb D}^P\circ\IC_{X_1}({\mathcal E})$ on $\pi^\text{\'et}_1(X_{K}^o)$ is not irreducible. Since the functors ${\mathbb D}^P$ and $C^{-1}_{{\mathscr X}_{\pi,1}}$ are compatible with those over $X_1$, the projective representation ${\mathbb D}^P\circ \IC_{{\mathscr X}_{\pi,1}}({\mathcal E}\otimes_{{\mathcal O}_{X_1}}{\mathcal O}_{{\mathscr X}_{\pi,1}})={\mathbb D}^P\circ\IC_{X_1}({\mathcal E})$ is also not irreducible. Thus there exists a non-trivial quotient, which is the image of some nontrivial sub ${\mathcal L}=L\otimes_{{\mathcal O}_{X_1}}{\mathcal O}_{{\mathscr X}_{\pi,1}}$-twisted $f$-periodic Higgs-de Rham flow of ${\mathcal E}\otimes_{{\mathcal O}_{X_1}}{\mathcal O}_{{\mathscr X}_{\pi,1}}$ \[\left\{ (E',\theta')_{0}, (V',\nabla',\Fil')_{0}, (E',\theta')_{1}, (V',\nabla',\Fil')_{1}, \cdots\right\},\] under the functor ${\mathbb D}^P\circ\IC_{{\mathscr X}_{\pi,1}}$ according to Theorem~\ref{thm:functorD^P} and Theorem~\ref{thm:equivalent}. 
Since $E'_0$ is a subbundle of $E_0\otimes_{{\mathcal O}_{X_1}}{\mathcal O}_{{\mathscr X}_{\pi,1}}$, we have $1\leq \mathrm{rank}(E'_0) <\mathrm{rank}(E_0)$. By Theorem~4.17 in~\cite{OgVo07}, $\deg(E_{i+1})=p\deg(E_i) \text{ for }i\geq 0$ and $\deg(E_{0})=p\deg(E_{f-1})+\mathrm{rank}(E_{0})\times \deg(L)$. Thus \begin{equation} \frac{\deg (E_0)}{\mathrm{rank} (E_0)}=\frac{\deg (L)}{1-p^f}. \end{equation} Similarly, by Lemma~\ref{lem:deg&C^-1}, one gets \[\frac{\deg (E'_0)}{\mathrm{rank} (E'_0)}=\frac{\deg ({\mathcal L})}{1-p^f}.\] Since $\deg({\mathcal L})=\deg(L)$, one has $\deg(E_0)\cdot\mathrm{rank}(E_0')=\deg(E_0')\cdot\mathrm{rank}(E_0)$. Since $\deg (E_0)$ and $\mathrm{rank} (E_0)$ are coprime, $\mathrm{rank}(E'_0)$ is divisible by $\mathrm{rank}(E_0)$. This contradicts $1\leq \mathrm{rank}(E'_0) <\mathrm{rank}(E_0)$. Thus the projective representation ${\mathbb D}^P\circ\IC_{X_1}({\mathcal E})$ is irreducible. \end{proof} \section{Appendix: explicit formulas} In this appendix, we give an explicit formula for the self-map $\varphi_{\lambda,p}$ in Theorem~\ref{Thm:selfmap_formula} and an explicit formula for the multiplication-by-$p$ map in Theorem~\ref{thm:multp_formula}. Then Conjecture~\ref{conj-1} is equivalent to: \begin{equation}\label{equ:main} \frac1{a^p}\left(\frac{\det(B_0)}{\det(B_{m+1})}\right)^2=\frac{a^p}{\lambda^{p-1}}\left(\frac{\det(A_{m+1})}{\det(A_p)}\right)^2 \end{equation} where $m=\frac{p-1}{2}$ and the matrices $A_{m+1}$, $A_p$, $B_0$ and $B_{m+1}$ are given as follows: \begin{equation*} A_{i}=\left( \begin{array}{cccccc} \delta_{m} & \cdots & \delta_{i-2} & \delta_{i} &\cdots &\delta_{p-1} \\ \delta_{m-1} & \cdots & \delta_{i-3} & \delta_{i-1} &\cdots &\delta_{p-2} \\ \vdots & \ddots & \vdots &\vdots &\ddots &\vdots \\ \delta_{2} & \cdots & \delta_{i-m} & \delta_{i+2-m} &\cdots &\delta_{m+2} \\ \delta_{1} & \cdots & \delta_{i-1-m}&\delta_{i+1-m} &\cdots &\delta_{m+1} \\ \end{array} \right) \end{equation*} \begin{equation*} B_0=\left(\begin{array}{rrrrr} a^p\gamma_{3m} & a^p\gamma_{3m-1} &\cdots & a^p\gamma_{2m+1} & a^p\gamma_{2m}\\ \gamma_{m} & a^p\gamma_{3m} &\cdots & a^p\gamma_{2m+2} & a^p\gamma_{2m+1}\\ \gamma_{m+1} &\gamma_{m} &\cdots & a^p\gamma_{2m+3} & a^p\gamma_{2m+2}\\ \vdots &\vdots&\ddots &\vdots &\vdots \\ \gamma_{2m-1} &\gamma_{2m-2} &\cdots & \gamma_{m} & a^p\gamma_{3m}\\ \end{array}\right) \end{equation*} \begin{equation*} B_{m+1}= \left(\begin{array}{rrrrrr} \gamma_m & a^p\gamma_{3m} & a^p\gamma_{3m-1} &\cdots & a^p\gamma_{2m+1} \\ \gamma_{m+1} & \gamma_{m} & a^p\gamma_{3m} &\cdots & a^p\gamma_{2m+2} \\ \gamma_{m+2} & \gamma_{m+1} &\gamma_{m} &\cdots & a^p\gamma_{2m+3} \\ \vdots &\vdots &\vdots&\ddots &\vdots \\ \gamma_{2m} & \gamma_{2m-1} &\gamma_{2m-2} &\cdots & \gamma_{m} \\ \end{array}\right). \end{equation*} Here \[\delta_n=\frac{\lambda^p(1-a^p)-(\lambda^p-a^p)\lambda^n}{n}\] \[ \gamma_n=(-1)^{m+n}\sum_{\begin{array}{c} i+j=n-m\\ 0\leq i,j\leq m\\ \end{array}}{m\choose i}{m\choose j}\lambda^{m-j}.
\] By Proposition~\ref{compTwoConj}, we reduce Conjecture~\ref{conj-1} to the following conjecture: \begin{conj}\label{var_conj} The following equation holds \begin{equation} \det(A_p)=c\lambda^{m^2}(\lambda-1)^{m^2}\cdot \det(B_{m+1}), \end{equation} where \[c=(-1)^{m}\cdot \det\left(\begin{array}{cccc} \frac{1}{m} & \frac{1}{m+1} & \cdots & \frac{1}{p-2} \\ \frac{1}{m-1} & \frac{1}{m} & \cdots & \frac{1}{p-3} \\ \vdots &\vdots &\ddots &\vdots \\ \frac{1}{1} & \frac{1}{2} & \cdots & \frac{1}{m} \\ \end{array}\right).\] \end{conj} By using Maple, Conjecture~\ref{var_conj} has been checked for odd prime $p<50$. Thus Conjecture~\ref{conj-1} holds for $p<50$. \subsection{Self-map}\label{Calculate_Selfmap} To compute the self-map $\varphi_{\lambda,p}$, we recall the explicit construction of the inverse Cartier functor in curve case and give some notations used in the computation. For the general case, see the appendix of \cite{LSYZ14}. Let $k$ be a perfect field of characteristic $p\geq 3$. For simplicity, we may assume that $k$ is algebraic closed. Let $W=W(k)$ be the ring of Witt vectors and $W_n=W/p^n$ for all $n\geq1$ and $\sigma:W\rightarrow W$ be the Frobenius map on $W$. Let $X_1$ be a smooth algebraic curve over $k$ and $\overline{D}$ be a simple normal crossing divisor. We assume that $(X_1,\overline{D})$ is $W_2(k)$-liftable and fix a lifting $(X_2,D)$ \cite{EV-92}. For a sufficiently small open affine subset $U$ of $X_2$, the Proposition 9.7 and Proposition 9.9 in \cite{EV-92} give the existence of log Frobenius lifting over $U$, respecting the divisor $D\cap U$. We choose a covering of affine open subsets $\{U_i\}_{i\in I}$ of $X_2$ together with a log Frobenius lifting $F_i:U_i\rightarrow U_i$, respecting the divisor $D\cap U_i$ for each $i\in I$. Denote $R_i=\mathcal O_{X_2}(U_i)$, $R_{ij}=\mathcal O_{X_2}(U_{ij})$ and \[\Phi_i=F_i^\#: R_i\rightarrow R_i.\] For any object $\aleph$ (e.g. open subsets, divisors, sheaves, etc.) over $X_2$, we denote by $\overline{\aleph}$ its reduction on $X_1$. Denote by $\Phi$ the $p$-th power map on all rings of characteristic $p$. Thus $\overline{\Phi}_i=\Phi$ on $\overline{R}_{ij}$. Since $F_i$ is a log Frobenius lifting, $\mathrm{d}\Phi_i$ is divided by $p$ and which induces a map \begin{equation*} \frac{\mathrm{d}\Phi_i}{p}:\Omega_{X_1}^1(\mathrm{log} \overline{D})(\overline{U}_i)\otimes_{\Phi} \overline{R}_i \rightarrow \Omega_{X_1}^1(\mathrm{log} \overline{D})(\overline{U}_i). \eqno{(\frac{\mathrm{d}\Phi_i}{p})} \end{equation*} Let $(E,\theta)$ be a logarithmic Higgs bundle with nilpotent Higgs field over $X_1$ of exponent$\leq p-1$ and rank $r$. Now, we give the construction of $C^{-1}_{X_1\subset X_2}(E,\theta)$. Locally we set \begin{equation*} \begin{split} &V_i =E(\overline{U}_i)\otimes_\Phi \overline{R}_i,\\ &\nabla_i = \mathrm{d} + \frac{\mathrm{d}\Phi_i} {p}(\theta \otimes_\Phi1): V_i\rightarrow V_i\otimes_{\overline{R}_i} \Omega_{X_1}^1(\mathrm{log} \overline{D})(\overline{U}_i),\\ &G_{ij}= \mathrm{exp}(h_{ij}(\theta \otimes_\Phi1)): V_i\mid_{\overline{U}_{ij}}\rightarrow V_j\mid_{\overline{U}_{ij}}.\\ \end{split} \end{equation*} where $h_{ij}:\Omega^1_{X_1}(\overline{U}_{ij})\otimes_\Phi \overline{R}_{ij}\rightarrow \mathcal{O}_{\overline{U}_{ij}}$ is the homomorphism given by the Deligne-Illusie's Lemma~\cite{JJIC02}. Those local data $(V_i,\nabla_i)$'s can be glued into a global sheaf $V$ with an integrable connection $\nabla$ via the transition maps $\{ G_{ij} \}$ (Theorem 3 in \cite{LSZ12a}). 
The inverse Cartier functor on $(E,\theta)$ is defined by \[C^{-1}_1(E,\theta):=(V,\nabla).\] Let $e_{i,\cdot}=\{e_{i,1},e_{i,2},\cdots,e_{i,r}\}$ be a basis of $E(\overline{U}_i)$. Then \[\Phi^*e_{i,\cdot}:=\{e_{i,1}\otimes_\Phi 1,e_{i,2}\otimes_\Phi 1,\cdots,e_{i,r}\otimes_\Phi 1\}\] forms a basis of $V_i$. Now under those basis, there are $r\times r$-matrices $\omega_{\theta,i}$, $\omega_{\nabla,i}$ with coefficients in $\Omega_{X_1}^1(\mathrm{log}\overline{D})(\overline{U}_i)$, and matrices $\mathcal F_{ij}$, $\mathcal{G}_{ij}$ over $\overline{R}_{ij}$, such that \begin{equation*} (e_{i,\cdot})=(e_{j,\cdot}) \cdot \mathcal F_{ij} \eqno{(\mathcal F_{ij})} \end{equation*} \begin{equation*} \theta(e_{i,\cdot})=(e_{i,\cdot}) \cdot \omega_{\theta,i} \eqno{(\omega_{\theta,i})} \end{equation*} \begin{equation*} \nabla_i(\Phi^*e_{i,\cdot})=(\Phi^*e_{i,\cdot})\cdot \omega_{\nabla,i} \eqno{(\omega_{\nabla,i})} \end{equation*} \begin{equation*} G_{ij}(\Phi^*e_{i,\cdot})=(\Phi^*e_{j,\cdot})\cdot \mathcal{G}_{ij} \eqno{(\mathcal{G}_{ij})} \end{equation*} By the definition of $\nabla_i$, one has \begin{equation} \label{nablai} \omega_{\nabla,i}=\frac{\mathrm{d}\Phi_i}{p}(\omega_{\theta,i}\otimes_\Phi 1). \end{equation} We choose and fix a parameter $t_{ij}$ on $U_{ij}$ for every two elements $\{i,j\}$ in I. Then $\Omega^1_{X_1}(\overline{U}_{ij})$ is a free module over $\overline{R}_{ij}=\mathcal{O}_{X_1}(\overline{U}_i\cap \overline{U}_j)$ of rank $1$ generated by $\mathrm{d}t_{ij}$, and there is a matrix $A_{\theta,ij}$ over $\overline{R}_{ij}$ with \begin{equation*} \omega_{\theta,i}=A_{\theta,ij}\cdot \mathrm{d} t_{ij}. \eqno{(A_{\theta,ij})} \end{equation*} Explicitly, the Deligne-Illusie's map $h_{ij}$ is given by \begin{equation}\label{h_ij} h_{ij}(f\cdot\mathrm{d}t_{ij}\otimes_\Phi 1)=\Phi(f)\cdot\frac{\Phi_i(t_{ij})-\Phi_j(t_{ij})}{p}, \end{equation} So we have \begin{equation} h_{ij}(\theta\otimes_\Phi 1)(\Phi^*e_{i,\cdot}) =(\Phi^*e_{i,\cdot})\cdot \mathcal G^\Delta_{ij} \end{equation} and \begin{equation} \mathcal G_{ij}=\Phi(\mathcal{F}_{ij})\exp(\mathcal G^\Delta_{ij}) \end{equation} where \begin{equation*} \mathcal G^\Delta_{ij}=\Phi(A_{\theta,ij}) \frac{\Phi_i(t_{ij})-\Phi_j(t_{ij})}{p}. \eqno{(\mathcal G^\Delta_{ij})} \end{equation*} \subsubsection*{Computation of our example:} Let $\lambda\in W_2(k)$ with $\lambda\not\equiv 0,1\pmod{p}$ and let $X_2=\mathrm{Proj}\, W_2[T_0,T_1]$. Let $D$ be the divisor of $X_2$ associated to the homogeneous ideal $(T_0T_1(T_1-T_0)(T_1-\lambda T_0))$. By using $t=T_0^{-1}T_1$ as a parameter, we can simply write $D=\{0,1,\lambda,\infty\}$. Denote $U_1=X_2 \setminus \{0,\infty\}$, $U_2=X_2 \setminus \{1,\lambda\}$, $D_1=\{1,\lambda\}$ and $D_2=\{0,\infty\}$. Then $\{U_1,U_2\}$ forms a covering of $X_2$, \[\begin{split} & R_1=\mathcal O(U_1)=W_2[t,\frac1t], \\ & R_2=\mathcal O(U_2)=W_2[\frac{t-\lambda}{t-1},\frac{t-1}{t-\lambda}],\\ & R_{12}=\mathcal O(U_1\cap U_2)=W_2[t,\frac1t,\frac{t-\lambda}{t-1},\frac{t-1}{t-\lambda}],\\ & \Omega_{X_2}^1(\log D)(U_1)=W_2[t,\frac1t]\cdot \mathrm{d}\log\left(\frac{t-\lambda}{t-1}\right),\\ & \Omega_{X_2}^1(\log D)(U_2)=W_2[\frac{t-\lambda}{t-1},\frac{t-1}{t-\lambda}]\cdot \mathrm{d}\log t.\\ \end{split}\] Over $U_{12}$, one has \[\mathrm{d}\log \left(\frac{t-\lambda}{t-1}\right)=\frac{(\lambda-1)t}{(t-\lambda)(t-1)}\cdot \mathrm{d}\log t.\] Denote $\Phi_1(\frac{t-\lambda}{t-1})=\left(\frac{t-\lambda}{t-1}\right)^p$ and $\Phi_2(t)=t^p$, which induce two Frobenius liftings on $R_{12}$. 
One checks that $\Phi_i$ can be restricted on $R_i$ and forms a log Frobenius lifting respecting the divisor $D_i$. Moreover \begin{equation}\label{equ:dF_1/p} \frac{\mathrm{d} \Phi_1}{p}\left(\mathrm{d}\log \frac{t-\lambda}{t-1} \otimes_{\Phi} 1\right) = \mathrm{d}\log \frac{t-\lambda}{t-1}, \end{equation} and \begin{equation}\label{equ:dF_2/p} \frac{\mathrm{d} \Phi_2}{p}\left(\mathrm{d}\log t \otimes_{\Phi} 1\right) = \mathrm{d}\log t. \end{equation} \paragraph{\emph{Local expressions of the Higgs field and the de Rham bundle.}} Let $(E,\theta)$ be a logarithmic graded semistable Higgs bundle over $X_1=\mathbb P^1_k$ with $E=\mathcal O\oplus \mathcal O(1)$. Then the cokernel of \[\theta: \mathcal O(1) \rightarrow \mathcal O\otimes \Omega_{X_1}^1(\log \overline{D})\] is supported at one point $a\in\mathbb P_{k}^1(\overline{k})$, which is called the zero of the Higgs field. Conversely, for any given $a\in\mathbb P_{k}^1(\overline{k})$, up to isomorphic, there is a unique graded semistable logarithmic Higgs field on $\mathcal O\oplus \mathcal O(1)$ such that its zero equals to $a$. Assume $a\neq \infty$, we may choose and fix a basis $e_{i,j}$ of $\mathcal O(j-1)$ over $U_i$ for $1\leq i,j\leq 2$ such that \begin{equation} \mathcal{F}_{12}=\left(\begin{array}{cc} 1 & 0\\0 & \frac{t}{t-1}\\ \end{array}\right), \end{equation} \begin{equation} \omega_{\theta,1}=\left(\begin{array}{cc} 0 & \frac{t-a}{\lambda-1}\\0 & 0\\ \end{array}\right)\cdot\mathrm{d}\log\frac{t-\lambda}{t-1}, \end{equation} By (\ref{nablai}), we have \begin{equation} \omega_{\nabla,1}= \left(\begin{array}{cc} 0 & \left(\frac{t-a}{\lambda-1}\right)^p\\ 0 & 0 \\ \end{array}\right) \cdot\mathrm{d}\log\frac{t-\lambda}{t-1} \end{equation} We choose $t_{12}=t$ as the parameter on $U_{12}$. Then \begin{equation} A_{\theta,12}=\left(\begin{array}{cc} 0 & \frac{t-a}{(t-1)(t-\lambda)}\\0 & 0\\ \end{array}\right) \end{equation} \begin{equation} \mathcal G^\Delta_{12} =\Phi(A_{\theta,12})\cdot z_{12} \end{equation} \begin{equation} \mathcal G_{12}=\left(\begin{array}{cc} 1& g \\ 0 & \frac{t^p}{(t-1)^p}\\ \end{array}\right) \end{equation} where \begin{equation*} z_{12}= \frac{\Phi_1(t)-\Phi_2(t)}{p} \eqno{(z_{12})} \end{equation*} and \begin{equation*} g=\frac{(t-a)^p}{(t-\lambda)^p(t-1)^p} \cdot z_{12}. \eqno{(g)} \end{equation*} \paragraph{\emph{Hodge filtration.}} Since $X_1= \mathbb{P}^1_k$ and $(V,\nabla)$ is semi-stable of degree $p$, the bundle $V$ is isomorphic to $\mathcal{O}(m) \oplus \mathcal{O}(m+1)$ with $p=2m+1$. So the filtration on $(V,\nabla)$ \[ 0\subset \mathcal{O}(m+1) \subset V \] is the graded semi-stable Hodge filtration on $V$. Choose a basis $e_i$ of $\mathcal{O}(m+1)$ on $U_i$ such that $e_1=\left(\frac{t}{t-1}\right)^{m+1}e_2$ on $U_{12}$. In the following, we will write down the inclusion map $\iota:\mathcal{O}(m+1) \rightarrow V$ explicitly via those basis. Before this, we shall fix some notations $\mathrm{pr}$, $A$ and $\alpha_i$. 
The map $\mathrm{pr}$ is the quotient map of $k$-vector spaces \begin{equation*} \mathrm{pr}:R_{12} \twoheadrightarrow \frac{R_{12}}{R_1+\left(\frac{t}{t-1}\right)^{m+1} R_2} \eqno{(\mathrm{pr})} \end{equation*} For all $n\in \{1,2,\cdots,p-2\}$, we denote \begin{equation}\label{nota:delta} \delta_n=\frac{\lambda^p(1-a^p)-(\lambda^p-a^p)\lambda^n}{n}, \end{equation} and $A=A(\lambda,a)$ the matrix of size $m\times (m+1)$ \begin{equation*} A=\left( \begin{array}{ccccc} \delta_{m} & \delta_{m+1} & \cdots &\delta_{p-2} &\delta_{p-1} \\ \delta_{m-1} & \delta_{m} & \ddots &\vdots &\vdots \\ \vdots & \ddots & \ddots &\delta_{m+1} &\delta_{m+2} \\ \delta_{1} & \cdots & \delta_{m-1}&\delta_{m} &\delta_{m+1} \\ \end{array} \right)_{m\times (m+1)} \eqno{(A)} \end{equation*} For $m+1\leq i\leq p$, we denote by $A_i$ the submatrix of $A$ by removing the $(i-m)$-column \begin{equation}\label{matrixA_i} A_{i}=\left( \begin{array}{cccccc} \delta_{m} & \cdots & \delta_{i-2} & \delta_{i} &\cdots &\delta_{p-1} \\ \delta_{m-1} & \cdots & \delta_{i-3} & \delta_{i-1} &\cdots &\delta_{p-2} \\ \vdots & \ddots & \vdots &\vdots &\ddots &\vdots \\ \delta_{2} & \cdots & \delta_{i-m} & \delta_{i+2-m} &\cdots &\delta_{m+2} \\ \delta_{1} & \cdots & \delta_{i-1-m}&\delta_{i+1-m} &\cdots &\delta_{m+1} \\ \end{array} \right). \end{equation} and \begin{equation}\label{nota:alpha} \alpha_{i}=(-1)^{i}\cdot\det A_i \end{equation} Obviously, the vector $(\alpha_{m+1},\alpha_{m+2},\cdots,\alpha_p)^T$ is a solution of $AX=0$. \begin{lem}\label{mainlem: f h} $\mathrm{i)}$. Let $f,h$ be two elements in $\overline{R}_1$. Then the $\overline{R}_1$-linear map from $\overline{R}_1\cdot e_1$ to $V(\overline{U}_1)$, which maps $e_1$ to $e_{11}\otimes_\Phi h+e_{12}\otimes_\Phi f$, can be extended to a global map of vector bundles $\mathcal O(m+1)\rightarrow V$ if and only if \[f\in \sum_{i=0}^m k\cdot\frac1{t^i} \quad \text{ and } \quad \mathrm{pr}(fg)=0.\] $\mathrm{ii)}$. Suppose $f=(1,\frac1t,\cdots,\frac{1}{t^m})X$ with $X\in k^{(m+1)\times 1}$. Then $\mathrm{pr}(fg)=0$ if and only if $AX=0$. $\mathrm{iii)}$. The matrix $A$ is of maximal rank and the vector $(\alpha_{m+1},\alpha_{m+2},\cdots,\alpha_p)^T$ is a $k$-basis of the $1$-dimensional space of solutions of $AX=0$. \end{lem} \begin{proof} i). Over $U_{12}$, one has \begin{equation} \iota(e_2)=(e_{21}\otimes_\Phi 1,e_{22}\otimes_\Phi 1)\left( \begin{array}{c} (h+fg)\cdot \left(\frac{t-1}{t}\right)^{m+1}\\ \\ f\cdot \left(\frac{t}{t-1}\right)^{m}\\ \end{array} \right) \end{equation} Thus $\iota$ can be extended globally if and only if $h+fg\in (\frac{t}{t-1})^{m+1} R_2$ and $f\cdot (\frac{t}{t-1})^{m}\in R_2$. That is equivalent to $f\in R_1\cap \left(\frac{t-1}{t}\right)^m R_2$ and $fg\in R_1+(\frac{t}{t-1})^{m+1} R_2$. The result follows from the fact that $R_1\cap \left(\frac{t-1}{t}\right)^m R_2$ is a $k$-vector space with a basis $\{1,\frac1t,\cdots,\frac{1}{t^m}\}$. ii). By directly computation, one checks that \[\mathrm{pr}(fg)=\mathrm{pr}\left(\frac{t}{t^p-1},\frac{t^2}{t^p-1},\cdots,\frac{t^{m}}{t^p-1}\right)\cdot A \cdot \left(\begin{array}{c} a_{m+1}\\ a_{m+2}\\ \vdots \\ a_p\\ \end{array}\right). \] iii). Since $V\simeq \mathcal O(m)\oplus\mathcal O(m+1)$, the $k$-vector space $\mathrm{Hom}(\mathcal O(m+1),V)$ is of $1$-dimensional. By i) and ii), the $k$-vector space of solutions of $AX=0$ is of $1$-dimensional. \end{proof} \paragraph{\emph{Two notations $[\cdot]$ and $\{\cdot\}$.}} We have inclusion maps $\overline{R}_i\rightarrow \overline{R}_{12}$, for $i=1,2$. 
Under these inclusions, we have the following direct sum decomposition as free $k$-vector spaces \[\overline{R}_{12}= \overline{R}_1 \oplus \frac{t}{t-1} \overline{R}_2.\] We denote the projection map to the first summand by $[\cdot]$ and the projection map to the second summand by $\{\cdot\}$. Denote \begin{equation} f_o=\frac{\alpha_{m+1}t^{m+1}+\alpha_{m+2}t^{m+2}+\cdots+\alpha_pt^p }{t^p}\quad \text{ and } \quad h_o=-[f_og]. \end{equation} By Lemma~\ref{mainlem: f h}, we have following result. \begin{cor}\label{cor:HodgeFil} the Hodge filtration of $(V,\nabla)$ on $U_1$ is given by \[0\subset \overline{R}_1\cdot v_{12} \subset V(\overline{U}_1),\] where $v_{12}=e_{11}\otimes_\Phi h_o + e_{12}\otimes_\Phi f_o$. \end{cor} \paragraph{\emph{The Higgs field of the graded Higgs bundle.}} We extend $v_{12}$ (defined in Corollary~\ref{cor:HodgeFil}) to an $\overline{R}_1$-basis $\{v_{11},v_{12}\}$ of $V(\overline{U}_1)$. Assume $v_{11}=e_{11}\otimes_\Phi h_1 + e_{12}\otimes_\Phi f_1$ and denote $P=\left(\begin{array}{cc} h_1& h_2\\ f_1 & f_2\\ \end{array}\right)$, which is an invertible matrix over $\overline{R}_1$ with determinant $d:=\det(P)\in \overline{R}_1^\times$. One has \begin{equation} (v_{11},v_{12})=(e_{11}\otimes_\Phi 1,e_{12}\otimes_\Phi 1)\left(\begin{array}{cc} h_1 & h_o\\f_1 & f_o\\ \end{array}\right) \end{equation} and \begin{equation} \nabla(v_{11},v_{12})= (v_{11},v_{12})\cdot \upsilon_{\nabla,1} \end{equation} where $\upsilon_{\nabla,1}=\left(P^{-1}\cdot \mathrm{d}P+ P^{-1}\cdot \omega_{\nabla,1} \cdot P\right)$ equals to \begin{equation*} \left(\begin{array}{cc} \frac{f_o\mathrm{d}h_1-h_o\mathrm{d}f_1}{\mathrm{d}\log \frac{t-\lambda}{t-1}} + f_1f_o \left(\frac{t-a}{\lambda-1}\right)^p & \frac{f_o\mathrm{d}h_o-h_o\mathrm{d}f_o}{\mathrm{d}\log \frac{t-\lambda}{t-1}} +f_o^2 \left(\frac{t-a}{\lambda-1}\right)^p \\ \frac{-f_1\mathrm{d}h_1+h_1\mathrm{d}f_1 }{\mathrm{d}\log \frac{t-\lambda}{t-1}}-f_1^2 \left(\frac{t-a}{\lambda-1}\right)^p & \frac{-f_1\mathrm{d}h_o+h_1\mathrm{d}f_o}{\mathrm{d}\log \frac{t-\lambda}{t-1}}-f_1f_o \left(\frac{t-a}{\lambda-1}\right)^p \\ \end{array}\right)\cdot \frac{\mathrm{d}\log \frac{t-\lambda}{t-1}}{d}. \end{equation*} Taking the associated graded Higgs bundle, the Higgs field $\theta'$ on $\mathrm{Gr}(V,\nabla,\Fil)(\overline{U}_1)=V(\overline{U}_1)/(\overline{R}_1\cdot v_{12}) \oplus \overline{R}_1\cdot v_{12}$ is given by \begin{equation}\label{equ:gradingHiggsField} \theta'(e_{12}')= \frac1d\left(\frac{f_o\mathrm{d}h_o-h_o\mathrm{d}f_o}{\mathrm{d}\log \frac{t-\lambda}{t-1}} +f_o^2 \left(\frac{t-a}{\lambda-1}\right)^p\right)\cdot \left(e'_{11}\otimes \mathrm{d}\log \frac{t-\lambda}{t-1}\right) \end{equation} over $\overline{U}_1$, where $e'_{11}$ is the image of $v_{11}$ in $V(\overline{U}_1)/(\overline{R}_1\cdot v_{12})$ and $e'_{12}=v_{12}$ in $\overline{R}_1v_{12}$. Thus the zero of the graded Higgs bundle $\mathrm{Gr}(V,\nabla,\Fil)$ is the root of polynomial \begin{equation} P_{\theta'}(t)= \frac{f_o\cdot\mathrm{d}h_o-h_o\cdot \mathrm{d}f_o}{\mathrm{d}\log \frac{t-\lambda}{t-1}} +f_o^2\cdot \left(\frac{t-a}{\lambda-1}\right)^p. \end{equation} \begin{lem}\label{lem:gradHiggPoly} Define $\alpha_p$ and $\alpha_{m+1}$ as in (\ref{nota:alpha}). 
Then \[ P_{\theta'}(t)= \frac{\alpha_p^2}{\lambda-1}t-\frac{\alpha_{m+1}^2}{\lambda-1}\cdot\frac{a^p}{\lambda^{p-1}}.\] \end{lem} \begin{proof} Since $h_o=-[f_og]=\{f_og\}-f_og$, the polynomial $P_{\theta'}(t)$ equals to \[\frac{(t-\lambda)(t-1)}{\lambda-1} \left( f_o\frac{\mathrm{d}\{f_og\}}{\mathrm{d}t} -\{f_og\}\frac{\mathrm{d}f_o}{\mathrm{d}t} \right)+ f_o^2\left( \left(\frac{t-a}{\lambda-1}\right)^p -\frac{(t-\lambda)(t-1)}{\lambda-1}\cdot\frac{\mathrm{d}g}{\mathrm{d}t} \right).\] Recall that $\Phi_2(t)=t^p$ and $\Phi_1(t)=\frac{\left(\frac{t-\lambda}{t-1}\right)^p-\lambda^\sigma}{\left(\frac{t-\lambda}{t-1}\right)^p-1}$, one has $\mathrm{d}\Phi_2(t)=p\cdot t^{p-1}\mathrm{d}t$ and $\mathrm{d}\Phi_1(t)=p\cdot\frac{(t-1)^{p-1}(t-\lambda)^{p-1}}{(1-\lambda)^{p-1}}\mathrm{d}t$. Since $g=\frac{(t-a)^p}{(t-\lambda)^p(t-1)^p} \cdot z_{12}$ with $z_{12}=\frac{\Phi_1(t)-\Phi_2(t)}{p}$, we have \[\left(\frac{t-a}{\lambda-1}\right)^p -\frac{(t-\lambda)(t-1)}{\lambda-1}\cdot\frac{\mathrm{d}g}{\mathrm{d}t}=\frac{(t-\lambda)(t-1)}{\lambda-1}\cdot\frac{t^{p-1}(t^p-a^p)}{(t^p-\lambda^p)(t^p-1)}.\] \[\frac{\mathrm{d}g}{\mathrm{d}t}=\frac{(t-a)^p}{(t-\lambda)^p(t-1)^p}\cdot\left((\lambda-1)^{p-1}(t-1)^{p-1}(t-\lambda)^{p-1}-t^{p-1}\right)\] \emph{Claim}: Suppose $G$ is some power series contained in \[ \left\{\sum\limits_{\ell=m+1}^{\infty} a_{\ell}\cdot\left(\frac{t}{t-1}\right)^\ell +\sum\limits_{\ell=m+1}^{\infty}b_{\ell}\cdot \left(\frac{t}{t-\lambda}\right)^\ell | a_{\ell},b_{\ell} \in k \right\} \] and $F$ belongs to $\left\{\sum\limits_{i=0}^{m}a_i\cdot \frac1{t^i} |a_i \in k\right\}$. Then $\frac{(t-1)(t-\lambda)}{\lambda-1}\left(F\frac{\mathrm{d}G}{\mathrm{d}t}-G\frac{\mathrm{d}F}{\mathrm{d}t}\right)$ is contained in $R_2$ and divided by $\frac{t}{t-1}$ in $R_2$. The Claim follows from \[\frac{(t-1)(t-\lambda)}{\lambda-1} \left( \left(\frac{t-1}{t}\right)^i \frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{t}{t-1}\right)^j- \left(\frac{t}{t-1}\right)^j \frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{t-1}{t}\right)^i \right)\] \[=(i+j)\left( \left(\frac{t}{t-1}\right)^{j-i} -\frac{\lambda}{\lambda-1}\cdot \left(\frac{t}{t-1}\right)^{j-1-i}\right),\] and \[\frac{(t-1)(t-\lambda)}{\lambda-1} \left( \left(\frac{t-\lambda}{t}\right)^i \frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{t}{t-\lambda}\right)^j- \left(\frac{t}{t-\lambda}\right)^j \frac{\mathrm{d}}{\mathrm{d}t} \left(\frac{t-\lambda}{t}\right)^i \right)\] \[=(i+j)\left(-\left(\frac{t}{t-\lambda}\right)^{j-i} -\frac{1}{\lambda-1}\cdot \left(\frac{t}{t-\lambda}\right)^{j-1-i}\right).\] By the claim, $\frac{(t-\lambda)(t-1)}{\lambda-1} \left( f_o\frac{\mathrm{d}\{f_og\}}{\mathrm{d}t} -\{f_og\}\frac{\mathrm{d}f_o}{\mathrm{d}t} \right)\in \frac{t}{t-1}\overline{R}_2$. i.e. 
\[\left[\frac{(t-\lambda)(t-1)}{\lambda-1} \left( f_o\frac{\mathrm{d}\{f_og\}}{\mathrm{d}t} -\{f_og\}\frac{\mathrm{d}f_o}{\mathrm{d}t} \right)\right]=0.\] On the other hand, since $P_{\theta'}(t)\in R_1$, one has \begin{equation*} \begin{split} P_{\theta'}(t) & =[P_{\theta'}(t)]\\ & = \left[f_o^2\left( \left(\frac{t-a}{\lambda-1}\right)^p -\frac{(t-\lambda)(t-1)}{\lambda-1}\cdot\frac{\mathrm{d}g}{\mathrm{d}t}\right)\right]\\ &=\left[f_o^2\left(\frac{(t-\lambda)(t-1)}{\lambda-1}\cdot\frac{t^{p-1}(t^p-a^p)}{(t^p-\lambda^p)(t^p-1)}\right)\right]\\ &=\left[\frac{(\alpha_{m+1}+\alpha_{m+2}t+\cdots+\alpha_pt^m)^2\cdot (t-a)^p }{(\lambda-1)(t-\lambda)^{p-1}(t-1)^{p-1}}\right]\\ \end{split} \end{equation*} Obviously, there are polynomials $f_\infty(t)$, $f_1(t)$ and $f_\lambda(t)$, which are divisible by $t$, such that \begin{equation*} \frac{(\alpha_{m+1}+\alpha_{m+2}t+\cdots+\alpha_pt^m)^2\cdot (t-a)^p }{(\lambda-1)(t-\lambda)^{p-1}(t-1)^{p-1}}-f_\infty(t)-f_1\left(\frac{t}{t-1}\right)-f_\lambda\left(\frac{t}{t-\lambda}\right) \end{equation*} is a constant. Evaluating at $t=0$, this constant is just $-\frac{\alpha_{m+1}^2}{\lambda-1}\cdot\frac{a^p}{\lambda^{p-1}}$. By the definition of $[\cdot]$, we know that \[ P_{\theta'}(t)=f_\infty(t) -\frac{\alpha_{m+1}^2}{\lambda-1}\cdot\frac{a^p}{\lambda^{p-1}}.\] Since $\deg((\alpha_{m+1}+\alpha_{m+2}t+\cdots+\alpha_pt^m)^2\cdot (t-a)^p)-\deg((\lambda-1)(t-\lambda)^{p-1}(t-1)^{p-1})=1$, the polynomial $f_\infty(t)$ is of degree $1$. Comparing the leading coefficients, we get \[f_\infty(t)=\frac{\alpha_p^2}{(\lambda-1)}\cdot t.\qedhere\] \end{proof} \begin{thm}\label{Thm:selfmap_formula} Let $A_p$ and $A_{m+1}$ be defined as in (\ref{matrixA_i}). Then the self-map $\varphi_{\lambda,p}$ is given by \[\varphi_{\lambda,p}(a) = \frac{a^p}{\lambda^{p-1}}\cdot \left(\frac{\det(A_{m+1})}{\det(A_p)}\right)^2.\] \end{thm} \begin{proof} For $a\neq\infty$, this follows from Lemma~\ref{lem:gradHiggPoly}. For $a=\infty$, we can change the parameter $t$ and compute the self-map at $a$ in the same way. \end{proof} \subsection{Multiplication by $p$ map on an elliptic curve} Let $k$ be a perfect field of characteristic $p\geq 3$. Let $\lambda\in k$ with $\lambda\neq 0,1$. The Weierstrass equation \[y^2=x(x-1)(x-\lambda)\] defines an elliptic curve $C_{\lambda}$ over $k$. Let $Q_1=(a,b)$ be a $k$-point on $C_\lambda$. We denote \[Q_n=(a_n,b_n):=\underbrace{Q_1+Q_1+\cdots+Q_1}_n.\] Then $a_n$ is a rational function of $a$. In this appendix, we will give an explicit formula for this rational function in the case $n=p$. Without loss of generality, we may assume that $k$ is algebraically closed and $a\neq 0,1,\lambda,\infty$. Since the divisor $(p+1)(\infty)-p(Q_1)$ is of degree $1$, the space of global sections of its associated line bundle is $1$-dimensional. Choosing a nontrivial global section $\alpha$, we have \[\mathrm{div}(\alpha)=p(Q_1)+(Q_p)-(p+1)(\infty),\] and $\alpha$ is also a global section of $\mathcal O_{C_\lambda}\Big((p+1)(\infty)\Big)$. On the other hand, the $k$-vector space of global sections of $\mathcal O_{C_\lambda}\Big((p+1)(\infty)\Big)$ is $(p+1)$-dimensional, with basis ($m=\frac{p-1}{2}$) \[1,x,x^2,\cdots,x^{m+1},y,yx,\cdots,yx^{m-1}.\] So we can write $\alpha$ in the form \[\alpha=f-yg,\] where $f,g\in k[x]$ with $\deg(f)\leq m+1$ and $\deg(g)\leq m-1$.
We have \begin{equation} \begin{split} \mathrm{div}((x-a)^p(x-a_p)) & =p(Q_1)+p(-Q_1)+(Q_p)+(-Q_p)-(2p+2)(\infty)\\ &=\mathrm{div}(\alpha\overline{\alpha}), \end{split} \end{equation} where $\overline{\alpha}:=f+yg$. By multiplying $\alpha$ by a suitable constant, we may assume \begin{equation}\label{TwoWayFaction} (x-a)^p(x-a_p)=\alpha\overline{\alpha}=f^2-x(x-1)(x-\lambda)g^2. \end{equation} Comparing the degrees on both sides, one gets \[\deg(f)=m+1 \text{ and } \deg(g)\leq m-1.\] Writing $f$ in the form $f=\beta_0+\beta_1x+\cdots+\beta_{m+1}x^{m+1},$ we denote \begin{equation} \beta=\left(\begin{array}{c} \beta_0\\ \beta_1\\ \cdots\\ \beta_{m+1}\\ \end{array}\right) \end{equation} and compare the leading terms and the constant terms on both sides of (\ref{TwoWayFaction}). Then one gets \begin{equation} a_p=\frac{1}{a^p} \left(\frac{\beta_0}{\beta_{m+1}}\right)^2. \end{equation} So in order to get the rational function we want, we need to determine the ratio $[\beta_0:\beta_{m+1}]$. In the following, we will define a full rank matrix $B$ of size $(m+1)\times(m+2)$ such that $(\beta_0,\beta_1,\cdots,\beta_{m+1})^T$ is a non-zero solution of $BX=0$. Then the ratio $[\beta_0:\beta_{m+1}]$ can be described by the determinants of submatrices of $B$. Expand the polynomial \begin{equation} \Big(x(x-1)(x-\lambda)\Big)^{m}=\gamma_m x^m+\gamma_{m+1} x^{m+1}+\cdots+\gamma_{3m} x^{3m}, \end{equation} where \begin{equation}\label{nota:gamma_n} \gamma_n=(-1)^{m+n}\sum_{\begin{array}{c} i+j=n-m\\ 0\leq i,j\leq m\\ \end{array}}{m\choose i}{m\choose j}\lambda^{m-j}, \end{equation} and denote \begin{equation*} B=\left(\begin{array}{rrrrrr} \gamma_m & a^p\gamma_{3m} & a^p\gamma_{3m-1} &\cdots & a^p\gamma_{2m+1} & a^p\gamma_{2m}\\ \gamma_{m+1} & \gamma_{m} & a^p\gamma_{3m} &\cdots & a^p\gamma_{2m+2} & a^p\gamma_{2m+1}\\ \gamma_{m+2} & \gamma_{m+1} &\gamma_{m} &\cdots & a^p\gamma_{2m+3} & a^p\gamma_{2m+2}\\ \vdots &\vdots &\vdots&\ddots &\vdots &\vdots \\ \gamma_{2m} & \gamma_{2m-1} &\gamma_{2m-2} &\cdots & \gamma_{m} & a^p\gamma_{3m}\\ \end{array}\right). \eqno{(B)} \end{equation*} \begin{lem} $B\cdot\beta=0$. \end{lem} \begin{proof} Since $a\neq 0,1,\lambda,\infty$, the function $x(x-1)(x-\lambda)$ is invertible in $k[[x-a]]$. Thus $\frac1{\sqrt{x(x-1)(x-\lambda)}}$ is an element of $k[[x-a]]$. Since \[\Big(x(x-1)(x-\lambda)\Big)^{p}\equiv \Big(a(a-1)(a-\lambda)\Big)^{p} \pmod{(x-a)^p}, \] one has \begin{equation}\label{equ:sqrt1} \sqrt{x(x-1)(x-\lambda)} \equiv \pm\frac{\Big(a(a-1)(a-\lambda)\Big)^{p/2}}{\Big(x(x-1)(x-\lambda)\Big)^{m}} \pmod{(x-a)^p}. \end{equation} Since $(x-a)^p(x-a_p)=f^2-x(x-1)(x-\lambda)g^2$ and $x-a\nmid fg$, one gets \begin{equation}\label{equ:sqrt2} \sqrt{x(x-1)(x-\lambda)} \equiv \pm \frac{f}{g} \pmod{(x-a)^p}. \end{equation} Now comparing (\ref{equ:sqrt1}) and (\ref{equ:sqrt2}), one gets \begin{equation}\label{equ:mainf} f\cdot \Big(x(x-1)(x-\lambda)\Big)^{m} \equiv \pm \Big(a(a-1)(a-\lambda)\Big)^{p/2}\cdot g \pmod{(x-a)^p}. \end{equation} Consider the map of $k$-vector spaces \begin{equation*} \mathrm{pr'}: k[[x-a]] \twoheadrightarrow \frac{k[[x-a]]}{(x-a)^p\cdot k[[x-a]]+\sum\limits_{i=0}^{m-1} k\cdot x^i } \eqno{(\mathrm{pr'})} \end{equation*} From (\ref{equ:mainf}), we have $\mathrm{pr'}\left(f\cdot \Big(x(x-1)(x-\lambda)\Big)^{m}\right)=0$.
By direct computation, one checks that for all $0\leq i\leq m+1$ \[\mathrm{pr'}\left(x^i\cdot \Big(x(x-1)(x-\lambda)\Big)^{m}\right)= \sum_{j=m}^{3m} \gamma_j\cdot \mathrm{pr'}(x^{i+j}).\] Since $x^{n+p}\equiv a^px^n \mod(x-a)^p$, one has $\mathrm{pr'}(x^{n+p})=a^p\cdot \mathrm{pr'}(x^n)$. Thus \[\mathrm{pr'}\left(x^i\cdot \Big(x(x-1)(x-\lambda)\Big)^{m}\right)=\sum_{j=m+i}^{p-1} \gamma_{j-i}\cdot \mathrm{pr'}(x^{j})+ \sum_{j=m}^{m+i-1} a^p\gamma_{p+j-i}\cdot \mathrm{pr'}(x^{j}) \] Therefore \begin{equation} \mathrm{pr'}\left(f\cdot \Big(x(x-1)(x-\lambda)\Big)^{m}\right)=\mathrm{pr'}(x^m,x^{m+1},\cdots,x^{2m})\cdot (B\cdot\beta) \end{equation} The lemma follows. \end{proof} Recall that $\gamma_n$ is defined in~(\ref{nota:gamma_n}), we denote $B_0$ and $B_{m+1}$ to be two submatrices of $B$ as following \begin{equation}\label{matrixB0} B_0=\left(\begin{array}{rrrrr} a^p\gamma_{3m} & a^p\gamma_{3m-1} &\cdots & a^p\gamma_{2m+1} & a^p\gamma_{2m}\\ \gamma_{m} & a^p\gamma_{3m} &\cdots & a^p\gamma_{2m+2} & a^p\gamma_{2m+1}\\ \gamma_{m+1} &\gamma_{m} &\cdots & a^p\gamma_{2m+3} & a^p\gamma_{2m+2}\\ \vdots &\vdots&\ddots &\vdots &\vdots \\ \gamma_{2m-1} &\gamma_{2m-2} &\cdots & \gamma_{m} & a^p\gamma_{3m}\\ \end{array}\right) \end{equation} \begin{equation}\label{matrixB_m+1} B_{m+1}= \left(\begin{array}{rrrrrr} \gamma_m & a^p\gamma_{3m} & a^p\gamma_{3m-1} &\cdots & a^p\gamma_{2m+1} \\ \gamma_{m+1} & \gamma_{m} & a^p\gamma_{3m} &\cdots & a^p\gamma_{2m+2} \\ \gamma_{m+2} & \gamma_{m+1} &\gamma_{m} &\cdots & a^p\gamma_{2m+3} \\ \vdots &\vdots &\vdots&\ddots &\vdots \\ \gamma_{2m} & \gamma_{2m-1} &\gamma_{2m-2} &\cdots & \gamma_{m} \\ \end{array}\right). \end{equation} \begin{cor} $[\beta_0:\beta_{m+1}]=[(-1)^{m+1}\det(B_0):\det(B_{m+1})]$. \end{cor} Now, we get the self-map on $\mathbb P^1_k$ induced by the multiplication by $p$ map. \begin{thm}\label{thm:multp_formula} $a_p=\frac{1}{a^p}\cdot \left(\frac{\det(B_{0})}{\det(B_{m+1})}\right)^2$. \end{thm} \begin{prop}\label{compTwoConj} The Conjecture~\ref{var_conj} implies the equation (\ref{equ:main}) and the Conjecture~\ref{conj-1}. \end{prop} \begin{proof} Regard every terms of $A_{m+1}$, $A_p$, $B_0$ and $B_{m+1}$ as polynomials of $a,\lambda$. One checks directly that \begin{equation*} A^T_{m+1}(\lambda,a)=\lambda^{2p}a^pA_p(\frac1\lambda,\frac{1}{a}) \end{equation*} \begin{equation*} B^T_{0}(\lambda,a)=\lambda^{m}a^pB_{m+1}(\frac1\lambda,\frac{1}{a}) \end{equation*} On the other hand, by Conjecture~\ref{var_conj}, \[\det(A_p(\frac1\lambda,\frac1a))=c\lambda^{-2m^2}(1-\lambda)^{m^2}\cdot \det(B_{m+1}(\frac1\lambda,\frac1a)).\] Thus \[\det(A_{m+1})=c\lambda^{m^2+m}(1-\lambda)^{m^2}a^{-p}\cdot \det(B_0).\] The Proposition follows. \end{proof} \begin{rmk} The Conjecture~\ref{var_conj} holds for odd prime $p<50$. This was checked directly by using Maple. By Proposition~\ref{compTwoConj}, our main conjecture holds for $p<50$. \end{rmk} \section{Appendix: the torsor maps induced by inverse Cartier functor and Grading functor} In this section, we will describe the torsor map induced by inverse Cartier functor (Proposition~\ref{prop:torsor IC}) and Grading functor (Proposition~\ref{prop:torsor_Grading}) via maps between cohomology groups. \subsection*{Some notations} Let $T$ be a vector space and assume $L$ is $T$-torsor space. Then for a given point $\ell\in L$, the torsor structure gives a bijection $\iota_\ell: L\rightarrow T$. 
We call $\iota_\ell(\ell')$ the difference between $\ell'$ and $\ell$, or we say that $\ell'$ differs from $\ell$ by $\iota_\ell(\ell')$. Denote $c(\ell',\ell):=\iota_\ell(\ell')$. Denote $W= W(k)$ and $W_n=W/p^nW$. Let $\mathcal{X}$ be a proper smooth $W$-scheme with normal crossing divisor ${\mathcal D}$, $X_n := \mathcal{X} \times_W W_n$ and $D_n:={\mathcal D}\otimes_WW_n$. Let ${\mathcal X} = \bigcup {\mathcal U}_i$ be a covering of small affine open subsets and let $\Phi_i$ be a Frobenius lifting on ${\mathcal U}_i$ which preserves the log divisor, i.e. $\Phi_i^*({\mathcal D}\cap {\mathcal U}_i)=p({\mathcal D}\cap {\mathcal U}_i)$. Denote ${\mathcal U}_{ij}={\mathcal U}_i\cap{\mathcal U}_j$. For a vector bundle $L$ over $X_n$, write $L({\mathcal U}_i):=L({\mathcal U}_i\times_WW_n)$ and $L({\mathcal U}_{ij}):=L(({\mathcal U}_i\cap{\mathcal U}_j)\times_WW_n)$ for short. In this section, all Higgs bundles and de Rham bundles are logarithmic with respect to the divisor ${\mathcal D}$. \subsection{Lifting space of Higgs Bundles and de Rham bundles}\label{section_LSHBFB} Let $(\overline{E},\overline{\theta})$ be a Higgs bundle over $X_{n-1}$. Denote its reduction modulo $p$ by $(E , \theta)$. Now we want to study the space of $W_n$-liftings of $(\overline{E},\overline{\theta})/X_{n-1}$.\\ \begin{lem}\label{lem:HiggsTorsor} The space of $W_n$-liftings of $(\overline{E},\overline{\theta})/X_{n-1}$ is an $H^1_{Hig}(X_1 , \mE nd (E , \theta))$-torsor. \end{lem} \begin{proof} Consider two $W_n$-lifting $(\check E,\check \theta)/X_n$ and $(\hat E,\hat\theta)$ of $(\overline{E},\overline{\theta})/X_{n-1}$. Denote by $(\check E_i , \check\theta_i)$ (resp. $(\hat E_i , \hat\theta_i)$) the restriction of $(\check E,\check \theta)$ (resp. $(\hat E, \hat\theta)$) on ${\mathcal U}_i\times_WW_n$. Locally we can always find isomorphisms $\gamma_i: \check E_i \overset{\sim}{\longrightarrow} \hat E_i$ over ${\mathcal U}_i\times_WW_n$ which lifts $\mathrm{id}_{\overline{E}_i}$. Set \begin{equation*} f_{ij}:= \gamma^{-1}_j|_{{\mathcal U}_{ij}}\circ \gamma_i|_{{\mathcal U}_{ij}} - \mathrm{id} \in \mE nd(\check E)({\mathcal U}_{ij}) \end{equation*} \begin{equation*} \omega_i := \gamma_i^{-1} \circ \hat\theta_i \circ \gamma_i - \check\theta_i \in \mE nd(\check E)({\mathcal U}_i) \otimes \Omega^1_{{\mathcal X}/W}(\log {\mathcal D})({\mathcal U}_i) \end{equation*} Since $(\check E, \check \theta)$ and $(\hat E,\hat \theta)$ are both $W_n$-liftings of $(\overline{E},\overline{\theta})$, we have \[ f_{ij} \equiv 0\pmod{p^{n-1}} \qquad \text{ and } \qquad \omega_i \equiv 0 \pmod{p^{n-1}}.\] Thus $\overline{f}_{ij}:=\frac{f_{ij}}{p^{n-1}}\pmod{p}$ is a well defined element in $\mE nd(E)({\mathcal U}_{ij})$ and $\overline\omega_i:=\frac{\omega_i}{p^{n-1}}\pmod{p}$ is a well defined element in $\mE nd(E)({\mathcal U}_i) \otimes \Omega^1_{{\mathcal X}/W}(\log {\mathcal D})({\mathcal U}_i)$. These local datum give us a $\check{\mathrm{C}}\mathrm{ech}$ representative \[(\overline{f}_{ij} , \overline\omega_i) \in H^1_{Hig}(X_1 , \mE nd (E,\theta))\] of the difference of the two liftings. Conversely, we can construct a Higgs bundle from datum $(\check E,\check\theta)$ and $(\overline f_{ij}, \overline\omega_i)\in H^1_{Hig}(X_1 , \mE nd (E,\theta))$. 
Locally over ${\mathcal U}_i\times_WW_n$, the new Higgs bundle is given by \[(\check E_i,\check \theta_i + p^{n-1}\overline\omega_i),\] and the gluing transform is \[\mathrm{id}+p^{n-1}\overline f_{ij}:{\hat E\mid_{({\mathcal U}_i\cap {\mathcal U}_j)\times_WW_n}}\rightarrow {\hat E\mid_{({\mathcal U}_i\cap {\mathcal U}_j)\times_WW_n}}.\] Since $(\overline f_{ij}, \overline\omega_i)$ is a $1$-cocycle, these local data glue into a new Higgs bundle. Moreover, the following diagram commutes \begin{equation} \xymatrix@C=3cm{ \left(\check E_i,\check \theta_i+p^{n-1}\overline\omega_i\right) \ar[r]^{\mathrm{id}+p^{n-1}\overline f_{ij}} \ar[d]^{\gamma_i} & \left(\check E_j,\check \theta_j+p^{n-1}\overline\omega_j\right)\ar[d]^{\gamma_j}\\ \left(\hat E_i,\hat \theta_i\right) \ar[r]^{\mathrm{id}} & \left(\hat E_j,\hat \theta_j\right)\\ } \end{equation} and this new Higgs bundle is isomorphic to $(\hat E,\hat\theta)$ via the local isomorphisms $\gamma_i$. \end{proof} Similarly, for a de Rham bundle $(\overline{V},\overline{\nabla})$ over $X_{n-1}$, denote its reduction modulo $p$ by $(V,\nabla)$. \begin{lem} The space of $W_n$-liftings of $(\overline{V},\overline{\nabla})$ is an $H^1_{dR}(X_1 , \mE nd (V,\nabla))$-torsor. \end{lem} \subsection{Lifting space of graded Higgs bundles and filtered de Rham bundles with Griffiths transversality} Let $(\overline{V},\overline{\nabla},\overline{\Fil})$ be a filtered de Rham bundle over $X_{n-1}$ satisfying Griffiths transversality. Denote its modulo $p$ reduction by $(V,\nabla,\Fil)$. Denote by $(\overline{E},\overline{\theta})$ the graded Higgs bundle $\mathrm{Gr}(\overline{V},\overline{\nabla},\overline{\Fil})$ over $X_{n-1}$ and denote by $(E,\theta)$ the graded Higgs bundle $\mathrm{Gr}(V,\nabla,\Fil)$ over $X_1$. The filtration $\overline{\Fil}$ on $\overline{V}$ induces a subcomplex of the de Rham complex $\left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right)$ \begin{equation*} \xymatrix { 0 \ar[r] & \Fil^p\mE nd(V) \ar[r] \ar@{^(->}[d] & \Fil^{p-1}\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^1(\log {\mathcal D}) \ar[r] \ar@{^(->}[d] & \Fil^{p-2}\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^2(\log {\mathcal D}) \ar[r] \ar@{^(->}[d] & \cdots\\ 0 \ar[r] \ar[r] & \mE nd(V) \ar[r]^(0.4){\nabla^{\mE nd}} & \mE nd(V)\otimes \Omega_{{\mathcal X}/W}^1(\log {\mathcal D}) \ar[r]^{\nabla^{\mE nd}} & \mE nd(V)\otimes \Omega_{{\mathcal X}/W}^2(\log {\mathcal D}) \ar[r]^(0.7){\nabla^{\mE nd}} & \cdots\\} \end{equation*} where $\Fil^p\mE nd(V)= \sum\limits_{j-i=p}\left(\Fil^iV\right)^\vee\otimes \Fil^jV$. We denote this subcomplex by $\Fil^p\left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right)$. As $p$ runs over all integers, these subcomplexes give an exhaustive and decreasing filtration on $\left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right)$. Taking the associated graded object, one gets an isomorphism of Higgs complexes \[\mathrm{Gr}\left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right) \simeq \left(\mE nd(E)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}), \theta^{\mE nd}\right).\] Denote by $\overline{E}^p=\mathrm{Gr}^p(\overline{V})$ the $p$-th grading piece of $\overline{V}$. Then the Higgs bundle $(\overline{E},\overline{\theta})$ is graded with $\overline{E}=\oplus \overline{E}^p$ and $\overline{\theta}:\overline{E}^p\rightarrow \overline{E}^{p-1}\otimes\Omega_{{\mathcal X}/W}^1(\log {\mathcal D})$.
The decomposition of $\overline{E}$ induces a decomposition of its reduction $E=\oplus E^p$ and a decomposition $\mE nd(E)=\oplus \mE nd(E)^p$ with \[\mE nd(E)^p:= \bigoplus_{j-i=p} (E^i)^\vee\otimes E^j.\] Thus the complex \begin{equation}\label{Higgs_direct_summand} 0\rightarrow \mE nd(E)^0 \rightarrow \mE nd(E)^{-1}\otimes \Omega_{{\mathcal X}/W}^1(\log {\mathcal D}) \rightarrow \mE nd(E)^{-2}\otimes \Omega_{{\mathcal X}/W}^2(\log {\mathcal D})\rightarrow\cdots \end{equation} is a direct summand of the Higgs complex, which is just the $0$-th grading piece of the de Rham complex $\left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right)$. Taking the hypercohomology ${\mathbb H}^1$ of this complex, one gets a direct summand of $H^1_{Hig}(X_1 , \mE nd (E ,\theta))$, which we denote by $\mathrm{Gr^0}H^1_{Hig}(X_1 , \mE nd (E ,\theta))$. \begin{lem} The space of $W_n$-liftings of the graded Higgs bundle $(\overline{E},\overline{\theta})/X_{n-1}$ is a $\mathrm{Gr}^0H^1_{Hig}(X_1 , \mE nd (E ,\theta))$-torsor. \end{lem} \begin{proof} The proof is the same as that of Lemma~\ref{lem:HiggsTorsor}, except that one may choose $\gamma_i$ which preserves the grading structures on both sides. Thus \begin{equation*} \overline f_{ij} \in \mE nd(E)^0({\mathcal U}_{ij}) \qquad \text{ and } \qquad \overline\omega_i \in \mE nd(E)^{-1}({\mathcal U}_i) \otimes \Omega^1_{{\mathcal X}/W}(\log {\mathcal D})({\mathcal U}_i). \end{equation*} Hence one has $(\overline{f}_{ij}, \overline\omega_i) \in \mathrm{Gr}^0H^1_{Hig}(X_1, \mE nd (E,\theta))$. \end{proof} Let $(\check V,\check \nabla,\check \Fil)$ and $(\hat V,\hat \nabla,\hat \Fil)$ be two filtered de Rham bundles satisfying Griffiths transversality over $X_{n}$, which are liftings of $(\overline{V},\overline{\nabla},\overline{\Fil})$. \begin{lem}\label{lem:Hodge_Torsor} The difference between $(\hat V,\hat \nabla)$ and $(\check V,\check\nabla)$ is contained in the hypercohomology ${\mathbb H}^1\left(\Fil^0\left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right)\right)$. \end{lem} \begin{proof} The proof is similar to that of Lemma~\ref{lem:HiggsTorsor}. Denote by $(\check V_i , \check\nabla_i)$ (resp. $(\hat V_i , \hat\nabla_i)$) the restriction of $(\check V,\check \nabla)$ (resp. $(\hat V, \hat\nabla)$) on ${\mathcal U}_i\times_WW_n$. Locally we can always find isomorphisms $\gamma_i: \check V_i \overset{\sim}{\longrightarrow} \hat V_i$ over ${\mathcal U}_i\times_WW_n$ which are strict under the Hodge filtrations on both sides and lift $\mathrm{id}_{\overline{V}_i}$. Then \begin{equation*} f_{ij}:= \gamma^{-1}_j \circ \gamma_i|_{U_{ij}} - Id_{\check V_{ij}} \in \Fil^0\mE nd(\check V)(({\mathcal U}_i\cap{\mathcal U}_j)\times_WW_n). \end{equation*} Since the connections satisfy Griffiths transversality, \begin{equation*} \omega_i := \gamma_i^{-1} \circ \hat \nabla_i \circ \gamma_i - \check\nabla_i \in \Fil^{-1}\mE nd(\check V)({\mathcal U}_i\times_WW_n) \otimes \Omega^1_{{\mathcal X}/W}(\log {\mathcal D})({\mathcal U}_i). \end{equation*} As in the proof of Lemma~\ref{lem:HiggsTorsor}, these local data give us a $\check{\mathrm{C}}\mathrm{ech}$ representative \[(\overline f_{ij} , \overline\omega_i) \in {\mathbb H}^1\left(\Fil^0\left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right)\right)\] of the difference of the two liftings.
\end{proof} \subsection{The torsor map induced by grading} In this subsection, we will describe the difference between the associated graded objects of two filtered de Rham bundles satisfying Griffiths transversality over $X_{n}$. The morphism of complexes \[\Fil^0 \left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right) \rightarrow \mathrm{Gr}^0\left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right)\] induces a $k$-linear map of the hypercohomology groups \begin{equation}\label{Grading_Torsor_map} \xymatrix{ {\mathbb H}^1\left(\Fil^0\left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right)\right) \ar[r]\ar[dr] &{\mathbb H}^1\left(\mathrm{Gr}^0\left(\mE nd(V)\otimes \Omega_{{\mathcal X}/W}^\bullet(\log {\mathcal D}),\nabla^{\mE nd}\right)\right) \ar@{=}[d]\\ &\mathrm{Gr}^0H^1_{Hig}(X_1 , \mE nd (E ,\theta))\\ } \end{equation} Suppose $(\check V,\check \nabla,\check \Fil)$ and $(\hat V,\hat \nabla,\hat \Fil)$ are two filtered de Rham bundles satisfying Griffiths transversality over $X_{n}$, which are liftings of $(\overline{V},\overline{\nabla},\overline{\Fil})$. \begin{prop}\label{prop:torsor_Grading} The difference between $\mathrm{Gr}(\hat V,\hat \nabla,\hat \Fil)$ and $\mathrm{Gr}(\check V,\check\nabla,\check\Fil)$ is just the image of the difference between $(\hat V,\hat \nabla)$ and $(\check V,\check\nabla)$ under the morphism in~\emph{(\ref{Grading_Torsor_map})}. \end{prop} \begin{proof}Choosing local isomorphisms $\gamma_i:\check V_i\rightarrow \hat V_i$ as in the proof of Lemma~\ref{lem:Hodge_Torsor} and taking the associated graded object, one gets an isomorphism $\mathrm{Gr}(\check V_i,\check\Fil_i)\rightarrow \mathrm{Gr}(\hat V_i,\hat\Fil_i)$. The proposition can then be checked directly. \end{proof} \begin{rmk} Let ${\mathcal L}$ be a line bundle over ${\mathcal X}$. Then $\mE nd((E,\theta)\otimes {\mathcal L})$ is canonically isomorphic to $\mE nd((E,\theta))$. Let $(\check E,\check\theta)$ and $(\hat E,\hat\theta)$ be two liftings of $(\overline E,\overline \theta)$. Then the difference between $(\check E,\check\theta)$ and $(\hat E,\hat\theta)$ is the same as the difference between $(\check E,\check\theta)\otimes {\mathcal L}$ and $(\hat E,\hat\theta)\otimes {\mathcal L}$. Thus the proposition still holds if the grading functor is replaced by its composition with twisting by a line bundle. \end{rmk} \subsection{Torsor map induced by the inverse Cartier functor} Assume $(\overline{E},\overline\theta)$ is a graded Higgs bundle and assume it is isomorphic to $\mathrm{Gr}((\overline{V},\overline{\nabla},\overline{\Fil})_{-1})$ for some filtered de Rham bundle $(\overline{V},\overline{\nabla},\overline{\Fil})_{-1}$ over $X_{n-1}$ satisfying Griffiths transversality. Denote $(\overline{V},\overline{\nabla})=C^{-1}_{n-1}((\overline{E},\overline{\theta}),(\overline{V},\overline{\nabla},\overline{\Fil})_{-1}\pmod{p^{n-2}})$. Let $(\check E,\check \theta)$ be a graded Higgs bundle which lifts $(\overline{E},\overline\theta)$. The inverse Cartier transform $C^{-1}_n$ on $((\check E,\check \theta),(\overline{V},\overline{\nabla},\overline{\Fil})_{-1})$ defines a de Rham bundle $(\check V,\check \nabla)$ over $X_n$ which lifts $(\overline{V},\overline{\nabla})$.
\begin{equation*} \xymatrix@C=2cm@R=0cm{ & & (\check V,\check \nabla)\ar@{..}[dd]\\ &(\check E,\check \theta) \ar@{..}[dd] \ar[ur]^{C^{-1}_n} & \\ (\overline{V},\overline{\nabla},\overline{\Fil})_{-1} \ar[dr]^{\mathrm{Gr}} && (\overline{V},\overline{\nabla})\\ &(\overline{E},\overline{\theta}) \ar[ur]^{C^{-1}_{n-1}} &\\ } \end{equation*} Let $(\hat E,\hat \theta)$ be another graded Higgs bundle which lifts $(\overline{E},\overline\theta)$. Denote \[(\hat V,\hat \nabla):=C^{-1}_n((\hat E,\hat \theta),(\overline{V},\overline{\nabla},\overline{\Fil})_{-1}).\] Assume $(\hat E,\hat \theta)$ differs from $(\check E,\check \theta)$ by an element $(\overline f_{ij} , \overline\omega_i) \in H^1_{Hig}(X_1 , \mE nd (E ,\theta))$. Let $t_1,t_2,\cdots,t_d$ be local coordinates on the small affine open subset ${\mathcal U}_i\cap{\mathcal U}_j$ of ${\mathcal X}$. There are two Frobenius liftings $\Phi_i$ and $\Phi_j$ on the overlap subset. Denote $z^J=\prod\limits_\ell z_\ell^{j_\ell}$ and $z_\ell=\frac{\Phi^*_i(t_\ell)-\Phi^*_j(t_\ell)}{p}$. For a field $\theta$ on a bundle over $X_1$, we denote \[h_{ij}(\theta)=\sum_{\ell} \theta(\partial_\ell)\otimes_\Phi z_\ell,\] which is obviously Frobenius semilinear. Now we want to compute the difference between $(\hat V,\hat \nabla)$ and $(\check V,\check \nabla)$. \begin{prop}\label{prop:torsor IC} The de Rham bundle $(\hat V,\hat \nabla)$ differs from $(\check V,\check \nabla)$ by the element \[\left(\Phi^*(\overline f_{ij})+h_{ij}(\overline\omega_j),\frac{\Phi^*}{p}(\overline\omega_i)\right) \in H^1_{dR}(X_1 , \mE nd (V ,\nabla)).\] \end{prop} \begin{proof} Recall the diagram~\ref{diag:C^{-1}} and denote $(\widetilde{\check V},\widetilde{\check \nabla})={\mathcal T}_n((\check E,\check \theta),(\overline{V},\overline{\nabla},\overline{\Fil})_{-1})$ and $(\widetilde{\hat V},\widetilde{\hat \nabla})={\mathcal T}_n((\hat E,\hat \theta),(\overline{V},\overline{\nabla},\overline{\Fil})_{-1})$. From the definition of the functor ${\mathcal T}_n$, both are bundles with $p$-connections and both are liftings of a bundle with $p$-connection over $X_{n-1}$. Their reductions modulo $p$ are the same Higgs bundle $(E,\theta)$. From the definition of the functor ${\mathcal T}_n$, the difference between $(\widetilde{\hat V},\widetilde{\hat \nabla})$ and $(\widetilde{\check V},\widetilde{\check \nabla})$ is also equal to $(\overline f_{ij} , \overline\omega_i) \in H^1_{Hig}(X_1 , \mE nd (E ,\theta))$. Now, let $\widetilde{\gamma}_i: \widetilde{\check V}\mid_{{\mathcal U}_i} \rightarrow \widetilde{\hat V}\mid_{{\mathcal U}_i} $ be a local isomorphism and $\Phi_i$ be the local lifting of the absolute Frobenius on ${\mathcal U}_i$. Consider the following diagram (which is not commutative in general): \begin{equation} \xymatrix{ \Phi_i^*(\widetilde{\check V}\mid_{{\mathcal U}_{ij}}) \ar[r]^{G_{ij}(\widetilde{\check \nabla})} \ar[d]_{\Phi_i^*(\widetilde{\gamma}_i)} & \Phi_j^*(\widetilde{\check V}\mid_{{\mathcal U}_{ij}}) \ar[d]^{\Phi_j^*(\widetilde{\gamma}_j)}\\ \Phi_i^*(\widetilde{\hat V}\mid_{{\mathcal U}_{ij}}) \ar[r]^{G_{ij}(\widetilde{\hat \nabla})} & \Phi_j^*(\widetilde{\hat V}\mid_{{\mathcal U}_{ij}}) \\ } \end{equation} Recall that the de Rham bundle $(\check V,\check \nabla)$ (resp. $(\hat V,\hat \nabla)$) is defined by gluing $\{\Phi_i^*(\widetilde{\check V}\mid_{{\mathcal U}_i})\}$ (resp. $\{\Phi_i^*(\widetilde{\hat V}\mid_{{\mathcal U}_i})\}$) via the isomorphisms $G_{ij}(\widetilde{\check \nabla})$ (resp. $G_{ij}(\widetilde{\hat \nabla})$).
The $G_{ij}$'s are given by Taylor formula \begin{equation} G_{ij}(\widetilde{\check\nabla})(e\otimes 1)=\sum_J\frac{\widetilde{\check\nabla}(\partial)^J}{J!}(e)\otimes z^J, \quad e\in \widetilde{\check V}({\mathcal U}_i). \end{equation} Denote \[ g_{ij}=G_{ij}(\widetilde{\check \nabla})^{-1}\circ \Phi_j^*(\widetilde{\gamma}_j)^{-1}\circ G_{ij}(\widetilde{\hat \nabla}) \circ \Phi_i^*(\widetilde{\gamma}_i)-\mathrm{id}_{\Phi_i^*(\widetilde{\check V}\mid_{U_i\cap U_j})},\] and \[\omega_{\nabla,i} = \Phi_i^*(\widetilde{\gamma}_i)^{-1}\circ \check\nabla_i\circ \Phi_i^*(\widetilde{\gamma}_i)-\hat\nabla_i\] which are trivial modulo $p^{n-1}$. Denote $\overline g_{ij}=\frac{g_{ij}}{p^{n-1}}\pmod{p}\in \mE nd(V)({\mathcal U}_{ij})$ and $\overline\omega_{\nabla,i}=\frac{\omega_{\nabla,i}}{p^{n-1}}\in \mE nd(V({\mathcal U}_i))\otimes\Omega_{{\mathcal X}/W}(\log {\mathcal D})({\mathcal U}_i)$ . Then the de Rham bundle $(\hat V,\hat \nabla)$ differs from $(\check V,\check \nabla)$ by the element $(\overline g_{ij},\omega_{\nabla,i})\in H^1_{dR}(X_1,\mE nd(V,\nabla))$. Now let's express $\overline g_{ij}$ and $\overline \omega_{\nabla,i}$ with $\overline f_{ij}$ and $\overline{\omega}_i$. Since \[\widetilde{\gamma}_j: (\widetilde{\check V}\mid_{{\mathcal U}_j},\widetilde\gamma_j^{-1}\circ\widetilde{\hat\nabla}\circ\widetilde\gamma_j) \rightarrow (\widetilde{\hat V}\mid_{{\mathcal U}_j}, \widetilde{\hat \nabla})\] is an isomorphism of twisted de Rham bundle, one has following commutative diagram \begin{equation} \xymatrix@C=3cm{ \Phi_i^*(\widetilde{\check V}\mid_{{\mathcal U}_{ij}}) \ar[r]^{G_{ij}(\widetilde\gamma_j^{-1}\circ\widetilde{\hat\nabla}\circ\widetilde\gamma_j)} \ar[d]_{\Phi_i^*(\widetilde{\gamma}_j)} & \Phi_j^*(\widetilde{\check V}\mid_{{\mathcal U}_{ij}}) \ar[d]^{\Phi_j^*(\widetilde{\gamma}_j)}\\ \Phi_i^*(\widetilde{\hat V}\mid_{{\mathcal U}_{ij}}) \ar[r]^{G_{ij}(\widetilde{\hat \nabla})} & \Phi_j^*(\widetilde{\hat V}\mid_{{\mathcal U}_{ij}})\\ } \end{equation} Hence \begin{equation} \begin{split} g_{ij} & =G_{ij}(\widetilde{\check \nabla})^{-1} \circ G_{ij}(\widetilde\gamma_j^{-1}\circ\widetilde{\hat\nabla}\circ\widetilde\gamma_j) \circ \Phi_i^*(\widetilde{\gamma}_j)^{-1} \circ \Phi_i^*(\widetilde{\gamma}_i)-\mathrm{id}\\ & =G_{ij}(\widetilde{\check \nabla})^{-1} \circ G_{ij}(\widetilde{\check \nabla}+p^{n-1}\overline\omega_j) \circ \Phi_i^*(\widetilde{\gamma}_j^{-1} \circ\widetilde{\gamma}_i) -\mathrm{id}\\ & =\left(\mathrm{id}+p^{n-1}\sum_{\ell}\overline{\omega}_j(\partial_\ell) \otimes_{\Phi_j} z_\ell\right) \circ \Phi_i^*(\mathrm{id}+p^{n-1}\overline f_{ij}) -\mathrm{id}\\ &=p^{n-1}\left(\sum_{\ell}\overline{\omega}_j(\partial_\ell) \otimes_{\Phi_j} z_\ell + \Phi^*(\overline f_{ij}) \right)\\ &=p^{n-1}\left( h_{ij}(\overline\omega_j)+ \Phi^*(\overline f_{ij}) \right)\\ \end{split} \end{equation} and $\overline{g}_{ij}=\sum\limits_{\ell}\overline{\omega}(\partial_\ell) \otimes_{\Phi} z_\ell + \Phi(\overline f_{ij})$. Since $\check\nabla_i=\frac{\Phi_i^*}{p}\left(\widetilde{\check\nabla}\right)$ and $\hat\nabla_i=\frac{\Phi_i^*}{p}\left(\widetilde{\hat\nabla}\right)$ \begin{equation*} \begin{split} \omega_{\nabla,i} & = \frac{\Phi_i^*}{p}\left( \widetilde{\gamma}_i^{-1} \circ \widetilde{\hat \nabla}_i \circ \widetilde{\gamma}_i -\widetilde{\check \nabla}_i \right) \end{split} \end{equation*} Thus $\overline{\omega}_{\nabla,i}=\frac{\omega_{\nabla,i}}{p^{n-1}}=\frac{\Phi^*}{p}\left(\overline\omega_i\right)$. 
\end{proof} \begin{cor}\label{InvCar_Torsor_map} The map from $\mathrm{Gr}^0H^1_{Hig}(X_1 , \mE nd (E ,\theta))$ to $H^1_{dR}(X_1 , \mE nd (V ,\nabla))$ mapping $(\overline f_{ij} , \overline\omega_i)$ to $\left(\Phi^*(\overline f_{ij})+h_{ij}(\overline\omega_j),\frac{\Phi^*}{p}(\overline\omega_i)\right)$ is a Frobenius semilinear map between $k$-vector spaces. \end{cor} \begin{proof}This follows from the fact that $\Phi^*$, $h$ and $\frac{\Phi^*}{p}$ are all Frobenius semilinear. \end{proof} \textbf{Acknowledgement}. We would like to thank Adrian Langer particularly for pointing out an incomplete proof of the local freeness of Higgs sheaves in Section 3.3. Fortunately, the proof can now be fixed by applying Theorem 2.1 and Corollary 2.8 in his very recent paper~\cite{Langer19}. We thank Mao Sheng and Carlos Simpson for their interest in this paper. We warmly thank Christian Pauly for pointing out a paper of de Jong and Osserman. We are grateful to Xiaotao Sun, Deqi Zhang, Shouwu Zhang and Junyi Xie for discussions on dynamical properties of self-maps in characteristic $p$. We also thank Duco van Straten for telling us about the paper of Kontsevich on $\ell$-adic representations. The discussion with Ariyan Javanpeykar about the self-map on the moduli space of Higgs bundles over the projective line with logarithmic structure on marked points was very helpful to us; he also read a preliminary version of this paper carefully and made many suggestions for its improvement. We thank him warmly.
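The small-prime verification of Conjecture~\ref{var_conj} mentioned in the appendix was carried out in Maple; the same kind of check can be reproduced with a few lines of computer algebra. The sketch below (in Python/SymPy, not the authors' code) builds the matrices $A_{m+1}$, $A_p$, $B_0$ and $B_{m+1}$ from the displayed formulas for $\delta_n$ and $\gamma_n$ and tests the cross-multiplied form of equation~(\ref{equ:main}), namely $\lambda^{p-1}\det(B_0)^2\det(A_p)^2=a^{2p}\det(A_{m+1})^2\det(B_{m+1})^2$, as a polynomial identity in $a$ and $\lambda$ modulo $p$. The index conventions encoded in the two matrix constructors are our reading of the displays and should be checked against them.

\begin{verbatim}
# Re-check of equation (equ:main) for small primes -- an illustrative sketch,
# not the Maple verification used in the paper.  Requires Python >= 3.8.
from sympy import symbols, Matrix, Poly, expand

a, lam, x = symbols('a lambda x')

def check_identity(p):
    m = (p - 1) // 2
    ap, lp = a**p, lam**p

    # delta_n = (lambda^p(1 - a^p) - (lambda^p - a^p) lambda^n) / n, 1 <= n <= p-1
    delta = {n: pow(n, -1, p) * (lp*(1 - ap) - (lp - ap)*lam**n)
             for n in range(1, p)}

    # gamma_n = coefficient of x^n in (x(x-1)(x-lambda))^m, m <= n <= 3m
    poly = Poly(expand((x*(x - 1)*(x - lam))**m), x)
    gamma = {n: poly.nth(n) for n in range(m, 3*m + 1)}

    # A is m x (m+1) with A[r][c] = delta_{m-r+c};
    # A_{m+1} drops its first column, A_p drops its last column.
    A = Matrix(m, m + 1, lambda r, c: delta[m - r + c])
    A_m1, A_p = A[:, 1:], A[:, :-1]

    # B is (m+1) x (m+2) with B[r][c] = gamma_{m+r-c} for r >= c
    # and a^p * gamma_{m+r-c+p} for r < c;
    # B_0 drops its first column, B_{m+1} drops its last column.
    B = Matrix(m + 1, m + 2,
               lambda r, c: gamma[m + r - c] if r >= c
               else ap*gamma[m + r - c + p])
    B_0, B_m1 = B[:, 1:], B[:, :-1]

    lhs = lam**(p - 1) * B_0.det()**2 * A_p.det()**2
    rhs = a**(2*p) * A_m1.det()**2 * B_m1.det()**2
    return Poly(expand(lhs - rhs), a, lam, modulus=p).is_zero

for p in (3, 5, 7):
    print(p, check_identity(p))
\end{verbatim}

If the transcription is faithful, the printed values should all be \texttt{True} for the primes checked in the appendix.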
\section{Introduction} The study of galaxy populations in high-redshift large-scale structures can give us important clues about the star formation history and galaxy formation process in such environments \citep{wf91,bekki98,kodama98,vandok05,mei06}. However, the detection of distant galaxy clusters is not trivial. At high redshifts, the use of techniques like the red-cluster sequence \citep{RCS}, is less efficient. The reason is the proportional decrease of the number of red galaxies in clusters for increasing redshift \citep[e.g.][]{bo84}. Therefore, in order to detect high-z clusters through optical imaging, one requires at least photometric redshift information, which implies imaging in four or more bands. Additionally, in order to avoid the high observational cost of observing large areas of the sky, one can use several indicators of the presence of high-z clusters to select the fields to be observed. Several of these tracers, like extended X-ray emission \citep[e.g.,][]{romer01}, the Sunyaev-Zeldovich decrement or bright radio-emitting galaxies have been largely used to trace clustering of galaxies. These techniques have strengths and drawbacks. The first two, for example, depend on the presence of a hot intra-cluster medium, which may bias samples against recently forming clusters. In this paper, we will consider another possible tracer of the presence of clusters: physically close pairs of quasars. Quasars are relatively rare astronomical objects and hence, if they are distributed following galaxies, the presence of two or more such objects in a relatively small volume should be a good indicator of a rich environment. Actually, in structure formation scenarios with bias between barionic and dark matter distribution \citep{kaiser84} it is expected that high redshift objects form in large high--redshift density fluctuations and, therefore, such correlation between quasar concentration and clusters is somewhat expected, unless, for some reason, quasars avoid clusters. However, most observational evidence shows that high redshift quasars do tend to follow the overall large scale structures. Whether quasars inhabit or not high density regions in low redshifts is a subject of dispute. \citet{col02}, for example, claim that at $0.1 \le z \le 0.25$, quasars (both radio--loud and radio--quiet) tend to reside in low density regions. On the other hand \citet{mullis}, using a sample of X-ray selected quasars, conclude that those objects trace closely the underlying mass distribution. \citet{ilona02} also points out that $0.2<z <0.3$ quasars follow the large-scale structure traced by galaxy clusters, but they also note the complete absence of radio--quiet QSO's at the very center of galaxy clusters. At higher redshift, however, most observational results suggest that quasars prefer groups or clusters \citep{HG98,wold00, wold01}. One very convincing example is the structure found by \citet{haines01} at $z = 1.226$ around a radio-quiet quasar belonging to a large quasar structure \citep{cc91,cc94}. The same behaviour appears to be followed by radio-loud quasars. A good example is the work by \citet{sg02}, who found a highly significant excess of galaxies around radio-loud quasars at $1.0 < z < 1.6$. \citet{tanaka01} also points in the same direction by reporting an overdensity of galaxies around a quasar concentration at $z\sim1.1$. 
An exception is the work by \citet{coil07} who, through an analysis of the clustering of quasars and galaxies at $0.7 < z < 1.4$, concluded that quasars and blue galaxies are found in the same environment, which differs from that occupied by the red galaxy population. Regarding quasar pairs specifically, \citet{zs01} found a statistically significant excess of high-redshift quasar pairs with separations between 1 and 5 Mpc in projected distance. This suggests that such quasar pairs belong to sizable physical structures (precursors of today's clusters and superclusters of galaxies) and that, therefore, they can be used as tracers of high-redshift large-scale structures. Going to even larger redshifts, \citet{dj03} found that a quasar pair at $z = 4.96$ is associated with a large-scale structure. Thus, an interesting way to search for high-redshift clusters and other large-scale structures is to examine the environment inhabited by quasar pairs. In this work, we describe a multi-color photometric study of the field around four quasar pairs at $z \sim 1$, using the GMOS instrument on both the Gemini North and South telescopes. One of the pairs in our sample, QP0110-0219, has been previously studied by \citet{surdej86}, who found hints of the presence of a cluster around it. The new data we present here allow us to confirm this claim. There are no studies in the literature for the other three quasar pairs. The outline of this paper is the following: in Section 2 we describe the sample and the data reduction procedures. The galaxy photometry is discussed in Section 3. Section 4 outlines our approach to obtain photometric redshifts and presents its application to our galaxy sample. The environments of the quasar pairs are discussed in Section 5. Finally, in Section 6 we summarize our results. Throughout this paper, we adopt a $\Lambda$CDM concordance cosmology with $\Omega_m = 0.3$ and $\Omega_{\Lambda} = 0.7$, and we use the value $h = 0.7$ for the Hubble constant, $H_0 = 100 ~h$ km s$^{-1}$ Mpc$^{-1}$. \section{Observations and data reduction} \subsection{Sample selection} In this paper, we study a sample of four fields around quasar pairs at $z \sim 1$. We selected the pairs from the \citet{vv01} quasar catalog considering redshift differences smaller than 0.01 and projected angular separations smaller than 300 arcsec. We did not consider pairs with angular separations smaller than 15 arcsec to avoid including gravitational lenses. With these parameters, we found 84 quasar pairs. Five of them had redshifts between 0.9 and 1.0, and four of these were observed with the Gemini telescopes. The main sample characteristics are shown in Table \ref{car}. It includes the quasar names, their coordinates, redshifts, angular separations, and the name adopted for the pairs in this paper. We have checked the spectra of the quasars in our sample to verify that we did not pick any cases of gravitationally lensed images of a single quasar. The nature of QP0110-0219 ~is discussed by \citet{surdej86}. Considering the redshift difference and spectral characteristics, they conclude that the quasars Q 0107-0235 ~and PB 6291 ~are different objects. For the other pairs, quasar spectra are available in the 2dF QSO Redshift Survey \citep{croom04}\footnote{http://www.2dfquasar.org/Spec\underline{~}Cat/2qzsearch2.html}. Our visual examination of the spectra indicates that also in these cases the differences in redshifts and spectral characteristics suggest that they are indeed different objects and not lensed images of the same quasar.
Moreover, in QP1310+0007, QP1355-0032, and QP0110-0219 ~one of the quasars is radio-loud and the other is radio-quiet. Consequently, we are confident that none of the pairs in our sample are produced by gravitational lensing. \subsection{Imaging and data reduction} The four fields in Table \ref{car} were observed with GMOS N and S mounted on the Gemini telescopes. The imaging was done in four filters of the SDSS system \citep{fuku96}: $g'$, $r'$, $i'$, and $z'$. The log of observations is presented in Table \ref{obs2}, which shows the telescope used, the exposure time, and the Gemini program identification number. All observations were performed under photometric conditions. The typical FWHM for point sources was $\sim$ 0.7 arcsec in all images. Data reduction was performed using the Gemini IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} package. The images were bias corrected, flat fielded, and fringe corrected in the standard way. After that, they were combined and cleaned of cosmic ray events and bad pixels, producing the final images appropriate for science analysis. \section{Photometry and object detection \label{phot-detect}} We have used the IRAF package {\it daophot} to calculate the photometric zero-point for each band in each field in the AB SDSS photometric system. The calibration was made using stars from the Landolt catalog \citep{lan} also calibrated in the AB SDSS system. Using the dispersion in the magnitudes of the stars, we have estimated the accuracy of the magnitude zero-point as 0.01 in the $g'$ and $r'$ bands, 0.02 in the $i'$ band, and 0.03 in the $z'$ band. We have used SExtractor \citep{sex} to detect objects over the final image frames. First, we ran the program on the images of each photometric band and selected the image that showed the highest number of detected objects. Second, using this image as reference, we ran the program again in ``dual image mode''. We used a top-hat filter and detected objects above 1.5 $\sigma$, which corresponded to median isophotal levels of 27.1, 26.4, 26.4, and 25.4 mag arcsec$^{-2}$ in $g'$, $r'$, $i'$, and $z'$, respectively. In order to run the program in ``dual image mode'', it was necessary to align the images. Thus, because of rotations and shifts, parts of the images near the borders were lost. Positions and magnitudes (total and aperture) were obtained for all objects present in all bands for each pair. We adopted 3 arcsec aperture magnitudes, $m_{ap}$, to measure colors. Aperture magnitudes were also obtained to compare our data with others in the literature. For the total magnitude of an object, we have adopted a color-corrected isophotal magnitude. For example, if the objects were detected in the $g'$ band, then $g' = g_{iso}$, $r'= g_{iso} - (g_{ap}-r_{ap})$, etc., where $m_{iso}$ is the isophotal magnitude given by SExtractor. After measuring the magnitudes, they were corrected for Galactic extinction, with the absorption coefficients $A_{\lambda}$ obtained from \citet{schlegel98} using NED and interpolated to the GMOS bands. We have used the class-star parameter of SExtractor, which ranges from 0 (galaxies) to 1 (stars), to separate stars from galaxies. Figure \ref{cs} shows this parameter versus the $i'$ magnitude for the pair QP0110-0219. The star symbols in the plot represent all objects with FWHM $\le$ seeing.
If we consider all objects with class-star $<$ 0.8 as galaxies, a threshold often adopted in the literature \citep[e.g.][]{kodama04,kaputi06}, we find that 2 \% of the objects with FWHM $\le$ seeing have class-star $<$ 0.8 and that 7 \% of the objects with FWHM $>$ seeing have class-star $\ge$ 0.8. On the other hand, if we adopt a threshold of 0.9, we have a similar contamination ($\sim$2 \%) of the galaxy sample and only 4 \% of the objects with FWHM $>$ seeing now have class-star $\ge$ 0.9. The results for the other fields are similar. We then decided to adopt the class-star value of 0.9 to separate stars from galaxies. The same criterion was adopted by \citet{capak04} to determine number counts in the HHDFN. In order to estimate the completeness magnitude of the observations, we have plotted the logarithmic number of detected objects as a function of the total magnitude in the band used for detection. From visual inspection of the turnover magnitudes, we estimated that the observations are complete down to $i'$ = 24 for QP1310+0007 ~and QP0110-0219, $g'$ = 25 for QP1355-0032, and $g'$ = 24.5 for QP0114-3140. It is interesting to know how the magnitudes above compare with those of an $M^*$ galaxy at the redshift of the pairs. We have estimated the value of $M^*$ in two ways, as follows. \citet{ellis04} obtained $K^* \sim 18$ for clusters of galaxies with redshifts between 0.8 and 1.0. Considering the value for $(I-K) \sim 2.9$ obtained by \citet{stanford02} for the cluster 3C 184 ($z = 0.996$), we have $I^* \sim 20.9$. On the other hand, using spectrophotometric synthesis models, \citet{fuku95} obtained $(i'-I_c) \sim 0.7$ for galaxies at $z = 0.8$. Then, we obtain $i'^* \sim 21.6$. In the second case, we may consider the Coma cluster as representative of a $z = 0$ cluster. \citet{mobasher03} studied its luminosity function and found $M^*_R \sim -21.79 + 5 \log h_{65}$. For galaxies at $z = 0$, \citet{fuku95} obtained $(r'-R_c) \sim 0.22$ and $(r'-i') \sim 0.30$, therefore $M^*_{i'} \sim -21.71$ for the cosmology adopted here. We have calculated $i'^*$ with \begin{equation} m = M(z=0) + 5 \log d_L[Mpc] + 25 + k(z) + e(z) \end{equation} where $d_L$ is the luminosity distance, $k(z)$ is the k-correction and $e(z)$ is the evolution correction. In the cosmology adopted here, $d_L \sim 6000$ Mpc at $z \sim 1$. Using $k(z)$ and $e(z)$ values published by \citet{fuku95} and \citet{po97} respectively ($k_{i'} \sim 0.9$ and $e_{i'} \sim -1.3$) and the value obtained for $M^*_{i'}$ at $z = 0$, we obtain $i'^* \sim 21.8$. Considering the uncertainties of the approaches, the agreement of the two values is very good. We have then adopted the mean value $i'^* \sim 21.7$. This value is similar to that obtained by \citet{bla06} ($i^*_{775} = 22.0 \pm 0.1$ AB) for early-type galaxies at $z = 0.83$. This result shows that the completeness magnitude of our fields corresponds to $\sim M^* + 2$ at the redshift of the pairs. \subsection{Comparison with HHDFN and ACS-GOODS photometry \label{comp}} Our approach to compute photometric redshifts (\S \ref{zphot}) makes use of a training set with galaxies of known redshifts measured in the same photometric bands. Consequently, photometric redshifts are very sensitive to small zero-point changes.
In order to examine this point, we have compared our photometry with that available for the HHDFN (Hawaii Hubble Deep Field North) region \citep{capak04} and for the ACS-GOODS (Advanced Camera for Surveys - Great Observatories Origins Deep Survey) region \citep{cowie04}. Although one region is contained in the other, the photometric and spectroscopic data available for them are different. A comparison with HHDFN is useful because it contains ACS-GOODS and has a photometric completeness similar to that of our fields. ACS-GOODS, on the other hand, has hundreds of measured spectroscopic redshifts and will be adopted as the training set for our photometric redshift method. Since the photometry of these fields (in UBVRIz$^\prime$) is different from our photometry (in $g'r'i'z'$), their magnitudes were interpolated to the GMOS bands adopted here. First, we have considered the HHDFN region, which has photometry complete down to $R = 24.5$. We compared the photometric distribution of galaxies in our fields within the magnitude completeness limit of each band with the corresponding distributions using the interpolated magnitudes. We estimated the shift that is required in the zero-point of our photometric bands so that the median of the magnitude distribution (for galaxies brighter than the completeness limit) of HHDFN and ours match each other. We have used an iterative algorithm, adding to our magnitudes a shift obtained in each step until convergence. The median of the absolute values of the shifts is 0.07. There are many possible sources for these shifts. A possibility is the magnitude interpolation required in this approach. Another one is cosmic variance, since the galaxy catalogues considered here are not large enough. Indeed, a photometric study with SDSS data made by \citet{fuku04} shows that, besides Galactic extinction, the principal cause for variations in number counts is the large-scale clustering of galaxies. This dispersion increases for smaller areas, being greater than 0.2 magnitudes for areas smaller than 0.01 deg$^2$, as in our case. After applying the zero-point shifts to our magnitudes, we compared our galaxy number counts with those from the ACS-GOODS data in 0.5 mag intervals for objects brighter than the aperture magnitude $z' = 22$. It can be verified that, for all fields, we obtain a good match between our number counts and those of ACS-GOODS. \section{Photometric redshift analysis} \label{zphot} Determining the redshifts of the galaxies in our fields allows us to separate the galaxies belonging to a possible cluster or group at the redshift of the pair from the foreground and/or background galaxies. Here we adopt photometric redshifts for this task. Photometric redshift estimation is often done by comparing the magnitudes of an object with the magnitudes of templates obtained with spectrophotometric evolution models, as is the case of Zpeg \citep{zpeg} and HyperZ \citep{bol}. Here we adopt another approach: instead of a galaxy model, we use real data (magnitudes and spectroscopic redshifts) obtained in galaxy surveys. We compare the magnitudes of our galaxies with magnitudes in the same bands of real galaxies with known spectroscopic redshifts to parametrize a local empirical relationship between magnitudes or colors and redshift. This is done using a locally weighted regression algorithm (LWR) developed by our group (Santos et al. 2007, in preparation).
The same type of data-driven approach is adopted in the ANNz photometric redshift package \citep{annz}, which instead applies artificial neural networks to this task. \subsection{Method} LWR is an algorithm designed to provide a continuous non-linear mapping between sets of variables (e.g., Atkeson, Moore \& Schall 1997). Our LWR method is discussed in detail and compared with other methods in Santos et al. (2007, in prep.). Here we only outline its main characteristics. The method works with magnitudes or colors (and even with other galaxy properties, like diameter or type), but here we use colors. The method works with two data sets: the training set (having known spectroscopic redshifts) and the test set (for which we want to calculate the redshift). Obviously, both sets must have the magnitudes and/or colors measured in the same bands and in the same photometric system. LWR establishes a linear relationship between colors and redshifts that is local because the redshift estimation at a given point in color space weights the data points in the neighborhood of this point more heavily than those farther away. The training set contains colors and spectroscopic redshifts for all objects. From these values we build a redshift estimator which will be applied to our galaxies in the test set. We assume that the local relation between colors and redshifts is linear: \begin{equation} \label{lin} z({\bf x}) = a_0 + {\bf a}^T.{\bf x}= a_0 + \Sigma_{i=1}^{n} a_i x_i \end{equation} where ${\bf x}$ is a vector containing the $n$ colors of a given object, $z({\bf x})$ is the redshift and $T$ stands for the transpose. For each object in the test set, with colors at point ${\bf x}$, we determine the values of the coefficients $a_0 \dots a_n$, and then the redshift, by minimizing the weighted $\chi^2$ function over the $N$ objects of the training set: \begin{equation} \chi^2 = \Sigma_{j=1}^{N} \omega_j^2 \left(y_j-a_0-{\bf a}^T . {\bf x}_j\right)^2 \end{equation} where $y_j$ is the spectroscopic redshift of the $j$-th training set object and $\omega_j$ is the weight associated with the $j$-th data point. The locality of the fitting is assured by adopting a weight function which decreases as the Euclidean distance $d({\bf x},{\bf x}_j)$ between points ${\bf x}$ and ${\bf x}_j$ increases: \begin{equation} \label{k} \omega_j = \exp\left(\frac{-d^2({\bf x},{\bf x}_j)}{2K^2}\right) \end{equation} The $K$ parameter is a kernel-width that determines the ``effective volume'' around point ${\bf x}$: only points within this sphere effectively affect the values of the parameters and the redshift estimate. This parameter was determined in this work by randomly dividing the training set objects into 2/3 for training (TS) and 1/3 for validation (VS). For galaxies in VS we computed $z_{phot}$ with eq. \ref{lin} using galaxies in TS to obtain the coefficients. This procedure was then repeated one hundred times. For each realization of TS and VS, we compute the rms deviation between $z_{phot}$ and $z_{spec}$ for the objects in VS, $\sigma_z$, and choose as optimal $K$ the value for which $\sigma_z$ is minimum. It is worth mentioning that redshift estimates with the LWR method are heavily dependent on the training set adopted (besides, of course, the set of colors available). In particular, the redshift accuracy increases with the size of the training set and depends strongly on the homogeneity of the photometric calibration of the training and test sets.
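For concreteness, the following is a minimal numerical sketch of the estimator just described; it is not the implementation of Santos et al. (2007, in prep.), and the function and array names are purely illustrative. Each test object is treated independently, with the kernel width $K$ fixed beforehand by the validation procedure described above.
\begin{verbatim}
import numpy as np

def lwr_redshift(x, colors_train, z_train, K=0.33):
    # Locally weighted linear fit of redshift versus colors, evaluated
    # at the color vector x (shape (n,)); colors_train has shape (N, n)
    # and z_train holds the N spectroscopic redshifts.
    d2 = np.sum((colors_train - x)**2, axis=1)   # squared Euclidean distances
    w2 = np.exp(-d2 / K**2)                      # omega_j^2 from the Gaussian kernel
    A = np.hstack([np.ones((len(z_train), 1)), colors_train])
    # normal equations of the weighted least-squares problem
    ATA = A.T @ (w2[:, None] * A)
    ATz = A.T @ (w2 * z_train)
    a = np.linalg.solve(ATA, ATz)                # coefficients a_0 ... a_n
    return a[0] + a[1:] @ x                      # local linear estimate of z
\end{verbatim}
The kernel width can then be tuned exactly as described above, by repeating the fit on random 2/3--1/3 splits of the training set and keeping the value of $K$ that minimizes $\sigma_z$ on the validation subset.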
\subsection{Application to our sample} \label{aplic} We adopt in this work the (interpolated) photometric data and spectroscopic redshifts of the ACS-GOODS region as our training set. Since the method allows using colors or magnitudes for photometric redshift estimation, we have used colors (but none of the results reported in the next section depend on this choice). Using magnitudes we would have to limit our sample at $z' = 22$ (the ACS-GOODS spectroscopic completeness), but our sample photometry goes deeper; it is complete at least down to $i' = 24$. When we use colors, such a limit is not necessary and we are able to estimate photometric redshifts for fainter objects, even without spectroscopic data for $z' > 22$. The value $K=0.33$ was determined by the procedure described in the previous section as the median value of 100 simulations. The histogram in Figure \ref{zph} shows the redshift error distribution for all simulations. In what follows we consider the mean value, $\sigma_z = 0.16$, as the redshift error for the training set ACS-GOODS. Figure \ref{zph} also shows the comparison between $z_{phot}$ and $z_{spec}$ for the 1/3 of galaxies from ACS-GOODS used for validation, corresponding to the simulation with this mean value. Having obtained photometric redshifts for all fields containing quasar pairs, we may start looking for structures around the redshift of the pairs, that is, objects with $z_{phot} = z_{pair} \pm \Delta z$. We made experiments with $\Delta z$ equal to 0.1, 0.16 and 0.2, obtaining similar results. Therefore we only present here results with $\Delta z = \sigma_z = 0.16$. For comparison, \citet{toft03}, in a photometric study of the galaxy cluster MG2016+112 at $z = 1$, adopted $\Delta z = 0.25$. Note that the error $\sigma_z$ should depend on the photometric errors, the number of colors, and the size of the training set. \section{Results} We present in Figure \ref{dr} the galaxy redshift distribution in the field of each quasar pair. All fields show a peak in the interval $z \in[z_{pair}-\sigma_z,z_{pair}+\sigma_z]$. We now analyse some properties of the galaxy distribution in this redshift interval, aiming at constraining the nature of the environment inhabited by the quasar pairs of our sample. \subsection{Galaxy overdensities} We must know the expected number of galaxies in this interval in order to verify the significance of the galaxy excess around $z_{pair}$. For this estimate we have assumed that the HHDFN region is representative of the overall galaxy distribution. This sample is appropriate for this analysis because its photometric depth is comparable to that of our fields. Photometric redshifts were obtained with the method discussed in the previous section. We then defined the galaxy overdensity in the interval $z_{pair} \pm \sigma_z$ as \begin{equation} \label{ec} \delta = \frac{n_{pair}-n_{H}}{n_{H}} \end{equation} where $n_{pair}$ and $n_{H}$ are the number densities of galaxies in this redshift interval for a given field and for the HHDFN, respectively. We have considered as galaxies in HHDFN all objects with $z_{phot} > 0$. Values of $\delta$ for each pair are shown in Table \ref{prop}. Errors in $\delta$ were determined assuming Poissonian errors for $N_{pair}$ and $N_H$. The overdensity $\delta$ ranges from 0.6 to 1.6 and is significant in all cases.
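The error propagation just mentioned amounts to a few lines of code. The sketch below is only illustrative: it assumes, as one standard choice, $\sqrt{N}$ uncertainties on the raw galaxy counts entering equation (\ref{ec}), and the variable names are placeholders rather than the values quoted in Table \ref{prop}.
\begin{verbatim}
import numpy as np

def overdensity(N_pair, N_H, area_pair, area_H):
    # Overdensity in the interval z_pair +/- sigma_z and its Poissonian
    # uncertainty, propagated from the raw counts N_pair and N_H.
    n_pair = N_pair / area_pair    # number density in the GMOS field
    n_H = N_H / area_H             # number density in the reference field
    delta = (n_pair - n_H) / n_H
    err = (1.0 + delta) * np.sqrt(1.0 / N_pair + 1.0 / N_H)
    return delta, err
\end{verbatim}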
Note that these results are affected by cosmic variance, since we have used only one reference field and the area occupied by HHDFN (0.2 square degrees) is very small, so that $n_H$ is affected by the galaxy clustering in the HHDFN region. We have arrived at similar results using the VIMOS VLT Deep Survey around the Chandra Deep Field South \citep{vimos} and the Gemini Deep Deep Survey \citep{gdds} regions. \subsection{Distribution of galaxies} In order to investigate the clustering properties of the galaxies in the chosen redshift interval around a quasar pair, we calculated the median projected distance between galaxies and compared it with the same quantity obtained from 1000 simulations of random uniform galaxy distributions with the same number of objects and the same projected area as the observed fields. We may then define a confidence level, $CL$, that a field presents a galaxy distribution more clustered than a uniform distribution: \begin{equation} \label{clex} CL = \frac{N (\Delta \theta > \Delta \theta_{f})}{N_s} \end{equation} where $N (\Delta \theta > \Delta \theta_{f})$ is the number of simulated fields with median projected distances larger than that of the observed field and $N_s$ is the total number of simulations. We summarize the results of this analysis in Table \ref{prop}. Two pairs are strongly clustered (QP1355-0032, QP0110-0219), one is moderately clustered (QP1310+0007) and one (QP0114-3140) is not clustered at all. \subsection{Richness} In order to estimate the richness of our fields, we have adopted an approach similar to the traditional Abell's richness criterion \citep{abell58}, defined as the number of galaxies brighter than $m_3 + 2$ (where $m_3$ is the magnitude of the third brightest cluster member) within a radius of $1.5 ~h_{100}^{-1}$ Mpc of the cluster center. A cluster is considered rich if it contains more than 30 galaxies according to such a definition. Abell's radius, in the cosmological model adopted here, corresponds to 2.1 Mpc. Assuming that the brightest galaxy in the redshift interval $z_{pair} \pm \sigma_z$ is the brightest cluster galaxy, we computed the number of galaxies brighter than $i'_3+2$ by scaling their number in each field to the Abell area ($N^{esc}=N/\Sigma$, where $\Sigma=A_{par}/A_{Abell} \sim 0.5$). This result was corrected for contamination due to background/foreground objects using counts in the HHDFN region. The results are shown in Table \ref{prop}. All but one of the putative clusters are rich, according to this criterion. The field of QP1310+0007 ~seems to be the poorest of our four fields, and is poor also by Abell's criterion. However, this is the pair with the brightest galaxy in the corresponding redshift interval among all quasar pairs, and its poorness may be an effect of galaxy counts, since they grow strongly with increasing magnitude. Furthermore, note that our fields are smaller than Abell's radius, so the quasars could be in a poor cluster, a group, or in the neighborhood of a cluster. On the other hand, the pair QP0114-3140, which is not rich by the results of Sections 5.1 and 5.2, has $R=0$ in Abell's classification. It is, then, appropriate to look for other richness estimators to confirm or refute these results. Another useful richness indicator is the number of bright galaxies, taken here as those brighter than $i'^*+1$ present in the field. This number is estimated for galaxies in the pair redshift interval and is corrected with the corresponding HHDFN counts (scaled to the field area).
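The area scaling and statistical background subtraction used for both richness indicators can be organized as in the short sketch below; the function and variable names are illustrative, and the reference counts enter only through the HHDFN surface density of galaxies satisfying the same magnitude cut.
\begin{verbatim}
def corrected_count(N_field, n_bkg, area_field, area_target=None):
    # N_field     : raw number of galaxies passing the magnitude cut
    #               (e.g. i' < i'_3 + 2 or i' < i'* + 1) in the pair
    #               redshift interval of a given GMOS field
    # n_bkg       : surface density of such galaxies in the reference
    #               field (HHDFN), per unit area
    # area_field  : area of the GMOS field
    # area_target : area to which the count is rescaled (e.g. the Abell
    #               area); None leaves the count unscaled
    N_corr = N_field - n_bkg * area_field      # statistical background removal
    if area_target is not None:
        N_corr *= area_target / area_field     # rescale to the target aperture
    return N_corr
\end{verbatim}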
The results are also presented in Table \ref{prop}. All fields seem to contain a considerable number of bright galaxies. It is interesting to compare our results with those obtained by \citet{pos02}. These authors studied a variety of Abell-like richness indicators in an I-band cluster survey. One of these indicators, $N_{A,0.5}$, is defined as the number of galaxies with magnitude between $m_{3}$ and $m_{3}+2$ within a radius of 666 $h_{75}^{-1}$ kpc. They show that this indicator is related to Abell's richness, $N_A$, as $N_{A,0.5} \sim 0.44 N_{A}$. We have used this relation to estimate Abell's richness from $N_{A,0.5}$. For the 31 clusters with redshifts between 0.9 and 1.0, we obtain $N_{A} = 54$ galaxies. That means that these clusters have a richness $R \sim 1$, which may be compared with the numbers presented in Table \ref{prop}: only QP1310+0007 ~seems poorer than the clusters at comparable redshift studied by \citet{pos02}. \subsection{The red sequence} The red sequence is a characteristic of the color-magnitude diagrams of early-type galaxies of groups and clusters. In a color-magnitude diagram these galaxies have very similar colors, following a linear relation, and their integrated colors become progressively bluer towards fainter magnitudes. This relation is also known as the color-magnitude relation (CMR). We have examined the red sequence in the $(i'-z') \times i'$ diagram of galaxies in the redshift interval of each pair. The use of the color $(i'-z')$ is based on its capability to identify early-type galaxies, since at $z \sim 1$ the 4000 \AA~ break lies in the $i'$ band and consequently the early-type galaxy color $(i'-z')$ is very red. The fields around quasar pairs QP1310+0007 ~and QP0110-0219 ~(Figures \ref{cla} and \ref{clc} - top-right) present a peak in the color distribution at $0.6 \le i'-z' \le 1.0$. Comparing our data with a similar distribution for HHDFN, we verify that these peaks represent an excess of 1.7 $\sigma$ and 3.3 $\sigma$, respectively. Therefore, in this interval, we would expect to find a red sequence in the color-magnitude diagram. Indeed, for QP0110-0219 ~ we note clearly that the galaxies form a red sequence at $i'-z' \sim 0.8$ (Figure \ref{clc} - top-left), the value obtained by \citet{tanaka06} in spectroscopically confirmed structures at $z \sim 0.9$. Besides, if we consider the projected distribution of these red-sequence galaxies (Figure \ref{clc} - bottom), we notice that they have a filamentary-like distribution similar to what is observed in other $z \sim 1$ clusters, and considered typical of clusters in the process of formation (e.g., Toft et al. 2003). The red galaxies of QP1310+0007~ present a broad distribution in the color-magnitude diagram (Figure \ref{cla} - top-left). They also present a clump-like projected distribution (Figure \ref{cla} - bottom). The other two fields have less-significant red sequences (Figures \ref{clb} and \ref{cld}). The cluster CL1604+4321, at $z \sim 0.9$, the least massive of the clusters studied by \citet{home06}, presents a lack of bright elliptical galaxies ($\sim M^*$). The authors suggest that this cluster has not yet had time to complete the red sequence. This may also be the case for the structures associated with QP1355-0032 ~and QP0114-3140. \subsection{Properties of the fields around quasar pairs} The properties of the environment associated with each quasar pair are summarized in Table \ref{resumo}. We now discuss each pair individually.
\subsubsection{QP1310+0007} This quasar pair is formed by J131046+0006 ~(a radio-quiet object) and J131055+0008 ~(a radio-loud quasar) at redshifts 0.925 and 0.933, respectively. They have an angular separation of 177 arcsec, corresponding to 1.4 Mpc in the adopted cosmology. Its density contrast is the smallest among all quasar pairs. However, the galaxy distribution analysis shows that the galaxies in this field are clustered to some degree, i.e., the median projected distance between galaxies is smaller than in a random uniform field in 67 \% of the simulations. This field has been classified as poor by Abell's criterion, and we have found 20 galaxies with magnitude $i'<i'^*+1$. The galaxy color distribution shows a prominent peak at $0.6 \le i'-z' \le 1.0$, corresponding to the red sequence. The red galaxies present a clumpy distribution, but without central condensation. The presence of a significant number of early-type galaxies, plus the relative poorness of this field, indicates that this can be the seed of a structure that may become a rich galaxy cluster at $z=0$. \subsubsection{QP1355-0032} J135457-0034 ~and J135504-0030 ~constitute this pair; the first is radio-loud and the second is radio-quiet. The projected separation between them is 252 arcsec, or 2.0 Mpc, with redshifts 0.932 and 0.934, respectively. The redshift interval $z_{pair} \pm \sigma_z$ shows the largest galaxy excess among all quasar pairs discussed in this work. The median projected distance between galaxies is smaller than in a random uniform field in 98.5 \% of the cases, meaning that these galaxies are strongly clustered. This is the richest field in our sample according to Abell's criterion and also the one with the largest number of bright galaxies ($i'<i'^*+1$); however, its red sequence is modest and the red galaxies do not present a clustered distribution. Its richness and number of bright galaxies are the major indications that this quasar pair is probably in a galaxy cluster. \subsubsection{QP0110-0219} This quasar pair is formed by a radio-loud quasar (Q 0107-0235) and a radio-quiet quasar (PB 6291) at redshifts 0.958 and 0.956, respectively. The angular separation of 77 arcsec (0.6 Mpc) is the smallest of the sample. The overdensity in the redshift interval is significant, as for the other pairs. The galaxy distribution is the most clustered in our sample, according to the CL values in Table \ref{prop}. The field is rich by Abell's criterion, but presents only 12 bright galaxies. The red sequence is clearly present in the color-magnitude diagram at $i'-z' \sim 0.8$. The red galaxies present a filamentary-like distribution and there is a galaxy excess around the radio-loud quasar. These results indicate that QP0110-0219 ~is indeed in a rich cluster. Moreover, we have verified that QP0110-0219 ~has been serendipitously detected (but unreported) in X-rays with a pointed \textit{ROSAT} PSPC observation of 6.6~ks. We have estimated the bolometric X-ray luminosity assuming that all detected flux (background corrected) comes from the ICM: $L_{X, \rm bol} \sim 5 \times 10^{45}\,$erg~s$^{-1}$. Such a luminosity is well above a typical cluster X-ray luminosity and may be contaminated by the X-ray emission from one or both quasars. On the other hand, the typical quasar X-ray luminosity [2--10 keV] is around $10^{44}\,$erg~s$^{-1}$, thus the quasars in the pair may not account for all the X-ray emission.
Besides, the total emission [0.5--8.0 keV] within 3 arcmin is about 10 times higher than the typical quasar emission in this band; therefore the observed X-ray flux is consistent with emission from the quasars and a possible cluster around them. This is the only pair in our sample detected in X-rays so far. The other fields were not detected in the \textit{ROSAT} All Sky Survey, nor in pointed observations. \subsubsection{QP0114-3140} This pair is formed by radio-quiet quasars. J011441-3139 ~has $z = 0.974$ and J011446-3141 ~has $z = 0.968$. The separation between them is 144 arcsec (1.1 Mpc). This field shows a significant overdensity, no clustering, and no red sequence. It also seems rich by Abell's criterion and has a comparatively large number of bright galaxies. The evidence for the presence of a rich cluster at the redshift of the quasar pair is not as compelling in this case as for the other three pairs. \section{Summary} We have studied the environment traced by quasar pairs at $z \sim 1$, using images in $g'$, $r'$, $i'$, and $z'$ bands obtained with GMOS at Gemini North and South. In order to identify galaxies in a redshift interval close to that of the quasar pairs, we have estimated photometric redshifts with the LWR method, using ACS-GOODS data as a training set. The rms dispersion of the difference between our photometric redshifts and the spectroscopic redshifts in the training set is $\sigma_z = 0.16$. We have adopted the interval $z = z_{pair}$ $\pm$ $\sigma_z$ for the analysis of the pair environment. When compared with the HHDFN region, all fields show a significant overdensity in the redshift interval of the pair. In all cases this excess is larger than 3.5 $\sigma$. We investigated the clustering of the galaxies near the pair by estimating a confidence level, $CL$, that the galaxies are more concentrated than in a uniform distribution. We have also estimated the richness of each redshift interval with a variant of Abell's criterion, as well as by the number of bright galaxies. We verified whether a red sequence is present and examined the form of the projected distribution of the red galaxies. The analysis indicates that probably three out of our four quasar pairs are members of galaxy clusters. For one of the pairs we did not find strong evidence for it: QP0114-3140~ could be in a poor cluster, a group, or in the neighborhood of a cluster, since our fields are smaller than Abell's radius. Taken at face value, this result shows that quasar pairs are indeed good tracers of large-scale structure at high $z$. However, with only four quasar pairs in our sample we are not able to say at what level targeting a quasar pair increases the probability of finding a rich galaxy cluster as compared to targeting a single quasar. A study of larger, homogeneous samples would be necessary to clarify this point. An extension of our work to other redshifts may also be useful and may provide interesting clues on the evolution of large-scale structure and galaxy clustering. \acknowledgments This work is based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Particle Physics and Astronomy Research Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), CNPq (Brazil) and CONICET (Argentina).
We would like to thank the Gemini staff for obtaining the observations in queue mode. We also thank G. B. Lima Neto for many useful discussions, and the anonymous referee, whose comments helped to improve this paper. We are grateful for the support provided by the Brazilian agencies CNPq and FAPESP. We made use of the NASA/IPAC Extragalactic Database, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
2,869,038,156,159
arxiv
\makeatletter \def\section{\@startsection{section}{1}{\z@} {3ex plus-1ex minus-.2ex}{1pt plus1pt}{\large\sf\bfseries\boldmath}} \def\subsection{\@startsection{subsection}{2}{\z@} {1.5ex plus-1ex minus-.2ex}{0.01pt plus1pt}{\sf\slshape}} \def\subsubsection{\@startsection{subsubsection}{3}{\z@} {1.5ex plus-1ex minus-.2ex}{0.01pt plus0.2pt}{\sf\boldmath}} \def\paragraph{\@startsection{paragraph}{4}{\z@} {.75ex \@plus.5ex \@minus.2ex}{-2mm}{\sf\bfseries\boldmath}} \makeatother \allowdisplaybreaks \seceq \begin{document} \thispagestyle{empty} \noindent{\small \hfill{HET-1783} \\ $~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ $~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ {} } \vspace*{8mm} \begin{center} {\large \bf On Linearized Nordstr\" om Supergravity in \vskip0.1pt Eleven and Ten Dimensional Superspaces (2)} \\ [12mm] {\large { S.\ James Gates, Jr.,\footnote{[email protected]}$^{a}$} Yangrui Hu\footnote{[email protected]}$^{a}$, Hanzhi Jiang\footnote{[email protected]}$^{a,b}$, and S.-N. Hazel Mak\footnote{[email protected]}$^{a}$ } \\*[12mm] \emph{ \centering $^{a}$Brown Theoretical Physics Center, and \\[1pt] Department of Physics, Brown University, \\[1pt] Box 1843, 182 Hope Street, Barus \& Holley 545, Providence, RI 02912, USA \\[12pt] ${}^{b}$Department of Physics \& Astronomy, \\[1pt] Rutgers University, Piscataway, NJ 08855-0849, USA } \\*[78mm] { ABSTRACT}\\[4mm] \parbox{142mm}{\parindent=2pc\indent\baselineskip=14pt plus1pt We present aspects of the component description of linearized Nordstr\" om Supergravity in eleven and ten dimensions. The presentation includes low order component fields in the supermultiplet, the supersymmetry variations of the scalar graviton and gravitino trace, their supercovariantized field strengths, and the supersymmetry commutator algebra of these theories. } \end{center} \vfill \noindent PACS: 11.30.Pb, 12.60.Jv\\ Keywords: supersymmetry, scalar supergravity, off-shell \vfill \clearpage \newpage \section{Introduction} A mathematically consistent (however requiring restrictions on the allowed general coordinate transformations) and simplified version of gravitation is provided by a variant \cite{N1,N2} that may be called ``Nordstr\" om Gravity.'' In a previous work \cite{NordSG1}, we initiated a program of investigating\footnote{In this previous work, a substantial citation review is undertaken and interested readers are encouraged to familiarize themselves with the literature via this means.} whether one can construct Nordstr\" om Supergravity extensions in eleven and ten dimensional spacetimes of this simplified limit of gravitation at the linearized level. One pointed motivation for our efforts has been the recent progress \cite{PF1,PF2} in the derivation of M-Theory corrections to 11D Supergravity. A series of procedures connecting the corrections to a 3D, $\cal N$ = 2 Chern-Simons theory \cite{CST1,CST2,CST3} (used in a role roughly analogous to world-sheet $\sigma$-model $\beta$-function calculations for string corrections) has been successfully demonstrated. Though the works in \cite{PF1,PF2} have presented a method of deriving these corrections beyond the supergravity limit, these {\it {solely}} treat purely bosonic M-Theory corrections, with no equivalent results describing fermionic corrections.
One traditional way of accomplishing this is to embed the purely bosonic results into a superspace formulation. This impels us to a renewed interest in 11D supergravity in superspace. The goal we are pursuing is to find a Salam-Strathdee superspace \cite{Ssp8c}, as modified by Wess \& Zumino \cite{WZ1,WZ2}, such that superspace Bianchi identities do {\em {not imply}} equations of motion for the component fields contained within the superspace description of Nordstr\" om SG. In particular, we are {\em {not}} currently investigating the prospect of writing action formulae for such supermultiplets of fields. While actions are the ``gold standard,'' it is useful to recall (as done below) that this is not the first time the off-shell structure of a supergravity theory has been explored, {\it {without}} the additional exploration of an action principle. This distinguishes our work from efforts taken by others. For example, there is a substantial literature that uses the concept of ``pure spinors'' \cite{P1,P2,P3,P4,P5,P6,P7} where the endpoint of action principles (see especially \cite{P6,P7}) has been presented. While classically such approaches appear to work, there are troubling signs \cite{Q1,Q2,Q3,Q4} that more needs to be done to justify complete acceptance at the level of quantum theory. Perhaps an intuitive way to argue is that, in order to achieve quantization, it should be implemented in terms of variables that are basically free. The non-minimal pure spinor goes some way to giving a free resolution, but the resulting space of fields does not have a well-defined trace. The work of \cite{Q3} offered a ``fix'' but {\em {if}} in the end the prescription still is not an integral over free variables with no other qualifications, then a proper quantization is still likely to be impeded. Therefore, we adopt the rather more cautious approach by raising the query of whether it is possible to follow the pathway established by Wess \& Zumino \cite{WZ1,WZ2} wherein a ``simple'' Salam-Strathdee superspace is used as a starting point for building, in our case, Nordstr\" om SG in which Bianchi identities do not force equations of motion. While it is often overlooked, the first off-shell description of 4D, $\cal N$ = 1 supergravity was actually carried out by Breitenlohner \cite{B1} who took an approach equivalent to starting with the component fields of the Wess-Zumino gauge 4D, $\cal N$ = 1 vector supermultiplet $(v{}_{\un{a}}, \, \lambda_b, {\rm d})$ together with their familiar SUSY transformation laws, \begin{equation} \eqalign{ {\rm D}_a \, v{}_{\un b} ~&=~ (\gamma_{\un b}){}_a {}^c \, \lambda_c ~~~, ~~~ {~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~} \cr {\rm D}_a \lambda_b ~&=~ - \,i \, \fracm 14 ( [\, \gamma^{\un c}\, , \, \gamma^{\un d} \,]){}_a{}_b \, (\, \pa_{\un c} \, v{}_{\un d} ~-~ \pa_{\un d} \, v{}_{\un c} \, ) ~+~ (\gamma^5){}_{a \,b} \, {\rm d} ~~, {~~~~~~} \cr {\rm D}_a \, {\rm d} ~&=~ i \, (\gamma^5 \gamma^{\un c} ){}_a {}^b \, \, \pa_{\un c} \lambda_b ~~~, \cr } \label{V1} \end{equation} followed by choosing as the gauge group the space time translations, SUSY generators, and the spin angular momentum generators, as well as allowing additional internal symmetries.
For the space time translations, this requires a series of replacements of the fields according to: \begin{equation} \eqalign{ v{}_{\un b} ~& \to ~ h{}_{\un{b} \, \un{c}} ~~~, ~~~ \lambda_b ~ \to ~ \psi{}_{\un{c} \, b} \,~~~, ~~~ {\rm d} ~ \to ~ A{}_{\un{c}} ~~~~~, } \label{V2} \end{equation} (in the notation in \cite{B1} $A{}_{\un{a}} $ = $B^5{}_{\un{a}}$) while for the SUSY generators, the replacements occur according to: \begin{equation} \eqalign{ v{}_{\un b} ~& \to ~ \chi{}_{\un{b} \, c} ~~~, ~~~ \lambda_b ~ \to ~ \phi{}_{b \, c} \, ~~~, ~~~ {\rm d} ~ \to ~ \chi{}_{c}{}^5 ~~~~~, } \end{equation} and finally for the spin angular momentum generator, a replacement of \begin{equation} \eqalign{ v{}_{\un b} ~& \to ~ \omega{}_{\un{b} \, \un{c} \, \un{d}} ~~~, ~~~ \lambda_b ~ \to ~ \chi{}_{b \, \un{c} \, \un{d}} \, ~~~, ~~~ {\rm d} ~ \to ~ D{}_{\un{c} \, \un{d}} ~~~~~, } \end{equation} was used. However, to be more exact, Breitenlohner also allowed for more symmetries like chirality to be included. Because the vector supermultiplet was off-shell (up to WZ gauge transformations), the resulting supergravity theory was off-shell and included a redundant set of auxiliary component fields, i.\ e.\ this is not an irreducible description of supergravity. But as seen from (\ref{V2}), the supergravity fields were all present and, together with the remaining component fields, a complete superspace geometry can be constructed. In our approach to Nordstr\" om SG, the analog of the Wess-Zumino gauge 4D, $\cal N$ = 1 vector supermultiplet is played by a scalar superfield in any of the 11D or 10D superspaces to be studied. This scalar superfield guarantees off-shell supersymmetry. However, like the approach taken by Breitenlohner, the resulting theory is expected to be reducible. Also like this earlier approach, the question of an action principle is not addressed. The structure of the remainder of this work looks as follows. In chapter two, a review of 4D, $\cal N$ = 1 supergravity in superspace is given. This provides a detailed description of how to extract component results from the superspace geometry. Using the foundation in {\it {Superspace}} \cite{SpRSp8BK}, the general formalism for obtaining component level results is reviewed. In this context, the composition rules for the parameters of spacetime translations, SUSY transformations, and Lorentz transformations are presented and related to supergeometrical quantities. Next, the SUSY transformation rules for the frame field, gravitino field, and spin connection are given and related to supergeometrical quantities. Finally, the ``supercovariantized'' field strengths of the frame field, gravitino field, and spin connection are given and related to supergeometrical quantities. This chapter ends with the linearization of these results. Chapter three uses the technology of the second chapter to present the component level description of Nordstr\" om SG for 11D, $\cal N $ = 1 superspace, 10D, $\cal N $ = 2A superspace, 10D, $\cal N $ = 2B superspace, and 10D, $\cal N $ = 1 superspace. Component level descriptions of the local SUSY commutator algebras are provided. Linearized curvature and torsion supertensors are presented and the supersymmetry variations of the linearized Nordstr\" om ``scalar'' graviton, the linearized spin-1/2 Nordstr\" om gravitino, and the spin connection are obtained. Chapter four is a short chapter in comparison to the two that precede it.
In 4D, $\cal N$ = 1 supergravity \cite{S1,SFSG}, the concept of the ``chiral compensator'' was introduced some time ago. We demonstrate evidence that such a compensator exists for the 10D, $\cal N $ = 2B superspace. This is unique among supergravity theories in eleven and ten dimensions. However, we also present evidence that though such chiral superfields appear to consistently exist, the linearized Nordstr\"om superspace is such that a chiral superfield of this type cannot be used as a compensator. The fifth chapter is devoted to our conclusions and a summary. \newpage \section{Superspace Perspective On Component Results} In our previous paper \cite{NordSG1}, we restricted our focus solely to superfield considerations in eleven and ten dimensions. However, given those results, the technology developed in {\em {Superspace}} \cite{SpRSp8BK} allows a presentation of some of the component results. In particular, the equations indicated in section (5.6) in this book can be applied to the case of eleven and ten dimensions. This is true even though the sole focus of the book is the case of 4D, $\cal N$ = 1 supersymmetry. Nonetheless, the discussion in the book can be easily modified for use in 11D and 10D superspace theories. The relevant equations were designated as (5.6.13), (5.6.16) - (5.6.18), (5.6.21), (5.6.22) - (5.6.24), (5.6.28), (5.6.33), and (5.6.34). For the convenience of the reader, we bring these results all together in the text to follow. After this chapter, these are all going to be appropriately modified for the cases of 11D, $\cal N$ = 1, 10D, $\cal N$ = 2A, 10D, $\cal N$ = 2B, and 10D, $\cal N$ = 1 superspaces, respectively. \subsection{Recollection of 4D, $\cal N$ = 1 Component/Superspace Results} In the context of 4D, $\cal N$ = 1 superspace supergravity, we may distinguish among three types of symmetries: \vskip1mm $~$ \noindent (a.) space time translations with generator $iK_{GC}(\xi^{\un {m}})$, dependent \newline \indent $~~~~~~~$ on local parameters $\xi^{\un {m}}(x)$, $~$ \newline \indent $~$ (b.) SUSY transformations with generator $iK_{Q}(\epsilon^{\un {\alpha}})$ dependent on \newline \indent $~~~~~~~$ local parameters $\epsilon^{\un {\alpha}}(x)$, and $~$ \newline \indent $~$ (c.) tangent space transformations with generator $iK_{TS}(\lambda^{\iota})$ dependent \newline \indent $~~~~~~~$ on local parameters $\lambda^{\iota}(x)$. \vskip0.9mm \noindent The tangent space transformations act as ``internal angular momentum,'' chirality, etc. on all ``flat indices'' associated with the superspace quantities.
The commutator algebra of two SUSY transformations generated by $iK_{Q}(\epsilon{}_1{}^{\un {\alpha}})$, and $iK_{Q}(\epsilon{}_2{}^{\un {\alpha}})$, respectively takes the form \begin{equation} \big[ \, iK_{Q}(\epsilon{}_1) ~,~ iK_{Q}(\epsilon{}_2) \, \big] ~=~iK_{GC}(\xi^{\un {m}})+iK_{Q}(\epsilon)+ iK_{TS}(\lambda^{\iota}) ~~~, \label{e1} \end{equation} where the parameters $\xi^{\un {m}}$, $\epsilon^{\un {\delta}}$, and $\lambda^{\iota}$ on the RHS of this equation are quadratic in $\epsilon{}_1$ and $\epsilon{}_2$, dependent on linear and quadratic terms in the gravitino, and linear terms in the superspace torsions and curvature supertensors according to: \begin{align} \xi^{\un {m}} ~=&~ -\Big[(\epsilon_1^{\ \un {\alpha}}\bar{\epsilon}_2^{\ \dot {\un \beta}}+\bar{\epsilon}_1^{\ \dot {\un{\beta}}}\epsilon_2^{\ \un {\alpha}})T_{\un {\alpha} \dot {\un{\beta}}}^{\ \ \un {c}} +\epsilon_1^{\ \un {\alpha}}\epsilon_2^{\ \un {\beta}}T_{\un {\alpha}\un {\beta}}^{\ \ \un {c}} + \bar{\epsilon}_1^{\ \dot {\un{\alpha}}}\bar{\epsilon}_2^{\ \dot {\un{\beta}}}T_{\dot {\un{\alpha}} \dot {\un{\beta}}}^{\ \ \un {c}}\Big] e_{\un {c}}^{\ \un {m}} ~~~, \label{e2} \\ \epsilon^{\un {\delta}} ~=&~ - \Big[ (\epsilon_1^{\ \un {\alpha}}\bar{\epsilon}_2^{\ \dot {\un{\beta}}}+\bar{\epsilon}_1^{\ \dot {\un{\beta}}} \epsilon_2^{\ \un {\alpha}})(T_{\un {\alpha}\dot {\un{\beta}}}^{\ \ \un {\delta}} + T_{\un {\alpha}\dot {\un{\beta}}}^{\ \ \un {c}}\psi_{\un {c}}^{\ \un {\delta}}) + \epsilon_1^{\ \un {\alpha}}\epsilon_2^{\ \un {\beta}}(T_{\un {\alpha}\un {\beta}}^{\ \ \un {\delta}} + T_{\un {\alpha}\un {\beta}}^{\ \ \un {c}}\psi_{\un {c}}^{\ \un {\delta}}) + \bar{\epsilon}_1^{\ \dot {\un{\alpha}}} \bar{\epsilon}_2^{\ \dot {\un{\beta}}}(T_{\dot {\un{\alpha}}\dot {\un{\beta}}}^{\ \ \un {\delta}}+T_{\dot {\un{\alpha}}\dot {\un{\beta}}}^{\ \ \un {c}}\psi_{\un {c}}^{\ \un {\delta}}) \Big] ~~~, \label{e4} \\ \lambda^{\iota} ~=&~ - \Big[(\epsilon_1^{\ \un {\alpha}}\bar{\epsilon}_2^{\ \dot {\un{\beta}}} + \bar{\epsilon}_1^{\ \dot {\un{\beta}}}\epsilon_2^{\ \un {\alpha}})(R_{\un {\alpha}\dot {\un{\beta}}}^{\ \ \ \iota} + T_{\un {\alpha}\dot {\un{\beta}}}^{\ \ \un {c}}\phi_{\un {c}}^{\ \iota}) + \epsilon_1^{\ \un {\alpha}}\epsilon_2^{\ \un {\beta}}(R_{\un {\alpha}\un {\beta}}^{\ \ \ \iota} + T_{\un {\alpha} \un {\beta}}^{\ \ \un {c}}\phi_{\un {c}}^{\ \iota}) + \bar{\epsilon}_1^{\ \dot {\un{\alpha}}}\bar{\epsilon}_2^{\ \dot {\un{\beta}}}(R_{\dot {\un{\alpha}}\dot {\un{\beta}}}^{\ \ \ \iota}+T_{\dot {\un{\alpha}}\dot {\un{\beta}}}^{\ \ \un {c}} \phi_{\un {c}}^{\ \iota}) \Big] ~~~. \label{e3} \end{align} The supersymmetry variations of the inverse frame field $e_{\un {a}}^{\ \un {m}}(x)$, gravitino $\psi_{\un {a}}^{\ \un {\delta}}(x)$, and connection fields for the tangent space symmetries $\phi_{\un {a}}^{\ \iota} (x)$ take the forms below and are expressed in terms dependent on linear and quadratic in the gravitino, and linear in the superspace torsions and curvature supertensors. 
\begin{align} \delta_{Q}e_{\un {a}}^{\ \un {m}} ~=&~ - \Big[~\epsilon^{\un {\beta}}T_{\un {\beta}\un {a}}^{\ \ \un {d}} +\bar{\epsilon }^{\dot {\un{\beta}}}T_{\dot {\un{\beta}}\un {a}}^{\ \ \un {d}} +(\bar{\epsilon}^{\dot {\un{\beta}}}\psi_{\un {a}}^{\ \un {\gamma}} + \epsilon^{\un {\gamma}}\bar{\psi}_{\un {a}}^{\ \dot {\un{\beta}}})T_{\dot {\un{\beta}}\un {\gamma}}^{\ \ \un {d}} \nonumber\\ & {~~~~~} + \epsilon^{\un {\beta}}\psi_{\un {a}}^{\ \un {\gamma}}T_{\un {\gamma}\un {\beta}}^{\ \ \un {d}} +\bar{\epsilon}^{\dot {\un{\beta}}}\bar{\psi}_{\un {a}}^{\ \dot {\un{\gamma}}}T_{\dot {\un{\beta}}\dot {\un{\gamma}}}^{\ \ \un {d}}~\Big] \, e_{\un {d}}^{\ \un {m}} ~~~, \label{e5} \\ \delta_{Q}\psi_{\un {a}}^{\ \un {\delta}} ~=&~ \textbf{{\rm D}}_{\un {a}}\epsilon^{\un {\delta}} - \epsilon^{\un {\beta}} (T_{\un {\beta}\un {a}}^{\ \ \un {\delta}} + T_{\un {\beta}\un {a}}^{\ \ \un {e}}\psi_{\un {e}}^{\ \un {\delta}}) - \bar{\epsilon}^{\dot {\un{\beta}}}(T_{\dot {\un{\beta}}\un {a}}^{\ \ \un {\delta}} + T_{\dot {\un{\beta}}\un {a}}^{\ \ \un {e}}\psi_{\un {e}}^{\ \un {\delta}}) \nonumber\\ & -(\bar{\epsilon}^{\dot {\un{\beta}}}\psi_{\un {a}}^{\ \un {\gamma}} + \epsilon^{\un {\gamma}}\bar{\psi}_{\un {a}}^{\ \dot {\un{\beta}}})(T_{\un {\gamma}\dot {\un{\beta}}}^{\ \ \un {\delta}} + T_{\un {\gamma}\dot {\un{\beta}}}^{\ \ \un {e}}\psi_{\un {e}}^{\ \un {\delta}}) \nonumber\\ & -\epsilon^{\un {\beta}}\psi_{\un {a}}^{\ \un {\gamma}}(T_{\un {\beta}\un {\gamma}}^{\ \ \un {\delta}} + T_{\un {\beta} \un {\gamma}}^{\ \ \un {e}}\psi_{\un {e}}^{\ \un {\delta}}) -\bar{\epsilon}^{\dot {\un{\beta}}}\bar{\psi}_{\un {a}}^{\ \dot {\un{\gamma}}}(T_{\dot {\un{\beta}}\dot {\un{\gamma}}}^{\ \ \un {\delta}} + T_{\dot {\un{\beta}}\dot {\un{\gamma}}}^{\ \ \un {e}}\psi_{\un {e}}^{\ \un {\delta}}) ~~~, \label{e6} \\ \delta_{Q}\phi_{\un {a}}^{\ \iota} ~=&~ - \epsilon^{\un {\beta}}(R_{\un {\beta}\un {a}}^{\ \ \ \iota} + T_{\un {\beta}\un {a}}^{\ \ \un {e}}\phi_{\un {e}}^{\ \iota}) - \bar{\epsilon}^{\dot {\un{\beta}}}(R_{\dot {\un{\beta}}\un {a}}^{\ \ \ \iota} + T_{\dot {\un{\beta}}\un {a}}^{\ \ \un {e}}\phi_{\un {e}}^{\ \iota}) \nonumber\\ & -(\bar{\epsilon}^{\dot {\un{\beta}}}\psi_{\un {a}}^{\ \un {\gamma}} + \epsilon^{\un {\gamma}}\bar{\psi}_{\un {a}}^{\ \dot {\un{\beta}}})(R_{\un {\gamma}\dot {\un{\beta}}}^{\ \ \ \iota} + T_{\un {\gamma}\dot {\un{\beta}}}^{\ \ \un {e}} \phi_{\un {e}}^{\ \iota}) \nonumber\\ & -\epsilon^{\un {\beta}}\psi_{\un {a}}^{\ \un {\gamma}}(R_{\un {\beta}\un {\gamma}}^{\ \ \ \iota} + T_{\un {\beta}\un {\gamma}}^{\ \ \un {e}}\phi_{\un {e}}^{\ \iota}) -\bar{\epsilon}^{\dot {\un{\beta}}}\bar{\psi}_{\un {a}}^{\ \dot {\un{\gamma}}}(R_{\dot {\un{\beta}}\dot {\un{\gamma}}}^{\ \ \ \iota} + T_{\dot {\un{\beta}}\dot {\un{\gamma}}}^{\ \ \un {e}}\phi_{\un {e}}^{\ \iota}) ~~~. \label{e7} \end{align} The supersymmetry covariantized versions of the torsions, gravitino field strength and field strengths associated respective with the inverse frame field $e_{\un {a}}^{\ \un {m}}(x)$, gravitino $\psi_{\un {a}}^{\ \un {\delta}}(x)$, and connection fields for the tangent space symmetries $\phi_{\un {a}}^{\ \iota} (x)$ take the forms below and are expressed in terms dependent on linear and quadratic in the gravitino, and linear in the superspace torsions and curvature supertensors. 
\begin{align} T_{\un {a}\un {b}}^{\ \ {\un c}} ~= &~ t_{\un {a}\un {b}}^{\ \ {\un c}} + \psi_{[\un {a}}^{\ \un {\delta}} T_{\un {\delta}\un {b}]}^{\ \ {\un c}} + \bar{\psi}_{[\un {a}}^{\ \dot {\un{\delta}}}T_{\dot {\un{\delta}}\un {b}]}^{\ \ {\un c}} + \psi_{[\un {a}}^{\ \un {\delta}}\bar{\psi}_{\un {b}]}^{\ \dot {\un{\epsilon}}}T_{\un {\delta}\dot {\un{\epsilon}}}^{\ \ {\un c}} + \psi_{\un {a}}^{\ \un {\delta}}\psi_{\un {b}}^{\ \un {\epsilon}}T_{\un {\delta}\un {\epsilon}}^{\ \ {\un c}} +\bar{\psi}_{\un {a}}^{\ \dot {\un{\delta}}}\bar{\psi}_{\un {b}}^{\ \dot {\un{\epsilon}}} T_{\dot {\un{\delta}}\dot {\un{\epsilon}}}^{\ \ {\un c}} ~~~, \label{e9} \\ T_{\un {a}\un {b}}^{\ \ \gamma} ~= &~ t_{\un {a}\un {b}}^{\ \ \gamma} + \psi_{[\un {a}}^{\ \un {\delta}} T_{\un {\delta}\un {b}]}^{\ \ \gamma} + \bar{\psi}_{[\un {a}}^{\ \dot {\un{\delta}}}T_{\dot {\un{\delta}}\un {b}]}^{\ \ \gamma} + \psi_{[\un {a}}^{\ \un {\delta}}\bar{\psi}_{\un {b}]}^{\ \dot {\un{\epsilon}}}T_{\un {\delta}\dot {\un{\epsilon}}}^{\ \ \gamma} + \psi_{\un {a}}^{\ \un {\delta}}\psi_{\un {b}}^{\ \un {\epsilon}}T_{\un {\delta}\un {\epsilon}}^{\ \ \gamma} +\bar{\psi}_{\un {a}}^{\ \dot {\un{\delta}}}\bar{\psi}_{\un {b}}^{\ \dot {\un{\epsilon}}} T_{\dot {\un{\delta}}\dot {\un{\epsilon}}}^{\ \ \gamma} ~~~, \label{e8} \\ R_{\un {a}\un {b}}^{\ \ \iota} ~= &~ r_{\un {a}\un {b}}^{\ \ \iota} + \psi_{[\un {a}}^{\ \un {\delta}} R_{\un {\delta}\un {b}]}^{\ \ \iota} + \bar{\psi}_{[\un {a}}^{\ \dot {\un{\delta}}}R_{\dot {\un{ \delta}}\un {b}]}^{\ \ \iota} + \psi_{[\un {a}}^{\ \un {\delta}}\bar{\psi}_{\un {b}]}^{\ \dot {\un{\epsilon}}} R_{\un {\delta}\dot {\un{\epsilon}}}^{\ \ \ \iota} + \psi_{\un {a}}^{\ \un {\delta}}\psi_{\un {b}}^{\ \un {\epsilon}}R_{\un {\delta}\un {\epsilon}}^{\ \ \iota} +\bar{\psi}_{\un {a}}^{\ \dot {\un{\delta}}}\bar{\psi}_{\un {b}}^{\ \dot {\un{\epsilon}}} R_{\dot {\un{\delta}}\dot {\un{\epsilon}}}^{\ \ \iota} ~~~ . \label{e10} \end{align} In the linearized limit of these theories, not all of the terms in (\ref{e2}) - (\ref{e10}) appear. 
Instead these equations take the forms \begin{align} \xi^{\un {m}} ~=&~ -\Big[(\epsilon_1^{\ \un {\alpha}}\bar{\epsilon}_2^{\ \dot {\un \beta}}+\bar{\epsilon}_1^{\ \dot {\un{\beta}}}\epsilon_2^{\ \un {\alpha}})T_{\un {\alpha} \dot {\un{\beta}}}^{\ \ \un {c}} +\epsilon_1^{\ \un {\alpha}}\epsilon_2^{\ \un {\beta}}T_{\un {\alpha}\un {\beta}}^{\ \ \un {c}} + \bar{\epsilon}_1^{\ \dot {\un{\alpha}}}\bar{\epsilon}_2^{\ \dot {\un{\beta}}}T_{\dot {\un{\alpha}} \dot {\un{\beta}}}^{\ \ \un {c}}\Big]e_{\un {c}}^{\ \un {m}} ~~~, \label{eF2} \\ \epsilon^{\un {\delta}} ~=&~ -\Big[ (\epsilon_1^{\ \un {\alpha}}\bar{\epsilon}_2^{\ \dot {\un{\beta}}}+\bar{\epsilon}_1^{\ \dot {\un{\beta}}} \epsilon_2^{\ \un {\alpha}})(T_{\un {\alpha}\dot {\un{\beta}}}^{\ \ \un {\delta}}+T_{\un \alpha\dot{\un \beta}}^{\ \ \un c}\psi_{\un c}^{\ \un \delta}) + \epsilon_1^{\ \un {\alpha}}\epsilon_2^{\ \un {\beta}}( T_{\un {\alpha}\un {\beta}}^{\ \ \un {\delta}}+T_{\un {\alpha}\un {\beta}}^{\ \ \un c}\psi_{\un c}^{\ \un \delta}) + \bar{\epsilon}_1^{\ \dot {\un{\alpha}}}\bar{\epsilon}_2^{\ \dot {\un {\beta}}}(T_{\dot {\un{\alpha}}\dot {\un{\beta}}}^{\ \ \un {\delta}}+T_{\underline{\dot\alpha}\underline{\dot\beta}}^{\ \ \un c}\psi_{\un c}^{\ \un \delta})\Big] ~~~, \label{eF4} \\ \lambda^{\iota} ~=&~ -\Big[(\epsilon_1^{\ \un {\alpha}}\bar{\epsilon}_2^{\ \dot {\un{\beta}}}+\bar{\epsilon}_1^{\ \dot {\un{\beta}}}\epsilon_2^{\ \un {\alpha}})(R_{\un {\alpha}\dot {\un{\beta}}}^{\ \ \ \iota} +T_{\un\alpha\dot{\un\beta}}^{\ \ \un c}\phi_{\un c}^{\ \iota}) + \epsilon_1^{\ \un {\alpha}}\epsilon_2^{\ \un {\beta}}(R_{\un {\alpha} \un {\beta}}^{\ \ \ \iota}+T_{\un\alpha\un\beta}^{\ \ \un c}\phi_{\un c}^{\ \iota}) + \bar{\epsilon}_1^{\ \dot {\un{\alpha}}}\bar{\epsilon}_2^{\ \dot {\un{\beta}}}(R_{\dot {\un{\alpha}} \dot {\un{\beta}}}^{\ \ \ \iota}+T_{\dot{\un\alpha}\dot{\un\beta}}^{\ \ \un c}\phi_{\un c}^{\ \iota})\Big] ~~~, \label{eF3} \\ \delta_{Q}e_{\un {a}}^{\ \un {m}} ~=&~ -\Big[~\epsilon^{\un {\beta}}T_{\un {\beta}\un {a}}^{\ \ \un {d}} +\bar{\epsilon }^{\dot {\un{\beta}}}T_{\dot {\un{\beta}}\un {a}}^{\ \ \un {d}} +(\bar{\epsilon}^{\dot {\un{\beta}}}\psi_{\un {a}}^{\ \un {\gamma}} + \epsilon^{\un {\gamma}}\bar{\psi}_{\un {a}}^{\ \dot {\un{\beta}}})T_{\dot {\un{\beta}}\un {\gamma}}^{\ \ \un {d}} + \epsilon^{\un {\beta}}\psi_{\un {a}}^{\ \un {\gamma}}T_{\un {\gamma}\un {\beta}}^{\ \ \un {d}} +\bar{\epsilon}^{\dot {\un{\beta}}}\bar{\psi}_{\un {a}}^{\ \dot {\un{\gamma}}}T_{\dot {\un{\beta}}\dot {\un{\gamma}}}^{\ \ \un {d}}\Big] \, e_{\un {d}}^{\ \un {m}} ~~~, \label{eF5} \\ \delta_{Q}\psi_{\un {a}}^{\ \un {\delta}} ~=&~ \textbf{{\rm D}}_{\un {a}}\epsilon^{\un {\delta}} - \epsilon^{\un {\beta}} T_{\un {\beta}\un {a}}^{\ \ \un {\delta}} - \bar{\epsilon}^{\dot {\un{\beta}}}T_{\dot {\un{\beta}}\un {a}}^{\ \ \un {\delta}} ~~~, \label{eF6} \\ \delta_{Q}\phi_{\un {a}}^{\ \iota} ~=&~ - \epsilon^{\un {\beta}}R_{\un {\beta}\un {a}}^{\ \ \ \iota} - \bar{\epsilon}^{\dot {\un{\beta}}}R_{\dot {\un{\beta}}\un {a}}^{\ \ \ \iota} ~~~, \label{eF7} \\ T_{\un {a}\un {b}}^{\ \ \gamma} ~= &~ t_{\un {a}\un {b}}^{\ \ \gamma} ~~~, \label{eF8} \\ T_{\un {a}\un {b}}^{\ \ {\un c}} ~= &~ t_{\un {a}\un {b}}^{\ \ {\un c}} ~~~, \label{eF9} \\ R_{\un {a}\un {b}}^{\ \ \iota} ~= &~ r_{\un {a}\un {b}}^{\ \ \iota} ~~~ . \label{eF10} \end{align} The terms on the RHS of the final three equation correspond to the non-supercovariantized versions of the respective torsions, gravitino field strength and connection field strengths. 
\newpage \@startsection{section}{1}{\z@{Higher Dimensional Component Considerations} In the following four subsections, we will appropriately adapt these results to the cases of eleven and ten dimensional formulations appropriate for Nordstr\" om supergravity in those contexts. There are four steps: \newline \indent (a.) define a Nordstr\" om SG linearized superspace supercovariant derivative in terms \newline \indent $~~~~~$ of a scalar prepotential leading to component fields, \newline \indent (b.) express the geometrical tensors of each respective superspace in terms of the \newline \indent $~~~~~$ component fields presented in the previous part, \newline \indent (c.) express the `composition rules' of the parameters of general coordinate, local \newline \indent $~~~~~$ Lorentz, and local SUSY transformations, and \newline \indent (d.) write the component level SUSY transformation laws \vskip02.pt \noindent that we undertake in each of the four cases of 11D, $\cal N$ = 1, 10D, $\cal N$ = 1, 10D, $\cal N$ = 2A, and 10D, $\cal N$ = 2B, theories. \@startsection{subsection}{2}{\z@{Adaptation To 11D, $\cal N$ = 1 Component/Superspace Results: Step 1} In the case of the 11D N(ordstr\" om)-SG covariant derivatives we define \begin{align} \nabla_{\alpha} ~=~& {\rm D}_{\alpha} + \frac{1}{2} \Psi {\rm D}_{\alpha} + \frac{1}{10} (\gamma^{\un{d}\un{e}})_{\alpha}{}^{\beta} ({\rm D}_{\beta}\Psi) {\cal M}_{\un{d}\un{e}} ~~~, \\ \nabla_{\un{a}} ~=~& \pa_{\un{a}} + \Psi\pa_{\un{a}} + i \frac{1}{4} (\gamma_{\un{a}} )^{\alpha\beta} ({\rm D}_{\alpha}\Psi) {\rm D}_{\beta} - (\pa_{\un{c}}\Psi) {\cal M}_{\un {a}}{}^{\un{c}} ~~~, \label{11d-1} \end{align} and ``split'' the spatial 11D N-SG covariant derivative into two parts \begin{equation} \nabla_{\un{a}}| ~=~ {\bf{D}}_{\un{a}} + \psi_{\un{a}}{}^{\gamma} \nabla_{\gamma}| ~~~. \label{11d-2} \end{equation} On taking the $\theta$ $\to$ 0 limit the latter term allows an identification with the gravitino and the leading term in this limit yields a component-level linearized gravitationally covariant derivative operator given by \begin{equation} \begin{split} {\bf{D}}_{\un{a}} ~=~& e_{\un{a}} + \phi_{\un a}{}^{\iota} \mathcal{M}_{\iota} ~=~ \pa_{\un{a}} +\Psi \pa_{\un{a}} + \phi_{\un a}{}^{\iota} \mathcal{M}_{\iota} ~~~. \end{split} \label{11d-3} \end{equation} By comparison of the LHS to the RHS of (\ref{11d-3}), we see that a linearized frame field $e_{\un{a}} {}^{\un m}$ = $( \, 1 \,+\, \Psi)\delta{}_{\un{a}} {}^{\un m}$ emerges to describe a scalar graviton. Finally, comparison of the coefficient of the Lorentz generator $ \mathcal{ M}_{\iota} $ as it appears in the latter two forms of (\ref{11d-3}) informs us that the spin connection is given by \begin{equation} \phi_{\un{c}}{}^{\un{d}\un{e}} ~=~ - \frac{1}{2} \delta_{\un{c}}{}^{[\un{d}} (\pa^{\un{e}]} \Psi) ~~~. \end{equation} Comparing the result in (\ref{11d-1}) with the one in (\ref{11d-2}) a component gravitino is identified via \begin{equation} \psi_{\un{a}}{}^{\gamma} ~=~ i \frac{1}{4} (\gamma_{\un{a}})^{\gamma\delta} ({\rm D}_{\delta}\Psi) ~~~. \end{equation} However, as this expression contains an explicit $\gamma$-matrix we see that it really defines the non-conformal $\emph{spin-\fracm{1}{2}}$ part of the gravitino to be \begin{equation} \psi_{\beta} ~\equiv~ (\gamma^{\un{a}})_{\beta\gamma} \psi_{\un{a}}{}^{\gamma} ~~~. \end{equation} This is to be expected. As a Nordstr\" om type theory only contains a scalar graviton, it follows that only the ``$\gamma$-trace'' of the gravitino can occur.
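The numerical coefficient in the relation that follows can be traced to the contraction of the two $\gamma$-matrices implicit in this ``$\gamma$-trace.'' Schematically (suppressing the overall phase fixed by the index-raising conventions), one has \begin{equation} \psi_{\beta} ~=~ (\gamma^{\un{a}})_{\beta\gamma} \, \psi_{\un{a}}{}^{\gamma} ~=~ i \, \frac{1}{4} \, (\gamma^{\un{a}})_{\beta\gamma} (\gamma_{\un{a}})^{\gamma\delta} \, ({\rm D}_{\delta}\Psi) ~\propto~ \frac{11}{4} \, ({\rm D}_{\beta}\Psi) ~~~, \end{equation} since the contraction $(\gamma^{\un{a}})_{\beta\gamma} (\gamma_{\un{a}})^{\gamma\delta}$ is proportional to $11 \, \delta_{\beta}{}^{\delta}$ in eleven dimensions. Inverting this proportionality fixes the magnitude of the coefficient, $4/11$, appearing in the equation below.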
So then we have \begin{equation} {\rm D}_{\beta}\Psi ~=~ i \frac{4}{11} (\gamma^{\un{a}})_{\beta\gamma} \psi_{\un{a}}{}^{\gamma} ~\equiv~ i \frac{4}{11} \psi_{\beta} ~~~, \end{equation} in the $\theta$ $\to$ 0 limit. In order to complete the specification of the geometrical superfields, we also require explicit definitions of the bosonic terms to second order in D-derivatives. So we define bosonic fields: \begin{align} K ~=~ C^{\gamma\delta} ({\rm D}_{\gamma} {\rm D}_{\delta} \Psi) ~~~,~~~ K_{[3]} ~=~ (\gamma_{[3]})^{\gamma\delta} ({\rm D}_{\gamma} {\rm D}_{\delta} \Psi) ~~~,~~~ K_{[4]} ~=~ (\gamma_{[4]})^{\gamma\delta} ({\rm D}_{\gamma} {\rm D}_{\delta} \Psi) ~~~, \label{Fr1} \end{align} or in other words, \begin{equation} \frac{1}{2} {\rm D}_{[\gamma} {\rm D}_{\delta]} \Psi ~=~ \frac{1}{32} \Big\{ C_{\gamma\delta} K - \frac{1}{3!} (\gamma^{[3]})_{\gamma\delta} K_{[3]} + \frac{1}{4!} (\gamma^{[4]})_{\gamma\delta} K_{[4]} \Big\} ~~~. \label{Fr2} \end{equation} We emphasize that the component fields (the $K$'s) are defined by the $\theta$ $\to$ 0 limit of these equations. The results in (\ref{Fr1}) and (\ref{Fr2}) follow from a Fierz identity \begin{equation} \delta{}_{[\gamma }{}^{\alpha} \delta{}_{\delta] }{}^{\beta} ~=~ \frac{1}{16} \Big\{ C_{\gamma\delta} C^{\alpha\beta} - \frac{1}{3!} (\gamma^{[3]})_{\gamma\delta} (\gamma_{[3]})^{\alpha\beta} + \frac{1}{4!} (\gamma^{[4]})_{\gamma\delta} (\gamma_{[4]})^{\alpha\beta} \Big\} ~~~, \end{equation} valid for 11D spinors. \@startsection{subsection}{2}{\z@{Adaptation To 11D, $\cal N$ = 1 Component/Superspace Results: Step 2} Torsions: \begin{align} T_{\alpha\beta}^{\ \ \un{c}} ~=~& i(\gamma^{\un{c}})_{\alpha\beta} ~~~, &&\\ T_{\alpha\beta}^{\ \ \gamma} ~=~& i\frac{3}{110}(\gamma^{[2]})_{\alpha\beta}(\gamma_{ [2]})^{\gamma\delta}\psi_{\delta} ~~~, &&\\ T_{\alpha\un{b}}^{\ \ \un{c}} ~=~& i\frac{3}{11} \Big[ \delta_{\un b}^{\ \un c} \delta_{\alpha}^{\ \beta} + \frac{3}{5} (\gamma_{\un b}^{\ \un c})_{\alpha}^{\ \beta} \Big] \psi_{\beta} ~~~, &&\\ T_{\alpha\un{b}}^{\ \ \gamma} ~=~& i\frac{1}{128} \Big[ - (\gamma_{\un{b}})_{\alpha}^{\ \gamma} K + \frac{1}{2} (\gamma^{[2]})_{\alpha}^{\ \gamma} K_{\un{b}[2]} - \frac{1}{3!} (\gamma_{\un{b}[3]})_{\alpha}^{\ \gamma} K^{[3]} + \frac{1}{3!} (\gamma^{[3]})_{\alpha }^{\ \gamma} K_{\un{b}[3]} \nonumber\\ & \qquad {~\,} - \frac{1}{4!} (\gamma_{\un{b}[4]})_{\alpha}^{\ \gamma} K^{[4]} \Big] + \frac{1 }{8} \Big[ \delta_{\un b}^{\ \un c} \delta_{\alpha}^{\ \gamma} + 3 (\gamma_{\un b}^{\ \un c})_{\alpha}^{\ \gamma} \Big] (\pa_{\un{c}}\Psi) ~~~, &&\\ T_{\un{a}\un{b}}^{\ \ \un{c}} ~=~& 0 ~~~, &&\\ T_{\un{a}\un{b}}^{\ \ \gamma} ~=~& \frac{1}{11}(\gamma_{[\un a})^{\gamma\delta}(\pa_{ \un b]}\psi_{\delta}) ~~~. \end{align} Curvatures: \begin{align} R_{\alpha\beta}^{\ \ \ \un{d}\un{e}} ~=~& \frac{1}{80} \Big[ (\gamma^{\un{d}\un{e}})_{\alpha \beta} K + (\gamma_{[1]})_{\alpha\beta} K^{[1]\un{d}\un{e}} - \frac{1}{3!} (\gamma^{\un{d} \un{e}[3]})_{\alpha\beta} K_{[3]} - \frac{1}{2} (\gamma_{[2]})_{\alpha\beta} K^{[2]\un{d}\un{e }} \nonumber \\ & \qquad + \frac{1}{5!4!} \epsilon^{\un{d}\un{e}[5][4]} (\gamma_{[5]})_{\alpha\beta} K_{[4]} \Big] ~~~, &&\\ R_{\alpha\un{b}}^{\ \ \ \un{d}\un{e}} ~=~& i\frac{4}{11} \Big[ \delta_{\un b}^{\ [\un d} (\pa^{ \un e]} \psi_{\alpha}) + \frac{1}{5} (\gamma^{\un{d}\un{e}})_{\alpha}^{\ \delta}(\pa_{\un b}\psi_{\delta}) \Big] ~~~, &&\\ R_{\un{a}\un{b}}^{\ \ \ \un{d}\un{e}} ~=~& -(\pa_{[\un{a}}\pa^{[\un{d}}\Psi)\delta_{ \un{b}]}^{\ \un{e}]} ~~~.
\end{align} \@startsection{subsection}{2}{\z@{Adaptation To 11D, $\cal N$ = 1 Component/Superspace Results: Step 3} Parameter Composition Rules: \begin{align} \begin{split} \xi^{\un{m}} ~=~& - i \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} (\gamma^{\un{c}})_{\alpha\beta} \delta_{\un{c}}{}^{\un {m}} (1+\Psi) ~~~, \end{split} \\ \begin{split} \lambda^{\un{d}\un{e}} ~=~& - \frac{1}{80} \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} \Big[ (\gamma^{\un{d}\un{ e}})_{\alpha\beta} K + (\gamma_{[1]})_{\alpha\beta} K^{[1]\un{d}\un{e}} - \frac{1}{3!} (\gamma^{\un{d}\un{e}[3]} )_{\alpha\beta} K_{[3]} - \frac{1}{2} (\gamma_{[2]})_{\alpha\beta} K^{[2]\un{d}\un{e}} \\ & \qquad {~~~~~~~~~~} + \frac{1}{5!4!} \epsilon^{\un{d}\un{e}[5][4]} (\gamma_{[5]})_{\alpha\beta} K_{[4]} \Big] + i \frac{1}{2} \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} (\gamma^{[\un{d}})_{\alpha\beta} (\pa^{\un{e}]} \Psi) ~~~, \end{split} \\ \begin{split} \epsilon^{\delta} ~=~& i \frac{1}{11} \epsilon_1^{\alpha}\epsilon_2^{\beta} \left[ (\gamma^{[1]})_{\alpha\beta}(\gamma_{[1] })^{\delta\epsilon} - \frac{3}{10} (\gamma^{[2]})_{\alpha\beta}(\gamma_{[2]})^{\delta\epsilon} \right] \psi_{\epsilon} ~~~. \end{split} \end{align} \@startsection{subsection}{2}{\z@{Adaptation To 11D, $\cal N$ = 1 Component/Superspace Results: Step 4} SUSY transformation laws: \begin{align} \begin{split} \delta_{Q} e_{\un{a}}{}^{\un{m}} ~=~& -i \frac{4}{11} \epsilon^{\beta} \left[ \delta_{\un a}{}^{\un d } \delta_{\beta}{}^{\gamma} + \frac{1}{5} (\gamma_{\un a}{}^{\un d})_{\beta}{}^{ \gamma} \right] \delta_{\un d}{}^{\un m} \psi_{\gamma} ~~~, \end{split} \\ \begin{split} \delta_{Q}\psi_{\un {a}}{}^{\delta} ~=~& (1 + \Psi) \pa_{\un{a}} \epsilon^{\delta} - \epsilon^{\delta} (\pa_{\un{c}} \Psi) \mathcal{M}_{\un{a}}{}^{\un{c}} \\ & - i\frac{1}{128} \epsilon^{\beta} \Big[ - (\gamma_{\un{a}})_{\beta}^{\ \delta} K + \frac{1}{2} (\gamma^{[2]})_{\beta}^{\ \delta} K_{\un{a}[2]} - \frac{1}{3!} (\gamma_{\un{a}[3]})_{\beta}^{\ \delta} K^{[3]} + \frac{1}{3!} (\gamma^{[3]})_{\beta}^{ \ \delta} K_{\un{a}[3]} \\ & \qquad {~~~~~~~~} - \frac{1}{4!} (\gamma_{\un{a}[4]})_{\beta}^{\ \delta} K^{[4]} \Big] - \frac{1}{8} \epsilon^{\beta} \Big[ \delta_{\un{a}}{}^{\un{c}} \delta_{\beta}^{\ \delta} + 3 (\gamma_{\un{a}}{}^{\un{c}})_{\beta}^{\ \delta} \Big] (\pa_{\un{ c}}\Psi) ~~~, \end{split} \\ \begin{split} \delta_{Q} \phi_{\un {a}}{}^{\un{d}\un{e}} ~=~& -i \frac{4}{11} \epsilon^{\beta} \left[ \delta_{\un a} {}^{ [\un d} (\pa^{\un e]} \psi_{\beta}) + \frac{1}{5}(\gamma^{\un{d}\un{e}})_{\beta}{}^{ \delta} (\pa_{\un a}\psi_{\delta}) \right] ~~~. \end{split} \end{align} In the remaining subsections of the chapter, the steps described for the case of the 11D, $\cal N$ = 1 theory above will be repeated, essentially line by line, in each of the cases for 10D, $\cal N$ = 1, 10D, $\cal N$ = 2A, and 10D, $\cal N$ = 2B superspaces. This will imply a certain repetitive nature to the respective presentation. There will only be slight variations in explicit details. We are able to minimize this very slightly by noting that the result in (\ref{11d-3}) applies universally in all three cases. So we will not explicitly rewrite it nor its resultant implications several more times.
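Before turning to the ten dimensional cases, it is also worth recording a simple counting check on the Fierz decomposition (\ref{Fr2}). The antisymmetrized product of two 32-component spinor indices contains \begin{equation} \frac{32 \cdot 31}{2} ~=~ 496 ~=~ 1 ~+~ 165 ~+~ 330 ~~~, \end{equation} independent components, matching the dimensions of $C_{\gamma\delta}$, $(\gamma^{[3]})_{\gamma\delta}$, and $(\gamma^{[4]})_{\gamma\delta}$, respectively. Thus the bosonic fields $K$, $K_{[3]}$, and $K_{[4]}$ of (\ref{Fr1}) exhaust the content of $\Psi$ at second order in the spinorial derivatives.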
\@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 1 Component/Superspace Results: Step 1} In the case of 10D $\mathcal{N} = 1$ N-SG covariant derivatives we define \begin{align} {~~~~~~~~~~} \nabla_{\alpha} ~=~& {\rm D}_{\alpha}+\frac{1}{2}\Psi {\rm D}_{\alpha}+\frac{1}{10}(\sigma^{\un{a}\un{ b}})_{\alpha}^{\ \beta}({\rm D}_{\beta}\Psi){\cal M}_{\un{a}\un{b}} ~~~, &&\\ \nabla_{\un{a}} ~=~& \pa_{\un{a}} + \Psi\pa_{\un{a}} -i\frac{2}{5}(\sigma_{\un{a}})^{\alpha\beta} ({\rm D}_{\alpha}\Psi){\rm D}_{\beta} - (\pa_{\un{c}}\Psi){\cal M}_{\un{a}}{}^{\un{c}} ~~~, \label{10d1-1} \end{align} and ``split'' the spatial 10D $\mathcal{N} = 1$ N-SG covariant derivative into two parts \begin{equation} \nabla_{\un{a}}| ~=~ {\bf{D}}_{\un{a}} + \psi_{\un{a}}{}^{\gamma} \nabla_{\gamma}| ~~~. \label{10d1-2} \end{equation} Comparing the result in (\ref{10d1-1}) with the one in (\ref{10d1-2}) a component gravitino is identified via \begin{equation} \psi_{\un{a}}{}^{\gamma} ~=~ - i \frac{2}{5} (\sigma_{\un{a}})^{\gamma\delta} ({\rm D}_{\delta}\Psi) ~~~. \end{equation} However, as this expression contains an explicit $\sigma$-matrix we see that it defines the non-conformal $\emph{spin-\fracm{1}{2}}$ part of the gravitino to be \begin{equation} \psi_{\beta} ~\equiv~ (\sigma^{\un{a}})_{\beta\gamma} \psi_{\un{a}}{}^{\gamma} ~~~, \end{equation} and it follows that only the ``$\sigma$-trace'' of the gravitino can occur. So then we have \begin{equation} {\rm D}_{\beta}\Psi ~=~ i \frac{1}{4} (\sigma^{\un{a}})_{\beta\gamma} \psi_{\un{a}}{}^{\gamma} ~\equiv~ i \frac{1 }{4} \psi_{\beta} ~~~, \end{equation} in the $\theta$ $\to$ 0 limit. The complete specification of the geometrical superfields also requires explicit definitions of the bosonic terms to second order in D-derivatives. We take advantage of the 10D Fierz identity \begin{equation} \delta{}_{[\gamma }{}^{\alpha} \delta{}_{\delta] }{}^{\beta} ~=~ \frac{1}{48} \, (\sigma^{[3]})_{\gamma\delta} (\sigma_{[3]})^{\alpha\beta} ~~~, \end{equation} valid for 10D spinors, so we may define a bosonic field: \begin{align} G_{[3]} ~=~ (\sigma_{[3]})^{\gamma\delta} ({\rm D}_{\gamma} {\rm D}_{\delta} \Psi) ~~~, \end{align} or in other words, \begin{equation} \frac{1}{2} {\rm D}_{[\gamma} {\rm D}_{\delta]} \Psi ~=~ \frac{1}{16\times 3!} (\sigma^{[3]})_{\gamma\delta} G_{[3]} ~~~. \end{equation} We emphasize that the component field (the $G$) is defined by the $\theta$ $\to$ 0 limit of these equations. \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 1 Component/Superspace Results: Step 2} Torsions: \begin{align} T_{\alpha\beta}^{\ \ \un{c}} ~=~ & i(\sigma^{\un{c}})_{\alpha\beta} ~~~, &&\\ T_{\alpha\beta}^{\ \ \gamma} ~=~ & 0 ~~~, &&\\ T_{\alpha\un{b}}^{\ \ \un{c}} ~=~ & i\frac{3}{20} \left[ \delta_{\un{b}}^{\ \un{c}}\delta_{\alpha}^{\ \delta} + (\sigma_{\un{b}}^{\ \un{c}})_{\alpha}^{\ \delta }\right] \psi_{\delta} ~~~, &&\\ T_{\alpha\un{b}}^{\ \ \gamma} ~=~ & i\frac{1}{80} \Big[-(\sigma^{[2]})_{\alpha}^{\ \gamma} G_{\un{b}[2]} + \frac{1}{3} (\sigma_{\un{b}[3]})_{\alpha}^{\ \gamma} G^{[3]} \Big] - \frac{3}{10} \Big[ \delta_{\un{b}}^{\ \un{c}} \delta_{\alpha}^{\ \gamma} - (\sigma_{\un{b}}^{\ \un{c}})_{\alpha}^{\ \gamma} \Big] (\pa_{\un{c }}\Psi) ~~~, &&\\ T_{\un{a}\un{b}}^{\ \ \un{c}} ~=~ & 0 ~~~, &&\\ T_{\un{a}\un{b}}^{\ \ \gamma} ~=~ & -\frac{1}{10}(\sigma_{[\un a})^{\gamma\delta}(\pa_{\un b]}\psi_{\delta}) ~~~.
\end{align} Curvatures: \begin{align} R_{\alpha\beta}^{\ \ \ \un{d}\un{e}} ~=~ & -i\frac{6}{5}(\sigma^{[\un{d}})_{\alpha\beta} (\pa^{\un{e}]}\Psi) - \frac{1}{40} \Big[ \frac{1}{3!} (\sigma^{\un{d}\un{e}[3]})_{\alpha\beta} G_{[3]} + (\sigma_{[1]})_{\alpha\beta} G^{[1]\un{d}\un{e}} \Big] ~~~, &&\\ R_{\alpha\un{b}}^{\ \ \ \un{d}\un{e}} ~=~ & i \frac{1}{4} \Big[ \delta_{\un b}^{\ [\un d}(\pa^{\un e]} \psi_{\alpha}) + \frac{1}{5}(\sigma^{\un{d}\un{e}})_{\alpha}^{\ \gamma} (\pa_{\un b}\psi_{\gamma}) \Big] ~~~, &&\\ R_{\un{a}\un{b}}^{\ \ \ \un{d}\un{e}} ~=~ & -(\pa_{[\un{a}}\pa^{[\un{d}} \Psi)\delta_{\un{b}]}^{\ \un{e}]} ~~~. \end{align} \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 1 Component/Superspace Results: Step 3} Parameter Composition Rules: \begin{align} \xi^{\un{m}} ~=~& - i \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} (\sigma^{\un{c}})_{\alpha\beta} \delta_{\un{c}}{}^{\un {m}} (1+\Psi) ~~~, \\ \lambda^{\un{d}\un{e}} ~=~& \frac{1}{40} \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} \Big[ \frac{1}{3!} (\sigma^{ \un{d}\un{e}[3]})_{\alpha\beta} G_{[3]} + (\sigma_{[1]})_{\alpha\beta} G^{[1]\un{d}\un{e}} \Big] + i \frac{ 17}{10} \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} (\sigma^{[\un{d}})_{\alpha\beta} (\pa^{\un{e}]} \Psi) ~~~, \\ \epsilon^{\delta} ~=~& - i \frac{1}{10} \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} (\sigma^{\un{c}})_{\alpha\beta} (\sigma_{\un{c}})^{\delta\epsilon} \psi_{\epsilon} ~~~. \end{align} \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 1 Component/Superspace Results: Step 4} SUSY transformation laws: \begin{align} \delta_{Q} e_{\un{a}}{}^{\un{m}} ~=~& -i \frac{1}{4} \epsilon^{\beta} \left[ \delta_{\un a}{}^{\un d} \delta_{\beta}{}^{\gamma} + \frac{1}{5} (\sigma_{\un a}{}^{ \un d})_{\beta}{}^{\gamma} \right] \delta_{\un d}{}^{\un m} \psi_{\gamma} ~~~, \\ \begin{split} \delta_{Q}\psi_{\un {a}}{}^{\delta} ~=~& (1 + \Psi) \pa_{\un{a}} \epsilon^{\delta} - \epsilon^{\delta} (\pa_{\un{c}} \Psi) \mathcal{M}_{\un{a}}{}^{\un{c}} \\ & - i\frac{1}{80} \epsilon^{\beta} \Big[-(\sigma^{[2]})_{\beta}^{\ \delta} G_{\un{a}[2]} + \frac{1}{3} (\sigma_{ \un{a}[3]})_{\beta}^{\ \delta} G^{[3]} \Big] + \frac{3}{10} \epsilon^{\beta} \Big[ \delta_{\un{a}}^{\ \un{c}} \delta_{\beta}^{ \ \delta} - (\sigma_{\un{a}}^{\ \un{c}})_{\beta}^{\ \delta} \Big] (\pa_{\un{c}}\Psi) ~~~, \end{split} \\ \delta_{Q} \phi_{\un {a}}{}^{\un{d}\un{e}} ~=~& - i \frac{1}{4} \epsilon^{\beta} \left[ \delta_{\un a }^{\ [\un d} (\pa^{\un e]} \psi_{\beta}) + \frac{1}{5}(\sigma^{\un{de}})_{\beta}^{\ \gamma}( \pa_{\un a}\psi_{\gamma}) \right] ~~~. 
\end{align} \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 2A Component/Superspace Results: Step 1} In the case of 10D $\mathcal{N} = 2A$ N-SG covariant derivatives we define \begin{align} \nabla_{\alpha} ~=~& {\rm D}_{\alpha} + \frac{1}{2}\Psi {\rm D}_{\alpha}+\frac{1}{10}(\sigma^{\un{ a}\un{b}})_{\alpha}^{\ \beta}({\rm D}_{\beta}\Psi){\cal M}_{\un{a}\un{b}} ~~~, \\ \nabla_{\dot{\alpha}} ~=~& {\rm D}_{\dot{\alpha}} + \frac{1}{2}\Psi {\rm D}_{\dot{\alpha}} +\frac{1}{10}(\sigma ^{\un{a}\un{b}})_{\dot{\alpha}}^{\ \dot{\beta}}({\rm D}_{\dot{\beta}}\Psi){\cal M}_{\un{a}\un{b}} ~~~, \\ \nabla_{\un{a}} ~=~& \pa_{\un{a}}+\Psi\pa_{\un{a}} - i\frac{1}{5}(\sigma_{\un{a}})^{\delta \gamma}({\rm D}_{\delta}\Psi){\rm D}_{\gamma} - i\frac{1}{5}(\sigma_{\un{a}})^{\dot{\delta} \dot{ \gamma}} ({\rm D}_{\dot{\delta}}\Psi){\rm D}_{\dot{\gamma}}-(\pa_{\un{c}} \Psi){\cal M}_{\un{a}}^{\ \un{c}} ~~~, \label{10d2a-1} \end{align} and ``split'' the spatial 10D $\mathcal{N} = 2A$ N-SG covariant derivative into three parts \begin{equation} \nabla_{\un{a}}| ~=~ {\bf{D}}_{\un{a}} + \psi_{\un{a}}{}^{\gamma} \nabla_{\gamma}| + \psi_{\un{a}}{}^{\dot{\gamma}} \nabla_{\dot{\gamma}}| ~~~. \label{10d2a-2} \end{equation} On taking the $\theta$ $\to$ 0 limit the latter terms allow an identification with the component gravitinos via \begin{align} \psi_{\un{a}}{}^{\gamma} ~=~& - i \frac{1}{5} (\sigma_{\un{a}})^{\gamma\delta} ({\rm D}_{\delta}\Psi) ~~~, ~~~ \psi_{\un{a}}{}^{\dot{\gamma}} ~=~ - i \frac{1}{5} (\sigma_{\un{a}})^{\dot{\gamma}\dot{\delta}} ({\rm D }_{\dot{\delta}}\Psi) ~~~. \end{align} However, as these expressions contain explicit $\sigma$-matrices we see that they really define the non-conformal $\emph{spin-\fracm{1}{2}}$ parts of the gravitinos to be \begin{align} \psi_{\beta} ~\equiv~& (\sigma^{\un{a}})_{\beta\gamma} \psi_{\un{a}}{}^{\gamma} ~~~, ~~~ \psi_{\dot{\beta}} ~\equiv~ (\sigma^{\un{a}})_{\dot{\beta}\dot{\gamma}} \psi_{\un{a}}{}^{\dot{\gamma}} ~~~. \end{align} It follows that only the ``$\sigma$-trace'' of each gravitino can occur. So then we have \begin{align} {\rm D}_{\beta}\Psi ~=~ & i \frac{1}{2} (\sigma^{\un{a}})_{\beta\gamma} \psi_{\un{a}}{}^{\gamma} ~\equiv~ i \frac{1}{2} \psi_{\beta} ~~~, ~~~ {\rm D}_{\dot{\beta}}\Psi ~=~ i \frac{1}{2} (\sigma^{\un{a}})_{\dot{\beta}\dot{\gamma}} \psi_{\un{a}} {}^{\dot{\gamma}} ~\equiv~ i \frac{1}{2} \psi_{\dot{\beta}} ~~~, \end{align} in the $\theta$ $\to$ 0 limit. In order to complete the specification of the geometrical superfields, we also require explicit definitions of the bosonic terms to second order in D-derivatives.
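A quick counting argument organizes the independent structures that can appear at this order. The antisymmetrized product of two same-chirality 16-component spinor indices contains $16 \cdot 15/2 = 120$ components and is spanned by $(\sigma^{[3]})_{\gamma\delta}$ alone, while the mixed-chirality product carries no symmetry and fills out \begin{equation} 16 \cdot 16 ~=~ 256 ~=~ 1 ~+~ 45 ~+~ 210 ~~~, \end{equation} components, spanned by $C_{\gamma\dot{\delta}}$, $(\sigma^{[2]})_{\gamma\dot{\delta}}$, and $(\sigma^{[4]})_{\gamma\dot{\delta}}$. This counting is reflected in the definitions that follow.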
So we define bosonic fields: \begin{align} G_{[3]} ~=~& (\sigma_{[3]})^{\gamma\delta} ({\rm D}_{\gamma} {\rm D}_{\delta} \Psi) ~~~, ~~~ H_{[3]} ~=~ (\sigma_{[3]})^{\dot{\gamma}\dot{\delta}} ({\rm D}_{\dot{\gamma}} {\rm D}_{\dot{\delta}} \Psi) ~~~, \end{align} \begin{align} N ~=~& C^{\gamma\dot{\delta}} ({\rm D}_{\gamma} {\rm D}_{\dot{\delta}} \Psi) ~~~, ~~~ N_{[2]} ~=~ (\sigma_{[2]})^{\gamma\dot{\delta}} ({\rm D}_{\gamma} {\rm D}_{\dot{\delta}} \Psi) ~~~, ~~~ N_{[4]} ~=~ (\sigma_{[4]})^{\gamma\dot{\delta}} ({\rm D}_{\gamma} {\rm D}_{\dot{\delta}} \Psi) ~~~, \end{align} or in other words, \begin{align} \frac{1}{2} {\rm D}_{[\gamma} {\rm D}_{\delta]} \Psi ~=~& \frac{1}{16\times 3!} (\sigma^{[3]})_{\gamma\delta} G_{[3]} ~~~, ~~~ \frac{1}{2} {\rm D}_{[\dot{\gamma}} {\rm D}_{\dot{\delta}]} \Psi ~=~ \frac{1}{16\times 3!} (\sigma^{ [3]})_{\dot{\gamma}\dot{\delta}} H_{[3]} ~~~, \end{align} and \begin{equation} {\rm D}_{\gamma} {\rm D}_{\dot{\delta}} \Psi ~=~ \frac{1}{16} \Big\{ C_{\gamma\dot{\delta}} N + \frac{1}{2!} (\sigma^{[2]})_{\gamma\dot{\delta}} N_{[2]} + \frac{1}{4!} (\sigma^{[4]})_{\gamma\dot{\delta}} N_{[4]} \Big\} ~~~. \end{equation} We emphasize that the component fields (the $G$'s, $H$'s and $N$'s) are defined by the $\theta$ $\to$ 0 limit of these equations. \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 2A Component/Superspace Results: Step 2} Torsions: \begin{align} T_{\alpha\beta}^{\ \ \un{c}} ~=~ & i(\sigma^{\un{c}})_{\alpha\beta} ~~~, &&\\ T_{\alpha\beta}^{\ \ \gamma} ~=~ & i \frac{1}{10}(\sigma^{\un a})_{\alpha\beta}(\sigma_{\un a})^{\gamma\delta}\psi_{\delta} ~~~, &&\\ T_{\alpha\beta}^{\ \ \dot\gamma} ~=~ & - i \frac{1}{10}(\sigma^{\un a})_{\alpha\beta}(\sigma_{\un a})^{\dot\gamma \dot\delta}\psi_{\dot\delta} ~~~, &&\\ T_{\dot\alpha\dot\beta}^{\ \ \un{c}} ~=~ & i(\sigma^{\un{c}})_{\dot\alpha\dot\beta} ~~~, &&\\ T_{\dot\alpha\dot\beta}^{\ \ \gamma} ~=~ & -i\frac{1}{10}(\sigma^{\un a})_{\dot\alpha\dot\beta}(\sigma_{\un a} )^{\gamma\delta}\psi_{\delta} ~~~, &&\\ T_{\dot\alpha\dot\beta}^{\ \ \dot\gamma} ~=~ & i\frac{1}{10}(\sigma^{\un a})_{\dot\alpha\dot\beta}(\sigma_{\un a})^{\dot\gamma\dot\delta}\psi_{\dot\delta} ~~~, &&\\ T_{\alpha\dot\beta}^{\ \ \un c} ~=~ & 0 ~~~, &&\\ T_{\alpha\dot\beta}^{\ \ \gamma} ~=~ & i\frac{1}{4}\left[ \delta_{\alpha}^{\ \gamma} \delta_{\dot\beta}^{\ \dot\delta} + \frac{1}{10} (\sigma^{\un a \un b})_{\alpha}^{\ \gamma}(\sigma_{\un a\un b})_{\dot\beta}^{\ \dot\delta}\right] \psi_{\dot\delta} ~~~, &&\\ T_{\alpha\dot\beta}^{\ \ \dot\gamma} ~=~ & i\frac{1}{4}\left[ \delta_{\dot\beta}^{\ \dot\gamma} \delta_{\alpha}^{\ \delta} + \frac{1}{ 10} (\sigma^{\un a\un b})_{\dot\beta}^{\ \dot\gamma} (\sigma_{\un a \un b})_{\alpha}^{\ \delta} \right] \psi_{\delta} ~~~, &&\\ T_{\alpha\un{b}}^{\ \ \un{c}} ~=~ & i\frac{1}{5} \Big[ 2 \delta_{\un{b}}^{\ \un{c}} \delta_{\alpha}^{\ \delta} + (\sigma_{\un{b}}^{\ \un{c}})_{\alpha}^{\ \delta} \Big] \psi_{\delta} ~~~, &&\\ T_{\alpha\un{b}}^{\ \ \gamma} ~=~ & i\frac{1}{80} \Big[ -\frac{1}{2}(\sigma^{[2]})_{\alpha}^{\ \gamma} G_{\un{b}[2]} + \frac{1}{3!} (\sigma_{\un{b}[3]})_{\alpha}^{\ \gamma} G^{ [3]} \Big] - \frac{2}{5} \Big[ \delta_{\un{b}}^{\ \un{c}} \delta_{\alpha}^{\ \gamma} - (\sigma_{\un{b}}^{\ \un{c}} )_{\alpha}^{\ \gamma} \Big] (\pa_{\un{c}}\Psi) ~~~, &&\\ T_{\alpha\un{b}}^{\ \ \dot\gamma} ~=~ & -i\frac{1}{80} \Big[(\sigma_{\un{b}})_{\alpha}^{\ \dot\gamma} N - (\sigma^{[1]})_{\alpha}^{\ \dot\gamma} N_{\un b[1]} + \frac{1}{2} (\sigma_{\un b[2]})_{\alpha}^{\ \dot\gamma} N^{[2]} - 
\frac{1}{3!}(\sigma^{[3]})_{\alpha}^{\ \dot\gamma} N_{\un b[3]} \nonumber&&\\ & \qquad {~~~~} + \frac{1}{4!} (\sigma_{\un b[4]})_{\alpha}^{\ \dot\gamma} N^{[4]} \Big] ~~~, &&\\ T_{\dot\alpha\un{b}}^{\ \ \un{c}} ~=~ & i \frac{1}{5} \Big[ 2 \delta_{\un{b}}^{\ \un{c}} \delta_{\dot\alpha}^{\ \dot\delta} + (\sigma_{\un{b}}^{\ \un{c}})_{\dot\alpha}^{\ \dot\delta} \Big] \psi_{\dot\delta} ~~~, &&\\ T_{\dot\alpha\un{b}}^{\ \ \gamma} ~=~ & -i\frac{1}{80} \Big[ (\sigma_{\un{b}})_{\dot\alpha}^{\ \gamma} N + (\sigma^{[1]} )_{\dot\alpha}^{\ \gamma} N_{\un b[1]} - \frac{1}{2} (\sigma_{\un b[2]})_{\dot\alpha}^{\ \gamma} N^{[2]} - \frac{1}{3!} (\sigma^{[3]})_{\dot\alpha}^{\ \gamma} N_{\un b[3]} \nonumber&&\\ & \qquad {~~~~} + \frac{1}{4!} (\sigma_{\un b[4]})_{\dot\alpha}^{\ \gamma} N^{[4]} \Big] ~~~, &&\\ T_{\dot\alpha\un{b}}^{\ \ \dot\gamma} ~=~ & i\frac{1}{80} \Big[ -\frac{1}{2} (\sigma^{[2]})_{\dot\alpha}^{\ \dot\gamma} H_{\un{b}[2]} + \frac{1}{3!} (\sigma_{\un{b}[3]})_{\dot\alpha}^{\ \dot\gamma} H^{[3]} \Big] - \frac{2}{5} \Big[ \delta_{\un{b}}^{\ \un{c}} \delta_{\dot\alpha}^{\ \dot\gamma} - (\sigma_{\un{b}}^{\ \un{c}} )_{\dot\alpha}^{\ \dot\gamma} \Big] (\pa_{\un{c}}\Psi) ~~~, &&\\ T_{\un{a}\un{b}}^{\ \ \un{c}} ~=~ & 0 ~~~, &&\\ T_{\un{a}\un{b}}^{\ \ \gamma} ~=~ & -\frac{1}{10}(\sigma_{[\un{a }})^{\gamma\delta}(\pa_{\un{b}]}\psi_{\delta}) ~~~, &&\\ T_{\un{a}\un{b}}^{\ \ \dot\gamma} ~=~ & -\frac{1}{10}(\sigma_{[\un{a }})^{\dot\gamma\dot\delta}(\pa_{\un{b}]}\psi_{\dot\delta}) ~~~. \end{align} Curvatures: \begin{align} R_{\alpha\beta}^{\ \ \ \un{d}\un{e}} ~=~ & -i\frac{6}{5}(\sigma^{[\un{d}})_{\alpha\beta} (\pa^{\un{e}]}\Psi) - \frac{1}{40} \left[\frac{1}{3!}(\sigma^{\un{d}\un{e}[3]})_{\alpha\beta} G_{[3]} + (\sigma_{[1]})_{\alpha\beta} G^{[1]\un{d}\un{e}} \right] ~~~, &&\\ R_{\dot\alpha\dot\beta}^{\ \ \ \un{d}\un{e}} ~=~ & -i\frac{6}{5}(\sigma^{[\un{d}})_{\dot\alpha\dot\beta} (\pa^{\un{e}]}\Psi) - \frac{1}{40} \left[\frac{1}{3!} (\sigma^{\un{d}\un{e}[3]})_{\dot\alpha\dot\beta} H_{[3]} + (\sigma_{[1]})_{\dot\alpha\dot\beta} H^{[1] \un{d}\un{e}} \right] ~~~, &&\\ R_{\alpha\dot\beta}^{\ \ \ \un{d}\un{e}} ~=~ & \frac{1}{40} \Big[ (\sigma^{\un{d}\un{e}})_{\alpha\dot{ \beta}} N - C_{\alpha\dot{\beta}} N^{\un{d}\un{e}} + \frac{1}{2} (\sigma^{\un{d}\un{e}[2]})_{\alpha\dot{ \beta}} N_{[2]} \nonumber\\ & \qquad - \frac{1}{2} (\sigma_{[2]})_{\alpha\dot{\beta}} N^{\un{d}\un{e}[2]} + \frac{1}{4!4!} \epsilon^{\un{d}\un{e}[4][\Bar{4}]} (\sigma_{[4]})_{\alpha\dot{\beta}} N_{[\Bar{4}]} \Big] ~~~, &&\\ R_{\alpha\un{b}}^{\ \ \ \un{d}\un{e}} ~=~ & i\frac{1}{2} \Big[ \delta_{\un{b}}^{\ [\un{d}}(\pa^{ \un{e}]}\psi_{\alpha}) + \frac{1}{5} (\sigma^{\un{d}\un{e}})_{\alpha }^{\ \gamma}(\pa_{\un{b}}\psi_{\gamma}) \Big] ~~~, &&\\ R_{\dot\alpha\un{b}}^{\ \ \ \un{d}\un{e}} ~=~ & i\frac{1}{2} \Big[ \delta_{\un{b}}^{\ [\un{d}}( \pa^{\un{e}]}\psi_{\dot\alpha}) + \frac{1}{5}(\sigma^{\un{d}\un{e}})_{\dot\alpha }^{\ \dot\gamma}(\pa_{\un{b}}\psi_{\dot\gamma}) \Big] ~~~, &&\\ R_{\un{a}\un{b}}^{\ \ \ \un{d}\un{e}} ~=~ & -(\pa_{[\un{a}}\pa^{[\un{d}} \Psi)\delta_{\un{b}]}^{\ \un{e}]} ~~~. 
\end{align} \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 2A Component/Superspace Results: Step 3} Parameter Composition Rules: \begin{align} \xi^{\un{m}} ~=~& - i \big[~ \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} (\sigma^{\un{c}})_{\alpha\beta} + \epsilon_{1}{}^{ \dot{\alpha}} \epsilon_{2}{}^{\dot{\beta}} (\sigma^{\un{c}})_{\dot{\alpha} \dot{\beta}} ~\big]~ \delta_{\un{c}}{}^{\un {m}} (1 + \Psi) ~~~, \\ \begin{split} \lambda^{\un{d}\un{e}} ~=~& - \frac{1}{40} ( \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\dot{\beta}} + \epsilon_{1}{}^{\dot{\beta}} \epsilon_{2}{}^{\alpha} ) \Big[ (\sigma^{\un{d}\un{e}})_{\alpha\dot{\beta}} N - C_{\alpha\dot{\beta}} N^{\un{d}\un{e}} + \frac{1}{2} (\sigma^{\un{d}\un{e}[2]})_{\alpha\dot{\beta}} N_{[2]} \\ & \qquad {~~~~~~~~~~~~~~~~~~~~~~~~} - \frac{1}{2} (\sigma_{[2]})_{\alpha\dot{\beta}} N^{\un{ d}\un{e}[2]} + \frac{1}{4!4!} \epsilon^{\un{d}\un{e}[4][\Bar{4}]} (\sigma_{[4]})_{\alpha\dot{\beta}} N_{[\Bar{4}]} \Big] \\ & + \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} \left[ i \frac{17}{10} (\sigma^{[\un{d}})_{\alpha\beta} (\pa^{\un{ e}]}\Psi) + \frac{1}{40} \Big[ \frac{1}{3!}(\sigma^{\un{d}\un{e}[3]})_{\alpha\beta} G_{[3]} + (\sigma_{[1]})_{\alpha \beta} G^{[1]\un{d}\un{e}} \Big] \right] \\ & + \epsilon_{1}{}^{\dot{\alpha}} \epsilon_{2}{}^{\dot{\beta}} \left[ i \frac{17}{10} (\sigma^{[\un{d}})_{\dot{ \alpha}\dot{\beta}} (\pa^{\un{e}]}\Psi) + \frac{1}{40} \Big[ \frac{1}{3!}(\sigma^{\un{d}\un{e}[3]} )_{\dot{\alpha}\dot{\beta}} H_{[3]} + (\sigma_{[1]})_{\dot{\alpha}\dot{\beta}} H^{[1]\un{d}\un{e}} \Big] \right] ~~~, \end{split} \\ \begin{split} \epsilon^{\delta} ~=~& - i\frac{1}{4}( \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\dot{\beta}} + \epsilon_{1}{}^{\dot{\beta}} \epsilon_{2}{}^{ \alpha} ) \left[ \delta_{\alpha}^{\ \delta} \delta_{\dot{\beta}}^{\ \dot{\epsilon}} + \frac{1}{10} (\sigma^{[2]})_{\alpha}^{\ \delta}( \sigma_{[2]})_{\dot\beta}^{\ \dot\epsilon} \right] \psi_{\dot\epsilon} \\ & \qquad - i \frac{1}{5} \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} (\sigma^{\un c})_{\alpha\beta} (\sigma_{\un c})^{ \delta\epsilon}\psi_{\epsilon} ~~~. 
\end{split} \end{align} \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 2A Component/Superspace Results: Step 4} SUSY transformation laws: \begin{align} \delta_{Q} e_{\un{a}}{}^{\un{m}} ~=~& -i \frac{1}{2} \epsilon^{\beta} \left[ \delta_{\un a}{}^{\un d} \delta_{\beta}{}^{\gamma} + \frac{1}{5} (\sigma_{\un a}{}^{ \un d})_{\beta}{}^{\gamma} \right] \delta_{\un d}{}^{\un m} \psi_{\gamma} -i \frac{1}{2} \epsilon^{\dot{\beta}} \left[ \delta_{\un a}{}^{\un d} \delta_{\dot{\beta}}{}^{ \dot{\gamma}} + \frac{1}{5} (\sigma_{\un a}{}^{ \un d})_{\dot{\beta}}{}^{\dot{\gamma}} \right] \delta_{\un d}{}^{\un m} \psi_{\dot{\gamma}} ~~~, \\ \begin{split} \delta_{Q}\psi_{\un {a}}{}^{\delta} ~=~& (1 + \Psi) \pa_{\un{a}} \epsilon^{\delta} - \epsilon^{\delta} (\pa_{\un{c}} \Psi) \mathcal{M}_{\un{a}}{}^{\un{c}} \\ & - i\frac{1}{80} \epsilon^{\beta} \Big[ - \frac{1}{2} (\sigma^{[2]})_{\beta}{}^{\delta} G_{\un{a}[2]} + \frac{1}{ 3!} (\sigma_{\un{a}[3]})_{\beta}{}^{\delta} G^{[3]} \Big] + \frac{2}{5} \epsilon^{\beta} \Big[ \delta_{\un{a}}{}^{\un{ c}} \delta_{\beta}{}^{\delta} - (\sigma_{\un{a}}{}^{\un{c}})_{\beta}{}^{\delta} \Big] (\pa_{\un{c}}\Psi) \\ & + i\frac{1}{80} \epsilon^{\dot {\beta}} \Big[ (\sigma_{\un{a}})_{\dot\beta}{}^{\delta} N + (\sigma^{[1]})_{\dot\beta}{}^{ \delta} N_{\un{a}[1]} - \frac{1}{2} (\sigma_{\un{a}[2]})_{\dot\beta}{}^{\delta} N^{[2]} - \frac{1}{3!} (\sigma^{[3]})_{ \dot\beta}{}^{\delta} N_{\un{a}[3]} \\ & \qquad {~~~~~~} + \frac{1}{4!} (\sigma_{\un{a}[4]})_{\dot\beta}{}^{\delta} N^{[4]} \Big] ~~~, \end{split} \\ \delta_{Q} \phi_{\un {a}}{}^{\un{d}\un{e}} ~=~& - i\frac{1}{2} \epsilon^{\beta} \Big[ \delta_{\un{a}}^{\ [\un{d}} (\pa^{\un{e}]}\psi_{\beta}) + \frac{1}{5}(\sigma^{\un{d}\un{e}})_{\beta}^{\ \gamma}(\pa_{\un{a}}\psi_{\gamma}) \Big] - i\frac{1}{2} \epsilon^{\dot{\beta}} \Big[ \delta_{\un{a}}^{\ [\un{d}}(\pa^{\un{e}]}\psi_{\dot\beta}) + \frac{1}{5}(\sigma^{\un{d}\un{e}})_{\dot\beta}^{\ \dot\gamma}(\pa_{\un{a}}\psi_{\dot\gamma}) \Big] ~~~. \end{align} \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 2B Component/Superspace Results: Step 1} In the case of 10D $\mathcal{N} = 2B$ N-SG covariant derivatives we define \begin{align} \nabla_{\alpha} ~=~& {\rm D}_{\alpha} + \frac{1}{2} \Psi {\rm D}_{\alpha} + \frac{1}{10} (\sigma^{\un{a} \un{b}})_{\alpha}{}^{\beta} ({\rm D}_{\beta}\Psi) {\cal M}_{\un{a}\un{b}} ~~~, \\ \Bar{\nabla}_{\alpha} ~=~& \Bar{\rm D}_{\alpha} + \frac{1}{2}\Bar{\Psi} \Bar{\rm D}_{\alpha} + \frac{1}{10} (\sigma^{\un{a}\un{b}})_{\alpha}{}^{\beta} (\Bar{\rm D}_{\beta}\Bar{\Psi}) {\cal M}_{\un{a}\un{b}} ~~~, \\ \begin{split} \nabla_{\un{a}} ~=~& \pa_{\un{a}} + \frac{1}{2}\Psi\pa_{\un{a}} + \frac{1}{2}\Bar{\Psi} \pa_{ \un{a}} - i\frac{1}{32} (\sigma_{\un{a}})^{\alpha \beta} ({\rm D}_{\alpha}\Bar{\Psi}) \Bar{\rm D}_{\beta} - i\frac{1}{32} (\sigma_{\un{a}})^{\alpha \beta} (\Bar{\rm D}_{\alpha} \Psi) {\rm D}_{\beta} \\ & - i\frac{27}{160} (\sigma_{\un{a}})^{\alpha \beta} ({\rm D}_{\alpha}\Psi) \Bar{\rm D}_{\beta} - i\frac{27}{ 160} (\sigma_{\un{a}})^{\alpha \beta} (\Bar{\rm D}_{\alpha}\Bar{\Psi}) {\rm D}_{\beta} \\ & -\frac{1}{2}(\pa_{\un{c}}\Psi){\cal M}_{\un{a}}{}^{\un{c}} - \frac{1}{2} (\pa_{\un{c}}\Bar{ \Psi}) {\cal M}_{\un{a}}{}^{\un{c}} ~~~, \end{split} \label{10d2b-1} \end{align} and ``split'' the spatial 10D $\mathcal{N} = 2B$ N-SG covariant derivative into three parts \begin{equation} \nabla_{\un{a}}| ~=~ {\bf{D}}_{\un{a}} + \psi_{\un{a}}{}^{\gamma} \nabla_{\gamma}| + \Bar{\psi}_{\un{a}}{}^{\gamma}\Bar{\nabla}_{\gamma}| ~~~. 
\label{10d2b-2} \end{equation} On taking the $\theta$ $\to$ 0 limit the latter terms allow an identification with the gravitinos and the leading term in this limit yields a component-level linearized gravitationally covariant derivative operator given by \begin{equation} \begin{split} {\bf{D}}_{\un{a}} ~=~ e_{\un{a}} + \phi_{\un a}{}^{\iota} \mathcal{M}_{\iota} ~=~ \pa_{\un{a}} + \frac{1}{2}(\Psi+\Bar{\Psi}) \pa_{\un{a}} + \phi_{\un a}{}^{\iota} \mathcal{M}_{\iota} ~~~. \end{split} \label{10d2b-3} \end{equation} By comparison of the LHS to the RHS of (\ref{10d2b-3}), we see that a linearized frame field $e_{\un{a}} {}^{\un m}$ = $( \, 1 \,+\, \frac{1}{2}(\Psi+\Bar{\Psi}) \,)\delta{}_{\un{a}} {}^{ \un m}$ emerges to describe a scalar graviton. Finally, comparison of the coefficient of the Lorentz generator $ \mathcal{M}_{\iota} $ as it appears in the latter two forms of (\ref{10d2b-3}) informs us that the spin connection is given by \begin{equation} \phi_{\un{c}}{}^{\un{d}\un{e}} ~=~ - \frac{1}{4} \delta_{\un{c}}{}^{[\un{d}} \big( \pa^{\un{e}]} ( \Psi + \Bar{\Psi} ) \big) ~~~. \end{equation} Comparing the result in (\ref{10d2b-1}) with the one in (\ref{10d2b-2}) the component gravitinos are identified via \begin{align} \psi_{\un{a}}{}^{\gamma} ~=~& - i \frac{1}{160} (\sigma_{\un{a}})^{\gamma\delta} \big( \Bar{\rm D}_{\delta} ( 5 \Psi + 27 \Bar{\Psi} ) \big) ~~~, \\ \Bar{\psi}_{\un{a}}{}^{\gamma} ~=~& - i \frac{1}{160} (\sigma_{\un{a}})^{\gamma\delta} \big( {\rm D}_{ \delta} ( 5 \Bar{\Psi} + 27 \Psi ) \big) ~~~, \end{align} which are equivalent to \begin{align} \Bar{\rm D}_{\alpha} ( 5 \Psi + 27 \Bar{\Psi} ) ~=~& i 16 (\sigma^{\un a})_{\alpha\gamma} \psi_{\un a} {}^{\gamma} ~~~,~~~ {\rm D}_{\alpha} ( 5 \Bar{\Psi} + 27 \Psi ) ~=~ i 16 (\sigma^{\un a})_{\alpha\gamma} \Bar{\psi}_{\un a}^{\ \gamma} ~~~. \label{10d2b-4b} \end{align} However, as these expressions contain explicit $\sigma$-matrices we see that they really define the non-conformal $\emph{spin-\fracm{1}{2}}$ parts of the gravitinos to be \begin{align} \psi_{\beta} ~\equiv~& (\sigma^{\un{a}})_{\beta\gamma} \psi_{\un{a}}{}^{\gamma} ~~~, ~~~ \Bar{\psi}_{\beta} ~\equiv~ - (\sigma^{\un{a}})_{\beta\gamma} \Bar{\psi}_{\un{a}}{}^{\gamma} ~~~. \end{align} Since the results in (\ref{10d2b-4b}) are under-constrained, we are allowed to introduce a fermionic auxiliary field $\lambda_{\alpha}$ and its complex conjugate $\Bar{\lambda}_{\alpha}$. So then we have \begin{align} \Bar{\rm D}_{\alpha}\Psi ~=~& i \frac{1}{2} (\sigma^{\un a})_{\alpha\gamma} \psi_{\un a }{}^{\gamma} - 27 \Bar{\lambda}_{\alpha} ~\equiv~ i \frac{1}{2} \psi_{\alpha} - 27 \Bar{\lambda}_{\alpha} ~~~, \\ \Bar{\rm D}_{\alpha} \Bar{\Psi} ~=~& i \frac{1}{2} (\sigma^{\un a})_{\alpha\gamma} \psi_{\un a }{}^{\gamma} + 5 \Bar{\lambda}_{\alpha} ~\equiv~ i \frac{1}{2} \psi_{\alpha} + 5 \Bar{\lambda}_{\alpha} ~~~, \\ {\rm D}_{\alpha} \Bar{\Psi} ~=~& i \frac{1}{2} (\sigma^{\un a})_{\alpha\gamma} \Bar{\psi}_{\un a }{}^{\gamma} - 27 \lambda_{\alpha} ~\equiv~ - i \frac{1}{2} \Bar{\psi}_{\alpha} - 27 \lambda_{\alpha} ~~~, \\ {\rm D}_{\alpha}\Psi ~=~& i \frac{1}{2} (\sigma^{\un a})_{\alpha\gamma} \Bar{\psi}_{\un a }{}^{\gamma} + 5 \lambda_{\alpha} ~\equiv~ - i \frac{1}{2} \Bar{\psi}_{\alpha} + 5 \lambda_{\alpha} ~~~,\label{cnGt} \end{align} in the $\theta$ $\to$ 0 limit. Also observe that \begin{align} \Bar{\rm D}_{\alpha} (\Bar{\Psi} - \Psi) ~=~& 32 \Bar{\lambda}_{\alpha} ~~~, ~~~ {\rm D}_{\alpha} (\Psi - \Bar{\Psi}) ~=~ 32 \lambda_{\alpha} ~~~.
\end{align} In order to complete the specification of the geometrical superfields, we also require explicit definitions of the bosonic terms to second order in D-derivatives. So we define bosonic fields: \begin{align} U_{[3]} ~=~& (\sigma_{[3]})^{\gamma\delta} ({\rm D}_{\gamma} {\rm D}_{\delta} \Psi) ~~~, & \Bar{U}_{[3]} ~=~& - (\sigma_{[3]})^{\gamma\delta} (\Bar{\rm D}_{\gamma} \Bar{\rm D}_{\delta} \Bar{\Psi}) ~~~, \\ X_{[3]} ~=~& (\sigma_{[3]})^{\gamma\delta} (\Bar{\rm D}_{\gamma} \Bar{\rm D}_{\delta} \Psi) ~~~, & \Bar{ X}_{[3]} ~=~& - (\sigma_{[3]})^{\gamma\delta} ({\rm D}_{\gamma} {\rm D}_{\delta} \Bar{\Psi}) ~~~, \\ Y_{[3]} ~=~& (\sigma_{[3]})^{\gamma\delta} ({\rm D}_{\gamma} \Bar{\rm D}_{\delta} \Psi) & \Bar{Y}_{[3]} ~=~& - (\sigma_{[3]})^{\gamma\delta} (\Bar{\rm D}_{\gamma} {\rm D}_{\delta} \Bar{\Psi}) \nonumber\\ ~=~& (\sigma_{[3]})^{\gamma\delta} (\Bar{\rm D}_{\gamma} {\rm D}_{\delta} \Psi) ~~~, & ~=~& - (\sigma_{[3] })^{\gamma\delta} ({\rm D}_{\gamma} \Bar{\rm D}_{\delta} \Bar{\Psi}) ~~~. \end{align} We emphasize that the component fields (the $U$'s, $X$'s and $Y$'s) are defined by the $\theta$ $\to$ 0 limit of these equations. \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 2B Component/Superspace Results: Step 2} Torsions: \begin{align} T_{\alpha\beta}^{\ \ \un{c}} ~=~ & 0 ~~~, &&\\ T_{\alpha\beta}^{\ \ \gamma} ~=~ & - i\frac{1}{5}(\sigma^{\un c})_{\alpha\beta}(\sigma_{\un c})^{\gamma\delta}\Bar{ \psi}_{\delta} + 2(\sigma^{\un c})_{\alpha\beta}(\sigma_{\un c})^{\gamma\delta} \lambda_{\delta} ~~~, &&\\ T_{\alpha\beta}^{\ \ \bar\gamma} ~=~ & 0 ~~~, &&\\ T_{\bar\alpha\bar\beta}^{\ \ \un{c}} ~=~ & 0 ~~~, &&\\ T_{\bar\alpha\bar\beta}^{\ \ \gamma} ~=~ & 0 ~~~, &&\\ T_{\bar\alpha\bar\beta}^{\ \ \bar\gamma} ~=~ & i\frac{1}{5}(\sigma^{\un c})_{\alpha\beta}(\sigma_{\un c})^{ \gamma\delta} \psi_{\delta} + 2(\sigma^{\un c})_{\alpha\beta}(\sigma_{\un c})^{\gamma\delta} \Bar{\lambda}_{\delta} ~~~, &&\\ T_{\alpha\bar\beta}^{\ \ \un{c}} ~=~ & i(\sigma^{\un c})_{\alpha\beta} ~~~, &&\\ T_{\alpha\bar\beta}^{\ \ \gamma} ~=~ & -i \frac{1}{240} (\sigma^{[3]})_{\alpha\beta}(\sigma_{[3]})^{\gamma\delta} \psi_{\delta} + \frac{1}{8} \Big[ (\sigma^{[3]})_{\alpha\beta}(\sigma_{[3]})^{\gamma\delta} - \frac{1}{30}( \sigma^{[5]})_{\alpha\beta}(\sigma_{[5]})^{\gamma\delta} \Big] \Bar{\lambda}_{\delta} ~~~, \\ T_{\alpha\bar\beta}^{\ \ \bar\gamma} ~=~ & i \frac{1}{240} (\sigma^{[3]})_{\alpha\beta}(\sigma_{[3]})^{\gamma \delta} \Bar{\psi}_{\delta} + \frac{1}{8} \Big[ (\sigma^{[3]})_{\alpha\beta}(\sigma_{[3]})^{\gamma\delta} - \frac{ 1}{30}(\sigma^{[5]})_{\alpha\beta}(\sigma_{[5]})^{\gamma\delta} \Big] \lambda_{\delta} ~~~, \\ T_{\alpha\un b}^{\ \ \un c} ~=~ & - i \frac{1}{5} \Big[ 2 \delta_{\un b}^{\ \un c}\delta_{\alpha}^{\ \gamma} + (\sigma_{\un b}^{\ \un c})_{\alpha}^{\ \gamma} \Big] \Bar{\psi}_{\gamma} + \Big[ - 11 \delta_{\un b}^{\ \un c}\delta_{\alpha}^{\ \gamma} + (\sigma_{\un b}^{\ \un c})_{\alpha}^{\ \gamma} \Big] \lambda_{\gamma} ~~~, &&\\ T_{\alpha\un b}^{\ \ \gamma} ~=~ & \frac{1}{64} \Big[ -31 \delta_{\un b}^{\ \un c} \delta_{\alpha}^{\ \gamma} + 15 (\sigma_{\un b}^{\ \un c})_{\alpha}^{\ \gamma} \Big] (\pa_{\un c}\Psi) + \frac{1}{320} \Big[ 27 \delta_{\un b}^{\ \un c} \delta_{\alpha}^{\ \gamma} + 53 (\sigma_{\un b}^{\ \un c})_{\alpha}^{\ \gamma} \Big] ( \pa_{\un c}\Bar\Psi) &&\nonumber\\ & -i\frac{1}{2560}\Big[\frac{1}{2}(\sigma^{[2]})_{\alpha}^{\ \gamma} \Big( 5 Y_{\un b[2]} - 27 \Bar{Y}_{\un b[2]} \Big) - \frac{1}{3!} (\sigma_{\un b [3]})_{\alpha}^{\ \gamma} \Big( 5 Y^{[3]} - 27
\Bar{Y}^{[3]} \Big) \Big] ~~~, &&\\ T_{\alpha\un b}^{\ \ \bar\gamma} ~=~ & -i\frac{1}{2560}\Big[\frac{1}{2}(\sigma^{[2]})_{\alpha}^{\ \gamma} \Big( - 5 \Bar{X}_{\un b[2]} + 27 U_{\un b[2]} \Big) - \frac{1}{3!} (\sigma_{\un b [3]} )_{\alpha}^{\ \gamma} \Big( - 5 \Bar{X}^{[3]} + 27 U^{[3]} \Big) \Big] ~~~, &&\\ T_{\bar\alpha\un b}^{\ \ \un c} ~=~ & i \frac{1}{5} \Big[ 2 \delta_{\un b}^{\ \un c}\delta_{\alpha}^{\ \gamma} + (\sigma_{\un b}^{\ \un c})_{\alpha}^{\ \gamma} \Big] \psi_{\gamma} + \Big[ - 11 \delta_{\un b}^{\ \un c} \delta_{\alpha}^{\ \gamma} + (\sigma_{\un b}^{\ \un c})_{\alpha}^{\ \gamma} \Big] \Bar{\lambda}_{\gamma} ~~~, &&\\ T_{\bar\alpha\un b}^{\ \ \gamma} ~=~ & -i\frac{1}{2560}\Big[\frac{1}{2}(\sigma^{[2]})_{\alpha}^{\ \gamma} \Big( 5 X_{\un b[2]} - 27 \Bar{U}_{\un b[2]} \Big) - \frac{1}{3!} (\sigma_{\un b [3]})_{ \alpha}^{\ \gamma} \Big( 5 X^{[3]} - 27 \Bar{U}^{[3]} \Big) \Big] ~~~, &&\\ T_{\bar\alpha\un b}^{\ \ \bar\gamma} ~=~ \nonumber& \frac{1}{64} \Big[ -31 \delta_{\un b}^{\ \un c}\delta_{\alpha}^{\ \gamma} + 15 (\sigma_{\un b}^{\ \un c})_{\alpha}^{\ \gamma} \Big] (\pa_{\un c}\Bar\Psi) + \frac{1}{320} \Big[ 27 \delta_{\un b}^{\ \un c} \delta_{\alpha}^{\ \gamma} + 53 (\sigma_{\un b}^{\ \un c})_{\alpha}^{\ \gamma} \Big] (\pa_{\un c}\Psi) &&\\ & -i\frac{1}{2560}\Big[\frac{1}{2}(\sigma^{[2]})_{\alpha}^{\ \gamma} \Big( - 5 \Bar{Y}_{\un b[2]} + 27 Y_{\un b[2]} \Big) - \frac{1}{3!} (\sigma_{\un b [3]})_{\alpha}^{\ \gamma} \Big( - 5 \Bar{Y }^{[3]} + 27 Y^{[3]} \Big) \Big] ~~~, &&\\ T_{\un{a}\un{b}}^{\ \ \un c} ~=~ & 0 ~~~, &&\\ T_{\un{a}\un{b}}^{\ \ \gamma} ~=~ & -\frac{1}{10}(\sigma_{[\un{a }})^{\gamma\delta}(\pa_{\un{b}]}\psi_{\delta}) ~~~, &&\\ T_{\un{a}\un{b}}^{\ \ \bar\gamma} ~=~ & \frac{1}{10}(\sigma_{[\un{a }})^{\gamma\delta}(\pa_{\un{b}]}\Bar{\psi}_{\delta}) ~~~. 
\end{align} Curvatures: \begin{align} R_{\alpha\beta}^{\ \ \ \un{d}\un{e}} ~=~ & \frac{1}{40}\left[\frac{1}{3!}(\sigma^{\un{d}\un{e}[3]} )_{\alpha\beta} U_{[3]} - (\sigma_{[1]})_{\alpha\beta} U^{[1] \un{d}\un{e}} \right] ~~~, &&\\ R_{\bar\alpha\bar\beta}^{\ \ \ \un{d}\un{e}} ~=~ & - \frac{1}{40}\left[\frac{1}{3!}(\sigma^{\un{d} \un{e}[3]})_{\alpha\beta} \Bar{U}_{[3]} - (\sigma_{[1]})_{\alpha\beta} \Bar{U}^{[1] \un{d}\un{e}} \right] ~~~, &&\\ R_{\alpha\bar\beta}^{\ \ \ \un{d}\un{e}} ~=~ \nonumber& -i\frac{3}{5}(\sigma^{[\un d})_{\alpha\beta} (\pa^{\un e]}(\Psi+\Bar\Psi)) - i\frac{1}{10}(\sigma^{\un{d}\un{e}\un{f}})_{\alpha\beta}(\pa_{ \un f}(\Psi+\Bar\Psi)) &&\\ & -\frac{1}{80} \Big[ (\sigma_{[1]})_{\alpha\beta} \Big( Y^{[1]\un{d}\un{e}} - \Bar{Y}^{[1]\un{d} \un{e}} \Big) - \frac{1}{2} (\sigma^{[2][\un{d}})_{\alpha\beta} \Big( Y^{\un{e}]}_{\ \ [2]} - \Bar{Y }^{\un{e}]}_{\ \ [2]} \Big) ~~~ &&\\ & {~~~~~~~~\,} - \frac{1}{3!} (\sigma^{\un{d}\un{e}[3]})_{\alpha\beta} \Big( Y_{[3]} - \Bar{Y}_{[3]} \Big) \Big] ~~~, &&\\ R_{\alpha\un{b}}^{\ \ \ \un{d}\un{e}} ~=~ & - i \frac{1}{2} \Big[ \delta_{\un{b}}^{\ [\un{d}} (\pa^{ \un{e}]} \Bar{\psi}_{\alpha} ) + \frac{1}{5} (\sigma^{\un{d}\un{e}})_{\alpha}^{\ \gamma} (\pa_{\un{b}} \Bar {\psi}_{\gamma}) \Big] - 11 \delta_{\un{b}}^{\ [\un{d}} (\pa^{\un{e}]} \lambda_{\alpha} ) + (\sigma^{\un{ d}\un{e}})_{\alpha}^{\ \gamma} (\pa_{\un{b}} \lambda_{\gamma}) ~~~, &&\\ R_{\bar\alpha\un{b}}^{\ \ \ \un{d}\un{e}} ~=~ & i \frac{1}{2} \Big[ \delta_{\un{b}}^{\ [\un{d}} ( \pa^{\un{e}]} \psi_{\alpha} ) + \frac{1}{5} (\sigma^{\un{d}\un{e}})_{\alpha}^{\ \gamma} (\pa_{\un{b}} \psi_{ \gamma}) \Big] - 11 \delta_{\un{b}}^{\ [\un{d}} (\pa^{\un{e}]} \Bar{\lambda}_{\alpha} ) + (\sigma^{\un{ d}\un{e}})_{\alpha}^{\ \gamma} (\pa_{\un{b}} \Bar{\lambda}_{\gamma}) ~~~, &&\\ R_{\un{a}\un{b}}^{\ \ \ \un{d}\un{e}} ~=~ & - \frac{1}{2} \big( \pa_{[\un{a}}\pa^{[\un{ d}} (\Psi + \Bar\Psi) \big) \delta_{\un{b}]}^{\ \un{e}]} ~~~. 
\end{align} \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 2B Component/Superspace Results: Step 3} Parameter Composition Rules: \begin{align} \xi^{\un{m}} ~=~& - i ( \epsilon_{1}{}^{\alpha} \Bar{\epsilon}_{2}{}^{\beta} + \Bar{\epsilon}_{1}{}^{\beta} \epsilon_{2}{}^{\alpha} ) (\sigma^{\un{c}})_{\alpha\beta} \delta_{\un c}{}^{\un m} \Big( 1 + \frac{1}{2} (\Psi + \Bar{\Psi} ) \Big) ~~~, \\ \begin{split} \lambda^{\un{d}\un{e}} ~=~& - ( \epsilon_{1}{}^{\alpha} \Bar{\epsilon}_{2}{}^{\beta} + \Bar{\epsilon}_{1}{}^{\beta} \epsilon_{2}{}^{\alpha} ) \bigg[ - i \frac{17}{20}(\sigma^{[\un d})_{\alpha\beta}(\pa^{\un e]}(\Psi+\Bar\Psi)) - i\frac{1}{10} (\sigma^{\un{d}\un{e}\un{f}})_{\alpha\beta}(\pa_{\un f}(\Psi+\Bar\Psi)) \\ & \qquad {~~~~~~~~~~~~~~~~~~~~~} -\frac{1}{80} \Big[ (\sigma_{[1]})_{\alpha\beta} \Big( Y^{[1]\un{d}\un{e}} - \Bar{Y}^{[1] \un{d}\un{e}} \Big) - \frac{1}{2} (\sigma^{[2][\un{d}})_{\alpha \beta} \Big( Y^{\un{e}]}_{\ \ [2]} - \Bar{Y}^{\un{e}]}_{\ \ [2]} \Big) \\ & \qquad {~~~~~~~~~~~~~~~~~~~~~} - \frac{1}{3!} (\sigma^{\un{d}\un{e}[3]})_{\alpha\beta} \Big( Y_{[3]} - \Bar{Y}_{[3]} \Big) \Big] ~ \bigg] \\ & - \frac{1}{40} \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} \left[\frac{1}{3!}(\sigma^{\un{d}\un{e}[3]})_{\alpha\beta} U_{[3]} - (\sigma_{[1]})_{\alpha\beta} U^{[1] \un{d}\un{e}} \right] \\ &+ \frac{1}{40} \Bar{\epsilon}_{1}{}^{\alpha} \Bar{\epsilon}_{2}{}^{\beta} \left[\frac{1}{3!}(\sigma^{\un{d}\un{e}[3]})_{\alpha\beta} \Bar{U}_{[3]} - (\sigma_{[1]})_{\alpha\beta} \Bar{U}^{[1] \un{d}\un{e}} \right] ~~~, \end{split} \\ \begin{split} \epsilon^{\delta} ~=~& - ( \epsilon_{1}{}^{\alpha} \Bar{\epsilon}_{2}{}^{\beta} + \Bar{\epsilon}_{1}{}^{\beta} \epsilon_{2}{}^{\alpha} ) \bigg[ i \frac{1}{10} \Big[ (\sigma^{[1]})_{\alpha\beta}(\sigma_{[1]})^{\delta\epsilon} - \frac{1}{24} (\sigma^{[3]})_{ \alpha\beta}(\sigma_{[3]})^{\delta\epsilon} \Big] \psi_{\epsilon} \\ & \qquad + \frac{1}{8} \Big[ (\sigma^{[3]})_{\alpha\beta}(\sigma_{[3]})^{\delta\epsilon} - \frac{1}{30}(\sigma^{ [5]})_{\alpha\beta}(\sigma_{[5]})^{\delta\epsilon} \Big] \Bar{\lambda}_{\epsilon} \bigg] \\ & - \epsilon_{1}{}^{\alpha} \epsilon_{2}{}^{\beta} \left[ - i\frac{1}{5}(\sigma^{[1]})_{\alpha\beta}(\sigma_{[1]})^{\delta\epsilon} \Bar{\psi}_{\epsilon} + 2(\sigma^{[1]})_{\alpha\beta}(\sigma_{[1]})^{\delta\epsilon} \lambda_{\epsilon} \right] ~~~. 
\end{split} \end{align} \@startsection{subsection}{2}{\z@{Adaptation To 10D, $\cal N$ = 2B Component/Superspace Results: Step 4} \begin{align} \begin{split} \delta_{Q} e_{\un{a}}{}^{\un{m}} ~=~& - \epsilon^{\beta} \left[ - i \frac{1}{2} \Big[ \delta_{\un a}{}^{\un d} \delta_{\beta}{}^{\gamma} + \frac{1}{5} (\sigma_{\un a}{}^{ \un d})_{\beta}{}^{\gamma} \Big] \Bar{\psi}_{\gamma} + \Big[ -11 \delta_{\un a}{}^{\un d} \delta_{\beta}{}^{\gamma} + (\sigma_{\un a}{}^{ \un d})_{\beta}{}^{\gamma} \Big] \lambda_{\gamma} \right] \delta_{\un d}{}^{\un m} \\ & - \Bar{\epsilon}^{\beta} \left[ i \frac{1}{2} \Big[ \delta_{\un a}{}^{\un d} \delta_{\beta}{}^{\gamma} + \frac{1}{5} (\sigma_{\un a}{}^{ \un d})_{\beta}{}^{\gamma} \Big] \psi_{\gamma} + \Big[ -11 \delta_{\un a}{}^{\un d} \delta_{\beta}{}^{\gamma} + (\sigma_{\un a}{}^{ \un d})_{\beta}{}^{\gamma} \Big] \Bar{\lambda}_{\gamma} \right] \delta_{\un d}{}^{\un m} ~~~, \end{split} \\ \begin{split} \delta_{Q}\psi_{\un {a}}{}^{\delta} ~=~& \Big( 1 + \frac{1}{2} (\Psi + \Bar{\Psi} ) \Big) \pa_{ \un{a}} \epsilon^{\delta} - \frac{1}{2} \epsilon^{\delta} (\pa_{\un{c}} (\Psi+\Bar{\Psi})) \mathcal{M}_{\un{a}}{}^{ \un{c}} \\ & - \frac{1}{64} \epsilon^{\beta} \Big[ -31 \delta_{\un a}^{\ \un c} \delta_{\beta}^{\ \delta} + 15 (\sigma_{\un a }^{\ \un c})_{\beta}^{\ \delta} \Big] (\pa_{\un c}\Psi) - \frac{1}{320} \epsilon^{\beta} \Big[ 27 \delta_{\un a}^{ \ \un c} \delta_{\beta}^{\ \delta} + 53 (\sigma_{\un a}^{\ \un c})_{\beta}^{\ \delta} \Big] (\pa_{\un c}\Bar\Psi) \\ & + i \frac{1}{2560} \epsilon^{\beta} \Big[ \frac{1}{2}(\sigma^{[2]})_{\beta}^{\ \delta} \Big( 5 Y_{\un a[2]} - 27 \Bar{Y}_{\un a[2]} \Big) - \frac{1}{3!} (\sigma_{\un a [3]})_{\beta}^{\ \delta} \Big( 5 Y^{[3]} - 27 \Bar{Y}^{[3]} \Big) \Big] \\ & + i \frac{1}{2560} \Bar{\epsilon}^{\beta} \Big[\frac{1}{2}(\sigma^{[2]})_{\beta}^{\ \delta} \Big( 5 X_{\un a[2]} - 27 \Bar{U}_{\un a[2]} \Big) - \frac{1}{3!} (\sigma_{\un a [3]})_{\beta}^{\ \delta} \Big( 5 X^{[3]} - 27 \Bar{U}^{[3]} \Big) \Big] ~~~, \end{split} \\ \begin{split} \delta_{Q} \phi_{\un {a}}{}^{\un{d}\un{e}} ~=~& i \frac{1}{2} \epsilon^{\beta} \Big[ \delta_{\un{a}}^{\ [\un {d}} (\pa^{\un{e}]} \Bar{\psi}_{\beta} ) + \frac{1}{5} (\sigma^{\un{d}\un{e}})_{\beta}^{\ \gamma} (\pa_{\un{a}} \Bar{\psi}_{\gamma}) \Big] - \epsilon^{\beta} \Big[ - 11 \delta_{\un{a}}^{\ [\un{d}} (\pa^{\un{e}]} \lambda_{\beta} ) + (\sigma^{\un{d}\un{e}})_{\beta}^{\ \gamma} (\pa_{\un{a}} \lambda_{\gamma}) \Big] \\ & - i \frac{1}{2} \Bar{\epsilon}^{\beta} \Big[ \delta_{\un{a}}^{\ [\un{d}} (\pa^{\un{e}]} \psi_{\beta} ) + \frac{1} {5} (\sigma^{\un{d}\un{e}})_{\beta}^{\ \gamma} (\pa_{\un{a}} \psi_{\gamma}) \Big] - \Bar{\epsilon}^{\beta} \Big[ - 11 \delta_{\un{a}}^{\ [\un{d}} (\pa^{\un{e}]} \Bar{\lambda}_{\beta} ) + (\sigma^{\un{d}\un{e}})_{\beta}^{\ \gamma} (\pa_{\un{a}} \Bar{\lambda}_{\gamma}) \Big] ~~~. \end{split} \end{align} \newpage \@startsection{section}{1}{\z@{10D, $\cal N$ = 2B Chiral Compensator Considerations} In the limits where all supergravity fields are set to zero, four sets of super algebras emerge. These take the forms: \newline \noindent (a.) 11D, $\cal N$ = 1, \begin{equation} \left\{ \, {\rm D}_{\alpha} ~,~ {\rm D}_{\beta} \, \right\} ~=~ i\, (\gamma{}^{\un a}){}_{\alpha \beta} \, \pa_{\un a} ~~,~~ \left[ \, {\rm D}_{\alpha} ~,~ {\pa}_{\un b} \, \right] ~=~ 0 ~~,~~ \left[ \, {\pa}_{\un a} ~,~ {\pa}_{\un b} \, \right] ~=~ 0 \end{equation} (b.) 
10D, $\cal N$ = 1, \begin{equation} \left\{ \, {\rm D}_{\alpha} ~,~ {\rm D}_{\beta} \, \right\} ~=~ i\, (\sigma{}^{\un a}){}_{\alpha \beta} \, \pa_{\un a} ~~,~~ \left[ \, {\rm D}_{\alpha} ~,~ {\pa}_{\un b} \, \right] ~=~ 0 ~~,~~ \left[ \, {\pa}_{\un a} ~,~ {\pa}_{\un b} \, \right] ~=~ 0 \end{equation} (c.) 10D, $\cal N$ = 2A, \begin{equation} \eqalign{ {~~~~~~} &\left\{ \, {\rm D}_{\alpha} ~,~ {\rm D}_{\beta} \, \right\} ~=~ i\, (\sigma{}^{\un a}){}_{\alpha \beta} \, \pa_{\un a} ~~,~~ \left\{ \, {\rm D}_{\dot \alpha} ~,~ {\rm D}_{\dot \beta} \, \right\} ~=~ i\, (\sigma{}^{\un a}){}_{{\dot \alpha} {\dot \beta}} \, \pa_{\un a} ~~,~~ \left\{ \, {\rm D}_{ \alpha} ~,~ {\rm D}_{\dot \beta} \, \right\} ~=~ 0 ~~, \cr &{~~\,}\left[ \, {\rm D}_{\alpha} ~,~ {\pa}_{\un b} \, \right] ~=~0 ~~,~~{~~~~~~~~~~~~\,~~} \left[ \, {\rm D}_{\dot \alpha} ~,~ {\pa}_{\un b} \, \right] ~=~ 0 ~~,~~ {~~~~~~~~~~\,~~~\,~~} \left[ \, {\pa}_{ \un a} ~,~ {\pa}_{\un b} \, \right] ~=~ 0 ~~, } \end{equation} (d.) 10D, $\cal N$ = 2B, \begin{equation} \eqalign{ {~~~~~~} &\left\{ \, {\rm D}_{\alpha} ~,~ {\rm D}_{\beta} \, \right\} ~=~ 0 ~~,~~ \left\{ \, \Bar{\rm D}_{\alpha} ~,~ \Bar{\rm D}_{\beta} \, \right\} ~=~0 ~~,~~ \left\{ \, {\rm D}_{ \alpha} ~,~ \Bar{\rm D}_{\beta} \, \right\} ~=~ i\, (\sigma{}^{\un a}){}_{\alpha \beta} \, \pa_{\un a} ~~, \cr &{~~\,}\left[ \, {\rm D}_{\alpha} ~,~ {\pa}_{\un b} \, \right] ~=~0 ~~,~~{~~~} \left[ \, \Bar{\rm D}_{\alpha} ~,~ {\pa}_{\un b} \, \right] ~=~ 0 ~~,~~ {~~~~} \left[ \, {\pa}_{ \un a} ~,~ {\pa}_{\un b} \, \right] ~=~ 0 ~~, } \end{equation} We next introduce a complex superfield denoted by $\Omega{}_{d}$ into each of these $d$-dimensional superspaces and seek to probe the implications of imposing a first order differential equation on this superfield that utilizes any of the spinorial derivatives above. For either the 11D, $\cal N$ = 1 or 10D, $\cal N$ = 1 superspaces we have \begin{equation} {\rm D}_{ \beta} \, \Omega{}_{d} ~=~ 0 ~~\to~~ {\rm D}_{ \alpha} {\rm D}_{ \beta} \, \Omega{}_{d} ~=~ 0 ~~\to~~ \{ \, {\rm D}_{ \alpha} ~,~ {\rm D}_{ \beta} \} \, \Omega{}_{d} ~=~ 0 ~~\to~~ \pa{}_{ \un c} \, \Omega{}_{d} ~=~ 0 ~~~, \label{Trv1} \end{equation} and by analogy for the 10D, $\cal N$ = 2A superspace we find $$ {\rm D}_{ \beta} \, \Omega{}_{d} ~=~ 0 ~~\to~~ {\rm D}_{ \alpha} {\rm D}_{ \beta} \, \Omega{}_{d} ~=~ 0 ~~\to~~ \{ \, {\rm D}_{ \alpha} ~,~ {\rm D}_{ \beta} \} \, \Omega{}_{d} ~=~ 0 ~~\to~~ \pa{}_{ \un c} \, \Omega{}_{d} ~=~ 0 ~~~, $$ \begin{equation} {~~~~~\,} {\rm D}_{\dot \beta} \, \Omega{}_{d} ~=~ 0 ~~\to~~ {\rm D}_{\dot \alpha} {\rm D}_{\dot \beta} \, \Omega{}_{d} ~=~ 0 ~~\to~~ \{ \, {\rm D}_{\dot \alpha} ~,~ {\rm D}_{\dot \beta} \} \, \Omega{}_{d} ~=~ 0 ~~\to~~ \pa{}_{ \un c} \, \Omega{}_{d} ~=~ 0 ~~~. \label{Trv2} \end{equation} Thus, from (\ref{Trv1}) and (\ref{Trv2}) we find that the superfield $\Omega{}_{d}$ in each of these $d$-dimensional superspaces must be a constant.
However, upon repeating these considerations for the 10D, $\cal N$ = 2B superspace we find \begin{equation} \eqalign{ {\rm D}_{ \beta} \, \Omega{}_{d} ~&=~ 0 ~~\to~~ {\rm D}_{ \alpha} {\rm D}_{ \beta} \, \Omega{}_{d} ~=~ 0 ~~\to~~ \{ \, {\rm D}_{ \alpha} ~,~ {\rm D}_{ \beta} \} \, \Omega{}_{d} ~=~ 0 ~~\to~~ 0 ~=~ 0 ~~~, \cr \Bar{\rm D}_{\beta} \, \Omega{}_{d} ~&=~ 0 ~~\to~~ \Bar{\rm D}_{\alpha} \Bar{\rm D}_{\beta} \, \Omega{}_{d} ~=~ 0 ~~\to~~ \{ \, \Bar{\rm D}_{\alpha} ~,~ \Bar{\rm D}_{\beta} \} \, \Omega{}_{d} ~=~ 0 ~~\to~~ 0 ~=~ 0 ~~~,} \label{Trv3} \end{equation} which shows that the superfield $\Omega{}_{d}$ in this case can be a non-trivial representation of the translation operator. The differential equation \begin{equation} \Bar{\rm D}_{\beta} \, \Omega{}_{d} ~=~ 0 ~~~, \end{equation} in the context of four dimensions implies that $\Omega{}_{d}$ is a ``chiral superfield.'' On the other hand the differential equation \begin{equation} {\rm D}_{\beta} \, \Omega{}_{d} ~=~ 0 ~~~, \end{equation} in the context of four dimensions implies that $\Omega{}_{d}$ is an ``anti-chiral superfield.'' While it is not possible to simultaneously impose both conditions because a chiral superfield is the complex conjugate of an anti-chiral one, either one or the other can be imposed. This also means that neither the chiral nor the anti-chiral condition can be applied to a real superfield. Let us return to the results shown in (\ref{cnGt}) by focusing only on the equations that contain $ \lambda_{\alpha} $ \begin{equation} \eqalign{ {\rm D}_{\alpha}\Psi ~=~& i \frac{1}{2} (\sigma^{\un a})_{\alpha\gamma} \Bar{\psi}_{\un a }{}^{\gamma} + 5 \lambda_{\alpha} ~\equiv~ - i \frac{1}{2} \Bar{\psi}_{\alpha} + 5 \lambda_{\alpha} ~~~, \cr {\rm D}_{\alpha} \Bar{\Psi} ~=~& i \frac{1}{2} (\sigma^{\un a})_{\alpha\gamma} \Bar{\psi}_{\un a }{}^{\gamma} - 27 \lambda_{\alpha} ~\equiv~ - i \frac{1}{2} \Bar{\psi}_{\alpha} - 27 \lambda_{\alpha} ~~~,} \label{cnGt2} \end{equation} since the remaining equations can be obtained by complex conjugation. In all the other cases we have explored, there is no spinor field such as $ \lambda_{\alpha} $. Taking the difference of the two equations that appear in (\ref{cnGt2}), we may obtain \begin{equation} i \, \fracm 1{32} \, {\rm D}_{\alpha} {\big ( } \, \Psi ~-~ {\Bar \Psi} \, {\big ) } ~=~ i\, \lambda_{\alpha} ~~~. \end{equation} However, the quantity $i \, ( \Psi \,-\, {\Bar \Psi} ) $ is a real superfield. The requirement that $ \lambda_{\alpha} $ = 0 is equivalent to the imposition of an anti-chirality condition on a real superfield and, as noted above, this condition possesses no non-trivial solution. The inability to introduce such a chiral superfield distinguishes the type 2B theory from the other higher dimensional constructions we have considered. At first order in the $\theta$-expansion of $\Psi$ both the spin-1/2 portion of the gravitino ${\Bar \psi}_{\un a }{}^{\gamma} $ {\em {and}} a separate spin-1/2 auxiliary spinor $ \lambda_{\alpha} $ must exist. \newpage \@startsection{section}{1}{\z@{Conclusion \& Possible Future Directions} \label{conclusions} \vskip.2in In this work, we have presented the forms of the superspace torsions and curvature supertensors that are consistent with Nordstr\" om supergravity in eleven and ten dimensional superspaces. For the superspaces in 11D, $\cal N$ = 1, 10D, $\cal N$ = 1, 10D, $\cal N$ = 2A, and 10D, $\cal N$ = 2B, these results are found in the sets of equations given as (3.12) - (3.20), (3.36) - (3.44), (3.62) - (3.85), and (3.110) - (3.134), respectively.
To our knowledge, these presentations constitute new results for the superspace torsions and curvature supertensors in these domains. The use of the superfield $\Psi$ in all cases guarantees that all of these theories are ``off-shell'' supersymmetric without the need to impose any equations of motion for the fulfillment of a local supersymmetry algebra. The fact that the $\Psi$ used in each case does not satisfy any a priori superdifferential constraint implies the closure of the algebra. Unfortunately, this same fact also implies that each of the descriptions we have provided is not an irreducible one. Exploring the possibility of imposing further superdifferential constraints to obtain one or more irreducible representations is work for the future. The work completed in this paper also suggests two new pathways to explore elements of 11D, $\cal N$ = 1 supergeometry. \noindent (a.) \newline \indent In the works of \cite{2MT,2MTcf} on the basis of the study of solutions to the 11D superspace Bianchi identities up to engineering dimension one, forms for the superspace torsions and curvature supertensors were proposed. Upon comparing particularly the results in the first of these references to the results derived in the current work as seen in (3.12) - (3.20), apparent concurrence is found. In the work of \cite{2MT}, we can use the definition\footnote{In comparison to these older works we have ``rescaled'' $t_{[2]} $, $U_{[3]}$, $V_{[4]} $, and $Z_{[5]}$ relative to the original definitions. } \begin{equation} \nabla_{\alpha} J_{\beta} ~=~ C_{\alpha\beta} S + (\gamma^{\un{a}})_{\alpha\beta} v_{\un{a}} + \frac{1}{2} (\gamma^{ [2]})_{\alpha\beta} t_{[2]} + \frac{1}{3!} (\gamma^{[3]})_{\alpha\beta} U_{[3]} + \frac{1}{4!} (\gamma^{[4]})_{\alpha\beta} V_{[4]} + \frac{1}{5!} (\gamma^{[5]})_{\alpha\beta} Z_{[5]} ~~~. \label{delJdef} \end{equation} In this former work, we must set the 11D ``on-shell'' superfield $W{}_{\un a \un b \un c \un d}$ to zero to make comparisons. When this is done, then by a change of notation where \begin{equation} \psi{}_{\alpha} ~\to~ J {}_{\alpha} ~~~,~~~ K{}_{\un a} ~\to~ v {}_{\un a} ~~~,~~~ K{}_{[2]} ~\to~ t {}_{[2]} ~~~,~~~ K{}_{[3]} ~\to~ U{}_{[3]} ~~~,~~~ K{}_{[4]} ~\to~ V{}_{[4]} ~~~,~~~ K{}_{[5]} ~\to~ Z{}_{[5]} ~~~,~~~ \end{equation} we look at (\ref{delJdef}) in contrast to the form of (3.8) and (3.9) in this work. We find in the Nordstr\" om limit, \begin{equation} v {}_{\un a} ~=~ \pa{}_{\un a} \Psi ~~~,~~~ t {}_{[2]} ~=~ 0 ~~~,~~~ Z{}_{[5]} ~= ~ 0 ~~~, \label{fieldsolns} \end{equation} and thus there is significant overlap. In particular, the results in (\ref{fieldsolns}) tell us something interesting about the $J{}_{\alpha}$ tensor. We can decompose it into two parts \begin{equation} J{}_{\alpha} ~=~ J{}_{\alpha}^{(T)} ~+~ {\rm D}{}_{\alpha} \Psi \end{equation} which is equivalent to the usual decomposition of a gauge field into its transverse and longitudinal parts. Upon setting $J{}_{\alpha}^{(T)}$ = 0, one recovers the Nordstr\" om theory. There is a further feature noted in the work of \cite{2MTcf} that is also indicated as a direction to include in this new pathway of exploration for 11D superspace supergravity. While the notion of superconformal symmetry is not presently understood in a number of approaches to the study of 11D supergravity, the superspace approach in \cite{2MTcf} is indicative of a specific further modification.
In particular, by the introduction of a scaling transformation of the supervielbein, it was found that a modification of the spinor-spinor-vector component of the supertorsion, given by the expression \begin{equation} T{}_{\alpha \beta}{}^{\un c} ~=~ i(\gamma^{\un{c}})_{\alpha\beta} ~+~ i(\gamma^{[\un 2]})_{\alpha\beta} \, {\cal X}{}_{[{\un 2}]}{}^{\un c} ~+~ i(\gamma^{[\un 5]})_{\alpha\beta} \, {\Hat {\cal X}}{}_{[{\un 5}]}{}^{\un c} \end{equation} is consistent with the superspace scale transformations if and only if the ``$\cal X$-tensor'' and ``$\Hat {\cal X}$-tensor'' satisfy the conditions, \begin{equation} {\cal X}{}_{{\un a}{\un c}}{}^{\un c} ~=~ 0 ~~~,~~~ \epsilon{}^{[\un 8] {\un a}{\un b} {\un c}} {\cal X}{}_{{\un a}{\un b}}{}_{\un c} ~=~ 0 ~~~,~~~ {\Hat {\cal X}}{}_{[{\un 4}]{\un c}}{}^{\un c} ~=~ 0 ~~~,~~~ \epsilon{}^{[\un 5] {\un a}{\un b} {\un c} {\un d}{\un e} {\un f}} {\Hat {\cal X}}{}_{{\un a}{\un b}{\un c}{\un d}{\un e}}{}_{\un f} ~=~ 0 ~~~. \end{equation} A detailed and careful study of the 11D superspace supergravity Bianchi identities with the modifications in the current work as well as the works of \cite{2MT,2MTcf} is indicated to assess the form of any equations of motion that emerge when the on-shell field strength is retained. \noindent (b.) \newline \indent While the pathway for future investigation described above depends on the study of 11D supergravity supercovariant tensors and their Bianchi identities, the ``Breitenlohner Approach'' suggests a second pathway. The 4D, $\cal N$ = 1 Wess-Zumino gauge vector supermultiplet in (\ref{V1}) (or alternatively the component level 4D, $\cal N$ = 1 supermultiplet) arises in a very interesting way related to the 4D, $\cal N$ = 1 real pseudoscalar superfield $V$. The component fields in $V$ may be expressed as an expansion in terms of the fermionic superspace D-operators followed by taking the limit as $\theta{}^a$ goes to zero. See equation (4.3.4a) in \cite{SpRSp8BK}; the equivalent expressions, using the Majorana superspace coordinates associated with the superspace relevant to the component results in (\ref{V1}), take the forms \begin{equation} \eqalign{ {~~~~~~~~} &C ~=~ V \big| ~~~,~~~ \chi{}_a ~=~ {\rm D}{}_a V \big| ~~~,~~~ M ~=~ C{}^{a b} {\rm D}{}_a {\rm D}{}_b V \big| ~~~,~~~ N ~=~ i (\gamma{}^5){}^{a b} {\rm D}{}_a {\rm D}{}_b V \big| ~~~,~~~ \cr &v{}_{\un a} ~=~ (\gamma{}^5 \gamma{}_{\un a}){}^{a b} {\rm D}{}_a {\rm D}{}_b V \big| ~~~,~~~ \lambda{}^a ~=~ \epsilon{}^{a \, b \,c \, d} {\rm D}{}_b {\rm D}{}_c{\rm D}{}_d V \big| ~~~,~~~ {\rm d} ~=~ \epsilon{}^{a \, b \,c \, d} {\rm D}{}_a {\rm D}{}_b {\rm D}{}_c{\rm D}{}_d V \big| ~~~, \label{fielddefs} }\end{equation} where $ \epsilon{}^{a \, b \,c \, d}$ is the Levi-Civita tensor defined over the Majorana spinor indices. We have also made adaptations in the notation that are appropriate for Majorana basis conventions in 4D. The results in (\ref{fielddefs}) make it clear that there are eight bosons and eight fermions contained in this superfield. It is also clear that there is a component-level gauge 1-form $v{}_{ \un a}$ that occurs at the quadratic order in the $\theta$-expansion of $V$. Now let us consider the situation of an 11D, $\cal N$ = 1 scalar superfield ${\cal V}{}^{(11)}$ analogous to $V$. There are, of course, some differences. For example, in ${\cal V}{}^{(11)}$ there are 2,147,483,648 bosonic component fields and 2,147,483,648 fermionic component fields. 
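As a quick check of this counting, note that a scalar superfield over thirty-two Grassmann coordinates has $2^{32}$ components in its $\theta$-expansion: monomials with an even number of $\theta$'s carry bosonic component fields, and those with an odd number carry fermionic ones, so each sector contains $2^{31}$ fields. A minimal Python sketch of this arithmetic (an illustrative aside only) is
\begin{verbatim}
from math import comb

# theta-expansion of a scalar superfield over 32 Grassmann coordinates:
# even powers of theta -> bosonic component fields,
# odd powers of theta  -> fermionic component fields.
n_theta = 32
bosons = sum(comb(n_theta, k) for k in range(0, n_theta + 1, 2))
fermions = sum(comb(n_theta, k) for k in range(1, n_theta + 1, 2))

assert bosons == fermions == 2**31
print(f"{bosons:,} bosonic and {fermions:,} fermionic components")
# -> 2,147,483,648 of each
\end{verbatim}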
In the 11D, $\cal N$ = 1 superspace, the quadratic order spinor supercovariant derivatives are in the $\{ 1\}$, \{165\}, and \{330\} representations of the 11D Lorentz group whose explicit forms are \begin{equation} \eqalign{ \Delta{}^{(1)} ~&=~ C^{\alpha \beta} \, {\rm D}_{\alpha} \, {\rm D}_{\beta} ~~~, ~~~ \Delta{}^{(165) \, \un{a}\un{b}\un{c}} ~=~ (\gamma^{\un{a}\un{b}\un{c}}) {}^{\alpha \beta} \, {\rm D}_{\alpha} \, {\rm D}_{\beta} ~~~, ~~~ \Delta{}^{(330) \, \un{a}\un{b}\un{c}\un{d}} ~=~ (\gamma^{\un{a}\un{b} \un{c}\un{d}}){}^{\alpha\beta} \, {\rm D}_{\alpha} \, {\rm D}_{\beta} ~~~. } \end{equation} The implication of the existence of these operators is that there is no component field at quadratic order in ${\cal V}{}^{(11)}$ that occurs in the $\{11\}$ representation of the 11D Lorentz group. This should be contrasted with the situation in 4D superspace where the operator $ (\gamma{}^5 \gamma{}_{\un a}){}^{a b} {\rm D}{}_a {\rm D}{}_b$ is in the $\{4\}$ representation of the 4D Lorentz group. However, at quartic order, utilizing the 11D spinorial derivatives, we can define a superfield by the equation \begin{equation} v{}_{\un a}^{(11)} ~=~ \fracm 1{3!} \left[ \Delta{}^{(165) \, \un{b}\un{c} \un{d}} \Delta{}^{(330)}{}_{\un{a}\un{b}\un{c}\un{d}} \, {\cal V}{}^{(11)} \right] ~~~, \end{equation} which is the analog of one of the equations seen in (\ref{fielddefs}). The ``Breitenlohner Approach'' can be followed by defining an operator-valued supergravity co-vector $\bm {\rm {SG}}{}_{\un a}$ through the equation \begin{equation} \eqalign{ {~~~~} {\bm {\rm {SG}}}{}_{\un a} ~=~ &\fracm 1{3!}\left[ \Delta{}^{(165) \, \un{c}\un{d} \un{e}} \Delta{}^{(330)}{}_{\un{a}\un{c}\un{d}\un{e}} \, {\cal V}{}^{(11) \, \un{b}} \right] \, \pa{}_{\un b} ~+~ \fracm 1{3!} \left[ \Delta{}^{(165) \, \un{b}\un{c} \un{d}} \Delta{}^{(330)}{}_{\un{a}\un{b}\un{c}\un{d}} \, {\cal V}{}^{(11) \, \beta} \right] \, {\rm D}{}_{\beta} ~+~ \cr &\fracm 1{2 \cdot 3!} \left[ \Delta{}^{(165) \, \un{b}\un{c} \un{d}} \Delta{}^{(330)}{}_{\un{a}\un{b}\un{c}\un{d}} \, {\cal V}{}^{(11) \, \un{k}\, \un{l}} \right] \, {\cal M}{}_{\un{k} \, \un{l} } ~~~, \label{SGdef} }\end{equation} and above $ \pa{}_{\un b}$, $ {\rm D}{}_{\beta}$, ${\cal M}{}_{\un{k} \, \un{l}}$ denote, respectively, the 11D partial derivative operator, the 11D spinor superspace derivative, and the 11D Lorentz generators. There are other differences as well. The 4D superfield $V$ need only be expanded to quartic order in $\theta{}^a$. In the case of ${\cal V}{}^{(11)}$, the $\theta{}^a$-expansion goes out to order $\theta$ raised to the thirty-second power. If no obstructions occur, this will describe 11D SG in superspace just as the expressions in (1.1) - (1.4) did for 4D, $\cal N$ = 1 superspace. It is possible that the superfields ${\cal V}{}^{(11) \, \un{b}}$, $ {\cal V}{}^{(11) \, \beta}$, and ${\cal V}{}^{(11) \, \un{k}\, \un{l}}$ can be expressed in terms of more fundamental superfields as is the case in 4D, $\cal N$ = 2 superspace supergravity \cite{N2sg}. Moreover, were 11D superspace supergravity to follow the pattern of its lower dimensional ``relatives,'' the conformal part of the 11D graviton would be contained in the first term and the conformal part of the 11D gravitino would be contained in the second term of (\ref{SGdef}). In any case, this approach would put an upper limit on the number of component fields required in ${\bm {\rm {SG}}}{}_{\un a}$. 
We simply count the number of free indices on ${\cal V}{}^{(11) \, \un{b}}$, $ {\cal V}{}^{(11) \, \beta}$, and ${\cal V}{}^{(11) \, \un{k}\, \un{l}}$ to find 11 + 32 + 55 = 98 and multiply by the number of component fields in ${\cal V}{}^{(11)}$ to arrive at 210,453,397,504 bosonic component fields and 210,453,397,504 fermionic component fields. In fact, depending on the size of the null space (and thus the gauge transformations of ${\cal V}{}^{(11)}$) of the condition \begin{equation} \left[ \Delta{}^{(165) \, \un{b}\un{c} \un{d}} \Delta{}^{(330)}{}_{[ \un{a}|\un{b}\un{c}\un{d}} \, \pa{}_{| \un{e} ]} \left( \delta {\cal V}{}^{(11)} \right) \right] ~=~ 0 ~~~, \end{equation} the numbers could be considerably smaller. There is also an argument that can be made to estimate a lower bound on the number of component fields involved. In the works of \cite{GRana1,GRana2}, an algorithm was presented that, given a theory that possesses a number of 1D supercharges, determines the size of the smallest {\em {irreducible}} 1D SUSY representation. The theory in 11D, when reduced to 1D, corresponds to a 1D theory with 32 supercharges. For 1D, $N$ = 32 supersymmetry, the smallest off-shell representations determined by the algorithm possess 32,768 bosonic fields and 32,768 fermionic fields. Once more multiplying by 98, we are led to a lower bound of 3,211,264 bosonic components and 3,211,264 fermionic components. While 3.2 million component fields may seem a large number, it is far less than one percent of 210 billion. Perhaps now the stage is set for us to (very roughly paraphrasing Hilbert, reaching beyond the level of G\" ottingen's children) understand off-shell eleven dimensional supergravity supergeometry for M-Theory... and as well (with appropriate modifications), ten dimensional supergravity supergeometries for heterotic and superstrings. \vspace{.05in} \begin{center} \parbox{4in}{{\it ``Every boy in the streets of G\" ottingen understands more $~$ about four dimensional geometry than Einstein. Yet, in $~$ spite of that, Einstein did the work and not the mathe- $~$ maticians.'' \\ ${~}$ \\ ${~}$ }\,\,-\,\, David Hilbert $~~~~~~~~~$} \parbox{4in}{ $~~$} \end{center} $$~~$$ \noindent {\bf Acknowledgements}\\[.1in] \indent We would like to recognize Martin Cederwall, William Linch, and Warren Siegel for conversations. The research of S.\ J.\ Gates, Jr., Y.\ Hu, and S.-N.\ Mak is supported by the endowment of the Ford Foundation Professorship of Physics at Brown University and partially supported by the U.S. National Science Foundation grant PHY-1315155. \newpage $$~~$$
\section{Introduction} \label{sec:intro} Despite the remarkable success of cosmological models based on \ac{CDM}\ in explaining large-scale structure and other cosmological phenomena, \ac{CDM}\ has faced challenges in predicting aspects of small-scale structure, such as the abundance of dwarf galaxies and the dark-matter density near the centers of galaxies~\citep[e.g.,][]{Weinberg+2015}. The solution to these problems may lie either in baryonic physics (e.g., feedback to the interstellar medium from supernovae or black holes) or in the properties of the dark matter itself. In this paper, we examine aspects of the behavior of \ac{FDM}, which is composed of bosons with extremely small masses, typically ${m_\rb\sim 10^{-21}\mbox{--}10^{-22} \,\mathrm{eV}}$~\citep[e.g.,][]{Hu+2000, Hui+2017}. The corresponding de~Broglie wavelength at velocity $v$ is $\lambda = h/(m_\rb v)\simeq 1.20\,\mathrm{kpc}\, (10^{-22}\,\mathrm{eV} /m_\rb)\allowbreak(100 \,\mathrm{km\ s}^{-1}/v)$; on scales much larger than this, \ac{FDM}\ behaves like \ac{CDM}, but on small scales, it exhibits quite different properties and may match the observations better than \ac{CDM}~\citep[e.g.,][]{Hui+2017}. Several studies have argued that the mass range ${m_\rb\lesssim\mbox{\,a few}\times 10^{-21}\,\mathrm{eV}}$ is ruled out by constraints from the Lyman-$\alpha$ forest power spectrum~\citep{viel+2013,Armengaud+2017, Irsic+2017,kobayashi+2017,Nori+2019}, which would imply that \ac{FDM}\ is indistinguishable from \ac{CDM}\ in its effects on observed small-scale structure. However, (i) these constraints usually rely on assumptions (e.g., uniform ionizing background) that are plausible but may be oversimplified, (ii) the mass constraint can be much weaker in variants of the \ac{FDM}\ model \citep[e.g.,][]{leong+2018}, and (iii) the dynamical processes discussed here can be important whether or not the \ac{FDM}\ particle mass is small enough to influence small-scale structure. Since \ac{FDM}\ particles are bosons, density fluctuations in the dark matter are correlated over a distance of the order of the de~Broglie wavelength.~\citet{Hui+2017} argued that the fluctuating gravitational force from an \ac{FDM}\ field of mean density $\rho$ is similar to that of a classical $N$-body system composed of quasiparticles with effective mass ${ m_\mathrm{eff} \sim \rho \lambda^{3} }$. The relaxation time of a test particle orbiting in a stellar system of radius $R$, containing $N$ bodies of mass $m$, is~\citep{Binney+2008} \begin{equation} \label{eq:intro} t_\mathrm{r} \sim \frac{\sigma^3R^3}{G^2m^2 N \log N} \sim \frac{N t_\mathrm{d}}{\log N }, \end{equation} where $ t_\mathrm{d} \sim R/\sigma $ is the dynamical time and the typical velocity $\sigma\sim {(G m N/R)}^{1/2}$ by the virial theorem. In plausible \ac{CDM}\ models, the particle mass $m$ is so small that the relaxation time is much larger than the age of the universe, so dark halos are collisionless. In \ac{FDM}\ models, however, the effective mass $m_\mathrm{eff}$ of the quasiparticles is large enough that relaxation can be important. Relaxation between the quasiparticles leads to the formation and growth of a central Bose--Einstein condensate or soliton, a process that we do not study here. Relaxation between the quasiparticles and macroscopic objects such as stars can heat, and therefore expand, the stellar system as it evolves toward equipartition with the quasiparticles. 
Relaxation between the quasiparticles and massive objects such as black holes or globular clusters can stall the inspiral of the massive object toward the center of the galaxy that is otherwise caused by dynamical friction from both baryonic and dark matter. The goal of this paper is to place these physical arguments on a firm quantitative basis by analyzing the nature of the relaxation of classical particles, both test particles and massive objects, in an infinite \ac{FDM}\ halo that is homogeneous in the mean. The paper is organized as follows. In Section~\ref{sec:dcs}, we derive the velocity diffusion coefficients for a zero-mass test particle traveling at constant velocity in a homogeneous halo with stochastic density fluctuations described by a correlation function. These results are used in Section~\ref{sec:tb} to re-derive the classical formulae for the diffusion coefficients in a gravitational $N$-body system and in Section~\ref{sec:dc_fdm} to derive the analogous formulae for an \ac{FDM}\ halo. We find that there are remarkable parallels between the results of the two calculations. In Section~\ref{sec:dyf}, we consider the steady force acting on a massive object traveling through an \ac{FDM}\ halo (dynamical friction), which completes the standard set of diffusion coefficients. We then turn in Section~\ref{sec:ms} to a brief discussion of two applications: the inspiral of a massive object into the galactic center by dynamical friction, which can be halted by relaxation at scales where the mass of the object becomes comparable to the mass of the \ac{FDM}\ quasiparticles, and the expansion and heating of a stellar system such as a bulge embedded in an \ac{FDM}\ halo. We summarize and conclude in Section~\ref{sec:summary}. \subsection{The Coulomb logarithm} \label{sec:coulomb} The factor $\log N$ in \eq~\eqref{eq:intro} is known as the Coulomb logarithm~\citep{Binney+2008}. More generally, it is written as $\log\Lambda$, where $\Lambda \equiv b_{\max} / b_{\min}$ is the ratio between the maximum and minimum scales of the encounters that dominate the diffusion coefficients or relaxation time. The calculations in this paper are for an infinite homogeneous system, for which we invoke the Jeans swindle, that is, we neglect any acceleration due to the homogeneous average density~\citep{Binney+2008}. This assumption breaks down on scales larger than the Jeans length ${(\sigma^2/ G \rho)}^{1/2}$, which for a finite system is comparable to the system's radius $R$ by the virial theorem. At scales much larger than $R$, the density is negligible so $b_{\max}\lesssim R$. In addition, if the system is centrally concentrated and the orbital size $r\ll R$, then the effects of encounters on a scale $b$ with $r\ll b\ll R$ will average to zero.\footnote{This is not true for systems in which there is a global resonance, such as spherical or nearly Keplerian systems~\citep[e.g.,][]{Rauch+1996,Kocsis+2015}. In such cases, ``resonant relaxation'' implies that the slow action associated with the global resonance relaxes much faster than the other actions.} Thus, we can set $b_{\max} \simeq \min(r,R)$. In classical $N$-body systems composed of point particles of mass $m_{\mathrm{p}}$, we set $b_{\min}\simeq b_{90}\equiv Gm_{\mathrm{p}}/\sigma^2$; this is the impact parameter at which a typical particle is deflected by $90^\circ$. 
If the particles have a non-zero scale length $\varepsilon$, either because they represent star clusters or other sub-systems of non-zero size, or because they have been artificially ``softened'' to reduce integration errors, we set $b_{\min}\simeq \varepsilon$ (see Section~\ref{sec:dcs} for more details). Finally, in modeling relaxation due to \ac{FDM}\ quasiparticles we will show that it is appropriate to set the minimum scale to half of the typical de~Broglie angular wavelength,\footnote{We distinguish between the typical de~Broglie wavelength $\lambda_\sigma = h/(m_\rb\sigma)$ and the typical de~Broglie \emph{angular} wavelength $\lambdabar_\sigma = \hbar/(m_\rb\sigma)$.} $b_{\min} \simeq \lambdabar_\sigma/2 \equiv \hbar/(2m_\rb \sigma)$. These considerations lead us to define classical, softened, and \ac{FDM}\ Coulomb factors: \begin{equation} \label{eq:coulomb} \Lambda_\mathrm{cl}\equiv \frac{b_{\max}}{b_{90}}, \quad \Lambda_\mathrm{soft}\equiv \frac{b_{\max}}{\varepsilon}, \quad \Lambda_\mathrm{FDM}\equiv \frac{2b_{\max}}{\lambdabar_\sigma}. \end{equation} The precise value of $\Lambda$ is uncertain by a factor of order unity because $b_{\max}$ cannot be determined exactly from calculations in an infinite homogeneous medium.\footnote{Accurate treatments of the diffusion coefficients in inhomogeneous equilibrium stellar systems require the analysis of orbital resonances~\citep{Tremaine+1984,Binney+1988, Heyvaerts2010, Chavanis2012, Fouvry+2018}.} This ambiguity is only a minor concern in the common case where $\Lambda\gg 1$. Therefore, for clarity, we shall always express our formulae in terms of one of the three quantities in \eq~\eqref{eq:coulomb}, even when the derivation yields a value for the argument of the log that differs from these by a factor of order unity. \section{Diffusion of a test particle in a fluctuating density field} \label{sec:dcs} In this section, we calculate the velocity diffusion coefficients for a zero-mass test particle embedded in a potential that exhibits stochastic fluctuations but is on average uniform in space and stationary in time. The stochastic fluctuations drive the evolution of the test particle's velocity, and their spatial and temporal correlations determine the degree to which the test particle can respond to these fluctuations.\footnote{A similar approach was pioneered by~\cite{Cohen1975} who calculated the temporal and spatial correlations of the stochastic forces in a finite homogeneous stellar system.} Here we also restrict ourselves to the linear approximation, which for classical particles is equivalent to assuming that $\varepsilon\gg b_{90}$ in the notation of Section~\ref{sec:coulomb}. This approximation is valid for most cases of interest involving \ac{FDM}\@. Consider a time-dependent potential $\Phi(\mathbf{r},t)$ with zero mean, $\langle \Phi(\mathbf{r},t)\rangle=0$, and stationary correlation function, \begin{equation} \label{eq:Phi_corr} \langle \Phi(\mathbf{r}, t) \, \Phi(\mathbf{r}^{\prime}, t^{\prime}) \rangle = C_{\Phi}(\mathbf{r} - \mathbf{r}^{\prime}, t - t^{\prime}); \end{equation} here ${ \langle \,\cdot\, \rangle }$ denotes an ensemble average, i.e., an average over all possible realizations of the potential. 
It is useful to write the potential in terms of its temporal and spatial Fourier transform, ${ \widehat{\Phi} (\mathbf{k}, \omega) }$, defined by \begin{equation} \label{eq:Phi_ft} \Phi(\mathbf{r}, t) = \!\iint\!\frac{\mathrm{d} \mathbf{k} \mathrm{d} \omega}{{(2\pi)}^4} \, \widehat{\Phi} (\mathbf{k}, \omega) \, \mathrm{e}^{\mathrm{i} \mathbf{k}\cdot\mathbf{r} - \mathrm{i} \omega t}. \end{equation} The correlation function of ${ \widehat{\Phi} (\mathbf{k}, \omega) }$ is given by \begin{equation} \label{eq:Phi_ft_corr} \langle \widehat{\Phi} (\mathbf{k}, \omega) \, \widehat{\Phi}^{*}(\mathbf{k}^{\prime} , \omega^{\prime}) \rangle = {(2\pi)}^4 \widehat{C}_{\Phi} (\mathbf{k}, \omega) \delta (\omega - \omega^{\prime}) \delta(\mathbf{k} - \mathbf{k}^{\prime}), \end{equation} where ${ \widehat{C}_{\Phi} (\mathbf{k}, \omega) }$ is the temporal and spatial Fourier transform of the potential correlation function ${ C_{\Phi}(\mathbf{r}, t) }$. Given the potential in \eq~\eqref{eq:Phi_ft}, the acceleration of the test particle is \begin{equation} \label{eq:eom} \dot{\bv}(\mathbf{r}, t) = -\nabla \Phi(\mathbf{r},t) = - \mathrm{i} \!\iint\frac{\mathbf{k} \mathrm{d} \mathbf{k} \mathrm{d} \omega}{{(2\pi)}^4} \, \widehat{\Phi} (\mathbf{k}, \omega) \, \mathrm{e}^{\mathrm{i} \mathbf{k}\cdot\mathbf{r} - \mathrm{i} \omega t}, \end{equation} and its change in velocity over time $t$ is given by \begin{equation} \label{eq:dv} \Delta \bv(t) = \dint[0][t] s\, \dot{\bv}[\mathbf{r}(s), s]. \end{equation} As we assume that the mean force is zero, we can expand ${ \mathbf{r}(t) }$ around the initial position and velocity, \begin{equation} \label{eq:rt} \mathbf{r}(t) = \mathbf{r}_0 + \bv_0 t + \dint[0][t] s \, (t-s) \, \dot{\bv}(\mathbf{r}_0+\bv_0 s,s) + \cdots . \end{equation} Thus, the change in velocity is given by \begin{align} \label{eq:dv_exp} \Delta \bv(t) = {} & -\mathrm{i} \!\iint\frac{\mathbf{k} \mathrm{d} \mathbf{k} \mathrm{d} \omega}{{(2\pi)}^4} \widehat{\Phi} (\mathbf{k}, \omega) \, \mathrm{e}^{\mathrm{i} \mathbf{k}\cdot\mathbf{r}_0} \dint[0][t] s \, \mathrm{e}^{\mathrm{i}(\mathbf{k}\cdot\bv_0 - \omega) s} \nonumber \\ & + \mathrm{i}\!\iint\frac{\mathbf{k} \mathrm{d} \mathbf{k} \mathrm{d} \omega}{{(2\pi)}^4} \!\!\iint\frac{\mathbf{k} \!\cdot\! \mathbf{k}^{\prime} \mathrm{d} \mathbf{k}^{\prime} \mathrm{d} \omega^{\prime}}{{(2\pi)}^4} \, \widehat{\Phi} (\mathbf{k}, \omega) \, \widehat{\Phi}^{*} (\mathbf{k}^{\prime}, \omega^{\prime}) \mathrm{e}^{\mathrm{i} (\mathbf{k} - \mathbf{k}^{\prime}) \cdot\mathbf{r}_0}\dint[0][t] s \, \mathrm{e}^{\mathrm{i}(\mathbf{k}\cdot\bv_0-\omega) s} \dint[0][s] s^{\prime} \, (s - s^{\prime}) \, \mathrm{e}^{-\mathrm{i}(\mathbf{k}^{\prime} \cdot \bv_0-\omega^{\prime} ) s^{\prime}}, \end{align} in which we have kept only terms up to second order in ${ \widehat{\Phi} (\mathbf{k}, \omega) }$. To proceed forward, we assume that changes in velocity result from the accumulation of many small increments. As a result, the velocity evolution can be described by a Fokker--Planck equation in which the first and second diffusion coefficients are the first and second moments of the transition probability, namely ${ D[\Delta v_i] = \langle \Delta v_i (T) \rangle/T }$ and ${ D[\Delta v_i \Delta v_j] = \langle \Delta v_i(T) \, \Delta v_j(T) \rangle/T }$, where ${\Delta v_i(T)}$ is the change in velocity component $i$ over a time $T$. 
This is equivalent to ignoring higher moments of the transition probability~\citep[e.g.,][]{Henon1960,Risken1989} and is usually a good assumption if the first and second moments of the transition probability are finite. Therefore, the diffusion coefficients are given by\footnote{In deriving the first of these equations, we have made use of the fact that $\Phi(\mathbf{r},t)$ is real, so $\widehat{C}_{\Phi}(-\mathbf{k},-\omega)=\widehat{C}_{\Phi}(\mathbf{k},\omega)$ for real $\omega$ and $\mathbf{k}$.} \begin{equation} \label{eq:Dv} D[\Delta v_i] = \frac{1}{2} \sum_j \dpd{}{v_j}\, \!\!\iint\! \frac{ \mathrm{d} \mathbf{k} \mathrm{d} \omega}{{(2\pi)}^3}k_i k_j \, \widehat{C}_{\Phi} (\mathbf{k}, \omega) \, K_T (\omega-\mathbf{k}\cdot\bv), \end{equation} and \begin{equation} \label{eq:Dvv} D[\Delta v_i \Delta v_j] = \!\!\iint\! \frac{\mathrm{d} \mathbf{k} \mathrm{d} \omega}{{(2\pi)}^3} k_i k_j \widehat{C}_{\Phi} (\mathbf{k}, \omega) K_T(\omega-\mathbf{k}\cdot\bv), \end{equation} where \begin{equation} \label{eq:KT} K_T(\omega) \equiv \frac{1}{2\pi T} \!\!\int_0^T\! \!\int_0^T \! \mathrm{d} s \mathrm{d} s^{\prime} \mathrm{e}^{\mathrm{i} \omega (s - s^{\prime})} = \frac{1-\cos(\omega T)}{\pi \omega^2 T}, \end{equation} is the finite-time kernel, which is normalized such that ${ \! \int \! \mathrm{d} \omega \, K_T(\omega) = 1 }$. In the limit ${ T\to \infty }$, $K_T(\omega) \to \delta(\omega)$ and $D[\Delta v_i \Delta v_j]$ becomes time-independent, and the process is diffusive. On short timescales, $K_{T} \to T/(2\pi)$, the process is ballistic, and ${D[\Delta v_i \Delta v_j] \simeq \langle \dot{v}_i\dot{v}_j \rangle T}$ describes the instantaneous coherent (in time) force acting on the test particle. \Eqs~\eqref{eq:Dv} and~\eqref{eq:Dvv} satisfy the relation \begin{equation} \label{eq:Dv_fd} D[\Delta v_i] = \frac{1}{2}\sum_j \dpd{}{v_j} D[\Delta v_i \Delta v_j], \end{equation} which is the fluctuation-dissipation relation for a zero-mass test particle~\citep[e.g.,][]{Binney+1988,Binney+2008}.\footnote{This derivation is more general than the one in~\citet{Binney+2008}, which requires that the distribution function of the particles that cause the potential fluctuations is isotropic in velocity space.} We now assume that there is a finite correlation time $T_{\mathrm{c}}$ such that ${ C_{\Phi}(\mathbf{r}, t)\to 0 }$ for ${ |t|\gg T_{\mathrm{c}} }$, and that this correlation time is much shorter than any other time of interest. This assumption allows us to take the limit ${ T \to \infty }$, in which the kernel ${ K_T(\omega) }$ can be approximated as a delta function. \Eqs~\eqref{eq:Dv} and~\eqref{eq:Dvv} then read \begin{align} \label{eq:DvII} D_{i} & = D[\Delta v_i] = \frac{1}{2}\sum_j \dpd{}{v_j} \!\int\! \frac{\mathrm{d} \mathbf{k}}{{(2\pi)}^3} k_i k_j \widehat{C}_{\Phi} (\mathbf{k}, \mathbf{k}\!\cdot\!\bv),\\ \label{eq:DvvII} D_{ij} & = D[\Delta v_i \Delta v_j] = \!\!\int\!\ \frac{\mathrm{d} \mathbf{k}}{{(2\pi)}^3} k_i k_j \widehat{C}_{\Phi} (\mathbf{k}, \mathbf{k}\!\cdot\!\bv). \end{align} Under these approximations, the probability distribution of the velocity of a test particle, ${ P (\bv,t) }$, is governed by the Fokker--Planck equation \begin{equation} \label{eq:FP} \dpd{P (\bv,t)}{t} = -\sum_i\dpd{}{v_i}\big[D_i \, P(\bv,t)\big] +\frac{1}{2}\sum_{ij}\dpd{^2}{v_i\partial v_j}\big[D_{ij} \, P (\bv,t)\big] =\frac{1}{2}\sum_{ij}\dpd{}{v_i}\! \bigg[ \! D_{ij} \, \! \dpd{P (\bv,t)}{v_j} \bigg], \end{equation} where the last equality is derived using \eq~\eqref{eq:Dv_fd}. 
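The replacement of the kernel $K_T(\omega)$ by a delta function at large $T$, which underlies \eqs~\eqref{eq:DvII} and~\eqref{eq:DvvII}, is simple to check numerically. The following short Python sketch (an illustrative check on an arbitrarily chosen frequency grid) evaluates $\int\mathrm{d}\omega\,K_T(\omega)$ for several values of $T$; the integral stays close to unity while the peak value $K_T(0)=T/(2\pi)$ grows, i.e., the kernel narrows toward a Dirac delta:
\begin{verbatim}
import numpy as np

def K_T(omega, T):
    # finite-time kernel (1 - cos(omega T)) / (pi omega^2 T),
    # with the small-omega limit T / (2 pi) used near omega = 0
    omega = np.asarray(omega, dtype=float)
    small = np.abs(omega * T) < 1e-8
    safe = np.where(small, 1.0, omega)
    return np.where(small, T / (2.0 * np.pi),
                    (1.0 - np.cos(safe * T)) / (np.pi * safe**2 * T))

for T in (1.0, 10.0, 100.0):
    w = np.linspace(-2000.0 / T, 2000.0 / T, 800001)  # dense grid over many lobes
    norm = np.trapz(K_T(w, T), w)
    print(T, norm, float(K_T(0.0, T)))
# norm stays ~1 for every T, while K_T(0) = T/(2 pi) grows linearly with T
\end{verbatim}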
We now specialize to the case in which the potential fluctuations arise from density fluctuations $\rho (\mathbf{r}, t)$ around a mean field density $\rho_\mathrm{p}$, so ${\langle \rho(\mathbf{r}, t) \rangle = 0}$. Assuming that these fluctuations are a stationary homogeneous random field, the correlation function of the density fluctuations can be written as \begin{equation} \label{eq:rho_corr} \langle \rho(\mathbf{r}, t) \, \rho(\mathbf{r}^{\prime} , t^{\prime}) \rangle = C_\rho(\mathbf{r} - \mathbf{r}^{\prime}, t - t^{\prime}). \end{equation} The potential fluctuations associated with the density fluctuations $\rho(\mathbf{r}, t)$ are given by the Fourier transform of Poisson's equation, ${ \widehat{\Phi} (\mathbf{k} ,\omega) = - 4 \pi G \widehat{\rho} (\mathbf{k}, \omega)/k^2 }$, $k = |\mathbf{k}|$, and the Fourier transforms of the correlation functions are related by \begin{equation} \label{eq:C_R} \widehat{C}_{\Phi}(\mathbf{k}, \omega) = \frac{16 \pi^2 G^2}{k^4} \widehat{C}_{\!\rho} (\mathbf{k}, \omega). \end{equation} The diffusion coefficients, \eqs~\eqref{eq:DvII} and~\eqref{eq:DvvII}, become \begin{align} \label{eq:Dv_rho} D_i &= \frac{G^2}{\pi} \sum_j \dpd{}{v_j}\, \dint \mathbf{k} \frac{k_i k_j}{k^4} \widehat{C}_{\!\rho} (\mathbf{k}, \mathbf{k}\!\cdot\!\bv),\\ \label{eq:Dvv_rho} D_{ij} &= \frac{2G^2}{\pi} \dint \mathbf{k} \frac{k_i k_j}{k^4} \widehat{C}_{\!\rho} (\mathbf{k}, \mathbf{k}\!\cdot\!\bv). \end{align} \subsection{Classical two-body relaxation} \label{sec:tb} We now use the results from the preceding discussion to obtain the diffusion coefficients for a zero-mass test particle interacting with an infinite, homogeneous system of classical ``field'' particles of individual mass $m_{\mathrm{p}}$, characterized by a \ac{DF}\ ${ F_{\mathrm{p}} (\bv) }$. Here, the \ac{DF}\ is normalized such that ${ \!\int\! \mathrm{d} \bv F_{\mathrm{p}} (\bv) }$ is the mass density $\rho_\mathrm{p}$, and we ignore the self-gravity of the particles, considering only the gravitational forces that they exert on the test particle. This is the system examined in the classic work of~\citet{Chandrasekhar1942,Chandrasekhar1943}. Given our assumptions, each field particle travels on a straight line at constant velocity. Then, the density fluctuations around the mean density $\rho_\mathrm{p}$ are given by \begin{equation} \label{eq:rho_nb} \rho(\mathbf{r},t) = m_{\mathrm{p}} \sum_{n} \delta (\mathbf{r} - \mathbf{r}_n - \bv_n t) - \rho_\mathrm{p} , \end{equation} where ${ (\mathbf{r}_n, \bv_n )}$ stands for the position and velocity of the field particle $n$ at time $t=0$. The associated density correlation function is \begin{equation} \label{eq:rho_c} C_{\!\rho}(\mathbf{r}-\mathbf{r}^{\prime},t-t^{\prime})=\langle \rho(\mathbf{r},t) \, \rho(\mathbf{r}^{\prime} , t^{\prime}) \rangle = m_{\mathrm{p}} \dint \bv\, \delta[\mathbf{r} - \mathbf{r}^{\prime} - \bv (t - t^{\prime} )] F_{\mathrm{p}} (\bv) , \end{equation} and its temporal and spatial Fourier transform is \begin{equation} \label{eq:Bft} \widehat{C}_{\!\rho} (\mathbf{k},\omega) = 2\pi m_{\mathrm{p}} \dint \bv'\, \delta(\mathbf{k}\cdot\bv' -\omega) F_{\mathrm{p}}(\bv'). \end{equation} From \eqs~\eqref{eq:Dv_rho} and~\eqref{eq:Dvv_rho}, we obtain the first and second-order diffusion coefficients, \begin{align} \label{eq:Dv_2b} D_i = {} & 2 G^2m_{\mathrm{p}} \!\log\Lambda \dint\widehat{\mathbf{k}} \,\widehat{k}_i \dint \mathbf{v}^{\prime} \, \widehat{\mathbf{k}} \!\cdot\! \dpd{}{\bv} \delta[\widehat{\mathbf{k}} \!\cdot\! 
(\mathbf{v}^{\prime} - \bv) ] F_{\mathrm{p}}(\bv') \nonumber \\ = {} & 2 G^2m_{\mathrm{p}} \!\log\Lambda \dpd{}{v_i} \dint\widehat{\mathbf{k}} \dint \mathbf{v}^{\prime}\, \delta[\widehat{\mathbf{k}} \!\cdot\! (\mathbf{v}^{\prime} - \bv) ] F_{\mathrm{p}}(\bv') \nonumber \\ = {} & 4\pi G^2m_{\mathrm{p}} \!\log\Lambda \,\dpd{}{v_i} \! \int \! \mathrm{d} \mathbf{v}^{\prime} \frac{F_{\mathrm{p}} (\mathbf{v}^{\prime})}{|\bv - \mathbf{v}^{\prime}|}, \end{align} and \begin{align} \label{eq:Dvv_2b} D_{ij} = {} & 4 G^2 m_{\mathrm{p}} \,\!\log\Lambda \dint \widehat{\mathbf{k}} \, \widehat{k}_i \widehat{k}_j\dint \mathbf{v}^{\prime} \, \delta[\widehat{\mathbf{k}} \!\cdot\! (\mathbf{v}^{\prime} - \bv) ]F_{\mathrm{p}} (\mathbf{v}^{\prime}) \nonumber \\ = {} & 2 G^2 m_{\mathrm{p}} \,\!\log\Lambda \frac{\partial^2}{\partial v_i \partial v_j} \dint \widehat{\mathbf{k}} \, \dint \mathbf{v}^{\prime} \, |\widehat{\mathbf{k}} \!\cdot\! (\mathbf{v}^{\prime} - \bv)|F_{\mathrm{p}} (\mathbf{v}^{\prime}) \nonumber \\ = {} & 4\pi G^2 m_{\mathrm{p}} \!\log\Lambda \frac{\partial^2}{\partial v_i \partial v_j} \int \mathrm{d} \mathbf{v}^{\prime}\, |\bv-\mathbf{v}^{\prime}| F_{\mathrm{p}} (\mathbf{v}^{\prime}), \end{align} where $\widehat{\mathbf{k}}\equiv \mathbf{k}/|\mathbf{k}|$. Here, ${ \log\Lambda = \!\int\! \mathrm{d} k / k =\log k_{\max}/k_{\min}}$ is the Coulomb logarithm (Section~\ref{sec:coulomb}), because we can identify $1/k_{\min}$ and $1/k_{\max}$ as the maximum and minimum scales $b_{\max}$ and $b_{\min}$ to within factors of order unity. To obtain \eqs~\eqref{eq:Dv_2b} and~\eqref{eq:Dvv_2b}, we used the relation \begin{equation} \label{eq:delta_int} \dpd{}{x_{j_1}}\dots \dpd{}{x_{j_\ell}} \dint \widehat{\mathbf{k}}\, \widehat{k}_{i_1}\dots\widehat{k}_{i_n}\delta(\widehat{\mathbf{k}}\!\cdot\!\mathbf{x}) = \dpd{}{x_{i_1}}\dots \dpd{}{x_{i_n}} \dint \widehat{\mathbf{k}} \, \widehat{k}_{j_1}\dots\widehat{k}_{j_\ell}\Delta_{n-\ell}(\widehat{\mathbf{k}}\!\cdot\!\mathbf{x}), \end{equation} where \begin{equation} \label{eq:Delta_n} \Delta_{n}(x) = \begin{cases} \displaystyle \frac{1}{2(n-1)!} \frac{x^n}{|x|}, & n > 0, \\ \delta^{(n)}(x), & n \le 0, \end{cases} \end{equation} is the $n$th integral (derivative) of the Dirac delta function $\delta(x)$. The diffusion coefficients in \eqs~\eqref{eq:Dv_2b} and~\eqref{eq:Dvv_2b} are identical to the standard diffusion coefficients~\citetext{\citealt{Rosenbluth+1957}; see also~\citealt{Chavanis2013b}} for a zero-mass test particle in an infinite homogeneous medium, up to the usual ambiguity in the precise definition of the Coulomb logarithm. Plugging the diffusion coefficients into the Fokker--Planck equation~\eqref{eq:FP}, we obtain the (homogeneous) Landau equation~\citep{Landau1936} for a zero-mass test particle, \begin{equation} \label{eq:Landau} \dpd{P (\bv, t)}{t} = 2G^2 m_{\mathrm{p}} \sum_{ij} \dpd{}{v_i} \dint \mathbf{k} \dint \bv' \, \frac{k_i k_j}{k^4} \delta [\mathbf{k} \!\cdot\! (\bv - \bv')] F_{\mathrm{p}}(\bv') \! \dpd{}{v_j} \!\! P (\bv, t). \end{equation} See~\citet{Chavanis2013b} for the historical connection between this equation and the later treatments of~\citet{Chandrasekhar1942, Chandrasekhar1943} and~\citet{Rosenbluth+1957}. 
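The step from the second to the third lines of \eqs~\eqref{eq:Dv_2b} and~\eqref{eq:Dvv_2b} uses the unit-sphere angular integrals $\int\mathrm{d}\widehat{\mathbf{k}}\,\delta(\widehat{\mathbf{k}}\!\cdot\!\mathbf{x})=2\pi/|\mathbf{x}|$ and $\int\mathrm{d}\widehat{\mathbf{k}}\,|\widehat{\mathbf{k}}\!\cdot\!\mathbf{x}|=2\pi|\mathbf{x}|$. The second (non-singular) identity is easy to verify by Monte Carlo; a minimal Python sketch, with an arbitrarily chosen test vector $\mathbf{x}$, is
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# random unit vectors, uniformly distributed on the sphere
k = rng.normal(size=(200000, 3))
k /= np.linalg.norm(k, axis=1, keepdims=True)

x = np.array([0.3, -1.2, 0.7])                   # arbitrary test vector
estimate = 4.0 * np.pi * np.mean(np.abs(k @ x))  # Monte Carlo angular integral
exact = 2.0 * np.pi * np.linalg.norm(x)
print(estimate, exact)   # agree to a fraction of a percent
\end{verbatim}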
For a Maxwellian velocity distribution, \begin{equation} \label{eq:maxwell} F_{\mathrm{p}}(\bv)=\frac{\rho_\mathrm{p}}{{(2\pi\sigma^2)}^{3/2}} \, \mathrm{e}^{ - v^{2} / (2 \sigma^{2})}, \end{equation} with $v = |\bv|$, the integral expressions for the diffusion coefficients, \eqs~\eqref{eq:Dv_2b} and~\eqref{eq:Dvv_2b}, can be evaluated explicitly. The diffusion coefficients in the directions parallel and perpendicular to the test-particle velocity are~\citep{Binney+2008} \begin{align} \label{eq:Dpar} D[\Delta v_\parallel] = & -\frac{4\pi G^2 \rho_\mathrm{p} m_{\mathrm{p}} \log\Lambda}{\sigma^2} \mathbb{G}(X), \\ \label{eq:D2par} D[{(\Delta v_\parallel)}^2] = & \frac{4 \sqrt{2} \pi G^2 \rho_\mathrm{p} m_{\mathrm{p}} \log\Lambda}{\sigma} \frac{\mathbb{G}(X)}{X}, \\ \label{eq:D2per} D[{(\Delta\bv_\bot)}^2] = & \frac{4 \sqrt{2} \pi G^2 \rho_\mathrm{p} m_{\mathrm{p}} \log\Lambda}{\sigma} \bigg[\frac{\erf(X) - \mathbb{G}(X)}{X}\bigg], \end{align} where $X \equiv v/\sqrt{2}\sigma$ and \begin{equation} \label{eq:gdef} \mathbb{G}(X)\equiv \frac{1}{2X^2}\left[\erf(X)-\frac{2X}{\sqrt{\pi}}\mathrm{e}^{-X^2}\right]. \end{equation} The Cartesian diffusion coefficients are then \begin{align} \label{eq:Di_c} D_i = {} & \frac{v_i}{v} D[\Delta v_\parallel], \\ \label{eq:Dij_c} D_{ij} = {} & \frac{v_i v_j}{v^2}\big\{D[{(\Delta v_\parallel)}^2]- \textstyle{\frac{1}{2}} D[{(\Delta \bv_\bot)}^2]\big\} + \textstyle{\frac{1}{2}}\delta_{ij}D[{(\Delta \bv_\bot)}^2] . \end{align} Until now, we have considered a classical system composed of point-like particles. We now generalize this to a system of extended field particles, where each particle has a density profile $\rho_n(r) = m_{\mathrm{p}} W_\varepsilon(r)$ with a scale length $\varepsilon$ and ${ \!\int\! \mathrm{d} \mathbf{r}\, W_{\varepsilon} (|\mathbf{r}|) = 1 }$. The density fluctuations are now given by $\rho(\mathbf{r}, t) = m_{\mathrm{p}} \sum_n W_\varepsilon(|\mathbf{r} -\mathbf{r}_n - \bv_n t|) - \rho_\mathrm{p}$, and their correlation function is \begin{equation} \label{eq:R_soft} C_{\!\rho}(\mathbf{r}, t) = m_{\mathrm{p}}\dint \bv \! \dint \mathbf{r}^{\prime} \, W_\varepsilon(\mathbf{r} - \mathbf{r}^{\prime} - \bv t)W_\varepsilon(\mathbf{r}^{\prime}) F_{\mathrm{p}}(\bv). \end{equation} This approach is equivalent to using a softened version of Poisson's equation, \begin{equation} \label{eq:Phi_rho_soft} \Phi_{\varepsilon} (\mathbf{r} ,t) = -G \dint \mathbf{r}^{\prime} \dint \mathbf{r}^{\prime\prime} \frac{\rho(\mathbf{r}^{\prime\prime} , t)}{|\mathbf{r} - \mathbf{r}^{\prime}|} \, W_{\varepsilon} (|\mathbf{r}^{\prime} - \mathbf{r}^{\prime\prime}|), \end{equation} where $W_{\varepsilon}(r)$ is the softening kernel. As discussed in Section~\ref{sec:coulomb}, this softening cures the divergence in the Coulomb logarithm at small scales (large wavenumbers). The Fourier transform of the softened potential is $\widehat{\Phi}_{\varepsilon} (\mathbf{k}, \omega) = -4\pi G \, \widehat{\rho} (\mathbf{k}, \omega) \widehat{W}_{\varepsilon} (k)\, /k^{2}$. The diffusion coefficient is the same as in \eq~\eqref{eq:Dvv_2b}, but now the Coulomb logarithm is \begin{equation} \label{eq:logL_soft} \log\Lambda_\mathrm{soft} = \! \int_{k_{\min}}^{k_{\max}} \frac{\mathrm{d} k}{k} {|\widehat{W}_{\varepsilon} (k)|}^2. 
\end{equation} If we take the density kernel to be Gaussian, \begin{equation} W_{\varepsilon}(r)=\frac{1}{{(2\pi \varepsilon^{2})}^{3/2}}\mathrm{e}^{-\frac{1}{2}r^{2}/\varepsilon^{2}}, \quad { \widehat{W}_{\varepsilon} (\mathbf{k}) = \mathrm{e}^{-\frac{1}{2} k^2 \varepsilon^{2}} , } \end{equation} then we can let ${ k_{\max} \to \infty }$ and obtain \begin{equation} \label{eq:logL_soft_g} \!\log\Lambda_\mathrm{soft} = \textstyle{\frac{1}{2}} \Gamma (0, k_{\min}^{2} \varepsilon^{2} ), \end{equation} where ${\Gamma (n, x) = \int_x^\infty t^{n-1} \mathrm{e}^{-t} \mathrm{d} t}$ is the ``upper'' incomplete Gamma function. In the limit ${ k_{\min} \varepsilon \to 0 }$, this expression is equivalent to $\log\Lambda_\mathrm{soft}=- \log(k_{\min}\varepsilon)-\frac{1}{2}\gamma_E +\mbox{O}{(k_{\min}\varepsilon)}^{2} = \log(b_{\max}/\varepsilon)+\mbox{O}(1)$ with $\gamma_E$ the Euler constant. This result is consistent with the second of \eqs~\eqref{eq:coulomb}. For a Gaussian density kernel and a Maxwellian \ac{DF}, the density correlation function (eq.~\ref{eq:R_soft}) is \begin{equation} \label{eq:R_soft_m} C_{\!\rho}(\mathbf{r}, t) = \frac{m_{\mathrm{p}} \rho_{\rb}}{8\pi^{3/2} \varepsilon^3{\Big[1 + {(\sigma t / \sqrt{2} \varepsilon)}^2\Big]}^{3/2}} \exp\Bigg[-\frac{{(r/2\varepsilon)}^2}{1 + {(\sigma t / \sqrt{2} \varepsilon )}^2}\Bigg]. \end{equation} The force correlation function $\langle \mathbf{F}(\mathbf{r}, t)\cdot \mathbf{F}(\mathbf{r}^{\prime}, t^{\prime}) \rangle = C_F(\mathbf{r} - \mathbf{r}^{\prime}, t-t^{\prime})$ is given by \begin{equation} \label{eq:C_F} C_F(\mathbf{r}, t) = 4\pi G^2 \dint \mathbf{r}^{\prime} \frac{C_{\!\rho}(\mathbf{r}',t)}{|\mathbf{r} - \mathbf{r}^{\prime}|} = \frac{4 \pi G^2 m_{\mathrm{p}} \rho_{\rb}}{r} \erf \Bigg[\frac{r/2\varepsilon}{\sqrt{1 + {(\sigma t / \sqrt{2} \varepsilon )}^2}}\Bigg], \end{equation} which in the limit $\varepsilon \to 0$ asymptotes to $4 \pi G^2 m_{\mathrm{p}} \rho_{\rb} r^{-1} \erf \! \big[ r / (\sqrt{2} \sigma t) \big]$~\cite[][eq.~15]{Cohen1975}. \subsection{Relaxation by fuzzy dark matter} \label{sec:dc_fdm} In this section, we describe how the stochastic density fluctuations that arise inevitably in an \ac{FDM}\ halo lead to the diffusion of the velocity of a zero-mass test particle. The wavefunction ${ \psi(\mathbf{r}, t) }$ of the \ac{FDM}\ is governed by the Schr\"{o}dinger--Poisson system~\citep{Ruffini+1969} \begin{align} \label{eq:se} \mathrm{i} \hbar \dpd{}{t} \psi(\mathbf{r}, t) & = -\frac{\hbar^2}{2m_\rb}\nabla^2 \psi(\mathbf{r}, t) + m_\rb \Phi(\mathbf{r}, t) \psi(\mathbf{r}, t), \\ \label{eq:pe} \nabla^2 \Phi(\mathbf{r}, t) & = 4\pi G {|\psi(\mathbf{r}, t)|}^2. \end{align} Here, $m_\rb$ is the mass of the \ac{FDM}\ particle, ${ \Phi(\mathbf{r},t) }$ is the gravitational potential, and we have assumed that the wavefunction is normalized such that $|\psi(\mathbf{r},t)|^2$ is the mass density. To parallel our discussion of relaxation in a system of classical particles in the preceding subsection, we assume that the self-gravity of the \ac{FDM}\ can be ignored when considering the interaction of \ac{FDM}\ with a classical test particle. This assumption is similar to the Jeans swindle and is valid when the typical de~Broglie wavelength is much smaller than the scale of the system. 
In this case, the \ac{FDM}\ wavefunction can be expanded as a collection of plane waves, \begin{equation} \label{eq:psi} \psi(\mathbf{r}, t) = \dint \mathbf{k} \, \varphi(\mathbf{k}) \, \mathrm{e}^{\mathrm{i} \mathbf{k}\cdot\mathbf{r} - \mathrm{i} \omega(k) t}, \end{equation} where \begin{equation} \label{eq:omega} \omega(k) = \frac{\hbar k^2}{2m_\rb}. \end{equation} We assume that the ensemble averages of ${ \varphi(\mathbf{k}) }$ satisfy \begin{equation} \label{eq:A_corrI} \langle \varphi(\mathbf{k}) \rangle = 0, \;\;\; \langle \varphi(\mathbf{k}) \, \varphi(\mathbf{k}^{\prime}) \rangle = 0 \quad\mbox{for}\quad \mathbf{k}\not=\mathbf{k}^{\prime}. \end{equation} These equations are satisfied if each plane wave has a random phase, as is expected if they arrive in the vicinity of the test particle from large distances and different directions. We also assume that \begin{equation} \label{eq:A_corrII} \langle \varphi(\mathbf{k}) \, \varphi^{*} (\mathbf{k}^{\prime}) \rangle = f_k(\mathbf{k}) \, \delta(\mathbf{k} - \mathbf{k}^{\prime}), \end{equation} where ${ f_k (\mathbf{k}) }$ is a \ac{DF}\ defined such that the mean or ensemble-average mass density in the volume ${ \mathrm{d} \mathbf{k} }$ around $\mathbf{k}$ is $f_k(\mathbf{k})\mathrm{d}\mathbf{k}$. These assumptions are only valid when the typical de~Broglie angular wavelength $\lambdabar_\sigma$ is much larger than the typical distance between \ac{FDM}\ particles, $d = {(m_\rb/\rho_{\rb})}^{1/3}$. Therefore, the results in the remainder of this section do not reduce to the classical diffusion coefficients in the classical limit where $\hbar \to 0$. In Appendix~\ref{sec:classical_lim}, we generalize our derivation to include the classical limit. There, we show that $\lambdabar_\sigma > d$ when the \ac{FDM}\ particle mass exceeds $m_\mathrm{s} \approx {(\rho_{\rb} \hbar^3/\sigma^3)}^{1/4}$, a few tens of $\,\mathrm{eV}$ in a typical galaxy. For $m_\rb \gg m_\mathrm{s}$ the diffusion coefficients become the classical ones, although the system itself is not yet in the classical limit. Classical behavior requires that the position uncertainty after a dynamical time $T_\mathrm{d}$ be small compared to the distance between particles, or $m_\rb \gg m_\mathrm{c}$, where $m_\mathrm{c}\approx {(\hbar T_\mathrm{d}/2)}^{3/5}\rho_{\rb}^{2/5}$, about $10^{16} \,\mathrm{eV}$ in a typical galaxy. For any reasonable dark-matter particle mass, the ``classical'' contribution to the relaxation is of no importance. Note that although $\hbar$ is present in the wave function and the dispersion relation (eqs.~\ref{eq:psi} and~\ref{eq:omega}) and thus will appear in many of the formulas below, the derivations that follow can be understood entirely as a classical field theory: the only trace of the quantum nature of the waves is in the quadratic dispersion relation (see eq.~\ref{eq:omega}), which is rare in classical systems. The density fluctuations of the \ac{FDM}\ field are given by ${\rho(\mathbf{r}, t) = {|\psi(\mathbf{r},t)|}^2 - \rho_{\rb} }$, where the mean \ac{FDM}\ density is ${ \rho_{\rb} = \langle {|\psi(\mathbf{r},t)|}^2 \rangle = \!\!\int\!\mathrm{d} \mathbf{k} \, f_k(\mathbf{k}) }$. From \eq~\eqref{eq:psi}, we obtain \begin{equation} \label{eq:rho_fdm} \rho(\mathbf{r}, t) = \dint \mathbf{k} \dint \mathbf{k}^{\prime} \varphi(\mathbf{k}) \, \varphi^*(\mathbf{k}^{\prime}) \, \mathrm{e}^{\mathrm{i} (\mathbf{k} - \mathbf{k}^{\prime})\cdot\mathbf{r} - \mathrm{i} [\omega(k)-\omega(k^{\prime})]t} - \rho_{\rb}. 
\end{equation} The density correlation function and its Fourier transform are \begin{align} \label{eq:R_fdm} C_{\!\rho}(\mathbf{r}, t) = {} & \dint \mathbf{k} \dint \mathbf{k}^{\prime} f_k(\mathbf{k}) f_k(\mathbf{k}^{\prime}) \, \mathrm{e}^{\mathrm{i}(\mathbf{k}-\mathbf{k}^{\prime})\cdot\mathbf{r} - \mathrm{i} [\omega(k)-\omega(k^{\prime})]t}, \\ \label{eq:Rhat_fdm} \widehat{C}_{\!\rho} (\mathbf{k}, \omega) = {} & {(2\pi)}^4 \dint \mathbf{k}^{\prime} \dint \mathbf{k}'' f_k(\mathbf{k}) f_k(\mathbf{k}'') \delta (\mathbf{k} - \mathbf{k}^{\prime} + \mathbf{k}'') \, \delta \!\left[\omega - \omega(k^{\prime}) + \omega(k^{\prime\prime})\right]. \end{align} Here, we used \eqs~\eqref{eq:A_corrI} and~\eqref{eq:A_corrII} to obtain \begin{align} \label{eq:A_4pt} \langle \varphi(\mathbf{k}_1)\varphi^*(\mathbf{k}_2)\varphi^*(\mathbf{k}_3)\varphi(\mathbf{k}_4) \rangle = {} & f_k(\mathbf{k}_1)f_k(\mathbf{k}_3)\delta(\mathbf{k}_1-\mathbf{k}_2)\delta(\mathbf{k}_3-\mathbf{k}_4) \nonumber \\ & + f_k(\mathbf{k}_1)f_k(\mathbf{k}_2)\delta(\mathbf{k}_1-\mathbf{k}_3)\delta(\mathbf{k}_2-\mathbf{k}_4). \end{align} Each plane wave travels with velocity ${ \bv = \hbar \mathbf{k} /m_\rb }$, and its velocity \ac{DF}\ is given by ${ F_{\rb} (\bv) \mathrm{d} \bv = f_k (\mathbf{k}) \mathrm{d} \mathbf{k} }$. As a result, \eq~\eqref{eq:Rhat_fdm} can be written as \begin{equation} \label{eq:Rhat_fdm_v} \widehat{C}_{\!\rho} (\mathbf{k}, \omega) = {(2\pi)}^4 \dint \bv_1 \dint \bv_2 \,F_{\rb} (\bv_1) F_{\rb} (\bv_2) \delta\bigg( \mathbf{k} - \frac{2m_\rb}{\hbar} \mathbf{v}_{\mathrm{d}} \bigg)\, \delta \bigg(\omega - \frac{2 m_\rb}{\hbar} \mathbf{v}_{\mathrm{c}} \!\cdot\! \mathbf{v}_{\mathrm{d}} \bigg), \end{equation} in which we have introduced the velocities ${ \mathbf{v}_{\mathrm{c}} = (\bv_1 + \bv_2)/2}$ and ${ \mathbf{v}_{\mathrm{d}} = (\bv_1 - \bv_2)/2}$. Note that in this case the spatial correlation is associated with the velocity difference, while in the classical case it is associated with the distance that a field particle travels over time $t$ (eq.~\ref{eq:Bft}). Therefore, truncating the integrals at large and small scales is equivalent to truncating the velocity difference. Using \eqs~\eqref{eq:Dvv_rho} and~\eqref{eq:Rhat_fdm_v}, we obtain the diffusion coefficient for the test particle \begin{align} \label{eq:Dvv_fdmII} D_{ij} = {} & \frac{32\pi^3G^2\hbar^3}{m_\rb^3} \dint \mathbf{v}_{\mathrm{d}} \dint \mathbf{v}_{\mathrm{c}} \frac{v_{\mathrm{d}}^{i} v_{\mathrm{d}}^{j}}{v_{\mathrm{d}}^5} F_{\rb} (\mathbf{v}_{\mathrm{c}} \!+\! \mathbf{v}_{\mathrm{d}}) F_{\rb} (\mathbf{v}_{\mathrm{c}} \!-\! \mathbf{v}_{\mathrm{d}}) \, \delta \left( \widehat{\mathbf{v}}_{\mathrm{d}} \!\cdot\! \bv - \widehat{\mathbf{v}}_{\mathrm{d}} \!\cdot\! \mathbf{v}_{\mathrm{c}} \right) \nonumber \\ = {} & \frac{16\pi^3G^2\hbar^3}{m_\rb^3} \frac{\partial^2}{\partial v_i \partial v_j} \dint \mathbf{v}_{\mathrm{d}} \dint \mathbf{v}_{\mathrm{c}} \, F_{\rb} (\mathbf{v}_{\mathrm{c}} \!+\! \mathbf{v}_{\mathrm{d}}) F_{\rb} (\mathbf{v}_{\mathrm{c}} \!-\! \mathbf{v}_{\mathrm{d}}) \frac{|\widehat{\mathbf{v}}_{\mathrm{d}} \!\cdot\! \bv - \widehat{\mathbf{v}}_{\mathrm{d}} \!\cdot\! \mathbf{v}_{\mathrm{c}} |}{v_{\mathrm{d}}^3}, \end{align} where $\widehat{\mathbf{v}}_{\mathrm{d}}$ is the unit vector in the direction of $\mathbf{v}_{\mathrm{d}}$. The diffusion coefficient $D_i$ can be obtained from $D_{ij}$ using the fluctuation-dissipation relation (eq.~\ref{eq:Dv_fd}). 
The integrals over $\mathbf{v}_{\mathrm{d}}$ diverge logarithmically as ${ |\mathbf{v}_{\mathrm{d}}|\to 0 }$. Therefore, we cut off the integration when ${ |\mathbf{v}_{\mathrm{d}}|<v_{d,\min}}$. This cutoff arises naturally from the wave nature of \ac{FDM}\@: because $v=\hbar k/m_\rb$ and we ignore wavenumbers smaller than $k_{\min}=1/b_{\max}$, we should also ignore velocities $v \lesssim \hbar/(m_\rbb_{\max})$. More precisely, we set $v_{d,\min} =\hbar/(2m_\rb b_{\max})=\sigma \lambdabar_\sigma/(2b_{\max})$, where ${ \lambdabar_\sigma = \hbar/(m_\rb\sigma) }$ is the typical de~Broglie angular wavelength. Furthermore, the integral is dominated by the region ${ |\mathbf{v}_{\mathrm{d}}|\ll |\mathbf{v}_{\mathrm{c}}| }$, so we may also cut off the integration when ${ |\mathbf{v}_{\mathrm{d}}|>v_{d,\max}}$, where ${ v_{d,\max}\lesssim \sigma }$, and approximate $F_{\rb}(\mathbf{v}_{\mathrm{c}} \pm \mathbf{v}_{\mathrm{d}})$ by $F_{\rb}(\mathbf{v}_{\mathrm{c}})$. If we write $\mathbf{v}_{\mathrm{d}}\equiv v_{\mathrm{d}} \widehat{\mathbf{k}}$, the first of \eqs~\eqref{eq:Dvv_fdmII} simplifies to \begin{align} \label{eq:Dvv_logL} D_{ij} &= {}\frac{32\pi^3G^2\hbar^3}{m_\rb^3} \log\Lambda_\mathrm{FDM} \dint \widehat{\mathbf{k}}\, \widehat{k}_{i} \widehat{k}_{j}\dint \mathbf{v}^{\prime} F_{\rb}^2(\bv') \delta [\widehat{\mathbf{k}} \!\cdot\! (\bv - \mathbf{v}^{\prime}) ] \nonumber \\ & =4 G^2 m_\mathrm{eff}\,\log\Lambda_\mathrm{FDM} \dint \widehat{\mathbf{k}}\,\widehat{k}_{i} \widehat{k}_{j}\dint \mathbf{v}^{\prime} \delta [\widehat{\mathbf{k}} \!\cdot\! (\bv - \mathbf{v}^{\prime}) ]F_{\mathrm{eff}}(\mathbf{v}^{\prime}) , \end{align} and \begin{align} \label{eq:Dv_logL} D_{i} &= 4\pi G^2 m_\mathrm{eff} \log\Lambda_\mathrm{FDM}\dpd{}{v_i} \dint \mathbf{v}^{\prime} \frac{F_{\mathrm{eff}}(\mathbf{v}^{\prime})}{|\bv - \mathbf{v}^{\prime}|}. \end{align} Here, $\log\Lambda_\mathrm{FDM} = \log(v_{d,\max}/v_{d,\min})=\log(2b_{\max}/\lambdabar_\sigma)+\mbox{O}(1)$, consistent with the third of \eqs~\eqref{eq:coulomb}. We have also defined a new, effective \ac{DF}, \begin{equation} F_{\mathrm{eff}}(\bv)=\frac{\dint \bv\,F_{\rb}(\bv)}{\dint \bv\,F_{\rb}^2(\bv)}F_{\rb}^2(\bv), \end{equation} normalized such that $\ \dint \bv\,F_{\mathrm{eff}}(\bv)=\rho_{\rb}$, and an effective mass, \begin{equation} \label{eq:meffI} m_\mathrm{eff} = \frac{{(2\pi\hbar)}^3\,\dint \bv\,F_{\rb}^2(\bv)}{m_\rb^3\,\,\dint \bv\,F_{\rb}(\bv)}. \end{equation} The diffusion coefficients in \eqs~\eqref{eq:Dv_logL} and~\eqref{eq:Dvv_logL} are identical to the diffusion coefficients in \eqs~\eqref{eq:Dv_2b} and~\eqref{eq:Dvv_2b} for classical particles, except that the particle mass $m_{\mathrm{p}}$ is replaced by the effective mass $m_\mathrm{eff}$, the velocity \ac{DF}\ $F_{\rb}(\bv)$ is replaced by the effective \ac{DF}\ $F_{\mathrm{eff}}(\bv)$, and the Coulomb logarithm $\log\Lambda$ is modified to $\log\Lambda_\mathrm{FDM}$. In effect, the halo acts as if it were composed of quasiparticles with a mass $m_\mathrm{eff}$ that depends on the local halo density and velocity distribution. These results provide a simple recipe for computing the diffusion coefficients for a zero-mass test particle in an \ac{FDM}\ halo. For the Maxwellian velocity distribution (eq.~\ref{eq:maxwell}), the integrations in \eqs~\eqref{eq:Dvv_fdmII}--\eqref{eq:meffI} can be carried out explicitly. The effective \ac{DF}\ is a Maxwellian with the same density and a velocity dispersion $\sigma_\mathrm{eff}=\sigma/\sqrt{2}$. 
The effective mass is \begin{equation} \label{eq:meff} m_\mathrm{eff} = \frac{\pi^{3/2} \hbar^3 \rho_{\rb}}{m_\rb^3 \sigma^3} = \rho_{\rb}\, {\big(f\lambda_\sigma\big)}^3, \end{equation} where ${ \lambda_\sigma = h/(m_\rb\sigma) }$ is the typical de~Broglie wavelength and $f=1/(2\sqrt{\pi})=0.282$. Moreover, by evaluating the integral in \eq~\eqref{eq:Dvv_fdmII} for a Maxwellian, we can sharpen our estimate of the Coulomb logarithm. We find (cf.~eq.~\ref{eq:logL_soft_g}) \begin{align} \label{eq:logLfdm} \log\Lambda_\mathrm{FDM} &=\int_{v_{d,\min}}^\infty \frac{\mathrm{d} v}{v} \mathrm{e}^{-v^2/\sigma^2} ={\textstyle \frac{1}{2}}\Gamma(0,v_{d,\min}^2/\sigma^2) \nonumber \\ &=\log(\sigma / v_{d,\min}) - {\textstyle \frac{1}{2}}\gamma_E + \mbox{O}{(v_{d,\min}/\sigma)}^2. \end{align} Substituting $v_{d,\min} \simeq \sigma \lambdabar_\sigma /(2b_{\max})$ (see the paragraph preceding eq.~\ref{eq:Dvv_logL}), \begin{equation} \label{eq:logLb} \log\Lambda_\mathrm{FDM} = {\textstyle \frac{1}{2}}\Gamma(0,{\textstyle\frac{1}{4}}\lambdabar_\sigma^2/b_{\max}^2) = \log(2b_{\max}/\lambdabar_\sigma) - {\textstyle\frac{1}{2}}\gamma_E+\mbox{O}{(\lambdabar_\sigma/b_{\max})}^{2}, \end{equation} which is equivalent to the classical case with a softening scale $\varepsilon = \lambdabar_\sigma/2$ (cf.~eq.~\ref{eq:logL_soft_g}). As the density correlation function determines the diffusion coefficients (cf.~eqs.~\ref{eq:Dv_rho} and~\ref{eq:Dvv_rho}), it is instructive to compare the \ac{FDM}\ correlation function to a classical one. For a Maxwellian velocity distribution, the density correlation function (eq.~\ref{eq:R_fdm}) is given by \begin{equation} \label{eq:R_fdm_m} C_{\!\rho}(\mathbf{r}, t) = \frac{\rho_{\rb}^2}{{\big[1 + {(\sigma t / \lambdabar_\sigma)}^2\big]}^{3/2}} \exp\bigg[-\frac{{(r / \lambdabar_\sigma)}^2}{1 + {(\sigma t / \lambdabar_\sigma)}^2}\bigg]. \end{equation} This result can be compared with numerical simulations of \ac{FDM}\ halos~\citep[e.g.,][]{Lin+2018}. Comparing this result with \eq~\eqref{eq:R_soft_m}, we see that the density correlation function is the same as that of a classical system having a Maxwellian \ac{DF}\ with velocity dispersion $\sigma_\mathrm{p} = \sigma_\mathrm{eff}$, a Gaussian density kernel with a softening $\varepsilon = \lambdabar_\sigma/2$, and individual particle mass $m_{\mathrm{p}} = m_\mathrm{eff}$. Note that uncertainties about the Coulomb logarithm are absent from equation (\ref{eq:R_fdm_m}). These results verify the qualitative picture of relaxation in \ac{FDM}\ halos presented by~\citet{Hui+2017}, who assumed that the diffusion coefficients were the same as those of a halo of classical particles with the same velocity dispersion and effective mass $\rho_{\rb}{(f_\mathrm{H}\lambda_\sigma)}^3$, with $f_\mathrm{H}\simeq 0.5$. The calculations in this section show that the actual value of $f_\mathrm{H}$ is between 0.224 and 0.399, depending on the velocity of the test particle and on which of the diffusion coefficients is being evaluated; thus, the diffusion coefficients and relaxation rates are between 2 and 11 times smaller than those assumed by~\citet{Hui+2017}. The formulation here, which defines the effective mass for a distribution with dispersion $\sigma_\mathrm{eff}=\sigma/\sqrt{2}$, is both simpler and more accurate. 
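Both forms of the effective mass, and the value of the coefficient $f$, are easy to check numerically. A minimal Python sketch (working in units where $\rho_{\rb}=\sigma=\hbar=m_\rb=1$; an illustration only) compares the integral definition in \eq~\eqref{eq:meffI} with the closed form in \eq~\eqref{eq:meff} and extracts $f$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

rho, sigma, hbar, m_b = 1.0, 1.0, 1.0, 1.0   # working units

def F(v):
    # isotropic Maxwellian DF, normalized so that its velocity integral is rho
    return rho / (2.0 * np.pi * sigma**2)**1.5 * np.exp(-v**2 / (2.0 * sigma**2))

intF  = quad(lambda v: 4.0 * np.pi * v**2 * F(v),    0.0, np.inf)[0]  # = rho
intF2 = quad(lambda v: 4.0 * np.pi * v**2 * F(v)**2, 0.0, np.inf)[0]  # = rho^2/(8 pi^1.5 sigma^3)

m_eff_integral = (2.0 * np.pi * hbar)**3 * intF2 / (m_b**3 * intF)    # eq. (meffI)
m_eff_closed   = np.pi**1.5 * hbar**3 * rho / (m_b**3 * sigma**3)     # eq. (meff)

lam = 2.0 * np.pi * hbar / (m_b * sigma)        # de Broglie wavelength h/(m_b sigma)
f   = (m_eff_closed / rho)**(1.0 / 3.0) / lam   # coefficient in m_eff = rho (f lam)^3

print(m_eff_integral, m_eff_closed)             # both ~ 5.568 = pi^(3/2)
print(f, 1.0 / (2.0 * np.sqrt(np.pi)))          # both ~ 0.282
\end{verbatim}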
To estimate $m_\mathrm{eff}$, one can assume that the density is related to the radius $r$ and the one-dimensional velocity dispersion $\sigma$ or the circular speed $v_{\mathrm{c}}$ as in a singular isothermal sphere, \begin{equation} \label{eq:sis} \rho_{\rb}(r)=\frac{\sigma^2}{2\pi Gr^2}=\frac{v_{\mathrm{c}}^2}{4\pi Gr^2}, \end{equation} which leads to \begin{align} \label{eq:meff_approx} m_\mathrm{eff} = \frac{\pi^{1/2} \hbar^3}{2^{1/2} G m_\rb^3v_{\mathrm{c}} r^2} = 1.03\times 10^7 M_\odot \, \power{\frac{r}{1 \,\mathrm{kpc}}}{-2} \power{\frac{m_\rb}{10^{-22} \,\mathrm{eV}}}{-3} \power{\frac{v_{\mathrm{c}}}{200\,\mathrm{km\ s}^{-1}}}{-1}, \end{align} and the typical de~Broglie wavelength is \begin{align} \label{eq:lsig} \lambda_\sigma = \frac{h}{m_\rb \sigma} = 0.85 \,\mathrm{kpc} \power{\frac{m_\rb}{10^{-22} \,\mathrm{eV}}}{-1} \! \power{\frac{v_{\mathrm{c}}}{200 \,\mathrm{km\ s}^{-1}}}{-1}. \end{align} \section{Dynamical friction} \label{sec:dyf} In the previous section, we calculated the stochastic velocity changes of a massless particle moving through a homogeneous \ac{FDM}\ background. In this section, we consider the additional contribution to the velocity change for a particle of non-zero mass. A massive particle moving through the \ac{FDM}\ field creates a gravitational wake behind it that induces a frictional force proportional to the mass of the particle. We will call this force dynamical friction; it is distinct from the velocity drift described by the diffusion coefficient $D_i$ or $D[\Delta v_\parallel]$ (eq.~\ref{eq:Dv_logL}), which is independent of the test star's mass.\footnote{In the literature, it is common to define dynamical friction as the sum of both drift terms.} The frictional force on a point object of mass $m_\mathrm{t}$ traveling at velocity $\bv_{\mathrm{t}}$ through a plane wave with velocity $\bv=\hbar\mathbf{k}/m_\rb$ is~\citep[][see also~\cite{Lora+2012}]{Hui+2017} \begin{equation} \label{eq:f_fric} \mathbf{F_\mathrm{f}} = -4\pi G^2 m^2_\mathrm{t}\rho_\mathrm{b} \frac{\bv_\mathrm{t}-\bv}{{|\bv_\mathrm{t}-\bv|}^3} \, C\left(\beta,\gamma\right), \end{equation} where $C(\beta, \gamma)$ is defined in~\citet[][\eq~D7]{Hui+2017} and \begin{align} \label{eq:beta} \beta = \frac{ Gm_\rb m_\mathrm{t}}{\hbar |\bv_\mathrm{t}-\bv|}, \quad \gamma = \frac{m_\rbb_{\max}}{\hbar}|\bv_\mathrm{t}-\bv|. \end{align} Here, $b_{\max}$ is some large radius around $m_\mathrm{t}$, beyond which we assume that the gravitational force from the medium can be ignored. Integrating \eq~\eqref{eq:f_fric} over the \ac{DF}\ ${ F_{\rb} (\bv) }$, we obtain the rate of velocity drift due to dynamical friction, \begin{equation} \label{eq:Dv_fric} D_\mathrm{f}[\Delta v_i] = {} - 4\pi G^2 m_\mathrm{t} \dint \mathbf{v}^{\prime} \frac{v_{\mathrm{t},i} - v^{\prime}_i}{{|\bv_\mathrm{t}-\mathbf{v}^{\prime}|}^3} \, F_{\rb} (\mathbf{v}^{\prime})\, C\left( \frac{G m_\rb m_\mathrm{t}}{\hbar|\bv_\mathrm{t}-\mathbf{v}^{\prime}|} , \frac{m_\rbb_{\max}}{\hbar}|\bv_\mathrm{t}-\mathbf{v}^{\prime}|\right). \end{equation} To make an approximate estimate of the size of the quantities $\beta$ and $\gamma$, we replace $|\bv_\mathrm{t}-\mathbf{v}^{\prime}|$ by the velocity dispersion $\sigma$. 
Then, \begin{equation} \gamma\approx \frac{m_\rb b_{\max}}{\hbar}\sigma= \frac{b_{\max}}{\lambdabar_\sigma}={\textstyle\frac{1}{2}}\Lambda_\mathrm{FDM} \end{equation} where as usual $\lambdabar_\sigma=\hbar/(m_\rb\sigma)$ is the typical de~Broglie angular wavelength, and $\Lambda_\mathrm{FDM}$ is the Coulomb factor defined in \eq~\eqref{eq:coulomb}. Similarly, \begin{equation} \beta\simeq \frac{Gm_\rb m_\mathrm{t}}{\hbar\sigma}=\frac{b_{90}}{\lambdabar_\sigma}, \end{equation} where $b_{90}=Gm_\mathrm{t}/\sigma^2$ is the $90^\circ$ deflection radius. In the case $\beta\gg1$, the de~Broglie wavelength is negligible, and we recover the classical formula for dynamical friction (see~\citealt{Hui+2017}). When $\beta\ll1$ we use the result~\citep{Hui+2017} \begin{equation} C(\beta,\gamma)=\mathbb{W}(\gamma)+\mbox{O}(\beta), \end{equation} where \begin{equation} \mathbb{W}(x)\equiv \mbox{Cin}(2x)+\frac{\sin(2x)}{2x}-1, \end{equation} and $\mbox{Cin}(x)\equiv\int_0^x (1-\cos t)\mathrm{d} t/t$ is a cosine integral. Our approximation of a homogeneous medium is only valid if the de~Broglie wavelength is small compared to the system size, so $\gamma\gg1$ and we can use the asymptotic expansion $\mathbb{W}(x)\to \log(2 x) +\gamma_E-1 +\mbox{O}(1/x)$. Thus, $C(\beta,\gamma)\simeq \log\Lambda_\mathrm{FDM}+[\log(|\bv_\mathrm{t}-\mathbf{v}^{\prime}|/\sigma) + \gamma_E -1]$. We may drop the term in square brackets, which is of order unity and hence small compared to the Coulomb logarithm, and write \begin{align} \label{eq:Dv_fric_a} D_\mathrm{f}[\Delta v_i] \simeq {} & - 4\pi G^2 m_\mathrm{t} \!\log\Lambda_\mathrm{FDM} \dint \mathbf{v}^{\prime} \frac{v_i - v^{\prime}_i}{{|\bv - \mathbf{v}^{\prime}|}^3} F_{\rb} (\mathbf{v}^{\prime}) \nonumber \\ = {} & 4\pi G^2 m_\mathrm{t} \!\log\Lambda_\mathrm{FDM}\dpd{}{v_i} \dint \mathbf{v}^{\prime} \frac{1}{|\bv-\mathbf{v}^{\prime}|}F_{\rb} (\mathbf{v}^{\prime}). \end{align} \Eq~\eqref{eq:Dv_fric_a} is identical to the classical formula for dynamical friction~\citep[e.g.,][]{Tremaine+1984,Binney+2008} except that the Coulomb logarithm is defined by the ratio of the size of the system to the de~Broglie wavelength, rather than to the $90^\circ$ deflection radius (i.e., $\Lambda_\mathrm{cl}$ in eq.~\ref{eq:coulomb} is replaced by $\Lambda_\mathrm{FDM}$). Moreover $D_\mathrm{f}[\Delta v_i]$ is identical to the drift coefficient for a test particle $D_i=D[\Delta v_i]$ (eq.~\ref{eq:Dv_2b}), except that the particle mass $m_{\mathrm{p}}$ is replaced by the massive body's mass $m_\mathrm{t}$. For the Maxwellian velocity distribution (eq.~\ref{eq:maxwell}), $D_\mathrm{f}[\Delta v_i]=(v_i/v)D_\mathrm{f}[\Delta v_\parallel]$, where \begin{equation} \label{eq:Ddf} D_\mathrm{f}[\Delta v_\parallel] = -\frac{4\pi G^2 \rho_{\rb} m_\mathrm{t} \log\Lambda_\mathrm{FDM}}{\sigma^2} \,\mathbb{G}(X) , \end{equation} with $\mathbb{G}(X)$ defined in \eq~\eqref{eq:gdef}. \section{Mass segregation} \label{sec:ms} In most current \ac{CDM}\ models, the dark matter consists of elementary particles whose mass is negligible compared to that of any astrophysical object. Even if the \ac{CDM}\ particles are macroscopic objects, say of $1$--$30\, M_\odot$, they are much less massive than objects such as supermassive black holes or globular clusters. Therefore, these objects will inspiral toward the center of a \ac{CDM}\ halo due to dynamical friction~\citep[e.g.,][]{Tremaine+1975, Begelman+1980}. The situation is quite different in an \ac{FDM}\ halo. 
As shown in earlier sections, \ac{FDM}\ behaves as if it were composed of quasiparticles with an effective mass given by \eq~\eqref{eq:meff_approx}. Thus, although the massive object still loses orbital energy by dynamical friction, it can also gain energy by gravitational interactions with the quasiparticles. We expect that the inspiral of the massive object will stall if it reaches energy equipartition with the quasiparticles. For similar reasons, individual stars will tend to gain energy from interactions with the \ac{FDM}\ quasiparticles, and this process can lead to the expansion of a stellar system embedded in the halo. To explore these processes quantitatively, we use a simple model of a galaxy containing only \ac{FDM}\@, with density $\rho_{\rb}$ and a Maxwellian velocity distribution with dispersion $\sigma$ (eq.~\ref{eq:maxwell}). Then, we can combine \eqs~\eqref{eq:Dvv_logL}--\eqref{eq:Dv_logL} and~\eqref{eq:Ddf} to obtain the diffusion coefficients for a point mass $m_\mathrm{t}$, \begin{align} \label{eq:Dpar_f} D[\Delta v_\parallel] = & -\frac{4\pi G^2 \rho_{\rb} m_\mathrm{eff} \log\Lambda_\mathrm{FDM}}{\sigma_\mathrm{eff}^2} \big[\mathbb{G}(X_\mathrm{eff}) +\mu_\mathrm{eff}\, \mathbb{G}(X)\big], \\ \label{eq:D2para} D[{(\Delta v_\parallel)}^2] = & \frac{4 \sqrt{2} \pi G^2 \rho_{\rb} m_\mathrm{eff} \log\Lambda_\mathrm{FDM}}{\sigma_\mathrm{eff}} \frac{\mathbb{G}(X_\mathrm{eff})}{X_\mathrm{eff}}, \\ \label{eq:D2pera} D[{(\Delta \bv_\bot)}^2] = & \frac{4 \sqrt{2} \pi G^2 \rho_{\rb} m_\mathrm{eff} \log\Lambda_\mathrm{FDM}}{\sigma_\mathrm{eff}} \frac{\erf(X_\mathrm{eff}) - \mathbb{G}(X_\mathrm{eff})}{X_\mathrm{eff}}, \end{align} where $\mathbb{G}(X)$ is defined in \eq~\eqref{eq:gdef}, $\sigma_\mathrm{eff}^2/\sigma^2 = 1/2$, $X = v/(\sqrt{2}\sigma)$, $X_\mathrm{eff} = v/(\sqrt{2}\sigma_\mathrm{eff}) = v/\sigma$, and \begin{equation} \label{eq:mu} \mu_\mathrm{eff} \equiv \frac{m_\mathrm{t}}{m_\mathrm{eff}}\frac{\sigma_\mathrm{eff}^2}{\sigma^2} = \frac{m_\mathrm{t}}{2m_\mathrm{eff}} \end{equation} is the effective mass ratio. Note the factor of 2 in the definition of $\mu_\mathrm{eff}$, and note also that the classical diffusion coefficient analogous to \eq~\eqref{eq:Dpar_f} for a halo composed of particles of mass $m_{\mathrm{p}}$ is \begin{equation} \label{eq:df_cdm} D[\Delta v_\parallel] = -\frac{4\pi G^2 \rho_\mathrm{p} m_{\mathrm{p}} \log\Lambda_\mathrm{cl}}{\sigma^2} (1+\mu_\mathrm{cl})\mathbb{G}(X), \end{equation} where $\mu_\mathrm{cl}=m_\mathrm{t}/m_{\mathrm{p}}$ (without a factor of 2). We stress again that the diffusion coefficients in \eqs~\eqref{eq:Dpar_f}--\eqref{eq:D2pera} do not go to the classical ones in the limit $\hbar \to 0$ (or $m_\mathrm{eff} \to 0$). This incompleteness is related to our simplifying assumption about the wave function in Section~\ref{sec:dc_fdm} (see discussion after eq.~\ref{eq:A_corrII}). In Appendix~\ref{sec:classical_lim} we extend the derivation of Section~\ref{sec:dc_fdm} to include the classical limit. There, we show that when $m_\rb \gg m_\mathrm{eff}$, the relaxation becomes the classical one (i.e., as in a system of classical particles of mass $m_\rb$ with velocity dispersion $\sigma$). This ``classical'' correction is negligible for the \ac{FDM}\ mass considered here, $m_\rb < 10^{-20} \,\mathrm{eV}$, for which we can expect that the dynamics will deviate from the standard \ac{CDM}\ case. 
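For reference, the diffusion coefficients in \eqs~\eqref{eq:Dpar_f}--\eqref{eq:D2pera} are straightforward to evaluate numerically. The Python sketch below does so in SI units; it assumes the standard Chandrasekhar form $\mathbb{G}(X)=\big[\erf(X)-2X\mathrm{e}^{-X^2}/\sqrt{\pi}\big]/(2X^2)$ for the function defined in \eq~\eqref{eq:gdef} (not repeated here), and treats $\log\Lambda_\mathrm{FDM}$ as an externally supplied parameter; it is intended only as an illustrative sketch, not production code.
\begin{verbatim}
import numpy as np
from scipy.special import erf

def G(X):
    """Assumed Chandrasekhar-type form of the function in eq. (gdef):
       G(X) = [erf(X) - 2 X exp(-X^2)/sqrt(pi)] / (2 X^2)."""
    return (erf(X) - 2.0 * X * np.exp(-X**2) / np.sqrt(np.pi)) / (2.0 * X**2)

def diffusion_coeffs(v, rho_b, sigma, m_eff, m_t, logL, G_N=6.674e-11):
    """Drift and diffusion coefficients of eqs. (Dpar_f)-(D2pera), SI units.
       Returns (D[dv_par], D[(dv_par)^2], D[(dv_perp)^2])."""
    sigma_eff = sigma / np.sqrt(2.0)
    X = v / (np.sqrt(2.0) * sigma)        # argument of the friction (m_t) term
    X_eff = v / sigma                     # = v / (sqrt(2) sigma_eff)
    mu_eff = m_t / (2.0 * m_eff)          # effective mass ratio, eq. (mu)
    pref = 4.0 * np.pi * G_N**2 * rho_b * m_eff * logL
    D_par = -pref / sigma_eff**2 * (G(X_eff) + mu_eff * G(X))
    D_par2 = np.sqrt(2.0) * pref / sigma_eff * G(X_eff) / X_eff
    D_perp2 = np.sqrt(2.0) * pref / sigma_eff * (erf(X_eff) - G(X_eff)) / X_eff
    return D_par, D_par2, D_perp2
\end{verbatim}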
The specific energy diffusion coefficients are \begin{align} \label{eq:DE_fdm_m} D[\Delta E] = {} & vD[\Delta v_\parallel] + \textstyle{\frac{1}{2}} D[{(\Delta v_\parallel)}^2] + \textstyle{\frac{1}{2}} D[{(\Delta \bv_\bot)}^2] \nonumber \\ = {} & \frac{4 \sqrt{2\pi} G^2 \rho_{\rb} m_\mathrm{eff} \log\Lambda_\mathrm{FDM}}{\sigma_\mathrm{eff}} \big[ \exp(-X_\mathrm{eff}^2) - \mu_\mathrm{eff} \sqrt{\pi} X_\mathrm{eff} \mathbb{G}(X)\big] \nonumber \\ = {} & \frac{8 \sqrt{\pi} G^2\rhobm_\mathrm{eff}\log\Lambda_\mathrm{FDM}}{\sigma}\, \exp(-v^2/\sigma^2) \big[1- \mu_\mathrm{eff} K(v/\sigma)\big], \end{align} and \begin{equation} \label{eq:DEE_fdm_m} D[{(\Delta E)}^2] = v^2D[{(\Delta v_\parallel)}^2] = 8 \sqrt{2} \pi G^2 \rho_{\rb} m_\mathrm{eff} \sigma_\mathrm{eff} \log\Lambda_\mathrm{FDM}X_\mathrm{eff} \mathbb{G}(X_\mathrm{eff}), \end{equation} where we defined the dimensionless function \begin{equation} \label{eq:Kdef} K(x)\equiv \frac{\sqrt{\pi}}{x}\mathrm{e}^{x^2}\erf(x/\sqrt{2}) - \sqrt{2}\mathrm{e}^{x^2/2}. \end{equation} The mean change in energy (eq.~\ref{eq:DE_fdm_m}) arises from the competition between two processes: (i) diffusion (``heating''), a term resulting from the potential fluctuations of the \ac{FDM}\ field that is proportional to $m_\mathrm{eff}$ and pumps energy into the orbit of the massive body, and (ii) dynamical friction (``cooling''), a term resulting from the back-reaction of the massive body on the \ac{FDM}\ that is proportional to the body's mass $m_\mathrm{t}$ and transfers energy from its orbit into the \ac{FDM}\ field. The ratio between cooling and heating is given by $\mu_\mathrm{eff} K(v/\sigma)$. To investigate this process in more detail, let us consider an ensemble of systems, each containing a single body of mass $m_\mathrm{t}$ traveling in a uniform background of \ac{FDM}\@. The velocities of these bodies are distributed according to a Maxwellian \ac{DF}\ $F_\mathrm{t}(\bv_\mathrm{t})$, analogous to \eq~\eqref{eq:maxwell} but with density and velocity dispersion $\rho_\mathrm{t}$ and $\sigma_\mathrm{t}$. The flow of specific energy into the orbits of the massive objects is \begin{align} \label{eq:Edot} \langle \dot{E} \rangle = {} & \frac{1}{\rho_\mathrm{t}} \dint \bv_\mathrm{t} F_\mathrm{t}(\bv_\mathrm{t}) D[\Delta E] \nonumber \\ = {} & \frac{8 \sqrt{\pi} G^2 \rho_{\rb} m_\mathrm{eff} \log\Lambda_\mathrm{FDM}}{\sigma {(1+2\sigma_\mathrm{t}^2/\sigma^2)}^{3/2}} \bigg[1 - \mu_\mathrm{eff} \frac{\sqrt{2}\sigma_\mathrm{t}^2}{\sigma^2} \frac{{(1+2\sigma_\mathrm{t}^2/\sigma^2)}^{3/2}}{{(1+\sigma_\mathrm{t}^2/\sigma^2)}^{3/2}} \bigg]. \end{align} When $m_\mathrm{t}\ll m_\mathrm{eff}$ or $\mu_\mathrm{eff} \ll 1$, heating dominates, and we can use \eq~\eqref{eq:Edot} to write \begin{equation} \label{eq:sigma_t_dot} \od{\sigma_\mathrm{t}^2}{t} = {\textstyle \frac{2}{3}} \langle \dot{E} \rangle = \frac{\sigma^2}{T_\mathrm{heat}} {(1+2\sigma_\mathrm{t}^2/\sigma^2)}^{-3/2}, \end{equation} where we defined \begin{equation} \label{eq:t_heat} T_\mathrm{heat} = \frac{3\sigma^3}{16 \sqrt{\pi} G^2 \rho_{\rb} m_\mathrm{eff} \log\Lambda_\mathrm{FDM}} \! = \! \frac{3m_\rb^3 \sigma^6}{16 \pi^2 G^2 \rho_{\rb}^2 \, \hbar^3 \log\Lambda_\mathrm{FDM}}, \end{equation} as the heating timescale. The solution to \eq~\eqref{eq:sigma_t_dot} is \begin{equation} \label{eq:sigma_t} \frac{\sigma_\mathrm{t}^2(t)}{\sigma^2} ={\textstyle\frac{1}{2}} {\left\{5t/T_\mathrm{heat} + {[1+2\sigma_\mathrm{t}^2(0)/\sigma^2]}^{5/2} \right\}}^{2/5} -\textstyle\frac{1}{2}. 
\end{equation} Therefore, at time $t$, the velocity dispersion $\sigma_\mathrm{t}^2$ should be at least $\frac{1}{2}{(5 t /T_\mathrm{heat} + 1)}^{2/5}-\frac{1}{2}$ times $\sigma^2$, and $\sigma_\mathrm{t}$ will exceed $\sigma$ in a time $t\lesssim 3 T_\mathrm{heat}$. When $\mu_\mathrm{eff} \gg 1$, cooling dominates, and \eq~\eqref{eq:Edot} can be written as \begin{equation} \label{eq:sigma_t_dota} \od{\sigma_\mathrm{t}^2}{t} = -\frac{\sigma_\mathrm{t}^2}{T_\mathrm{cool}}{(1+\sigma_\mathrm{t}^2/\sigma^2)}^{-3/2}, \end{equation} in which we defined \begin{equation} \label{eq:t_cool} T_\mathrm{cool} = \frac{3 \sigma^3 }{8\sqrt{2\pi} G^2 m_\mathrm{t}\, \rho_{\rb} \log\Lambda_\mathrm{FDM}}, \end{equation} as the cooling time. As expected, $T_\mathrm{cool}$ is independent of the effective mass of the \ac{FDM}\ and is identical to the classical result except for a change in the Coulomb logarithm. When $T_\mathrm{heat}$ and $T_\mathrm{cool}$ are smaller than the lifetime of the system, the distribution of velocities of the ensemble of massive objects will relax to a steady state, which is determined by requiring that its \ac{DF}\ $F_\mathrm{t}(\bv)$ satisfy the zero-flux condition in energy space: \begin{equation} \label{eq:1} \od{}{v} \Big\{ D[{(\Delta E)}^2] \, v \, F_\mathrm{t} (v) \Big\} = 2 v^2 D[(\Delta E)] F_\mathrm{t} (v). \end{equation} This is solved to give \begin{equation} \label{eq:df_ss} F_\mathrm{t} (v) \propto \frac{1}{v D[{(\Delta E)}^2]} \, \exp \int_0^v \mathrm{d} v^{\prime} \frac{2v^{\prime}\,D[\Delta E]}{D[{(\Delta E)}^2]} , \end{equation} with the normalization chosen so that ${ \!\int\! \mathrm{d}\bv\,F_\mathrm{t}(v)=\rho_\mathrm{t} }$. As the diffusion coefficients depend linearly on the halo mass density $\rho_{\rb}$, the velocity distribution ${ F_\mathrm{t} (v) }$ depends on $\rho_{\rb}$ only through $\mu_\mathrm{eff}$, via the dependence of the effective mass $m_\mathrm{eff}$ on $\rho_{\rb}$ (eq.~\ref{eq:meff}). For a classical $N$-body system composed of particles of mass $m_{\mathrm{p}}$, ${ F_\mathrm{t} (v) }$ is a Maxwellian with mean-square velocity $3(m_{\mathrm{p}}/m_\mathrm{t})\sigma^2$. In contrast, the steady-state velocity distribution of an ensemble of massive bodies interacting gravitationally with the \ac{FDM}\ field is only approximately Maxwellian (see Figure~\ref{fig:dfv}), although the mean-square velocity is close to $3\sigma_\mathrm{eff}^2/\mu_\mathrm{eff}=3(m_\mathrm{eff}/m_\mathrm{t})\sigma^2$, similar to the classical relation (see Figure~\ref{fig:sigma}). \begin{figure}[ht] \plotone{Figure1} \caption{The steady-state velocity distribution for an ensemble of massive bodies interacting with an \ac{FDM}\ field, as obtained from \eq~\eqref{eq:df_ss} for several values of the effective mass ratio $\mu_\mathrm{eff}\equiv m_\mathrm{t}/(2 m_\mathrm{eff})$ (solid lines). The velocity is plotted in units of the velocity dispersion, $\sigma_\mathrm{t}$, where $\sigma_\mathrm{t}^2=\langle v^2\rangle/3$. The steady-state velocity distribution approaches a Maxwellian (dashed line) in the limits $\mu_\mathrm{eff} \to 0$ and $\mu_\mathrm{eff} \to \infty$.\label{fig:dfv}} \end{figure} \begin{figure}[ht] \plotone{Figure2} \caption{In thermal equilibrium, the velocity dispersion $\sigma_\mathrm{t}$ of an ensemble of baryons is related to the effective velocity dispersion $\sigma_\mathrm{eff} = \sigma/\sqrt{2}$ of the \ac{FDM}\ halo. 
The ratio of dispersions depends on the effective mass ratio $\mu_\mathrm{eff}\equiv m_\mathrm{t}/(2m_\mathrm{eff})$ (solid line) and is close to (within 30\%) but not equal to the standard relation $\sqrt{m_\mathrm{t}/m_{\mathrm{p}}} \, \sigma_\mathrm{t}/\sigma= 1$ (dashed line) that corresponds to the thermal equilibrium in a background of classical particles of mass $m_{\mathrm{p}}$ and velocity dispersion $\sigma$.\label{fig:sigma}} \end{figure} \subsection{Examples} We now give two examples of the dynamical interaction between an \ac{FDM}\ halo and the baryonic objects orbiting within it. These examples are based on a simplified model of the \ac{FDM}\ halo, consisting of two components: \paragraph{The central soliton} Near the center, the \ac{FDM}\ is condensed into a soliton, which is the ground-state solution of the Schr\"{o}dinger--Poisson equations. The density of the soliton can be approximated by~\citep{Schive+2014a} \begin{equation} \label{eq:rhos} \rho_\mathrm{s}(r) \approx \frac{0.019 M_\odot \,\mathrm{pc}^{-3}}{{[1 + 0.091{(r/r_\mathrm{s})}^2]}^8} \power{\frac{m_\rb}{10^{-22}\, \mathrm{eV}}}{-2} \power{\frac{r_\mathrm{s}}{\,\mathrm{kpc}}}{-4}. \end{equation} The total soliton mass is \begin{equation} \label{eq:M_s} M_\mathrm{s}=4\pi\int_0^\infty r^2 \mathrm{d} r\,\rho_\mathrm{s}(r) = 2.2\times 10^8M_\odot\,\power{\frac{r_\mathrm{s}}{\,\mathrm{kpc}}}{-1} \power{\frac{m_\rb}{10^{-22}\,\mathrm{eV}}}{-2}. \end{equation} Numerical simulations of the evolution of \ac{FDM}\ halos in a cosmological context find that the soliton core radius $r_\mathrm{s}$ is related to the total halo (virial) mass $M_\mathrm{h}$ by~\citep{Schive+2014b} \begin{equation} \label{eq:core} r_\mathrm{s} \simeq 0.16\,\mathrm{kpc}\,{\left(\frac{m_\rb}{10^{-22}\,\mathrm{eV}}\right)}^{-1}{\left(\frac{M_\mathrm{h}}{10^{12}M_\odot}\right)}^{-1/3}. \end{equation} The relation between halo mass and peak circular speed $v_{\max}$ outside the soliton is the same as in \ac{CDM}\ \citep{Klypin+2011}, \begin{equation} \label{eq:vmax} v_{\max}=155\,\mathrm{km\ s}^{-1}\,\power{\frac{M_\mathrm{h}}{10^{12}M_\odot}}{0.316}, \end{equation} so the relation between the soliton core radius and the peak circular speed is \begin{equation} \label{eq:corev} r_\mathrm{s} \simeq 0.12\,\mathrm{kpc}\,{\left(\frac{m_\rb}{10^{-22}\,\mathrm{eV}}\right)}^{-1}\power{\frac{v_{\max}}{200\,\mathrm{km\ s}^{-1}}}{-1.05}. \end{equation} It is instructive to compare the typical de~Broglie wavelength in the galaxy to the radius of the soliton, and we can do this in two ways. (i) The wavelength for a particle traveling at the circular speed at a distance $r$ outside the soliton is given by the simple formula \begin{equation} \label{eq:solrad} \lambda=\frac{h}{m_\rb}\power{\frac{r}{GM_\mathrm{s}}}{1/2}=3.91 \, r_\mathrm{s}\power{\frac{r}{r_\mathrm{s}}}{1/2}. \end{equation} In other words, the de~Broglie wavelength just outside the soliton is of order the soliton radius. (ii) The de~Broglie wavelength for a particle traveling at the peak circular speed is \begin{equation} \label{eq:solrad1} \lambda=\frac{h}{m_\rb v_{\max}}=0.60\,\mathrm{kpc} {\left(\frac{m_\rb}{10^{-22}\,\mathrm{eV}}\right)}^{-1}\power{\frac{v_{\max}}{200\,\mathrm{km\ s}^{-1}}}{-1}. \end{equation} Once again, the de~Broglie wavelength is a few times the soliton radius (eq.~\ref{eq:corev}). 
The agreement between methods (i) and (ii) reflects the fact that the empirical relation~\eqref{eq:core} implies that the peak circular speed in the soliton is almost the same (25\% smaller) as the peak circular speed in the halo, independent of the particle mass and almost independent of the halo mass. This coincidence is an unexplained feature of the evolution of \ac{FDM}\ halos. \paragraph{The halo} Outside the soliton, the mean \ac{FDM}\ density distribution is expected to be similar to that of \ac{CDM}\ halos, which can be fit empirically by the~\citet{nfw1997} profile. We shall adopt an even simpler model, in which outside the soliton, the \ac{FDM}\ density is given by a singular isothermal sphere (eq.~\ref{eq:sis}). The effective mass (eq.~\ref{eq:meff}) of the \ac{FDM}\ field at radius $r\gg r_\mathrm{s}$ is then given by \eq~\eqref{eq:meff_approx}, and the typical de~Broglie wavelength is given by \eq~\eqref{eq:lsig}. \subsubsection{Inspiral of a massive object} An object of mass $m_\mathrm{t}$ on a circular orbit of initial radius $r_{\mathrm{i}}$ will inspiral toward the central soliton if the effective mass ratio $\mu_\mathrm{eff} \gg 1$ (eq.~\ref{eq:DE_fdm_m}). The inspiral time is~\citep[][eq.\ 8.12]{Binney+2008} \begin{align} \label{eq:tf} t_\mathrm{inspiral} = {} & \frac{1.65 \, r_{\mathrm{i}}^2 \sigma}{\log\Lambda_\mathrm{FDM} G m_\mathrm{t}} \nonumber \\ = {} & \frac{84.9 \,\mathrm{Gyr}}{\log \Lambda_\mathrm{FDM}} \frac{10^7 M_\odot}{m_\mathrm{t}} \frac{v_{\mathrm{c}}}{200 \,\mathrm{km\ s}^{-1}} \power{\frac{r_{\mathrm{i}}}{4 \,\mathrm{kpc}}}{2}. \end{align} The object will spiral to the center in less than the age of the galaxy, $T_\mathrm{age}$, if \begin{equation} \label{eq:ri} r_{\mathrm{i}} < 2.08\,\mathrm{kpc}\, \power{\frac{\log\Lambda_\mathrm{FDM}}{\log 10} \frac{m_\mathrm{t}}{10^7 M_\odot} \frac{T_\mathrm{age}}{10 \,\mathrm{Gyr}} \frac{200\,\mathrm{km\ s}^{-1}}{v_{\mathrm{c}}}}{1/2}. \end{equation} However, as the radius of the orbit shrinks, the effective mass grows as $r^{-2}$ (eq.~\ref{eq:meff_approx}), so $\mu_\mathrm{eff} \propto r^2$. The effective mass ratio is less than unity inside a stalling radius \begin{align} \label{eq:r1} r_\mathrm{stall} = & {{(2\pi)}^{1/4}\left(\frac{\hbar^3}{Gm_\mathrm{t}m_\rb^3v_{\mathrm{c}}}\right)}^{1/2} =1.43\,\mathrm{kpc}\,\power{\frac{m_\mathrm{t}}{10^7 M_\odot}}{-1/2} \power{\frac{m_\rb}{10^{-22} \,\mathrm{eV}}}{-3/2} \power{\frac{v_{\mathrm{c}}}{200 \,\mathrm{km\ s}^{-1}}}{-1/2}. \end{align} In early-type galaxies (ellipticals and spiral bulges), the mass of the central black hole is correlated with the velocity dispersion~\citep{Kormendy+2013}, \begin{equation} \label{eq:msig} \log_{10} \frac{M_\bullet}{10^9M_\odot}=-0.51\pm0.05+(4.4\pm0.3)\log_{10}\frac{\sigma}{200\,\mathrm{km\ s}^{-1}} \end{equation} with a scatter of about 0.3 dex. 
If we assume that the circular speed and dispersion are related by $\sigma=v_{\mathrm{c}}/\sqrt{2}$ as in the isothermal sphere, and that the mass of the inspiraling black hole is $m_\mathrm{t}=fM_\bullet$ with $f<1$, then these relations can be rewritten as \begin{equation} \label{eq:ri_msig} r_{\mathrm{i}} < 3.1\,\mathrm{kpc}\, \power{\frac{f}{0.1}\frac{\log\Lambda_\mathrm{FDM}}{\log 10} \frac{T_\mathrm{age}}{10 \,\mathrm{Gyr}}}{1/2}\power{\frac{\sigma}{200\,\mathrm{km\ s}^{-1}}}{1.7}; \end{equation} \begin{equation} \label{eq:r1_msig} r_\mathrm{stall} = 0.70\,\mathrm{kpc}\,\power{\frac{f}{0.1}}{-1/2}\power{\frac{m_\rb}{10^{-22} \,\mathrm{eV}}}{-3/2} \power{\frac{\sigma}{200 \,\mathrm{km\ s}^{-1}}}{-2.7}. \end{equation} These results suggest that the inspiral of supermassive black holes in \ac{FDM}\ halos may be stalled at orbital radii of a few hundred parsecs, a possibility that has been discussed already by~\citet{Hui+2017}. Although we believe that the physical mechanism described here is robust, there are two (related) shortcomings in these calculations: (i) for the parameters of interest, the stalling radius can be comparable to or even smaller than the typical de~Broglie radius $\lambda_\sigma$ (eq.~\ref{eq:lsig}); because the maximum scale of the encounters for which our approximations are valid is then $b_{\max}\simeq r_\mathrm{stall}$, the argument of the Coulomb logarithm is $\Lambda_\mathrm{FDM}=2r_\mathrm{stall}/\lambdabar_\sigma$ (eq.~\ref{eq:logLb}), small enough that the assumption $\Lambda_\mathrm{FDM} \gg 1$ on which our calculations are based is suspect; (ii) the stalling radius is not much larger than the core radius of the central soliton $r_\mathrm{s}$ (eq.~\ref{eq:core}), and inside the soliton, heating by fluctuations in the \ac{FDM}\ vanishes\footnote{This conclusion assumes that the soliton is in its ground state. Simulations by~\citet{Veltmaat+2018} suggest that the soliton typically exhibits strong density oscillations, which could add energy to nearby orbits.} although dynamical friction does not. These limitations are related because the de~Broglie wavelength just outside the soliton is of order the soliton radius (eqs.~\ref{eq:solrad} and~\ref{eq:solrad1}). In Figure~\ref{fig:friction}, we illustrate how energy diffusion due to scattering by \ac{FDM}\ quasiparticles tampers with the otherwise deterministic inspiral due to dynamical friction. We followed the orbit of a massive object ($m_\mathrm{t} = 4\times 10^5M_\odot$) in a singular isothermal sphere (eq.~\ref{eq:sis}) having circular speed $v_{\mathrm{c}} = 200\,\mathrm{km\ s}^{-1}$, and applied random velocity changes using the diffusion coefficients~\eqref{eq:Dpar_f}--\eqref{eq:D2pera} with $m_\rb = 10^{-21} \,\mathrm{eV}$. We repeated this process $1000$ times, and Figure~\ref{fig:friction} shows the median and 68\% confidence band of the orbital radius as a function of time. For comparison, we also applied (deterministic) velocity changes due to dynamical friction only (eq.~\ref{eq:Ddf}) both for \ac{FDM}\ and \ac{CDM}\ halos, which differ only in the Coulomb logarithm. The results shown in Figure~\ref{fig:friction} are consistent with our claim that a massive object that is inspiraling to the center by dynamical friction will tend to stall, on average, at a radius where the effective mass ratio $\mu_\mathrm{eff} \simeq 1$. 
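For concreteness, a single realization of this procedure can be sketched as follows in Python (reusing the \texttt{m\_eff} and \texttt{diffusion\_coeffs} helpers from the earlier sketches, with a fixed Coulomb logarithm, a simple Euler--Maruyama update, and no special treatment of the central soliton or of scales below the de~Broglie wavelength); it illustrates the procedure only and is not intended to reproduce Figure~\ref{fig:friction} in detail.
\begin{verbatim}
import numpy as np

def sis_density(r, v_c, G_N=6.674e-11):
    """Singular-isothermal-sphere density of eq. (sis), SI units."""
    return v_c**2 / (4.0 * np.pi * G_N * r**2)

def noisy_inspiral(x, v, m_t, m_b_ev, v_c, t_end, dt, logL, rng):
    """One realization of the stochastic inspiral (Euler-Maruyama sketch).
       x, v are 3-vectors in SI units; returns the orbital radius history."""
    sigma = v_c / np.sqrt(2.0)
    radii = []
    for _ in range(int(t_end / dt)):
        r = np.linalg.norm(x)
        radii.append(r)
        rho = sis_density(r, v_c)
        meff = m_eff(rho, m_b_ev, sigma)          # helper from the earlier sketch
        speed = np.linalg.norm(v)
        D1, D1sq, Dperp2 = diffusion_coeffs(speed, rho, sigma, meff, m_t, logL)
        e_par = v / speed
        xi = rng.normal(size=3)
        xi_perp = xi - np.dot(xi, e_par) * e_par  # 2-d Gaussian in the plane normal to v
        dv = (D1 * dt + np.sqrt(D1sq * dt) * rng.normal()) * e_par \
             + np.sqrt(0.5 * Dperp2 * dt) * xi_perp
        v = v + (-v_c**2 / r) * (x / r) * dt + dv # SIS gravity plus the stochastic kick
        x = x + v * dt
    return np.array(radii)
\end{verbatim}
Repeating such realizations with independent random seeds and taking the median and the central 68\% band of the radius history is the kind of ensemble summary shown in Figure~\ref{fig:friction}.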
In Figure~\ref{fig:r0_r1}, we show the relation between the maximum inspiral distance $r_{\mathrm{i}}$ (eq.~\ref{eq:ri}), the stalling radius $r_\mathrm{stall}$ (eq.~\ref{eq:r1}), and the typical de~Broglie wavelength $\lambda_\sigma$ (eq.~\ref{eq:lsig}) of a galaxy with circular speed $v_{\mathrm{c}} = 200 \,\mathrm{km\ s}^{-1}$, for a range of massive objects and \ac{FDM}\ particle masses. Similarly, in Figure~\ref{fig:rstall_msig}, we show the relation between the maximum inspiral distance $r_{\mathrm{i}}$ (eq.~\ref{eq:ri_msig}), the stalling radius $r_\mathrm{stall}$ (eq.~\ref{eq:r1_msig}), and the typical de~Broglie wavelength $\lambda_\sigma$ (eq.~\ref{eq:lsig}) for a massive object that is a fraction $f=0.1$ of the central black hole mass inferred from the $M$--$\sigma$ relation (eq.~\ref{eq:msig}) for galaxies with a range of velocity dispersions and \ac{FDM}\ particle masses. \begin{figure}[ht] \plotone{Figure3} \caption{The inspiral of a massive object ($m_\mathrm{t}=4\times10^5 M_\odot$) on a circular orbit in a spherical galaxy with constant circular speed $v_{\mathrm{c}} = 200 \,\mathrm{km\ s}^{-1}$ (eq.~\ref{eq:sis}). The dotted line shows the evolution of the orbital radius due to dynamical friction if the galaxy is composed of \ac{CDM}\ (eq.~\ref{eq:df_cdm} with $\mu_\mathrm{cl}\gg1$). The dashed line shows the evolution due to dynamical friction if the galaxy is composed of \ac{FDM}\ (eq.~\ref{eq:Ddf}) and diffusion terms are ignored. This differs from the \ac{CDM}\ case only through the Coulomb logarithm. The solid line and shaded region show the evolution in an \ac{FDM}\ galaxy including both dynamical friction and diffusion, assuming an \ac{FDM}\ mass $m_\rb=10^{-21}\,\mathrm{eV}$. We have carried out 1000 realizations of the orbital evolution, and the solid line and shaded region show the median and central $68\%$ region. The median radius saturates close to where $\mu_\mathrm{eff} = 1$ (dashed-dotted horizontal line). This behavior is different from the case where diffusion is ignored (dashed line), for which dynamical friction causes the orbit to decay at least down to the de~Broglie wavelength $\lambda_\sigma$ (dashed horizontal line).\label{fig:friction}} \end{figure} \begin{figure}[ht] \plotone{Figure4} \caption{A massive object initially on a circular orbit will spiral to the center within ${ 10 \,\mathrm{Gyr} }$ if it lies below the dashed lines (eq.~\ref{eq:ri}). Different colors represent different assumptions about the mass of the \ac{FDM}\ particle, and the heavy dashed-dotted black line shows the same curve for \ac{CDM}\@. The solid lines show the radius $r_\mathrm{stall}$ (eq.~\ref{eq:r1}) where stochastic potential fluctuations cause the inspiral to stall. The solid lines terminate when the stalling radius is smaller than the typical de~Broglie wavelength $\lambda_\sigma$ (dotted lines); at smaller distances, the evolution is dominated by interactions with the soliton, which exerts dynamical friction but has no potential fluctuations. 
We assume that the density of the \ac{FDM}\ is that of a singular isothermal sphere (eq.~\ref{eq:sis}) with circular speed ${ v_{\mathrm{c}} = 200 \,\mathrm{km\ s}^{-1} }$.\label{fig:r0_r1}} \end{figure} \begin{figure}[ht] \plotone{Figure5} \caption{Same as Figure~\ref{fig:r0_r1}, except that the horizontal axis is the velocity dispersion of the host galaxy and the mass of the inspiraling object is a fraction $f=0.1$ of the mass of the central black hole inferred from the ${M\!-\!\sigma}$ relation (eq.~\ref{eq:msig}).\label{fig:rstall_msig}} \end{figure} \subsubsection{Heating of a spherical stellar population} We consider the effect of \ac{FDM}\ fluctuations on a stellar system having a Maxwellian \ac{DF}\ with velocity dispersion $\sigma_\mathrm{t}$. We assume that the gravitational potential is dominated by \ac{FDM}\ and that the typical radius of the stellar system is $r_\star$. Since $m_\mathrm{eff}$ is much larger than the mass of any individual star, dynamical friction and cooling are negligible. The heating timescale is given by \eqs~\eqref{eq:t_heat} and~\eqref{eq:sis}: \begin{align} \label{eq:t_heat_bis} T_\mathrm{heat} = {} & \frac{3m_\rb^3 v_{\mathrm{c}}^2 r_\star^4}{8 \hbar^3 \log\Lambda_\mathrm{FDM}} \nonumber \\ \approx {} & \frac{2.08 \,\mathrm{Gyr}}{\log\Lambda_\mathrm{FDM}} \power{\frac{r_\star}{1 \,\mathrm{kpc}}}{4} \power{\frac{m_\rb}{10^{-22} \,\mathrm{eV}}}{3} \power{\frac{v_{\mathrm{c}}}{200 \,\mathrm{km\ s}^{-1}}}{2}. \end{align} The heating will be significant if $T_\mathrm{heat}$ is less than a third of the age of the galaxy $T_\mathrm{age}$ (see discussion following eq.~\ref{eq:sigma_t}), which occurs if $r_\star < r_\mathrm{heat}$ where the heating radius is \begin{equation} \label{eq:rheat} r_\mathrm{heat}=1.13\,\mathrm{kpc}{\left(\log\Lambda_\mathrm{FDM} \frac{T_\mathrm{age}}{10\,\mathrm{Gyr}}\right)}^{1/4}{\left(\frac{v_{\mathrm{c}}}{200\,\mathrm{km\ s}^{-1}}\right)}^{-1/2} {\left(\frac{m_\rb}{10^{-22}\,\mathrm{eV}}\right)}^{-3/4}. \end{equation} The approximations we are using are only valid if the orbital radius is significantly larger than the de~Broglie wavelength. Setting $r_\star=\lambda_\sigma$ we obtain a minimum heating time \begin{align} \label{eq:t_heat_min} T_\mathrm{heat}^{\min} = {} & \frac{24 \pi^4 \hbar }{m_\rb v_{\mathrm{c}}^2\log\Lambda_\mathrm{FDM}} \nonumber \\ \approx {} & 0.43 \,\mathrm{Gyr} \power{\frac{m_\rb}{10^{-22} \,\mathrm{eV}}}{-1} \power{\frac{v_{\mathrm{c}}}{200 \,\mathrm{km\ s}^{-1}}}{-2} . \end{align} In this result, we have evaluated the Coulomb logarithm at $b_{\max}=\lambda_\sigma$; thus, $\log\Lambda_\mathrm{FDM} = \log 2b_{\max}/\lambdabar_\sigma=\log(4\pi) \approx 2.5$. We remark that the term ``heating'' is misleading: the interaction with \ac{FDM}\ fluctuations adds energy to the stellar population, thereby causing it to expand, but the velocity dispersion of the stars may either grow or decay as a result of this expansion depending on the radial profile of the gravitational potential of the galaxy. To illustrate this process, we followed the evolution of a population of test particles representing stars in an isothermal density distribution (eq.~\ref{eq:sis}). The self-consistent gravitational potential of this distribution is $\Phi(r)=v_{\mathrm{c}}^2\log r$, which we modified to $\Phi(r)=\tfrac{1}{2}v_{\mathrm{c}}^2 \log(r^2+r_0^2)$ for reasons given below. 
The initial velocities of the test particles were drawn from an isotropic Maxwellian distribution with velocity dispersion $\sigma_\mathrm{t}$ and the initial positions were drawn from the radial distribution $dn \propto r^2 {(r_0^2 + r^2)}^{-v_{\mathrm{c}}^2/(2\sigma_\mathrm{t}^2)}dr$, which ensures that the initial phase-space distribution is a stationary solution of the collisionless Boltzmann equation. We introduced a core radius $r_0$ into the potential, so the integral over the radial distribution remains convergent when $\sigma_{\mathrm{t}}^2 \le 3 v_{\mathrm{c}}^2$. The actual value of $r_0$ is unimportant since we set $r_0= 0.2 \lambda_\sigma$ and turned off the diffusion coefficients when $r < \lambda_\sigma$. We used the diffusion coefficients from \eqs~\eqref{eq:Dpar_f}--\eqref{eq:D2pera}. In Figure~\ref{fig:num_exp} we show the evolution in an \ac{FDM}\ halo having $v_{\mathrm{c}} = 200 \,\mathrm{km\ s}^{-1}$ and $m_\rb = 10^{-21} \,\mathrm{eV}$. This figure shows the expansion (upper panels) and heating (lower panels) of a system of $5600$ test particles with initial velocity dispersion $\sigma_\mathrm{t} = v_{\mathrm{c}}/2$ (left panels) and $\sigma_\mathrm{t} = v_{\mathrm{c}} / \sqrt{2}$ (right panels). In both cases, within a few Gyr, the velocity dispersion of the test particles exceeds that of the \ac{FDM}\ (dashed horizontal lines), and the stellar density develops a core in the region outside the de~Broglie wavelength and inside the radius where $T_\mathrm{heat}$ equals the age (shaded regions). Our assumption that fluctuations in the \ac{FDM}\ density have no effect inside the de~Broglie wavelength $\lambda_\sigma$ is oversimplifying, so the sharp changes in density and dispersion at $\lambda_\sigma$ are unrealistic. \begin{figure*}[ht] \includegraphics[width=0.49\textwidth]{Figure6a}\includegraphics[width=0.49\textwidth]{Figure6b} \includegraphics[width=0.49\textwidth]{Figure6c}\includegraphics[width=0.49\textwidth]{Figure6d} \caption{The expansion of a system of test particles in an isothermal \ac{FDM}\@ halo with particle mass $m_\rb = 10^{-21} \,\mathrm{eV}$ and circular speed $v_{\mathrm{c}} = 200 \,\mathrm{km\ s}^{-1}$. The velocity dispersion $\sigma=v_{\mathrm{c}}/\sqrt{2} \simeq 141 \,\mathrm{km\ s}^{-1}$ (dashed horizontal lines). The test-particle distribution is initially isothermal with velocity dispersion $\sigma_\mathrm{t}=v_{\mathrm{c}}/2 = 100 \,\mathrm{km\ s}^{-1}$ (left panels) and $v_{\mathrm{c}}/\sqrt{2} \approx 141 \,\mathrm{km\ s}^{-1}$ (right panels). In the region outside the de~Broglie wavelength $\lambda_\sigma$ (dashed vertical lines), where the heating time is less than the age (top axis and shaded regions), the number density decreases (upper panels) and the velocity dispersion increases (bottom panels) as a function of time.\label{fig:num_exp}} \end{figure*} In Figure~\ref{fig:rheat}, we show the heated region as a function of the circular speed for several values of the particle mass $m_\rb$, along with the effective radii and maximum circular speeds for the ATLAS$^\mathrm{3D}$ sample of elliptical galaxies~\citep{Cappellari+2013}. These galaxies are most sensitive to particle masses in the range $m_\rb \sim 10^{-22}$--$10^{-23} \,\mathrm{eV}$, which is somewhat smaller than the mass range of interest for influencing small-scale structure. 
To probe larger masses we need to look for evidence of heating at smaller radii, but here, (i) most galaxies are not dark-matter dominated, and (ii) the \ac{FDM}\ may be in the form of a ground-state soliton and thus would not heat the stars. \begin{figure}[ht] \plotone{Figure7} \caption{The heated region, in which $T_\mathrm{heat} \le 5 \,\mathrm{Gyr}$ (solid lines, eq.~\ref{eq:t_heat_bis}) and $r > \lambda_\sigma$ (dashed lines), for several values of the boson mass $m_\rb$. In the colored regions, old stellar systems will be heated significantly, causing the stellar system to expand and its velocity dispersion to grow (see Figure~\ref{fig:num_exp}). For comparison, we show as circles the projected half-light radius and maximum circular speed $v_{\mathrm{c}}^{\max}$ (circles) for the ATLAS$^\mathrm{3D}$ sample of 260 early-type galaxies~\citep{Cappellari+2013}. The effects of \ac{FDM}\ heating are overestimated in most of these because they are not dark-matter dominated near their centers; however, the 14 galaxies marked in red are estimated to have a dark-matter fraction larger than $0.5$~\citep{Cappellari+2013b}.\label{fig:rheat}} \end{figure} \section{Summary and Conclusions} \label{sec:summary} Fuzzy dark matter (\ac{FDM}) is an intriguing alternative to \ac{CDM}\ that may resolve some or all of the failures of \ac{CDM}\ to predict the properties of the structure of galaxies on scales less than $30\,\mathrm{kpc}$ or so. \ac{FDM}\ exhibits a rich set of novel phenomena. In particular, the density and gravitational potential fluctuations in an isolated \ac{CDM}\ halo gradually decay as sub-halos are destroyed and tidal tails are phase-mixed. In contrast, an isolated \ac{FDM}\ halo exhibits persistent density fluctuations that arise because of the limited number of eigenstates that it contains. A test particle moving through the fluctuating \ac{FDM}\ potential is subject to stochastic velocity changes. We calculated the diffusion coefficients that govern its resulting orbital evolution. For a Maxwellian velocity distribution with dispersion $\sigma$, these diffusion coefficients are the same as the diffusion coefficients in a classical $N$-body system if (i) the classical system is assumed to be composed of quasiparticles with an effective mass $m_\mathrm{eff}$ that depends on the local density (eq.~\ref{eq:meff}); and (ii) the velocity dispersion of the quasiparticles is taken to be $\sigma/\sqrt{2}$; (iii) the lower limit of the range of scales in the classical Coulomb logarithm, usually taken to be $b_{90}$, the impact parameter for a $90^\circ$ deflection, is replaced by $\lambdabar_\sigma/2$, half of the typical de~Broglie angular wavelength of the \ac{FDM}\@. Similarly, the dynamical friction force on a massive particle orbiting in an \ac{FDM}\ halo is given by the classical formula except that the Coulomb logarithm is modified as described in (iii). In this paper, we assumed that the mean potential is infinite and homogeneous. In the classical case, this assumption implies that the unperturbed particles travel on straight lines at constant velocity, while in an \ac{FDM}\ halo, it implies that the unperturbed wavefunction is a collection of plane waves of constant wavenumber and frequency. This is a standard simplification that is usually reasonably accurate when the Coulomb logarithm is much larger than unity, which in turn occurs when the radial scale is much larger than the typical wavelength. 
Unfortunately, the effects of \ac{FDM}\ scattering on stellar systems are typically strongest at radii that are comparable to the wavelength. In such cases, the present derivation is incomplete, and further numerical and theoretical studies are needed. Nevertheless, we believe that our main conclusions are not seriously compromised by this limitation. We showed that a massive object that is spiraling into the center of the galaxy by dynamical friction is subject to stochastic velocity fluctuations when it reaches a radius where its mass is comparable to the effective mass of the \ac{FDM}\ quasiparticles. At this point, the \ac{FDM}\ fluctuations pump energy into the orbit at roughly the same rate that it is drained by dynamical friction, so the inspiral will tend to stall. Stars, which are much lighter than the \ac{FDM}\ quasiparticles, will on average gain energy from the \ac{FDM}\ fluctuations in the region where the relaxation time is much smaller than the age of the galaxy. Thus, stellar systems on scales of a few hundred pc to a few kpc will expand until the heating time within the stellar system is comparable to its lifetime. Therefore, one should not observe systems in which the heating time is much shorter than the age of the galaxy. At the present time, neither of these physical processes offers a robust new constraint on the mass range of possible \ac{FDM}\ particles. Other effects of relaxation due to \ac{FDM}\ are discussed by~\citet{Hui+2017}. Finally, in this paper, we have discussed only the effects of \ac{FDM}\ fluctuations on the orbits of classical objects such as stars and black holes. The effects of \ac{FDM}\ fluctuations on the \ac{FDM}\ halo itself are also important, but these must be analyzed using other tools~\citep[e.g.,][]{Levkov+2018, Mocz+2018}. \acknowledgments\ We thank Philip Mocz for thoughtful comments on an earlier version of this manuscript. BB acknowledges support from the Schmidt Fellowship. JBF acknowledges support from Program number HST-HF2--51374 which was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5--26555.
\section{Introduction} Layered $\rm Fe_2As_2$-materials have raised enormous attention due to the discovery of superconductivity with transition temperatures $T_\mathrm{C}$ up to 28~K in LaFeAsO$_{1-x}$F$_x$ .~\cite{Kamihara2008} Upon substitution of La by rare earths, $T_\mathrm{C}$ is increased to above 50~K.~\cite{Chen2008a,Cheng2008,Ren2008c,Hess2009} Interestingly, evolution of superconductivity is associated to the suppression of a magnetically ordered orthorhombic phase, which has been found in the undoped parent compound.~\cite{Luetkens2009} In RFeAsO , both tetragonal distortion and magnetic ordering are observed at intermediate temperatures around $\sim 150$\,K. A spin density wave (SDW)-type of antiferromagnetic order evolves slightly below the temperature $T_{\rm{S}}$\ of orthorhombic distortion of the tetragonal high temperature phase.~\cite{Cruz2008,Klauss2008,Drew2008a,Maeter09} Here, we present thermal expansion data of polycrystalline RFeAsO with R = La,Ce,Pr,Sm,Gd. Our measurements yield a very sensitive measure of the volume changes of the materials. We find clear anomalies of the coefficient of linear thermal expansion $\alpha$ at the structural and magnetic transitions, i.e. at $T_{\rm{S}}$\ and $T_{\rm{N}}$ , respectively. It has been shown earlier for LaFeAsO , that anomalous contributions to $\alpha$ are visible far above $T_{\rm{S}}$ .~\cite{Wang09} Similar effects are found in RFeAsO\ with R = Ce,Pr,Sm,Gd. In addition, magnetic ordering of the rare earth moments is accompanied by low temperature anomalies of the thermal expansion coefficient. Preparation and characterization of the polycrystalline samples has been described in Ref.~\cite{Kondrat2009}. The crystal structure and the composition were confirmed by powder x-ray diffraction. In addition, our samples have been characterized by means of specific heat, magnetization, transport, and $\mu$SR experiments.~\cite{Maeter09,Kondrat2009,Klingeler2008} For the thermal expansion measurement a three-terminal capacitance dilatometer was utilized, which allows an accurate study of sample length changes. We measured the macroscopic length $L(T)$ of the samples and calculated the coefficient of linear thermal expansion $\alpha = 1/L \cdot dL/dT$, which is the first temperature derivative of $L(T)$. For our polycrystalline samples the volume expansion coefficient $\beta$ is given as $\beta = 3 \alpha$. \begin{figure} \centering \includegraphics[width=0.9\columnwidth,clip]{fig1b.eps} \caption{Temperature dependence of the coefficient of linear thermal expansion, $\alpha (T)$, of RFeAsO (R = La,Ce,Pr,Sm,Gd). Two anomalies indicated by the dashed lines are associated to a structural distortion at $T_{\rm{S}}$\ and SDW-formation at $T_{\rm{N}}$ . For PrFeAsO , no clear anomaly can be attributed to $T_{\rm{N}}$ . The arrow indicates $T_{\rm{N}}^{Pr,\mu}$\ taken from $\mu$SR data.~\cite{Maeter09} } \label{fig1} \end{figure} Figure~\ref{fig1} shows the linear thermal expansion coefficient $\alpha$ of RFeAsO\ with R=La,Ce,Pr,Sm,Gd, between 5~K and 250~K. For all R (except Pr), the thermal expansion coefficient exhibits two huge anomalies with opposite sign. The anomalies in $\alpha (T)$ can be attributed to the structural and SDW transitions of the compound. The transition temperatures determined from the positions of the extrema are marked by the dashed lines and are listed in Table~\ref{table}. The SDW formation at $T_\mathrm{N}$ generates negative anomalies in the thermal expansion coefficients. 
Note, that for PrFeAsO\ the SDW-anomaly is not visible although the onset of magnetic order at $T_{\rm{N}}^{\mu}$ = 137\,K has been demonstrated in $\mu$SR studies.~\cite{Maeter09} According to the Ehrenfest relation, the negative anomalies in $\alpha (T)$ at $T_\mathrm{N}$ qualitatively imply a negative hydrostatic pressure dependence of $T_{\rm{N}}$ . This finding is in agreement with resistivity studies on LaFeAsO.~\cite{F4-08-7} The anomalies in $\alpha$ at $T_\mathrm{N}$ indicate a strong coupling of the magnetic transition to the crystal lattice. However, the shape of the anomalies deviates from what is expected for second-order phase transitions, probably due to the closeness to $T_{\rm{S}}$ . \begin{table} \caption{\label{table}Magnetic and structural transition temperatures of RFeAsO\ with R = La,Ce,Pr,Sm,Gd as deduced from Figures \ref{fig1} and \ref{fig2}. For comparison, transition temperatures from $\mu$SR (Ref.~\cite{Maeter09}) and resistivity studies (Ref.~\cite{Kondrat2009}) are listed, too.} \begin{center} \begin{tabular}{ccccccc} \br R&$T_{\rm{N}}$ &$T_{\rm{S}}$ & $T_{\rm{N}}^{R}$ & $T_{\rm{N}}^{\mu}$ &$T_{\rm{S}}^{\rho}$ &$T_{\rm{N}}^{R,\mu}$ \\ \mr La & (137$\pm$ 1)\,K & (157$\pm$ 1)\,K & - & 139\,K & 158\,K & - \\ Ce & (134$\pm$ 2)\,K & (148$\pm$ 2)\,K & $\lesssim 5$\,K & 137\,K & 151\,K & 4.4\,K \\ Pr & - & (147$\pm$ 5)\,K & (11.3$\pm$ 0.3)\,K & 123\,K & 136\,K & 11\,K \\ Sm & (136$\pm$ 2)\,K & (148$\pm$ 5)\,K & $\lesssim 5$\,K & 138\,K & 160\,K & 4.7\,K \\ Gd & (128$\pm$ 2)\,K & (136$\pm$ 5)\,K & - & - & - & - \\ \br \end{tabular} \end{center} \end{table} In contrast, the structural transition at $T_\mathrm{S}$ gives rise to a positive anomaly in $\alpha$. Remarkably, this anomaly is very broad, extending to temperatures far above $T_\mathrm{S}$. In particular, it has been shown previously for LaFeAsO\ that the anomalous contributions to $\alpha$ extend to significantly higher temperatures than the corresponding anomalies found in specific heat, magnetization, and resistivity. The enhanced $\alpha$ suggests the presence of strong fluctuations preceding the structural transitions at $T_\mathrm{S}$. So far, the origin of these fluctuations is unknown. One might attribute them to a competing instability in vicinity of the actual ground state. A possible scenario is a competing orthomagnetic phase which was suggested in Ref.~\cite{Lorenzana2008}. In this scenario, long range order of the competing magnetic phase is hindered by the orthorhombic distortion, whereas the increase of the corresponding anomalous positive contribution to the thermal expansion coefficient is truncated by the structural transition at $T_{\rm{S}}$. \begin{figure}[h] \includegraphics[width=19pc]{fig2b.eps}\hspace{2pc}% \begin{minipage}[b]{17pc}\caption{\label{fig2}Temperature dependence of the coefficient of linear thermal expansion, $\alpha (T)$, of RFeAsO (R = Ce,Pr,Sm,Gd) at temperatures below $T=20$\,K where magnetic ordering of the rare earth moments is expected. Note, that the experimental setup provides data for $T\geq 5$\,K, i.e. the complete anomaly is only visible for PrFeAsO\ which exhibits $T_{\rm{N}}^{Pr}$ = 11.3\,K.} \end{minipage} \end{figure} In the materials with magnetic R-sites, magnetic ordering of the rare earth moments is found (see, e.g., \cite{Kimber08,Zhao2008natmat,Maeter09}). The evolution of rare earth magnetic order is accompanied by strong volume changes. Coupling of magnetic and lattice degrees of freedom is clearly visible in \figref{fig2}. 
For PrFeAsO , there is a pronounced peak of the thermal expansion coefficient at $T_{\rm{N}}^{Pr}$ = (11.3$\pm$0.3)\,K. The observed ordering temperature agrees with previous neutron and $\mu$SR data.~\cite{Kimber08,Maeter09} Qualitatively, the strong positive anomaly implies a positive hydrostatic pressure dependence of $T_{\rm{N}}^{Pr}$ . Also for CeFeAsO\ and SmFeAsO , the data in \figref{fig2} indicate a strong positive anomaly in $\alpha$ slightly below 5\,K, which is the lower temperature limit of our device. In contrast, no anomaly is seen for GdFeAsO . Note that magnetic R-site ordering occurs in our samples at $T_{\rm{N}}^{Ce,\mu}$ =4.4\,K, $T_{\rm{N}}^{Sm,\mu}$ =4.7\,K, and $T_{\rm{N}}^{Gd}$ = 3.7\,K.~\cite{Maeter09} In conclusion, our thermal expansion studies have been shown to be a sensitive probe for structural changes as well as Fe and rare earth magnetic ordering in RFeAsO (R = La,Ce,Pr,Sm,Gd). The magnetic and structural ordering phenomena are associated with large anomalies in $\alpha$, which allow us to determine the phase diagram. Our data imply a negative pressure dependence of the Fe-ordering transition and a positive one for Pr, Ce, and Sm ordering. Strong fluctuations at $T\gg T_\mathrm{S}$ indicate a competing, possibly magnetic, instability in the vicinity of the ground state. \ack We thank M. Deutschmann, S. M\"uller-Litvanyi, R. M\"uller, J. Werner, and S. Ga{\ss} for technical support. Work was supported by the DFG through FOR 538 and project BE1749/12. \section*{References} \providecommand{\newblock}{}
\section{Introduction} Recent advances in networked systems such as sensor networks as well as the increasing need for solving high dimensional problems more efficiently have stimulated a significant interest in distributed optimization methods. In the distributed optimization approach, each node of a network solves a sub-problem locally based on information it sends and receives from its neighborhood. Distributed optimization has many applications, such as trajectory optimization for formation control of vehicles \cite{jadbabaie2003coordination, stipanovic2004decentralized, fax2004information}, decentralized control of power systems \cite{bakirtzis2003decentralized}, packet routing \cite{stern1977class}, and estimation problems in sensor networks \cite{ogren2004cooperative}. In this paper, we propose and analyze a distributed optimization method for the convex optimization problems of the following form \begin{align} \label{Eq:emperical_11} \min_{x\in \mathcal{X}} f(x)\coloneqq \dfrac{1}{n}\sum_{i=1}^{n} f_{i}(x), \end{align}\normalsize where $f_{i}(\cdot),i=1,2,\cdots,n$ are convex functions. Further, $\mathcal{X}\subset {\rm I\!R}^{d}$ is a non-empty, convex, compact set that is characterized by a set of inequality constraints \begin{align} \label{Eq:emperical_22} \mathcal{X}\coloneqq \{x\in {\rm I\!R}^{d}:g_{k}(x)\leq 0,k=1,2,\cdots,m\}, \end{align}\normalsize where $g_{k}:{\rm I\!R}^{d}\rightarrow {\rm I\!R}$ are convex functions for all $k=1,2,\cdots,m$. More specifically, we propose distributed deterministic and stochastic primal-dual algorithms for the optimization problem in eqs. \eqref{Eq:emperical_11}-\eqref{Eq:emperical_22}. At each step of the distributed algorithms, the primal variables are projected onto the Euclidean ball centered at the origin that contains the feasible set $\mathcal{X}$, that is $\mathcal{X}\subseteq {\rm I\!B}_{d}(R)\coloneqq\{x\in {\rm I\!R}^{d}:\|x\|_{2}\leq R\}$. Since the projection onto the Euclidean ball has a closed form expression, each step of the distributed algorithm is computed efficiently. \subsection{Contributions} We prove a convergence rate for the distributed deterministic and stochastic primal-dual algorithm under the Lipschitz continuity assumption on the objective function and the inequality constraints. In particular, we prove convergence rates for achieving the optimal value of the objective function. We also prove two constraint violation bounds for the primal-dual algorithm. In particular, we show that when one of the inequality constraints is binding at the optimal point(s), there is a trade-off in the convergence rate and the constraint violation rate. In particular, improving the convergence rate deteriorates the constraint violation rate and vice versa. Interestingly, we show that such a trade-off does not exist if the constraints are strictly feasible at the optimal point(s). The convergence analysis we present relies on the regularization of the Lagrangian multipliers in the Lagrangian function. In particular, by augmenting the Lagrangian function with a quadratic regularization term, we establish an upper bound on the norm of the Lagrangian multipliers that is inversely proportional to the parameter of the regularization. By controlling the norm of the Lagrangian multipliers, we in turn control the norm of the sub-gradients of the Lagrangian function. 
We also propose a distributed stochastic primal-dual algorithm to efficiently solve the constrained optimization problems with a large number of constraints (\textit{i.e.} large $m$). In each step of the stochastic algorithm, each agent only needs to compute one sub-gradient of the inequality constraints. In contrast, in the deterministic primal-dual algorithm, the sub-gradients of all the constraints are needed. \subsection{Related Works} Distributed optimization methods date back to the seminal work of Bertsekas and Tsitsiklis on parallel computation \cite{bertsekas1989parallel}. More recent developments in distributed optimization are concerned with developing efficient distributed algorithms for constrained optimization problems, \textit{e.g.}, see \cite{koshal2011multiuser,duchi2012dual,lee2013distributed,ram2010distributed}. In \cite{duchi2012dual}, a distributed dual averaging algorithm is proposed, where each agent projects its local variable onto the feasible set $\mathcal{X}$. When the feasible set has more structure, \textit{i.e.}, it can be written as the intersection of finitely many simple convex constraints, a distributed random projection algorithm is studied \cite{lee2013distributed}. Therein, the projection is computed locally by each agent based on the random observations of the local constraint components. In the case of optimization with coupled linear equality constraints, \textit{i.e.}, when the decision variables of agents must jointly satisfy a set of linear equality constraints, distributed penalty and barrier function methods are studied \cite{li2014decoupling}. Moreover, based on a game theoretic argument, the asymptotic convergence to the optimal solution has been proved. For distributed optimization with a set of global non-linear inequality constraints, as in this paper, distributed primal-dual methods are studied in \cite{koshal2011multiuser,yuan2011distributed,zhu2012distributed}. A variation of this method is also studied \cite{chang2014distributed}, where each agent has local inequality constraints. However, the proposed methods in \cite{koshal2011multiuser,yuan2011distributed,zhu2012distributed} require a projection of the Lagrangian multipliers onto a simplex at each algorithm iteration, where the simplex itself is computed using a Slater vector. Since computing a Slater vector can be computationally expensive in practice, such distributed primal-dual methods are not suitable when agents have a low computational budget. \subsection{Organization} The rest of this paper is organized as follows. In Section \ref{Sec:Problem_Statement}, we present the list of assumptions and define the Lagrangian function. In Section \ref{Section:Distributed Deterministic Primal-Dual Algorithm}, we describe a distributed regularized primal-dual algorithm and prove a convergence rate. We also prove two asymptotic bounds on the constraint violation of the primal-dual solutions. In Section \ref{Section:Distributed Stochastic Primal-Dual Method}, we describe a distributed stochastic primal-dual algorithm and prove convergence rates in expectation and with a high probability. In Section \ref{Sec:Numerical_Simulations}, we present numerical simulations for both deterministic and stochastic algorithms on random and structured graphs. Lastly, in Section \ref{Discussion_and_Conclusion}, we discuss our results and conclude the paper. \textbf{Notation}. Throughout the paper, we work with the standard $\ell_{2}$-norm which we denote by $\|\cdot\|$. 
We define the sub-differential set of a function $f:{\rm I\!R}^{d}\rightarrow {\rm I\!R}$ as follows \small \begin{align*} &\partial f(x)\\ &\coloneqq \left\{\nabla f \in {\rm I\!R}^{d}\big | f(x)+\langle \nabla f,y-x \rangle \leq f(y),\ \forall\ y\in \text{dom}(f) \right\}. \end{align*}\normalsize Furthermore, we denote the projection of a point $x$ onto the set $\mathcal{X}$ by $\Pi_{\mathcal{X}}(x)\coloneqq \arg\min_{y\in \mathcal{X}}\|x-y\|$. We also use the standard notation $[x]_{+}\coloneqq \max\{0,x\}$ and use the shorthand notation for sets, \textit{e.g.}, $[n]=\{1,2,\cdots,n\}$. For two vectors $x=(x_{1},\cdots,x_{n})$ and $y=(y_{1},\cdots,y_{n})$, $x\preceq y$ means the element-wise inequality $x_{i}\leq y_{i},\forall i\in [n]$. We use the standard asymptotic notation for sequences. If $a_{n}$ and $b_{n}$ are positive sequences, then $a_{n}=\mathcal{O}(b_{n})$ means that $\lim \sup_{n\rightarrow \infty} a_{n}/b_{n}< \infty$, whereas $a_{n} = \Omega(b_{n})$ means that $\lim \inf_{n\rightarrow \infty} a_{n}/b_{n} > 0$. Furthermore, $a_{n}=\widetilde{\mathcal{O}}(b_{n})$ implies $a_{n}=\mathcal{O}(b_{n}\text{poly}\log(b_{n}))$. Moreover $a_{n}=o(b_{n})$ means that $\lim_{n\rightarrow \infty}a_{n}/b_{n}=0$ and $a_{n}=\omega(b_{n})$ means that $\lim_{n\rightarrow \infty} a_{n}/b_{n}=\infty$. Lastly, we have $a_{n}=\Theta(b_{n})$ if $a_{n}=\mathcal{O}(b_{n})$ and $a_{n}=\Omega(b_{n})$. \section{Preliminaries} \label{Sec:Problem_Statement} In this section, we formally state the optimization problem as well as the assumptions that we consider in the rest of the paper. \subsection{The Lagrangian function} We consider distributed primal-dual algorithms for solving the optimization problem characterized in eqs. \eqref{Eq:emperical_1}-\eqref{Eq:emperical_2}, which we repeat here \begin{subequations} \begin{align} \label{Eq:emperical_1} &\min_{x\in \mathcal{X}} f(x)\coloneqq \dfrac{1}{n}\sum_{i=1}^{n} f_{i}(x),\\ \label{Eq:emperical_2} &\mathcal{X}\coloneqq \{x\in {\rm I\!R}^{d}:g_{k}(x)\leq 0,k=1,2,\cdots,m\}. \end{align} \end{subequations} We denote the optimal solution of the problem in eqs. \eqref{Eq:emperical_1}-\eqref{Eq:emperical_2} by $x_{\ast}$. Often, when convenient, we will write the inequality constraints $g_{k}(x)\leq 0$, $k=1,\cdots,m,$ compactly as $g(x)\preceq 0$ with $g(x)\coloneqq (g_{1}(x),\cdots, g_{m}(x))^{T}$. Similarly, we use $\nabla g(x)$ to denote the matrix $\nabla g(x)\coloneqq (\nabla g_{1}(x),\cdots,\nabla g_{m}(x))^{T}\in {\rm I\!R}^{m\times d}$. To describe a distributed optimization algorithm for the constrained optimization problem in eqs. \eqref{Eq:emperical_1}-\eqref{Eq:emperical_2}, we define a Lagrangian function for each agent. Specifically, each function $f_{i}(\cdot)$ in eq. \eqref{Eq:emperical_1} is assigned to one agent in a network of $n$ nodes. The regularized Lagrangian function associated with the $i$-th agent is then defined by, \begin{align} \label{Eq:Lagrangian_function} L_{i}(x,\lambda)\coloneqq f_{i}(x)+\langle \lambda,g(x)\rangle -\dfrac{\eta}{2}\|\lambda\|_{2}^{2}, \end{align}\normalsize for all $i=1,2,\cdots,n$, where $\lambda \coloneqq (\lambda_{1},\cdots,\lambda_{m})$ is the vector of the Lagrangian multipliers. We also define the sub-gradients of the Lagrangian function as follows \begin{align} \label{Eq:sub_gradient_L1} \nabla_{x} L_{i}(x,\lambda)&\coloneqq \nabla f_{i}(x) +\sum_{k=1}^{m}\lambda_{k}\cdot \nabla g_{k}(x) , \\ \label{Eq:sub_gradient_L2} \nabla_{\lambda} L_{i}(x,\lambda)&\coloneqq g(x)-\eta \lambda. 
\end{align}\normalsize Based on the definition of the Lagrangian function $L_{i}(\cdot,\cdot)$ in \eqref{Eq:Lagrangian_function}, we design a distributed algorithm for the following minimax optimization problem \begin{align} \label{Eq:min-max-regularized} \min_{x\in {\rm I\!R}^{d}}\max_{\lambda\in {\rm I\!R}^{m}_{+}}\dfrac{1}{n} \sum_{i=1}^{n}L_{i}(x,\lambda). \end{align}\normalsize \subsection{Assumptions} \label{Assumptions} We make the following assumptions about the feasible set and the underlying functions: \begin{assumption} \textsc{(Compact Feasible Set)} \label{Assumption:Compact_Feasible_Set} The feasible set $\mathcal{X}$ is non-empty, convex, and compact. Furthermore, the feasible set $\mathcal{X}$ is known by each agent. \end{assumption} Let $R\in {\rm I\!R}_{+}$ denote the smallest radius of the $\ell_{2}$-ball centered at the origin that contains the feasible set, \textit{i.e.}, $\mathcal{X}\subseteq {\rm I\!B}_{d}(R)\coloneqq \{x\in {\rm I\!R}^{d}: \|x\|\leq R\}.$ \begin{assumption}\textsc{(Slater Condition)} There exists a Slater vector $x\in \text{relint}(\mathcal{X})$ such that $g_{k}(x)<0$ for all $k=1,2,\cdots,m$. \end{assumption} Under the Slater condition, the primal problem in eqs. \eqref{Eq:emperical_1}-\eqref{Eq:emperical_2} and its dual problem have the same optimal objective value, and a dual optimal solution $\lambda_{\ast}$ exists and is finite, $\|\lambda_{\ast}\|<\infty$, see \cite{nedic2009sub-gradient}. The primal-dual pair $(x_{\ast},\lambda_{\ast})\in\mathcal{X}\times {\rm I\!R}_{+}^{m}$ is a saddle point of the minimax optimization problem in eq. \eqref{Eq:min-max-regularized}, if it satisfies the inequalities \begin{align} \sum_{i=1}^{n} L_{i}(x_{\ast},\lambda)\leq \sum_{i=1}^{n} L_{i}(x_{\ast},\lambda_{\ast})\leq \sum_{i=1}^{n} L_{i}(x,\lambda_{\ast}), \end{align} for all $x\in \mathcal{X}$ and $\lambda\in {\rm I\!R}_{+}^{m}$. Note that the saddle point $(x_{\ast},\lambda_{\ast})$ is not unique, unless at least one function $f_{i}(\cdot)$ is strictly convex. Therefore, in the following, the primal-dual pair $(x_{\ast},\lambda_{\ast})$ denotes a generic saddle point of the minimax problem \eqref{Eq:min-max-regularized}. The following assumption is standard in the optimization literature: \begin{assumption} \label{Assumption:Lipschitz Functions} \textsc{(Lipschitz Functions)} We assume that the functions $f_{i}(\cdot)$ and $g_{k}(\cdot)$ are convex on the Euclidean ball ${\rm I\!B}_{d}(R)$, for all $i\in [n]$ and $k\in [m]$. Further, the sub-gradients $\nabla f_{i}(x)\in \partial f_{i}(x)$ and $\nabla g_{k}(x)\in \partial g_{k}(x),\forall k\in [m]$, are bounded \begin{align*} \|\nabla f_{i}(x)\|&\leq L,\quad \|\nabla g_{k}(x)\|\leq L, \end{align*} for all $x\in {\rm I\!B}_{d}(R)$, where $L<\infty$ is a constant. \end{assumption} In Assumption \ref{Assumption:Lipschitz Functions}, the Lipschitz continuity conditions on the underlying functions are defined on the Euclidean ball ${\rm I\!B}_{d}(R)$, which is a larger set compared to the feasible set $\mathcal{X}$. This extension is essential since we confine the primal variables to the Euclidean ball ${\rm I\!B}_{d}(R)$ instead of $\mathcal{X}$ to simplify the projection in the primal-dual algorithm (cf. Algorithm \ref{CHalgorithm}). The communication network between the $n$ agents is represented by a connected graph $G=(V,E)$, where $V=\{1,2,\cdots,n\}$ is the set of nodes of the graph, and $E\subseteq V\times V$ is the set of edges between those nodes.
Thus, $(i,j)\in E$ if the node (agent) $i$ communicates with the node (agent) $j$, and vice versa. We assume that the connectivity graph is fixed in the sense that it does not change during the algorithm runtime. Associated with the graph $G=(V,E)$, we consider a weight matrix $W\coloneqq [W]_{ij},(i,j)\in V\times V$, for averaging the information that each node receives from its neighbors. We consider the following assumption regarding $W$: \begin{assumption} \label{Assumption:Weight_matrix} \textsc{(Doubly Stochastic Weight Matrix)} The graph $G$ and the weight matrix $W$ satisfy the following conditions: \begin{itemize} \item The graph $G$ is connected. \item The weight matrix $W$ is doubly stochastic, \begin{align*} W\times \mathbbm{1}_{n}&=\mathbbm{1}_{n},\\ \mathbbm{1}^{T}_{n}\times W&=\mathbbm{1}^{T}_{n}, \end{align*} where $\mathbbm{1}_{n}\in {\rm I\!R}^{n}$ is the column vector with all elements equal to one. \item The weight matrix $W$ respects the structure of the graph $G=(V,E)$, \textit{i.e.}, \begin{align*} &W_{ij}>0\quad \text{if}\quad (i,j)\in E\\ &W_{ij}=0\quad \text{if}\quad (i,j)\notin E,\ i\neq j. \end{align*} \end{itemize} \end{assumption} For $n\times n$ doubly stochastic matrices, the singular values can be sorted in a non-increasing fashion $\sigma_{1}(W)\geq \sigma_{2}(W)\geq \cdots \geq \sigma_{n}(W)\geq 0$, where $\sigma_{1}(W)=1$ due to Assumption \ref{Assumption:Weight_matrix}. Throughout the paper, we refer to $1-\sigma_{2}(W)$ as the spectral gap of the matrix $W$. In the following, we review two popular weight matrices $W$ that are proposed in the optimization literature: \paragraph{Lazy Metropolis Matrix} Motivated by the hitting time of lazy Markov chains, Olshevsky \cite{olshevsky2014linear} has proposed the \textit{lazy Metropolis} matrix for the weight matrix, \textit{i.e.}, \begin{align} \label{Eq:LazyasIam} [W]_{ij}=\begin{cases} \dfrac{1}{2\max(d(i)+1,d(j)+1)} & \text{if}\ (i,j)\in E \\ 0, & \text{if}\ (i,j)\not\in E,\ i\neq j, \end{cases} \end{align} with the diagonal entries $[W]_{ii}=1-\sum_{j\neq i}[W]_{ij}$ so that every row sums to one. Here, $d(i)$ and $d(j)$ are the degrees of the nodes $i$ and $j$, respectively. To choose the weights according to eq. \eqref{Eq:LazyasIam}, agents will need to spend an additional round at the beginning of the algorithm broadcasting their degrees to their neighbors. It is easy to verify that the lazy Metropolis matrix $W$ is doubly stochastic, symmetric, and diagonally dominant. Further, due to the symmetry, the singular values are simply the absolute values of the eigenvalues. More importantly, the inverse of the spectral gap has an upper bound proportional to $n^{2}$ \cite{olshevsky2014linear}. Specifically, as shown in \cite{olshevsky2014linear}, regardless of the graph structure $G$, the spectral gap corresponding to the lazy Metropolis weight matrix satisfies \begin{align} \label{Eq:Moreimportnantly} \dfrac{1}{1-\sigma_{2}(W)}\leq 71n^{2}. \end{align} \paragraph{Normalized Graph Laplacian} Another popular choice of the weight matrix is based on the graph Laplacian \cite{duchi2012dual}. Consider the graph adjacency matrix $A$, where $A_{ij}=1$ if $(i,j)\in E$, and $A_{ij}=0$ otherwise. Further, consider the diagonal matrix $D\coloneqq \text{Diag}(d_{1},\cdots,d_{n})$, where $d_{i}\coloneqq \sum_{j=1}^{n}A_{ij}$. The normalized graph Laplacian is defined as \begin{align*} \mathcal{L}(G)\coloneqq I-D^{-1/2}A D^{-1/2}. \end{align*} Now, let $\delta\coloneqq \max_{i\in V}\sum_{j=1}^{n}A_{ij}$ denote the maximum degree.
When the graph is degree regular, \textit{i.e.}, $d_{i}=d$ for all $i\in [n]$, the following weight matrix $W$ is proposed in \cite{duchi2012dual}, \begin{align*} W\coloneqq I-\dfrac{d}{d+1}\mathcal{L}. \end{align*} Further, for the case of non-degree regular graphs, the following weight matrix is proposed \begin{align*} W\coloneqq I-\dfrac{1}{d_{\max}+1}D^{1/2}\mathcal{L}D^{1/2}, \end{align*} where $d_{\max}\coloneqq \max_{i\in V}d_{i}$. \section{Distributed Deterministic Primal-Dual Algorithm} \label{Section:Distributed Deterministic Primal-Dual Algorithm} In our proposed distributed primal-dual algorithm, the $i$-th agent maintains a local copy of the primal variables $x_{i}(t) \in {\rm I\!R}^{d}$ and the Lagrangian multipliers $\lambda_i(t)\in {\rm I\!R}^{m}_{+}$. Here, $x_{i}(t)$ and $\lambda_{i}(t)$ stand for the $i$-th agent's estimates of the primal variable $x$ and the dual variable $\lambda$ after $t$ steps. Therefore, $x_i(t)$ and $\lambda_i(t)$ have the same dimensions as the primal variable $x$ and the dual variable $\lambda$. The initialization and update rules of $x_i(t)$ and $\lambda_i(t)$ are described in Algorithm \ref{CHalgorithm}. \begin{algorithm}[t!] \caption{\footnotesize{\textsc{Distributed Regularized Primal-Dual Method}}} \label{CHalgorithm} \begin{algorithmic}[1] \State \textbf{Initialize}: $x_{i}(0)=0\in {\rm I\!B}_{d}(R)$, $\lambda_{i}(0)=0\in {\rm I\!R}_{+}^{m}, \forall i\in V$ and a non-negative, non-increasing step size sequence $\{\alpha(t)\}_{t=0}^{\infty}$. \For{$t=0,1,2,\cdots$ at the $i$-th node} \State Update the auxiliary primal and dual variables \begin{subequations} \begin{align} \label{Eq:aux_1} y_{i}(t)&=x_{i}(t)-\alpha(t)\nabla_{x}L_{i}(x_{i}(t),\lambda_{i}(t)),\\ \label{Eq:aux_2} \gamma_{i}(t)&=\lambda_{i}(t)+\alpha(t)\nabla_{\lambda}L_{i}(x_{i}(t),\lambda_{i}(t)). \end{align}\normalsize \end{subequations} \State Run the consensus steps \begin{subequations} \begin{align} \label{Eq:That_is_the_way} x_{i}(t+1)&= \Pi_{{\rm I\!B}_{d}(R)}\left(\sum_{j=1}^{n}[W]_{ij}y_{j}(t)\right)\\ \label{Eq:EasyProjection1} &=\dfrac{R\cdot \left(\sum_{j=1}^{n}[W]_{ij}y_{j}(t)\right)}{\max\{R,\|\sum_{j=1}^{n}[W]_{ij}y_{j}(t)\|\}}, \\ \label{Eq:EasyProjection2} \lambda_{i}(t+1)&=\Pi_{{\rm I\!R}^{m}_{+}}\left(\sum_{j=1}^{n}[W]_{ij}\gamma_{j}(t)\right). \end{align}\normalsize \end{subequations} \State Compute the weighted average: $\widehat{x}_{i}(t)={\sum_{s=0}^{t+1}\alpha(s)x_{i}(s)\over \sum_{s=0}^{t+1}\alpha(s)}$ for all $i\in V$. \EndFor \State \textbf{Output}: $\widehat{x}_{i}(t)$ for all $i\in V$. \end{algorithmic} \end{algorithm} We remark that Algorithm \ref{CHalgorithm} is an example of an ``anytime algorithm'', meaning that it can be stopped at any time and it returns $\widehat{x}_{i}(t)$ as the solution of the $i$-th agent to the optimization problem in eqs. \eqref{Eq:emperical_1}-\eqref{Eq:emperical_2}. Moreover, the solution improves as $t$ increases in the sense that the objective value $f(\widehat{x}_{i}(t))$ of the $i$-th agent tends to the optimal objective value $f(x_{\ast})$ for all $i\in [n]$ as $t\rightarrow \infty$. In Algorithm \ref{CHalgorithm}, the projection onto the Euclidean ball ${\rm I\!B}_{d}(R)$ is essential since without it, the primal variables $x_{i}(t+1)$ in eq. \eqref{Eq:EasyProjection1} can take any value from ${\rm I\!R}^{d}$.
In that case, the Lipschitz continuity of the functions $f_{i}(\cdot)$ and $g_{k}(\cdot)$ in Assumption \ref{Assumption:Lipschitz Functions} must be extended to the entire Euclidean space ${\rm I\!R}^{d}$, which is too stringent for many functions. However, the projections onto the Euclidean ball ${\rm I\!B}_{d}(R)$ and the non-negative orthant ${\rm I\!R}_{+}^{m}$ in eqs. \eqref{Eq:EasyProjection1}-\eqref{Eq:EasyProjection2}, respectively, have closed form solutions. Therefore, each iteration of Algorithm \ref{CHalgorithm} can be computed efficiently. Notice that since the Euclidean ball ${\rm I\!B}_{d}(R)$ contains the feasible set $\mathcal{X}$, the inequality constraints in eq. \eqref{Eq:emperical_2} can be violated. To provide a guarantee on the asymptotic feasibility of the solutions of Algorithm \ref{CHalgorithm}, we establish an upper bound on the constraint violation and prove that it goes to zero as the number of steps goes to infinity, $t\rightarrow \infty$ (cf. Theorem \ref{Thm:3}). \begin{remark} \label{Remark} To obtain a concise convergence rate, in Algorithm \ref{CHalgorithm} we use the special initialization $x_{i}(0)=0\in {\rm I\!B}_{d}(R)$, $\lambda_{i}(0)=0\in {\rm I\!R}_{+}^{m}$. Without this restriction, the convergence analysis of Algorithm \ref{CHalgorithm} remains valid, but the convergence rates differ from what we present in this paper. In practice, Algorithm \ref{CHalgorithm} can be initialized from any point in ${\rm I\!B}_{d}(R)\times {\rm I\!R}_{+}^{m}$, as we demonstrate in the numerical simulations (cf. Section \ref{Sec:Numerical_Simulations}). \end{remark} \subsection{Comparison with Related Primal-Dual Methods} Augmented Lagrangian methods for constrained optimization have been studied extensively \cite{koshal2011multiuser,mahdavi2012trading,mahdavi2012stochastic,yuan2016regularized}. In \cite{mahdavi2012trading}, a regularized online primal-dual method is studied, and it has been shown that it achieves a sub-linear `regret' and satisfies the inequality constraints asymptotically. However, the analysis of \cite{mahdavi2012trading} is not applicable to the multi-agent setting since it does not provide a guarantee for the boundedness of the norm of the Lagrangian multipliers $\|\lambda_{i}(t)\|$. It turns out that bounding this norm is essential for analyzing the `consensus terms' (cf. Lemma \ref{Lemma:Consensus}). To ensure the boundedness of the norm of the Lagrangian multipliers in the multi-agent setting, a distributed regularized primal-dual algorithm similar to Algorithm \ref{CHalgorithm} is proposed in \cite{yuan2016regularized}. However, the optimization problem there only includes one constraint $g(x)\leq 0$ (\textit{i.e.}, $m=1$) under the additional assumption that $\min_{x:g(x)=0} \|\nabla g(x)\|_{2}\geq \rho, \nabla g(x)\in \partial g(x)$, for some $\rho>0$. Moreover, the analysis of the convergence rate in \cite{yuan2016regularized} depends on $\rho$. Specifically, the difference between the function value at the final estimate and the optimal value is upper bounded by an expression which is proportional to $1/\rho$. Therefore, when $\rho$ is small, the upper bound is potentially very loose. More importantly, the convergence rate of \cite{yuan2016regularized} has a network scaling of $\mathcal{O}(n^{3})$, compared to the $\mathcal{O}(\log^{3\over 2}(n))$ scaling that we prove in this paper (cf. Theorem \ref{Thm:2}).
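Before discussing how related methods bound the Lagrangian multipliers via a simplex projection, let us make the per-iteration cost of Algorithm \ref{CHalgorithm} concrete. The following sketch (in Python with NumPy; the function and variable names are ours and purely illustrative, not taken from any released implementation) spells out the update in eqs. \eqref{Eq:aux_1}-\eqref{Eq:EasyProjection2} from the viewpoint of a single agent. Both projections reduce to a rescaling and a componentwise clipping, so no inner optimization problem has to be solved.
\begin{verbatim}
import numpy as np

def project_ball(z, R):
    # Closed-form projection onto the Euclidean ball of radius R.
    return R * z / max(R, np.linalg.norm(z))

def agent_gradient_step(x_i, lam_i, grad_f_i, grad_g, g, alpha, eta):
    # Local sub-gradient step on the regularized Lagrangian
    # (auxiliary variables y_i(t) and gamma_i(t)).
    y_i = x_i - alpha * (grad_f_i(x_i) + grad_g(x_i).T @ lam_i)
    gamma_i = lam_i + alpha * (g(x_i) - eta * lam_i)
    return y_i, gamma_i

def consensus_and_project(i, Y, Gamma, W, R):
    # Weighted averaging over the neighbors, followed by the two cheap
    # projections: ball projection for the primal variable and
    # componentwise clipping at zero for the dual variable.
    x_next = project_ball(W[i] @ np.asarray(Y), R)
    lam_next = np.maximum(W[i] @ np.asarray(Gamma), 0.0)
    return x_next, lam_next
\end{verbatim}
Only the weighted sums with the row $[W]_{i\cdot}$ involve communication, since $[W]_{ij}\neq 0$ only for the neighbors of agent $i$.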
To bound the norm of the Lagrangian multipliers in distributed primal-dual methods, a different strategy is pursued in \cite{yuan2011distributed,zhu2012distributed, chang2014distributed,koshal2011multiuser}. Specifically, consider a Slater vector $\tilde{x}\in \text{relint}(\mathcal{X})$, \textit{i.e.}, a vector that satisfies \begin{align} \label{Eq:Assumption} g(\tilde{x})\prec 0. \end{align} Let $\mu \coloneqq \min_{k=1,2,\cdots,m}\{-g_{k}(\tilde{x})\}$ and define \begin{align*} \mathfrak{F}(\lambda)\coloneqq \inf_{x\in \mathcal{X}} f(x)+\langle\lambda,g(x)\rangle. \end{align*} In the primal-dual algorithms proposed in \cite{yuan2011distributed,zhu2012distributed, chang2014distributed,koshal2011multiuser}, each agent projects its local Lagrangian multipliers $\lambda_{i}(t)$ onto the following simplex \begin{align} \Lambda \coloneqq \{\lambda\in {\rm I\!R}_{+}^{m}:{\|\lambda\|_{1}\leq \mu^{-1}\cdot(f(\tilde{x})-\mathfrak{F}(\hat{\lambda}))}\}, \end{align} where $\hat{\lambda}\in {\rm I\!R}_{+}^{m}$ is an arbitrary vector. However, there are two drawbacks with the projection onto the simplex: First, to compute the simplex $\Lambda$, a Slater vector $\tilde{x}$ must be computed, which is inefficient.\footnote{To guarantee a zero duality gap, we also require the Slater condition (or any other constraint qualification) to hold. However, computing a Slater vector is not needed in Algorithm \ref{CHalgorithm}.} For instance, to compute a Slater vector $\tilde{x}$ for a feasible set defined by linear inequality constraints $\mathcal{X}=\{x\in {\rm I\!R}^{d}:\langle a_{k},x\rangle\leq b_{k},k=1,2,\cdots,m\}$ where $a_{k}\in {\rm I\!R}^{d},b_{k}\in {\rm I\!R}$, we must solve the following optimization problem \begin{align} \label{Eq:Slater_vector} \tilde{x}=\arg \max_{x\in \mathcal{X}}\ \min_{k\in [m]}\ \left(b_{k}-\langle a_{k},x\rangle\right), \end{align} where $b\coloneqq (b_{1},\cdots,b_{m})^{T}\in {\rm I\!R}^{m\times 1}$ and $A\coloneqq (a_{1};\cdots;a_{m})\in {\rm I\!R}^{m\times d}$. Provided that there exists a vector $\tilde{x}$ that satisfies $A\tilde{x}\prec b$, the solution of the max-min problem \eqref{Eq:Slater_vector} yields a Slater vector. Second, the projection onto the simplex $\Lambda$ requires solving a separate minimization problem. In comparison, the projection in Algorithm \ref{CHalgorithm} is onto the non-negative orthant ${\rm I\!R}^{m}_{+}$, which can be computed efficiently by replacing each negative component of the vector $\sum_{j=1}^{n}[W]_{ij}\gamma_{j}(t)$ in Step 4 of Algorithm \ref{CHalgorithm} with zero. \subsection{Convergence Rate and Constraint Violation Bounds} \label{Section:Distributed Regularized Primal-Dual Algorithm} Here, we prove a convergence rate for Algorithm \ref{CHalgorithm}, and also establish two constraint violation bounds. We defer the proofs of the theorems to Appendix \ref{App:Proofs_of_Main_Results}. We first establish a general upper bound for the cost function in eq.
\eqref{Eq:emperical_1} for an arbitrary choice of the stepsize $\alpha(t)$ and the regularization parameter $\eta$: \begin{lemma} \label{Lem:1} After $T\in \mathbb{N}$ iterations of Algorithm \ref{CHalgorithm}, the estimate $\widehat{x}_{i}(T)$ of the primal variable of each agent $i=1,2,\cdots,n$ satisfies \footnotesize \begin{align} \label{Eq:Prop_inequality} &f(\widehat{x}_{i}(T))-f(x_{\ast})\leq \dfrac{1}{\sum_{t=0}^{T-1}\alpha(t)}\Bigg[\dfrac{1}{2} \|x_{\ast}\|^{2}\\ \nonumber &\hspace{-2mm}+\dfrac{L}{n} \sum_{t=0}^{T-1}\sum_{j=1}^{n}\alpha(t) \|x_{i}(t)-x_{j}(t)\|-\dfrac{\eta}{2n}\sum_{t=0}^{T-1}\sum_{j=1}^{n}\alpha(t)\|\lambda_{j}(t)\|^{2}\\ \nonumber &\hspace{-2mm} +\dfrac{1}{2n}\sum_{t=0}^{T-1}\sum_{j=1}^{n}\alpha^{2}(t)\left(\|\nabla_{x}L_{j}(x_{j}(t),\lambda_{j}(t))\|^{2}+\|\nabla_{\lambda}L_{j}(x_{j}(t),\lambda_{j}(t))\|^{2}\right)\Bigg]. \end{align}\normalsize \end{lemma} \begin{proof} See Appendix \ref{Proof of Proposition 1}. \end{proof} To parse the upper bound in Lemma \ref{Lem:1}, we examine each term separately. The first term is intuitive as it measures the distance between the initial point (which is chosen to be the origin $x_{i}(0)=0,\forall i\in [n]$, in Algorithm \ref{CHalgorithm}) and an optimal point $x_{\ast}$. The second term measures the distance between the primal variables of different agents in the network since it includes the pairwise difference $\|x_{i}(t)-x_{j}(t)\|$. This term, often referred to as the ``consensus'' term in the distributed optimization literature, is related to the spectral gap of the weight matrix $W$. The third term is due to the regularization term that is included in the Lagrangian function \eqref{Eq:Lagrangian_function}. Although choosing an arbitrarily large regularization parameter $\eta$ results in a smaller upper bound, the trade-off between the convergence rate of the algorithm and the violation of the inequality constraints \eqref{Eq:emperical_2} prohibits a large value for $\eta$ (see Theorem \ref{Thm:3} below). The last term in the upper bound \eqref{Eq:Prop_inequality} includes the norms of the sub-gradients defined in eqs. \eqref{Eq:sub_gradient_L1}-\eqref{Eq:sub_gradient_L2}. In earlier studies of primal-dual methods, these norms were bounded under different assumptions on the feasible set, see \cite{yuan2011distributed,zhu2012distributed, chang2014distributed}. The challenge is due to the fact that the Lagrangian multipliers $\lambda_{i}(t)$ in the sub-gradients may take large values. Therefore, to ensure that the vector of Lagrangian multipliers has a bounded norm, various assumptions on the feasible set and the inequality constraints were considered. In our analysis, the norm of the Lagrangian multipliers is controlled by adding the regularization term to the Lagrangian function, see \eqref{Eq:Lagrangian_function}. Based on Lemma \ref{Lem:1}, we derive the following explicit convergence rate for Algorithm \ref{CHalgorithm} using a decreasing stepsize: \begin{theorem} \textsc{(Convergence rate)} \label{Thm:2} Consider $T$ iterations of Algorithm \ref{CHalgorithm} with the stepsize $\alpha(t)={R\over \sqrt{t+1}}$ and a regularization parameter $\eta$ such that $\eta\alpha(t)\leq {1\over 2}$.
The estimate $\widehat{x}_{i}(T)$ of the $i$-th agent satisfies \small \begin{align} \label{Eq:ConvergenceRate_Rationa0} f(\widehat{x}_{i}(T))-f(x_{\ast})&\leq \dfrac{RC\log(T)}{\sqrt{T}-1}, \quad T\geq 2, \end{align}\normalsize for all $i=1,2,\cdots,n$, where $C$ is defined as follows, \small \begin{align} \label{Eq:CONSTANT_C} C \coloneqq 1+{5\over 2} mL^{2}R^{2}+20 L^{2} \left(1+\dfrac{nm^{3/2}LR}{\eta}\right)^{2}\left(\dfrac{\log(T\sqrt{nT})}{1-\sigma_{2}(W)}\right)^{3\over 2}. \end{align}\normalsize \end{theorem} \begin{proof} See Appendix \ref{App:TheRestofTheProof}. \end{proof} Let us emphasize a few points about Theorem \ref{Thm:2}. The constraint $\eta\alpha(t)\leq {1\over 2}$ on the regularization parameter can be easily satisfied since $\{\alpha(t)\}_{t=0}^{\infty}$ is a decreasing sequence and usually takes small values. In addition, with a regularization parameter $\eta=\Theta(\sqrt{n})$, the scaling of the algorithm is $\mathcal{O}(\log^{3\over 2}(\sqrt{n}))$, which is slightly worse than that of the dual averaging algorithm by a factor of $\log(\sqrt{n})$ \cite{duchi2012dual}. Moreover, when $\eta=\Theta(\sqrt{m})$, the upper bound in eq. \eqref{Eq:ConvergenceRate_Rationa0} grows linearly in the number of constraints $m$. It is interesting to see whether this linear growth rate can be improved. In the upper bound \eqref{Eq:ConvergenceRate_Rationa0}, the convergence rate of the algorithm is given by $\widetilde{\mathcal{O}}(T^{-{1\over 2}})$ when $\eta$ is independent of $T$. It is well-known that a lower bound for the regret of centralized first-order methods with non-smooth objective functions has an order of $\Omega(T^{-1/2})$, see \cite{agarwal2009information}. Therefore, when $\eta$ is independent of the number of steps $T$, Algorithm \ref{CHalgorithm} is order optimal up to a polynomial factor of $\log(T)$. As mentioned in Section \ref{Sec:Problem_Statement}, the local variable $\widehat{x}_{i}(T)$ of each agent in Algorithm \ref{CHalgorithm} is computed via the projection of the primal variables onto the Euclidean ball ${\rm I\!B}_{d}(R)$ that contains the feasible set $\mathcal{X}$. Therefore, in principle the inequality constraints can be violated. In the next theorem, we show that the upper bound on the constraint violation is related to the regularization parameter $\eta$. \begin{theorem}\textsc{(Constraint Violation Bound)} \label{Thm:3} Consider $T$ iterations of Algorithm \ref{CHalgorithm} with the stepsize $\alpha(t)={R \over \sqrt{t+1}}$ and a regularization parameter $\eta$ such that $\eta\alpha(t)\leq {1\over 2},\forall t\in [T]$. Then, the constraint violation has the following asymptotic bound \small \begin{align} \label{Eq:FirstConstraintViolationBound} \left\|\left[\dfrac{1}{n}\sum_{i=1}^{n}g(\widehat{x}_{i}(T)) \right]_{+}\right\|^{2}_{2}=\mathcal{O}(\eta). \end{align}\normalsize Furthermore, if the optimal solution $x_{\ast}$ is strictly feasible, \textit{i.e.}, $g(x_{\ast})\prec 0$, we have \small \begin{align} \label{Eq:SecondConstraintViolationBound} \left\|\left[\dfrac{1}{n}\sum_{i=1}^{n}g(\widehat{x}_{i}(T)) \right]_{+}\right\|^{2}_{2}=\mathcal{O}\left(\dfrac{\eta \log(T)}{\sqrt{T}}\right). \end{align}\normalsize \end{theorem} \begin{proof} The proof is deferred to Appendix \ref{Proof of Theorem 3}.
\end{proof} From Theorems \ref{Thm:2} and \ref{Thm:3}, we observe that when one of the constraints is binding at the optimal solution, \textit{i.e.}, $g_{k}(x_{*})=0$ for at least one coordinate $k\in [m]$, there is a tension between the convergence rate in eqs. \eqref{Eq:ConvergenceRate_Rationa0}-\eqref{Eq:CONSTANT_C} and the decay rate of the constraint violation bound in eq. \eqref{Eq:FirstConstraintViolationBound}. Clearly, by adopting a small $\eta$, we obtain a small constraint violation bound. However, a small $\eta$ yields a large upper bound in eqs. \eqref{Eq:ConvergenceRate_Rationa0}-\eqref{Eq:CONSTANT_C}. To examine this trade-off more precisely, suppose $\eta=\Theta(T^{-r})$ for $r\in (0,1/2)$. In this case, the convergence rate as characterized in eqs. \eqref{Eq:ConvergenceRate_Rationa0}-\eqref{Eq:CONSTANT_C} is $\widetilde{\mathcal{O}}(1/T^{{1\over 2}-r})$, while the constraint violation in eq. \eqref{Eq:FirstConstraintViolationBound} becomes $\mathcal{O}(1/T^{r})$. Interestingly, when the inequality constraints are satisfied strictly at an optimal point, \textit{i.e.}, $g_{k}(x_{\ast})< 0, \forall k\in [m]$, then the constraint violation bound in eq. \eqref{Eq:SecondConstraintViolationBound} decays to zero as $T\rightarrow \infty$ even for $\eta=\mathcal{O}(1)$. Consequently, there is no trade-off between the convergence rate and the constraint violation when $g_{k}(x_{\ast})<0, \forall k\in [m]$. \section{Distributed Stochastic Primal-Dual Method} \label{Section:Distributed Stochastic Primal-Dual Method} \begin{algorithm}[t!] \caption{\footnotesize{\textsc{Distributed Stochastic Primal-Dual Method}}} \label{CHalgorithm-1} \begin{algorithmic}[1] \State \textbf{Initialize}: $x_{i}(0)=0\in {\rm I\!B}_{d}(R)$, $\lambda_{i}(0)=0\in {\rm I\!R}_{+}^{m}, \forall i\in V$ and a non-negative, non-increasing stepsize sequence $\{\alpha(t)\}_{t=0}^{\infty}$. Select $p_{i}(0)={\tt{Uniform}}\{1,2,\cdots,m\}$. \For{$t=0,1,2,\cdots$ at the $i$-th node $i\in V$} \State Draw a random index $K_{i}(t)\in\{1,2,\cdots,m\}$ according to the distribution $K_{i}(t)\sim p_{i}(t)$. \State Update the primal and dual variables \begin{align*} y_{i}(t)&=x_{i}(t)-\alpha(t)\nabla_{x}\widehat{L}_{i}(x_{i}(t),\lambda_{i}(t);K_{i}(t))\\ \gamma_{i}(t)&=\lambda_{i}(t)+\alpha(t)\nabla_{\lambda}\widehat{L}_{i}(x_{i}(t),\lambda_{i}(t)). \end{align*} \State Run the consensus step \begin{align*} x_{i}(t+1)&= \Pi_{{\rm I\!B}_{d}(R)}\left(\sum_{j=1}^{n}[W]_{ij}y_{j}(t)\right)\\ &=\dfrac{R\cdot \left(\sum_{j=1}^{n}[W]_{ij}y_{j}(t)\right)}{\max\{R,\|\sum_{j=1}^{n}[W]_{ij}y_{j}(t)\|_{2}\}}, \\ \lambda_{i}(t+1)&=\Pi_{{\rm I\!R}_{+}^{m}}\left(\sum_{j=1}^{n}[W]_{ij}\gamma_{j}(t)\right). \end{align*} \State Update $p_{i}(t+1)={1\over \|\lambda_{i}(t+1)\|_{1}}(\lambda_{i,1}(t+1),\cdots,\lambda_{i,m}(t+1)),\forall i\in V$. Set $p_{i}(t+1)={\tt{Uniform}}\{1,\cdots,m\}$ if $\lambda_{i}(t+1)=0$. \State Compute the weighted average: $\widehat{x}_{i}(t)={\sum_{s=0}^{t+1}\alpha(s)x_{i}(s)\over \sum_{s=0}^{t+1}\alpha(s)}$ for all $i\in V$. \EndFor \State \textbf{Output}: $\widehat{x}_{i}(t)$ for all $i\in V$. \end{algorithmic} \end{algorithm} As mentioned in the previous section, the projection onto the Euclidean ball ${\rm I\!B}_{d}(R)$ in eq. \eqref{Eq:EasyProjection1} of Algorithm \ref{CHalgorithm} has a closed form expression, and thus it can be computed efficiently. However, the algorithm may still be computationally inefficient, especially when there is a large number of constraints.
This is due to the fact that the sub-gradients of all the constraints must be calculated in eq. \eqref{Eq:sub_gradient_L1}. To resolve this issue, in this section we propose a distributed stochastic primal-dual algorithm. In contrast to Algorithm \ref{CHalgorithm}, which requires the sub-gradients of all inequality constraints at each step, the stochastic algorithm only requires one sub-gradient, namely the sub-gradient of a single constraint drawn at random with probability proportional to its current Lagrangian multiplier. More precisely, at each step $t=0,1,2,\cdots$ of the stochastic algorithm, we prescribe a distribution $p_{i}(t)\coloneqq (p_{i,1}(t),p_{i,2}(t),\cdots,p_{i,m}(t)),\sum_{k=1}^{m}p_{i,k}(t)=1$ for each agent on the set of labels $\{1,2,\cdots,m\}$ associated with the inequality constraints \eqref{Eq:emperical_2}. The distribution $p_{i}(t)$ of each agent is determined based on the observed Lagrangian multipliers at time $t$, \textit{i.e.}, \begin{align*} p_{i,k}(t)\coloneqq \lambda_{i,k}(t)/\|\lambda_{i}(t)\|_{1}, \quad \|\lambda_{i}(t)\|_{1}\not=0. \end{align*} When the Lagrangian multipliers are all zero, \textit{i.e.}, $\lambda_{i}(t)=0\in {\rm I\!R}_{+}^{m}$, we consider a uniform distribution, \textit{i.e.}, $p_{i}(t)={\tt{Uniform}}\{1,\cdots,m\}$. Let $K_{i}(t)$ denote a random variable with the distribution $p_{i}(t)$, that is, $p_{i,k}(t)={\rm I\!P}[K_{i}(t)=k]$. For each given index $k\in \{1,2,\cdots,m\}$ and for the pair of variables $(x,\lambda)\in {\rm I\!B}_{d}(R)\times {\rm I\!R}_{+}^{m}$, we also let \begin{subequations} \begin{align} \label{Eq:Label1} \nabla_{x}\widehat{L}_{i}(x,\lambda;k)&\coloneqq \nabla f_{i}(x)+\|\lambda\|_{1}\nabla g_{k}(x)\\ \label{Eq:Label2} \nabla_{\lambda}\widehat{L}_{i}(x,\lambda)& \coloneqq g(x)-\eta \lambda. \end{align} \end{subequations} Equipped with these definitions, in Algorithm \ref{CHalgorithm-1} we present the distributed stochastic primal-dual algorithm. Let $\mathfrak{F}_{t}$ denote the $\sigma$-algebra generated by all the random variables $\{(x_{i}(s),\lambda_{i}(s),K_{i}(s))\}_{s=0}^{t-1}$. Conditioned on $\mathfrak{F}_{t}$, the stochastic sub-gradient $\nabla_{x}\widehat{L}_{i}(x_{i}(t),\lambda_{i}(t);K_{i}(t))$ defined in eq. \eqref{Eq:Label1} is an \textit{unbiased} estimate of the deterministic sub-gradient $\nabla_{x}L_{i}(x_{i}(t),\lambda_{i}(t))$ defined in eq. \eqref{Eq:sub_gradient_L1}. In particular, by computing the expectation of the estimator $\nabla_{x}\widehat{L}_{i}(x_{i}(t),\lambda_{i}(t);K_{i}(t))$ with respect to the distribution $p_{i}(t)$, we obtain the deterministic sub-gradient \begin{align} \nonumber &{\rm I\!E}_{p_{i}(t)}[\nabla_{x}\widehat{L}_{i}(x_{i}(t),\lambda_{i}(t);K_{i}(t))|\mathfrak{F}_{t}]\\ \nonumber &=\sum_{k=1}^{m}\left(\nabla f_{i}(x_{i}(t))+\|\lambda_{i}(t)\|_{1}\nabla g_{k}(x_{i}(t))\right)p_{i,k}(t)\\ \nonumber &=\nabla f_{i}(x_{i}(t))+ \sum_{k=1}^{m}\|\lambda_{i}(t)\|_{1}\nabla g_{k}(x_{i}(t))\dfrac{\lambda_{i,k}(t)}{\|\lambda_{i}(t)\|_{1}}\\ \nonumber &=\nabla f_{i}(x_{i}(t))+ \sum_{k=1}^{m}\lambda_{i,k}(t)\nabla g_{k}(x_{i}(t))\\ \label{Eq:Unbiased_Estimator} &=\nabla_{x}L_{i}(x_{i}(t),\lambda_{i}(t)). \end{align} In addition, the stochastic sub-gradient $\nabla_{\lambda}\widehat{L}_{i}(x_{i}(t),\lambda_{i}(t))$ coincides with the deterministic definition in eq. \eqref{Eq:sub_gradient_L2}, \textit{i.e.}, \begin{align} \label{Eq:Unbiased_Estimator01} \nabla_{\lambda}\widehat{L}_{i}(x_{i}(t),\lambda_{i}(t))=\nabla_{\lambda}L_{i}(x_{i}(t),\lambda_{i}(t)).
\end{align} \subsection{Convergence Rate and Constraint Violation Bounds} As we demonstrated in eq. \eqref{Eq:Unbiased_Estimator}, the stochastic sub-gradient defined in eq. \eqref{Eq:Label1} is an unbiased estimator of the deterministic sub-gradient. We thus leverage the method of bounded martingale differences to derive a high probability convergence bound for Algorithm \ref{CHalgorithm-1}. \begin{theorem} \label{Thm:High_Probability_Bound} Consider Algorithm \ref{CHalgorithm-1} with the stepsize $\alpha(t)={R\over \sqrt{t+1}}$ and a regularization parameter $\eta$ such that $\eta\alpha(t)\leq {1\over 2}$. Let $\widehat{x}_{i}(T)$ denote the estimate of the $i$-th agent at the end of $T$ iterations. Then, \begin{itemize} \item [(i)] With probability at least $1-{1\over T}$, \begin{align} \label{Eq:Stochastic} f(\widehat{x}_{i}(T))&-f(x_{\ast})\\ \nonumber &\leq \dfrac{\log(T)}{\sqrt{T}-1}\left(RC+\dfrac{4\sqrt{10}nm^{2}L^{2}R^{3}}{\eta}\right), \end{align} for all $i\in V$ and all $T\geq 2$, where $C$ is the constant defined in eq. \eqref{Eq:CONSTANT_C}. \item[(ii)] The expected convergence rate is given by \begin{align} &{\rm I\!E}[f(\widehat{x}_{i}(T))-f(x_{\ast})]\leq \dfrac{RC\log(T)}{\sqrt{T}-1}. \end{align} \end{itemize} \end{theorem} From eq. \eqref{Eq:Stochastic}, we observe that $f(\widehat{x}_{i}(T))\rightarrow f(x_{\ast})$ with probability approaching one as $T\rightarrow \infty$. Moreover, by comparing the high probability bound in eq. \eqref{Eq:Stochastic} with the convergence rate of the deterministic algorithm in \eqref{Eq:ConvergenceRate_Rationa0}, we see that both Algorithms \ref{CHalgorithm} and \ref{CHalgorithm-1} yield the same convergence rate of $\mathcal{O}(\log(T)/\sqrt{T})$. This is due to the fact that in both algorithms, the averaging step (Step 4 of Algorithm \ref{CHalgorithm} and Step 5 of Algorithm \ref{CHalgorithm-1}) is the bottleneck of the convergence rate. In the next theorem, we address the constraint violation performance of Algorithm \ref{CHalgorithm-1}. The proof is omitted since it is similar to the proofs of Theorems \ref{Thm:3} and \ref{Thm:High_Probability_Bound}. \begin{theorem} Consider $T$ iterations of Algorithm \ref{CHalgorithm-1} with the stepsize $\alpha(t)={R \over \sqrt{t+1}}$ and a regularization parameter $\eta$ such that $\eta\alpha(t)\leq {1\over 2},\forall t\in [T]$. With probability at least $1-{1\over T}$, the constraint violation has the following asymptotic bound \small \begin{align} \left\|\left[\dfrac{1}{n}\sum_{i=1}^{n}g(\widehat{x}_{i}(T)) \right]_{+}\right\|^{2}_{2}=\mathcal{O}(\eta). \end{align}\normalsize Furthermore, if the optimal solution $x_{\ast}$ is strictly feasible, \textit{i.e.}, $g(x_{\ast})\prec 0$, we have \small \begin{align} \left\|\left[\dfrac{1}{n}\sum_{i=1}^{n}g(\widehat{x}_{i}(T)) \right]_{+}\right\|^{2}_{2}=\mathcal{O}\left(\dfrac{\eta \log(T)}{\sqrt{T}}\right). \end{align}\normalsize \end{theorem} \section{Numerical Experiments} \label{Sec:Numerical_Simulations} In this section, we report numerical simulations studying the convergence of the regularized primal-dual method for distributed regression on synthetic data. To demonstrate the performance of Algorithm \ref{CHalgorithm}, we consider two examples of smooth and non-smooth classifiers.
\begin{itemize} \item \textbf{Smooth case}: we consider a logistic loss function with a norm constraint as well as a set of box constraints \begin{subequations} \begin{align} \label{Eq:SVM} &\min_{x\in {\rm I\!R}^{d}} f(x)\coloneqq \dfrac{1}{n}\sum_{i=1}^{n}\log(1+\exp(b_{i}\langle a_{i},x\rangle) )\\ \nonumber &\text{subject to} \quad g_{k}(x)=-l-x_{k}\leq 0, \\ \nonumber &\hspace{17mm} g_{k+d}(x)=x_{k}-u\leq 0, \quad k=1,\cdots,d, \\ \label{Eq:SVM_cons1} &\hspace{17mm} \|x\|_{2}\leq 1, \end{align}\normalsize \end{subequations} where $(a_{i},b_{i})\in {\rm I\!R}^{d}\times \{-1,+1\}$. \item \textbf{Non-smooth case}: we consider a hinge loss function with a norm constraint as well as a set of box constraints \begin{subequations} \begin{align} \label{Eq:SVM1} &\min_{x\in{\rm I\!R}^{d}} f(x)\coloneqq \dfrac{1}{n}\sum_{i=1}^{n}\left[1-b_{i}\langle a_{i},x\rangle\right]_{+}\\ \nonumber &\text{subject to} \quad g_{k}(x)=-l-x_{k}\leq 0, \\ \nonumber &\hspace{17mm} g_{k+d}(x)=x_{k}-u\leq 0, \quad k=1,\cdots,d\\ \label{Eq:SVM_cons2} &\hspace{17mm} \|x\|_{2}\leq 1, \end{align}\normalsize \end{subequations} where $(a_{i},b_{i})\in {\rm I\!R}^{d}\times \{-1,+1\}$. \end{itemize} The optimization problems of the type \eqref{Eq:SVM}-\eqref{Eq:SVM_cons1} and \eqref{Eq:SVM1}-\eqref{Eq:SVM_cons2} are common in the context of classification in supervised learning, where $\{(a_{1},b_{1}),\cdots,(a_{n},b_{n})\}$ is the set of $n$ training data such that $a_{i}$ is the feature vector (a.k.a. the explanatory variables in the regression), and $b_{i}$ is its associated label. In the case of the logistic classifier, to make a prediction given a new vector $a$, the classifier outputs $b=\pm 1$ with the probability of ${\rm I\!P}(b=\pm 1 | a,x)=\dfrac{1}{1+\exp(\pm \langle x,a\rangle )}$. In the case of the hinge loss function, the goal is to obtain a linear classifier of the form $a\mapsto \text{sign}(\langle a,x \rangle)$ for some vector $x\in {\rm I\!R}^{d}$. In our simulations with the logistic classifier, we generate $a_{i}$ from a uniform distribution on the unit sphere. We then choose a random vector from Gaussian distribution $w\sim \mathsf{N}(0,I_{d\times d})$ and generate the labels $b_{i}\sim {\tt{Bernoulli}} (p)$, where $p=\dfrac{1}{1+\exp(\langle w,a_{i}\rangle) }$. It is straightforward to verify that $L=\max_{i=1,2,\cdots,n}\|a_{i}\|=1$ and $R=1$. Note that the solution of the optimization problem in eq. \eqref{Eq:SVM} approximates $w$ under the restrictions specified in Eqs. \eqref{Eq:SVM_cons1}. We consider vectors of the dimension $d=5$ (thus $m=10$) and study three different network sizes, $n\in \{50,100,200\}$ and two different upper/lower limits $l=u=0.1$. To show that Algorithm \ref{CHalgorithm} works for any initialization, instead of using the origin as the initialization point of Algorithm \ref{CHalgorithm}, we generate a random vector $v\in \mathsf{N}(0,I_{d\times d})$ and then choose $x_{i}(0)=v/\|v\|_{2}$. We also use the stepsize $\alpha(t)={R}/{\sqrt{t+1}}$ in all simulations, where here $R=1$. For a graph $G$ of $n$ nodes, let $\varepsilon_{G}(t;n)$ denotes the maximum relative error of the network, \textit{i.e.}, $\varepsilon_{G}(t;n) \coloneqq \max_{i=1,2,\cdots,n}\left|\dfrac{f(\widehat{x}_{i}(t))-f(x_{\ast})}{f(\widehat{x}_{i}(0))-f(x_{\ast})}\right|$ for every node in the graph $i\in V$. Further, we define $\delta_{G}(t;n)\coloneqq \max_{i=1,2,\cdots,n}\|g(\widehat{x}_{i}(t))\|/\|g(\widehat{x}_{i}(0))\|$ as the maximum constraint violation among all the nodes in the network. 
In the case of the centralized primal-dual method, we similarly use $\varepsilon(t;n)$ and $\delta(t;n)$ to denote the relative error gap and the constraint violation, respectively. In our simulations, we use the MATLAB convex programming toolbox ${\tt{CVX}}$ \cite{grant2008cvx} to compute $f(x_{\ast})$. To investigate the performance of Algorithm \ref{CHalgorithm} on different networks, we consider random and structured graphs in our simulations, namely (a) the Watts-Strogatz small-world graph model \cite{watts1998collective}, (b) the Erd\H{o}s-R\'{e}nyi random graph \cite{bollobas1998random}, (c) the unwrapped 8-connected neighbors lattice, and (d) the two-clique graph (barbell graph). See Fig. \ref{Fig:1}. The Watts-Strogatz model is a mathematical model to generate random graphs with small-world properties, \textit{i.e.}, graphs that are highly clustered locally (like regular lattices) and with a small separation globally. Social networks are an example, where each person is only five or six people away from anyone else. The Watts-Strogatz model has two structural features, namely the clustering and the average path length. These features are captured by two parameters, namely the mean degree $K$, and a parameter $\vartheta$ that interpolates between a lattice $(\vartheta=0)$ and a random graph $(\vartheta=1)$. \hspace*{-10mm}\begin{figure*}[t!] \begin{center} \subfigure{ \includegraphics[trim={2cm 1.5cm 1.5cm .2cm}, width=.2\linewidth]{Small-World.pdf} \includegraphics[trim={2cm .75cm 1.5cm .2cm}, width=.2\linewidth]{Erdos-Reyni.pdf} \includegraphics[trim={2cm .75cm 1.5cm .2cm}, width=.2\linewidth]{Barbell.pdf} \includegraphics[trim={2cm 1.5cm 2cm .2cm}, width=.2\linewidth]{Lattice.pdf} } \subfigure{ \includegraphics[trim={.6cm .2cm .4cm .2cm}, width=.2\linewidth]{Logistic_Watts.pdf} \includegraphics[trim={.6cm .2cm .4cm .2cm}, width=.2\linewidth]{Logistic_Erdos.pdf} \includegraphics[trim={.6cm .2cm .4cm .2cm}, width=.2\linewidth]{Logistic_Barbell.pdf} \includegraphics[trim={.6cm .2cm .4cm .2cm}, width=.2\linewidth]{Logistic_Lattice.pdf} } \subfigure{ \includegraphics[trim={.6cm .2cm .4cm .2cm}, width=.2\linewidth]{hinge_Watts.pdf} \includegraphics[trim={.6cm .2cm .4cm .2cm}, width=.2\linewidth]{hinge_Erdos.pdf} \includegraphics[trim={.6cm .2cm .4cm .2cm}, width=.2\linewidth]{hinge_Barbell.pdf} \includegraphics[trim={.6cm .2cm .4cm .2cm}, width=.2\linewidth]{hinge_Lattice.pdf} } \end{center} \caption{\footnotesize{Illustration of the four graph models used in the simulations (top row) and the corresponding average maximum relative error gap $\varepsilon_{G}(T;n)$ with $n=100$ ($l=u=1$, $d=5$) for the logistic loss function (middle row) and the hinge loss function (bottom row). Left to right: Watts-Strogatz graph with $K=20$ and $\vartheta=0.02$, Erd\H{o}s-R\'{e}nyi random graph with $p=0.06$, two-clique (barbell) graph, unwrapped 8-connected neighbors lattice.} } \label{Fig:1} \end{figure*} In the Erd\H{o}s-R\'{e}nyi random graph, the edge between each pair of nodes is included in the graph with probability $p$, independently of every other edge. Note that the Watts-Strogatz small-world graph model reduces to the Erd\H{o}s-R\'{e}nyi random graph model when $\vartheta=1$. To aggregate the information of the neighbors, we use the weight matrix $W$ given by the lazy Metropolis matrix in \eqref{Eq:LazyasIam}. In the unwrapped lattice, each node is adjacent to $8$ neighbors. Lastly, in the barbell graph, we have two cliques of size $n/2$ which are connected by a few links.
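The graph models and the synthetic data described above can be generated along the following lines. This is a minimal sketch in Python, assuming the NumPy and NetworkX packages; the helper names, the random seed, and the mapping of the Bernoulli outcome to the label $+1$ are our own illustrative choices rather than part of the reported setup.
\begin{verbatim}
import numpy as np
import networkx as nx

def lazy_metropolis_weights(G):
    # Lazy Metropolis rule from the text: off-diagonal entries on the edges,
    # diagonal entries chosen so that every row sums to one. The matrix is
    # symmetric, hence doubly stochastic.
    n = G.number_of_nodes()
    W = np.zeros((n, n))
    for i, j in G.edges():
        W[i, j] = W[j, i] = 1.0 / (2.0 * max(G.degree(i) + 1, G.degree(j) + 1))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

def synthetic_logistic_data(n, d, rng):
    # Features uniformly distributed on the unit sphere; labels generated
    # from the logistic model described in the text.
    A = rng.standard_normal((n, d))
    A /= np.linalg.norm(A, axis=1, keepdims=True)
    w = rng.standard_normal(d)
    p = 1.0 / (1.0 + np.exp(A @ w))
    b = np.where(rng.random(n) < p, 1.0, -1.0)
    return A, b, w

rng = np.random.default_rng(0)
n, d = 100, 5
G = nx.connected_watts_strogatz_graph(n, k=20, p=0.02, seed=0)
W = lazy_metropolis_weights(G)
A, b, w_true = synthetic_logistic_data(n, d, rng)
spectral_gap = 1.0 - np.linalg.svd(W, compute_uv=False)[1]
\end{verbatim}
The second largest singular value of the resulting weight matrix gives the spectral gap $1-\sigma_{2}(W)$ that enters the constant $C$ of Theorem \ref{Thm:2}.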
Figure \ref{Fig:1} shows the maximum error $\varepsilon_{G}(T;n)$ for the distributed dual averaging algorithm \cite{duchi2012dual} (dotted lines), and the distributed deterministic primal-dual algorithm (solid lines). For both algorithms, we used the normalized graph Laplacian as the weight matrix (cf. Section \ref{Assumptions}). In the distributed dual averaging algorithm, the stepsize is given by $\alpha(t)={R\sqrt{1-\sigma_{2}(W)}\over 4L\sqrt{t+1}}$. For the primal-dual algorithm, the stepsize $\alpha(t)={R\over \sqrt{t+1}}$ is independent of the spectral gap. It is clear from Figure \ref{Fig:1} that on the barbell graph as well as on the lattice, the convergence of both algorithms is slow. This is due to the fact that the spectral gap $1-\sigma_{2}(W)$ of both networks is quite small and thus reaching consensus on these networks is more difficult. We also observe that the primal-dual algorithm shows an oscillatory behavior on the barbell graph, whereas the dual averaging algorithm does not. This difference is attributed to the choice of stepsizes. In the dual averaging algorithm, the stepsize is modulated by the spectral gap. Therefore, when the spectral gap is very small (as is the case for the barbell graph), the stepsize is small which suppresses the oscillations. In contrast, the stepsize of the primal-dual algorithm is independent of the spectral gap. Notice that due to incorporating the spectral gap in the dual averaging algorithm, the structure of the network must be known a priori by each agent. This requires extra communication at the beginning of the dual averaging algorithm. Figure \ref{Fig:2} shows the constraint violation as well as the convergence rate in the centralized primal-dual algorithm without regularization and the decentralized regularized primal-dual algorithm with the values $u=l=0.1$. In this simulations, we choose the initial points $(x_{i}(0),\lambda_{i}(0))$ of the distributed primal-dual algorithm randomly from the feasible region (cf. Remark \ref{Remark}). In this particular example, we observe that in the decentralized primal-dual algorithm, the algorithm output $\widehat{x}_{i}(t)$ is almost feasible for all $t$ and $i\in V$. In contrast, in the centralized primal-dual algorithm, the outputs are infeasible. Here, we thus clearly observe that the regularization can mitigate the constraint violation. \begin{figure*}[t!] 
\centering \subfigure[]{ \label{fig:first} \includegraphics[trim={.6cm .2cm .5cm .5cm}, width=.23\linewidth]{Fig11FF.pdf} }\hspace{-2mm} \subfigure[]{ \label{fig:second} \includegraphics[trim={.6cm .2cm .5cm .5cm}, width=.23\linewidth]{Fig33FF.pdf} }\hspace{-2mm} \subfigure[]{ \label{fig:third} \includegraphics[trim={.6cm .2cm .5cm .5cm}, width=.23\linewidth]{Fig22FFRevised.pdf} }\hspace{-2mm} \subfigure[]{ \label{fig:fourth} \includegraphics[trim={.6cm .2cm .5cm .5cm}, width=.23\linewidth]{Fig44FFRevised.pdf} } \caption{\footnotesize{Distributed logistic regression on synthetic data using a Watts-Strogatz graph with $K=20$, $\vartheta=0.02$, $\eta=1$, $\alpha(t)= {1\over \sqrt{t+1}}$ and $l=u=0.001$. Panel (a): Constraint violation $\delta(T;n)$ of the centralized PD algorithm without regularization. Panel (b): Convergence rate $\varepsilon(T;n)$ of the centralized PD algorithm without regularization. Panel (c): Constraint violation $\delta_{\mathcal{G}}(T;n)$ of the decentralized PD algorithm. Panel (d): Convergence rate $\varepsilon_{\mathcal{G}}(T;n)$ of the decentralized PD algorithm.}} \label{Fig:2} \end{figure*} \section{Conclusion and Discussion} \label{Discussion_and_Conclusion} In this paper, we have studied distributed regularized primal-dual methods for convex optimization of separable objective functions with inequality constraints. In the proposed distributed methods, the Lagrangian function is regularized with the squared norm of the Lagrangian multipliers. As a result, the norm of the Lagrangian multipliers is bounded from above, and consequently, the norms of the sub-gradients of the Lagrangian function are also bounded. Using this regularization, we proved a convergence rate for attaining the optimal objective value, and we also presented an asymptotic analysis of the constraint violation of the primal-dual solutions. We showed an interesting trade-off between the convergence rate of the algorithm and the constraint violation bound. In particular, by choosing a large regularization parameter, we can achieve a fast convergence rate. However, a large regularization parameter increases the constraint violation of the primal-dual estimates. Interestingly, when the constraints are satisfied strictly at an optimal solution, such a trade-off does not exist. We also proposed and analyzed a distributed stochastic primal-dual algorithm. At each step of the stochastic algorithm, one inequality constraint is selected randomly, and only its sub-gradient is computed. Therefore, for optimization problems with many inequality constraints, the distributed stochastic primal-dual algorithm is more efficient than the deterministic algorithm. As future research, it would be interesting to carry out a comprehensive analysis of distributed penalty/barrier function methods, and to compare their convergence rates with those of the distributed primal-dual algorithms we developed in this paper.
\section{Introduction} Recently, there has been significant progress in our understanding of QCD at high energy, based on the observations that \texttt{(i)} the gluon number fluctuations play an important role in the evolution towards saturation and the unitarity limit \cite{MS04,IMM04} and \texttt{(ii)} the QCD evolution in the presence of fluctuations and saturation is in the same universality class as a series of problems in statistical physics, the prototype of which being the `reaction--diffusion' problem \cite{MP03,IMM04,IT04}. These observations have developed into a rich correspondence between high--energy QCD and modern problems in statistical physics, which relates topics of current research in both fields, and which has already allowed us to deduce some insightful results in QCD by properly translating the corresponding results from statistical physics \cite{MP03,IMM04,IT04}. To put such theoretical developments into a specific physical context, let us consider $\gamma^*$--proton deep inelastic scattering (DIS) at high energy, or small Bjorken--$x$. We shall view this process in a special frame in which most of the total energy is carried by the proton, whose wavefunction is therefore highly evolved, while the virtual photon has just enough energy to dissociate long before splitting into a quark--antiquark pair in a colorless state (a `color dipole'), which then scatters off the gluon distribution in the proton. The transverse size $r$ of the dipole is controlled by the virtuality $Q^2$ of $\gamma^*$ (roughly, $r^2\sim 1/Q^2$), so for $Q^2\gg \Lambda_{{\rm QCD}}^2$ one can treat the dipole scattering in perturbation theory. But for sufficiently small $x$, even such a small dipole can see a high--density gluonic system, and thus undergo strong scattering. Specifically, the small--$x$ gluons to which couple the projectile form a {\em color glass condensate\,}\cite{CGC} (CGC), i.e., a multigluonic state which is characterized by high quantum occupancy, of order $1/\alpha_s$, for transverse momenta $k_\perp$ below the {\em saturation momentum} $Q_s(x)$, but which becomes rapidly dilute when increasing $k_\perp$ above $Q_s$. The saturation scale rises very fast with the energy, $Q_s^2(x)\sim x^{-\lambda}$, and is the fundamental scale in QCD at high energy. In particular, the external dipole is strongly absorbed provided its size $r$ is large on the scale set by $1/Q_s$, whereas for $r\ll 1/Q_s$ one rather has weak scattering, or `color transparency'. In turn, the small--$x$ gluons are produced through quantum evolution, i.e., through radiation from {\em color sources} (typically, other gluons) with larger values of $x$, whose internal dynamics is `frozen' by Lorentz time dilation. Let $\tau =\ln 1/x$ denote the {\em rapidity\,}; it takes, roughly, a rapidity interval $\Delta\tau \sim 1/\alpha_s$ to emit one small--$x$ gluon; thus, in the high energy regime where $\alpha_s\tau \gg 1$, the dipole meets with well developed gluon cascades, as illustrated in Fig. \ref{dis_blois1}. Three types of processes can be distinguished in Fig. \ref{dis_blois1}, which for more clarity are singled out in Fig. \ref{BREMfig}. \begin{figure} \begin{center} \centerline{\epsfig{file=dis_blois1.eps,height=5.cm}} \caption{\sl An instantaneous gluon configuration in the proton wavefunction as `seen' in DIS at small $x$. \label{dis_blois1}} \end{center} \end{figure} The first process, Fig. 
\ref{BREMfig}.a, represents one step in the standard BFKL evolution \cite{CGC}; by iterating this step, one generates gluon ladders which are resummed in the solution to the BFKL equation \cite{CGC}. However, by itself, the latter is well known to suffer from conceptual difficulties in the high energy limit : \texttt{(i)} The BFKL estimate for the dipole scattering amplitude $T_\tau(r)$ grows exponentially with $\tau$ (i.e., like a power of the energy), and thus eventually violates the unitarity bound $T_\tau(r)\le 1$. (The upper limit $T_\tau=1$ corresponds to the black disk limit, in which the dipole is totally absorbed by the target.) \texttt{(ii)} The BFKL ladder is not protected from deviations towards the non--perturbative domain at low transverse momenta $k_\perp^2\simle \Lambda_{{\rm QCD}}^2$ (`infrared diffusion'). With increasing energy, the BFKL solution receives larger and larger contributions from such soft intermediate gluons, and thus becomes unreliable. These `small--$x$ problems' of the BFKL equation are both cured by the {\em gluon recombination} processes $(n\to 2)$ illustrated in Fig. \ref{BREMfig}.b which are important at high energy, when the gluon density in the target is large, and lead to {\em gluon saturation} and the formation of the CGC. Such processes are included in the Balitsky--JIMWLK equation \cite{CGC}, a non--linear, functional, generalization of the BFKL evolution which describes the approach towards gluon saturation in the target and preserves the unitarity bound in the evolution of the scattering amplitudes. \begin{figure}[t] \centerline{\hspace{1.cm}\epsfxsize=3.7cm\epsfbox{BREM1b.eps} \hspace{.3cm}\epsfxsize=4.cm\epsfbox{BREM2a.eps} \hspace{.3cm}\epsfxsize=4.cm\epsfbox{BREM2b.eps}} \vspace*{0.2cm} \hspace{3.7cm} (a)\hspace{3.7cm} (b)\hspace{4.cm}(c) \caption{\sl Gluon processes which occur in one step of high energy evolution. \label{BREMfig}} \end{figure} However, the Balitsky--JIMWLK equation misses \cite{IT04} the process in Fig. \ref{BREMfig}.c --- the $2\to n$ gluon splitting --- which describes the bremsstrahlung of additional small--$x$ gluons in one step of the evolution. By itself, this process is important in the {\em dilute} regime, where it leads to the construction of higher--point gluon correlation functions from the dominant 2--point function. But once generated, the $n$--point functions with $n>2$ are rapidly amplified by the subsequent BFKL evolution (the faster the larger is $n$) and then they play an important role in the non--linear dynamics leading to saturation. Thus, such splitting processes {\em are} in fact important for the evolution towards high gluon density, as originally observed in numerical simulations \cite{Salam} of Mueller's `color dipole picture' \cite{Mueller}, and more recently reiterated in the analysis in Refs. \cite{MS04,IMM04,IT04}. Equations including both merging and splitting in the limit where the number of colors $N_c$ is large have recently became available \cite{IT04} (see also Refs. \cite{MSW05,BIIT05}), but their general solutions have not yet been investigated (except under some additional approximations \cite{IT04,Soyez}). 
Still, as we shall argue now, the {\em asymptotic} behaviour of the corresponding solutions --- where by `asymptotic' we mean both the high--energy limit $\tau\to\infty$ and the weak coupling limit $\alpha_s\to 0$ --- can be {\em a priori} deduced from universality considerations, by exploiting the correspondence between high--energy QCD and the reaction--diffusion problem of statistical physics \cite{IMM04}. To that aim, it is convenient to rely on the event--by--event description \cite{IMM04} of the scattering between the external dipole and the hadronic target (cf. Fig. \ref{dis_blois1}) and to use the large--$N_c$ approximation to replace the gluons in the target wavefunction by color dipoles \cite{Mueller}. Then, the dipole--target scattering amplitude corresponding to a given event can be estimated as \begin{eqnarray}\label{Tf} T_{\tau}(r,b) \,\simeq \,\alpha_s^2 \,f_{\tau} (r,b)\,,\end{eqnarray} where $\alpha_s^2$ is the scattering amplitude between two dipoles with comparable sizes and nearby impact parameters, and $f_{\tau} (r,b)$ is the {\em occupation number} for target dipoles with size $r$ at impact parameter $b$, and is a {\em discrete} quantity: $f=0,1,2,\dots$. Thus, in a given event, the scattering amplitude is a multiple integer of $\alpha_s^2$. In this dipole language, the $2\to 4$ gluon splitting depicted in Fig. \ref{BREMfig}.c is tantamount to $1\to 2$ dipole splitting, and generates {\em fluctuations} in the dipole occupation number and hence in the scattering amplitude. Thus, the evolution of the amplitude $T_{\tau}(r,b)$ with increasing $\tau$ represents a {\em stochastic process} characterized by an expectation value $\langle T(r,b)\rangle_{\tau} \simeq \alpha_s^2 \,\langle f (r,b) \rangle_{\tau}$, and also by fluctuations $\delta T \sim \alpha_s^2\delta f \sim \sqrt{\alpha_s^2 T}$ (we have used the fact that $\delta f \sim \sqrt{f}$ for fluctuations in the particle number). Clearly, such fluctuations are relatively important (in the sense that $\delta T \simge T$) only in the {\em very} dilute regime where $\langle f \rangle\simle 1$, or $\langle T\rangle\simle \alpha_s^2$. Eq.~(\ref{Tf}) applies so long as the scattering is weak, $ T \ll 1$, but by extrapolation it shows that the unitarity corrections are expected to be important when the dipole occupation factor becomes of order $1/\alpha_s^2$. Consider first the formal limit $\alpha_s^2\to 0$, in which the maximal occupation number $N\sim 1/\alpha_s^2$ becomes arbitrarily large. Then one can neglect the particle number fluctuations and follow the evolution of the scattering amplitude in the {\em mean field approximation} (MFA). This is described by the Balitsky--Kovchegov equation \cite{CGC}, a non--linear version of the BFKL equation which, as shown in Ref. \cite{MP03}, lies in the same universality class as the Fisher--Kolmogorov--Petrovsky--Piscounov (FKPP) equation (the MFA for the reaction--diffusion process and related phenomena in biology, chemistry, astrophysics, etc; see \cite{Saar} for recent reviews and more references). The FKPP equation reads, schematically, \begin{eqnarray}\label{BK} \partial_\tau T(\rho,\tau)\,=\, \partial_\rho^2 T(\rho,\tau)\,+\, T(\rho,\tau)\,- \,T^2(\rho,\tau),\end{eqnarray} in notations appropriate for the dipole scattering problem: $T(\rho,\tau)\equiv \langle T(r)\rangle_{\tau}$ and $\rho\equiv \ln(r_0^2/r^2)$, with $r_0$ a scale introduced by the initial conditions at low energy. 
Note that weak scattering ($T\ll 1$) corresponds to small dipole sizes ($r\ll 1/Q_s$), and thus to large values of $\rho$. The three terms on the r.h.s. of Eq.~(\ref{BK}) describe, respectively, diffusion, growth and recombination. The first two among them represent (an approximate version of) the BFKL dynamics, while the latter is the non--linear term which describes multiple scattering and thus ensures that the evolution is consistent with the unitarity bound $T\le 1$. Specifically, the solution $T_\tau(\rho)$ to Eq.~(\ref{BK}) is a {\em front} which interpolates between two fixed points : the saturation fixed point $T=1$ at $\rho\to -\infty$ and the unstable fixed point $T=0$ at $\rho\to \infty$ (see Fig. \ref{TWave5}). The position of the front, which marks the transition between strong scattering ($T\sim 1$) and, respectively, weak scattering ($T\ll 1$), defines the {\em saturation scale\,}: $\rho_s(\tau)\equiv \ln(r_0^2 Q_s^2(\tau))$. With increasing $\tau$, the front moves towards larger values of $\rho$, as illustrated in Fig. \ref{TWave5}. Note that the dominant mechanism for propagation is the BFKL growth in the tail of the distribution at large $\rho$ : the front is {\em pulled} by the rapid growth of a small perturbation around the unstable state. In view of that, the {\em velocity} of the front $\lambda\equiv {d\rho_s}/{d\tau}$ is fully determined by the linearized version of Eq.~(\ref{BK}), which describes the dynamics in the tail. Specifically, by solving the BFKL equation one finds \cite{SCALING,MT02,MP03} that, for $\rho >\rho_s(\tau)$ and sufficiently large $\tau$, \begin{eqnarray} \label{TBFKL} T_\tau(\rho) \,\simeq\,{\rm e}^{\omega \bar\alpha_s \tau} \,{\rm e}^{-\gamma\rho} \,=\,{\rm e}^{-\gamma(\rho -\rho_s(\tau))}\, ,\qquad \rho_s(\tau)\equiv c \bar\alpha_s \tau,\end{eqnarray} where $\bar{\alpha}_s = {\alpha}_s N_c/\pi$, $\gamma=0.63..$, and $c \equiv \omega/\gamma=4.88..\,$. From Eq.~(\ref{TBFKL}) one can immediately identify the velocity of the front in the MFA as $\lambda_0 = c\bar\alpha_s$. Since $Q_s^2(\tau) \simeq Q_0^2\, {\rm e}^{\lambda_0 \tau}$, it is furthermore clear that $\lambda_0$ plays also the role of the {\em saturation exponent} (here, in the MFA). \comment{ Note also that the propagation of the front, as described by Eq.~(\ref{TBFKL}), represents a {\em traveling wave} \cite{MP03} : in a comoving frame, the shape of the front is independent of $\tau$. This is the property originally referred to as `geometric scaling' \cite{geometric,SCALING}, and which might explain a remarkable regularity observed in the small--$x$ data for DIS at HERA \cite{geometric}. Namely, for $x\le 0.01$, the total cross--section $\sigma_{\gamma^*p}(x,Q^2)$ for the absorbtion of the virtual photon shows approximate scaling as a function of $Q^2/Q_s^2(x)$, with $Q_s^2(x) \propto (1/x)^\lambda$ and $\lambda \approx 0.3$ from a fit to the data.} \begin{figure}[t] \centerline{\epsfxsize=15.cm\epsfbox{TWave5.eps}} \caption{\sl Evolution of the continuum front of the Balitsky--Kovchegov equation with increasing $\tau$. \label{TWave5}} \end{figure} What is the validity of the mean field approximation ? We have earlier argued that the gluon splitting processes (cf. Fig. \ref{BREMfig}.c) responsible for dipole number fluctuations should play an important role in the dilute regime. 
This is further supported by the above considerations on the {\em pulled} nature of the front: Since the propagation of the front is driven by the dynamics in its tail where the fluctuations are {\em a priori} important, the front properties should be strongly sensitive to fluctuations. This is indeed known to be the case for the corresponding problem in statistical physics \cite{BD,Saar}, as it can be understood from the following, qualitative, argument: Consider a particular realization of the stochastic evolution of the target, and the corresponding scattering amplitude which is discrete (in steps of $\alpha_s^2$). Because of discreteness, the microscopic front looks like a histogram and thus is necessarily {\em compact} : for any $\tau$, there is only a finite number of bins in $\rho$ ahead of $\rho_s$ where $T_\tau$ is non--zero (see Fig. \ref{TWave6}). This property has important consequences for the propagation of the front. In the empty bins on the right of the tip of the front, the local, BFKL--like, growth is not possible anymore (as this would require a seed). Thus, the only way for the front to progress there is via {\it diffusion}, i.e., via radiation from the occupied bins at $\rho <\rho_{\rm tip}$ (compare in that respect Figs. \ref{TWave5} and \ref{TWave6}). But since diffusion is less effective than the local growth, we expect the velocity of the microscopic front (i.e., the saturation exponent) to be reduced as compared to the respective prediction of the MFA. \begin{figure}[t] \centerline{\epsfxsize=14.cm\epsfbox{TWave6.eps}} \caption{\sl Evolution of the discrete front of a microscopic event with increasing rapidity $\tau$. The small blobs are meant to represent the elementary quanta $\alpha_s^2$ of $T$ in a microscopic event. \label{TWave6}} \end{figure} To obtain an estimate for this effect, we shall rely again on the universality of the asymptotic ($\tau\to\infty$ and $N\equiv 1/\alpha_s^2\gg 1$) behaviour\cite{IMM04}. Namely, from the experience with the reaction--diffusion process and related problems in statistical physics \cite{BD,Saar}, one knows indeed that the dominant behaviour for large evolution `time' and large (but finite) occupancy $N\gg 1$ is independent of the details of the microscopic dynamics, and thus is the same for all the processes whose mean field limit ($N\to\infty$) is governed by the FKPP equation (\ref{BK}). In particular, the dominant contribution to the correction $\lambda_N- \lambda_0$ to the front velocity is known to be universal, and can be obtained through the following, intuitive, argument, due to Brunet and Derrida \cite{BD}: For a given microscopic front and $N\gg 1$, the MFA should work reasonably well everywhere except in the vicinity of the tip of the front, where the occupation number $f$ becomes of order one (corresponding to $T\sim \alpha_s^2$ in the QCD problem) and thus the linear growth term becomes ineffective. Accordingly, Brunet and Derrida suggested a modified version of the FKPP equation (\ref{BK}) in which the `BFKL--like' growth term is switched off when $T< \alpha_s^2$ : \begin{eqnarray}\label{BKBD} \partial_\tau T(\rho,\tau)\,=\, \partial_\rho^2 T\,+\,\Theta\big(T - \alpha_s^2\big) T(1- T).\end{eqnarray} By solving this equation in the linear regime, they have obtained the first correction to the front velocity as compared to the MFA (in notations adapted to QCD; see Ref. 
\cite{IMM04} for details): \begin{eqnarray}\label{ls} \lambda\,\simeq\,\bar\alpha_s\left[c\,-\, \frac{\kappa}{\ln^2(1/\alpha_s^2)}\,+\,{\cal O} \big(1/\ln^3 \alpha_s^2\big)\right] \,, \end{eqnarray} where the numbers $c \approx 4.88$ and $\kappa \approx 150$ are fully determined by the linear (BFKL) equation. In QCD, the same result was first obtained through a different but related argument by Mueller and Shoshi \cite{MS04}. Note the extremely slow convergence of this result towards its mean field limit: the corrective term vanishes only logarithmically with decreasing $1/N=\alpha_s^2$, rather than the power--like suppression usually found for the effects of fluctuations. This is related to the high sensitivity of the pulled fronts to fluctuations, as alluded to above. Such a slow convergence, together with the relatively large value of the numerical factor $\kappa$, implies that the above estimate for $\lambda$, although {\em exact} for asymptotically small $\alpha_s^2$, is in fact useless for practical applications. To understand the subasymptotic corrections and, more generally, the behaviour of the saturation momentum and of the scattering amplitudes for realistic values of $\tau$ and $\alpha_s$, one needs to solve the actual evolution equations of QCD \cite{IT04,MSW05,BIIT05}, a program which is currently under way \cite{Soyez}.
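To see how severe this limitation is in practice, one can both integrate the cutoff equation (\ref{BKBD}) numerically and evaluate Eq.~(\ref{ls}) at a realistic coupling. The Python sketch below is again a toy, dimensionless version (the mean-field front velocity is 2 in these units, and the cutoff value is chosen by hand), so it should be read only qualitatively: the cutoff slows the front down, and for $\alpha_s = 0.2$ the correction term in Eq.~(\ref{ls}) is numerically larger than $c$ itself.
\begin{verbatim}
# Illustration only: front velocity of the dimensionless FKPP equation, with and
# without the Brunet-Derrida cutoff (growth switched off where T < alpha_s^2),
# plus a numerical evaluation of the asymptotic velocity formula.
import numpy as np

def front_velocity(cutoff=None, drho=0.5, dtau=0.1, tmax=150.0):
    rho = np.arange(0.0, 400.0, drho)
    T = np.where(rho < 10.0, 1.0, 0.0)
    times, pos = [], []
    for step in range(int(tmax / dtau)):
        lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / drho**2
        lap[0] = lap[-1] = 0.0
        growth = T * (1.0 - T)
        if cutoff is not None:
            growth = np.where(T >= cutoff, growth, 0.0)
        T = np.clip(T + dtau * (lap + growth), 0.0, 1.0)
        T[0], T[-1] = 1.0, 0.0
        if step % 100 == 0:
            times.append(step * dtau)
            pos.append(rho[np.argmax(T < 0.5)])
    return np.polyfit(times[len(times) // 2:], pos[len(pos) // 2:], 1)[0]

print("front velocity, plain FKPP       :", round(front_velocity(), 3))
print("front velocity, cutoff T < 1e-4  :", round(front_velocity(cutoff=1e-4), 3))

# Asymptotic formula with the quoted constants c ~ 4.88, kappa ~ 150, at alpha_s = 0.2:
alpha_s, N_c, c, kappa = 0.2, 3.0, 4.88, 150.0
abar = alpha_s * N_c / np.pi
corr = kappa / np.log(1.0 / alpha_s**2) ** 2
print("c =", c, "  correction =", round(corr, 2),
      "  abar*(c - correction) =", round(abar * (c - corr), 3))
\end{verbatim}
The last printed number is negative: the `correction' exceeds the leading term, which is just a numerical restatement of the fact that Eq.~(\ref{ls}), although exact asymptotically, cannot be used at face value for realistic couplings.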
\begin{document} \begin{abstract} We prove that every quasi-Hopfian finitely presented structure $A$ has a $d$-$\Sigma_2$ Scott sentence, and that if in addition $A$ is computable and $Aut(A)$ satisfies a natural computable condition, then $A$ has a computable $d$-$\Sigma_2$ Scott sentence. This unifies several known results on Scott sentences of finitely presented structures and it is used to prove that other algebraic structures of interest not previously considered have computable $d$-$\Sigma_2$ Scott sentences. In particular, we show that every right-angled Coxeter group of finite rank has a computable $d$-$\Sigma_2$ Scott sentence, as well as any strongly rigid Coxeter group of finite rank. Finally, we show that the free projective plane of rank $4$ has a computable $d$-$\Sigma_2$ Scott sentence, thus exhibiting a natural example where the assumption of quasi-Hopfianity is used (since this structure is not Hopfian). \end{abstract} \title[Computable Scott Sentences for Finitely Presented Structures]{Computable Scott Sentences for Quasi-Hopfian Finitely Presented Structures} \author{Gianluca Paolini} \address{Department of Mathematics ``Giuseppe Peano'', University of Torino, Italy.} \date{\today} \maketitle \section{Introduction} Scott proved that for every countable $L$-structure $A$ there is an $L$-sentence $\Theta_A$ of the logic $\mathfrak{L}_{\omega_1, \omega}$ such that for any countable $L$-structure $B$, $B \models \Theta_A$ if and only if $A \cong B$.
Recently, there have been a number of papers (such\footnote{This list is not intended to be complete and we apologize for omissions.} as \cite{ho, knight1, knight2, trainor1, trainor2}) which dealt with the problem of determining the syntactic complexity of {\em optimal} Scott sentences for a given finitely generated structure, i.e. Scott sentences which are as simple as possible with respect to a given syntactic stratification of $\mathfrak{L}_{\omega_1, \omega}$. In particular, these results dealt with the following stratification of the formulas of $\mathfrak{L}_{\omega_1, \omega}$, a classification which is central in {\em computable model theory}. We say: \begin{enumerate}[(1)] \item $\varphi(\bar{x})$ is $\Pi_0$ and $\Sigma_0$ (resp. computable $\Pi_0$ and computable $\Sigma_0$) if it is finitary quantifier-free; \item For an ordinal (resp. a computable ordinal) $\alpha >0$: \begin{enumerate}[(2.1)] \item $\varphi(\bar{x})$ is $\Sigma_\alpha$ (resp. computable $\Sigma_\alpha$) if it is a disjunction (resp. a computably enumerable disjunction) of formulas of the form $\exists\bar{y}\psi(\bar{x}, \bar{y})$, where $\psi$ is a $\Pi_\beta$-formula (resp. computable $\Pi_\beta$-formula) for some $\beta < \alpha$; \end{enumerate} \begin{enumerate}[(2.2)] \item $\varphi(\bar{x})$ is $\Pi_\alpha$ (resp. computable $\Pi_\alpha$) if it is a conjunction (resp. a computably enumerable conjunction) of formulas of the form $\forall\bar{y}\psi(\bar{x}, \bar{y})$, where $\psi$ is a $\Sigma_\beta$-formula (resp. computable $\Sigma_\beta$-formula) for some $\beta < \alpha$; \end{enumerate} \begin{enumerate}[(2.3)] \item $\varphi(\bar{x})$ is $d$-$\Sigma_\alpha$ if it is a conjunction of a $\Sigma_\alpha$-formula and a $\Pi_\alpha$-formula. \end{enumerate}\end{enumerate} As remarked in \cite{knight2} every finitely generated structure has a $\Sigma_3$ Scott sentence. On the other hand, it has been known for a while that many finitely generated structures of interest have a $d$-$\Sigma_2$ Scott sentence (or even a computable $d$-$\Sigma_2$ Scott sentence), such as for example finitely generated abelian groups \cite{calvert, knight2}, free groups of finite rank \cite{knight1}, and the infinite dihedral group \cite{knight2}. This motivated a whole program in computable model theory toward the identification of dividing lines for optimal Scott sentences of finitely generated structures. Among the many results we mention the resolution in the negative in \cite{trainor1} of an important open problem: does every finitely generated group have a $d$-$\Sigma_2$ Scott sentence? Furthermore, in \cite{ho} and \cite{trainor2} many new examples of finitely generated structures with $d$-$\Sigma_2$ (resp. computable $d$-$\Sigma_2$) Scott sentences were exhibited, \mbox{among them rings, fields, modules, etc.} In this work we try to identify some common abstract properties of a finitely presented structure $A$ ensuring that $A$ has a d-$\Sigma_2$ Scott sentence (resp. a computable d-$\Sigma_2$ Scott sentence). The main ingredient of our general approach is the following weaker version of Hopfianity, which we refer to as {\em quasi-Hopfianity}. \begin{definition} Let $A$ be a finitely generated structure. We say that $A$ is quasi-Hopfian if there exists a finite generating set $\bar{a}$ of $A$ such that whenever $f: A \rightarrow A$ is a surjective homomorphism of $A$ which is injective on $\bar{a}$ we have that $f \in Aut(A)$.
\end{definition} Clearly, every Hopfian structure is quasi-Hopfian, but for example, as proved in \cite{johnson, sandler_hopfian}, free projective planes are quasi-Hopfian but not Hopfian. The utility of the notion of Hopfianity in the study of optimal Scott sentences of finitely generated structures is already well-known; for example, in \cite{trainor2} it is proved that finitely presented Hopfian groups have d-$\Sigma_2$ Scott sentences. Our contribution to the subject is twofold: on the one hand, we show that the weaker notion of quasi-Hopfianity implies d-$\Sigma_2$ Scott sentences for {\em any finitely presented structure} $A$. On the other hand, and more interestingly, we define a computable condition on $Aut(A)$ ensuring that a quasi-Hopfian computable structure $A$ has a {\em computable} d-$\Sigma_2$ Scott sentence. This computable condition is often implicitly verified in the study of groups of automorphisms of finitely presented structures, and it is intended to establish a bridge between the algebraic study of $Aut(A)$ and the study of optimal Scott sentences for $A$. The last part of the paper focuses on applications. Firstly, we show that our general criterion covers the case of free groups of finite rank \cite{knight1}, free abelian groups of finite rank \cite{knight2}, and the infinite dihedral group \cite{knight2}. Secondly, we use our methods to prove the existence of computable d-$\Sigma_2$ Scott sentences for a large subclass of a class of finitely presented structures of central interest in group theory which, to the best of our knowledge, has not yet been considered in computable model theory: {\em Coxeter groups of finite rank}. Finally, we prove that the free projective plane of rank $4$ (which, as already observed, is not Hopfian) has a computable d-$\Sigma_2$ Scott sentence, thus exhibiting a \mbox{natural example where quasi-Hopfianity is used.} \medskip We now state our results: \begin{theorem}\label{main_theorem} Let $A = \langle a_1, ..., a_n \rangle_A$ be a quasi-Hopfian finitely presented structure. Then the $Aut(A)$-orbit of $(a_1, ..., a_n)$ in $A$ is $\Pi_1$-definable, and so $A$ has a $d$-$\Sigma_2$ Scott sentence (by \cite{alvir}). Suppose further that $A$ is computable and that there is a finite $X \subseteq Aut(A)$ and a computable $F: \omega^n \rightarrow \omega$ such that $Aut(A) = \langle X \rangle_{Aut(A)}$ and: \begin{equation}\tag{$\star$}\text{for every $\alpha \in Aut(A)$, $lg_X(\alpha) \leq F(lg_{S}(\alpha(a_1)), ..., lg_{S}(\alpha(a_n)))$}, \end{equation} where $S = \{ a_1, ..., a_n \}$, $lg_X(\alpha)$ is computed in $Aut(A)$ and $lg_{S}(\alpha(a_i))$ is computed in $A$. Then the $Aut(A)$-orbit of $(a_1, ..., a_n)$ in $A$ is definable by a {\em computable} $\Pi_1$-formula, and so $A$ has a {\em computable} $d$-$\Sigma_2$ Scott sentence (by \cite{alvir}). \end{theorem} \begin{corollary}[\cite{knight1, knight2}]\label{free_groups_corollary} Free groups of finite rank, free abelian groups of finite rank, and the infinite dihedral group have computable $d$-$\Sigma_2$ Scott sentences. \end{corollary} \begin{corollary}\label{strongly_rigid_corollary} Let $G$ be a computable quasi-Hopfian finitely presented group and suppose that $Inn(G)$ has finite index in $Aut(G)$. Then $G$ has a computable $d$-$\Sigma_2$ Scott sentence. Thus, every strongly rigid Coxeter group of finite rank has a computable $d$-$\Sigma_2$ Scott sentence.
In particular, every strongly $2$-spherical Coxeter group of finite rank has a computable $d$-$\Sigma_2$ Scott sentence, as well as every Coxeter group which acts effectively, properly and cocompactly on the affine \mbox{or hyperbolic plane.} \end{corollary} In relation to the corollary above, we wish to observe that the strongly rigid Coxeter groups of finite rank have been characterized in \cite{muller}, as a result of a joint effort involving various authors, and that the two specific cases mentioned in the statement of the corollary are just particular cases of this general classification. \begin{corollary}\label{Artin_Coxeter} Let $G$ be a finite graph product of primary cyclic groups. Then $G$ has a computable $d$-$\Sigma_2$ Scott sentence. In particular, if $G$ is a right-angled Coxeter group of finite rank, then $G$ has a computable $d$-$\Sigma_2$ Scott sentence. \end{corollary} We conjecture that the methods from \cite{laurence_artin} combined with our general results yield that every right-angled Artin group of finite rank also has a computable $d$-$\Sigma_2$ Scott sentence, but this is out of the scope of the present paper. \begin{corollary}\label{cor_free_planes} Let $\pi^4$ be the free projective plane of rank $4$ (cf. \cite{hall}). Then $\pi^4$ is computable and has a computable $d$-$\Sigma_2$ Scott sentence. \end{corollary} In Section~\ref{sec_proof} we introduce the necessary notation and then prove Theorem~\ref{main_theorem} and Corollaries~\ref{free_groups_corollary}~and~\ref{strongly_rigid_corollary}. In Section~\ref{sec_Coxeter} we introduce Coxeter groups and graph products of primary cyclic groups and prove what is needed to establish Corollary~\ref{Artin_Coxeter}. In Section~\ref{sec_planes} we introduce free projective planes and prove Corollary~\ref{cor_free_planes}. \section{Proof of Main Theorem}\label{sec_proof} Our definition of finitely presented structure is standard, so we write $A = \langle \bar{a} \mid \varphi_1(\bar{a}), ..., \varphi_n(\bar{a}) \rangle$, where, for all $i \in [1, n]$, the formulas $\varphi_i(\bar{a})$ are assumed to be atomic $L$-formulas; for details see e.g. \cite[Section~9.2]{hodges} where this is explained and justified with care. Concerning the notions of length of an $L$-term and of length of an element $a$ of an $L$-structure $A$ with respect to a generating set $X$ for $A$, denoted as $lg_X(a)$, any reasonable definition (e.g. \cite[Chapter~1]{hodges}) makes our theorems true and so we prefer to remain vague. On the other hand, when dealing with groups or other particular structures, where the exact notion we choose might be relevant for the statements of the corresponding results, we use the notion of length established in that area of research (most notably we will do this for group theory). \begin{notation} Let $L$ be a finite language and $A$ a finitely generated $L$-structure. For the rest of this section we will assume that $A$ is finitely presented and we will fix one such presentation, and denote it as $A = \langle \bar{a} \mid \varphi_1(\bar{a}), ..., \varphi_n(\bar{a}) \rangle$, where, for all $i \in [1, n]$, the formulas $\varphi_i(\bar{a})$ are assumed to be atomic $L$-formulas.
\end{notation} \begin{notation}\label{isolation_notation} Let $A = \langle \bar{a} \mid \varphi_1(\bar{a}), ..., \varphi_n(\bar{a}) \rangle$, with $\bar{a} = (a_1, ..., a_m)$, all the $a_i$'s distinct and $\varphi_1(\bar{x}), ..., \varphi_n(\bar{x})$ a sequence of positive atomic formulas specifying a presentation of $A$ in the generators $\bar{a}$. Let $\psi(\bar{x})$ be the following formula: $$\bigwedge_{i \in [1, n]} \varphi_i(\bar{x}) \wedge \bigwedge_{i \neq j \in [1, n]} x_i \neq x_j.$$ Let then $X_*$ be the collection of $m$-tuples $\bar{b}$ of distinct elements of $A$ such that: \begin{enumerate}[(i)] \item $A \models \psi(\bar{b})$; \item $\langle \bar{b} \rangle_A \neq A$. \end{enumerate} For every $\bar{b} \in X_*$ fix terms $t_{(\bar{b}, 1)}(\bar{x}), ..., t_{(\bar{b}, m)}(\bar{x})$ s.t. $A \models \bigwedge_{i \in [1, m]} b_i = t_{(\bar{b}, i)}(\bar{a})$. \end{notation} \begin{remark} In the context of Notation~\ref{isolation_notation}, notice that if $A \models \psi(\bar{b})$, then the map $\bar{a} \mapsto \bar{b}$ is injective and extends uniquely to a homomorphism of $A$. \end{remark} \begin{lemma}\label{crucial_lemma} In the context of Notation~\ref{isolation_notation}, so that $A = \langle \bar{a} \rangle_A$, assume that $A$ is quasi-Hopfian. Then the $Aut(A)$-orbit of $\bar{a}$ is defined in $A$ by the $\Pi_1$-formula $\Theta(\bar{x})$: $$\psi(\bar{x}) \wedge \bigwedge_{\bar{b} \in X_*} \forall \bar{y} \neg (\bigwedge_{i \in [1, n]} \varphi_i(\bar{y}) \wedge \bigwedge_{i \in [1, n]} x_i = t_{(\bar{b}, i)}(\bar{y})).$$ \end{lemma} \begin{proof} We want to show that, for $\bar{b} = (b_1, ..., b_n)$, $A \models \Theta(\bar{b})$ if and only if there exists $\alpha \in Aut(A)$ such that $\alpha(\bar{a}) = \bar{b}$. Concerning the implication ``left-to-right'', suppose that $A \models \Theta(\bar{b})$. It suffices to show that $\bar{b} \notin X_*$, since then the map $f: A \rightarrow A$ which maps $a_i \mapsto b_i$ is on one hand surjective (recall the definition of $X_*$) and on the other hand injective on $\bar{a}$ (recall that $\psi(\bar{x})$ is a conjunct of $\Theta(\bar{x})$), and so, by quasi-Hopfianity, $f$ is an automorphism of $A$. Suppose that $\bar{b} \in X_*$; then we have: $$A \models \psi(\bar{a}) \wedge \bigwedge_{i \in [1, n]} b_i = t_{(\bar{b}, i)}(\bar{a}),$$ contradicting the fact that $A \models \Theta(\bar{b})$. Concerning the implication ``right-to-left'', let $\alpha \in Aut(A)$ and let $\bar{b} = \alpha(\bar{a})$; we want to show that $A \models \Theta(\bar{b})$. Clearly, $A \models \psi(\bar{b})$. For the sake of contradiction, suppose that for some $\bar{c} \in X_*$ we have that: $$A \models \exists \bar{y} (\psi(\bar{y}) \wedge \bigwedge_{i \in [1, n]} b_i = t_{(\bar{c}, i)}(\bar{y})).$$ Then there exists $\bar{d} \in A$ such that: \begin{equation}\label{eq} A \models \psi(\bar{d}) \wedge \bigwedge_{i \in [1, n]} b_i = t_{(\bar{c}, i)}(\bar{d}).
\end{equation} But then, since $A = \langle \bar{a} \rangle_A$, $\alpha \in Aut(A)$ and $\bar{b} = \alpha(\bar{a})$, by the second conjunct of (\ref{eq}) we have that $\langle \bar{d} \rangle_A = A$, and so by the quasi-Hopfianity of $A$ we have that: $$\beta: a_i \mapsto d_i \in Aut(A).$$ Furthermore: $$\gamma: d_i \mapsto t_{(\bar{c}, i)}(\bar{d}) = b_i \in Aut(A).$$ Hence, we have: $$\begin{array}{rcl} (\beta^{-1} \circ \gamma \circ \beta)(a_i) & = & (\beta^{-1} \circ \gamma)(d_i)\\ & = & \beta^{-1}(t_{(\bar{c}, i)}(d_1, ..., d_{n}))\\ & = & t_{(\bar{c}, i)}(\beta^{-1}(d_1), ..., \beta^{-1}(d_{n})) \\ & = & t_{(\bar{c}, i)}(a_1, ..., a_{n}) \\ & = & c_i. \\ \end{array}$$ and so the map $a_i \mapsto c_i = t_{(\bar{c}, i)}(\bar{a}) \in Aut(A)$, contradicting the fact that $\bar{c} \in X_*$. \end{proof} \begin{proof}[Proof of Theorem~\ref{main_theorem}] The first claim is by Lemma~\ref{crucial_lemma}. We show that under the additional assumptions we have that the $Aut(A)$-orbit of $(a_1, ..., a_n)$ in $A$ is definable by a {\em computable} $\Pi_1$-formula. It suffices to show that in this case the conjunction: $$\bigwedge_{\bar{b} \in X_*} \forall \bar{y} \neg (\bigwedge_{i \in [1, n]} \varphi_i(\bar{y}) \wedge \bigwedge_{i \in [1, n]} x_i = t_{(\bar{b}, i)}(\bar{y})).$$ is computably enumerable. To this end, let $X_+ = \{ \bar{b} \in A^n : A \models \psi(\bar{b}) \}$. We exhibit an algorithmic procedure which takes as input tuples $\bar{b} \in X_+$ and answers $\bf{YES}$ if $\bar{b} \in X_*$ and answers $\bf{NO}$ if $\bar{b} \in X_+ \setminus X_{*}$. Let $\bar{b} \in X_+$ and fix terms $t_{(\bar{b}, 1)}(\bar{x}), ..., t_{(\bar{b}, n)}(\bar{x})$ such that $A \models \bigwedge_{i \in [1, n]} b_i = t_{(\bar{b}, i)}(\bar{a})$. Notice that as the language is finite there are only finitely many terms of each length. Since $A$ is computable we can assume without loss of generality that for every $i \in [1, n]$ we have that $lg_{S}(b_i) \leq lg(t_{(\bar{b}, i)}(\bar{x}))$ (recall that $S = \{a_1, ..., a_n \}$). Now, let $k = F(lg_{S}(b_1), ..., lg_{S}(b_n))$ and enumerate all the elements $\alpha \in Aut(A)$ such that $\alpha = \alpha_1^{\pm 1} \circ \cdots \circ \alpha_m^{\pm 1}$, with $\alpha_i \in X$ and $m \leq k$, and call the resulting finite collection of automorphisms $B_0$. Then in order to decide if $\bar{b} \in X_*$ or not it suffices to check if for some $\beta \in B_0$ we have that: $$\beta(a_1) = t_{(\bar{b}, 1)}(\bar{a}) = b_1, ..., \beta(a_n) = t_{(\bar{b}, n)}(\bar{a}) = b_n,$$ and this is a computable task, since $A$ is assumed to be a computable structure. \end{proof} \begin{proof}[Proof of Corollary~\ref{strongly_rigid_corollary}] Let $G = \langle S\rangle_G$ be as in the assumption of the corollary. Let $\beta_1, ..., \beta_k$ be representatives of the cosets of $Inn(G)$ in $Aut(G)$ and let $\alpha_1, ..., \alpha_n$ be the inner automorphisms corresponding to the generators in $S$ (so that the automorphisms $\alpha_1, ..., \alpha_n$ generate $Inn(G)$). Then letting: $$X = \{\alpha_1, ..., \alpha_n\} \cup \{\beta_1, ..., \beta_k\} \subseteq Aut(G),$$ and $F: \omega^{n} \rightarrow \omega$ be the following function\footnote{We are not interested here in optimal functions $F$ that make $(\star)$ true.}: $F(m_1, ..., m_n) = (\sum_{i \in [1, n]} m_i) + 1$, we have that condition $(\star)$ of Theorem~\ref{main_theorem} is verified for this choice of $X$ and $F$.
\smallskip \noindent Concerning the claims about Coxeter groups, this is by Fact~\ref{fact1} and \cite{muller}; specifically, in \cite[page 539, line -10]{muller} it is observed that the results of \cite{muller} imply that strongly $2$-spherical Coxeter groups are strongly rigid, and it is well-known that if a finitely generated Coxeter group $W$ is strongly rigid, then $Inn(W)$ has finite index in $Aut(W)$. \end{proof} \begin{proof}[Proof of Corollary~\ref{free_groups_corollary}] It is well-known that these groups are Hopfian and have solvable word problem. Thus, to conclude, it suffices to show that the assumptions of Theorem~\ref{main_theorem} are satisfied. Concerning the case of free groups, take as $X$ the set of Nielsen transformations and let $F(m_1, ..., m_n) = \sum_{i \in [1, n]} m_i$; then, as noted in the proof of \cite[Theorem~2.6]{knight2}, Nielsen proved that condition $(\star)$ of Theorem~\ref{main_theorem} is verified for this choice of $X$ and $F$. Concerning the case of free abelian groups, this follows from the fact that $Aut(\mathbb{Z}^n)$ is the set of $n \times n$ invertible $\mathbb{Z}$-matrices, and this group is generated by finitely many explicit matrices (see e.g. \cite[Appendix~C]{elman}). The claim about the infinite dihedral group is by Corollary~\ref{Artin_Coxeter}. \end{proof} \section{Coxeter Groups and Graph Products of Groups}\label{sec_Coxeter} In this section we deal with applications to Coxeter groups, a class of groups that arises in a multitude of ways in several areas of mathematics, such as algebra \cite{humphreys}, geometry \cite{davis} and combinatorics \cite{brenti}. We now define what a Coxeter group is. \begin{definition}[Coxeter groups]\label{def_Coxeter_groups} Let $S$ be a set. A matrix $m: S \times S \rightarrow \{1, 2, \dots, \infty \}$ is called a {\em Coxeter matrix} if it satisfies: \begin{enumerate}[(1)] \item $m(s, s') = m(s' , s)$; \item $m(s, s') = 1 \Leftrightarrow s = s'$. \end{enumerate} For such a matrix, let $S^2_{*} = \{(s, s') \in S^2 : m(s, s' ) < \infty \}$. A Coxeter matrix $m$ determines a group $W$ with presentation: $$ \begin{cases} \text{Generators:} \; \; S \\ \text{Relations:} \; \; (ss')^{m(s,s')} = e, \text{ for all } (s, s' ) \in S^2_{*}. \end{cases} $$ A group with a presentation as above is called a Coxeter group, and the pair $(W, S)$ is called a Coxeter system. The rank of the Coxeter system $(W, S)$ is $|S|$. The rank of the group $W$ is the rank of any Coxeter system $(W, S)$ with $|S|$ minimal. In this paper we are only interested in Coxeter groups of finite rank. \end{definition} \begin{notation}\label{def_Coxeter_graph} In the context of Definition~\ref{def_Coxeter_groups}, the Coxeter matrix $m$ is often equivalently represented by a labeled graph $\Gamma$ whose node set is $S$ and whose edges are the pairs $\{s, s' \}$ such that $m(s, s') < \infty$, with label $m(s, s')$. Notice that some authors consider instead the graph $\Delta$ such that $s$ and $s'$ are adjacent iff $m(s, s') \geq 3$. In order to try to avoid confusion we refer to the first graph as the Coxeter graph of $(W, S)$ (and usually denote it with the letter $\Gamma$), and to the second graph as the Coxeter diagram of $(W, S)$ (and usually denote it with the letter $\Delta$). \end{notation} \begin{definition}\label{def_irreducible} Let $(W, S)$ be a Coxeter system with Coxeter diagram $\Delta$ (recall Notation~\ref{def_Coxeter_graph}). We say that $(W, S)$ is irreducible if $\Delta$ is connected.
\end{definition} \begin{definition}[Right-angled Coxeter and Artin groups]\label{def_Artin_Coxeter} Let $m$ be a Coxeter matrix and let $W$ be the corresponding Coxeter group. We say that $W$ is right-angled if the matrix $m$ has values in the set $\{ 1, 2, \infty\}$. In this case the Coxeter graph $\Gamma$ associated to $m$ is simply thought of as a graph (instead of a labeled graph), with edges corresponding to the pairs $\{ s, s' \}$ such that $m(s, s') = 2$. A right-angled Artin group is defined as in the case of right-angled Coxeter groups with the omission in the defining presentation of the requirement that generators have order~$2$. \end{definition} Artin groups will not play a role in the rest of the paper; we gave the definition of right-angled Artin groups in Definition~\ref{def_Artin_Coxeter} to give context to the conjecture made before Corollary~\ref{cor_free_planes}, i.e. that our methods might apply also to these structures. \begin{definition}[Strongly $2$-spherical Coxeter groups] Let $(W, S)$ be a Coxeter system of finite rank with Coxeter matrix $m$. We say that the Coxeter system $(W, S)$ is $2$-spherical if $m$ has only finite entries. We say that $(W, S)$ is strongly $2$-spherical if in addition $W$ is not finite and $(W, S)$ is irreducible. We say that the Coxeter group $W$ is $2$-spherical (resp. strongly $2$-spherical) if there is $S \subseteq W$ such that $(W, S)$ is a $2$-spherical (resp. strongly $2$-spherical) Coxeter system. \end{definition} It is a standard fact that a finitely generated group is computable if and only if it has solvable word problem \cite{rabin} -- this is why we state Facts~\ref{fact1}~and~\ref{fact_hop_graph_product}. \begin{fact}\label{fact1} Coxeter groups of finite rank are Hopfian and have \mbox{solvable word problem.} \end{fact} \begin{proof} As is well-known, such groups are linear groups over the real numbers and thus residually finite, and in particular Hopfian. The solvability of the word problem is also well-known (first proved by Tits); see e.g. \cite[Section~3.4]{davis} for a reference. \end{proof} \begin{definition} Let $(W, S)$ be a Coxeter system. We say that $W$ is a strongly rigid Coxeter group if for every $T \subseteq W$ such that $(W, T)$ is a Coxeter system there exists $w \in W$ such that $T = S^w$, where $S^w$ stands for $\{wsw^{-1} : s \in S \}$. \end{definition} \begin{definition}\label{def_cyclic_prod} Let $\Gamma = (V, E)$ be a graph and $\mathbf{p}: V \rightarrow \{ p^n : p \text{ prime, } n \geq 1 \}$ a graph coloring\footnote{The non-edges of $\Gamma$ may be equivalently interpreted as edges labelled $\infty$, in which case the graph $\Gamma$ is complete, but we chose not to use this convention in our presentation.}. We define a group $G(\Gamma, \mathbf{p})$ with the following presentation: $$ \langle V \mid a^{\mathbf{p}(a)} = 1, \; bc = cb : \mathbf{p}(a) \neq \infty \text{ and } b E c \rangle.$$ We call groups of the form $G(\Gamma, \mathbf{p})$ graph products of primary cyclic groups. \end{definition} \begin{convention}\label{convention_finite_graph} From now on all the graph products of primary cyclic groups $G(\Gamma, \mathbf{p})$ considered in this paper are assumed to be such that $\Gamma$ is finite. \end{convention} \begin{fact}[\cite{green}]\label{fact_hop_graph_product} Graph products of primary cyclic groups $G(\Gamma, \mathbf{p})$ (with $\Gamma$ finite, cf. Convention~\ref{convention_finite_graph}) are Hopfian and have solvable word problem.
\end{fact} \begin{definition}\label{complete_subgroups} Let $G(\Gamma, \mathbf{p})$ be a graph product of primary cyclic groups. A subgroup of $G$ which is generated by the vertices of a maximal complete subgraph (a.k.a. a maximal clique) of $\Gamma$ is called a maximal complete subgroup. \end{definition} \begin{notation} Let $G(\Gamma, \mathbf{p})$ be a graph product of primary cyclic groups. We denote by $Spe(G)$ the subgroup of $Aut(G)$ consisting of those automorphisms $\alpha$ of $G$ such that $\alpha(v)$ is a conjugate of $v$ for every $v \in \Gamma$. We denote by $F(\Gamma)$ the subgroup of $Aut(G)$ consisting of those automorphisms of $G$ which map each maximal complete subgroup of $G$ to a maximal complete subgroup of $G$ (cf. Definition~\ref{complete_subgroups}). \end{notation} \begin{remark} Let $G = G(\Gamma, \mathbf{p})$ be a graph product of primary cyclic groups and denote by $G_{ab}$ the abelianization of $G$. Then $F(\Gamma)$ is isomorphic to the image of $Aut(G)$ under the natural map $Aut(G) \rightarrow Aut(G_{ab})$. In particular, $F(\Gamma)$ is finite. \end{remark} \begin{fact}[{\cite[Theorem~1.2]{gutierrez}}]\label{semi-direct} Let $G = G(\Gamma, \mathbf{p})$ be a graph product of primary cyclic groups. Then $Aut(G)$ admits the following semi-direct product decomposition: $$Aut(G) = Spe(G) \rtimes F(\Gamma).$$ \end{fact} \begin{definition}[{\cite[Proposition~4.2, Definition~4.3]{laurence}}]\label{def_length_autos} Let $G(\Gamma, \mathbf{p})$ be a graph product of primary cyclic groups. For $\alpha \in Spe(G)$ with $\alpha(v) = w_v v w_v^{-1}$ and $w_v v w_v^{-1}$ a normal form, we define the length of $\alpha$, denoted as $|\alpha|$, to be $\sum_{v \in \Gamma} lg_{\Gamma}(w_v)$. \end{definition} \begin{definition}\label{partial_conj} Let $\Gamma$ be a graph, $s \in \Gamma$ and $C$ a union of connected components of $\Gamma \setminus N^*(s)$, where $N^*(s) = \left\{ s' \in \Gamma : s E_{\Gamma} s' \right\} \cup \{ s \}$. We define a map $\pi_{(s, C)}$ as: $$\begin{cases} \pi_{(s, C)}(t) = sts^{-1} \;\;\;\; \text{ if } t \in C \\ \pi_{(s, C)}(t) = t \;\;\;\;\;\;\; \text{ otherwise. } \end{cases} $$ The maps of the form $\pi_{(s, C)}$ are called the {\em partial conjugations} of $G(\Gamma, \mathbf{p})$. \end{definition} \begin{fact}[\cite{gutierrez}]\label{partial_conj_fact} The partial conjugations are automorphisms of $G(\Gamma, \mathbf{p})$. \end{fact} \begin{fact}[{\cite[Theorem~4.1]{laurence}}]\label{laurence} Let $G(\Gamma, \mathbf{p})$ be a graph product of primary cyclic groups. Then $Spe(G)$ is generated by the set $X$ of partial conjugations corresponding to the graph $\Gamma$. Further, if $\alpha \in Spe(G)$, then $lg_X(\alpha) \leq |\alpha|$ (cf. Definition~\ref{def_length_autos}). \end{fact} \begin{proof}[Proof of Corollary~\ref{Artin_Coxeter}] Let $G(\Gamma, \mathbf{p})$ be a graph product of primary cyclic groups. Let $\alpha_1, ..., \alpha_n$ be a list of the partial conjugations corresponding to the graph $\Gamma$ and let $\beta_1, ..., \beta_k$ be a list of the elements of $F(\Gamma)$. Let also: $$X = \{\alpha_1, ..., \alpha_n\} \cup \{\beta_1, ..., \beta_k\} \subseteq Aut(G),$$ and $F: \omega^{n} \rightarrow \omega$ be the following function\footnote{We are not interested here in optimal functions $F$ that make $(\star)$ true.}: $F(m_1, ..., m_n) = (\sum_{i \in [1, n]} m_i) + 1$.
Then, by Fact~\ref{fact1}, we can apply Theorem~\ref{main_theorem}, and by Facts~\ref{semi-direct}~and~\ref{laurence} we have that $(\star)$ is verified for this choice of $X$ and $F$, and so we are done. \end{proof} \section{The Free Projective Plane of Rank $4$}\label{sec_planes} \begin{definition}[{\cite{hall}}]\label{def_plane} A {\em partial plane} is a system of points and lines satisfying: \begin{enumerate}[(A)] \item through any two distinct points $p$ and $p'$ there is at most one line $p \vee p'$; \item any two distinct lines $\ell$ and $\ell'$ intersect in at most one point $\ell \wedge \ell'$. \end{enumerate} We say that a partial plane is a {\em projective plane} if in (A)-(B) above we replace ``at most one'' with ``exactly one''. We say that a projective plane is non-degenerate if it contains a quadrangle, i.e. four points such that no three of them are collinear. \end{definition} \begin{definition}[{\cite{hall}}]\label{free_extension} Given a partial plane $P$ we define a chain of partial planes $(P_n : n < \omega)$, by induction on $n < \omega$, as follows: \newline $n = 0)$. Let $P_n = P$. \newline $n = 2k +1)$. For every pair of distinct points $p, p' \in P_{2k}$ not joined by a line add a new line $p \vee p'$ to $P_{2k}$ incident \mbox{with only $p$ and $p'$. Let $P_n$ be the resulting plane.} \newline $n = 2k >0)$. For every pair of parallel lines $\ell, \ell' \in P_{2k-1}$ add a new point $\ell \wedge \ell'$ to $P_{2k-1}$ incident \mbox{with only $\ell$ and $\ell'$. Let $P_n$ be the resulting plane.} \newline We define the {\em free projective extension} of $P$ to be $F(P) : = \bigcup_{n < \omega} P_n$. \end{definition} \begin{notation}\label{pi_n} Given $4 \leq n \leq \omega$, we let $\pi_0^n$ be the partial plane consisting of a line $\ell$, $n-2$ points on $\ell$ and $2$ points off of $\ell$, and we let $\pi^n = F(\pi^n_0)$. We refer to the plane $\pi^n$, for $4 \leq n \leq \omega$, as the free projective plane of rank $n$. Further, given $k < \omega$, we say that $x \in F(\pi^n_0) = \bigcup_{m < \omega} P_m$ is of stage $k$ if $x \in P_k \setminus P_{k-1}$. \end{notation} \begin{notation}\label{notation_theory} Model-theoretically we consider projective planes $P$ as $L$-structures in a language $L = \{ 0, 1, S_1, S_2, I, \vee, \wedge \}$, where: \begin{enumerate}[(i)] \item $0$ and $1$ are constant symbols; \item $S_1$ specifies the set of points of $P$ and $S_2$ specifies the set of lines of $P$; \item $I$ is a symmetric binary relation specifying the point-line incidence relation; \item the interpretations of $\wedge$ (intersection of lines) and $\vee$ (join of points) are extended naturally so that $(P, 0, 1, \wedge, \vee)$ becomes a modular geometric lattice. Explicitly, for $p, p' \in S_1$ and $\ell, \ell' \in S_2$ we let $p \wedge p' = 0$, $\ell \vee \ell' = 1$ and: $$p \wedge \ell = \begin{cases} p \; \text{ if } \; p I \ell \\ 0 \; \text{ otherwise;} \end{cases} p \vee \ell = \begin{cases} \ell \; \text{ if } \; p I \ell \\ 1 \; \text{ otherwise.} \end{cases}$$ \end{enumerate} \end{notation} \begin{remark}\label{collineation} Notice that under this choice of language $L$, if $P$ is a projective plane then $Aut(P)$ is the collineation group of $P$, i.e. the set of bijections of $P$ sending points to points, lines to lines and preserving the point-line incidence relation. \end{remark} \begin{fact}\label{hopfian_fact} Let $4 \leq n < \omega$. Then $\pi^n$ is finitely presented, quasi-Hopfian but not Hopfian.
\end{fact} \begin{proof} The fact that $\pi^n$ is finitely presented is clear. Concerning quasi-Hopfianity, let $f: \pi^n \rightarrow \pi^n$ be a surjective homomorphism of $\pi^n$ which is injective on $\pi^n_0$; then, by \cite[Th.~3]{sandler_hopfian}, $f(\pi^n_0)$ generates $\pi^n$ freely, hence $f \in Aut(\pi^n)$. Finally, the fact that $\pi^n$ is not Hopfian is proved in \cite{johnson}. \end{proof} \begin{proposition}\label{computable_fact} Let $4 \leq n < \omega$. Then $\pi^n$ is computable. \end{proposition} \begin{proof} In \cite{kobaev} it is proved that, for $4 \leq n < \omega$, $\pi^n$ is computable in a language specifying the set of points, the set of lines and the graph of the partial functions $\vee$ and $\wedge$. Furthermore, in \cite{nikitin} it is proved, under the same computable representation, that the incidence problem for $\pi^n$ is decidable. Thus, it is immediate to see that $\pi^n$ is computable also with respect to our choice of language (Notation~\ref{notation_theory}). \end{proof} \begin{notation} Clearly for $n = 4$ we can consider $\pi^n_0$ as consisting simply of four points $A_1, A_2, B_1, B_2$. Let now $a_1 = (A_1 \vee A_2) \wedge (B_1 \vee B_2)$ and $a_2 = (A_1 \vee B_1) \wedge (A_2 \vee B_2)$, and consider the collineations (cf. Remark~\ref{collineation} for a definition of collineation) of $\pi^4$ determined by the following assignments: $$\theta_1 = \left( \begin{array}{c} A_1 \mapsto A_2 \\ A_2 \mapsto A_1 \\ B_1 \mapsto B_1 \\ B_2 \mapsto B_2 \end{array} \right), \quad \theta_2 = \left( \begin{array}{c} A_1 \mapsto A_2 \\ A_2 \mapsto B_1 \\ B_1 \mapsto B_2 \\ B_2 \mapsto A_1 \end{array} \right), \quad \phi = \left( \begin{array}{c} A_1 \mapsto A_1 \\ A_2 \mapsto a_1 \\ B_1 \mapsto B_1 \\ B_2 \mapsto a_2 \end{array} \right). $$ \end{notation} \begin{fact}[{\cite{sandler}}] $Aut(\pi^4) = \langle \theta_1, \theta_2, \phi \rangle_{Aut(\pi^4)}$, where $\theta_1$ and $\theta_2$ generate a group $S_4$ isomorphic to the symmetric group on the four points $A_1, A_2, B_1, B_2$, while $\phi^2 = id_{\pi^4}$. Thus, any element $\alpha \in Aut(\pi^4)$ can be written as follows, where $P_{i_j} \in S_4$: \begin{equation}\tag{$*$} \alpha = P_{i_1} \circ \phi \circ P_{i_2} \circ \cdots \circ P_{i_n} \circ \phi \circ P_{i_{n+1}}. \end{equation} We say that $\alpha \in Aut(\pi^4)$ is of length (at most) $n$ if $\alpha$ can be written in the form $(*)$ above and in $(*)$ there are $n$ occurrences of the collineation $\phi$. \end{fact} \begin{definition} We say that $\alpha \in Aut(\pi^4)$ is of stage $n < \omega$ if for every $C \in \{A_1, A_2, B_1, B_2\}$ we have that $\alpha(C)$ is of stage $ \leq n$ (in the sense of Notation~\ref{pi_n}). \end{definition} \begin{fact}[{\cite{sandler}}]\label{sandler_fact} Every collineation $\alpha \in Aut(\pi^4)$ of stage $n$ is of length $\leq n$.
\end{fact} \begin{proof}[Proof of Corollary~\ref{cor_free_planes}] Let $X = \{\theta_1, \theta_2, \phi\}$ and $F: \omega^{n} \rightarrow \omega$ be the following function\footnote{We are not interested here in optimal functions $F$ that make $(\star)$ true.}: $F(m_1, ..., m_n) = \sum_{i \in [1, n]} 2(m_i + 1)$. Then, by Facts~\ref{hopfian_fact}~and~\ref{sandler_fact} and Proposition~\ref{computable_fact}, we can apply Theorem~\ref{main_theorem} and $(\star)$ is verified \mbox{for this choice of $X$ and $F$.} \end{proof}
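At the level of an example, the bounded search over $B_0$ used in the proof of Theorem~\ref{main_theorem} can be spelled out in a few lines of code once a computable presentation is fixed. The following Python sketch is only a toy illustration for the free abelian group $\mathbb{Z}^2$, whose automorphisms we represent as integer matrices (so that $Aut(\mathbb{Z}^2) = GL_2(\mathbb{Z})$); the generating set $X$ is a standard one, and the depth bound, which plays the role of $F$, is chosen by hand here rather than derived from condition $(\star)$.
\begin{verbatim}
# Toy illustration of the bounded search B_0, for A = Z^2: decide whether the
# matrix with columns (b1, b2) is reached by a word of length <= depth in a
# fixed finite generating set X of Aut(Z^2) = GL_2(Z).
import numpy as np

S = np.array([[0, -1], [1, 0]])       # order-4 rotation, det = 1
S_inv = np.array([[0, 1], [-1, 0]])
T = np.array([[1, 1], [0, 1]])        # shear, det = 1
T_inv = np.array([[1, -1], [0, 1]])
J = np.array([[1, 0], [0, -1]])       # reflection, det = -1, its own inverse
X = [S, S_inv, T, T_inv, J]           # S, T generate SL_2(Z); adding J gives GL_2(Z)

def in_bounded_orbit(target, depth):
    """Breadth-first search over words in X of length <= depth (the role of F)."""
    target_key = tuple(np.array(target, dtype=int).flatten())
    seen = {tuple(np.eye(2, dtype=int).flatten())}
    frontier = [np.eye(2, dtype=int)]
    for _ in range(depth):
        new_frontier = []
        for M in frontier:
            for g in X:
                N = M @ g
                key = tuple(N.flatten())
                if key not in seen:
                    seen.add(key)
                    new_frontier.append(N)
        frontier = new_frontier
    return target_key in seen

print(in_bounded_orbit([[1, 1], [0, 1]], depth=1))  # True: the shear T itself
print(in_bounded_orbit([[2, 0], [0, 1]], depth=6))  # False: det = 2, not invertible over Z
# Sanity check: the candidate images of the standard basis form an automorphic
# image if and only if the corresponding matrix has determinant +1 or -1.
\end{verbatim}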
\section{Introduction and \mbox{p-A} data analysis} The suppression of charmonium states in nuclear collisions is considered one of the most powerful signatures of the production of a deconfined state~\cite{Sat86}. However, it was soon realized that cold nuclear matter effects, and in particular the interaction of the projectile and target nucleons with the $c\overline {c}$ pair, may sizeably contribute to the observed suppression~\cite{Ger88}. Such effects are usually studied in \mbox{p-A} collisions, then extrapolated to \mbox{A-A} and compared with the observed yield in nuclear collisions, as a function of centrality. At the SPS, the NA50 experiment has performed an accurate measurement of J/$\psi$ production in \mbox{p-A} collisions at 400/450 GeV~\cite{Ale06}, i.e. with an incident proton energy higher than the energy per nucleon of \mbox{Pb-Pb}~\cite{Ale05} and \mbox{In-In}~\cite{Arn07} collisions, studied by the NA50 and NA60 experiments, respectively. Cold nuclear matter effects have been parameterized by fitting the A-dependence of the J/$\psi$ production cross section per \mbox{N-N} collision with the usual $A^{\alpha}$ power-law, or calculating, in the frame of the Glauber model, the J/$\psi$ absorption cross section $\sigma_{abs}^{\rm J/\psi}$. An extrapolation to \mbox{A-A}, based on the assumption of a constant $\sigma_{abs}^{\rm J/\psi}$ as a function of incident energy and c.m. rapidity, has revealed, by comparison with J/$\psi$ production yields in \mbox{A-A}, the presence of a suppression which exceeds cold nuclear matter effects (anomalous suppression). In order to provide reference \mbox{p-A} data collected at the same energy and in the same kinematic domain as the \mbox{A-A} data, NA60 has studied for the first time J/$\psi$ production in \mbox{p-A} collisions at 158 GeV. The incident beam, with an intensity of $\sim 5\cdot 10^8$ protons/s, was sent towards a target system made of several subtargets, with mass numbers ranging from 9 (Be) to 238 (U), which were simultaneously exposed to the beam. For details on the NA60 experiment, based on a muon spectrometer coupled to a Si pixel vertex telescope, see e.g.~\cite{Usa05}. The analysis of the J/$\psi$ production data at 158 GeV has been performed in the rapidity domain $0.28<y_{cm}<0.78$, covered with reasonable acceptance by all the sub-targets. The preliminary results shown in this paper refer to cross-section ratios $\sigma_{pA}^{\rm J/\psi}/\sigma_{pBe}^{\rm J/\psi}$ between the target with mass number $A$ and the lightest one (Be). In this way, the beam luminosity factors cancel out, apart from a small beam attenuation factor. On the other hand, the track reconstruction efficiencies in the vertex spectrometer do not cancel out completely, since each target sees the vertex spectrometer under a slightly different angle. Therefore, the efficiency of the vertex spectrometer has been computed with the highest possible granularity (down to the single-pixel level, when track statistics is large enough) and on a run-per-run basis. As a check, we have verified, by injecting these efficiencies into our Monte Carlo simulation, that we were able to reproduce the muon pair matching rate (of the order of $\sim60$\%), and its time evolution, observed for J/$\psi$ events in real data. \section{Results on \mbox{p-A} collisions} In Fig.~\ref{fig:1}(left) we present the J/$\psi$ cross-section ratios, relative to Be, for the 7 nuclear targets (Be, Al, Cu, In, W, Pb and U) exposed to the beam.
The results are shown as a function of $L$, the mean thickness of nuclear matter crossed by the $c\overline{c}$ pair in its way through the nucleus. \begin{figure}[ht] \centering \includegraphics[scale=0.33]{fig1left.eps} \includegraphics[scale=0.33]{fig1right.eps} \caption[]{Left: J/$\psi$ cross-section ratios for \mbox{p-A} collisions at 158 GeV (circles) and 400 GeV (squares), as a function of $L$. Right: compilation of $\alpha$ vs $x_F$.} \label{fig:1} \end{figure} The systematic errors shown in Fig.~\ref{fig:1}(left) include contributions from uncertainties on target thicknesses, on the $y$ distribution used in the acceptance calculation, and on the reconstruction efficiency. We only quote the fraction of the total systematic error which is not common to all the points (i.e. the one which affects the evaluation of nuclear effects). By fitting the A-dependence of the cross-section ratios in the frame of the Glauber model, we get $\sigma_{abs}^{\rm J/\psi}$(158 GeV)= 7.6$\pm$0.7(stat.)$\pm$0.6(syst.) mb. Alternatively, a fit using the $A^{\alpha}$ power-law gives $\alpha$(158 GeV)=$0.882\pm 0.009\pm 0.008$. In Fig.~\ref{fig:1}(left) we also show the results of the same analysis, carried out on a data sample taken by NA60 at 400 GeV, with the same configuration of the experimental set-up, in order to minimize the relative systematic errors. These results refer to the rapidity range $-0.17<y_{cm}<0.33$, corresponding to the same rapidity in the lab of the 158 GeV data. We clearly note that the A-dependence of this data sample is less steep than the one measured at 158 GeV. We get $\sigma_{abs}^{\rm J/\psi}$(400 GeV)= 4.3$\pm$0.8(stat.)$\pm$0.6(syst.) mb, and $\alpha$(400 GeV)=$0.927\pm 0.013\pm 0.009$. Nuclear effects on J/$\psi$ production at 400 GeV had already been studied by NA50, in the range $-0.425 < y_{cm} < 0.575$, close to the one of the NA60 data. Their result~\cite{Ale06}, $\sigma_{abs}^{\rm J/\psi}$(400 GeV)=4.6$\pm$0.6 mb, is in excellent agreement with our findings. The observation of a dependence of nuclear effects on the incident proton energy can be further investigated by comparing our results with previous studies done at fixed target energies. To do that, in Fig.~\ref{fig:1}(right) we show a compilation of $\alpha$ values as a function of $x_F$, including results from HERA-B at 920 GeV~\cite{Abt09}, from E866 at 800 GeV~\cite{Lei00} and from NA50 at 450 GeV~\cite{Ale04}. Our analysis at 400 and 158 GeV has been plotted in Fig.~\ref{fig:1}(right) for various $x_F$ bins. We notice that nuclear effects become stronger (smaller $\alpha$) at higher $x_{\rm F}$, and that, for a certain $x_{\rm F}$, they are also stronger for a lower incident proton energy. It is worthwhile to note that a theoretical description of the kinematic dependence of cold nuclear matter effects on J/$\psi$ production is still missing. An interplay of final state dissociation of the $c\overline{c}$ pair, parton shadowing and possibly initial state energy loss has been shown to reproduce some of the observed features (see e.g.~\cite{Vog00}), but clearly more work is needed in order to arrive at a quantitative description. \section{Anomalous J/$\psi$ suppression in \mbox{In-In} and \mbox{Pb-Pb} collisions} The \mbox{p-A} results at 158 GeV shown in the previous section have been collected at the same energy and in the same $x_{\rm F}$ range of the SPS \mbox{A-A} data. 
It is therefore natural to use these results to calculate the expected size of cold nuclear matter effects on J/$\psi$ production in nuclear collisions. In order to do that, we have determined, as a function of the forward energy $E_{\rm ZDC}$ and using the Glauber model, the expected shape $dN^{expec}_{{\rm J}/\psi}/dE_{\rm ZDC}$, assuming that J/$\psi$ production scales with the number of \mbox{N-N} collisions and that the produced J/$\psi$ are absorbed in nuclear matter according to the value $\sigma_{abs}^{\rm J/\psi}$(158 GeV) given in the previous section. The measured $dN_{{\rm J}/\psi}/dE_{\rm ZDC}$ has then been normalized to $dN^{expec}_{{\rm J}/\psi}/dE_{\rm ZDC}$ using the procedure detailed in Ref.~\cite{Arn07}. This procedure, which was used up to now, does not take explicitly into account, when extrapolating from \mbox{p-A} to \mbox{A-A}, the presence of shadowing effects. It can be shown~\cite{Arn09} that in the kinematic region where \mbox{A-A} data are measured ($0<y_{cm}<1$), this method leads to an overestimation of the anomalous suppression, of the order of $\sim 5\%$. We have therefore corrected our result for this small bias, using the EKS98~\cite{Esk99} parameterization of shadowing effects. In Fig.~\ref{fig:2}(left) we present, as a function of the number of participants, our result for the anomalous J/$\psi$ suppression in \mbox{In-In} and \mbox{Pb-Pb} collisions. We can see that up to $N_{part}\sim 200$ the J/$\psi$ yield is, within errors, compatible with our extrapolation of cold nuclear matter effects. For $N_{part}> 200$ an anomalous suppression is present, which reaches $\sim20-30\%$ for central \mbox{Pb-Pb} collisions. With this new evaluation of the anomalous suppression, the effect becomes smaller with respect to the past. This is essentially due to the larger $\sigma_{abs}^{\rm J/\psi}$ value now used in the determination of the nuclear reference. \begin{figure}[ht] \centering \includegraphics[scale=0.33]{fig2leftnew.eps} \includegraphics[scale=0.33]{fig2rightnew.eps} \caption[]{Left: anomalous J/$\psi$ suppression in \mbox{In-In} (circles) and \mbox{Pb-Pb} collisions (triangles), as a function of $N_{\rm part}$. The boxes around the \mbox{In-In} points represent correlated systematic errors. The filled box corresponds to the uncertainty in the absolute normalization of the \mbox{In-In} points. A 12\% global error, due to the uncertainty on $\sigma_{abs}^{\rm J/\psi}$(158 GeV), is not shown. Right: the J/$\psi$ polarization parameters $\lambda$ and $\nu$, in the helicity reference frame, as a function of $p_{\rm T}$, for NA60 data, compared with recent results from HERA-B. The boxes represent the total errors.} \label{fig:2} \end{figure} \section{J/$\psi$ polarization in \mbox{p-A} collisions} A study of the J/$\psi$ polarization in \mbox{p-A} collisions can be performed by studying the angular distribution of the decay muons. This study has been shown to be relevant, at collider energies, for investigating the quarkonium production models, since various theoretical approaches~\cite{Lan09} predict different values, as a function of $p_{\rm T}$, for the polarization parameters $\lambda$, $\mu$, $\nu$, obtained through a fit of the muon angular distribution $d^2\sigma/d\cos\theta d\phi \propto 1+\lambda\cos^2\theta+\mu\sin 2\theta\cos\phi+(\nu/2)\sin^2\theta\cos 2\phi$. 
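For reference, this angular distribution can be evaluated directly. The short Python sketch below uses placeholder parameter values (illustrative numbers, not measured results) and checks that the $\phi$-integrated $\cos\theta$ projection depends on $\lambda$ only:
\begin{verbatim}
# Illustration only: the dimuon decay angular distribution used in the
# polarization fits, evaluated for placeholder (non-measured) parameter values.
import numpy as np

def W(cos_theta, phi, lam, mu, nu):
    """d^2 sigma / dcos(theta) dphi, up to an overall normalisation."""
    sin2 = 1.0 - cos_theta**2                     # sin^2(theta)
    sin_2theta = 2.0 * cos_theta * np.sqrt(sin2)  # sin(2 theta)
    return (1.0 + lam * cos_theta**2
            + mu * sin_2theta * np.cos(phi)
            + 0.5 * nu * sin2 * np.cos(2.0 * phi))

lam, mu, nu = -0.2, 0.0, 0.1                      # placeholder values
phi = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
for ct in (0.0, 0.5, 0.9):
    integral = W(ct, phi, lam, mu, nu).mean() * 2.0 * np.pi   # integral over phi
    expected = 2.0 * np.pi * (1.0 + lam * ct**2)  # mu and nu terms average to zero
    print(f"cos(theta) = {ct:3.1f}: integral = {integral:.4f}, "
          f"2*pi*(1 + lam*cos^2) = {expected:.4f}")
\end{verbatim}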
In Fig.~\ref{fig:2}(right) we present our preliminary results for $\lambda$ and $\nu$ ($\mu$ is compatible with zero everywhere), obtained in the helicity reference frame, compared with recent HERA-B results~\cite{Abt092}. The data seem to indicate slightly negative $\lambda$ values at low $p_{\rm T}$, which level off around zero at larger transverse momentum. $\nu$ values are close to zero in the $p_{\rm T}$ range explored by NA60. \section {Conclusions} We have shown new results on J/$\psi$ production in \mbox{p-A} collisions at 158 and 400 GeV. We see that nuclear effects become more important when moving towards lower energy, an observation that remains valid when extending the comparisons to other sets of results. Using the new 158 GeV results to determine the expected cold nuclear matter effects in \mbox{A-A} leads to a smaller anomalous suppression than previous estimates. The effect is nevertheless still sizeable ($\sim$25\%) for central \mbox{Pb-Pb} collisions.
\section{Introduction} \label{s.intro} Let $G = (V_G,E_G)$ be a simple graph (no loops or multiple edges) with vertex set $V_G = \{x_1, \dots, x_n\}$ and edge set $E_G$. We can associate to $G$ a square-free monomial ideal \[\mathcal{I}(G) = (x_ix_j ~|~ \{x_i, x_j\} \in E_G) \subset R = k[x_1, \dots, x_n].\] The ideal $\mathcal{I}(G)$ is usually referred to as the \textbf{edge ideal} of $G$. In recent years there has been a flurry of work investigating how the combinatorial data of $G$ appears in algebraic invariants and properties of $\mathcal{I}(G)$. We mention, for example, the works \cite{eghp, FVTchordal, HVT, HVTsurvey, HHZ, J, JK, K, SVV, V1, Z}. In this paper, we examine how a particular structure of $G$ affects the Cohen-Macaulayness and sequential Cohen-Macaulayness of its edge ideal. The property of being sequentially Cohen-Macaulay was first introduced by Stanley \cite{Stanley} as a generalization of the well-known property of being Cohen-Macaulay. We recall the definition of sequentially Cohen-Macaulay modules over a polynomial ring. \begin{definition} \label{d.scm} Let $M$ be a graded module over $R = k[x_1,\dots,x_n]$. We say that $M$ is \textbf{sequentially Cohen-Macaulay} if there exists a filtration \[ 0=M_0 \subset M_1 \subset \cdots \subset M_r = M \] of $M$ by graded $R$-modules such that $\dim M_i/M_{i-1} < \dim M_{i+1}/M_i$ for all $i$, where $\dim$ denotes Krull dimension, and $M_i/M_{i-1}$ is Cohen-Macaulay for all $i$. \end{definition} A graph $G$ is said to be \textbf{(sequentially) Cohen-Macaulay} if $R/\mathcal{I}(G)$ is a (sequentially) Cohen-Macaulay $R$-module. Stanley introduced the notion of sequential Cohen-Macaulayness in connection with work of Bj\"orner and Wachs on nonpure shellability. Pure shellable complexes are Cohen-Macaulay, and Stanley identified sequential Cohen-Macaulayness as the appropriate analogue in the nonpure setting; that is, all nonpure shellable complexes are sequentially Cohen-Macaulay. The notion of sequential Cohen-Macaulayness has arisen in a number of interesting contexts. For example, Peskine characterized the sequentially Cohen-Macaulay $R$-modules in terms of vanishing or Cohen-Macaulayness of certain Ext modules. For a proof, see Herzog and Sbarra's paper \cite{HSb}, where they also show that if $I$ is a homogeneous ideal, $R/I$ is sequentially Cohen-Macaulay if and only if the local cohomology modules of $R/I$ and $R/\text{Gin}(I)$ have the same Hilbert functions (using the reverse-lex gin). On the combinatorial side, one particularly interesting result is due to Duval, who showed in \cite{D} that algebraic shifting preserves the $h$-triangle of a simplicial complex $\Delta$ if and only if $\Delta$ is sequentially Cohen-Macaulay. Classifying all Cohen-Macaulay or sequentially Cohen-Macaulay graphs is intractable, and thus it is natural to study some special classes of graphs. Of particular interest is the class of {\it trees} and {\it forests}, or slightly more generally, {\it chordal} graphs. For example, Faridi \cite{Faridi} showed that simplicial trees are sequentially Cohen-Macaulay; the first author and Van Tuyl \cite{FVTchordal} extended this property in the case of graphs to the class of all chordal graphs; and, on the other hand, Herzog, Hibi, and Zheng \cite{HHZ} proved that a chordal graph is Cohen-Macaulay if and only if it is {\it unmixed}. Our paper complements this work, asking: Given an arbitrary graph $G$, how can one modify $G$ to obtain a sequentially Cohen-Macaulay graph?
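For concreteness, the generators of $\mathcal{I}(G)$ are simply the products of variables along the edges of $G$. The following small Python sketch (plain string manipulation rather than a computer algebra computation) lists them for the five-cycle $C_5$, a graph that will reappear below:
\begin{verbatim}
# Illustration only: the generators of the edge ideal I(G), here for G = C_5.
edges_C5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]

def edge_ideal_generators(edges):
    """One square-free quadratic generator x_i*x_j for every edge {x_i, x_j}."""
    return [f"x{i}*x{j}" for i, j in sorted(tuple(sorted(e)) for e in edges)]

print(edge_ideal_generators(edges_C5))
# -> ['x1*x2', 'x1*x5', 'x2*x3', 'x3*x4', 'x4*x5']
\end{verbatim}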
Motivated by Villarreal's work in \cite{V1}, we investigate the effect of adding ``whiskers'' to a graph. To add a \textbf{whisker} at a vertex $y$ to $G$, one adds a new vertex $x$ and the edge connecting $x$ and $y$ to $G$. We denote by $G \cup W(y)$ the graph obtained from $G$ by adding a whisker at $y$. More generally, if $S \subset V_G$ is a subset of the vertices of $G$, then we denote by $G \cup W(S)$ the graph obtained from $G$ by adding a whisker at each vertex in $S$. The origin of the name is clear from a picture of a whisker added to each vertex of a cycle, and the terminology appears in \cite{SVV}. A primary inspiration for this paper is Villarreal's theorem from \cite{V1} (where contributions of Vasconcelos, Herzog, and Fr\"oberg are also cited); see also \cite[Theorem 2.1]{SVV}. He showed that if $G$ is a graph, and $H$ is the graph formed by adding a whisker to {\it every} vertex of $G$, then $H$ is Cohen-Macaulay. This result is sharp: One needs in general to add a whisker to all vertices of $G$, and adding fewer whiskers can actually make a Cohen-Macaulay graph no longer Cohen-Macaulay. The goal of this paper is to explore how adding different configurations of whiskers to a graph $G$ affects the weaker property of a graph being sequentially Cohen-Macaulay. Our first main result is: \medskip \noindent {\bf Theorem~\ref{t.whiskerscm}.} Let $G$ be a simple graph and let $S \subset V_G$. Suppose that $G \backslash S$ is a chordal graph or a five-cycle $C_5$. Then $G \cup W(S)$ is a sequentially Cohen-Macaulay graph. \medskip Theorem \ref{t.whiskerscm} has a number of interesting consequences. For example, Corollary \ref{c.manycor} says that if $S \subset V_G$ is a vertex cover, then $G \cup W(S)$ is sequentially Cohen-Macaulay. Thus to create sequentially Cohen-Macaulay graphs by adding whiskers, the number of whiskers is not as important as their configuration. On the other hand, Corollary \ref{c.n-3} says that if $|S| \ge |V_G|-3$, then $G \cup W(S)$ is sequentially Cohen-Macaulay. This gives a bound on the number of vertices so that adding this many whiskers, regardless of their configuration, always results in a sequentially Cohen-Macaulay graph. Furthermore, we recover Villarreal's theorem on creating Cohen-Macaulay graphs in Corollary~\ref{c.whiskercm}. Our approach uses Alexander duality of edge ideals. If $\Delta$ is a simplicial complex, the faces of the \textbf{Alexander dual} complex $\Delta^*$ are the complements of the nonfaces of $\Delta$. One can define Alexander duality for square-free monomial ideals without reference to simplicial complexes; the duality maps generators of $I$ to primary components of $I^{\vee}$. A powerful feature of Alexander duality is that it allows us to link the sequentially Cohen-Macaulay property with homological features of the dual. To do this, Herzog and Hibi introduced the concept of componentwise linearity \cite{HH}. If $I$ is a homogeneous ideal, write $(I_d)$ for the ideal generated by all homogeneous degree $d$ elements of $I$. The ideal $I$ is \textbf{componentwise linear} if for all $d \in \mathbb{N}$, $(I_d)$ has a linear resolution. All ideals with linear resolutions are componentwise linear, but so are all stable ideals and a number of others; see, e.g., \cite{FVT}. When $I$ is a square-free monomial ideal, Herzog and Hibi found a useful criterion for $I$ to be componentwise linear. Write $(I_{[d]})$ for the ideal generated by all square-free monomials of degree $d$ in $I$. 
Then $I$ is componentwise linear if and only if for all $d \in \mathbb{N}$, $(I_{[d]})$ has a linear resolution. Moreover, in \cite{HH}, Herzog and Hibi generalized the analogous result on Cohen-Macaulayness from \cite[Theorem 3]{ER}. \begin{theorem} \label{t.scmcwl} Let $I$ be a square-free monomial ideal in $R=k[x_1,\dots,x_n]$. $R/I$ is sequentially Cohen-Macaulay over $k$ if and only if $I^{\vee}$ is componentwise linear in $R$. \end{theorem} Theorem~\ref{t.scmcwl} allows us to investigate the sequentially Cohen-Macaulay property of an ideal by determining when the Alexander dual is componentwise linear, and Herzog and Takayama's theory of linear quotients \cite{HT} gives a useful way to prove that an ideal is componentwise linear. A monomial ideal $I$ is said to have \textbf{linear quotients} if $I$ has a system of minimal generators $\{u_1, \dots, u_r\}$ with $\deg u_1 \le \dots \le \deg u_r$ such that for all $1 \le i \le r-1$, $(u_1,\dots,u_i):(u_{i+1})$ is generated by linear forms. It is easy to see that if $I$ is an ideal generated in a single degree, and $I$ has linear quotients, then $I$ has a linear resolution. We shall use this observation frequently: If $(I_d)$ has linear quotients for all $d$, then $I$ is componentwise linear. In particular, if $I$ is a square-free monomial ideal, and for all $d$, $(I_{[d]})$ has a linear resolution, then $I$ is componentwise linear. Note that having linear quotients is independent of the characteristic of the field $k$. The proof of Theorem~\ref{t.whiskerscm} is based upon examining the degree $d$ pieces of the Alexander duals of $\mathcal{I}(G \backslash S)$ and $\mathcal{I}(G \cup W(S))$; we make the following definition: \begin{definition} \label{d.graphlq} We say that a graph $G$ has \textbf{dual linear quotients} if for each degree $d \in \mathbb{N}$, $(\mathcal{I}(G)^{\vee}_{[d]})$ has linear quotients. \end{definition} If $G$ has dual linear quotients, then $G$ is sequentially Cohen-Macaulay (over a field of any characteristic); this approach has been used in papers of Faridi \cite{Faridi} and the first author and Van Tuyl \cite{FVTchordal}, and we exploit it throughout the paper. Our second main result addresses the converse of Theorem~\ref{t.whiskerscm}. We give a necessary condition on $G \backslash S$ for $G \cup W(S)$ to be sequentially Cohen-Macaulay. \medskip \noindent{\bf Theorem~\ref{t.notscm}.} Let $G$ be a simple graph and let $S \subset V_G$. If $G \backslash S$ is not a sequentially Cohen-Macaulay graph, then $G \cup W(S)$ is not sequentially Cohen-Macaulay. \medskip To prove Theorem \ref{t.notscm}, we examine syzygies of the Alexander dual of $\mathcal{I}(G \cup W(S))$ via simplicial homology and {\it upper Koszul simplicial complexes} associated to square-free monomial ideals. Our arguments are inspired by \cite{FVTchordal}. \medskip \noindent{\bf Acknowledgement:} The authors would like to thank Adam Van Tuyl for many stimulating discussions on the topic. \section{Preliminaries} \label{s.prelims} In this section, we fix some notation for graphs and discuss a result from Alexander duality that we shall use in the rest of the paper. Throughout, $G = (V_G,E_G)$ denotes a simple graph, which is a graph without any loops or multiple edges. Often, we shall simply write $G$ and not specify its vertex and edge sets. For a vertex $x$ of $G$ we use $N(x)$ to denote the set of neighbors of $x$, which are the vertices connected to $x$ by an edge of $G$. 
An \textbf{induced subgraph} of $G$ is a subgraph $H$ of $G$ with the property that if $\{z_1, z_2\} \subset V_H$ and $z_1z_2 \in E_G$, then $z_1z_2 \in E_H$. \begin{notation} Let $G$ be a simple graph, and let $S=\{y_1,\dots,y_n\}$ be a subset of vertices of $G$. By $G \cup W(S)$ we mean the graph with whiskers $x_iy_i$, for each $1 \le i \le n$, attached to $G$. For simplicity, we shall use $\{x_1y_1, \dots, x_ny_n\}$ to denote $W(S)$ in this case. We use $G \backslash \{y_1,\dots,y_r\}$ to mean the subgraph obtained from $G$ by removing the vertices $y_1,\dots,y_r$ and all edges incident to at least one of these vertices. \end{notation} Cycles are important in several of our results. A \textbf{cycle} in a simple graph $G$ is an alternating sequence of distinct vertices and edges $C = v_1e_1v_2e_2 \dots v_{n-1}e_{n-1}v_ne_nv_1$ in which the edge $e_i$ connects the vertices $v_i$ and $v_{i+1}$ ($v_{n+1} \cong v_1$) for all $i$. In this case, we say $C$ has \textbf{length} $n$ and call $C$ an $n$-cycle. We shall also use $C_n$ to denote an $n$-cycle. We are particularly interested in chordal graphs, which includes all trees and forests and has been the object of much study in recent years. We call a graph $G$ \textbf{chordal} if for all $n \ge 4$, every $n$-cycle in $G$ has a \textbf{chord}, which is an edge connecting two non-consecutive vertices of the cycle. Notice that an induced subgraph of a chordal graph is also chordal. Thus, as a byproduct of the arguments of \cite[Theorem 3.2]{FVTchordal}, we get a theorem on chordal graphs that we use in inductive arguments in the next section: \begin{theorem} \label{t.chordalscm} Let $G$ be a chordal graph, and let $H$ be an arbitrary induced subgraph of $G$. Then $H$ has dual linear quotients. \end{theorem} The generators of the Alexander dual of an edge ideal represent covers of the associated graph. We recall some terminology relating to these covers. \begin{definition} \label{d.vc} A \textbf{vertex cover} of a graph $G$ is a subset $V \subset V_G$ of the vertices of $G$ such that every edge of $G$ is incident to at least one vertex of $V$ (in particular, isolated vertices need not appear in a vertex cover). If $V$ is a vertex cover of $G$, then it is a \textbf{minimal vertex cover} of $G$ if no proper subset of $V$ is a vertex cover of $G$. For simplicity, we write vertex covers as monomials, so $\{z_1,\dots,z_r\}$ will be written as $z_1 \cdots z_r$. A graph is said to be \textbf{unmixed} if every minimal vertex cover has the same cardinality. \end{definition} Using Alexander duality, one can describe how being Cohen-Macaulay differs from being sequentially Cohen-Macaulay in the square-free monomial ideal case. By \cite[Lemma 3.6]{FVTchordal}, if $I \subset R$ is square-free monomial ideal, then $R/I$ is Cohen-Macaulay if and only if $R/I$ is sequentially Cohen-Macaulay and $I$ is unmixed. Consequently, because the unmixedness of a graph is a combinatorial property (on the cardinality of minimal vertex covers), investigating the Cohen-Macaulayness of a graph reduces to studying the sequentially Cohen-Macaulayness of such a graph. In particular, a sequentially Cohen-Macaulay graph is Cohen-Macaulay if and only if it is unmixed. \section{Whiskers and sequentially Cohen-Macaulay graphs}\label{s.scm} In this section, we explore how to add a configuration of whiskers to an arbitrary graph to create a sequentially Cohen-Macaulay graph. Let $G$ be a graph, and let $S \subset V_G$. 
Our primary question is: What conditions on $S$ make $G \cup W(S)$ sequentially Cohen-Macaulay? Because being sequentially Cohen-Macaulay is a weaker property than being Cohen-Macaulay, one expects that $S$ needs not be all of $V_G$ to ensure that $G \cup W(S)$ is sequentially Cohen-Macaulay. The focus of this section is on sufficient conditions that guarantee the sequential Cohen-Macaulayness of $G \cup W(S)$, and our results also recover Villarreal's theorem on Cohen-Macaulayness as a consequence. Because the vertex covers of $G$ are the generators of the Alexander dual of $\mathcal{I}(G)$, we are often interested in ways to partition the set of vertex covers of $G$ of a particular cardinality. For any graph $G$ and vertex $x \in V_G$ with $N(x)=\{y_1,\dots,y_t\}$, we can decompose the set of vertex covers of $G$ of size $d$ in the following way: Any vertex cover of $G$ of size $d$ is either $x$ times a vertex cover of $G \backslash \{x\}$ of size $d-1$, or it is $y_1\cdots y_t$ times a vertex cover of $G \backslash \{x,y_1,\dots,y_t\}$ of size $d-t$. For our purposes, we frequently consider the case in which $G$ contains a whisker $xy$, where $x$ is the vertex of degree one. In this case, the vertex covers of $G$ are decomposed based on covers of $G \backslash \{x\}$ and covers of $G \backslash \{x,y\}$. In particular, the set of vertex covers of $G$ of size $d$ is the union of $x$ times the vertex covers of $G \backslash \{x\}$ of size $d-1$ and $y$ times the vertex covers of $G \backslash \{x,y\}$ of size $d-1$. The next theorem is the first step in exploring how to add whiskers to a graph to make it sequentially Cohen-Macaulay. \begin{theorem} \label{t.subgraphlq} Let $G'$ be a simple graph and let $S = \{y_1, \dots, y_n\} \subset V_{G'}$ be a subset of the vertices of $G'$. Let $\{x_1y_1, \dots, x_ny_n\}$ be whiskers of $G'$ at $S$ and let $G = G' \cup W(S)$. Suppose that if $H \subset G$ is an induced subgraph of $G$ such that both \begin{enumerate} \item[$(i)$] $\{x_1,\dots,x_{n-1}\} \subset V_H$, and \item[$(ii)$] $x_n \not \in H$ and $y_n \not \in H$ \end{enumerate} hold, then $H$ has dual linear quotients. Then all induced subgraphs $K \subset G$ such that $\{x_1,\dots,x_n\} \subset V_K$ have dual linear quotients. \end{theorem} \begin{proof} For simplicity of notation, let $\{z_1, \dots, z_r\} = V_{G'} \backslash S$. Fix an induced subgraph $K \subset G$ as in the statement of the theorem. Consider first the case in which $y_n \not\in K$. Then $x_n$ is an isolated vertex of $K$. Let $H$ be the graph obtained from $K$ by removing the isolated vertex $x_n$. Clearly, $H$ satisfies properties $(i)$ and $(ii)$ of the hypothesis. Thus, $H$ has dual linear quotients, i.e., $(\mathcal{I}(H)^{\vee}_{[d]})$ has linear quotients for all $d \in \mathbb{N}$. Since the only edge to which $x_n$ is incident in $G$ is $x_ny_n$, the minimal generating set of $\mathcal{I}(K)$ is the same as the minimal generating set of $\mathcal{I}(H)$; thus the minimal generating sets of $\mathcal{I}(K)^{\vee}$ and $\mathcal{I}(H)^{\vee}$ are the same, though the first is an ideal of $k[x_1,\dots,x_n,y_1,\dots,y_{n-1},z_1,\dots,z_r]$, and the second is an ideal of $k[x_1,\dots,x_{n-1},y_1,\dots,y_{n-1},z_1,\dots,z_r]$. Thus for all $d \in \mathbb{N}$, by \cite[Lemma 2.9]{FVT}, since $(\mathcal{I}(H)^{\vee}_{[d]})$ has linear quotients, so does $(\mathcal{I}(K)^{\vee}_{[d]})$. Consider instead the case in which $y_n \in K$. 
Let $H$ be the subgraph of $K$ obtained by removing $x_n$, $y_n$, and all edges incident to $x_n$ or $y_n$. Again, $H$ satisfies $(i)$ and $(ii)$ of the hypotheses; and thus, $H$ has dual linear quotients. Fix a degree $d \in \mathbb{N}$. Let $A_1,\dots,A_a$ be the monomials that represent all vertex covers of $K \backslash \{x_n\}$ of size $d-1$, and let $B_1,\dots,B_b$ be the monomials that represent all vertex covers of $H=K \backslash \{x_n,y_n\}$ of size $d-1$; that is, $(B_1,\dots,B_b)=(\mathcal{I}(H)^{\vee}_{[d-1]})$. We have \[ (\mathcal{I}(K)^{\vee}_{[d]}) = x_n(A_1,\dots,A_a) + y_n(B_1,\dots,B_b).\] The $A_i$s are monomials in the variables $x_1,\dots,x_{n-1},y_1,\dots,y_n,z_1,\dots,z_r$, and the $B_i$s are monomials in the variables $x_1,\dots,x_{n-1},y_1,\dots,y_{n-1},z_1,\dots,z_r$. Since $H$ has dual linear quotients, $(\mathcal{I}(H)^{\vee}_{[d]})$ has linear quotients for all $d$. We may assume that the $B_i$s are indexed in the order that gives linear quotients (that is, $(B_1,\dots,B_{i-1}):B_i$ is generated by a subset of the variables for all $i$). We wish to show that the ideal $(y_nB_1, \dots, y_nB_b, x_nA_1, \dots,x_nA_a)$ has linear quotients. Since $(B_1,\dots,B_b)$ has linear quotients (in that order), it suffices to show that for all $j$, \[ (y_nB_1,\dots, y_nB_b, x_nA_1, \dots, x_nA_{j-1}):x_nA_j \] is generated by a subset of the variables. To this end, we consider two possibilities. Suppose first that $y_n$ divides $A_j$. Then $x_nA_j=x_ny_nC$, where $C$ is a vertex cover of $H = K \backslash \{x_n,y_n\}$ of size $d-2$. Let $T$ be the set of variables in $\{x_1,\dots,x_{n-1}$, $y_1,\dots,y_{n-1}$, $z_1,\dots,z_r\}$ which are not in the support of $C$, and suppose $u \in T$. Then $uC$ is a vertex cover of $H$ of size $d-1$, so it is one of the $B_i$. Therefore for any $u \in T$, $ux_nA_j \in (y_nB_1,\dots,y_nB_b)$. Moreover, note that $(y_nB_1,\dots,y_nB_b,x_nA_1,\dots,x_nA_{j-1})$ is a square-free monomial ideal; thus, if $m$ is a minimal monomial generator of $$(y_nB_1, \dots, y_nB_b, x_nA_1, \dots, x_nA_{j-1}) : x_nA_j,$$ then $m$ is square-free. Hence $x_n$, $y_n$, and variables dividing $C$ do not divide $m$. Thus \[ (y_nB_1,\dots,y_nB_b,x_nA_1,\dots,x_nA_{j-1}):x_nA_j = (\mbox{all variables } u \in T).\] Next we assume that $y_n$ does not divide $A_j$. Note that any $A_j$ that is not divisible by $y_n$ is one of the $B_i$s because a cover of $K \backslash \{x_n\}$ not containing $y_n$ is a cover of $H$. Thus $A_j=B_{i_j}$ for some $i_j$. Consider a monomial $m$ for which $mx_nA_j \in (y_nB_1,\dots,y_nB_b)$. Then since $y_n$ does not divide $A_j$, $y_n$ must divide $m$. But $y_nx_nA_j=y_nx_nB_{i_j} \in (y_nB_1,\dots,y_nB_b)$, so $y_nx_nA_j \in (y_nB_1,\dots,y_nB_b)$. Hence $(y_nB_1,\dots,y_nB_b):x_nA_j = (y_n)$. The last remaining situation is when $y_n$ does not divide $A_j$, and $m$ is a monomial such that $mx_nA_j$ lands in the ideal $(x_nA_1, \dots, x_nA_{j-1})$. This case requires a bit more work. We need to specify an order for the $A_i$ monomials. Note that, so far we have not used any feature of the ordering of the $A_i$s. We may pick an order so that all the $A_i$s not divisible by $y_n$ are indexed first, and those that are divisible by $y_n$ are last. Suppose that $\{A_1, \dots, A_t\}$ are all the $A_i$s that are not divisible by $y_n$, and $A_l = B_{i_l}$ for $l = 1, \dots, t$. 
For each $1 \le l \le t$, $A_l = B_{i_l}$ is a vertex cover of $K \backslash \{x_n\}$ not containing $y_n$, so it is divisible by all variables in $N(y_n) \backslash \{x_n\}$. Let $D$ be the monomial given by the variables in $N(y_n) \backslash \{x_n\}$. For $1 \le l \le t$, let $C_l = A_l/D$. Then, clearly, $\{C_1,\dots, C_t\}$ are the vertex covers of $L = K \backslash \{x_n,y_n,N(y_n)\}$ of size $d-1-u$, where $u = |N(y_n)|-1$. Conversely, if $C$ is a vertex cover of $L$, then $CD$ is a vertex cover of $K \backslash \{x_n\}$ not containing $y_n$. Thus $\{C_1, \dots, C_t\}$ are all vertex covers of $L$ of size $d-1-u$. Since $x_n \in N(y_n)$ but $x_j \not \in N(y_n)$ for $j \not = n$, it is easy to see that $L$ satisfies $(i)$ and $(ii)$ of the hypotheses. Therefore, $L$ has dual linear quotients. This implies that the ideal $(C_1, \dots, C_t)$ has linear quotients. We shall reindex $\{A_1, \dots, A_t\}$ so that $C_1, \dots, C_t$ is the order of the generators in which $(C_1, \dots, C_t)$ has linear quotients. Now suppose that $m$ is a monomial so that $mx_nA_j \in (x_nA_1, \dots, x_nA_{j-1})$. Dividing by the monomial given by $N(y_n)$ (including $x_n$), we have $m C_j \in (C_1,\dots,C_{j-1})$. Since $(C_1,\dots,C_t)$ has linear quotients, $(C_1,\dots,C_{j-1}):C_j=(x_{p_1},\dots,x_{p_v})$ for some subset of the variables. Thus if $mx_nA_j \in (x_nA_1, \dots, x_nA_{j-1})$, then some variable $x_{p_w}$ divides $m$. We have shown that $(y_nB_1,\dots,y_nB_b, x_nA_1,\dots, x_nA_a)$ has linear quotients. This is true for any $d \in \mathbb{N}$. Hence, the conclusion follows. \end{proof} We are now ready to prove our first main result. \begin{theorem} \label{t.whiskerchordal} Let $G$ be a simple graph. Let $S \subset V_G$ be such that $G \backslash S$ is a chordal graph. Then $G \cup W(S)$ is a sequentially Cohen-Macaulay graph. \end{theorem} \begin{proof} Let $S = \{y_1, \dots, y_n\}$ and $W(S) = \{x_1y_1, \dots, x_ny_n\}$. It suffices to show that $G \cup W(S)$ has dual linear quotients. We shall first construct a class of subgraphs of $G$ as follows. Let $G_0=G \backslash S$. Let $G_1 = G_0 \cup \{x_1,y_1\} \cup \{ \mbox{edges of $G$ incident to vertices of $V_{G_0} \cup \{x_1, y_1\}$}\}$. More generally, for $1 \le i \le n$, let $G_i = G_{i-1} \cup \{x_i, y_i\} \cup \{ \mbox{edges of $G$ incident to vertices of $V_{G_{i-1}} \cup \{x_i, y_i\}$}\}$. Observe that $G_n = G$. Now, the conclusion will follow if we can show that every induced subgraph $K$ of $G$ (in particular, $G$ itself) containing $\{x_1, \dots, x_n\}$ has dual linear quotients. To this end, we shall use induction on $i$ to show that every induced subgraph $K$ of $G_i$ containing $\{x_1, \dots, x_i\}$ has dual linear quotients for $i = 0, \dots, n$. Indeed, for $i = 0$, the assertion follows from Theorem \ref{t.chordalscm}. Suppose $i \ge 1$. Consider an arbitrary induced subgraph $H$ of $G_i$ such that $\{x_1, \dots, x_{i-1}\} \subset V_H$, $x_i \not\in V_H$ and $y_i \not\in V_H$. Then $H$ is also an induced subgraph of $G_{i-1} = G_i \backslash \{x_i, y_i\}$. Thus, by induction, $H$ has dual linear quotients. It now follows from Theorem \ref{t.subgraphlq} that every induced subgraph $K$ of $G_i$ with $\{x_1, \dots, x_i\} \subset V_K$ has dual linear quotients. The theorem is proved. \end{proof} We can extend Theorem \ref{t.whiskerchordal} slightly. \begin{theorem} \label{t.whiskerscm} Let $G$ be a simple graph and let $S \subset V_G$. Suppose $G \backslash S$ is a chordal graph or a five-cycle $C_5$. 
Then $G \cup W(S)$ is a sequentially Cohen-Macaulay graph. \end{theorem} \begin{proof} If $G \backslash S$ is chordal, then the conclusion is Theorem \ref{t.whiskerchordal}. Suppose that $G \backslash S = C_5$. The inductive argument of Theorem \ref{t.whiskerchordal} rests on the fact that every induced subgraph of $G_0 = G \backslash S$ has dual linear quotients. In our current situation, $G_0 = G \backslash S = C_5$. Moreover, $\mathcal{I}(C_5)^{\vee}=(x_1x_2x_4, x_1x_3x_4,x_1x_3x_5,x_2x_3x_5,x_2x_4x_5)$ has dual linear quotients in the given order of the generators. Any proper subgraph of $C_5$ is a forest and thus also has dual linear quotients. Hence, arguments similar to those in Theorem \ref{t.whiskerchordal} yield the assertion. \end{proof} Theorems~\ref{t.whiskerchordal} and \ref{t.whiskerscm} give many interesting corollaries about configurations of whiskers that can be added to obtain a sequentially Cohen-Macaulay graph. Part (i) below gives a particularly easy way to ensure one constructs a sequentially Cohen-Macaulay graph. \begin{corollary} \label{c.manycor} Let $G$ be a simple graph, and let $S \subset V_G$. \begin{enumerate} \item[(i)] If $S$ is a vertex cover of $G$, then $G \cup W(S)$ is sequentially Cohen-Macaulay. \item[(ii)] If $G \backslash S$ is a forest, then $G \cup W(S)$ is sequentially Cohen-Macaulay. \item[(iii)] If $G=C$ is a cycle, and $y$ is a vertex of $C$, then $C \cup W(y)$ is sequentially Cohen-Macaulay. \end{enumerate} \end{corollary} \begin{proof} For (i), since $S$ is a vertex cover of $G$, $G \backslash S$ is a graph of isolated vertices. A graph without any edges is clearly a chordal graph. Thus the assertion is a consequence of Theorem \ref{t.whiskerchordal}. (ii) is immediate from Theorem~\ref{t.whiskerchordal} since every forest is a chordal graph. Finally, removing a vertex from a cycle produces a tree, so (iii) follows from (ii). \end{proof} Corollary~\ref{c.manycor}(iii) allows one to make a cycle into a sequentially Cohen-Macaulay graph quite easily; only one whisker is necessary. With Van Tuyl, we noticed this phenomenon after doing many computations in the computer algebra system Macaulay 2 \cite{M2}, and it was a primary initial motivation for this paper. Notice that Corollary~\ref{c.manycor} states that to obtain sequentially Cohen-Macaulay graphs, the number of whiskers is not as important as their configuration. Our next corollary complements Corollary \ref{c.manycor} to give a bound on the number of whiskers to add to a graph, regardless of how they are picked, to obtain a sequentially Cohen-Macaulay graph. \begin{corollary} \label{c.n-3} Let $G$ be a simple graph and let $S \subset V_G$. Assume that $|S| \ge |V_G| - 3$. Then $G \cup W(S)$ is a sequentially Cohen-Macaulay graph. \end{corollary} \begin{proof} Since $|S| \ge |V_G| - 3$, $G \backslash S$ is a graph on at most 3 vertices. Thus, $G \backslash S$ is either a three-cycle, a tree, or a graph with isolated vertices. These are all chordal graphs, and hence, the conclusion follows from Theorem \ref{t.whiskerchordal}. \end{proof} We shall see in Example \ref{e.allbutfour} that the bound $|V_G|-3$ in Corollary \ref{c.n-3} is sharp. Additionally, Corollary \ref{c.n-3} allows us to recover Villarreal's theorem. \begin{corollary} \label{c.whiskercm} Let $G$ be a simple graph with vertex set $V_G$. Then $G \cup W(V_G)$ is a Cohen-Macaulay graph. \end{corollary} \begin{proof} By Corollary \ref{c.n-3} we know that $G' = G \cup W(V_G)$ is sequentially Cohen-Macaulay. 
Thus it suffices to show that $G'$ is unmixed; i.e., all minimal vertex covers of $G'$ have the same cardinality. Suppose $V_G = \{y_1, \dots, y_n\}$ and $W(V_G) = \{x_1y_1, \dots, x_ny_n\}$. Let $V$ be an arbitrary minimal vertex cover of $G'$. Clearly, for each $i=1, \dots, n$, $V$ has to contain one of the vertices $\{x_i,y_i\}$ (to cover the edge $x_iy_i$). Moreover, since $V$ is minimal, for each $i=1, \dots, n$, $V$ contains exactly one of the vertices $\{x_i,y_i\}$. Hence, $|V| = n$. This is true for any minimal vertex cover $V$ of $G'$. Thus, the assertion follows. \end{proof} In the final theorem of this section, we isolate the condition from the proof of Theorem~\ref{t.whiskerchordal} that yields that result and its corollaries. \begin{theorem} \label{t.iff} Let $G$ be a simple graph with $S$ a subset of its vertex set. Then all induced subgraphs of $G \backslash S$ have dual linear quotients if and only if all induced subgraphs of $G \cup W(S)$ have dual linear quotients. \end{theorem} \begin{proof} If all induced subgraphs of $G \cup W(S)$ have dual linear quotients, then so do all induced subgraphs of $G \backslash S$ since any induced subgraph of $G \backslash S$ is an induced subgraph of $G \cup W(S)$. Assume instead that all induced subgraphs of $G \backslash S$ have dual linear quotients. Then an argument identical to the proof of Theorem~\ref{t.whiskerchordal} shows that all induced subgraphs of $G \cup W(S)$ have dual linear quotients. \end{proof} We give two examples showing that the hypotheses of Theorem~\ref{t.iff} cannot easily be weakened. \begin{example} \label{e.notdlq} Let $G$ be a simple graph with $S \subset V_G$, and assume that $G \backslash S$ has dual linear quotients. In this example, we show that if there exists a subgraph of $G \backslash S$ that does not have dual linear quotients, then $G \cup W(S)$ may fail to be sequentially Cohen-Macaulay. Let $G$ be the graph on the vertex set $V_G=\{x_1,\dots,x_6\}$ together with edge set $E_G=\{x_1x_2,x_2x_3,x_3x_4,x_1x_4,x_3x_5,x_4x_5,x_5x_6\}$. Let $S=\{x_6\}$, so $G \cup W(S)$ is the graph $G$ along with a new vertex $x_7$ and edge $x_6x_7$. Then \[ \mathcal{I}(G\backslash S)^{\vee} = (x_1x_3x_4,x_2x_3x_4,x_1x_3x_5,x_2x_4x_5),\] which has linear quotients in the order in which the generators are listed. Hence the graph $G \backslash S$ has dual linear quotients. Note, however, that not all induced subgraphs of $G \backslash S$ have dual linear quotients; the four-cycle comprised of the vertices $\{x_1,\dots,x_4\}$ is not even sequentially Cohen-Macaulay. Now we consider $G \cup W(S)$. We have \[ \mathcal{I}(G \cup W(S))^{\vee} = (x_1x_3x_4x_6, x_2x_3x_4x_6, x_1x_3x_5x_6, x_2x_4x_5x_6, x_1x_3x_5x_7, x_2x_4x_5x_7). \] The minimal graded free resolution of $\mathcal{I}(G \cup W(S))^{\vee}$ is \[ 0 \longrightarrow R(-7) \longrightarrow R(-5)^5 \oplus R(-6) \longrightarrow R(-4)^6 \longrightarrow \mathcal{I}(G \cup W(S))^{\vee} \longrightarrow 0,\] where $R=k[x_1,\dots,x_7]$. Therefore $\mathcal{I}(G \cup W(S))^{\vee}$ is not componentwise linear because of the syzygies in degrees six and seven. Hence $G \cup W(S)$ is not sequentially Cohen-Macaulay. \end{example} \begin{example} \label{e.dlq} Again we assume that $G$ is a simple graph with $S \subset V_G$ such that $G \backslash S$ has dual linear quotients. 
Now we show that even if there exists a subgraph of $G \backslash S$ that does not have dual linear quotients (and, in fact, is not sequentially Cohen-Macaulay), $G \cup W(S)$ may itself have dual linear quotients. Let $G$ be the graph with $V_G=\{x_1,\dots,x_6\}$ and edge set \[E_G=\{x_1x_2,x_2x_3,x_3x_4,x_1x_4,x_3x_5,x_4x_5,x_2x_6,x_3x_6\}.\] Let $S=\{x_6\}$. Then $\mathcal{I}(G \backslash S)^{\vee}$ is the same as in Example~\ref{e.notdlq}, and hence $G \backslash S$ has dual linear quotients. Note that the induced subgraph on the vertices $\{x_1,\dots,x_4\}$ is a four-cycle, which is not sequentially Cohen-Macaulay. We claim that $G \cup W(S)$ has dual linear quotients. The dual of the edge ideal is \[\mathcal{I}(G \cup W(S))^{\vee} = (x_1x_3x_4x_6, x_2x_3x_4x_6, x_1x_3x_5x_6, x_2x_4x_5x_6, x_2x_3x_4x_7, x_1x_2x_3x_5x_7).\] One can check that this ideal has linear quotients with respect to the order in which the generators are listed. Therefore $G \cup W(S)$ is sequentially Cohen-Macaulay. \end{example} Consequently, if $G \backslash S$ has dual linear quotients, but there exists a subgraph of $G \backslash S$ without dual linear quotients, then $G \cup W(S)$ may or may not be sequentially Cohen-Macaulay. This is the primary reason for our techniques in the proofs in Section~\ref{s.scm}; in our inductive approach, we assume that {\it all} induced subgraphs of a certain type have dual linear quotients to avoid cases like Example~\ref{e.notdlq}. We conclude this section by remarking that it is difficult to find results analogous to Theorem~\ref{t.iff} for Cohen-Macaulay graphs. One is tempted to conjecture that if $G$ is a simple graph, and $S$ is a subset of $V_G$ such that all induced subgraphs of $G\backslash S$ have dual linear quotients and are unmixed, then $G \cup W(S)$ is Cohen-Macaulay. Unfortunately, this is false. An easy counterexample is the case in which $G$ is one edge connecting two vertices $y_1$ and $y_2$. $G \backslash \{y_1\}$ trivially has dual linear quotients and is unmixed. However, $k[x,y_1,y_2]/(xy_1,y_1y_2)$ is not Cohen-Macaulay. The difficulty in searching for the appropriate analogue to Theorem~\ref{t.iff} is guaranteeing the unmixedness of $G \cup W(S)$. \section{Whiskers and non-sequentially Cohen-Macaulay graphs} \label{s.notscm} In the previous section, we gave sufficient conditions for getting sequentially Cohen-Macaulay graphs by adding whiskers. In this section, we address the converse problem. Our primary interest is necessary conditions on a graph $G$ and $S \subset V_G$ so that $G \cup W(S)$ has a chance to be sequentially Cohen-Macaulay. To show that certain graphs are not sequentially Cohen-Macaulay, we exploit Alexander duality and show that the dual of the edge ideal is not componentwise linear. This requires investigating the syzygies of the dual, and to do that, we use simplicial homology. Define a square-free vector to be a vector with its entries in $\{0,1\}$. For any monomial ideal $M$, we define the \textbf{upper Koszul simplicial complex of} {\boldmath $M$}: \[K^{{\rm \bf b}}(M)=\, \, \mbox{\{square-free vectors {{\rm \bf a}} \, such that } \frac{x^{{\rm \bf b}}}{x^{{\rm \bf a}}} \in M\}.\] See, e.g., \cite{MS}. Using the relation \[ \beta_{i,{{\rm \bf b}}}(M) = \dim_k \tilde{H}_{i-1}(K^{{\rm \bf b}}(M),k), \] which is \cite[Theorem 1.34]{MS}, we can compute the $\mathbb N^n$-graded Betti numbers of $M$. We use this technique in the following theorem. \begin{theorem} \label{t.notscm} Let $G$ be a simple graph and let $S \subset V_G$. 
If $G \backslash S$ is not sequentially Cohen-Macaulay, then $G \cup W(S)$ is not sequentially Cohen-Macaulay. \end{theorem} \begin{proof} Let $\{z_1, \dots, z_r\} = V_G \backslash S$. Because $G \backslash S$ is not sequentially Cohen-Macaulay, there exists $d$ such that $I:=(\mathcal{I}(G \backslash S)^{\vee}_{[d]})$ has a nonlinear $i$th syzygy in its minimal free resolution. Suppose this nonlinear syzygy occurs in the multi-degree ${{\rm \bf b}}$ corresponding to the square-free monomial $z_{i_1} \dots z_{i_l}$ for some $l > d+i$. Then $\dim_k \tilde{H}_{i-1}(K^{{{\rm \bf b}}}(I),k) \not = 0$. Let $\{y_1, \dots, y_n\} = S$, and let $W(S) = \{x_1y_1, \dots, x_ny_n\}$. Let $J=(\mathcal{I}(G \cup W(S))^{\vee}_{[d+n]})$. Let ${{\rm \bf c}}$ be the multi-degree of the monomial $z_{i_1} \dots z_{i_l} y_1 \dots y_n$. We claim that the simplicial complexes $K^{{{\rm \bf b}}}(I)$ and $K^{{{\rm \bf c}}}(J)$ are the same. By definition, a square-free vector ${{\rm \bf a}}$ is in $K^{{{\rm \bf c}}}(J)$ if and only if $\frac{x^{{\rm \bf c}}}{x^{{\rm \bf a}}} \in (\mathcal{I}(G \cup W(S))^\vee_{[d+n]})$. In other words, a square-free vector ${{\rm \bf a}}$ is in $K^{{\rm \bf c}}(J)$ if and only if the square-free monomial corresponding to ${{\rm \bf c}}-{{\rm \bf a}}$ gives a vertex cover of $G \cup W(S)$. Since ${{\rm \bf c}}$ has 0 in entries corresponding to $\{x_1, \dots, x_n\}$, if ${{\rm \bf c}}-{{\rm \bf a}}$ gives a vertex cover of $G \cup W(S)$, then ${{\rm \bf c}}-{{\rm \bf a}}$ must have 1 in all entries corresponding to $\{y_1, \dots, y_n\}$, and ${{\rm \bf a}}$ must have 0 in all entries corresponding to $\{x_1, \dots, x_n\}$. Therefore, the only places in which ${{\rm \bf a}}$ may be nonzero are in entries corresponding to the $z_i$s. Thus the vectors ${{\rm \bf a}}$ such that ${{\rm \bf c}}-{{\rm \bf a}}$ gives a vertex cover of $G \cup W(S)$ are exactly the same as the vectors $({{\rm \bf a}}', {\bf 0})$, where ${\bf 0} = (0, \dots, 0) \in \mathbb{Z}^{2n}$ appears in the entries corresponding to $\{x_1, \dots, x_n,y_1,\dots,y_n\}$, so that ${{\rm \bf b}}-{{\rm \bf a}}'$ gives a vertex cover of $G \backslash S$. Hence the ${{\rm \bf a}}'$ in $K^{{{\rm \bf c}}}(J)$ are exactly the ${{\rm \bf a}}$ in $K^{{{\rm \bf b}}}(I)$. This implies that the simplicial complexes $K^{{\rm \bf b}}(I)$ and $K^{{\rm \bf c}}(J)$ are the same. We now have $\dim_k \tilde{H}_{i-1}(K^{{{\rm \bf c}}}(J),k) = \dim_k \tilde{H}_{i-1}(K^{{\rm \bf b}}(I),k) \not = 0$. This gives a nonlinear $i$th syzygy of $J$ since $l> d+i$ implies $l+n>i+d+n$, and $J$ is generated in degree $d+n$. Hence, $J$ does not have a linear resolution, and thus $G \cup W(S)$ is not sequentially Cohen-Macaulay by Theorem \ref{t.scmcwl}. \end{proof} As a corollary, we can identify certain vertex sets to which adding whiskers does not yield a sequentially Cohen-Macaulay graph. \begin{corollary} \label{c.cyclenotscm} Let $G$ be a simple graph and let $S \subset V_G$. Suppose that $G \backslash S = C_n$, where $n$ is neither three nor five. Then $G \cup W(S)$ is not sequentially Cohen-Macaulay. \end{corollary} \begin{proof} Since $n$ is neither three nor five, by \cite[Proposition 4.1]{FVTchordal}, $C_n$ is not sequentially Cohen-Macaulay. The assertion is a consequence of Theorem \ref{t.notscm}. \end{proof} Corollary~\ref{c.cyclenotscm} helps show the sharpness of Corollary \ref{c.n-3}, as illustrated in the following example. 
\begin{example} \label{e.allbutfour} Let $G$ be a four-cycle with vertices $\{y_1, \dots, y_4\}$ and an edge $y_1y$ attached at $y_1$, so $\mathcal{I}(G)=(y_1y_2,y_2y_3,y_3y_4,y_1y_4,y_1y).$ Let $S=\{y\}$, and add a whisker $xy$ to $G$ to form $G \cup W(S)$. Then \[\mathcal{I}(G \cup W(S))^{\vee} = (y_1y_3y,y_2y_4y,y_1y_3x,y_1y_2y_4x) \subset R=k[y_1,\dots,y_4, y,x].\] Note that $(\mathcal{I}(G \cup W(S))_3^{\vee})$ has minimal graded free resolution \[ 0 \longrightarrow R(-4) \oplus R(-5) \longrightarrow R(-3)^3 \longrightarrow I \longrightarrow 0,\] which is not a linear resolution. Hence the Alexander dual of $\mathcal{I}(G \cup W(S))$ is not componentwise linear, and therefore $G \cup W(S)$ is not sequentially Cohen-Macaulay. \end{example} The primary case we have not considered is when a graph is sequentially Cohen-Macaulay but does not have dual linear quotients. This case will require different methods; sequential Cohen-Macaulayness can depend on the underlying field $k$, but having dual linear quotients is independent of the field. We give an example of this phenomenon. \begin{example} \label{e.chardepend} Let $\Delta$ be a minimal triangulation of the real projective plane. Then $\Delta$ is Cohen-Macaulay over a field $k$ if and only if the characteristic of $k$ is not two \cite{BH}. From this, we can construct an example of a graph $G$ that is Cohen-Macaulay if and only if the base field does not have characteristic two, using the method described in \cite{HHZ}. Let $P$ be the face poset of $\Delta$; then the order complex of $P$ has the property that all its minimal nonfaces are subsets of cardinality two, so the associated Stanley-Reisner ideal is generated by degree two monomials and hence is an edge ideal (with a large number of generators). The polynomial ring modulo this edge ideal is Cohen-Macaulay if and only if the ground field does not have characteristic two, just like the simplicial complex $\Delta$. \end{example} We know of no example of a graph $G$ with a small number of vertices that is (sequentially) Cohen-Macaulay over fields of some characteristics but not others, and it would be interesting to know of small examples if they exist.
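The linear quotient verifications that occur above (for instance, for $\mathcal{I}(C_5)^{\vee}$ in the proof of Theorem~\ref{t.whiskerscm} and for the dual ideal of Example~\ref{e.dlq}) are purely mechanical, and readers may find it convenient to automate them. The following minimal sketch is only an illustration and not part of the formal arguments (we use Python rather than Macaulay 2, and the function name is ours): it checks whether a list of square-free monomials, each encoded as a set of variable indices and taken in the listed order, has linear quotients, relying on the standard fact that the colon ideal $(u_1,\dots,u_{i-1}):(u_i)$ is generated by the monomials $u_j/\gcd(u_j,u_i)$.
\begin{verbatim}
def has_linear_quotients(gens):
    # gens: minimal generators of a square-free monomial ideal, each a frozenset
    # of variable indices, listed in the order in which quotients are taken
    for i in range(1, len(gens)):
        # (u_1,...,u_{i-1}) : (u_i) is generated by u_j / gcd(u_j, u_i);
        # for square-free monomials these are the set differences gens[j] - gens[i]
        quotients = [gens[j] - gens[i] for j in range(i)]
        variables = {next(iter(q)) for q in quotients if len(q) == 1}
        # the colon ideal is generated by variables exactly when every quotient
        # is divisible by a variable that itself appears among the quotients
        if not all(q & variables for q in quotients):
            return False
    return True

# I(C_5)^v = (x1x2x4, x1x3x4, x1x3x5, x2x3x5, x2x4x5), in the order used in
# the proof of Theorem t.whiskerscm
c5_dual = [frozenset(s) for s in ({1,2,4}, {1,3,4}, {1,3,5}, {2,3,5}, {2,4,5})]

# the Alexander dual from Example e.dlq, in the listed order
example_dlq = [frozenset(s) for s in
               ({1,3,4,6}, {2,3,4,6}, {1,3,5,6}, {2,4,5,6}, {2,3,4,7}, {1,2,3,5,7})]

print(has_linear_quotients(c5_dual))      # True
print(has_linear_quotients(example_dlq))  # True
\end{verbatim}
Both checks return {\tt True}, in agreement with the orderings used above; the sketch assumes that the listed monomials form a minimal generating set and that they are already ordered by non-decreasing degree, as required in the definition of linear quotients.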
\section{Introduction} The notion of a balanced word originates from combinatorial studies of words \cite{L}, with applications in various areas where one needs to distribute objects as ``evenly'' as possible, for example, in algorithms dealing with synchronisation of processes and optimisation \cite{AGH,PV,S}, optimal scheduling \cite{G,MV}, and the construction of musical scales \cite{R}. Balanced words over a binary alphabet are well studied. The infinite aperiodic balanced words are called Sturmian words \cite{V,F}, while the periodic ones are called Christoffel words \cite{C,BLRS}. For some applications it is more convenient to consider the latter as circular balanced words, which can be imagined as finite words written along a circle and read periodically. While there is a complete classification of balanced circular words over a binary alphabet given in terms of Christoffel words, a general classification of balanced circular words over arbitrary $N$-ary alphabets (a set we denote by $[\mathcal{B}_{N}]$) is lacking. A celebrated particular case is described by Fraenkel's conjecture \cite{Fraenkel,AGH} (proven to hold for $N\leqslant 7$), which states that there is a unique, modulo isomorphism of alphabets, balanced circular word with pairwise distinct numbers of occurrences of its letters, namely the so-called Fraenkel word. The following important partial result is also available: some of the circular balanced words over higher-$N$ alphabets can be constructed inductively starting from those over some lower-$N$ alphabet \cite{Gr,BCDJL}. Namely, consider a circular balanced word over an $N$-ary alphabet with $n$ occurrences of some letter $a$ such that $n = km$ for some integers $m$ and $k>1$. Then one can denote $a_0 := a$ and introduce $k-1$ new letters $a_1,\dots,a_{k-1}$. By substituting the $p$th occurrence of the letter $a$ in the initial circular word by $a_{p\;(\mathrm{mod}\;k)}$, one arrives at a new circular balanced word over an $(N+k-1)$-ary alphabet. For $N=3$, the words resulting from this procedure, supplemented with the Fraenkel word, give a complete classification of the set $[\mathcal{B}_{3}]$. To contrast the classification of $[\mathcal{B}_{3}]$ with the difficulties arising on the way towards describing $[\mathcal{B}_{N}]$ for arbitrary $N$, we draw the reader's attention to the work \cite{BCDJL}, where particular classes of circular balanced words for $N=4,5,6$ were found numerically. In the present work we propose a number of graphical constructions for the words in $[\mathcal{B}_{3}]$, except the Fraenkel word. First, there is an equivalent definition of the circular words in question, obtained by generalising the well-known $2$-dimensional discrete approximation representation of Christoffel words \cite{L} to $3$ dimensions, with the discrete walks required to approach particular planes from below without crossing them. We also arrange the aforementioned set of words in a graph isomorphic to the Calkin-Wilf tree. Along with the balanced property one can consider other characteristics of words based on the notion of abelian equivalence, see \cite{Pu} and references therein. Two finite words are called abelian-equivalent if they can be obtained from each other by permutations of letters. The function which counts the number of classes of abelian-equivalent factors of a word is called the abelian complexity. For a binary alphabet, balanced circular words coincide with the set of all circular words with abelian complexity $\leqslant 2$.
In this work we focus on words with abelian complexity $\leqslant 3$ over a ternary alphabet: first, we propose a classification of the circular words as above (a set we denote by $[\mathcal{M}_{3}]$) and show that they include balanced circular words as a proper subset. Meanwhile, for the ternary circular balanced words $[\mathcal{B}_{3}]$ we distinguish those with abelian complexity exactly $3$. Second, we construct an uncountably infinite set of bi-infinite aperiodic words with abelian complexity $3$, which generalises the result of \cite{RSZ}. The paper is organised as follows. In Section \ref{sec:definitions} we recall the definitions concerning words, their properties and operations on them. In Section \ref{sec:classification} we recall the classifications for $[\mathcal{B}_{2}]$ and $[\mathcal{B}_{3}]$, give a classification for $[\mathcal{M}_{3}]$ and construct (some of the) bi-infinite aperiodic words with abelian complexity $3$. In Section \ref{sec:geometry} we propose an equivalent classification of $[\mathcal{B}_{3}]$ by generalising discrete approximations to a $3$-dimensional space. There we also propose a way of organising the set $[\mathcal{B}_{3}]$ in the form of a binary tree. Technical proofs are placed in the appendix, Section \ref{sec:proofs}. \section{Balanced and abelian properties of circular words}\label{sec:definitions} \subsection{Alphabet and words} Let $A_N = \{\mathbb{0},\mathbb{1},\mathbb{2},\dots\}$ be an $N$-ary alphabet\footnote{Although decimal digits are available only for $N\leqslant 10$, this suffices here since the present paper only deals with $N=2,3$.} supplemented by an order-preserving map $\iota:\{0,\dots, N-1\}\to A_N$. A {\it word} over an alphabet $A_N$ is an element of the free monoid $\mathcal{A}^*_N$ generated by $A_N$, with the unit element $\varepsilon\in \mathcal{A}^*_{N}$ referred to as the empty word. Each word $w$ is written as $w=a_1a_2\dots a_\ell$ for letters $a_1,a_2,\dots,a_{\ell}$. The integer $\ell\geqslant 0$ is called the length of $w$ and denoted by $|w|=\ell$. By definition, $\varepsilon$ is the word of zero length. Let $\mathcal{A}^{\ell}_{N}\subset \mathcal{A}^*_{N}$ be the subset of words of length $\ell$. There is a decomposition of $\mathcal{A}^*_{N}$ by word length: \begin{equation*} \mathcal{A}^*_{N} = \bigcup_{\ell = 0}^{\infty} \mathcal{A}^{\ell}_{N}. \end{equation*} The monoid $\mathcal{A}^*_N$ admits the following automorphisms. Let $\mathfrak{S}_N$ be the symmetric group whose elements are permutations of the letters of $A_N$, i.e., for $\sigma\in\mathfrak{S}_N$, $\mathbb{0} \to \sigma(\mathbb{0})$, $\mathbb{1}\to\sigma(\mathbb{1})$, {\it etc.}, acting on $\mathcal{A}^*_N$ as homomorphisms: a word $w=a_1\dots a_\ell\in\mathcal{A}^{\ell}_N$ is mapped to $\sigma(w):=\sigma(a_1)\dots\sigma(a_{\ell})$ and $\sigma(\varepsilon) = \varepsilon$. Let $I$ denote the inversion of a word, $I(a_1\dots a_{\ell}) = a_{\ell}\dots a_1$, and let $\mathsf{Z}_{\ell}$ be the additive group acting by cyclic permutations of the letters of a length-$\ell$ word, generated by $T(a_1a_2\dots a_{\ell}) = a_2\dots a_{\ell} a_1$. We denote by $\mathsf{Z}$ the whole group of cyclic permutations of finite words of any length. The group $\mathsf{Z}$ is a subgroup of a bigger group $\mathsf{PZ}\supset \mathsf{Z}$ generated by both $T$ and $I$. The actions of $\mathsf{PZ}$ and $\mathfrak{S}_N$ mutually commute, and hence the whole group of automorphisms under consideration is $G_N = \mathfrak{S}_N\times \mathsf{PZ}$.
We define {\it circular words} as classes of words $[\mathcal{A}^*_{N}]:=\mathcal{A}^*_N\slash \mathsf{Z}$ related by cyclic permutations of letters. For any representative $w\in\mathcal{A}^*_N$ we denote the respective circular word as $[w]\in [\mathcal{A}^*_{N}]$. We will say that $[w]\in[\mathcal{A}^*_{N}]$ has length $\ell$ and write $[w]\in[\mathcal{A}^{\ell}_{N}]$ if $|w| = \ell$. Two words $w$, $w^{\prime}$ belonging to the same class $[\mathcal{A}^*_{N}]$, {\it i.e.} related by $\mathsf{Z}$-action, are said to be conjugate. Automorphisms of $[\mathcal{A}^*_{N}]$ are given by the factor-group $[G_{N}]:=G_N\slash \mathsf{Z}\cong \mathfrak{S}_N\times\mathbb{Z}_2$ (with the factor $\mathbb{Z}_2$ generated by the inversion $I$). For a set of words $X\subset \mathcal{A}^*_N$ define $[X]\subset[\mathcal{A}^*_{N}]$ to be a set of all circular words containing representatives from $X$. Let $H$ be a subgroup of $G_{N}$ (respectively, $[G_{N}]$). Then for any subset $X$ of $\mathcal{A}^*_{N}$ (respectively, of $[\mathcal{A}^*_{N}]$) notation $HX$ stands for the set of images of $X$ under the action of $H$. A word $u\in \mathcal{A}^*_{N}$ is a {\it factor} of another word $w\in \mathcal{A}^*_N$, denoted as $u\subset w$, if $w=vuv'$ for some words $v,\,v'\in \mathcal{A}^*_N$. In particular, any word is a factor of itself. For two words $w,w^{\prime}\in \mathcal{A}^*_N$ conjugate to each other there exist factors $u,v\in \mathcal{A}^*_N$ such that $w=uv$ and $w'=vu$. A factor $u\in\mathcal{A}^*_{N}$ of a circular word $[w]\in[\mathcal{A}^*_{N}]$ is understood as $u\subset w^{\prime}\in [w]$ for some representative $w^{\prime}$ and denoted as $u\subset [w]$. If for a word $w\in \mathcal{A}^*_N$ there exists a factor $v\subset w$ such that $w = \underbrace{v\dots v}_{p>1}$, then we will say that $w$ is the $p$th power of $v$ and write $w = v^p$, while otherwise a word will be said to be {\it primitive}. Note that if a word is primitive, then so are its conjugates. Moreover, if a word $w$ is a $p$th power of some primitive factor $v$, then any word conjugate to $w$ is a $p$th power of a primitive factor conjugate to $v$. Due to that, the following definitions are correct in the sense of independence of choices of representatives: {\it i)} a circular word will be said to be primitive if it contains a primitive representative, {\it ii)} a circular word $[w]$ will be said to be a $p$th power of a primitive circular word $[v]$ (denoted by $[w] = [v]^p$) if for any representative $w^{\prime}\in [w]$ there is $v^{\prime}\in [v]$, such that $w^{\prime} = v^{\prime p}$. Note that $[v]^p = [v^p]$. \subsection{Characteristics of distribution of letters} Let $|w|_a$ denote the number of distinct occurrences of a letter $a$ in a word $w\in \mathcal{A}^*_N$. To any word $w\in\mathcal{A}^*_N$ we associate its {\it Parikh vector} which we will write in a form of a formal sum of letters from $A_N$ with integer coefficients: \begin{equation*} \Psi(w) := |w|_{\mathbb{0}}\,\mathbb{0} + |w|_{\mathbb{1}}\,\mathbb{1} +\dots = \sum_{a\in A_N}|w|_{a}\,a\,. \end{equation*} The following characteristics commonly used in combinatorics on words describes distribution of letters within a word. \begin{defn}\label{def:balanced} A non-empty word $w\in \mathcal{A}^*_{N}$ is balanced if for each pair of factors $u,v\subset w $ such that $|u| = |v|$ we have \begin{equation*} \Psi(u) - \Psi(v) = \delta_{\mathbb{0}}\,\mathbb{0} + \delta_{\mathbb{1}}\,\mathbb{1} + \dots\quad\text{with all}\quad |\delta_a|\leqslant 1. 
\end{equation*} A circular word $[w]\in[\mathcal{A}^*_{N}]$ is balanced if every $w^{\prime} \in [w]$ is balanced. \end{defn} Note that because $\Psi(u)-\Psi(v)$ compares the letter contents of two words of the same length, one has $\sum_{a\in A_N}\delta_a = 0$ in the above definition. \begin{defn} For a word $w\in \mathcal{A}^{\ell}_N$ its $n$-spectrum (with $1\leqslant n \leqslant \ell$) is defined as \begin{equation*} \mathsf{spec}_n w := \left\{\Psi(u)\;\middle|\; u\subset w\;\text{with}\; |u| = n\, \right\} \,. \end{equation*} For a circular word $[w]$ its spectrum $\mathsf{spec}_n [w]$ is the union of the spectra of all representatives in the class: \begin{equation*} \mathsf{spec}_n [w] := \bigcup_{w^{\prime}\in [w]} \mathsf{spec}_n w^{\prime}. \end{equation*} Spectra of cardinality $1$ will be called trivial. \end{defn} With the definition of spectra at hand, the uniformity of the distribution of letters within a word can be described by the notion of abelian complexity \cite{Pu}. \begin{defn}\label{def:Myhill} The function $\ab{n}(w)=\# \mathsf{spec}_{n}w$ (respectively, $\ab{n}[w]=\# \mathsf{spec}_{n}[w]$) with $1\leqslant n\leqslant |w|$ is called the abelian complexity. If $p$ is the minimal integer such that for all $1\leqslant n\leqslant |w|$ we have $\ab{n}(w)\leqslant p$ (respectively, $\ab{n}[w]\leqslant p$), then we will say that the word $w$ (respectively, $[w]$) is abelian-$p$-bounded. \end{defn} Note that $\rho^{\text{ab}}_{1}(w) \leqslant N$ is nothing but the number of distinct letters of $A_N$ occurring in $w$. We will focus on words that contain all of the $N$ letters by introducing the subset $\mathcal{A}_N\subset \mathcal{A}^*_N$ such that for any $w\in \mathcal{A}_N$ we have $\rho^{\text{ab}}_{1}(w) = N$. We denote the set of respective circular words by $[\mathcal{A}_{N}]$. We introduce the notation $\mathcal{B}_N\subset \mathcal{A}_N$ for balanced words and $[\mathcal{B}_{N}]\subset [\mathcal{A}_{N}]$ for circular balanced words. The set $\mathcal{B}_N$ is closed under the action of $[G_{N}]$, but not under the whole of $G_N$. As for the set $[\mathcal{B}_{N}]$, it is preserved by the action of $[G_{N}]$. \vskip 0.2cm \begin{lemma}\label{thm:BsubM} Let $K_{N} = \max_{k}\begin{pmatrix}N\\k \end{pmatrix}$. For a balanced word $w\in\mathcal{B}_N$ (respectively, balanced circular word $[w]\in[\mathcal{B}_{N}]$) of length $\ell$ we have $\ab{n}(w)\leqslant K_{N}$ (respectively, $\ab{n}[w]\leqslant K_{N}$) for all $1\leqslant n\leqslant \ell$. \end{lemma} \begin{proof} Note that for $w\in\mathcal{B}_N$ (the case of a circular word $[w]\in[\mathcal{B}_{N}]$ is treated along the same lines), for any two of its factors $u,v\subset w$ with $|u| = |v|$ we have \begin{equation}\label{eq:balanced_difference} \Psi(u) - \Psi(v) = a_1 + \dots + a_r - b_1 -\dots - b_r\,, \end{equation} where all letters on the {\it rhs} are pairwise different. Let us focus on the factors of a fixed length $n$ and choose some factor $u$ (with $|u| = n$). If for some factor $v$ one has $\Psi(u) - \Psi(v) = \varepsilon_a\,a + \dots$ with $\varepsilon_a = \pm 1$, then for any other factor $v^{\prime}$ one finds $\Psi(u) - \Psi(v^{\prime}) = \varepsilon_a k\,a + \dots$ with $k \in \{0,1\}$. Indeed, if $k = -1$ then $\Psi(v^{\prime}) - \Psi(v) = 2\varepsilon_a\,a + \dots$, in contradiction with the assumed balanced property.
As a result, all letters of the alphabet are divided into three classes: one consists of the letters that never appear on the {\it rhs} of \eqref{eq:balanced_difference}, while the remaining letters are divided into two classes according to the value of $\varepsilon_a$. Let the cardinalities of the latter two classes be $N_{-}$, $N_{+}$ (without loss of generality we assume $N_{-} \leqslant N_{+}$). Because $\Psi(u)\in\mathsf{spec}_n w$ and any non-trivial {\it rhs} of \eqref{eq:balanced_difference} implies a contribution to the $n$-spectrum different from $\Psi(u)$, one has the following estimate: \begin{equation*} \#\mathsf{spec}_n w \leqslant \sum_{k=0}^{N_{+}}\begin{pmatrix} N_{+}\\ k \end{pmatrix} \begin{pmatrix} N_{-}\\ k \end{pmatrix} = \begin{pmatrix} N_{+} + N_{-}\\ N_{+} \end{pmatrix}\,. \end{equation*} The above equality is Vandermonde's identity. Recalling that $N_{+} + N_{-} \leqslant N$, one concludes that the right-hand side is at most $K_N$. \end{proof} \paragraph{Remark.} It is an interesting open question whether the bound in the above lemma can be improved. \vskip 0.2cm For the ternary alphabet considered in this work, $K_{3} = 3$. We denote the set of abelian-$3$-bounded circular words by $[\mathcal{M}_{3}]$. \vskip 0.2cm According to the following lemma, balanced circular words are fully classified by their primitive factors (the proof is straightforward). \begin{lemma}\label{lem:building_blocks} For a primitive circular word $[w]\in [\mathcal{A}_{N}]$ and any integer $p \geqslant 2$, the following assertions are equivalent: \begin{itemize} \item[1)] $[w]\in [\mathcal{A}_{N}]$ is balanced, \item[2)] the $p$th power $[w^p]\in [\mathcal{A}_{N}]$ is balanced. \end{itemize} \end{lemma} We denote the subsets of primitive words by $[\mathfrak{b}_{N}]\subset[\mathcal{B}_{N}]$ and $[\mathfrak{m}_{N}]\subset[\mathcal{M}_{N}]$. \vskip 0.2cm To conclude this general part, we record two lemmas containing useful facts about abelian complexity. Firstly, the values of the abelian complexity of a circular word at complementary lengths coincide. \begin{lemma}\label{lem:spec_spec} For any $[w]\in [\mathcal{A}^{\ell}_{N}]$ we have \begin{equation*} \ab{n}[w] = \ab{\ell-n}[w]\quad \text{for all}\quad 1\leqslant n< \ell\,. \end{equation*} \end{lemma} Another simplification concerning the cardinalities of spectra of circular words is related to primitivity (the proof follows from \cite{CH}). \begin{lemma}\label{lem:reducibility} For any $[w]\in [\mathcal{A}^{\ell}_{N}]$ the following two assertions are equivalent: \begin{itemize} \item[1)] there exists $1\leqslant n < \ell$ such that $\ab{n}[w]=1$, \item[2)] $[w] = [u^p]$ for some $p>1$. \end{itemize} \end{lemma} \section{Classification of balanced\\ and abelian-$3$-bounded words over $A_{3}$}\label{sec:classification} \subsection{Balanced circular words over $A_2$}\label{sec:Christoffel_def} Before turning to the classification of the sets $[\mathcal{B}_{3}]$ and $[\mathcal{M}_{3}]$, we recall some known facts about balanced words and the analogous classification for $A_2$. The two-letter alphabet $A_2$ serves as a starting point: there a complete classification of balanced circular words is available and is given by Christoffel words. Among a number of equivalent definitions of the latter, we choose the following one, which gives a graphical representation as a discrete approximation of a line with a rational slope.
\begin{defn}\label{def:Christoffel} Let $q = (M-k)\slash k$ for some positive integers $k$ and $M>k$ such that $k$ and $M-k$ are coprime. Consider a line in $\mathbb{R}^2$ parametrised as $y=q\,x$ and a path which starts at the origin $(0,0)$ and is constructed by performing consecutive unit steps $\xi=(1,0)$ and $\eta = (0,1)$ such that: i) the step $\eta$ is performed always when it does not lead to going strictly above the line, ii) the whole path intersects the line twice. Then reading the steps consecutively as $\xi\to\mathbb{0}$ and $\eta\to\mathbb{1}$ leads to a word $C(k,M-k)$ over $A_2$ which is referred to as Christoffel word of a slope $q$. \end{defn} \noindent Note that, according to the above definition, $|C(k,M-k)| = M$, $|C(k,M-k)|_{\mathbb{0}} = k$, $|C(k,M-k)|_{\mathbb{1}} = M-k$. We denote the set of Christoffel words (respectively, circular Christoffel words) by $\mathfrak{c}$ (respectively, $[\mathfrak{c}]$) and the set of all their powers by $\mathcal{C}$ (respectively, $[\mathcal{C}]$). \begin{figure}[H] \centering \includegraphics[width=0.24\textwidth]{Ch15.pdf} \hspace{0.1\textwidth} \includegraphics[width=0.24\textwidth]{Ch64.pdf} \caption{Graphical representation for Christoffel word $C(2,3) = \mathbb{0}\mathbb{1}\mathbb{0}\mathbb{1}\letb$ with the slope $q=3\slash 2$ and its square $C(2,3)^2 = \mathbb{0}\mathbb{1}\mathbb{0}\mathbb{1}\letb\mathbb{0}\mathbb{1}\mathbb{0}\mathbb{1}\letb$.} \label{fig:my_label} \end{figure} The form of Christoffel words given by Definition \ref{def:Christoffel} is not preserved by $G_2=\mathfrak{S}_2\times \mathsf{PZ}$. Note that the so-defined Christoffel words are called the lower Christoffel words. Analogously, the upper Christoffel words parameterize the path that lies above the line segment and are given as inversion of lower Christoffel words. The set of lower Christoffel words is not invariant under the action of $\mathfrak{S}_2$. To show this, consider a permutation $\sigma_{(\mathbb{0}\mathbb{1})}\in\mathfrak{S}_2$ such that $\mathbb{0}\to\mathbb{1}, \mathbb{1}\to\mathbb{0}$. Then for a Christoffel word $C(2,3)= \mathbb{0}\mathbb{1}\mathbb{0}\mathbb{1}\letb$ we have $\sigma_{(\mathbb{0}\mathbb{1})}\big( \mathbb{0}\mathbb{1}\mathbb{0}\mathbb{1}\letb\big)=\mathbb{1}\mathbb{0}\mathbb{1}\mathbb{0}\leta$, which does not meet the Definition \ref{def:Christoffel}. A remarkable feature of circular words $[\mathcal{C}]$ containing powers of lower Christoffel words as a representatives is that their set is closed under the action of $[G_{2}]=\mathfrak{S}_2\times\mathbb{Z}_2$. For the above example we get $[\sigma_{(\mathbb{0}\mathbb{1})}\big( \mathbb{0}\mathbb{1}\mathbb{0}\mathbb{1}\letb\big)] = [\mathbb{0}\leta\mathbb{1}\mathbb{0}\mathbb{1}]$ with a Christoffel representative $C(3,2)$. In particular, the lower and upper Christoffel words are representatives of the same circular word. In the present paper we focus on circular words, and therefore working only with lower Christoffel words is sufficient for our purpose. The following lemma shows that balanced circular words are exhausted by Christoffel representatives, see \cite{BLRS,CH}. \begin{lemma}\label{lem:Christoffel_balanced_Myhill} For $[w]\in [\mathcal{A}_{2}]$ the following conditions are equivalent: \begin{itemize} \item[1)] there exists a Christoffel word $w^{\prime}$ such that $[w] = [w^{\prime p}]$ (for $p\geqslant 1$), \item[2)] $[w]$ is balanced, \item[3)] the abelian complexity $ \ab{n}[w]\leqslant 2$ for all $ 1\leqslant n\leqslant |w|$. 
\end{itemize} \end{lemma} \vskip 0.2cm We note the following well-known property of Christoffel words, which we will use hereafter, see \cite{BLRS}. Let $w\in\mathfrak{c}$ be a Christoffel word, then $w=\mathbb{0} Q\mathbb{1}$, where $Q$ is a {\it palindrome}\footnote{Palindromes are inversion-invariant words, {\it i.e.} those satisfying $I(Q) = Q$.}. \begin{lemma}\label{lem:palindrome} A Christoffel word $w$ is the unique representative in the class $[w]$ which has the form $w=\mathbb{0} Q\mathbb{1}$ where $Q$ is a palindrome. \end{lemma} \subsection{Classification of balanced circular words over $A_3$}\label{sec:balanced_for_A3} Let $\mathcal{C}^{\prime}\subset \mathcal{C}$ be the subset of powers of Christoffel words with an even total number of $\mathbb{0}$, and let the subset $\mathfrak{c}^{\prime}\subset\mathcal{C}^{\prime}$ be constituted by words of the form $C(k,M-k)^{p}$ with $p = 1$ for $k$ even and $p=2$ for $k$ odd. Along the lines of \cite{Gr} consider a map $\phi:\mathcal{C}^{\prime}\to \mathcal{A}_{3}$ substituting each $\mathbb{1}$ by $\mathbb{2}$ and each {\it even} $\mathbb{0}$ by $\mathbb{1}$. For example, $\phi(\mathbb{0}\leta\mathbb{1}) = \mathbb{0}\mathbb{1}\mathbb{2}$ and $\phi(\mathbb{0}\mathbb{1}\mathbb{0}\mathbb{1}) = \mathbb{0}\mathbb{2}\mathbb{1}\mathbb{2}$. Note that since any word in $\mathcal{C}^{\prime}$ necessarily contains at least one symbol $\mathbb{1}$ and at least two symbols $\mathbb{0}$, any $\phi$-image is indeed in $\mathcal{A}_{3}$. The map $\phi$ respects powers of elements from $\mathcal{C}^{\prime}$: for any $c\in\mathcal{C}^{\prime}$ we have $\phi(c^p) = \phi(c)^p$. \paragraph{Remark.} Alternatively, one could consider the set $\mathcal{C}^{\prime\prime}\subset \mathcal{C}$ of Christoffel words with an even total number of letters $\mathbb{1}$ and consider a map $\phi^\prime:\mathcal{C}^{\prime\prime}\to \mathcal{A}_{3}$ substituting each even $\mathbb{1}$ by $\mathbb{2}$. Let us show that modulo $[G_{2}]$-action this choice is equivalent to considering the initially proposed set $\mathcal{C}^{\prime}$ and map $\phi$. As was noted in Section \ref{sec:Christoffel_def} the set of circular words $[\mathcal{C}]$ is closed under the action of $[G_{2}]$, in particular we have $\sigma_{(\mathbb{0}\mathbb{1})}\big[\mathcal{C}^{\prime\prime}\big]=\big[\mathcal{C}^{\prime}\big]$, $\sigma_{(\mathbb{0}\mathbb{1})}\in\mathfrak{S}_2$. Let $\sigma_{(\mathbb{0}\mathbb{2}\mathbb{1})}\in\mathfrak{S}_3$ be the permutation $\mathbb{0}\to\mathbb{2},\,\mathbb{1}\to\mathbb{0},\,\mathbb{2}\to\mathbb{1}$. It is straightforward to verify that $\sigma_{(\mathbb{0}\mathbb{2}\mathbb{1})}\big[\phi(\mathcal{C}^{\prime\prime})\big]=\big[\phi(\mathcal{C}^{\prime})\big]$, therefore $\mathfrak{S}_3\big[\phi(\mathcal{C}^{\prime\prime})\big]=\mathfrak{S}_3\big[\phi(\mathcal{C}^{\prime})\big]$. \vskip 0.2cm As a part of the classification of circular balanced words over $A_N$, the celebrated Fraenkel conjecture says that for $N\geqslant 3$ there is a unique, up to $\mathfrak{S}_{N}$-action, primitive circular balanced word $\mathbf{F}_N$ (Fraenkel word) with pairwise distinct amounts of letters. It is constructed inductively as $\mathbf{F}_N = \mathbf{F}_{N-1}\iota(N)\mathbf{F}_{N-1}$ starting from $\mathbf{F}_1 = [\mathbb{0}]$ and has $\Psi(\mathbf{F}_N) = \sum_{j=0}^{N-1} 2^{j}\iota(j)$. For $N=3,\dots ,7$ Fraenkel's conjecture is proven to hold, see \cite{AGH,Si,T}. Note that Fraenkel words are inversion-invariant.
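For instance, the recursion gives $\mathbf{F}_2 = [\mathbb{0}\mathbb{1}\mathbb{0}]$ and $\mathbf{F}_3 = [\mathbb{0}\mathbb{1}\mathbb{0}\mathbb{2}\mathbb{0}\mathbb{1}\mathbb{0}]$, so for the ternary alphabet the letters $\mathbb{0}$, $\mathbb{1}$, $\mathbb{2}$ enter the Fraenkel word with the pairwise distinct multiplicities $4$, $2$ and $1$.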
Denote the set of $\mathfrak{S}_{3}$-images of the circular Fraenkel word over $A_3$ as $[\mathfrak{f}_{3}]$ and let $[\mathcal{F}_{3}]$ stand for the set of all their powers. The following theorem completely describes the set $[\mathcal{B}_{3}]$ (for the proof of the first part of the assertion see \cite{Gr,BCDJL}). \begin{theorem}\label{thm:main_1} \begin{itemize} \item[1)] Balanced circular words over $A_3$ are given as \begin{equation*} [\mathcal{B}_{3}] = \mathfrak{S}_3\big[\phi(\mathcal{C}^{\prime})\big]\sqcup [\mathcal{F}_{3}]. \end{equation*} \item[2)] Primitive balanced circular words over $A_{3}$ are given as \begin{equation*} [\mathfrak{b}_{3}] =\mathfrak{S}_3[\phi(\mathfrak{c}^{\prime})] \sqcup[\mathfrak{f}_{3}] . \end{equation*} \noindent The upper bound $K_3$ is achieved by all non-trivial spectra of $[\mathfrak{f}_{3}]$ and $\mathfrak{S}_3\big[\phi(C(k,M-k))\big]$ with $k$ even. \end{itemize} \end{theorem} \subsection{Classification of circular abelian-$3$-bounded words over $A_3$} In order to classify circular words with abelian complexity $\leqslant 3$ we define the {\it twisted words} constructed from powers of Christoffel words as follows. Recall that any Christoffel word is of the form $\mathbb{0} Q\mathbb{1}$. For powers of $\mathbb{0} Q \mathbb{1}$ we define twisted words by interchanging $\mathbb{1}\mathbb{0}\to\mathbb{0}\mathbb{1}$ at (some of) the borders of primitive factors. For example, from $C(2,1)^3 = \mathbb{0}\leta\mathbb{1}\,\mathbb{0}\leta\mathbb{1}\,\mathbb{0}\leta\mathbb{1}$ one can construct three twisted words: $\mathbb{0}\leta\mathbb{0}\,\mathbb{1}\mathbb{0}\mathbb{1}\,\mathbb{0}\leta\mathbb{1}$, $\mathbb{0}\leta\mathbb{1}\,\mathbb{0}\leta\mathbb{0}\,\mathbb{1}\mathbb{0}\mathbb{1}$ and $\mathbb{0}\leta\mathbb{0}\,\mathbb{1}\mathbb{0}\leta\,\mathbb{1}\mathbb{0}\mathbb{1}$. We denote by $\mathcal{C}^{\text{tw}}$ the set of twisted words constructed from the set $\mathcal{C}^{\prime}$ of powers of Christoffel words with an even number of zeros. For the circular word $[\mathbb{0}\mathbb{1}\mathbb{2}\mathbb{1}\mathbb{0}]$, denote the set of its $\mathfrak{S}_3$-images by $[\mathfrak{d}_{3}]$ and the set of all their powers by $[\mathcal{D}_{3}]$. \begin{theorem}\label{thm:main_2} For $A_{3}$, the set of circular abelian-$3$-bounded words is \begin{equation*} [\mathcal{M}_{3}] = [\mathcal{B}_{3}]\sqcup \mathfrak{S}_3\big[\phi(\mathcal{C}^{\text{{\normalfont tw}}})\big]\sqcup[\mathcal{D}_{3}]. \end{equation*} \end{theorem} \subsection{Bi-infinite aperiodic abelian-$3$-bounded words over $A_3$} The construction of the words $\phi(\mathcal{C}^{\text{{\normalfont tw}}})$ can be applied to obtain some infinite and bi-infinite aperiodic ternary words with abelian complexity $3$. The following proposition generalises Theorem $4.3$ in \cite{RSZ}. Recall that a bi-infinite word $W = \dots a_{-1}a_0 a_{1}\dots$ is referred to as periodic if there is a positive integer $p$ such that $a_{i+p} = a_{i}$ for all $i\in \mathbb{Z}$. As a weaker property, $W$ is called ultimately periodic if there are a positive integer $p$ and $J\in\mathbb{Z}$ such that $a_{i+p} = a_{i}$ for all $i\geqslant J$. If $W$ is not ultimately periodic, it is called aperiodic. \begin{proposition}\label{thm:infinite} Let $\omega\in \{\mathbb{0},\mathbb{1}\}^{\mathbb{Z}}$ be an aperiodic bi-infinite word, and let $Q$ be a palindrome obtained as $\phi(C(m,n)) = \mathbb{0} Q \mathbb{2}$, $m$ even.
Then the image of $\omega$ under the morphism $\mathbb{0} \to Q\mathbb{2}\mathbb{0}$, $\mathbb{1}\to Q\mathbb{0}\mathbb{2}$ is abelian-$3$-bounded with abelian complexity $3$. \end{proposition} The proof of the above proposition follows from Step $4$ in the proof of Theorem \ref{thm:main_2}. There is an immediate corollary for infinite words. \begin{corollary} Let $W = \dots a_{-1} a_0 a_{+1}\dots $ be a bi-infinite ternary word obtained via the morphism described in Proposition \ref{thm:infinite}. Then, fixing any $i\in\mathbb{Z}$, the infinite word $W^{\prime} = a_{i}a_{i + 1}\dots$ is abelian-$3$-bounded with abelian complexity $3$. \end{corollary} \section{Geometrical constructions}\label{sec:geometry} \subsection{Balanced words as $3$-dimensional\\ discrete approximations} In this section we propose a geometrical way to construct all balanced words, except the Fraenkel word, by generalising the discrete-approximation representation of Christoffel words from Definition \ref{def:Christoffel} to $3$-dimensional space. First, we define the notion of a $3$-dimensional discrete approximation for a pair of rational slopes $(q_1,q_2)$ as follows. Let points of a $3$-dimensional space $\mathbb{R}^{3}$ be parametrised as $(x,y,z)$. Starting from $(0,0,0)$ one constructs a path under the plane $z = q_1 x + q_2 y$ by performing unit steps $\xi = (1,0,0)$, $\eta = (0,1,0)$, $\zeta = (0,0,1)$. The step $\zeta$ is always performed when it does not lead to going strictly above the plane. Otherwise, one performs one of the steps $\xi,\eta$, such that they alternate along the path and the step $\xi$ is made first. The procedure is terminated at a point $(x^*,y^*,z^*)$ where $x^* = y^*$ and $z^* = q_1\,x^* + q_2\,y^*$. Such a point always exists because the numbers $q_1,q_2$ are rational. Next, for any word $w\in\mathcal{A}_{3}$ one can consider a line $\gamma(w)\subset\mathbb{R}^3$, which starts at the origin $(0,0,0)$ and proceeds by unit steps $\xi$, $\eta$, $\zeta$ for the letters $\mathbb{0}$, $\mathbb{1}$ and $\mathbb{2}$ respectively. The line $\gamma(w)$ will be referred to as the graphical representation of $w$. \begin{figure}[H] \centering \includegraphics[width=0.4\textwidth]{discrete.pdf} \caption{Graphical representation for $\phi(C(4,3)) = \mathbb{0}\mathbb{1}\mathbb{2}\mathbb{0}\mathbb{2}\mathbb{1}\mathbb{2}$: blue (respectively, green and red) lines correspond to steps along $x$ (respectively, $y$ and $z$) read as $\mathbb{0}$ (respectively, $\mathbb{1}$ and $\mathbb{2}$). The line $\gamma(\phi(C(4,3)))$ is a $3$-dimensional discrete approximation of a pair of slopes $(q,q)$ with $q=3\slash 4$.} \label{fig:representation_3d} \end{figure} The following theorem gives a graphical way of constructing the set $\phi(\mathcal{C}^{\prime})$ which, by Theorem \ref{thm:main_1}, parametrises the set of balanced circular words $[\mathcal{B}_{3}]\backslash[\mathcal{F}_{3}]$. \begin{theorem} For a ternary word $w\in\mathcal{A}_{3}$ the following two assertions are equivalent: \begin{itemize} \item[1)] there is a Christoffel word $C(m,n)$ such that $w = \phi(C(m,n)^{p})$, with $pm$ even, \item[2)] the graphical representation $\gamma(w)$ is a discrete approximation of the pair of rational slopes $(\tfrac{n}{m},\tfrac{n}{m})$, read off by mapping the step vectors as $\xi\to\mathbb{0}$, $\eta\to\mathbb{1}$, $\zeta\to\mathbb{2}$ along the path.
\end{itemize} \end{theorem} \begin{proof} First, we demonstrate that a $2$-dimensional discrete approximation with slope $q$ and a $3$-dimensional discrete approximation for the pair $(q,q)$ can be constructed from one another. For a $3$-dimensional discrete approximation of a pair of rational slopes $(q,q)$ consider its orthogonal projection to the plane $x = y$. Since the $z$-coordinate of any point is preserved under the projection, crossing the plane $z = q(x+y)$ by performing the step $\zeta$ in $\mathbb{R}^3$ is equivalent to crossing the projection of the plane. Moreover, each step $\xi$, $\eta$ increases the value of $z$ on the plane $z = q(x+y)$ by $q$, and the same holds for the projection. Finally, both $\xi$, $\eta$ are projected to the same horizontal step vector. If the length of the latter is scaled to be $1$, then a $3$-dimensional discrete approximation of the pair $(q,q)$ is projected to a $2$-dimensional discrete approximation of slope $q$ with an even number of horizontal steps. Reversing all the steps, any $2$-dimensional discrete approximation of a rational slope $q$ with an even number of horizontal steps can be turned into a $3$-dimensional discrete approximation of the pair $(q,q)$. Consider $w = \phi(C(m,n)^p)$ with $p = 1$ (respectively, $2$) for $m$ even (respectively, odd). Let us verify that $q = n\slash m$ is the slope of the sought discrete approximation. Indeed, the orthogonal projection of the graphical representation of $w$ to the plane $x = y$ leads to the graphical representation for $C(m,n)^p$ (because of the structure of the map $\phi$). Because the latter is a $2$-dimensional discrete approximation, we arrive at the conclusion that the graphical representation of any $w\in\phi(\mathcal{C}^{\prime})$ is a $3$-dimensional discrete approximation. The other way around, any $3$-dimensional discrete approximation of a pair of rational slopes $(q,q)$ is projected to a $2$-dimensional discrete approximation of slope $q$, which is equivalent to a word from $\mathcal{C}^{\prime}$. \end{proof} \subsection{Graph for balanced circular words} The relation of balanced circular words over $A_3$ to Christoffel words over $A_2$, according to Theorems \ref{thm:main_1} and \ref{thm:main_2}, allows us to arrange them in a binary tree as follows. Consider a graph of pairs of coprime numbers (Calkin–Wilf tree), then substitute each pair $(m,n)$ at each vertex by a triple $(m,m,n)$. \begin{figure}[H] \centering \includegraphics[width=0.9\textwidth]{graphnumb.pdf} \caption{A graph of pairs of coprime numbers.} \label{fig:graph_primes} \end{figure} Each coprime triple $(m,m,n)$ implies a word $[w]\in[\mathcal{B}_{3}]$ such that $|w|_{\mathbb{0}} = |w|_{\mathbb{1}} = m$ and $|w|_{\mathbb{2}} = n$. Recall that any word $[w]\in[\mathcal{B}_{3}]\backslash [\mathcal{F}_{3}]$ is constructed as a $\phi$-image either of $C(2m,n)$ (for odd $n$) or $C(m,n^{\prime})C(m,n^{\prime})$ (in this case assign $n = 2n^{\prime}$), which allows us to arrange the set $[\mathcal{B}_{3}]\backslash [\mathcal{F}_{3}]$ in the graph presented in fig.~\ref{fig:graph_numb} (with representatives fixed up to cyclic permutations and $\mathfrak{S}_3$-action). To our knowledge, pairs of words joined by edges are not related by a morphism. \begin{figure}[H] \centering \includegraphics[width=0.9\textwidth]{graphchrword.pdf} \caption{The graph of balanced circular words from $[\mathcal{B}_{3}]\backslash[\mathcal{F}_{3}]$ obtained from the graph of coprime pairs in fig.~\ref{fig:graph_primes}.} \label{fig:graph_numb} \end{figure} Note that $(2m,n)$ (for $n$ odd) and $(m,n^{\prime})$ (for $n=2n^{\prime}$) are indeed pairs of coprime numbers provided that $(m,n)$ is a coprime pair.
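\vskip 0.2cm
To make the correspondence between coprime pairs and the words entering the graph easy to reproduce, the following short \textsc{Python} sketch (included for illustration only; it is not part of the results of this paper) generates the lower Christoffel word $C(k,n)$ by the greedy path construction of Definition~\ref{def:Christoffel} and applies the map $\phi$ of Section~\ref{sec:balanced_for_A3}; it recovers, in particular, the examples $\phi(\mathbb{0}\mathbb{0}\mathbb{1})=\mathbb{0}\mathbb{1}\mathbb{2}$, $\phi(\mathbb{0}\mathbb{1}\mathbb{0}\mathbb{1})=\mathbb{0}\mathbb{2}\mathbb{1}\mathbb{2}$ and the word $\phi(C(4,3))$ of fig.~\ref{fig:representation_3d}.
\begin{verbatim}
# Illustrative sketch (not part of the paper): lower Christoffel words C(k,n)
# via the greedy path construction, and the map phi defined in the text.
from math import gcd

def christoffel(k, n):
    """Lower Christoffel word C(k, n): k letters '0' and n letters '1'."""
    assert gcd(k, n) == 1
    x, y, word = 0, 0, []
    while (x, y) != (k, n):
        if k * (y + 1) <= n * x:      # the step eta stays on or below y = (n/k) x
            y += 1; word.append('1')
        else:                         # otherwise perform the step xi
            x += 1; word.append('0')
    return ''.join(word)

def phi(w):
    """phi: every '1' becomes '2', every second occurrence of '0' becomes '1'."""
    out, zeros = [], 0
    for a in w:
        if a == '1':
            out.append('2')
        else:
            zeros += 1
            out.append('1' if zeros % 2 == 0 else '0')
    return ''.join(out)

assert christoffel(2, 3) == '01011'                    # C(2,3) from the definition
assert phi('001') == '012' and phi('0101') == '0212'   # examples of phi in the text
assert phi(christoffel(4, 3)) == '0120212'             # the word of the 3d figure
\end{verbatim}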
All words from $[\mathcal{B}_{3}]\backslash [\mathcal{F}_{3}]$ (modulo interchanging letters by $\mathfrak{S}_3$) indeed enter the graph because to any Christoffel word $C(m,n)$ there corresponds a pair of coprime numbers $(\frac{m}{2},n)$ (for $m$ even) or $(m,2n)$ (for $m$ odd) belonging to the graph in fig.~\ref{fig:graph_primes}. In order to make the inversion symmetry of the words from $[\mathcal{B}_{3}]$ manifest, as well as to ``factor out'' the action of $\mathfrak{S}_3$, we consider the following illustration. A circular word of length $\ell$ is represented by a graph with $\ell$ vertices placed on an oriented circle. Letters are mapped to vertices one by one, so that each successive letter is mapped to the next vertex. Edges join vertices in such a way that any maximal subset of vertices corresponding to the same letter forms the vertex set of a polygon. \begin{figure}[H] \centering \includegraphics[width=0.8\textwidth]{graph2.pdf} \caption{Illustration of the words from $[\mathcal{B}_{3}]\backslash[\mathcal{F}_{3}]$, which trivialises the $\mathfrak{S}_{3}$-action and makes the inversion symmetry manifest. The colouring of vertices and edges visually separates the alternating letters $\mathbb{0}$, $\mathbb{1}$ from $\mathbb{2}$ in $\phi(\mathcal{C}^{\prime})$.} \label{fig:graph} \end{figure} \section*{Acknowledgements} We are grateful to Anna Frid for enlightening discussions and instructive comments during the preparation of the manuscript. The work of Y.G. is supported by a joint grant ``50/50'' UMONS -- Universit\'e Fran\c{c}ois Rabelais de Tours.
\section{Appendix} Table~\ref{tab:signpi} shows runtimes for each of the three Circa\xspace optimizations compared to the ReLU baseline runtime. \input{tables/PI_runtime} \section{Background}\label{sec:background} \subsection{Private Inference} We consider a client-server model where the client sends their input to the server for inference using the server's model. The client and the server wish to keep both the input and model private during the inference computation. Our threat model is the same as in prior work on private inference~\cite{liu2017oblivious, juvekar2018gazelle, mishra2020delphi}. More specifically, we operate in a two-party setting (i.e., client and server) where participants are honest-but-curious---they follow the protocol truthfully but may try to infer information about the other party's input/model during execution. We take the Delphi protocol~\cite{mishra2020delphi} as a baseline and implement our optimizations over it. Delphi uses HE and SS for linear layers, where computationally expensive HE operations are performed in an offline phase and online computations only require lightweight SS operations. For non-linear activations, Delphi uses ReLUs, evaluated using GCs, and polynomial activations ($x^2$), evaluated using Beaver multiplication triples~\cite{beaver1995precomputing}. While Circa\xspace does not use polynomial activations, it uses Beaver triples in its stochastic ReLU implementation. Next, we introduce the necessary cryptographic primitives and provide an overview of the Delphi protocol. \subsection{Cryptographic Primitives}\label{sec:crypto} \textbf{Finite Fields.} The cryptographic primitives described subsequently operate on values in a finite field of integers modulo a prime $p$, $\mathbb{F}_{p}$, i.e., the set $\{0,1,\ldots,p-1\}$. In practice, positive values are represented by integers in the range $[0,\frac{p-1}{2})$, and negative values by integers in the range $[\frac{p-1}{2},p)$. \textbf{Additive Secret Sharing.} Additive secret shares of a value $x$ can be created for two parties by randomly sampling a value $r$ and setting shares as $\langle x\rangle_1 = r$ and $\langle x\rangle_2 = x-r$. The secret can be reconstructed by adding the shares $x=\langle x\rangle_1+\langle x\rangle_2$. Performing additions over two shared values is straightforward in this scheme: each party simply adds its respective shares to obtain an additive sharing of the result. \textbf{Beaver Multiplication Triples.} This protocol~\cite{beaver1995precomputing} is used to perform multiplications over two secret shared values. A set of multiplication triples is generated offline from random values $a$ and $b$, such that the first party receives $\langle a\rangle_1$, $\langle b\rangle_1$, $\langle ab\rangle_1$, and the second party receives $\langle a\rangle_2$, $\langle b\rangle_2$, $\langle ab\rangle_2$. In the online phase, $x$ and $y$ are secret shared among the parties such that the first party holds $\langle x\rangle_1$, $\langle y\rangle_1$ and the second party holds $\langle x\rangle_2$, $\langle y\rangle_2$. To perform the multiplication, they consume a set of triples generated offline, and at the end of the protocol the first party obtains $\langle xy\rangle_1$ and the second party obtains $\langle xy\rangle_2$. \textbf{Homomorphic Encryption.} HE~\cite{elgamal1985public} is a type of encryption that enables computation directly on encrypted data.
Assuming a public key $k_{pub}$ and corresponding secret key $k_{sec}$, an encryption function operates on a plaintext message $m$ to create a ciphertext $c=E(m, k_{pub})$, and a decryption function obtains the message from the ciphertext $m=D(c, k_{sec})$. An operation $\odot$ is homomorphic if for messages $m_1$, $m_2$ we have a function $\star$ such that decrypting the ciphertext $E(m_1, k_{pub}) \star E(m_2, k_{pub})$, which we also write as $\textrm{HE}(m_1 \odot m_2 )$, gives $m_1 \odot m_2$. \textbf{Garbled Circuits.} GCs~\cite{yao1986generate} enable two parties to collaboratively compute a {Boolean} function on their private inputs. The function is first represented as a Boolean circuit $C$. One party (the \emph{garbler}) encodes the circuit using procedure $\tilde{C} \leftarrow Garble(C)$ and sends it to the second party (the \emph{evaluator}). The evaluator also obtains labels of the inputs and is able to evaluate the circuit using procedure $Eval(\tilde{C})$ without learning intermediate values. Finally, the evaluator shares the output labels with the garbler, and both parties obtain the output in plaintext. The cost of GC is largely dependent on the size of the Boolean circuit being computed. \begin{figure} \centering \includegraphics[width=\textwidth]{figs/delphi_new.pdf} \vspace{-1.5em} \caption{An illustration of the Delphi protocol, which Circa\xspace uses. Delphi is a hybrid protocol that uses HE (offline) and SS (online) for linear layers, and GC and Beaver multiplication triples for ReLU and polynomial non-linear layers, respectively. } \vspace{-1em} \label{fig:delphi} \end{figure} \subsection{Delphi Protocol} We now briefly describe the linear and non-linear components of the Delphi protocol, depicted also in Figure~\ref{fig:delphi}. For simplicity, we will focus on the $i^{th}$ layer, with input $\mathbf{y_i}$ and output $\mathbf{y_{i+1}}$. The layer performs the linear computation $\mathbf{x_{i}} = \mathbf{W_i}.\mathbf{y_i}$ (here $\mathbf{W_i}$ are the server's weights) followed by a non-linear activation $\mathbf{y_{i+1}} = \textrm{ReLU}(\mathbf{x_{i}})$. The protocol consists of an input-independent \emph{offline phase} and an input-dependent \emph{online phase}. As a starting point, the client samples random vectors $\mathbf{r_i}\in \mathbb{F}^n_p$ in the offline phase. For input $\mathbf{y_1}$, the client computes $\mathbf{y_1}-\mathbf{r_1}$ and sends it to the server. For the $i^{th}$ layer, the client and the server start with secret shares of the layer input, $\langle \mathbf{y_i} \rangle_{c} = \mathbf{r_i}$ and $\langle \mathbf{y_i} \rangle_{s} = \mathbf{y_i}-\mathbf{r_i}$, and use the Delphi protocol to obtain shares of the output, $\langle \mathbf{y_{i+1}} \rangle_{c} = \mathbf{r_{i+1}}$ and $\langle \mathbf{y_{i+1}} \rangle_{s} = \mathbf{y_{i+1}}-\mathbf{r_{i+1}}$. \textbf{Linear computation:} In the offline phase, the server samples random vectors $\mathbf{s_i}\in \mathbb{F}^n_p$. Using HE on the server side, as shown in Figure~\ref{fig:delphi}, the client then obtains $\mathbf{W_i}.\mathbf{r_i}-\mathbf{s_i}$ without learning the server's randomness or weights and without the server learning the client's randomness. In the \emph{online} phase, the server computes $\mathbf{W_i}.(\mathbf{y_i}-\mathbf{r_i})+\mathbf{s_i}$, at which point the client and the server hold additive secret shares of $\mathbf{x_i} = \mathbf{W_i}.\mathbf{y_i}$. Circa\xspace uses the same protocol for linear layers as Delphi.
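\par To make the arithmetic of the linear protocol concrete, the following self-contained sketch (our own illustration, not the Delphi or Circa\xspace implementation; the toy prime and vector length are arbitrary choices) mimics one linear layer: the client holds $\mathbf{r_i}$ and $\mathbf{W_i}.\mathbf{r_i}-\mathbf{s_i}$ from the offline phase, the server holds $\mathbf{y_i}-\mathbf{r_i}$ and $\mathbf{s_i}$, and adding the two resulting shares recovers $\mathbf{x_i}=\mathbf{W_i}.\mathbf{y_i}$ modulo $p$.
\begin{verbatim}
# Illustrative sketch only (not the Delphi/Circa implementation): additive
# secret sharing of a linear layer x_i = W_i . y_i over a toy prime field.
import random

p = 65537                 # toy prime; the actual experiments use a 31-bit prime
random.seed(0)
n = 4

W = [[random.randrange(p) for _ in range(n)] for _ in range(n)]  # server weights W_i
y = [random.randrange(p) for _ in range(n)]                      # layer input y_i
r = [random.randrange(p) for _ in range(n)]                      # client randomness r_i
s = [random.randrange(p) for _ in range(n)]                      # server randomness s_i

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) % p for i in range(n)]

# Offline (done under HE in the real protocol): client learns W_i.r_i - s_i mod p.
client_share = [(wr - si) % p for wr, si in zip(matvec(W, r), s)]
# Online: client sends y_i - r_i; the server computes W_i.(y_i - r_i) + s_i.
y_minus_r    = [(yi - ri) % p for yi, ri in zip(y, r)]
server_share = [(wv + si) % p for wv, si in zip(matvec(W, y_minus_r), s)]

# Together the two shares reconstruct the layer output x_i = W_i.y_i mod p.
assert [(c + sv) % p for c, sv in zip(client_share, server_share)] == matvec(W, y)
\end{verbatim}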
\textbf{Non-linear computation:} In this description we will focus on ReLU computations (for completeness, Figure~\ref{fig:delphi} also illustrates how quadratic activations are computed). During the offline phase, the server creates a Boolean circuit $C$ for each ReLU in the network, garbles the circuit and sends it to the client along with labels corresponding to the client's input. In the online phase, the linear layer protocol produces the server's share of the ReLU's input. The server sends labels corresponding to its share to the client. The client then evaluates the GC, which outputs the server's share, $\mathbf{y_{i+1}}-\mathbf{r_{i+1}}$, for the next linear layer. Online GC evaluation is the most expensive component of Delphi's online PI latency. Circa\xspace focuses on reducing this cost. \section{Conclusion}\label{sec:conclusion} This paper presented Circa\xspace, a new method to significantly speed up PI by lowering the high cost of ReLUs via approximation. Our overarching objective was to minimize the amount of logic that must occur inside the expensive GC. To achieve this goal we reformulated ReLU as an explicit sign test and mask, where only the sign test is evaluated with GCs, and showed that we can truncate, or simply remove, many of the least significant bits of the sign test's input for even more savings. Though the sign test and truncation optimizations introduce error, we rigorously evaluated the effects and found a negligible impact on accuracy. Compared to a baseline protocol (Delphi) for PI, Circa\xspace provided over 3$\times$ (2.2$\times$ for quadratic Delphi) speedup. Furthermore, we showed how existing state-of-the-art PI optimizations can be combined with Circa\xspace for even more savings, resulting in an additional speedup of 1.8$\times$ over a baseline protocol. {In absolute terms, Circa\xspace can run TinyImageNet inferences within 2 seconds, bringing PI another step closer to real-time deployment.} \textbf{Limitations and Societal Impact} Circa\xspace applies only to inference and not to training, and additionally only applies to certain types of deep networks. Private inference seeks to protect individuals from having to reveal sensitive data, but might also allow unregulated misuse of deep learning services. \section{Evaluation}\label{sec:eval} In this section we evaluate Circa\xspace and validate our error model. We show that our optimizations have minimal effect on network accuracy and show the runtime and storage benefits of Circa\xspace. \subsection{Experimental Setup}\label{sec:setup} We perform experiments on ResNet18~\cite{he2016deep}, ResNet32~\cite{he2016deep} and VGG16~\cite{simonyan2014very}. Since Circa\xspace can be used to replace ReLU activations in \emph{any} network, we also perform experiments on DeepReDuce-optimized models~\cite{jha2021deepreduce} that are the current state of the art for fast PI. We train these networks on the CIFAR-10/100~\cite{krizhevsky2010cifar} and TinyImageNet~\cite{yao2015tiny} datasets. The CIFAR-10/100 (C10/100) datasets have 50k training and 10k test images (size $32\times32$) separated into 10/100 output classes. TinyImageNet (Tiny) consists of 200 output classes with 500 training and 50 test samples (size $64\times64$) per class. Prior work on PI typically evaluates on smaller datasets and lower resolution images because PI is prohibitively expensive for ImageNet or higher resolution inputs.
\input{plots/FaultModelValidation} Circa\xspace uses the Delphi protocol as a base, but substantially modifies the way ReLUs are implemented. We use the SEAL library~\cite{dowlin2015manual} for HE, and the fancy-garbling library~\cite{fancygarble} for GC. We benchmark PI runtime on an Intel i9-10900X CPU running at 3.70GHz with 64GB of memory. The baseline accuracy of the models in PI is reported using an integer model with network values in a prime field. To obtain an integer model, we scale and quantize model parameters and input to 15 bits (as in Delphi), and pick a 31-bit prime field ($p=2138816513$) to ensure that multiplication of two 15-bit values does not exceed the field. The baseline accuracy of our integer models is reported in Table~\ref{tab:CircaC10C100Tiny}. \input{plots/SCurve.tex} \subsection{Experimental Results} \textbf{Validating the Stochastic ReLU Fault Model.} We begin by validating our model of stochastic ReLUs described in Section~\ref{sec:stochrelu} (Theorems~\ref{th:stochrelu} and \ref{th:truncstochrelu}) against Circa\xspace's stochastic ReLU implementation. Figure~\ref{fig:FaultModelValidation}(a) plots the fault probability of the stochastic ReLU with 18-bit truncation in the PosZero mode against the histogram of ResNet18's activations after the first convolution layer. According to our fault model, small positive activations in the truncation range ($0\leq x<2^{18}$) incur a high fault probability, and we have $P=({2^{18}-x})/{2^{18}}$. For values outside of this range, the fault probability is small and grows proportionally to the activation's absolute value, and we have $P={|x|}/{p}$. Figure~\ref{fig:FaultModelValidation}(b) plots fault rates on ResNet18 trained on C100 for the PosZero stochastic ReLU mode. We plot the total fault rate for all activations and the fault rate for only positive activations. Points in the plot indicate measurements from the implementation and the lines show our estimates using the model. We observe that our model is consistent with the implementation across a wide range of truncation values. As expected, as we increase the amount of truncation, the fault rates increase. With 28 bits of truncation, all positive activations are faulty. The total fault rate is $60\%$, which is lower than the positive fault rate because negative activations incur relatively few faults, as predicted by our model. \textbf{Fault Rates and Test Error vs. Truncation.} Figure~\ref{fig:CiraAccWithFaultRate} shows the relationship between truncation, fault rates, and test accuracy. The experiments are done using the C100 and Tiny datasets with ResNet18 and DeepReDuce-optimized networks. Each plot shows data for both NegPass and PosZero modes. We observe that in all cases, Circa\xspace is able to truncate 17-19 bits with negligible accuracy loss at fault rates up to $10\%$. We also find that the PosZero version of Circa\xspace is consistently slightly better than NegPass for these models, enabling 1-2 extra bits of truncation. For all subsequent experiments, we pick the fault mode and truncation such that the resulting accuracy is within $1\%$ of the baseline. \begin{wrapfigure}{r}{0.40\textwidth} \centering \includegraphics[width=0.4\textwidth]{./figs/gc_size.pdf} \vspace{-2em} \caption {Garbled circuit size comparison between baseline ReLU, naive sign and Circa\xspace stochastic ReLUs.
} \vspace{-1.5em} \label{fig:gc-size} \end{wrapfigure} \textbf{Circa\xspace Accuracy and PI Runtime on Baseline Models.} Table~\ref{tab:CircaC10C100Tiny} shows the accuracy and runtime of Circa\xspace for the C10/100 and Tiny datasets applied on top of standard ResNet18/32 and VGG16 models. Circa\xspace achieves $2.6\times$ to $3.1\times$ PI runtime improvement, in each instance with less than $1\%$ accuracy reduction. The runtime improvements are larger for Tiny because the baseline networks are different owing to Tiny's higher spatial resolution. \textbf{Circa\xspace vs. State of the Art.} Delphi, the baseline protocol on which we build Circa\xspace, reduces PI runtime by replacing selected ReLUs with cheaper quadratic activations. For C100, Delphi reduces the number of ReLUs in ResNet32 by $1.2\times$ with ${\sim}1\%$ accuracy loss compared to the baseline, which, at best, translates into an equal reduction in PI latency. For the same setting, Circa\xspace achieves a $2.6\times$ reduction in PI latency. Circa\xspace can be applied on \emph{any} pre-trained ReLU network. Table~\ref{tab:CircaDeepReDuce} shows Circa\xspace's accuracy and PI runtime applied to DeepReDuce-optimized networks, the current state of the art in PI, across a range of ReLU counts on C100 and Tiny. Circa\xspace reduces DeepReDuce PI latency by $1.6\times$ to $1.8\times$, with less than $1\%$ accuracy loss. Moreover, Circa\xspace improves the set of Pareto optimal points. For example, Circa\xspace achieves $75.34\%$ accuracy on C100 with 1.65s PI runtime, while DeepReDuce has both higher runtime (1.71s) and lower accuracy ($74.7\%$). For Tiny, Circa\xspace increases DeepReDuce's accuracy from $59.18\%$ to $61.63\%$ at effectively iso-runtime (3.18s vs. 3.21s). Finally, SAFENet~\cite{SAFENET}, the state of the art that DeepReDuce improved on, achieves $1.9\times$ speedup over Delphi on ResNet32/C100, while Circa\xspace achieves a $2.6\times$ speedup. \input{tables/circa_c10_c100_tiny} \input{tables/DeepreduceWithcirca_c100_tiny.tex} \textbf{Effectiveness of Circa\xspace Optimizations.} Circa\xspace encompasses three optimizations that build on top of each other, buying us multiplicative savings in GC size and PI runtime. Figure~\ref{fig:gc-size} shows the GC size after each optimization. Replacing the baseline 31-bit ReLU GC with a 31-bit sign GC reduces GC size by $1.4\times$ (with no accuracy loss), a 31-bit stochastic sign GC is $1.9\times$ smaller, and truncating the stochastic sign to 12 bits achieves a $4.7\times$ saving over the baseline. The runtime improvements from each of these optimizations are shown in Table 3 in the Appendix. \section{Introduction}\label{sec:intro} Today, Machine Learning as a Service (MLaaS) provides high-quality user experiences but comes at the cost of privacy---clients either share their personal data with the server or the server must disclose its model to the clients. Ideally, both the client and server would preserve the privacy of their inputs and model without sacrificing quality. A recent and growing body of work has focused on designing and optimizing cryptographic protocols for private inference (PI).
With PI, MLaaS computations are performed obliviously; without the server seeing the client's data nor the client learning the server's model. PI protocols are built using cryptographic primitives including homomorphic encryption (HE), Secret Sharing (SS), and secure multiparty computation (MPC). The challenge is that all known protocols for PI incur impractically high overheads, rendering them unusable. Existing PI frameworks~\cite{liu2017oblivious, juvekar2018gazelle, mishra2020delphi} are based on \textit{hybrid} protocols, where different cryptographic techniques are used to evaluate different network layers. Delphi~\cite{mishra2020delphi}, a leading solution based on Gazelle~\cite{juvekar2018gazelle}, uses additive secret sharing for convolution and fully-connected layers. Secret sharing supports fast addition and multiplication by moving large parts of the computation to an offline phase~\cite{beaver1995precomputing}. Thus, convolutions can be computed at near plaintext speed. Non-linear functions, notably ReLU, cannot enjoy the same speedups. Most protocols (including Delphi, Gazelle, and MiniONN~\cite{liu2017oblivious}) use Yao's Garbled Circuits (GC)~\cite{yao1986generate} to process ReLUs. GCs allow two parties to collaboratively and privately compute arbitrary Boolean functions. At a high-level, GCs represent functions as encrypted two-input truth tables. This means that computing a function with GCs requires the function be decomposed into a circuit of binary gates that processes inputs in a bit-wise fashion. Thus, evaluating ReLUs privately is extremely expensive, to the point that PI inference runtime is {dominated} by ReLUs~\cite{mishra2020delphi,cryptonas}. Therefore, reducing ReLU cost is critical to realizing practical PI. There are two general approaches for minimizing the cost of ReLUs: designing new architectures that limit ReLU counts, and optimizing the cost per ReLU. Prior work has almost exclusively focused on minimizing ReLU counts. Work along this line includes replacing or approximating ReLUs with quadratics or polynomials (e.g., CryptoNets~\cite{gilad2016cryptonets}, Delphi~\cite{mishra2020delphi}, SAFENet~\cite{SAFENET}), designing new networks architectures to maximize accuracy per ReLU (e.g., CryptoNAS~\cite{cryptonas}), and more aggressive techniques that simply remove ReLUs from the network (e.g., DeepReDuce~\cite{jha2021deepreduce}). Relatively little attention has been given to minimizing the cost of the ReLU operation itself. In this paper we propose Circa\xspace\footnote{In Latin, ``circa" means approximately.}, a novel method to reduce ReLU cost based on a new \emph{stochastic ReLU} function. First, we refactor the ReLU as a sign computation followed by a multiplication, allowing us to push the multiplication from GC to SS, leaving only a sign computation in the GC. Next, we approximate the sign computation to further reduce GC cost. This approximation is not free; it results in stochastic sign evaluation where the results are sometimes incorrect (we call incorrect computations {faults} to differentiate from inference error/accuracy). Finally, we show that stochastic sign can be optimized even further by truncating its inputs; truncation introduces new faults, but only for small positive or negative values. Our key insight is that deep networks are highly resilient to stochastic ReLU fault behaviour, which provides significant opportunity for runtime benefit. The stochastic ReLU introduces two types of faults. 
First, the sign of a ReLU can be incorrectly computed with probability proportional to the magnitude of the input (this probability is the ratio of the input magnitude to the field prime). In practice we find this rarely occurs as most ReLU inputs are very small (especially compared to the prime) and thus the impact on accuracy is negligible. Second, truncation can cause \emph{either} small positive {or} small negative values to fault. Circa\xspace allows users to choose between these two probabilistic fault modes. In \textit{NegPass}, small negative numbers experience a fault with some probability and are incorrectly passed through the ReLU. Alternatively, \textit{PosZero} incorrectly resolves small positive inputs to zero. Empirically, we find deep networks to be highly resilient against such faults, tolerating fault rates of more than 10\% without sacrificing accuracy. Compared to Delphi, Circa\xspace-optimized networks run up to 3$\times$ faster. We further show that Circa\xspace is orthogonal to the current best practice for ReLU count reduction~\cite{jha2021deepreduce}. When combined, we observe an additional 1.8$\times$ speedup. \section{Circa\xspace Methodology}\label{sec:method} We now describe Circa\xspace, beginning with a cost analysis of the GC design used in prior work. We then describe three optimizations to reduce GC size and latency that form the core of Circa\xspace. \subsection{Cost Analysis of ReLU GC} The inputs to a conventional ReLU GC are the client's and server's shares of $x$, i.e., $\langle x \rangle_{c}$ and $\langle x \rangle_{s}$, and a random value $r$ from the client. Each is a value in the field $\mathbb{F}_{p}$, implemented using $m = \lceil \log_2 p \rceil$ bits. Prior work~\cite{liu2017oblivious, juvekar2018gazelle, mishra2020delphi} implements ReLU with a circuit that performs several computations contributing to the GC cost as shown in Figure~\ref{fig:relu}(a). First, $x = \langle x \rangle_{c} + \langle x \rangle_{s} \bmod p$ is computed by obtaining $z = \langle x \rangle_{c} + \langle x \rangle_{s}$ and $z-p$ using two $m$-bit adder/subtractor modules (ADD/SUB). $z$ is checked for overflow, and either $z$ or $z-p$ is selected using a multiplexer (MUX). Then, $x$ is compared with $\frac{p}{2}$ using an $m$-bit comparator (>), and a MUX outputs either $0$ or $x$. Finally, the GC outputs the server's share of $\textrm{ReLU}(x)$ by performing a modulo subtraction of $r$ from the output of the previous MUX using two ADD/SUB modules and another MUX. This design, used by Delphi and Gazelle, results in a GC size of $17.2$KB per ReLU. Overall, the GCs for ResNet32, as implemented in Delphi, require close to $5$GB of client-side storage per inference\footnote{Note that a separate GC must be constructed for each ReLU in a network and GCs cannot be reused across inferences.}. GC size directly correlates with PI latency, resulting in prohibitive online runtime. \subsection{Circa\xspace's Stochastic ReLU}\label{sec:stochrelu} Circa\xspace's stochastic ReLU is built using three optimizations that work together to reduce GC size. We describe each optimization below. \begin{figure} \centering \subfloat[]{\includegraphics[width=.33\textwidth]{figs/relu_new.pdf}}\hfill \subfloat[]{\includegraphics[width=.33\textwidth]{figs/sign_new.pdf}}\hfill \subfloat[]{\includegraphics[width=.33\textwidth]{figs/signapx_new.pdf}} \vspace{-.5em} \caption{Comparing implementations of the ReLU function for PI.
(a) depicts the implementations in prior work~\cite{liu2017oblivious, juvekar2018gazelle, mishra2020delphi} where the ReLU function is implemented with GC, (b) shows a naive implementation of the sign function followed by multiplication triples, and (c) describes the implementation in Circa\xspace with an optimized sign function followed by multiplication triples. The heaviest part of the computation in each of these implementations relates to the GC which is shown in shaded blocks.} \vspace{-1em} \label{fig:relu} \end{figure} \textbf{Refactoring ReLUs.} Our first observation is that $\textrm{ReLU}(x)$ can be refactored as $x.\textrm{sign}(x)$, where $\textrm{sign}(x)$ equals $1$ if $x\geq0$ and $0$ otherwise. Since multiplications can be evaluated cheaply online using Beaver triples, only $\textrm{sign}(x)$ must be implemented in GC. Let $v = \textrm{sign}(x)$; the GC computes the server's share of $v$, $\langle v \rangle_{s} = \textrm{sign}(x) - r$, using shares of $x$ and random value $r$ provided by the client. The client will then set its share to $\langle v \rangle_{c} = r$. Figure~\ref{fig:relu}(b) shows our naive GC implementation for $\textrm{sign}(x)$. As in the $\textrm{ReLU}(x)$ GC (Figure~\ref{fig:relu}(a)), we first compute $x = \langle x \rangle_{c}$ + $\langle x \rangle_{s} \bmod p$ using two ADD/SUB modules and a MUX, and use a comparator to check $x$ against $\frac{p}{2}$. By having the client pre-compute and provide both $-r$ and $1-r$ as inputs to the GC we save two ADD/SUB modules, since we no longer need to perform these computations inside the GC. Note that the client selects $r$ and can compute $-r$ and $1-r$ by itself at plaintext speed. Formally, our GC for the sign function implements: \begin{equation} sign \big(\langle x \rangle_{c}, \langle x \rangle_{s}, -r, 1-r \big) = \begin{cases} -r & \text{if $\langle x \rangle_{c} + \langle x \rangle_{s} \bmod p > \frac{p}{2}$ } \\ 1-r & \text{otherwise} \end{cases} \end{equation} The client and server now hold secret shares of both $x$ (i.e., $\langle x \rangle_{c}$ and $\langle x \rangle_{s}$), and $v$ (i.e., $\langle v \rangle_{c}$ and $\langle v \rangle_{s}$). We now use pre-computed Beaver multiplication triples, as described in Section~\ref{sec:crypto}, to compute shares of $y=x.\textrm{sign}(x)$. This multiplication is cheap and the optimized ReLU is smaller and faster than standard GC implementation in Figure~\ref{fig:relu}(a), providing modest benefits. \textbf{Stochastic Sign.} Our second optimization further reduces the cost of the sign computation. As noted previously, the naive sign GC still uses high-cost components inside the GC because of the need to perform \emph{modulo} additions to exactly reconstruct $x$; modulo additions require expensive checks for overflow and a subsequent subtraction. Our next optimization only looks at the regime without overflow, greatly simplifying the GC at the cost of introducing occasional faults in sign computation. Figure~\ref{fig:relu}(c) shows our proposed stochastic sign optimization that reduces the logic inside the GC to only a comparator and a MUX. We first formally define the stochastic sign function. 
\begin{equation}\label{eq:stochrelu} \widetilde{sign} \big (p-\langle x \rangle_{c}, \langle x \rangle_{s}, -r, 1-r \big ) = \begin{cases} -r & \text{if $\langle x \rangle_{s} \leq p-\langle x \rangle_{c}$ } \\ 1-r & \text{otherwise} \end{cases} \end{equation} Note that in the stochastic sign GC, the client sends the negated value of its share (or $p-\langle x \rangle_{c}$) instead of the share directly. This optimization avoids the need to compute $p-\langle x \rangle_{c}$ inside the GC itself. We formalize the fault rates of the stochastic ReLU below. \begin{theorem}\label{th:stochrelu} For any $x \in \mathbb{F}_{p}$, assuming shares $\langle x \rangle_s=x+t \bmod{p}$ and $\langle x \rangle_c=p-t$ where $t$ is picked uniformly at random from $\mathbb{F}_{p}$, $$P \Big\{ \widetilde{sign} \big (p-\langle x \rangle_{c}, \langle x \rangle_{s}, -r, 1-r \big ) \neq sign \big(\langle x \rangle_{c}, \langle x \rangle_{s}, -r, 1-r \big) \Big\} = \frac{|x|}{p}.$$ \end{theorem} \begin{proof} Consider the case where $x$ is positive, i.e., $x<\frac{p}{2}$. The wrong sign is assigned to $x$ if $\langle x \rangle_s \leq p-\langle x \rangle_c$, which can be rewritten as $x+t \bmod{p} \leq t$. This is true when adding $x$ and $t$ incurs an overflow, i.e., when $t\geq p-x$. Since $t$ is drawn uniformly at random, the probability of error is $P=\frac{x}{p}$. A similar analysis for negative values of $x$ shows that a wrong sign is assigned when $x+t < p$, which results in an error probability of $P=\frac{|x|}{p}$, where $|x|=p-x$ for $x>\frac{p}{2}$. \end{proof} \textbf{Truncated Stochastic Sign.} Our third and most effective optimization builds on the observation that the $\langle x \rangle_s \leq p - \langle x \rangle_c $ (equivalently, $\langle x \rangle_s \leq t$) check in the stochastic sign GC can be performed on {truncated} values. We show that truncation introduces an additional fault mode; in particular, the check is incorrect with some probability for small positive values of $x$ in the range $[0, 2^{k})$ (i.e., values that truncate to $0$ with $k$-bit truncation) but is correct for all other values of $x$. Let $\lfloor x \rfloor_k$ represent truncation of the $k$ least significant bits of $x$, i.e., only the $m-k$ most significant bits of $x$ are retained. We define the truncated stochastic sign as: \begin{equation}\label{eq:truncstochrelu} \widetilde{sign}_{k} \big (p-\langle x \rangle_{c}, \langle x \rangle_{s}, -r, 1-r \big ) = \widetilde{sign} \big (\lfloor p-\langle x \rangle_{c} \rfloor_{k}, \lfloor \langle x \rangle_{s} \rfloor_{k}, -r, 1-r \big ) \end{equation} where $k$ represents the amount of truncation. We now prove that the truncated stochastic sign function incurs additional errors (over the stochastic sign) only for small positive values. \begin{theorem}\label{th:truncstochrelu} For any $x \in \mathbb{F}_{p}$, assuming shares $\langle x \rangle_s=x+t \bmod{p}$ and $\langle x \rangle_c=p-t$ where $t$ is picked uniformly at random from $\mathbb{F}_{p}$, and assuming $\widetilde{sign} \big (p-\langle x \rangle_{c}, \langle x \rangle_{s}, -r, 1-r \big ) = sign \big(\langle x \rangle_{c}, \langle x \rangle_{s}, -r, 1-r \big)$, then: $$P \Big\{ \widetilde{sign}_{k} \big (p-\langle x \rangle_{c}, \langle x \rangle_{s}, -r, 1-r \big ) \neq \widetilde{sign} \big (p-\langle x \rangle_{c}, \langle x \rangle_{s}, -r, 1-r \big ) \Big\} = \frac{2^k-|x|}{2^k} \quad \forall x \in [0, 2^{k}),$$ and zero otherwise.
\end{theorem} \begin{proof} For negative $x$, the stochastic sign is error-free if $\langle x\rangle_s \leq p-\langle x\rangle_c$ (equivalently, $\langle x\rangle_s \leq t$). Truncation preserves this ordering, so $\lfloor\langle x\rangle_s \rfloor_k \leq \lfloor t \rfloor_k$ and a negative sign is still assigned; hence the truncated stochastic sign incurs no additional error for negative $x$. An additional error can therefore only occur for positive values of $x$, namely when $\lfloor\langle x\rangle_s \rfloor_k = \lfloor t \rfloor_k$ even though $\langle x\rangle_s > t$. That is, the error happens when $\lfloor x+t \rfloor_k = \lfloor t \rfloor_k$, i.e., when adding $x$ does not carry into the upper $m-k$ bits of $t$. For $|x|\geq 2^k$ a carry always occurs, so there is no additional error. Otherwise, the error happens when $(t \bmod 2^k) + x < 2^k$, or equivalently $t \bmod 2^k < 2^k-|x|$. Assuming a uniform distribution, the error probability is $P=\frac{2^k-|x|}{2^k}$. \end{proof} \textbf{Putting it All Together: the Stochastic ReLU.} We define Circa\xspace's stochastic ReLU as $\widetilde{\textrm{ReLU}}_{k}(x)=x.\widetilde{sign}_{k}(x)$ with $\widetilde{sign}_{k}$ defined in Eq.~\ref{eq:truncstochrelu}. Stochastic ReLUs incur \emph{two} types of faults: (1) a sign error, independent of $k$, with probability $\frac{|x|}{p}$ (in practice $|x|\ll p$ for typical choices of prime), and (2) \emph{small} positive values in the truncation range $x \in [0, 2^{k})$ are zeroed out with probability $\frac{2^k-|x|}{2^k}$; however, for small values we expect the impact on network accuracy to be low. We note that Eq.~\ref{eq:stochrelu} could have been defined such that $\widetilde{sign}$ outputs $-r$ for $\langle x \rangle_{s} < p-\langle x \rangle_{c}$. With this modification, truncation errors occur with the same probability but for small negative values in the range $[p-2^k, p)$ that are passed through to the ReLU output. That is, our stochastic ReLU can operate in one of two modes: (1) \textbf{PosZero}, that zeros out small positive values, or (2) \textbf{NegPass}, that passes through small negative values. \section{Related Work}\label{sec:related} \textbf{PI Protocols.} Over the past few years a series of papers have proposed and optimized protocols for private machine learning. CryptoNets~\cite{gilad2016cryptonets} demonstrates an HE-only protocol for inference using the MNIST dataset. SecureML~\cite{mohassel2017secureml} shows how secret sharing could be used for MNIST inference and trains linear regression models~\cite{mohassel2017secureml}. MiniONN~\cite{liu2017oblivious} combines secret sharing with multiplication triples and GCs, allowing them to run deeper networks, and forms the foundation for a series of follow-on protocols. While MiniONN generates multiplication triples for each multiplication in a linear layer, Gazelle~\cite{juvekar2018gazelle} uses an efficient additive HE protocol to speed up linear layers. Delphi shows how significant speedup can be obtained over {Gazelle} by moving heavy cryptographic computations offline. XONN~\cite{riazi2019xonn} enables private inference using only GCs for binarized neural networks and leverages the fact that XORs can be computed for free in the GC protocol to achieve speedups. Another approach is to replace GCs with secure enclaves and process linear layers on GPUs for more performance~\cite{slalom}. Some have also focused on privacy-enhanced training~\cite{falcon, cryptGPU}, typically assuming a different threat model than this work. \textbf{ReLU optimization.} Prior work has also looked at designing ReLU-optimized networks.
A common approach is to replace ReLUs with quadratics~\cite{gilad2016cryptonets, mishra2020delphi, SAFENET, fastercryptonets}. While effective in reducing GC cost, this complicates the training process and can degrade accuracy. Another approach is to design novel ReLU-optimized architectures. CryptoNAS~\cite{cryptonas} develops the idea of a ReLU budget and designs new architectures to maximize network accuracy per ReLU. DeepReDuce is recent work that proposes simply removing ReLUs from the network altogether~\cite{jha2021deepreduce}. DeepReDuce is the current state-of-the-art solution for PI, and we demonstrated how Circa\xspace can be used on top of it for even more savings. \textbf{Fault-tolerant inference.} Many have previously shown that neural networks are resilient to faults during inference. In the systems community, fault-tolerance properties are often used to improve energy efficiency and runtime~\cite{minerva, ares, jeff, maxnvm}. Others have shown that networks can tolerate approximation to reduce model size by pruning insignificant weights and activations and possibly compressing them~\cite{deepCompression, weightless, dropconnect, masr}.
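\par As a complement to the analysis in Section~\ref{sec:stochrelu}, the following self-contained simulation (our own illustrative sketch, not the Circa\xspace implementation or the fancy-garbling library) mimics the PosZero truncated stochastic sign over the 31-bit prime field of Section~\ref{sec:setup} with 18 bits of truncation, and compares the measured rate of disagreement with $sign(x)$ against the probabilities of Theorems~\ref{th:stochrelu} and~\ref{th:truncstochrelu}.
\begin{verbatim}
# Illustrative sketch only (not the Circa implementation): empirical check of
# the PosZero truncated stochastic sign against the derived fault probabilities.
import random

p, k = 2138816513, 18          # prime field and an 18-bit truncation
random.seed(0)

def true_sign(x):
    """1 for field elements interpreted as non-negative, 0 otherwise."""
    return 1 if x <= (p - 1) // 2 else 0

def measured_fault_rate(x, trials=20000):
    faults = 0
    for _ in range(trials):
        t = random.randrange(p)
        xs = (x + t) % p       # server's share <x>_s
        neg_xc = t             # p - <x>_c, sent by the client
        # GC comparison on k-bit-truncated inputs: "negative" if <x>_s <= p - <x>_c
        sign_hat = 0 if (xs >> k) <= (neg_xc >> k) else 1
        faults += (sign_hat != true_sign(x))
    return faults / trials

for x in [5, 2**k // 4, 2**k // 2, 3 * 2**k // 4, 2**k + 7, p - 3]:
    mag = x if x <= (p - 1) // 2 else p - x
    # dominant fault probability: (2^k - x)/2^k inside the truncation range, |x|/p outside
    predicted = (2**k - x) / 2**k if x < 2**k else mag / p
    print(f"x = {x:>10d}  measured = {measured_fault_rate(x):.3f}  predicted = {predicted:.5f}")
\end{verbatim}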
\section{Introduction} Understanding the origin of the Galactic field population is one of the most important outstanding problems in astrophysics. The field is likely a mixture of many types of star-forming region, but at present we have little information on the `average' star forming region that populates the field in terms of its mass, density, kinematics, chemical signature and binary properties. The initial mass function (IMF) of stars is one potential clue to the dominant star formation event. If the IMF were to vary as a function of environment, then this would place constraints on the star formation event which dominantly contributes to the field. However, many studies of star-forming (SF) regions, clusters and associations over the past twenty years suggest that the environment in which stars form has little influence on the IMF, which appears to be invariant in Galactic SF regions, and is the same as in the field \citep*[][and references therein]{Bastian10}. Binary stars are potentially more of a strong constraint on the origin of the Galactic field than the IMF. The seminal paper by \citet{Duquennoy91} found that the multiplicity fraction (hereafter `binary fraction') of Solar-type G-dwarf stars (primary masses in the range 0.8 -- 1.2\,M$_\odot$) to be $f_{\rm bin} = 0.58$, where \begin{equation} f_{\rm bin} = \frac{B + T + ...}{S + B + T + ...}, \end{equation} and $S$, $B$ and $T$ are the number of single, binary and triple systems, respectively. These authors also demonstrated that the period distribution can be approximated with a log-normal distribution extending over many orders of magnitude; from spectroscopic (close) binary systems with separations $\sim 10^{-3}$\,au to extremely wide (`common proper motion') systems with separations $\sim 10^5$\,au. In general, the surveys of binary stars are usually sensitive to companions with a mass ratio $q > 0.1$ \citep[e.g.][]{Duchene13b}, where \begin{equation} q = \frac{m_s}{m_p} \end{equation} and $m_p$ and $m_s$ are the masses of the primary (usually more massive) and secondary component stars, respectively. Following the work on Solar-type primaries, \citet{Fischer92} collated the binary statistics for lower-mass M-dwarf stars. Unfortunately, the data were not as complete as for the G-dwarfs, but suggested that the binary fraction of M-dwarfs was slightly lower ($f_{\rm bin} = 0.42$) but with a log-normal separation distribution with a similar mean and variance to the G-dwarfs. In principle, it should be possible to compare the overall fraction, separation distribution (and other orbital parameters) of binaries in SF regions to those in the field. Unfortunately, observations of binaries in SF regions are often limited to a (comparatively) narrow separation range \citep[usually 10s -- 1000s au, ][and references therein]{King12b}. Depending on the local density of a SF region, it is usually these systems -- `intermediate' binaries -- which are susceptible to destruction through dynamical encounters \citep{Heggie75,Hills75a,Hills75b}, often in a way that is difficult to account for in a simple analytical model \citep{Fregeau06,Parker12b}. This in turn makes comparing the binary statistics between different regions somewhat difficult. As an example, the Orion Nebula Cluster (ONC) contains no wide ($>$1000\,au) binaries \citep{Scally99}, and in the range 62 -- 620\,au has a binary fraction and separation distribution which is consistent with the field \citep{Reipurth07}. 
On the other hand, the Taurus association appears to contain an excess of wide binaries \citep{Kohler98}, as do several other regions and open clusters \citep[e.g.][]{Patience02,Kohler08}. Do these differences between regions point to different star formation outcomes \citep[as suggested by][]{King12b}, or are they merely the result of dynamical evolution of a common primordial population in regions with different densities \citep{Marks12}? The apparent excess of binary systems with intermediate/wide separations (10s -- 1000s au) in some SF regions led \citet{Kroupa95a} to postulate that the primordial binary fraction could be as high as unity, and that binaries form from a universal initial period distribution \citep{Kroupa95b,Kroupa11}, which is modified by dynamical interactions in dense regions and clusters such as the ONC, but not in more sparse regions such as Taurus. This model has been invoked to explain the binary properties in several regions \citep{Marks12}, although the comparison with observations is necessarily limited to the observed separation range. \citet{Kroupa95a,Kroupa99} and \citet{Marks12} suggest that the binary fraction ($\sim 0.5$) and log-normal separation distribution in the field result from the dynamical processing of binaries which form from the \citet{Kroupa95b} universal period distribution and a binary fraction of unity. In recent years, several groups of authors have conducted new observations of binary stars in the field, and updated the statistics. \citet{Raghavan10} revisited the Solar-type G-dwarfs in the field and re-affirmed the earlier work by \citet{Duquennoy91}; they found an overall binary fraction of 0.46 and a log-normal period/separation distribution with a peak at $\bar{a} = 50$\,au. Most notably, \citet{Bergfors10} showed that the M-dwarfs in the field have a lower binary fraction -- 0.34 -- than that determined by \citet{Fischer92}, and \citet{Janson12} demonstrated that the M-dwarf separation distribution peaks at a significantly lower value than for the G-dwarfs (16\,au instead of 50\,au). Observations of A-star binaries \citep{deRosa12,DeRosa14} show that they have a binary fraction of 0.48, slightly higher than that of G-dwarfs, but their separation distribution peaks at much higher values (389\,au), whereas observations of brown dwarf binaries \citep{Burgasser07} show they have a binary fraction of 0.15 and a separation distribution that peaks at much lower values (4.6\,au) than the more massive M-, G- and A-type binaries. In Fig.~\ref{log_normals} we show the log-normal fits to the separation distributions, normalised to the respective binary fractions for the brown dwarfs (orange line), M-dwarfs (blue line), G-dwarfs (red line) and A-stars (green line). In Fig.~\ref{cumulatives} we show these separation distributions as cumulative distributions (with the same colour scheme). A summary of these distributions and the parameters used to create them, along with the literature references, are provided in Table~\ref{field_props}. In addition to the decreasing peak of the separation distribution with decreasing primary mass, the width (i.e.\,\,variance) of the distribution also decreases from G-dwarfs to brown dwarfs, although the width of the A-star distribution is similar to the M-dwarf distribution. \citet{DeRosa14} point out that their observations of A-stars are not sensitive to sub-30\,au binaries, which could imply that the separation distribution for A-stars is wider than observed. 
\citet{Duchene13b} argue for a double-peaked separation distribution for A-star binaries, and we discuss this possibility and its implications further in Section~\ref{results}. \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[scale=0.33]{log-normals.ps}} \end{center} \caption[bf]{The log-normal fits to binary separation distributions in the Galactic field. From right to left, the fit to A-star binaries \citep{DeRosa14} is shown by the green line, the fit to G-dwarfs \citep{Raghavan10} is shown by the red line, the fit to M-dwarfs is shown by the blue line \citep{Janson12} and the fit to very low mass binaries \citep{Burgasser07,Thies07} is shown by the orange line. See Table~\ref{field_props} for details of each log-normal fit. Each distribution is normalised to the binary fraction in the field.} \label{log_normals} \end{figure} \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[scale=0.33]{cumulatives.ps}} \end{center} \caption[bf]{Cumulative distributions of the log-normal fits to binary separation distributions in the Galactic field. From right to left, the distribution for A-star binaries \citep{DeRosa14} is shown by the green line, the fit to G-dwarfs \citep{Raghavan10} is shown by the red line, the fit to M-dwarfs is shown by the blue line \citep{Janson12} and the fit to very low mass binaries \citep{Burgasser07,Thies07} is shown by the orange line.} \label{cumulatives} \end{figure} \begin{table*} \caption[bf]{Binary properties of systems observed in the Galactic field. We show the spectral type of the primary, the main sequence mass range this corresponds to, the binary fraction $f_{\rm bin}$, the observed mean separation $\bar{a}$, and the mean (${\rm log}\,\bar{a}$) and variance ($\sigma_{{\rm log}\,\bar{a}}$) of the log-normal fits to these observed separation distributions.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Type & Primary mass & $f_{\rm bin}$ & $\bar{a}$ & ${\rm log}\,\bar{a}$ & $\sigma_{{\rm log}\,\bar{a}}$ & Ref. \\ \hline A & $1.5 < m/$M$_\odot \leq 3.0$ & 0.48 & 389\,au & 2.59 & 0.79 & \citet{DeRosa14} \\ \hline G-dwarf & $0.8 < m/$M$_\odot \leq 1.2$ & 0.46 & 50\,au & 1.70 & 1.68 & \citet{Raghavan10} \\ \hline M-dwarf & $0.08 < m/$M$_\odot \leq 0.45$ & 0.34 & 16\,au & 1.20 & 0.80 & \citet{Bergfors10,Janson12} \\ \hline Brown dwarf & $0.02 < m/$M$_\odot \leq 0.08$ & 0.15 & 4.6\,au & 0.66 & 0.40 & \citet{Burgasser07,Thies07} \\ \hline \end{tabular} \end{center} \label{field_props} \end{table*} In summary, the observed binary fraction and the average separation appear to decrease as a function of decreasing primary mass in the Galactic field. In this paper, we investigate the extent to which these differences are due to the effects of dynamical evolution on a single common primordial binary population (i.e.\,\,a binary fraction of unity and the \citet{Kroupa95b,Kroupa11} initial period distribution). We describe the method for setting up dense SF regions and binaries in our $N$-body simulations in Section~\ref{method}, we describe the results in Section~\ref{results}, we provide a discussion in Section~\ref{discuss} and we conclude in Section~\ref{conclude}. \section{Method} \label{method} In this section, we describe the method used to set up and run our numerical simulations of the evolution of binary populations in star forming regions.
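Since the log-normal fits in Table~\ref{field_props} are used both as a point of comparison throughout this paper and (in two of our simulation suites) as initial conditions, it is convenient to state them explicitly. The short Python sketch below is purely illustrative; it simply encodes the tabulated parameters and is not the code used to construct our initial conditions.
\begin{verbatim}
import numpy as np

# Field log-normal separation distributions: (f_bin, mean log10(a/au),
# sigma of log10(a)), values copied from the field_props table.
FIELD = {
    'A-star':      (0.48, 2.59, 0.79),
    'G-dwarf':     (0.46, 1.70, 1.68),
    'M-dwarf':     (0.34, 1.20, 0.80),
    'Brown dwarf': (0.15, 0.66, 0.40),
}

def field_pdf(log_a, f_bin, mu, sigma):
    """Log-normal in log10(a), scaled to the field binary fraction
    (the normalisation used in the figures above)."""
    gauss = np.exp(-0.5 * ((log_a - mu) / sigma) ** 2)
    return f_bin * gauss / (sigma * np.sqrt(2.0 * np.pi))

def sample_separations(n, mu, sigma, seed=0):
    """Draw n semi-major axes (in au) from the log-normal fit."""
    rng = np.random.default_rng(seed)
    return 10.0 ** rng.normal(mu, sigma, size=n)

for name, (f_bin, mu, sigma) in FIELD.items():
    a = sample_separations(10000, mu, sigma)
    print(f'{name:12s} f_bin = {f_bin:.2f}'
          f'  median a = {np.median(a):8.1f} au')
\end{verbatim}
The median of each sampled distribution recovers (to within sampling noise) the mean separation quoted in Table~\ref{field_props}, e.g.\,\,$10^{2.59} \simeq 389$\,au for the A-stars.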
\subsection{Star forming region set up} Observations of many young star forming regions suggest that stars form in filamentary distributions \citep{Andre10,Arzoumanian11}, which leads to a hierarchical, or self-similar, substructured spatial distribution of stars \citep[e.g.][]{Cartwright04,Schmeja06,Gouliermis14}. A convenient way of creating substructure as the initial conditions for $N$-body simulations is to use a fractal distribution \citep{Goodwin04a}, where the degree of substructure is described by just one number, the fractal dimension, $D$. In this paper, we use very fractal ($D = 1.6$ in three dimensions) distributions as the initial conditions -- although both observations \citep{Cartwright04,Schmeja08,Sanchez09,Gouliermis14} and simulations of star formation \citep{Schmeja06,Girichidis12,Dale12a,Dale13} show that higher fractal dimensions (less substructure) are possible. The fractals are set up so that local velocities of stars are correlated \citep{Goodwin04a} but distant stars can have very different velocities. For full details of the construction of the fractals, we refer the reader to \citet{Goodwin04a} and \citet{Parker14b}. The fractals each contain 1500 stars and have a radius $r_F = 1$\,pc, which leads to local densities of $\rho_{\rm local} \sim 10^3$\,M$_\odot$\,pc$^{-3}$. We adopt two different initial dynamical states for our simulated star forming regions. In two sets of simulations we scale the velocities to be subvirial ($\alpha_{\rm vir} = 0.3$, where the virial ratio $\alpha_{\rm vir} = T/|\Omega|$; $T$ and $|\Omega|$ are the total kinetic energy and the magnitude of the total potential energy of the stars, respectively). This leads to the erasure of substructure within several crossing times, and the region collapses to form a centrally concentrated, bound star cluster. When the substructure has these high initial densities, virtually all of the dynamical destruction of binaries occurs before the subsequent collapse and formation of a cluster \citep*{Parker11c}. In a third set of simulations, we scale the velocities to be supervirial, $\alpha_{\rm vir} = 1.5$. These regions expand, but retain some substructure and can result in the formation of an association-like complex, or a binary cluster \citep{Parker14b}. The initial density of the substructure in both our simulated subvirial and supervirial regions is significantly higher than in the majority of nearby star-forming regions \citep{Bressert10}, which will likely contribute to the Galactic field. This is to allow for the possibility that the field may be the sum of clusters that were denser at earlier epochs \citep{Longmore14}, which would imply that the field binaries are more likely to have been dynamically processed, or that the local star-forming regions in \citet{Bressert10} are somehow not representative of all regions. We draw \emph{primary} stellar masses from the analytic determination of the IMF by \citet{Maschberger13}, which has a probability density function of the form: \begin{equation} p(m) \propto \left(\frac{m}{\mu}\right)^{-\alpha}\left(1 + \left(\frac{m}{\mu}\right)^{1 - \alpha}\right)^{-\beta} \label{imf}. \end{equation} Eq.~\ref{imf} essentially combines the log-normal approximation for the IMF derived by \citet{Chabrier03,Chabrier05} with the \citet{Salpeter55} power-law slope for stars with mass $>$1\,M$_\odot$.
Here, $\mu = 0.2$\,M$_\odot$ is the average stellar mass, $\alpha = 2.3$ is the Salpeter power-law exponent for higher mass stars, and $\beta = 1.4$ is the power-law exponent to describe the slope of the IMF for low-mass objects \citep[which also deviates from the log-normal form;][]{Bastian10}. Finally, we sample from this IMF within the mass range $m_{\rm low} = 0.01$\,M$_\odot$ to $m_{\rm up} = 50$\,M$_\odot$. \subsection{Binary populations} We utilise two separate binary populations in our simulations. In one set of simulations all binaries form from a `common' primordial population, i.e.\,\,the binary fraction is unity (everything forms in a binary) and the semi-major axes are drawn from the same initial distribution. In this scenario, we are testing the hypothesis that the observed decrease in binary fraction and mean separation as a function of primary mass is due to the dynamical processing of a common population, and that systems with lower primary masses (and therefore lower binding energy on average) are simply more susceptible to destruction. In the remaining simulations, we set the binary fractions and semi-major axis distributions as a function of primary mass, as observed in the Galactic field. In all cases, secondary masses are drawn from a flat mass ratio ($q$) distribution, as observed in the Galactic field and most star-forming regions \citep{Metchev09,Reggiani11a,Reggiani13}. Dynamical evolution is not expected to alter the shape of the mass ratio distribution \citep{Parker13b}. In all cases, orbital eccentricities are drawn from a flat distribution, as observed for the G-dwarf field binaries \citep{Raghavan10} and also M-dwarfs \citep{Abt06} -- the initial eccentricity distribution (if different from the field) from the star formation process remains unconstrained by observations \citep{Duchene13b}. \subsubsection{Common separation distribution} In a series of pioneering papers, \citet{Kroupa95a,Kroupa95b} studied the dynamical evolution of binary populations in star clusters and suggested that the primordial binary fraction should be higher than observed in the field, and that an excess of intermediate--wide binaries (with separations $>1000$\,au) was necessary to explain the apparent excess of wide binaries in the Taurus association compared to the field distributions for G- and M-dwarfs \citep{Duquennoy91,Fischer92}. \citet{Kroupa95a} suggested a primordial binary population with $f_{\rm bin} = 1$ and a period distribution of the following form: \begin{equation} f\left({\rm log_{10}}P\right) = \eta\frac{{\rm log_{10}}P - {\rm log_{10}}P_{\rm min}}{\delta + \left({\rm log_{10}}P - {\rm log_{10}}P_{\rm min}\right)^2}, \end{equation} where ${\rm log_{10}}P_{\rm min}$ is the logarithm of the minimum period in days and ${\rm log_{10}}P_{\rm min} = 0$. $\eta = 3.5$ and $\delta = 100$ are the numerical constants adopted by \citet{Kroupa95a} and \citet{Kroupa11} to fit the observed pre-main sequence distributions. However, drawing binary periods from a particular distribution and then converting to separation means that if $P$ is constant, $a$ scales with the binary system mass ($a \propto \left(m_1 + m_2\right)^{1/3}$). So for a similar orbital period, an M-dwarf binary will have a smaller semi-major axis than a G-dwarf binary -- and hence is `harder' and could be less likely to be destroyed.
To avoid this, we approximate the \citet{Kroupa95a} period distribution in terms of semi-major axes as: \begin{equation} f\left({\rm log_{10}}a\right) = \eta\frac{{\rm log_{10}}a - {\rm log_{10}}a_{\rm min}}{\delta + \left({\rm log_{10}}a - {\rm log_{10}}a_{\rm min}\right)^2}, \label{coma} \end{equation} where ${\rm log_{10}}a$ is the logarithm of the semi-major axis in au and ${\rm log_{10}}a_{\rm min} = -2$ ($a_{\rm min} = 0.01$\,au). The constants are now $\eta = 5.25$ and $\delta = 77$. This then avoids the small differences in separation distribution as a function of primary mass when using a common period distribution (although we ran a test simulation and found that the differences in the results are minimal -- see Section~\ref{results}). \subsubsection{Field separation distributions} \begin{table} \caption[bf]{Summary of simulation set-ups. The columns show the simulation suite number, number of stars, $N_{\rm stars}$, initial virial ratio of the regions, $\alpha_{\rm vir}$, initial binary fraction, $f_{\rm bin}$, and the initial semi-major axis distribution, $f(a)$.} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Sim.\,No. & $N_{\rm stars}$ & $\alpha_{\rm vir}$ & $f_{\rm bin}$ & $f(a)$ \\ \hline 1 & 1500 & 0.3 & 1.00 & Common (Eq.~\ref{coma}) \\ 2 & 1500 & 0.3 & field & field \\ 3 & 1500 & 1.5 & field & field \\ \hline \end{tabular} \end{center} \label{cluster_sims} \end{table} \begin{table*} \caption[bf]{A summary of the results for the simulations containing binaries drawn from a common population (identical separation distribution and binary fraction). From left to right, the columns are primary component mass-type, the input binary fraction, $f_{\rm bin}$ (init.), the actual binary fraction calculated before dynamical evolution, $f_{\rm bin}$ (0\,Myr), the binary fraction calculated after 10\,Myr of dynamical evolution, $f_{\rm bin}$ (10\,Myr), the median separation before dynamical evolution, $\tilde{a}$ (0\,Myr), and the median separation after 10\,Myr of dynamical evolution, $\tilde{a}$ (10\,Myr).} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Primary & $f_{\rm bin}$ (init.) & $f_{\rm bin}$ (0\,Myr) & $f_{\rm bin}$ (10\,Myr) & $\tilde{a}$ (0\,Myr) & $\tilde{a}$ (10\,Myr) \\ \hline A-star & 1.00 & 0.75 & 0.56 & 47\,au & 18\,au \\ \hline G-dwarf & 1.00 & 0.73 & 0.55 & 34\,au & 14\,au \\ \hline M-dwarf & 1.00 & 0.69 & 0.42 & 33\,au & 8.9\,au \\ \hline Brown dwarf & 1.00 & 0.56 & 0.25 & 30\,au & 5.5\,au \\ \hline \end{tabular} \end{center} \label{summary_cold_common} \end{table*} In the remaining simulations, we use the observed binary properties in the Galactic field as the initial conditions for our binary populations. A summary of the differing properties as a function of primary mass is given in Table~\ref{field_props}. Systems with a primary mass in the range $0.02 < m/$M$_\odot \leq 0.08$ are brown dwarf binaries, with a corresponding fraction $f_{\rm bin} = 0.15$ and a log-normal semi-major axis distribution with mean ${\rm log}\,\bar{a} = 0.66$ and variance $\sigma_{{\rm log}\,\bar{a}} = 0.40$ \citep{Burgasser07,Thies07}. Systems with primary masses in the range $0.08 < m/$M$_\odot \leq 0.45$ are M-dwarf binaries, with a fraction $f_{\rm bin} = 0.34$ and a log-normal semi-major axis distribution with mean ${\rm log}\,\bar{a} = 1.20$ and variance $\sigma_{{\rm log}\,\bar{a}} = 0.80$ \citep{Bergfors10,Janson12}.
Systems with primary masses in the range $0.8 < m/$M$_\odot \leq 1.2$ are G-dwarf binaries with a fraction $f_{\rm bin} = 0.46$ and a log-normal semi-major axis distribution with mean ${\rm log}\,\bar{a} = 1.70$ and variance $\sigma_{{\rm log}\,\bar{a}} = 1.68$ \citep{Raghavan10}. Systems with primary masses in the range $1.5 < m/$M$_\odot \leq 3.0$ are A-star binaries with a fraction $f_{\rm bin} = 0.48$ and a log-normal semi-major axis distribution with mean ${\rm log}\,\bar{a} = 2.59$ and variance $\sigma_{{\rm log}\,\bar{a}} = 0.79$ \citep{DeRosa14}. There is also evidence of a bimodal distribution for A-stars, with a second peak around 0.01\,au \citep{Duchene13b} -- however, these binaries are unlikely to be altered by dynamical evolution and we do not include them in our simulations. We also include more massive binaries with $f_{\rm bin} = 1.0$ and an \citet{Opik24} distribution of semi-major axes in the range $0 < a < 50$\,au for all binaries with primary masses greater than 3.0\,M$_\odot$, as suggested by the observations of \citet{Sana13}, although we do not consider these binaries further in our analysis. Any binaries lying outside the mass ranges discussed (e.g. K-type and F-type primaries) are assigned the same properties as the G-dwarfs. Again, these are not considered in the subsequent analysis. We place binaries or single stars at the position of each system in the fractal distribution and run the simulations for 10\,Myr using the \texttt{kira} integrator in the Starlab package \citep{Zwart99,Zwart01}. We do not include stellar evolution in the simulations. A summary of the simulation parameter space is given in Table~\ref{cluster_sims}. \section{Results} \label{results} In this Section we describe the effects of dynamical evolution on binaries drawn from a single common population in collapsing star-forming regions (Sect.~\ref{results:common}), binaries drawn from the field populations in collapsing star-forming regions (Sect.~\ref{results:fieldcollapse}) and binaries drawn from the field populations in expanding star-forming regions (Sect.~\ref{results:warmexpand}). We identify binaries using the nearest neighbour algorithm described in \citet{Parker09a} and \citet{Kouwenhoven10}. If two stars are mutual nearest neighbours, and their separation is less than a quarter of the mean separation between stars in the simulation, then we determine whether the two stars are energetically bound. If so, we classify them as a binary system; an illustrative sketch of this procedure is given below. We note that other methods to identify binaries (and multiple systems) are also utilised in the literature \citep[e.g.][]{Bate09}. \subsection{Common primordial binary population} \label{results:common} In the first set of simulations, we draw all the binary systems from a single, common population. The input binary fraction is unity, and the separation distribution is constructed to mimic the pre-main sequence period distribution in \citet{Kroupa95a}\footnote{We also used the original period distribution in one set of simulations and converted periods to separations using the component masses; the results are very similar.}. We summarise the results in Table~\ref{summary_cold_common}. \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[scale=0.4]{fmult_Or_C0p3F1p61pBmS_10_4_la.ps}} \end{center} \caption[bf]{Evolution of the binary fraction for binaries with properties drawn from a single primordial population in simulated dense star forming regions undergoing cool-collapse.
The first (top, green) line shows the evolution of the A-star binary fraction; the second (red) line shows the evolution of the G-dwarf binary fraction; the third (blue) line shows the evolution of the M-dwarf binary fraction; and the fourth (bottom, orange) line shows the evolution of the brown dwarf binary fraction. } \label{cold_common_bin_frac} \end{figure} \begin{figure*} \begin{center} \setlength{\subfigcapskip}{10pt} \subfigure[Brown dwarf binaries, 10 Myr]{\label{cold_common_BD_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_C0p3F1p61pBmS_10_BD-BD.ps}}} \hspace*{0.8cm} \subfigure[M-dwarf binaries, 10 Myr]{\label{cold_common_M_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_C0p3F1p61pBmS_10_M-dwarf.ps}}} \vspace*{0.25cm} \subfigure[G-dwarf binaries, 10 Myr]{\label{cold_common_G_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_C0p3F1p61pBmS_10_G-dwarf.ps}}} \hspace*{0.8cm} \subfigure[A-star binaries, 10 Myr]{\label{cold_common_A_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_C0p3F1p61pBmS_10_A-star.ps}}} \end{center} \caption[bf]{Evolution of the separation distributions for binaries with properties drawn from a single primordial population in simulated dense star forming regions undergoing cool-collapse. In all panels, the open histogram shows the distribution at 0\,Myr (i.e. before dynamical evolution) and the hashed histogram shows the distribution after 10\,Myr. In panel (a) we show the evolution of the distribution for brown dwarf (BD) binaries; the log-normal approximation to the data from \citet{Thies07} is shown by the (solid) orange line (normalised to a binary fraction of 0.15), and the log-normal approximation to the data assuming `missing' systems \citep{Basri06} is shown by the (dot-dashed) magenta line (normalised to a binary fraction of 0.26). In panel (b) we show the evolution of the distribution for M-dwarf binaries; the log-normal approximation to the data by \citet{Janson12} is shown by the (solid) blue line (normalised to a binary fraction of 0.34). In panel (c) we show the evolution of the distribution for G-dwarf binaries; the log-normal approximation to the data by \citet{Raghavan10} is shown by the (solid) red line. In panel (d) we show the evolution of the distribution for A-star binaries; the log-normal approximation to the visual binary data by \citet{DeRosa14} is shown by the (solid) green line (normalised to a binary fraction of 0.48), and the fit to the bimodal distribution discussed in \citet{Duchene13b} is shown by the (dot-dashed) purple line (normalised to a binary fraction of 0.70). 
} \label{cold_common_sep_results_dist} \end{figure*} \begin{figure*} \begin{center} \setlength{\subfigcapskip}{10pt} \subfigure[Brown dwarf binaries, 10 Myr]{\label{cold_common_BD_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_C0p3F1p61pBmS_10_BD-BD.ps}}} \hspace*{0.8cm} \subfigure[M-dwarf binaries, 10 Myr]{\label{cold_common_M_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_C0p3F1p61pBmS_10_M-dwarf.ps}}} \vspace*{0.25cm} \subfigure[G-dwarf binaries, 10 Myr]{\label{cold_common_G_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_C0p3F1p61pBmS_10_G-dwarf.ps}}} \hspace*{0.8cm} \subfigure[A-star binaries, 10 Myr]{\label{cold_common_A_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_C0p3F1p61pBmS_10_A-star.ps}}} \end{center} \caption[bf]{Evolution of the cumulative separation distributions for binaries with properties drawn from a single primordial population in simulated dense star forming regions undergoing cool-collapse. In all panels, the dotted line shows the distribution at 0\,Myr (i.e. before dynamical evolution) and the solid line shows the distribution after 10\,Myr. In all panels the dashed lines show the respective cumulative distributions of the log-normal fits to the data for each primary mass range observed in the Galactic field (detailed in Table~\ref{field_props}). In panel (a) the cumulative distribution proposed by \citet{Basri06} is shown by the dot-dashed magenta line. In panel (d) the bimodal distribution discussed in \citet{Duchene13b} is shown by the dot-dashed purple line.} \label{cold_common_sep_results_cum} \end{figure*} As noted in \citet{Parker11c}, many of the wide ($> 10^4$\,au) systems generated by this distribution cannot be physically bound in our locally dense \citep[$\rho_{\rm local} \sim 10^3$\,M$_\odot$\,pc$^{-3}$,][]{Parker14b} simulated regions. The binary fractions at 0\,Myr (i.e.\,\,before dynamical evolution) are therefore substantially lower than the `initial' binary fraction of unity (see Fig.~\ref{cold_common_bin_frac}). In Fig.~\ref{cold_common_bin_frac} we show the evolution of the binary fraction for our four chosen primary mass ranges. Systems with a brown dwarf primary are shown by the (bottom) orange line, those with an M-dwarf primary are shown by the (lower middle) blue line, those with a G-dwarf primary are shown by the (upper middle) red line and the (top) green line is for A-star primaries. We immediately see that fewer low-mass binaries are bound at 0\,Myr compared to high-mass systems (for example, the binary fraction of brown dwarf binaries at 0\,Myr is 0.56, compared to 0.75 for A-star binaries). The subsequent dynamical evolution (both in the substructure, and when the region collapses to form a spherical cluster) reduces the binary fractions further. After 10\,Myr the binary fraction of brown dwarf binaries is 0.25, compared to 0.42 for the M-dwarfs, 0.55 for the G-dwarfs, and 0.56 for the A-stars. At first sight, dynamical evolution of a common binary population appears to result in binary fractions roughly consistent (although a little high) with those observed in the Galactic field, and the trend for lower binary fraction with lower primary mass is recovered. However, examination of the separation distribution after 10\,Myr suggests that a common primordial binary fraction and separation distribution cannot be the dominant initial conditions for star formation in binaries.
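For clarity, the binary fractions quoted above (and throughout this Section) are measured with the mutual nearest neighbour criterion described at the start of Section~\ref{results}. The Python sketch below illustrates that classification; the array layout, the brute-force neighbour search and the reading of the `mean separation between stars' as the mean pairwise separation are our own illustrative assumptions, not details of the \citet{Parker09a} implementation.
\begin{verbatim}
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def identify_binaries(pos, vel, mass):
    """Return index pairs classified as bound binary systems.
    pos [N,3] in pc, vel [N,3] in km/s, mass [N] in Msun."""
    n = len(mass)
    dx = pos[:, None, :] - pos[None, :, :]
    sep = np.sqrt((dx ** 2).sum(axis=-1))   # pairwise separations
    np.fill_diagonal(sep, np.inf)
    nearest = sep.argmin(axis=1)            # nearest neighbour of each star
    # 'mean separation between stars' taken here as the mean pairwise
    # separation (our assumption about the exact definition)
    r_limit = 0.25 * sep[np.isfinite(sep)].mean()
    pairs = []
    for i in range(n):
        j = nearest[i]
        if j <= i or nearest[j] != i:       # require mutual nearest neighbours
            continue
        if sep[i, j] > r_limit:             # must be closer than the threshold
            continue
        m1, m2 = mass[i], mass[j]
        mu = m1 * m2 / (m1 + m2)            # reduced mass
        v2 = ((vel[i] - vel[j]) ** 2).sum()
        energy = 0.5 * mu * v2 - G * m1 * m2 / sep[i, j]
        if energy < 0.0:                    # energetically bound
            pairs.append((i, j))
    return pairs
\end{verbatim}
Pairs that pass the distance criterion but have positive two-body energy are discarded, which is why the binary fractions measured at 0\,Myr fall below the input values for the widest systems.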
In Fig.~\ref{cold_common_sep_results_dist} we show histograms of the separation distributions at 0\,Myr (before dynamical evolution; the open histogram in all panels) and at 10\,Myr (the hashed histograms). In Fig.~\ref{cold_common_BD_sepdist} we show the log-normal fit by \citet{Thies07} to the observed separation distribution of very low mass binaries presented in \citet{Burgasser07} by the solid orange line, which has a mean separation of $\bar{a} = 4.6$\,au. We also show the log-normal fit by \citet{Basri06}, which accounts for potentially closer \citep{Maxted05} and wider \citep{Bouy06,Dhital11} brown dwarf binaries that remain undiscovered (the dot-dashed magenta line). The histograms of the binaries in the simulations are normalised to the respective binary fractions at 0\,Myr (0.56) and 10\,Myr (0.25), whereas the fits to the data are normalised to the observed binary fractions \citep[0.15,][]{Thies07}, or 0.26 in the case of the fit by \citet{Basri06}. In Fig.~\ref{cold_common_M_sepdist} we show the log-normal fit to the observed separation distribution of M-dwarf binaries by \citet{Janson12}, which has a mean separation of $\bar{a} = 16$\,au, by the solid blue line. This is normalised to the binary fraction of M-dwarfs in the field \citep[0.34;][]{Bergfors10}. The open histogram shows the distribution of binary separations at 0\,Myr in the simulations, normalised to the initial binary fraction (0.69), and the hashed histogram shows the distribution at 10\,Myr, normalised to the binary fraction (0.42). The log-normal fit to the G-dwarf binaries by \citet{Raghavan10}, normalised to the observed binary fraction of 0.46, is shown by the solid red line in Fig.~\ref{cold_common_G_sepdist}. The initial separation distribution, normalised to the binary fraction at 0\,Myr (0.73), is shown by the open histogram, and the separation distribution at 10\,Myr is shown by the hashed histogram, normalised to the binary fraction (0.55). In Fig.~\ref{cold_common_A_sepdist} we show the log-normal fit to the observed visual A-star binaries \citep{deRosa12,DeRosa14}, which has a mean separation of $\bar{a} = 389$\,au, by the solid green line, normalised to the A-star visual binary fraction of 0.48 \citep{DeRosa14}. \citet{Duchene13b} discuss a bi-modal separation distribution for A-star binaries, which takes into account short-separation spectroscopic binaries in associations, which may make up a significant fraction of the field but have separations that the survey of \citet{DeRosa14} is not sensitive to. We show the bi-modal distribution in \citet{Duchene13b} by the purple dot-dashed line in Fig.~\ref{cold_common_A_sepdist}, which is normalised to a binary fraction of 0.70. The binaries at 0\,Myr in our simulations are shown by the open histogram (normalised to the initial binary fraction of 0.75) and the separation distribution of binaries remaining after 10\,Myr is shown by the hashed histogram (normalised to the binary fraction of 0.56). In these histograms normalised to binary fractions, too many close ($<10$\,au) binaries are produced (and remain after dynamical evolution) for the brown dwarf and M-dwarf primary mass ranges (Figs.~\ref{cold_common_BD_sepdist}~and~\ref{cold_common_M_sepdist}). After dynamical evolution, too many G-dwarf binaries in the separation range 1 -- 200\,au are present (Fig.~\ref{cold_common_G_sepdist}), and the previously reported deficit of wide binaries in dense clusters is also apparent \citep{Parker11c}.
When we compare the evolution of the separation distribution of A-star binaries in our simulations, the common primordial separation distribution over-produces binaries in the range 1 -- 100\,au, and under-produces binaries in the range 100 -- 10$^5$\,au (see Fig.~\ref{cold_common_A_sepdist}). We also note that a common primordial separation distribution is inconsistent with the bi-modal separation distribution for A-stars presented in \citet{Duchene13b}. The histograms in Fig.~\ref{cold_common_sep_results_dist} show the evolution of both the binary fraction and the separation distribution. For completeness, we now examine only the evolution of the separation distributions for each primary mass range using cumulative distributions. The results are shown in Fig.~\ref{cold_common_sep_results_cum}, where the fits to the data as observed in the Galactic field are shown by the dashed lines in each panel. The alternative fit to the brown dwarf binary distribution by \citet{Basri06} is shown by the dot-dashed (magenta) line in Fig.~\ref{cold_common_BD_sepcum}, and the bimodal fit to the A-star data by \citet{Duchene13b} is shown by the dot-dashed (purple) line in Fig.~\ref{cold_common_A_sepcum}. In all panels, the separation distribution as measured at 0\,Myr in the simulations is shown by the dotted lines, and the distribution after 10\,Myr of dynamical evolution is shown by the solid lines. As indicated in the histograms in Fig.~\ref{cold_common_sep_results_dist}, far too many close binaries remain after dynamical evolution, which dominate the cumulative separation distributions. In the case of the G-dwarfs and A-stars, the distributions at 0\,Myr already sit to the left (closer separations) of the observed field distributions. The subsequent dynamical evolution then shifts these distributions to even closer separations on average. The only binaries which have a roughly similar distribution to the field are those with brown dwarf primaries, where the mean separation after 10\,Myr of evolution is similar to the mean in the observed separation distribution. However, the overall shape of the distribution in the simulations is wider than the more commonly adopted fit to the \citet{Burgasser07} data by \citet{Thies07} -- see also \citet{Duchene13b}; instead the separation distribution in the simulations is more similar to the (postulated) extended distribution from \citet{Basri06}. In summary, the evolution of a common primordial binary population can account for a decreasing binary fraction as a function of decreasing primary mass, but not for the observed differences in both the mean and shape of the separation distribution between systems with different primary masses in the field. \subsection{Field-like binary population} Here we will focus on two sets of simulations with the same binary population, where we take the binary fractions and separation distributions observed in the Galactic field as initial conditions. We then evolve the star forming regions in two distinct ways: in one set of simulations the regions are subvirial (i.e.\,\,collapsing) and in the other they are supervirial (expanding) so that we can determine the fraction and properties of systems that form through capture during the dissolution of the regions \citep{Kouwenhoven10,Moeckel10}. \subsubsection{Regions undergoing cool-collapse} \label{results:fieldcollapse} We first examine the evolution of an initially field-like binary population in a cool, collapsing star forming region.
The results are summarised in Table~\ref{summary_cold_field}. \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[scale=0.4]{fmult_Or_C0p3F1p61pRmR_10_4_la.ps}} \end{center} \caption[bf]{Evolution of the binary fraction for binaries with properties drawn from the field population distributions in simulated dense star forming regions undergoing cool-collapse. The first (top, green) line shows the evolution of the A-star binary fraction; the second (red) line shows the evolution of the G-dwarf binary fraction; the third (blue) line shows the evolution of the M-dwarf binary fraction; and the fourth (bottom, orange) line shows the evolution of the brown dwarf binary fraction. } \label{cold_field_bin_frac} \end{figure} \begin{figure*} \begin{center} \setlength{\subfigcapskip}{10pt} \subfigure[Brown dwarf binaries, 10 Myr]{\label{cold_field_BD_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_C0p3F1p61pRmR_10_BD-BD.ps}}} \hspace*{0.8cm} \subfigure[M-dwarf binaries, 10 Myr]{\label{cold_field_M_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_C0p3F1p61pRmR_10_M-dwarf.ps}}} \vspace*{0.25cm} \subfigure[G-dwarf binaries, 10 Myr]{\label{cold_field_G_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_C0p3F1p61pRmR_10_G-dwarf.ps}}} \hspace*{0.8cm} \subfigure[A-star binaries, 10 Myr]{\label{cold_field_A_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_C0p3F1p61pRmR_10_A-star.ps}}} \end{center} \caption[bf]{Evolution of the separation distributions for binaries with properties drawn from the field population distributions in simulated dense star forming regions undergoing cool-collapse. In all panels, the open histogram shows the distribution at 0\,Myr (i.e. before dynamical evolution) and the hashed histogram shows the distribution after 10\,Myr. In panel (a) we show the evolution of the distribution for brown dwarf (BD) binaries; the log-normal approximation to the data from \citet{Thies07} is shown by the (solid) orange line (normalised to a binary fraction of 0.15), and the log-normal approximation to the data assuming `missing' systems \citep{Basri06} is shown by the (dot-dashed) magenta line (normalised to a binary fraction of 0.26). In panel (b) we show the evolution of the distribution for M-dwarf binaries; the log-normal approximation to the data by \citet{Janson12} is shown by the (solid) blue line (normalised to a binary fraction of 0.34). In panel (c) we show the evolution of the distribution for G-dwarf binaries; the log-normal approximation to the data by \citet{Raghavan10} is shown by the (solid) red line. In panel (d) we show the evolution of the distribution for A-star binaries; the log-normal approximation to the visual binary data by \citet{DeRosa14} is shown by the (solid) green line (normalised to a binary fraction of 0.48), and the fit to the bimodal distribution discussed in \citet{Duchene13b} is shown by the (dot-dashed) purple line.} \label{cold_field_sep_results_dist} \end{figure*} \begin{table*} \caption[bf]{A summary of the results for the simulations of regions undergoing cool-collapse which contain binaries drawn from the field distributions (see Table~\ref{field_props} for details). 
From left to right, the columns are primary component mass-type, the input binary fraction, $f_{\rm bin}$ (init.), the actual binary fraction calculated before dynamical evolution, $f_{\rm bin}$ (0\,Myr), the binary fraction calculated after 10\,Myr of dynamical evolution, $f_{\rm bin}$ (10\,Myr), the median separation before dynamical evolution, $\tilde{a}$ (0\,Myr), and the median separation after 10\,Myr of dynamical evolution, $\tilde{a}$ (10\,Myr).} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Primary & $f_{\rm bin}$ (init.) & $f_{\rm bin}$ (0\,Myr) & $f_{\rm bin}$ (10\,Myr) & $\tilde{a}$ (0\,Myr) & $\tilde{a}$ (10\,Myr) \\ \hline A-star & 0.48 & 0.38 & 0.34 & 222\,au & 46\,au \\ \hline G-dwarf & 0.46 & 0.36 & 0.32 & 25\,au & 18\,au \\ \hline M-dwarf & 0.34 & 0.33 & 0.24 & 16\,au & 12\,au \\ \hline Brown dwarf & 0.15 & 0.16 & 0.12 & 4.8\,au & 4.4\,au \\ \hline \end{tabular} \end{center} \label{summary_cold_field} \end{table*} \begin{figure*} \begin{center} \setlength{\subfigcapskip}{10pt} \subfigure[Brown dwarf binaries, 10 Myr]{\label{cold_field_BD_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_C0p3F1p61pRmR_10_BD-BD.ps}}} \hspace*{0.8cm} \subfigure[M-dwarf binaries, 10 Myr]{\label{cold_field_M_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_C0p3F1p61pRmR_10_M-dwarf.ps}}} \vspace*{0.25cm} \subfigure[G-dwarf binaries, 10 Myr]{\label{cold_field_G_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_C0p3F1p61pRmR_10_G-dwarf.ps}}} \hspace*{0.8cm} \subfigure[A-star binaries, 10 Myr]{\label{cold_field_A_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_C0p3F1p61pRmR_10_A-star.ps}}} \end{center} \caption[bf]{Evolution of the cumulative separation distributions for binaries with properties drawn from the field population distributions in simulated dense star forming regions undergoing cool-collapse. In all panels, the dotted line shows the distribution at 0\,Myr (i.e. before dynamical evolution) and the solid line shows the distribution after 10\,Myr. In all panels the dashed lines show the respective cumulative distributions of the log-normal fits to the data for each primary mass range observed in the Galactic field (detailed in Table~\ref{field_props}). In panel (a) the cumulative distribution proposed by \citet{Basri06} is shown by the dot-dashed magenta line. In panel (d) the bimodal distribution discussed in \citet{Duchene13b} is shown by the dot-dashed purple line.} \label{cold_field_sep_results_cum} \end{figure*} Fig.~\ref{cold_field_bin_frac} shows the evolution of the binary fraction over 10\,Myr for our four chosen primary mass ranges. As in the case of the common primordial binary population, the calculated binary fractions of the A-stars (the top green line) and G-dwarfs (upper middle red line) before dynamical evolution (0\,Myr) are lower than the input values, because the widest binaries are not physically bound in the high density star-forming regions. The M-dwarf and brown dwarf binaries are generally so close that they are all bound before dynamical evolution.
In fact, in this suite of simulations, the calculated brown dwarf binary fraction is slightly higher than the input fraction, likely due to wide pairs of brown dwarfs being classified as binaries because of the correlated velocities in the fractals [this is evident in the bin between 500 -- 1000\,au in the separation distribution (Fig.~\ref{cold_field_BD_sepdist}), which lies outside the range of the log-normal distribution used to generate the input separations]. During the subsequent dynamical evolution, the binary fractions are reduced from their initial values. The greatest change occurs for M-dwarf primaries, where the binary fraction is lowered from 0.33 (0\,Myr) to 0.24 (10\,Myr). This is slightly less than the change reported in \citet{Parker11c} for similar star forming region initial conditions. However, in that paper the input binary distribution contained a far higher proportion of wide M-dwarf systems because their separations were drawn from the \citet{Fischer92} fit to the M-dwarf separation distribution. The reductions in the binary fractions of A-star, G-dwarf and brown dwarf primaries are minimal (0.07, 0.06 and 0.04, respectively). The separation distributions, normalised to the binary fraction, are shown in Fig.~\ref{cold_field_sep_results_dist}. In all cases, the open histograms show the separation distributions before dynamical evolution (0\,Myr) and the hashed histograms show the distributions after 10\,Myr. The log-normal fits to the observed data are shown by the solid lines in each panel, and the alternative postulated fits to the brown dwarf binaries and the A-star binaries are shown by the dot-dashed lines in Figs.~\ref{cold_field_BD_sepdist}~and~\ref{cold_field_A_sepdist}, respectively. As detailed in Section~\ref{results:common}, these histograms combine the evolution of the binary fraction \emph{and} separation distribution. The brown dwarf binary fraction decreases, but the shape of the distribution is still consistent with that observed in the field (Fig.~\ref{cold_field_BD_sepdist}). The M-dwarf, G-dwarf and A-star distributions show that too few wide ($>$100\,au) binaries remain after dynamical evolution, although in the case of the G-dwarfs and A-stars, a substantial fraction of systems from the input distribution are not physically bound before dynamical evolution takes place. \begin{table*} \caption[bf]{A summary of the results for the simulations of regions undergoing warm expansion which contain binaries drawn from the field distributions (see Table~\ref{field_props} for details). From left to right, the columns are primary component mass-type, the input binary fraction, $f_{\rm bin}$ (init.), the actual binary fraction calculated before dynamical evolution, $f_{\rm bin}$ (0\,Myr), the binary fraction calculated after 10\,Myr of dynamical evolution, $f_{\rm bin}$ (10\,Myr), the median separation before dynamical evolution, $\tilde{a}$ (0\,Myr), and the median separation after 10\,Myr of dynamical evolution, $\tilde{a}$ (10\,Myr).} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Primary & $f_{\rm bin}$ (init.)
& $f_{\rm bin}$ (0\,Myr) & $f_{\rm bin}$ (10\,Myr) & $\tilde{a}$ (0\,Myr) & $\tilde{a}$ (10\,Myr) \\ \hline A-star & 0.48 & 0.36 & 0.48 & 234\,au & 239\,au \\ \hline G-dwarf & 0.46 & 0.38 & 0.40 & 23\,au & 39\,au \\ \hline M-dwarf & 0.34 & 0.33 & 0.27 & 16\,au & 13\,au \\ \hline Brown dwarf & 0.15 & 0.15 & 0.12 & 4.8\,au & 4.6\,au \\ \hline \end{tabular} \end{center} \label{summary_warm_field} \end{table*} When we examine the cumulative separation distributions (Fig.~\ref{cold_field_sep_results_cum}), we see that the shape of the brown dwarf separation distribution does not deviate from the input (field) distribution (Fig.~\ref{cold_field_BD_sepcum}), and the alteration of the M-dwarf separation distribution (Fig.~\ref{cold_field_M_sepcum}) is not as drastic as suggested when the data are binned and normalised to the binary fraction, as in the histogram in Fig.~\ref{cold_field_M_sepdist}. The G-dwarf separation distribution contains slightly too few wide binaries, even before dynamical evolution, which is apparent when comparing the simulation at 0\,Myr (dotted line) to the observed distribution in the field (the dashed line) in Fig.~\ref{cold_field_G_sepcum}. The subsequent dynamical evolution (shown by the solid line) shows two effects: the destruction of intermediate-wide binaries (with separations in the range 10 -- 1000\,au), but also the formation of very wide binaries ($>$1000\,au) as the cluster expands after collapse. A similar effect is also found for the A-stars (the solid line in Fig.~\ref{cold_field_A_sepcum}), and as with the G-dwarfs, the observed field distribution (the dashed line) lies to the right (wider separations) of both the 0\,Myr and 10\,Myr distributions in the simulations. \subsubsection{Regions undergoing warm expansion} \label{results:warmexpand} Now we examine the evolution of a field-like binary population in a warm, expanding star-forming region. The dynamical evolution of these regions (without primordial binaries) was studied in detail by \citet{Parker14b}, and we refer the interested reader to that work for further details. The results are summarised in Table~\ref{summary_warm_field}. In Fig.~\ref{warm_field_bin_frac} we show the evolution of the binary fraction over 10\,Myr for these expanding regions. The most striking feature of this plot is that, whilst the initially dense substructure processes some of the binaries, during the expansion of these regions the more massive stars form binary systems through capture \citep{Kouwenhoven10,Moeckel10}. This is seen in the evolution of the binary fractions; the A-star fraction rises from 0.36 to 0.48 and the G-dwarf fraction also increases slightly. The histograms of the evolution of the separation distributions in Fig.~\ref{warm_field_sep_results_dist} clearly show the destruction of some binaries (in all panels), but also the formation of wider systems in panels (c) and (d). The formation of these wider binaries is highly mass dependent; virtually no brown dwarf and M-dwarf binaries form during the regions' expansion, whereas significant numbers of G-dwarf and A-star binaries form. We interpret this as being due to the more massive G- and A-stars having a higher collisional cross section, and therefore being more likely to form binaries via capture as the star forming region dissolves into the field.
One potential caveat here is that the input separation distribution in the simulations contains many wide G-dwarf and A-star binaries, so these binaries which `form' during the dissolution may just be quasi-primordial binaries that become bound when the regions attain lower density. We exclude this possibility for two reasons. Firstly, we can tag primordial systems in our simulations, and the systems which form binaries do not always do so with their birth partner. In these simulations, after 10\,Myr and over all separations, 100\,per cent of brown dwarf primaries are with their birth partner, and the fraction decreases to 93\,per cent for M-dwarfs, 72\,per cent for G-dwarfs and 58\,per cent for A-stars (i.e.\,\,42\,per cent of A-star binaries are not birth binaries). If we restrict our separation range of interest to $>$1000\,au, then no brown dwarf binaries were born in this separation range, only 21\,per cent of M-dwarf primaries are with their birth partner, compared to 28\,per cent for G-dwarfs and 27\,per cent for A-stars (the reason for this trend is that fewer M-dwarfs can form 1000\,au binaries than can G-dwarfs or A-stars due to the respective shapes of the separation distributions). Secondly, we ran a further suite of simulations with no primordial binaries \citep[similar to those in][]{Kouwenhoven10} and also found that the number of wide binaries which form over 10\,Myr increases as a function of increasing primary mass. \begin{figure} \begin{center} \rotatebox{270}{\includegraphics[scale=0.4]{fmult_Or_H1p5F1p61pRmR_10_4_la.ps}} \end{center} \caption[bf]{Evolution of the binary fraction for binaries with properties drawn from the field population distributions in simulated dense star forming regions undergoing warm expansion. The first (top, green) line shows the evolution of the A-star binary fraction; the second (red) line shows the evolution of the G-dwarf binary fraction; the third (blue) line shows the evolution of the M-dwarf binary fraction; and the fourth (bottom, orange) line shows the evolution of the brown dwarf binary fraction.} \label{warm_field_bin_frac} \end{figure} \begin{figure*} \begin{center} \setlength{\subfigcapskip}{10pt} \subfigure[Brown dwarf binaries, 10 Myr]{\label{warm_field_BD_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_H1p5F1p61pRmR_10_BD-BD.ps}}} \hspace*{0.8cm} \subfigure[M-dwarf binaries, 10 Myr]{\label{warm_field_M_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_H1p5F1p61pRmR_10_M-dwarf.ps}}} \vspace*{0.25cm} \subfigure[G-dwarf binaries, 10 Myr]{\label{warm_field_G_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_H1p5F1p61pRmR_10_G-dwarf.ps}}} \hspace*{0.8cm} \subfigure[A-star binaries, 10 Myr]{\label{warm_field_A_sepdist}\rotatebox{270}{\includegraphics[scale=0.35]{Sepdist_Or_H1p5F1p61pRmR_10_A-star.ps}}} \end{center} \caption[bf]{Evolution of the separation distributions for binaries with properties drawn from the field population distributions in simulated dense star forming regions undergoing warm expansion. In all panels, the open histogram shows the distribution at 0\,Myr (i.e. before dynamical evolution) and the hashed histogram shows the distribution after 10\,Myr.
In panel (a) we show the evolution of the distribution for brown dwarf (BD) binaries; the log-normal approximation to the data from \citet{Thies07} is shown by the (solid) orange line (normalised to a binary fraction of 0.15), and the log-normal approximation to the data assuming `missing' systems \citep{Basri06} is shown by the (dot-dashed) magenta line (normalised to a binary fraction of 0.26). In panel (b) we show the evolution of the distribution for M-dwarf binaries; the log-normal approximation to the data by \citet{Janson12} is shown by the (solid) blue line (normalised to a binary fraction of 0.34). In panel (c) we show the evolution of the distribution for G-dwarf binaries; the log-normal approximation to the data by \citet{Raghavan10} is shown by the (solid) red line. In panel (d) we show the evolution of the distribution for A-star binaries; the log-normal approximation to the visual binary data by \citet{DeRosa14} is shown by the (solid) green line (normalised to a binary fraction of 0.48), and the fit to the bimodal distribution discussed in \citet{Duchene13b} is shown by the (dot-dashed) purple line.} \label{warm_field_sep_results_dist} \end{figure*} \begin{figure*} \begin{center} \setlength{\subfigcapskip}{10pt} \subfigure[Brown dwarf binaries, 10 Myr]{\label{warm_field_BD_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_H1p5F1p61pRmR_10_BD-BD.ps}}} \hspace*{0.8cm} \subfigure[M-dwarf binaries, 10 Myr]{\label{warm_field_M_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_H1p5F1p61pRmR_10_M-dwarf.ps}}} \vspace*{0.25cm} \subfigure[G-dwarf binaries, 10 Myr]{\label{warm_field_G_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_H1p5F1p61pRmR_10_G-dwarf.ps}}} \hspace*{0.8cm} \subfigure[A-star binaries, 10 Myr]{\label{warm_field_A_sepcum}\rotatebox{270}{\includegraphics[scale=0.35]{Sep_cum_Or_H1p5F1p61pRmR_10_A-star.ps}}} \end{center} \caption[bf]{Evolution of the cumulative separation distributions for binaries with properties drawn from the field population distributions in simulated dense star forming regions undergoing warm expansion. In all panels, the dotted line shows the distribution at 0\,Myr (i.e. before dynamical evolution) and the solid line shows the distribution after 10\,Myr. In all panels the dashed lines show the respective cumulative distributions of the log-normal fits to the data for each primary mass range observed in the Galactic field (detailed in Table~\ref{field_props}). In panel (a) the cumulative distribution proposed by \citet{Basri06} is shown by the dot-dashed magenta line. In panel (d) the bimodal distribution discussed in \citet{Duchene13b} is shown by the dot-dashed purple line.} \label{warm_field_sep_results_cum} \end{figure*} This is confirmed when we examine the cumulative separation distributions (Fig.~\ref{warm_field_sep_results_cum}), which show almost no change to the shape of the separation distributions for brown dwarf and M-dwarf binaries, but show that a substantial number of wide G-dwarf and A-star binaries form during the regions' dissolution. Note that we have not considered the formation of hierarchical systems (triples, quadruples, etc.) in our analysis; a high fraction of wide binaries in the field may actually be such systems and would thus be counted as `binary systems'. Any such systems in our simulations would merely reinforce the results described above.
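The per-primary-mass quantities reported in Tables~\ref{summary_cold_common}, \ref{summary_cold_field} and \ref{summary_warm_field} amount to straightforward book-keeping over the systems identified as bound binaries. The Python sketch below is illustrative only; the mass bins follow Table~\ref{field_props}, and the array names are our own rather than those of the analysis code.
\begin{verbatim}
import numpy as np

MASS_BINS = {                  # primary-mass ranges in Msun (field_props table)
    'A-star':      (1.5, 3.0),
    'G-dwarf':     (0.8, 1.2),
    'M-dwarf':     (0.08, 0.45),
    'Brown dwarf': (0.02, 0.08),
}

def summarise(primary_mass, is_binary, semi_major_axis):
    """primary_mass [N]: primary (or single-star) mass of each system in Msun,
    is_binary [N]: True where the system is a bound binary,
    semi_major_axis [N]: semi-major axis in au (ignored for single stars)."""
    primary_mass = np.asarray(primary_mass)
    is_binary = np.asarray(is_binary, dtype=bool)
    semi_major_axis = np.asarray(semi_major_axis)
    for name, (lo, hi) in MASS_BINS.items():
        in_bin = (primary_mass > lo) & (primary_mass <= hi)
        n_sys = in_bin.sum()                 # singles + binaries in the bin
        n_bin = (in_bin & is_binary).sum()
        f_bin = n_bin / n_sys if n_sys else float('nan')
        med_a = (np.median(semi_major_axis[in_bin & is_binary])
                 if n_bin else float('nan'))
        print(f'{name:12s} f_bin = {f_bin:.2f}  median a = {med_a:.1f} au')
\end{verbatim}
Applied to the 0\,Myr and 10\,Myr snapshots of each simulation suite, this is the kind of book-keeping that underlies the entries in the tables above.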
\section{Discussion} \label{discuss} The $N$-body simulations presented in Section~\ref{results} show that the field binary population cannot be explained by the dynamical evolution of one single, common primordial population (i.e.\,\,binary fraction and separation distribution). Whilst dynamical evolution can explain the decreasing binary fraction with primary mass, it cannot account for the decrease in mean separation with decreasing primary mass. The main problem is that in order to sculpt the initial separation distribution to match the observed deficit of intermediate/wide ($>$100\,au) low-mass (brown dwarf and M-dwarf) binary systems, the SF regions need to be so dense that the resultant dynamical evolution leaves too few G-dwarf and A-star binaries with intermediate/wide separations. If some SF regions are initially supervirial, correlated velocities on local scales \citep{Larson82} can result in the formation of wide G-dwarf and A-star binaries \citep{Kouwenhoven10,Moeckel10}, but such regions then do not destroy enough hard/intermediate M-dwarf and brown dwarf binaries. The above findings appear to contradict the earlier work by \citet{Kroupa95a,Kroupa95b,Kroupa08,Marks12}, who find that a common primordial binary population can explain the field binary population. This earlier work compared the results of $N$-body simulations to the G-dwarf and M-dwarf binary distributions observed in the field by \citet{Duquennoy91} and \citet{Fischer92}. Whilst the updated G-dwarf separation distribution and fraction presented by \citet{Raghavan10} do not differ greatly from \citet{Duquennoy91}, the improved M-dwarf surveys by \citet{Bergfors10} and \citet{Janson12} suggest that both the binary fraction and separation distribution lie in between the fractions/separation distributions of brown dwarf and G-dwarf binaries. Interestingly, the data presented by \citet{Fischer92} did suggest this, but only the recent surveys by \citet{Bergfors10} and \citet{Janson12} were able to demonstrate a clear difference from the G-dwarf distribution. When compared to these updated observations, the hypothesised common primordial binary population from \citet{Kroupa95a,Kroupa11} is incompatible with the observations (regardless of primary mass range). If the primordial binary population used as an `input' in the simulations is field-like (in terms of binary fraction and separation distribution), the brown dwarf and M-dwarf binaries are usually so tight that dynamical evolution does not alter the shape of the separation distribution, and the only effect of dynamics is to slightly lower the initial binary fraction. On the other hand, the G-dwarf and A-star separation distributions are significantly altered, owing to the prevalence of wide ($>10^3$\,au) systems in these distributions. The problem then becomes one of forming a population of wide binary systems with G-dwarf and A-star primaries. This is readily achieved if part of the observed field population originates from the dissolution of supervirial star forming regions \citep{Kouwenhoven10,Moeckel10}, or the replenishment of soft binaries in star clusters \citep{Moeckel11a}. In the former scenario, the formation of binaries with higher mass primaries (e.g.\,\,G-dwarf and A-star) is preferred over lower mass (M-dwarf and brown dwarf) primaries, so the field binary population naturally arises from a mix of sub-virial and supervirial SF regions (or put simply, clusters and associations).
Whilst this is still ``fine tuning'' to some extent, the advantage of the field population over the common population scenario as an input is that the brown dwarf and M-dwarf binaries are not overproduced. Furthermore, the argument for binary disruption rests on the assumption that the field population originates in dense star forming environments. \citet{Bressert10} suggested that only 25\,per cent of \emph{current} star forming regions are dense enough to cause destruction of binaries, although the reliance on the local surface density in this work was later questioned by a number of authors \citep{Moeckel12,Gieles12,Pfalzner12}. \citet{Parker12d} suggested that the fraction of `dynamically active' stars quoted in \citet{Bressert10} could be revised upward to $\sim$50\,per cent. Binaries in the field are of a similar age to the Sun (or older) and it is possible that star formation occurred in denser environments at earlier epochs \citep[][and references therein]{Longmore14}. However, it seems unlikely that \emph{all} field stars originate in very dense environments, and a more quiescent formation environment would imply less dynamical sculpting of the field binary population. With these arguments in mind, we suggest that the binary populations observed in the field are most likely \emph{indicative} of the primordial populations from star formation -- and dynamical evolution has not played a significant role in altering the separation distributions or the binary fraction. \citet{Parker13b} show that the companion mass ratio distribution is not significantly altered by dynamical interactions and we suggest that this can be extended (with caution -- see below) to the separation distribution and binary fraction. Hydrodynamic simulations that predict the orbital properties and fraction of binaries \citep[e.g.][]{Donate04b,Goodwin04b,Offner10} can then be compared to the observations of the field, especially if they make predictions for the binary properties as a function of primary mass \citep{Bate12}. Indeed, earlier work by \citet{Moeckel10} took the end output of hydrodynamical simulations by \citet{Bate09} and evolved it for a further 10\,Myr with an $N$-body code. They also noted that very little dynamical processing occurred during the subsequent $N$-body evolution, and concluded that the binary properties were mainly set during the star formation phase in the hydrodynamical calculation. Whilst the simulations presented here are purely $N$-body, and the initial conditions differ from those in the $N$-body simulations of \citet{Moeckel10}, our conclusions are similar. We note that our simulations do not contain any gas; whilst this is unlikely to affect the dynamical destruction of intermediate/wide binaries in a SF region \citep[or the subsequent dynamical evolution of the region, e.g.\,][]{Offner09,Kruijssen12a}, it is possible that dynamical friction in dense gas could lead to the orbital decay of binary stars \citep[and subsequent merging of the component stars,][]{Stahler10}. This in turn would mean that the close binaries in the field, whilst unaffected by dynamical interactions in SF regions, may not be primordial in the sense that close separation ($<$1\,au) binaries are preferentially destroyed (and the overall binary fraction is lowered) by the effects of gas friction, if all close binaries originate from SF regions with gas densities $> 10^5$\,cm$^{-3}$ \citep{Korntreff12}.
However, given that a single common binary population (identical binary fraction and mean separation) across all primary masses is ruled out by dynamical interactions due to the observational constraint of decreasing mean separation and binary fraction with primary mass, we appeal to Occam's Razor and suggest that as the field is a better input population to fit the observations after dynamical evolution, it is likely to be closer to the primordial population than most other distributions. Even if dynamical evolution has only had a modest effect on the binary fraction and orbital parameters of most systems in the field, we emphasise that much work still remains to be done in understanding the formation and evolution of binary stars \citep[e.g.][]{Reipurth14}. Most observations of young star forming regions can only probe binaries with separations in the intermediate regime \citep[10s -- 1000s\,au -- ][]{King12a,King12b,Duchene13b}. It is these systems that are most likely to change through dynamical interactions \citep{Heggie75,Hills75a,Hills75b} -- and often in an unpredictable way \citep{Parker12b}. It is quite possible that dynamical interactions may play a significant role in altering the binary properties in some regions -- especially low-mass regions where the effects of dynamics can be highly stochastic \citep{Becker13}. We also have very little information on `hard' binaries in clusters -- which are likely to influence the global dynamical evolution of star clusters and star-forming regions \citep[e.g.][]{Allison11,Geller13b,Parker14b}. Furthermore, observations of Class 0 protostars by \citet{Chen13} suggest that the binary fraction of these objects is higher than that found in the later-stage Class~I Young Stellar Objects (YSOs) by \citet{Connelley08}, which in turn is higher than the fraction found for G-dwarf Main Sequence binaries by \citet{Raghavan10}. \citet{Connelley08} also show that the shape of the separation distribution of YSO binaries is not log-normal, but rather flat in log-space [an \citet{Opik24} distribution]. We first note that the difficulty in determining accurate masses for Class~0/Class~I objects means that the binary properties of the objects in the samples of \citet{Connelley08} and \citet{Chen13} may not always be comparable with the \citet{Raghavan10} sample, as their eventual MS primary masses may be different. However, regardless of this caveat, we have shown in this paper that external perturbations on binaries from dynamical interactions in SF regions are unlikely to account for a drastic change in the binary fraction and orbital separation distribution of binary systems. Therefore, if the binary fraction and separation distribution of protostellar and YSO binaries are different from those of MS binaries, it is possible that the differences may be due to internal dynamical evolution of these young systems \citep{Reipurth14}, rather than external dynamical evolution in the SF environment. This is particularly relevant as a high fraction of stars form in triple, quadruple and higher order multiple systems \citep{Tokovinin08,Tokovinin14}, which may become (or even form) dynamically unstable \citep[][and references therein]{Reipurth12,Reipurth14}. \section{Conclusions} \label{conclude} We have presented $N$-body simulations of the evolution of dense star-forming regions to determine the impact of dynamical interactions on different primordial binary populations.
Our results are summarised as follows:\\ (i) The field binary population is not the end-product of dynamical processing of a common primordial binary population \citep{Kroupa95b} with an initial binary fraction of unity and an excess of systems with intermediate/wide separations ($100 - 10^4$\,au). Whilst dynamical evolution does cause the binary fraction to decrease as a function of primary mass with these initial conditions, the binary fractions are typically too high and the difference in mean separation as a function of decreasing primary mass as observed for binaries in the Galactic field \citep{Duchene13b} is not reproduced. (ii) If the primordial binary population is similar to that in the field, very few brown dwarf and M-dwarf binaries are destroyed, due to their average semi-major axis being well within the `hard' binary regime at 4.6\,au \citep[the brown dwarfs;][]{Burgasser07} and 16\,au \citep[the M-dwarfs;][]{Bergfors10,Janson12}. This suggests that the majority of binaries have been unaffected by dynamical interactions, as M-dwarfs make up the majority of stars in the Universe \citep{Bastian10,Bochanski10}. (iii) G-dwarf and A-star binaries in the field are observed to have wider separations, with peaks at 50\,au \citep{Raghavan10} and 389\,au \citep{DeRosa14}, respectively. Whilst some intermediate/wide G-dwarf and A-star binaries could be destroyed in dense star forming environments, the formation of wide binaries during the dissolution of star-forming regions \citep{Kouwenhoven10,Moeckel10} is a strong function of primary mass. Therefore, if the field is a mixture of systems from clusters and associations (or just from expanding associations) then the formation of these observed wide binary G-dwarf and A-star systems is a natural outcome of the dissolution of star-forming regions into the field. (iv) The combination of points (ii) and (iii), and the possibility that not all star forming regions are dense enough to dynamically affect binaries \citep{Bressert10}, leads us to suggest that the binary fractions and semi-major axis distributions in the field are \emph{indicative} of the primordial population. (v) However, more complete observations of pre-main sequence binaries in star forming regions are desperately required in order to determine the origin of the Galactic field population. Currently, observed visual binaries straddle the hard/soft boundary, and it is these systems that are most likely not to be indicative of the primordial population, especially as they are susceptible to stochastic destruction \citep{Parker12b}. The above conclusions do not detract from the fact that binaries are an important part of the star formation process, and in principle can tell us much about star formation. All we have shown here is that their primordial orbital separations and overall fraction are probably not drastically altered in dense star-forming environments, and that there is no need for star formation to produce a large excess of primordial intermediate/wide binary systems to compensate for dynamical destruction in star-forming regions. Finally, we note that our work is not actually in conflict with earlier numerical simulations which did require a significant degree of processing of (mainly) intermediate M-dwarf binaries to explain the field population \citep{Kroupa95b}; this earlier work did not have the updated binary statistics on brown dwarfs, M-dwarfs and A-stars at its disposal. 
\section*{Acknowledgements} We thank the anonymous referee for their comments and suggestions, which improved the original manuscript. RJP acknowledges support from the Royal Astronomical Society in the form of a research fellowship. MRM acknowledges support from the Swiss National Science Foundation (SNF). The simulations in this work were performed on the \texttt{BRUTUS} computing cluster at ETH Z{\"u}rich. \bibliographystyle{mn2e}
2,869,038,156,170
arxiv
\section{Introduction} Online social networks can be hijacked by malicious actors who run massive influence campaigns on a population and potentially disrupt societies. Online extremists have utilized social networks to recruit and spread propaganda \citep{markszamanisis}. There have been multiple reports alleging that foreign actors attempted to penetrate U.S. social networks in order to manipulate elections \citep{ref:russianbots,shane2017fake,guilbeault2016twitter,byrnes2016bot}. Additionally, there have been other studies suggesting similar operations occurred in European elections \citep{ferrara2017disinformation}. The perpetrators created fake accounts, or ``bots'', which shared politically polarizing content, much of it fake news, in order to amplify it and extend its reach. Furthermore, many of these fake accounts also directly interacted with humans to promote their agenda \citep{ref:russianbots1}. While no one knows exactly how many people were impacted by these influence campaigns, it has still become a concern for the U.S. government. Members of Congress have not been satisfied with the response of major social networks \citep{ref:russianbots_govtresponse} and have asked them to take actions to prevent future interference in the U.S. democratic process by foreign actors \citep{ref:russianbots_feinstein}. Social network counter-measures are needed to combat these influence campaigns. This could consist of one using social network agents to influence a network in a way that negates the effect of the malicious influence campaign. There are multiple components to such counter-measures, but a key one is identifying targets in the network for influence by these agents. If one has a limited supply of agents, then one needs a method to optimally identify high value targets. \textbf{Our Contributions.} In this work we present a method to identify such targets and quantify the impact of \textit{stubborn} agents on the opinions of others in the network. We begin by proposing a model for opinion dynamics in a social network. A novel aspect of our model is that the individuals are allowed to grow stubborn with time and be less affected by new social media posts. This reflects real behaviors in social networks and is motivated by research in both social psychology and political science. We also allow the stubbornness rate to be heterogeneous across individuals. We prove that under fairly general conditions on the stubbornness rate, when there are stubborn agents in the network, the opinions converge to an equilibrium given by a linear system. We then present a discrete optimization formulation for the problem of optimally placing stubborn agents in a network to maximally shift the equilibrium opinions. We consider a slight variant of the traditional influence maximization approaches where instead of converting a non-stubborn agent into a stubborn agent, we simply introduce a stubborn agent into the network and constrain the number of individuals that this agent can communicate with. We consider two objective functions: the mean opinion in the network and the number of individuals in the network whose opinion is above a given threshold. We show that the mean opinion is a monotone and submodular function, allowing us to utilize a greedy approach where we have the stubborn agent target individuals one at a time in the network. 
Finally, we show in real social networks with tens of thousands of users that stubborn agents strategically targeting a small number of nodes can non-trivially influence a large population. Furthermore, we show that our greedy algorithm outperforms several common benchmarks. This paper is outlined as follows. We begin with a literature review in Section \ref{sec:lit_review}. We then present our opinion dynamics model in Section \ref{sec:model}. Convergence results for the model are presented in Section \ref{sec:stubborn_agents}. Our greedy algorithm for stubborn agent placement is presented in Section \ref{sec:optimization}. Performance results for our algorithm on real social networks are presented in Section \ref{sec:results}. We conclude in Section \ref{sec:conclusion}. We include all proofs in Section \ref{sec:proofs}, details on the construction of our datasets in Section \ref{sec:data_analysis}, and implementation details for our method for estimating the polarity of a tweet in Section \ref{sec:neural_network}. \section{Literature Review}\label{sec:lit_review} There is a rich literature studying opinion dynamics in social networks. One of the most popular models here is the voter model \citep{clifford1973model,holley1975ergodic} where each node updates its opinion to match that of a randomly selected neighbor. There is a large body of literature studying limiting behavior in this model \citep{cox1986diffusive, gray1986duality, krapivsky1992kinetics, liggett2012interacting, sood2005voter}. The model of \cite{degroot1974reaching} is another popular way to describe opinion dynamics. In this model, a node's opinion is updated to a convex combination of the opinions of itself and its neighbors. This model has connections with distributed consensus algorithms \citep{tsitsiklis1984problems,tsitsiklis1986distributed,olshevsky2009convergence,jadbabaie2003coordination}. In contrast to these approaches, there are also Bayesian models of opinion dynamics in social networks \citep{bikhchandani1992theory, banerjee2004word,acemoglu2011bayesian,banerjee1992simple,jackson2010social}. In these models, a node's opinion is updated using Bayes' Theorem applied to the opinions of its neighbors. The notion of stubborn agents with immutable opinions was introduced by \cite{mobilia2003does}. Analysis has been done of the impact of stubborn agents in various opinion models \citep{galam2007role,wu2004social, chinellato2015dynamical, mobilia2007role,yildiz2013binary,acemouglu2013opinion,ghaderi2013opinion}. In \cite{ghaderi2013opinion} the authors studied stubborn agents in the model of \cite{degroot1974reaching} and observed that an analogy can be made between the equilibrium opinions and voltages in an electrical circuit. We find a similar connection in our results. This electric circuit connection led \cite{vassio2014message} to propose a function known as harmonic influence centrality, which measured how much a single node could shift the average opinion in the network by switching its own opinion. The question of optimizing the placement of agents in a social network to maximize some type of influence was first proposed by \cite{kempe2003maximizing} for a diffusion model. Subsequent results have presented a variety of algorithms for this problem \citep{kempe2005influential,leskovec2007cost,chen2009efficient,chen2010scalable}. \cite{yildiz2013binary} studied optimal stubborn agent placement in the voter model. 
Generally speaking, these algorithms make use of the fact that the objective function is submodular, so a good solution can be found using a greedy approach, as shown by \cite{nemhauser1978analysis}. Our optimization formulation for placing agents in a network also makes use of this property. While much analysis has been done on the effect of stubborn agents, the models used assume that the other individuals in the network have stationary behavior. However, numerous psychological studies have found that people grow stubborn over time (see the review in \cite{roberts2006patterns} and the references therein). In politics especially, the bulk of empirical evidence supports the hypothesis that susceptibility to changes in ideology and partisanship is very high during early adulthood and significantly lower later in life \citep{alwin1991women,alwin1991aging,sears1975political,sears1981life,sears1983persistence,glenn1980values,jennings1984partisan,jennings2014generations,markus1979political,converse1979plus,sears1999evidence}. Therefore, we believe that opinion dynamics models should include time-varying opinion update processes, where agents become stubborn with time. Convergence conditions under time-varying dynamics have been studied in \cite{chatterjee1977towards} and later in \cite{hatano2005agreement,wu2006synchronization,tahbaz2008necessary}. These models do not explicitly consider increasing stubbornness nor the presence of stubborn agents. To the best of our knowledge, our work is the first to rigorously analyze convergence in an opinion dynamics model with stubborn agents and increasing stubbornness. Additionally, previous opinion dynamics models assume individuals communicate their exact opinion in the network. However, in reality people may only transmit a noisy version of their latent opinion. The previous psychological survey of \cite{mason2007situating}, and the references therein, have argued for modeling latent opinions on a continuous spectrum while allowing for modeling the information communicated between agents on an arbitrary (potentially discrete) spectrum. For instance, we often can only observe an agent's binary decision, but there are frequently many benefits to allowing their underlying latent opinion to be modeled on a continuous spectrum. To the best of our knowledge, \cite{urbig2003attitude} is the only study to consider a framework that separately models communicated and latent opinion, and in this study they do not consider this process with mathematical rigor. \section{Opinion Dynamics Model}\label{sec:model} We begin by presenting a general model for the dynamics of opinions between interacting agents in a network. Our model allows for full heterogeneity among the agents. There is heterogeneity in the agents' activity levels, meaning the agents can post content at different rates. There is also heterogeneity in how the agents' opinions evolve in response to seeing new posted content. In Section \ref{sec:stubborn_agents}, we then present a series of theoretical results concerning the convergence of the agent opinions to an equilibrium, including conditions under which convergence occurs, an explicit characterization of the equilibrium, and the rate of convergence to the equilibrium. 
We consider a finite set of agents $\mathcal{V} = \left\{ 1, \ldots, N \right\}$ situated in a social network represented by a directed graph $\mathcal{G} \left(\mathcal{V}, \mathcal{E} \right)$, where $\mathcal{E}$ is the set of edges representing the connectivity among these individuals. An edge $(i, j) \in \mathcal{E}$ is considered to be directed from $i$ to $j$ and this means that agent $i$ can influence agent $j$. One can view the direction of the edges as indicating the flow of information. In social network parlance, we say $j$ \emph{follows} $i$. We define the neighbor set of an agent $i \in \mathcal{V}$ as $\mathcal{N}_i = \left\{j \; | \; (j,i) \in \mathcal E \right\}$. This is the set of individuals who $i$ can be influenced by, i.e. whose posts can be seen by $i$. For clarity of exposition, we denote the out-degree neighbor set of an agent $i$ as $\mathcal{N}^o_i = \left\{j \; | \; (i,j) \in \mathcal E \right\}$. This set is also known as the \emph{followers} of $i$. At each time $t\in \mathbb{Z}_{\geq0}$, each agent $i \in \mathcal{V}$ holds an opinion or belief $\theta_i(t) \in [0,1]$. An opinion near zero indicates opposition to an issue or topic, while an opinion near one indicates support for it. We define the full vector of opinions at time $t$ by $\theta(t)$ for simplicity. We also allow there to be two types of agents: non-stubborn and stubborn. Non-stubborn agents have an opinion update rule based on communication with their neighbors that we will specify later, while stubborn agents never change their opinions. We will denote the set of stubborn agents by $\mathcal{V}_0 \subseteq \mathcal{V}$ and the set of non-stubborn agents by $\mathcal{V}_1 = \mathcal{V} \; \setminus \; \mathcal{V}_0$. For clarity of exposition, we assume that $\mathcal{V}_0 = \left\{ 1, \ldots, |\mathcal{V}_0 | \right\}$. At time $t=0$, each agent $i\in \mathcal{V}$ starts with an initial opinion $\theta_i(0)$. The opinions of the stubborn agents stay constant in time, meaning \begin{align*} \theta_i(t) = \theta_i(0), \quad \quad i\in\mathcal{V}_0, \; t\in \mathbb{Z}_{\geq0}. \end{align*} We will now introduce the opinion update rule for the non-stubborn agents. In our analysis, we will focus on a scenario where at each time $t \in \mathbb{Z}_{\geq0}$ a random agent $j\in\mathcal{V}$ communicates with some set of its followers $\mathcal{N}^o_j$ by posting a piece of content. If agent $j$ posts at time $t+1$, we assume that the post has a random opinion $Y_j(t+1) \in [0,1]$ where $\mathbb{E}\bracket{Y_j(t+1)|\theta_j(t)} = \theta_j(t)$. If agent $j$ communicates at time $t+1$ with an agent $i$ such that $j \in \mathcal{N}_i$, then agent $i$ updates his opinion to a convex combination of his own current opinion and agent $j$'s communicated opinion: \begin{align}\label{eq:update_rule} \theta_i(t+1) = \left(1-\omega_i(t) \right) \theta_i(t) + \omega_i(t)Y_j(t+1) \end{align} where $\omega_i(t) \in [0,1]$ is a deterministic, time-varying stubbornness factor for agent $i$. On the other hand, if agent $i$ does not see a new opinion at time $t+1$, then $\theta_i(t+1) = \theta_i(t)$. We now note that in many previous studies, the random opinion $Y_j(t+1)$ is assumed to be agent $j$'s exact opinion at time $t$, given by $\theta_j(t)$. In our analysis, we relax this and only assume that agent $j$ communicates an opinion $Y_j(t+1) \in [0,1]$ that is unbiased, meaning $\mathbb{E}\bracket{Y_j(t+1)|\theta_j(t)} = \theta_j(t)$. 
This property where the agent does not express his exact opinion in the content he posts is known as limited verbalisation \cite{urbig2003attitude,mason2007situating}. One should note that if $\omega_i(t)$ shrinks to zero as time increases, agent $i$ weighs communicated opinions less and therefore becomes more stubborn. In previous studies, $\omega_i(t)$ is assumed to be constant in time, which is not necessarily an accurate model of human behavior. As suggested in Mason, Conroy, and Smith \cite{mason2007situating} and Roberts and Viechtbauer \cite{roberts2006patterns}, a model with limited verbalisation and time-evolving update rules is a more realistic model of opinion dynamics. One interesting case is where $\omega_i(t) = (t+1)^{-1}$ and an agent $i$ observes exactly one opinion at every unit of time. In this case, it is not difficult to see that \begin{align*} \theta_i(t) = \frac{1}{t}\sum_{s=1}^t Y(s), \end{align*} where above we dropped the subscript for the origin of the posts for simplicity. This corresponds to an update rule where an individual's opinion is simply the average of all previous posts he has seen. Next we describe the communication pattern of the agents. We depart from the model in \cite{ghaderi2013opinion}, where at each discrete time-step all agents in the network communicate. Rather, we allow the agents to communicate randomly, which is a more accurate model of how individuals in real social networks behave. For notation, let $p_{ji}$ denote the probability that agent $j$ communicates with agent $i$ at time $t$. We assume the communication probabilities are constant in time. This reflects a situation where people's relative rate of activity with respect to each other in a social network does not change. This is a reasonable assumption for many real social networks. For our model, we have the following stochastic update rule for non-stubborn agent $i\in \mathcal{V}_1$: \begin{align*} \theta_{i}(t+1) = \begin{cases} \left(1-\omega_i(t) \right)\theta_i(t) + \omega_i(t)Y_j(t+1) & \quad \text{ w.p. } p_{ji} \text{ if $j \in \mathcal{N}_i$} \\ \theta_i(t) & \quad \text{ w.p. } 1 - \sum_{j \in \mathcal{N}_i}p_{ji}. \\ \end{cases} \end{align*} Taking expectations, we have for non-stubborn agents \begin{align*} \mathbb{E} \left[ \theta_i(t+1) \right] = \mathbb{E}\left[\theta_i(t) \right] \left(1 - \omega_i(t)\sum_{j \in \mathcal{N}_i } p_{ji} \right) + \omega_i(t)\sum_{j \in \mathcal{N}_i}\mathbb{E}\left[ \theta_j(t) \right]p_{ji}. \end{align*} Then, we can write that \begin{align*} \mathbb E \left[\mathbf{\theta}(t+1)\right] = \mathbf{A} (t) \mathbb E\left[\theta(t)\right] \end{align*} where $\mathbf A(t) = \mathbf I + \mathbf\Omega(t) \mathbf A$, $\mathbf A$ is a $|\mathcal V| \times |\mathcal V|$ matrix given by \begin{align}\label{eq:A} A_{ik} = \begin{cases} 0 & \text{if } i \in \mathcal{V}_0 \\ -\sum_{j\in \mathcal{N}_i} p_{ji} & \text{if } i \in \mathcal{V}_1, k=i\\ p_{ki} & \text{if } i \in \mathcal{V}_1, k \in \mathcal{N}_i \\ 0 & \text{otherwise,} \end{cases} \end{align} and $\mathbf\Omega(t)$ is a $|\mathcal V| \times |\mathcal V|$ diagonal matrix with $\mathbf \Omega_{ii}(t) = \omega_i(t)$ for non-stubborn agent $i \in \mathcal{V}_1$, and zero otherwise. Throughout the paper, we make the assumption that for all $i\in \mathcal{V}_1$, $\sum_{j\in \mathcal{N}_i} p_{ji} \leq 1$. 
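To make these dynamics concrete, the following sketch (our own illustration rather than code accompanying the paper) simulates the stochastic update rule on a toy four-node network with two stubborn agents, using a Bernoulli($\theta_j(t)$) draw as one admissible unbiased verbalisation $Y_j(t+1)$ and the stubbornness factor $\omega_i(t) = (t+1)^{-1}$ discussed above; the network and probabilities are chosen purely for illustration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy network: agents 0 and 3 are stubborn (opinions 1.0 and 0.0),
# agents 1 and 2 are non-stubborn.
# p[j, i] = probability that agent j's post is seen by follower i at each step.
p = np.array([[0.0, 0.3, 0.0, 0.0],
              [0.0, 0.0, 0.2, 0.0],
              [0.0, 0.2, 0.0, 0.0],
              [0.0, 0.0, 0.3, 0.0]])
stubborn = {0, 3}
theta = np.array([1.0, 0.2, 0.5, 0.0])      # initial opinions
N = len(theta)

for t in range(20000):
    prev = theta.copy()                      # update everyone from theta(t)
    for i in range(N):
        if i in stubborn:
            continue
        probs = p[:, i]
        stay = 1.0 - probs.sum()             # probability that i sees no new post
        j = rng.choice(N + 1, p=np.append(probs, stay))
        if j < N:
            Y = float(rng.random() < prev[j])   # Bernoulli(theta_j): unbiased verbalisation
            w = 1.0 / (t + 1)                   # stubbornness factor omega_i(t)
            theta[i] = (1 - w) * prev[i] + w * Y

print(np.round(theta, 3))
\end{verbatim}

With these rates the two non-stubborn opinions settle near $0.71$ and $0.29$, which is the expected equilibrium characterised in the next section.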
Due to the structure of $\mathbf A$, we can write it in the block-matrix form \begin{align*} \mathbf A = \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{F} &\mathbf{G} \end{bmatrix} \end{align*} where $\mathbf {F}$ is a $|\mathcal{V}_1| \times |\mathcal{V}_0| $ matrix and $\mathbf{G}$ is a $|\mathcal{V}_1| \times |\mathcal{V}_1| $ matrix. The matrix $\mathbf F$ captures communications from the stubborn agents to the non-stubborn agents, while $\mathbf G$ captures the communication network among the non-stubborn agents. Finally, to simplify our notation, let $\mathbf \theta_{\mathcal{V}_0}$ denote the vector of the initial opinions of the stubborn agents and $\mathbf \theta_{\mathcal{V}_1}(t)$ denote the vector of the opinions of the non-stubborn agents at time $t$. Similarly, we let $\mathbf \Omega_{\mathcal{V}_1}(t)$ denote the submatrix of $\mathbf{\Omega}(t)$ corresponding to the non-stubborn agents. \section{Theoretical Results}\label{sec:stubborn_agents} We now present our convergence results for the opinions in our model. Since who communicates and what is communicated are random, the opinions of non-stubborn agents are also random. Therefore, there are some key questions we aim to answer. The overarching question is under what conditions on the stubbornness factor $\mathbf \Omega (t)$ do the opinions converge. In more detail, we would like to know if the opinions converge in expectation to an equilibrium, and if so, what are the equilibrium values. Also, are the equilibrium opinions themselves random or do they converge to deterministic values. To answer these questions, we have obtained results for the limiting values of the expectation and variance of the opinions. We also consider the rate of convergence to the equilibrium value when the agents all share the same stubbornness growth rate and the communication graph is symmetric, which allows us to illustrate the connection between our model and previous results in stochastic approximation theory and Markov chain convergence analysis. All proofs can be found in Section \ref{sec:proofs}. Throughout this section, we make the assumption that the underlying graph $\mathcal{G} \left(\mathcal{V}, \mathcal{E} \right)$ is connected and for each non-stubborn agent $v\in\mathcal V_1$ there exists a directed path from some stubborn agent to $v$. We note that this assumption is not especially stringent. First off, if the graph has multiple connected components then the results from this section can be applied to each connected component separately. Furthermore, if there are some non-stubborn agents which are not influenced by any stubborn agent, then there is no link in $\mathcal{E}$ connecting the set $\mathcal{R}$ of such non-stubborn agents to $\mathcal{V}\setminus\mathcal{R}$. Then we can decompose this set of non-stubborn agents $\mathcal{R}$ from the rest of the graph, and consider the convergence of this subgraph using the results in Section \ref{sec:no_stubborn_agents}. \subsection{Opinion Mean}\label{ssec:mean} We begin with the expectation result. \begin{theorem}\label{thm:1} Suppose the underlying graph $\mathcal{G} \left(\mathcal{V}, \mathcal{E} \right)$ is connected and for each non-stubborn agent $v\in\mathcal V_1$ there exists a directed path from some stubborn agent to $v$. 
Then, if $\sum\limits_{s=0}^t \min_{i\in \mathcal{V}_1} \omega_i(s)$ diverges, we have that \begin{align}\label{eq:mean} \lim_{t\to \infty} \mathbb E \left[\mathbf{\theta}_{\mathcal{V}_1}(t)\right] = -\mathbf G^{-1} \mathbf{F} \mathbf \theta_{\mathcal{V}_0}. \end{align} \end{theorem} The equilibrium solution from Theorem \ref{thm:1} has a structure that resembles Ohm's Law from circuit theory. Such a connection was also made in \cite{ghaderi2013opinion}, which applies to a time-homogeneous and deterministic update rule. An electric circuit can be viewed as a graph. Each node $i$ has a voltage $V_i$ and each edge $(i,j)$ has a conductance $G_{ij}$. A current from node $i$ to node $j$ is defined as $I_{ij}$. From Ohm's Law we have that $I_{ij} = G_{ij}(V_i-V_j)$ \citep{agarwal2005foundations}. Conservation of current requires that all current flowing to a node must sum to zero (current flowing away from a node has a negative value). Mathematically, this is given by $\sum_{j\in\mathcal N_i} I_{ji} = 0$. To connect the circuit model with our opinion equilibrium, we write down equation \eqref{eq:mean} for a single non-stubborn node $i$. It can be shown using equation \eqref{eq:A} that this gives \begin{align}\label{eq:ohm} \sum_{j\in\mathcal N_i}p_{ji} (\mathbb E[\theta_j]-\mathbb E[\theta_i])&=0. \end{align} To see the circuit analogy, we define the ``voltage'' of a node as its expected opinion ($V_i = \mathbb E[\theta_i]$) and the conductance on an edge pointing from $j$ to $i$ as the rate of communication from $j$ to $i$ ($G_{ji}=p_{ji}$). Using Ohm's Law, we obtain a ``current'' from $j$ to $i$ given by $I_{ji} = p_{ji}(\mathbb E[\theta_j]-\mathbb E[\theta_i])$. If we enforce conservation of current at each node, we obtain our equilibrium solution given by equation \eqref{eq:ohm} (or in matrix form by equation \eqref{eq:mean}). The analogy is very natural for our model. The voltage is the polarity of an individual's opinion and stubborn agents are fixed voltage sources. The conductance measures how easily information flows along an edge, analogous to an electrical circuit where conductance measures how easily current flows along an edge. The equilibrium condition simply gives how opinions/voltages are distributed in a social network/circuit. It also suggests that one can place stubborn agents in the network to manipulate the non-stubborn opinions, just as one can place voltage sources in a circuit to manipulate the node voltages. We discuss this more in Section \ref{sec:optimization}. The result states that for convergence in expectation to occur, $\omega_i(t)$ cannot decay too fast. If it decays too quickly, then new opinions are effectively ignored because the updates become too small. This can result in the final opinion depending upon the initial condition. However, if $\omega_i(t)$ decays slow enough, where slow means $\sum_{s=0}^t \min_{i\in\mathcal V_1}\omega_i(s)$ diverges, then the agents will keep listening to new communications and updating their opinions. In this case, the expectations of their final opinions are independent of their initial values. \subsection{Opinion Variance}\label{ssec:variance} We next consider the variance of the opinions. Let $\Sigma\left[ \theta(t) \right] = \mathbb{E} \left[ \left[ \theta(t) - \mathbb{E}[\theta(t)] \right] {\left[ \theta(t) - \mathbb{E}[\theta(t)] \right]}^T \right]$ denote the covariance matrix of $\theta(t)$. We have the following result. \begin{theorem}\label{thm:2} Suppose the assumptions from Theorem \ref{thm:1} hold. 
Additionally, suppose that $\sum\limits_{s=0}^t \max_{i \in \mathcal{V}_1}\omega_i^2(s)$ converges. Then, \begin{align*} \lim_{t\to \infty} \Sigma\left[ \theta(t) \right] = \mathbf 0. \end{align*} \end{theorem} Taken together, Theorems \ref{thm:1} and \ref{thm:2} characterize the class of stubbornness factors $\mathbf \Omega(t)$ where convergence occurs in $L^2$. If $\mathbf \Omega(t)$ does not decrease too rapidly ($\sum_{s=0}^t \min_{i\in\mathcal V_1} \omega_i(s)$ diverges) then the expectations of the final opinions of the non-stubborn agents do not depend on their initial conditions. If we also have that $\mathbf \Omega(t)$ decreases sufficiently rapidly ($\sum_{s=0}^t \max_{i\in\mathcal V_1} \omega_i^2(s)$ converges), then the opinions' covariance will go to zero. To parameterize this region, assume each $\omega_i(t)$ has the form $c_i t^{-\delta_i}$ for some constants $c_i$ and $\delta_i$ for all $i$. Then Theorems \ref{thm:1} and \ref{thm:2} are satisfied for $1/2< \delta_i \leq 1$ for all $i$. \subsection{Convergence Rate}\label{ssec:rate} Studying the convergence rates of the opinions to the equilibrium in a very general situation is difficult because it depends on both the structure of the graph and the stubbornness factor of each individual agent. To simplify our results, we assume that each non-stubborn agent has a stubbornness factor of the form $\omega_i(t) = c/(t+ \tau+1)$ for some nonnegative constants $c$ and $\tau$, where $c \leq \tau$ to ensure that $\omega_i(t) \leq 1$ for all $t$. Then we have the following result regarding the rate of convergence of the system to its equilibrium value. \begin{theorem}\label{thm:3} Suppose the underlying graph $\mathcal{G} \left(\mathcal{V}, \mathcal{E} \right)$ is connected and for each non-stubborn agent $v\in\mathcal V_1$ there exists a directed path from some stubborn agent to $v$, and that the stubbornness factors are of the form $\omega_i(t) = c/(t+\tau+1)$ for all $i\in \mathcal{V}_1$. Then $\mathbf G$ can be written as $\mathbf G = \mathbf U \mathbf \Lambda \mathbf{U}^{-1} $ where $\mathbf \Lambda + \mathbf{\Lambda}^*$ is a real symmetric negative definite matrix, and $\mathbf \Lambda ^*$ denotes the Hermitian transpose of $\mathbf \Lambda$. If $c > \left( |\lambda_{\max} \left(\mathbf \Lambda + \mathbf \Lambda^* \right)| \right)^{-1}$, where $\lambda_{\max} \left(\mathbf \Lambda + \mathbf \Lambda^* \right)$ is the maximal eigenvalue of $\mathbf \Lambda + \mathbf \Lambda^*$, and $\tau \geq c$, then we have that \begin{align*} \mathbb E \left[ \frac{1}{2} \norm{\theta_{\mathcal{V}_1}(t) - \theta_{\mathcal{V}_1}^* }_2 \right] \leq \sqrt{\frac{C}{t+\tau+2}} \end{align*} where $C = \kappa(\mathbf U)^2 \max \left\{ \frac{1}{4} \norm{ \theta_{\mathcal{V}_1}(0) - \theta_{\mathcal{V}_1}^*}_2^2 , |\mathcal{V}_1| c^2 {(|\lambda_{\max} \left(\mathbf \Lambda + \mathbf \Lambda^* \right)|c-1)}^{-1} \right\}$, $\kappa\left(\mathbf{U}\right)$ is the condition number of $\mathbf{U}$, and $\theta_{\mathcal{V}_1}^* = -\mathbf G^{-1} \mathbf F \theta_{\mathcal{V}_0}$. \end{theorem} To gain insight into Theorem \ref{thm:3} we begin by assuming that $\mathbf G$ is symmetric. In this case, we can choose $\mathbf U$ to be the identity matrix and $\mathbf \Lambda = \mathbf G$, since $ \mathbf G$ is a real symmetric negative definite matrix. Then $\lambda_{\max} \left(\mathbf \Lambda + \mathbf \Lambda^* \right) = 2\lambda_{\max}\left(\mathbf G\right)$ and $\kappa(\mathbf U) = 1$. 
In this scenario, the constants used in the Theorem depend only on how far our initial opinions $\theta_{\mathcal{V}_1} (0)$ are from the equilibrium, the number of non-stubborn agents $|\mathcal{V}_1|$, the eigenvalue $\lambda_{\max}\left(\mathbf G\right)$, and our stubbornness factor parameters. Additionally, when one considers the Markov matrix $\mathbf{I} + \mathbf{A}$, the largest eigenvalue of this matrix not equal to one is $1 + \lambda_{\max}\left(\mathbf G\right)$, and this eigenvalue is closely related to the spectral gap of the Markov chain. This spectral gap appears in many other contexts where it is used to study the mixing time for a Markov chain \citep{levin2017markov}. Furthermore, when the matrix $\mathbf G$ is symmetric our results resemble classical convergence results in stochastic approximation theory \citep{nemirovski2009robust}. In particular, when all of the agents have the same stubbornness factor and the matrix $\mathbf G$ is symmetric, our opinion dynamics model has a natural relation to agents collectively maximizing a concave objective function via a noisy gradient oracle. In considering Theorem \ref{thm:3} for the non-symmetric case, one may ask why the matrices $\mathbf U$ and $\mathbf \Lambda$ must be introduced. Based on our previous discussion and a closer inspection of the details in the proof, one can notice that if the Hermitian form of $\mathbf{G}$ given by $\left( \mathbf G + \mathbf G^* \right)/2$ is negative semi-definite, we can obtain the same convergence result intuition developed above by replacing $|\lambda_{\max}(\mathbf G)|$ with $\sigma_{\min}(\mathbf G)$, the minimal singular value of $\mathbf G$. However, it is not always the case that this Hermitian form is negative semi-definite. To see this, consider the graph in Figure \ref{fig:example_graph}. In this graph, the node $k$ is stubborn, and the other two nodes are non-stubborn. In this case, we have that \begin{align*} \mathbf G = \begin{bmatrix} -0.25 & 0.25 \\ 0.49 & -0.5 \end{bmatrix} \end{align*} \begin{figure} \centering \begin{tikzpicture}[ ->, auto, thick, el/.style = {inner sep=2pt, align=left, sloped}, main node/.style={circle, draw, font=\sffamily\Large\bfseries} ] \node[main node] (k) at (2, 0) {$k$}; \node[main node] (j) at (0, 0) {$j$}; \node[main node] (i) at (-2, 0) {$i$}; \path[every node/.style={font=\sffamily\small}] (k) edge [bend left] node {$p_{kj} = 0.01$} (j) (j) edge [bend left] node {$p_{ji}=0.25$} (i) (i) edge [bend left] node {$p_{ij}=0.49$} (j); \end{tikzpicture} \caption{Example graph where the Hermitian form of the matrix $\mathbf G$ is indefinite. Node $k$ is stubborn and nodes $i$ and $j$ are non-stubborn. } \label{fig:example_graph} \end{figure} One of the eigenvalues of the Hermitian form is $-\frac{3}{8} + \frac{\sqrt{6101}}{200} > 0$. Because of this, in the proof of Theorem \ref{thm:3} we are unable to bound $\mathbb E \left[ \left( \theta_{\mathcal{V}_1}(t) - \theta_{\mathcal{V}_1}^* \right)^T\left( \mathbf{G} + \mathbf{G}^T \right) \left(\theta_{\mathcal{V}_1}(t) - \theta_{\mathcal{V}_1}^* \right) \right]$ from above by a negative constant. Nevertheless, in the non-symmetric case we are able to show that we can find some negative definite matrix $\mathbf \Lambda + \mathbf \Lambda^*$ that is similar to $\mathbf G + \mathbf{G}^T$ via the similarity transformation $\mathbf U$. 
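As a quick numerical sanity check (our own addition), the indefiniteness of $(\mathbf G + \mathbf G^T)/2$ for the example graph above can be verified directly:

\begin{verbatim}
import numpy as np

G = np.array([[-0.25,  0.25],
              [ 0.49, -0.50]])
H = (G + G.T) / 2                    # Hermitian (here symmetric) form of G
print(np.linalg.eigvalsh(H))         # largest eigenvalue ~ +0.0155 = -3/8 + sqrt(6101)/200
\end{verbatim}

Since one eigenvalue is positive, the quadratic form cannot be bounded as in the symmetric case, and the similarity transformation $\mathbf U$ is genuinely needed.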
By considering our convergence rate in this transformed space, we are able to obtain an $O\left( 1/\sqrt{t} \right)$-type convergence rate, but we must include some constants that depend on the condition number of this similarity transformation. \subsection{Convergence With No Stubborn Agents}\label{sec:no_stubborn_agents} In this section we consider our time-varying opinion dynamics model from before without the presence of stubborn agents. When considering opinion dynamics models without stubborn agents, a common approach is to show conditions under which the model achieves consensus in the limit. Our model can then naturally relate to previous work on ergodicity in nonhomogeneous Markov chains. We encourage the interested reader to see \cite{seneta1973historical} and the references therein for a historical summary of the theory of nonhomogeneous Markov chains. Additionally, our model relates to the work of \cite{touri2011ergodicity}, in which they show that a process having the ``infinite flow property'' is a necessary condition for its ergodicity. They also consider conditions under which the infinite flow property is a sufficient condition for ergodicity, but we find that these conditions are not easily satisfied for our model. Before we begin our analysis, we will briefly review some results and terminology in the theory of nonhomogeneous Markov chains. In studying nonhomogeneous matrix products, it is common to consider an \textit{ergodicity coefficient} which is a continuous scalar function $\mu(\cdot)$ defined for a stochastic matrix $\mathbf{P}$ that satisfies $0 \leq \mu (\mathbf{P}) \leq 1$. A coefficient of ergodicity is proper if \begin{align*} \mu(\mathbf P) = 0 \iff \mathbf{P} = \mathbf{e}v^T \end{align*} where $v$ is a stochastic vector and $\mathbf e$ is a vector of all ones. In our analysis, we make use of the following proper ergodicity coefficient due to Markov: \begin{align*} \tau(\mathbf P) = 1 -\min_{i,j} \sum\limits_{s=1}^n \min(p_{is}, p_{js}), \end{align*} where $\mathbf P $ is an $n \times n $ stochastic matrix. It is well known that this ergodicity coefficient is submultiplicative, meaning that if $\mathbf P_1$ and $\mathbf P_2$ are stochastic matrices, then $\tau \left(\mathbf P_1 \mathbf P_2 \right) \leq \tau \left( \mathbf P_1\right) \tau \left(\mathbf P _2 \right) $. In the study of nonhomogeneous matrix products it is also common to consider a class of stochastic matrices that are called \textit{scrambling}, which means that $\tau (\mathbf P) < 1$. It is easy to see that a stochastic matrix is scrambling if and only if no two rows are orthogonal. Because our problem is defined by the $\mathbf A$ matrix, we will say that $\mathbf A$ is scrambling if there are no two rows $i$ and $j$ such that \begin{align*} a_{is}a_{js} = 0 \end{align*} for all $s$. In our analysis below, we will assume that the matrix $\mathbf A$ is scrambling, which corresponds to communication networks where for every two people there is at least one person they both listen to. In practice, with the widespread nature of news and technology this might be a reasonable assumption. Lastly, to measure the disparity in the network we will make use of the centering matrix $\mathbf C = \mathbf I - \frac{1}{n} \mathbf e \mathbf e^T$, which is a positive semi-definite matrix with one eigenvalue equal to zero and all others equal to one. The product $\mathbf C\theta(t)$ measures how much each opinion deviates from the average opinion in the network. 
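For concreteness, a minimal sketch (ours, with an arbitrary example matrix) of Markov's ergodicity coefficient and the associated scrambling test:

\begin{verbatim}
import numpy as np

def tau(P):
    # Markov's ergodicity coefficient: 1 minus the smallest, over pairs of
    # distinct rows, of the summed elementwise minima.
    n = P.shape[0]
    worst = min(np.minimum(P[i], P[j]).sum()
                for i in range(n) for j in range(n) if i != j)
    return 1.0 - worst

def is_scrambling(P):
    # A stochastic matrix is scrambling iff no two rows are orthogonal,
    # i.e. tau(P) < 1.
    return tau(P) < 1.0

P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])
print(tau(P), is_scrambling(P))      # 0.5 True for this example
\end{verbatim}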
The following result shows under what conditions opinion consensus is achieved. \begin{theorem}\label{thm:4} Suppose that the matrix $\mathbf A$ is scrambling, there are no stubborn agents, $\sum\limits_{s=0}^t \min_{i\in \mathcal{V}} \omega_i(s)$ diverges, and $\sum\limits_{s=0}^t \max_{i \in \mathcal{V}}\omega_i^2(s)$ converges. Then \begin{align*} \mathbf C \theta(t) \to \mathbf 0 \end{align*} in $L^2$. \end{theorem} From Theorem \ref{thm:4} we are able to see that under the same assumptions on the stubbornness growth rates as in Theorem \ref{thm:2}, we are able to guarantee consensus under an additional assumption on the graph. If we were to remove the scrambling assumption on $\mathbf A$, and replaced it with the assumptions that the underlying graph has a single recurrent class that is aperiodic and all the agents have the same stubbornness growth rate (i.e. $\mathbf \Omega(t) = \omega(t) $ for some $\omega(t)$), then from variants of the previous proofs it follows that $\mathbf C \theta(t) \to \mathbf 0$ if $\sum_s\min_{i \in \mathcal{V}}\omega(s) \to \infty$ and $\sum_s\max_{i \in \mathcal{V}} \omega^2(s) < \infty$. However, we believe the assumption of equal stubbornness growth rates to be an especially stringent one, and thus we will not focus on these conditions in great detail. \subsection{Harmonic Influence Centrality}\label{sec:centrality} The equilibrium condition of our model given by equation \eqref{eq:mean} (or equivalently equation \eqref{eq:ohm}) allows us to evaluate the relative influence of each individual agent in the network. We define influence as follows. Imagine we are able to switch an agent's opinion from zero to one and ask what is the change in the average opinion in the network as a result of this switch. This allows us to define \emph{harmonic influence centrality} which was first proposed in \cite{vassio2014message}. There, harmonic influence centrality measured how much a stubborn agent increased the average non-stubborn opinion if it flipped its opinion from zero to one while all other stubborn nodes had opinion equal zero. To make harmonic influence centrality a more operational measure, we consider the actual opinion of stubborn agents in the network rather than setting them all equal to one. We then define harmonic influence centrality as a function $c:\mathcal V\rightarrow\mathbb R$ that maps each agent in the network to a real number that equals the change in average non-stubborn opinion when it is made stubborn and flips its opinion from zero to one. We now present expressions for the harmonic influence centrality of agents in the network. To simplify notation we will treat the equilibrium opinions as deterministic, following our results from Theorems \ref{thm:1} and \ref{thm:2}. We consider the case of stubborn and non-stubborn agents separately, as they result in different expressions. \begin{theorem}\label{thm:centrality} Consider a network with opinion equilibrium given by $-\mathbf G\theta_{\mathcal V_1} = \mathbf F\theta_{\mathcal V_0}$. 
For any stubborn agent $i\in\mathcal V_0$, the harmonic influence centrality is \begin{align} c(i)& = \frac{-1}{|\mathcal V_1|} \sum_{j\in\mathcal V_1}\paranth{\mathbf G^{-1}\mathbf F}_{ji}\label{eq:centrality_stubborn} \end{align} % and for any non-stubborn agent $i\in\mathcal V_1$, the harmonic influence centrality is \begin{align} c(i)& = \frac{1}{|\mathcal V_1|-1}\paranth{\frac{\sum_{j\in\mathcal V_1} G^{-1}_{ji}}{ G^{-1}_{ii}}-1}.\label{eq:centrality_nonstubborn} \end{align} \end{theorem} As can be seen, the expression for the harmonic influence centrality of stubborn agent $i$ is just the sum of the $i$th column of the matrix $\mathbf G^{-1}\mathbf F$. Unlike for stubborn agents, the harmonic influence centrality of non-stubborn agents does not involve the matrix $\mathbf F$ which connects stubborn to non-stubborn agents. Both expressions require the matrix $\mathbf G$ which connects the non-stubborn agents, to be invertible. This just means that the network has a unique opinion equilibrium. As such, harmonic influence centrality is not applicable in networks where there are no stubborn agents, or not enough stubborn agents to create a unique equilibrium. This somewhat limits the applicability of harmonic influence centrality. However, it does make its actual value a relevant operational measure for assessing the influence of individuals in a network. One useful application of our definition of harmonic influence centrality is in optimizing opinions with stubborn agents. We will see in Section \ref{sec:results} that targeting non-stubborn individuals based on their harmonic influence centrality is a practical and effective approach to impact non-stubborn opinions in very large networks. \section{Optimization of Stubborn Agent Placement}\label{sec:optimization} One may be interested in using stubborn agents to shift the opinions in a network in order to maximize a given objective function. Examples of such functions include the sum of the opinions or the number of individuals whose opinion exceeds a given threshold. To optimize these objective functions, we utilize the equilibrium from equation \eqref{eq:mean}. Our results from Section \ref{sec:stubborn_agents} show that for a broad class of stubborness rates, the opinions in a network reach this equilibrium. We now show how to place stubborn agents in a network to shift this equilibrium and maximize these objective functions. The equilibrium condition in equation \eqref{eq:mean} can be rewritten as $-\mathbf G\mathbb E[\mathbf \theta_{\mathcal{V}_1}]= \mathbf{F} \mathbf \theta_{\mathcal{V}_0}$. This linear system of equations constrains the opinions of the non-stubborn individuals. If stubborn agents are placed in the network, these matrices change. By stubborn agent placement, we mean the agent causes non-stubborn individuals to follow it, allowing the agent to influence their opinion and shift the equilibrium. By optimizing where we place the stubborn agents, we can shift the opinions in the network as we desire. \subsection{Opinion Objective Functions}\label{ssec:objective} We now consider the problem of how one can optimize a function of the non-stubborn opinions via stubborn agent placement. We consider two different objective functions. First, there is the sum of the non-stubborn opinions. This is a fairly standard objective function, and we will see it also has desirable mathematical properties. Second, there is the number of non-stubborn individuals whose opinion exceeds a given threshold. 
This objective function may be relevant if the individuals take an action when their opinion exceeds the threshold (buy a product, vote, protest, etc.). Consider the scenario where we add one stubborn agent to the network with communication probability $p$. Without loss of generality, we assume that this agent's opinion is $\theta=1$. Suppose that we begin with some equilibrium solution $\theta^0$ that satisfies $-\mathbf{G}\theta^0= \mathbf{F} \theta_{\mathcal{V}_0}$. Here we are assuming the opinions are deterministic, which is a valid assumption under the conditions of Theorem \ref{thm:2}. Consider adding this new stubborn agent to the network and having non-stubborn individual $i$ follow it. Let $\mathbf{e}_i$ be a vector that has component $i$ equal to one, and all other components equal to zero. When the agent is followed by individual $i$ we achieve a new equilibrium solution $\theta^1$ given by \begin{align*} -\left( \mathbf{G} - p\; \mathbf{e}_i\mathbf{e}_i^T \right)\theta^1 = \mathbf{F} \mathbf \theta_{\mathcal{V}_0} + p\mathbf{e}_i. \end{align*} The sum of the opinions under this new equilibrium can be written as $ -\mathbf{e}^T \; {\left(\mathbf{G} - p\mathbf{e}_i \mathbf{e}_i^T\right)}^{-1} \left(\mathbf{F} \mathbf \theta_{\mathcal{V}_0} + p \mathbf{e}_i \right)$, where $\mathbf{e}$ is the vector of all ones. In general, the stubborn agent can target a set $S\subseteq \mathcal V_1$ of non-stubborn users. The opinion sum in the resulting equilibrium can be viewed as a function of the target set $S$. This function $f : 2^{\mathcal{V}_1} \to \mathbb{R}_{\geq0}$ is given by \begin{align*} f(S) = -\mathbf{e}^T \; {\left(\mathbf{G} - p \sum_{i\in S}\mathbf{e}_i \mathbf{e}_i^T\right)}^{-1} \left(\mathbf{F} \mathbf \theta_{\mathcal{V}_0} + p \sum_{i\in S}\mathbf{e}_i \right). \end{align*} In addition to the opinion sum, there are other important functions one can optimize. Consider the set function $g: 2^{\mathcal{V}_1} \to \mathbb{R}_{\geq0}$ defined to be \begin{align*} g(S) = \sum\limits_{i\in\mathcal{V}_1} \mathbbm{1}_{i, \tau} \left\{- \; {\left(\mathbf{G} - p \sum_{j\in S}\mathbf{e}_j \mathbf{e}_j^T\right)}^{-1} \left(\mathbf{F} \mathbf \theta_{\mathcal{V}_0} + p \sum_{j\in S}\mathbf{e}_j \right) \right\} \end{align*} where $\mathbbm{1}_{i, \tau} \left\{\mathbf x \right\}$ is equal to one if the $i$-th component of $\mathbf{x}$ is greater than some predetermined threshold $\tau$, and zero otherwise. Maximizing this set function is equivalent to maximizing the number of non-stubborn agents with final opinion greater than $\tau$, which could correspond to, for instance, buying a product or voting for a particular candidate. \subsection{Greedy Approach}\label{ssec:algo} In practice one may limit the number of non-stubborn individuals that are targeted. This is done so the stubborn agents do not appear to be spam and lose their persuasion power. A natural constraint would be $|S|=k$ for some $k\leq |\mathcal V_1|$. Then the problem of determining which $k$ non-stubborn agents to target in order to maximize the sum of the non-stubborn opinions can be written as \begin{align}\label{eq:optimization} \max_{S \; : \; |S| = k} f(S). \end{align} Similarly, the constrained optimization problem for the number of individuals over a threshold is \begin{align}\label{eq:optimization_threshold} \max_{S \; : \; |S| = k} g(S). \end{align} These discrete optimization problems become difficult to solve for all $k$ targets simultaneously in real social networks, which can be quite large. 
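Before describing the algorithm, the following sketch (our own, with illustrative matrices and helper functions we name ourselves) shows how the objectives can be evaluated for a candidate target set $S$ by solving the perturbed linear system, together with the simple greedy loop discussed next.

\begin{verbatim}
import numpy as np

def equilibrium(G, F, theta0, p, S):
    # Expected equilibrium opinions of the non-stubborn agents when the new
    # stubborn agent (opinion 1, communication probability p) targets set S.
    n = G.shape[0]
    E = np.zeros((n, n))
    b = np.zeros(n)
    for i in S:
        E[i, i] = p
        b[i] = p
    return -np.linalg.solve(G - E, F @ theta0 + b)

def f(G, F, theta0, p, S):
    return equilibrium(G, F, theta0, p, S).sum()            # opinion sum

def g(G, F, theta0, p, S, tau=0.5):
    return (equilibrium(G, F, theta0, p, S) > tau).sum()    # users above threshold

def greedy(G, F, theta0, p, k, objective):
    # Pick k targets one at a time, each time adding the node that most
    # increases the objective; candidates could be pre-screened by centrality.
    S, remaining = [], set(range(G.shape[0]))
    for _ in range(k):
        best = max(remaining, key=lambda i: objective(G, F, theta0, p, S + [i]))
        S.append(best)
        remaining.discard(best)
    return S

# Toy example reusing the two non-stubborn agents from the earlier simulation sketch.
G = np.array([[-0.5, 0.2], [0.2, -0.5]])
F = np.array([[0.3, 0.0], [0.0, 0.3]])
theta0 = np.array([1.0, 0.0])
print(greedy(G, F, theta0, p=0.2, k=1, objective=f))
\end{verbatim}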
One solution to this is to solve for one target at a time in a greedy manner. This means that in each iteration we choose the target which gives the largest increase in the objective function. This approach greatly reduces the complexity of the problems and allows them to be solved for large networks. While we cannot provide any performance guarantees for the threshold objective using this greedy approach, we do have a guarantee for the sum of opinions. \begin{theorem}\label{thm:submodular} For an arbitrary instance of $\mathbf{G}$, $\mathbf{F}$, and $\theta_{\mathcal{V}_0}$ the set function $f(\cdot)$ is monotone and submodular. \end{theorem} Because the objective is monotone and submodular, a greedy approach to maximizing the sum of opinions will produce a solution within a factor of $1-e^{-1}$ of the optimum \citep{nemhauser1978analysis}. In Section \ref{sec:results} we present performance results of greedy solutions for these two objective functions. For very large networks, even our proposed greedy approach for stubborn agent placement remains computationally challenging. For example, just solving for the equilibrium opinions can take over a second on networks of hundreds of thousands of nodes. For each iteration of our greedy approach, this equilibrium calculation must be repeated for each potential target node. For networks with hundreds of thousands of nodes, the time required for each greedy iteration can be on the order of days. If one wants to find hundreds of targets, the resulting computation can take weeks. This computational burden can be reduced by not checking every potential target. However, this could result in severely suboptimal stubborn agent placement. To overcome these challenges, we propose some useful computational techniques. First, we do not calculate the equilibrium for each potential target. Instead, we only check a small subset of the targets. This subset consists of the non-stubborn agents with the highest harmonic influence centrality in the initial network before anyone is targeted. The logic here is that high centrality targets will likely give large gains in the objective functions we consider. In each greedy iteration, we only calculate the equilibrium for the potential targets in this subset. Calculating harmonic influence centrality requires solving for the equilibrium twice for each non-stubborn individual in the network. We do not recalculate the centrality in subsequent iterations. Second, we parallelize the calculation of the equilibrium. In each iteration, we simultaneously calculate the equilibrium of all potential targets in the subset. If enough processors are available, the runtime of this step can be reduced to the time for a single equilibrium calculation. With the resources we had available, we could calculate several hundred equilibria in parallel, increasing our speed by nearly two orders of magnitude. \section{Results}\label{sec:results} To understand how much impact a stubborn agent can have on the opinions in a network, we solve the opinion optimization problems described in Section \ref{sec:optimization} with different objective functions on two real social networks. We present a novel method based on neural networks to identify stubborn and non-stubborn agents in a network. We then show how our greedy approach is more effective in optimizing the opinion objective functions than other simpler heuristics. \subsection{Datasets}\label{ssec:datasets} We consider two datasets from the social network Twitter about certain geo-political events. 
The first dataset consists of Twitter users discussing Brexit, the planned British departure from the European Union. The second consists of Twitter users discussing the Gilets Jaunes protests in France. We chose these events because, given their significance, there may be interest in shifting opinions about them. We now provide some background about these events. \textbf{Brexit.} Brexit is the withdrawal process of the United Kingdom (UK) from the European Union (EU). While Brexit began with a vote in 2016, in this work, we focus on the time period from September 2018 to March 2019 when the British government worked on constructing a formal plan for executing Brexit. \textbf{Gilets Jaunes.} Gilets Jaunes, or Yellow Vests, is a French populist movement that started in November 2018. Although it was initially a response to the sudden rise in fuel prices, it quickly became a generalized protest against the government of President Emmanuel Macron. The protests have been going on every Saturday since November 2018, with each week's protest called a new ``Acte'' by the protesters. In this work, we focus on social network data about the Gilets Jaunes protests from February 2019 to May 2019. For each event, we identified a set of relevant keywords. We then collected every post, or \emph{tweet}, on the social network site Twitter containing these keywords during the relevant collection periods. We also collected the follower edges between all users who posted these tweets for each event. This provided us with the follower network of Twitter users discussing each event. In addition, we were able to measure the posting rate for each user by counting the number of tweets they posted during the data collection period. We provide basic information about the datasets in Table \ref{table:NNdata_stat}. Further details on the dataset construction are provided in Section \ref{sec:data_analysis}. \begin{table}[!hbt] \centering \caption{Basic information about the Twitter datasets (M denotes million).} \label{table:NNdata_stat} \centering \begin{tabular}{|l|l|c|c|c|} \hline Event & Data collection & Number of & Number of & Number of\\ & period & tweets & follower edges & users\\ \hline Brexit & September 2018 to February 2019 & 27M & 18.5M &104,755 \\ \hline Gilets Jaunes & February 2019 to May 2019 & 3.2M & 2.3M & 40,456 \\ \hline \end{tabular} \end{table} \subsection{Opinions and Stubborn Users} To apply our equilibrium model, we require the follower network, posting rate, and stubborn user opinions. We already have the first two items from the raw data. Next we must identify the stubborn users. We assessed stubbornness using the content of the tweets of the users. This was done by building a neural network to measure the tweet opinions. To do this we developed a novel approach to label tweets with a ground truth opinion in order to construct a training dataset for the neural network. We first identified several extremely polarized hashtags for each event. These are words or phrases that exhibit strong support for or opposition to the event. The complete lists of hashtags for Brexit and Gilets Jaunes are found in Section \ref{sec:data_analysis}. We manually labeled these hashtags as either pro-event or anti-event. This is done using domain expertise, and does not constitute a difficult task. Then we identified users in our dataset that had the hashtags in their profile description. 
If a user uses any of the hashtags from the pro-event list, and none from the anti-event list, then the user is labeled as pro-event. The same process is done for the anti-event list. After this user labeling is done, all tweets in the dataset belonging to any anti-event users are given an opinion of zero, and all tweets of pro-event users are given an opinion of one. The logic here is that if a user puts the extremely polarized hashtags in their profile description, they are broadcasting a very strong signal about their opinion. It is then highly likely that any tweet they post about the event will have a very extreme opinion. Using our approach, we are able to efficiently label hundreds of thousands of tweets for the two events. Details of the training data are provided in Table \ref{table:training_set}. \begin{table}[!hbt] \centering \caption{Details on the neural network training data for Brexit and Gilets Jaunes. The number of tweets in the training data and number of users who posted these tweets for and against each event are shown.} \label{table:training_set} \centering \begin{tabular}{|l|c|c|c|c|} \hline Event &Pro-event tweets & Anti-event tweets & Pro-event users & Anti-event users \\ \hline Brexit & 400,000 & 400,000 & 1,935 & 6,863 \\ \hline Gilets Jaunes & 130,000 & 130,000 & 383 & 2,354 \\ \hline \end{tabular} \end{table} Once we had labeled the tweets, we could train the neural network. We used a standard architecture developed in \cite{kim2014convolutional}. For each event we trained on 80\% of the labeled data and tested on the remaining 20\%. We used the deep learning library \emph{Keras} \citep{chollet2015}, and trained our model with a cross-entropy loss over five epochs on a single CPU. With this configuration, the training time was a few hours at most. The resulting performance is quite good. The neural network achieves an accuracy of 86\% on the testing data for Brexit and 83\% on the testing data for Gilets Jaunes. For a more qualitative demonstration of the accuracy of the neural network, we show in Tables \ref{table:exTweets_BREXIT} and \ref{table:exTweets_YELLOW_VESTS} the opinions it measures for tweets in the Brexit and Gilets Jaunes datasets, respectively. As can be seen, the opinion estimates of the neural network align with the text of the tweets. Details of the neural network architecture and training process are provided in Section \ref{sec:neural_network}. \begin{table}[h!] \begin{center} \caption{Tweets and their opinion scores given by the neural network for the Brexit dataset. An opinion of zero is anti-Brexit and an opinion of one is pro-Brexit.} \label{table:exTweets_BREXIT} \begin{tabular}{|p{4in}|c|} \hline \multicolumn{1}{|c|}{Tweet} & Polarity \\ \hline \#stopbrexit \#PeoplesVoter\#brexit \#Eunurses \#nurseshortage & 0.03 \\ \hline Britain will receive an economic boost on the back &\\ of a Brexit deal with the European Union, Philip Hammond &\\ has again claimed & 0.63 \\ \hline @Nigel\_Farage Wait for the remoaners to make stupid comments&\\ of Russian interference on Brexit & 0.76\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h!] \begin{center} \caption{Tweets and their opinion scores given by the neural network for the Gilets Jaunes dataset. 
An opinion of zero is anti-Gilets Jaunes and an opinion of one is pro-Gilets Jaunes.} \label{table:exTweets_YELLOW_VESTS} \begin{tabular}{|p{4in}|c|} \hline \multicolumn{1}{|c|}{Tweet} & Polarity \\ \hline Il n'y a aucune raison que leurs revendications passent& \\ avant d'autres, quelques dizaines de milliers repr\'esentant & \\ une minorit\'e ne vont pas d\'ecider pour la majorit\'e. & 0.0\\ \hline \#Giletsjaunes \#Nancy Les manifestants ont r\`eussi \`a & \\ entrer dans le périmètre interdit dans le centre ville. & 0.5 \\ \hline Aucun essoufflement pour l'\#ActeXV des \#GiletsJaune! & 0.85 \\\hline \end{tabular} \end{center} \end{table} Our final step was to identify which users were stubborn based on their opinions. We used the trained neural network to estimate the opinion of all tweets in our datasets. Then we averaged the opinions of each user's tweets to obtain their opinions. We determined which users were stubborn by setting lower and upper opinion intervals. Any user whose opinion falls within either of these intervals is declared stubborn. We made the assumption that people with more extreme opinions (close to zero or one) are stubborn. Previous work in opinion dynamics supports this definition of stubborn. For example, \cite{martins2013building} define an \emph{inflexible agent} as someone who has a very strong, extreme opinion. Further evidence is provided by \cite{moussaid2013social} who found that the majority of people systematically keep their opinion when their own confidence exceeds that of their partner. People with extreme opinions are generally confident in their beliefs. This suggests that people with more extreme opinions are likely to be stubborn. For our datasets, we chose $[0.0,0.1]$ and $[0.9,1.0]$ as the stubborn intervals. We performed robustness checks and found that our opinion optimization results were not sensitive to the precise values of these intervals, as long as the values were reasonable and left sufficient non-stubborn users in the network. Using these stubborn intervals, we have 81,043 non-stubborn users and 23,705 stubborn users for Brexit and 38,483 non-stubborn users and 1,973 stubborn users for Gilets Jaunes. For Brexit there are 6,147 users in $[0.0,0.1]$ and 1,555 users in $[0.9,1.0]$. For Gilets Jaunes there are 1,973 users in $[0.0,0.1]$ and only 134 users in $[0.9,1.0]$. \begin{table}[h!] \begin{center} \caption{Number of stubborn and non-stubborn users in the Brexit and Gilets Jaunes datasets.} \label{table:stubnotstub_stats} \begin{tabular}{|l|c|c|} \hline Dataset & Brexit & Gilets Jaunes\\ \hline Number of non-stubborn users & 81,043 & 38,483 \\ \hline Number of stubborn users & 23,705 & 1,973\\ \hline Number of stubborn users in $[0.9,1.0]$ & 5,893 & 134\\ \hline Number of stubborn users in $[0.0,0.1]$ & 14,950 & 1,839\\ \hline \end{tabular} \end{center} \end{table} \subsection{Performance} We applied the greedy algorithm from Section \ref{ssec:algo} to target non-stubborn individuals in the Brexit and Gilets Jaunes networks in order to maximize the mean opinion and the number of individuals with opinion greater than 0.5. For reference, we compared this algorithm to a set of benchmark targeting algorithms which we now describe. \begin{itemize} \item[\textbf{Out-degree.}] This algorithm targets the nodes in order of decreasing out-degree (follower count). \item[\textbf{Posting rate.}] This algorithm targets the nodes in order of decreasing posting rate. 
\item[\textbf{Harmonic influence centrality.}] This algorithm targets the nodes in order of decreasing harmonic influence centrality. \end{itemize} Each of these benchmark algorithms exploits a different aspect of the equilibrium in equation \eqref{eq:ohm}. The out-degree algorithm focuses on the sum over neighbors. The logic here is that users with many followers have more influence. The posting rate algorithm focuses on the rate term in the equation. The logic here is that users' opinions will tend to align with their active neighbors. The harmonic influence centrality algorithm combines these two aspects to identify active users with a large reach. For the greedy algorithm, we pre-computed the harmonic influence centrality of all non-stubborn users in each network, as mentioned in Section \ref{ssec:algo}. We then checked the 1,000 users with the highest harmonic influence centrality as potential targets in each iteration of the algorithm. We had the stubborn agent post at the average rate of the non-stubborn users in the network. This would prevent the agent from appearing too suspicious and potentially being flagged by spam detection algorithms on the social network. Each algorithm's performance for the different networks and objective functions is shown in Figures \ref{fig:mean_mean} and \ref{fig:threshold_mean}. We see similar trends for all the scenarios. The posting rate algorithm performs worst, and harmonic influence centrality does better than out-degree. Our greedy algorithm has the best performance, which shows the importance of the network structure in the targeting process. For the mean objective function we see a rapid increase in the mean opinion when fewer than 100 users are targeted, after which we see a linear growth in the opinion. This is interesting because it suggests a few targets can have a large impact on the opinions. For the threshold objective function, we see different results for the two networks. In Brexit, the greedy algorithm increases the objective function at a near linear rate as more users are targeted, while harmonic influence centrality saturates. However, in Gilets Jaunes, the greedy algorithm initially outperforms harmonic influence centrality, but then saturates as more targets are added. In contrast, harmonic influence centrality steadily increases at a linear rate, eventually catching up with the greedy algorithm. For Gilets Jaunes, we see that targeting 100 users with the greedy algorithm moves 403 users over the threshold. In Brexit, targeting 100 users with the greedy algorithm puts 1,197 users over the threshold. We see a greater efficiency of the targeting in Brexit compared to Gilets Jaunes. This may be due to the initial opinion distribution. For Brexit, there are initially approximately 6,000 non-stubborn users above the threshold, which is 7.4\% of the non-stubborn users. Therefore, there are many people available to be pushed over the threshold. In contrast, in the Gilets Jaunes network there are initially about 28,000 non-stubborn users above the threshold, or 73.7\% of the non-stubborn users. In this case, there are fewer people available to be pushed over the threshold. We suspect this is the reason the Brexit targeting is more efficient. \begin{figure*} \centering \includegraphics[scale = .6]{figures/mean_opinion_mean_rate} \caption{Plot of mean opinion versus number of targeted non-stubborn individuals in the Brexit network (left) and Gilets Jaunes network (right) for different targeting algorithms (HIC is harmonic influence centrality).
The stubborn agent posts at a rate equal to the mean rate of the network. } \label{fig:mean_mean} \end{figure*} \begin{figure*} \centering \includegraphics[scale = .6]{figures/threshold_opinion_mean_rate} \caption{Plot of the number of non-stubborn individuals with opinion over the 0.5 threshold versus number of targeted non-stubborn individuals in the Brexit network (left) and Gilets Jaunes network (right) for different targeting algorithms (HIC is harmonic influence centrality). The stubborn agent posts at a rate equal to the mean rate of the network. } \label{fig:threshold_mean} \end{figure*} \section{Conclusion}\label{sec:conclusion} We have proposed a model for opinion dynamics in a social network where individuals become more stubborn with time. We were able to derive convergence results for this non-stationary model, one of the first results of its kind. Using our convergence results, we formulated an optimization problem for targeting people in a network with stubborn agents so as to have maximal impact on their opinions. We showed that this is a submodular problem, allowing performance guarantees for a greedy algorithm. Finally, we showed how to apply our greedy algorithm to real social networks. A key component of this was developing a neural network to identify stubborn users based on the content they post. Tests on these networks showed that the greedy algorithm outperforms several benchmarks, allowing one to obtain greater influence with a limited number of targets. This algorithm is a useful operational tool for countering influence campaigns and shaping opinion in large social networks. As the role these networks play in our society increases, tools such as our greedy targeting algorithm will continue to grow in importance.
\section*{Preamble} This paper is devoted to Henri Faure who celebrated his $80^{{\rm th}}$ birthday on July 12, 2018. Henri is well known for his pioneering work on low-discrepancy sequences. As an example we would like to mention his famous paper \cite{fau} from 1982 in which he gave one of the first explicit constructions of digital sequences in arbitrary dimension with low star discrepancy. These sequences are nowadays known as {\it Faure sequences}. I met Henri for the first time at the MCQMC conference 2002 in Singapore. Later, during several visits of Henri in Linz, we started a fruitful cooperation which continues to this day. I would like to thank Henri for this close cooperation and for his great friendship and wish him and his family all the best for the future. \section{Introduction} \label{sec:1} We consider infinite sequences $\mathcal{S}=(\boldsymbol{x}_n)_{n \ge 0}$ of points $\boldsymbol{x}_n$ in the $s$-dimensional unit cube $[0,1)^s$. For $N \in \mathbb{N}$ let $\mathcal{S}_N=(\boldsymbol{x}_n)_{n=0}^{N-1}$ be the initial segment of $\mathcal{S}$ consisting of the first $N$ elements. According to Weyl~\cite{weyl} a sequence $\mathcal{S}=(\boldsymbol{x}_n)_{n \ge 0}$ is uniformly distributed (u.d.) if for every axes-parallel box $J \subseteq [0,1)^s$ it is true that $$\lim_{N \rightarrow \infty} \frac{\#\{n \in \{0,\ldots,N-1\} \ : \ \boldsymbol{x}_n \in J\}}{N}={\rm Volume}(J).$$ An extensive introduction to the theory of uniform distribution of sequences can be found in the book of Kuipers and Niederreiter~\cite{kuinie}. There are several equivalent definitions of uniform distribution of a sequence and one of them is of particular importance for quasi-Monte Carlo (QMC) integration. Weyl proved that a sequence $\mathcal{S}$ is u.d. if and only if for every Riemann-integrable function $f:[0,1]^s \rightarrow \mathbb{R}$ we have \begin{equation}\label{udfa} \lim_{N \rightarrow \infty} \frac{1}{N}\sum_{n=0}^{N-1} f(\boldsymbol{x}_n)=\int_{[0,1]^s} f(\boldsymbol{x}) \, {\rm d}\boldsymbol{x}. \end{equation} The average of function evaluations on the left-hand side is nowadays called a {\it QMC rule}, $$Q_N(f)=\frac{1}{N}\sum_{n=0}^{N-1} f(\boldsymbol{x}_n).$$ Hence, in order to have a QMC rule converging to the true value of the integral of a function it has to be based on a u.d. sequence. A quantitative version of \eqref{udfa} can be stated in terms of discrepancy. \begin{definition} For a finite initial segment $\mathcal{S}_N$ of a sequence (or a finite point set) in $[0,1)^s$ the {\it local discrepancy function} $\Delta_{\mathcal{S}_N}:[0,1]^s \rightarrow \mathbb{R}$ is defined as $$\Delta_{\mathcal{S}_N}(\boldsymbol{t})=\frac{\#\{n \in \{0,1,\ldots,N-1\}\ : \ \boldsymbol{x}_n \in [\boldsymbol{0},\boldsymbol{t})\}}{N}- t_1 t_2 \cdots t_s,$$ where $\boldsymbol{t}=(t_1,t_2,\ldots,t_s)$, $[\boldsymbol{0},\boldsymbol{t})=[0,t_1)\times [0,t_2)\times \ldots \times [0,t_s)$, and hence $t_1 t_2\cdots t_s={\rm Volume}([\boldsymbol{0},\boldsymbol{t}))$. For $p \ge 1$ the {\it $L_p$ discrepancy} of $\mathcal{S}_N$ is defined as the $L_p$ norm of the local discrepancy function $$L_{p,N}(\mathcal{S}_N)=\|\Delta_{\mathcal{S}_N}\|_{L_p([0,1]^s)}=\left(\int_{[0,1]^s} |\Delta_{\mathcal{S}_N}(\boldsymbol{t})|^p \,\mathrm{d} \boldsymbol{t}\right)^{1/p}$$ with the usual adaptions if $p=\infty$. In this latter case one often talks about {\it star discrepancy} which is denoted by $D_N^*(\mathcal{S}_N):=L_{\infty,N}(\mathcal{S}_N)$. 
For an infinite sequence $\mathcal{S}$ in $[0,1)^s$ we denote the $L_p$ discrepancy of the first $N$ points by $L_{p,N}(\mathcal{S})= L_{p,N}(\mathcal{S}_N)$ for $N \ge 1$. \end{definition} It is well-known that a sequence $\mathcal{S}$ is u.d. if and only if $\lim_{N \rightarrow \infty} L_{p,N}(\mathcal{S})=0$ for some $p \ge 1$. A quantitative version of \eqref{udfa} is the famous {\it Koksma-Hlawka inequality} which states that for every function $f:[0,1]^s \rightarrow \mathbb{R}$ with bounded variation $V(f)$ in the sense of Hardy and Krause and for every finite sequence $\mathcal{S}_N$ of points in $[0,1)^s$ we have $$\left|\int_{[0,1]^s} f(\boldsymbol{x}) \, {\rm d}\boldsymbol{x}- \frac{1}{N}\sum_{n=0}^{N-1} f(\boldsymbol{x}_n)\right| \le V(f) D_N^*(\mathcal{S}_N).$$ The Koksma-Hlawka inequality is the fundamental error estimate for QMC rules and the basis for QMC theory. Nowadays there exist several versions of this inequality which may also be based on the $L_p$ discrepancy or other norms of the local discrepancy function. One often speaks about ``Koksma-Hlawka type inequalities''. For more information and for introductions to QMC theory we refer to \cite{DKS,DP2010,LP14_book,niesiam}. It is clear that QMC requires sequences with low discrepancy in some sense and this motivates the study of ``low discrepancy sequences''. On the other hand discrepancy is also an interesting topic by itself that is intensively studied (see, e.g., the books \cite{BC,CST,DT,DP2010,kuinie,Mat99,niesiam}). In the following we collect some well-known facts about $L_p$ discrepancy of finite and infinite sequences. \section{Known facts about the $L_p$ discrepancy} We begin with results on finite sequences: for every $p \in(1,\infty]$ and $s \in \mathbb{N}$ there exists a $c_{p,s}>0$ such that for every finite $N$-element sequence $\mathcal{S}_N$ in $[0,1)^s$ with $N \ge 2$ we have \begin{equation*} L_{p,N}(\mathcal{S}_N) \ge c_{p,s} \frac{(\log N)^{\frac{s-1}{2}}}{N} \ \ \mbox{ and } \ \ D_N^{\ast}(\mathcal{S}_N) \ge c_{\infty,s} \frac{(\log N)^{\frac{s-1}{2}+\eta_s}}{N} \end{equation*} for some $\eta_s \in (0,\tfrac{1}{2})$. The result on the left hand side for $p \ge 2$ is a celebrated result by Roth~\cite{R54} from 1954 that was extended later by Schmidt~\cite{S77} to the case $p \in (1,2)$. The general lower bound for the star discrepancy is an important result of Bilyk, Lacey and Vagharshakyan~\cite{BLV08} from 2008. As shown by Hal\'{a}sz~\cite{H81}, the $L_p$ estimate is also true for $p=1$ and $s=2$, i.e., there exists a positive constant $c_{1,2}$ with the following property: for every finite sequence $\mathcal{S}_N$ in $[0,1)^2$ with $N \ge 2$ we have \begin{equation}\label{lbdl1D2dipts} L_{1,N}(\mathcal{S}_N) \ge c_{1,2} \frac{\sqrt{\log N}}{N}. \end{equation} Schmidt showed for $s=2$ the improved lower bound on star discrepancy $$D_N^*(\mathcal{S}_N) \ge c_{\infty,2} \frac{\log N}{N}$$ for some $c_{\infty,2}>0$. On the other hand, it is known that for every $s,N \in \mathbb{N}$ there exist finite sequences $\mathcal{S}_N$ in $[0,1)^s$ such that $$D_N^{\ast}(\mathcal{S}_N) \lesssim_s \frac{(\log N)^{s-1}}{N}.$$ First examples for such sequences are the Hammersley point sets, see, e.g., \cite[Section~3.4.2]{DP2010} or \cite[Section~3.2]{niesiam}. Similarly, for every $s,N \in \mathbb{N}$ and every $p \in [1,\infty)$ there exist finite sequences $\mathcal{S}_N$ in $[0,1)^s$ such that \begin{equation}\label{uplpps} L_{p,N}(\mathcal{S}_N) \lesssim_{s,p} \frac{(\log N)^{\frac{s-1}{2}}}{N}. 
\end{equation} Hence, for $p \in (1,\infty)$ and arbitrary $s \in \mathbb{N}$ we have matching lower and upper bounds. For both $p=1$ and $p=\infty$ we have matching lower and upper bounds only for $s=2$. The result in \eqref{uplpps} was proved by Davenport \cite{D56} for $p= 2$, $s= 2$, by Roth \cite{R80} for $p= 2$ and arbitrary $s$ and finally by Chen \cite{C80} in the general case. Other proofs were found by Frolov~\cite{Frolov}, Chen~\cite{C83}, Dobrovol'ski{\u\i}~\cite{Do84}, Skriganov~\cite{Skr89, Skr94}, Hickernell and Yue~\cite{HY00}, and Dick and Pillichshammer~\cite{DP05b}. For more details on the early history of the subject see the monograph \cite{BC}. Apart from Davenport, who gave an explicit construction in dimension $s=2$, these results are pure existence results and explicit constructions of point sets were not known until the beginning of this millennium. First explicit constructions of point sets with optimal order of $L_2$ discrepancy in arbitrary dimensions have been provided in 2002 by Chen and Skriganov \cite{CS02} for $p=2$ and in 2006 by Skriganov \cite{S06} for general $p$. Other explicit constructions are due to Dick and Pillichshammer \cite{DP14a} for $p=2$, and Dick \cite{D14} and Markhasin \cite{M15} for general $p$. Before we summarize results about infinite sequences some words about the conceptual difference between the discrepancy of finite and infinite sequences are appropriate. Matou\v{s}ek~\cite{Mat99} explained this in the following way: while for finite sequences one is interested in the distribution behavior of the whole sequence $(\boldsymbol{x}_0,\boldsymbol{x}_1,\ldots,\boldsymbol{x}_{N-1})$ with a fixed number of elements $N$, for infinite sequences one is interested in the discrepancy of all initial segments $(\boldsymbol{x}_0)$, $(\boldsymbol{x}_0,\boldsymbol{x}_1)$, $(\boldsymbol{x}_0,\boldsymbol{x}_1,\boldsymbol{x}_2)$, \ldots, $(\boldsymbol{x}_0,\boldsymbol{x}_1,\boldsymbol{x}_2,\ldots,\boldsymbol{x}_{N-1})$, simultaneously for $N \in \mathbb{N}$. In this sense the discrepancy of finite sequences can be viewed as a static setting and the discrepancy of infinite sequences as a dynamic setting. Using a method from Pro{\u\i}nov~\cite{Pro86} (see also \cite{DP14a}) the results about lower bounds on $L_p$ discrepancy for finite sequences can be transferred to the following lower bounds for infinite sequences: for every $p \in(1,\infty]$ and every $s \in \mathbb{N}$ there exists a $C_{p,s}>0$ such that for every infinite sequence $\mathcal{S}$ in $[0,1)^s$ \begin{equation}\label{lbdlpdiseq} L_{p,N}(\mathcal{S}) \ge C_{p,s} \frac{(\log N)^{\frac{s}{2}}}{N} \ \ \ \ \mbox{infinitely often} \end{equation} and \begin{equation}\label{bdstdisequ} D_N^{\ast}(\mathcal{S}) \ge C_{\infty,s} \frac{(\log N)^{\frac{s}{2}+\eta_s}}{N} \ \ \ \ \mbox{infinitely often,} \end{equation} where $\eta_s \in (0,\tfrac{1}{2})$ is independent of the concrete sequence. 
For $s=1$ the result holds also for the case $p=1$, i.e., for every $\mathcal{S}$ in $[0,1)$ we have \begin{equation*} L_{1,N}(\mathcal{S}) \ge c_{1,1} \frac{\sqrt{\log N}}{N} \ \ \ \ \mbox{infinitely often,} \end{equation*} and the result on the star discrepancy can be improved to (see Schmidt \cite{S72}; see also \cite{B82,Lar15,LarPu15}) \begin{equation}\label{lpseqschm} D_N^{\ast}(\mathcal{S}) \ge c_{\infty,1} \frac{\log N}{N} \ \ \ \ \mbox{infinitely often.} \end{equation} On the other hand, for every dimension $s$ there exist infinite sequences $\mathcal{S}$ in $[0,1)^s$ such that \begin{equation}\label{bdlds} D_N^{\ast}(\mathcal{S})\lesssim_s \frac{(\log N)^s}{N} \ \ \ \ \mbox{for all $N \ge 2$.} \end{equation} Informally one calls a sequence a {\it low-discrepancy sequence} if its star discrepancy satisfies the bound \eqref{bdlds}. Examples of low-discrepancy sequences are: \begin{itemize} \item Kronecker sequences $(\{n \boldsymbol{\alpha}\})_{n \ge 0}$, where $\boldsymbol{\alpha} \in \mathbb{R}^s$ and where the fractional part function $\{\cdot\}$ is applied component-wise. In dimension $s=1$, if $\alpha \in \mathbb{R}$ has bounded continued fraction coefficients, then the Kronecker sequence $(\{n \alpha\})_{n \ge 0}$ has star discrepancy of exact order of magnitude $\log N/N$; see \cite[Chapter~3]{niesiam} for more information. \item Digital sequences: the prototype of a digital sequence is the van der Corput sequence in base $b$, which was introduced by van der Corput~\cite{vdc35} in 1935. For an integer $b \ge 2$ (the ``basis'') the $n^{{\rm th}}$ element of this sequence is given by $x_n=n_0 b^{-1}+n_1 b^{-2}+ n_2 b^{-3}+\cdots$ whenever $n$ has $b$-adic expansion $n=n_0+n_1 b+n_2 b^2+\cdots$. The van der Corput sequence has star discrepancy of exact order of magnitude $\log N/N$; see the recent survey article \cite{FKP} and the references therein. Multi-dimensional extensions of the van der Corput sequence are the Halton sequence \cite{H60}, which is the component-wise concatenation of van der Corput sequences in pairwise co-prime bases, and digital $(t,s)$-sequences, where the basis $b$ is the same for all coordinate directions. First examples of such sequences were given by Sobol' \cite{sob} and by Faure \cite{fau}. Later the general unifying concept was introduced by Niederreiter~\cite{N87} in 1987. Halton sequences in pairwise co-prime bases as well as digital $(t,s)$-sequences have star discrepancy of order of magnitude at most $(\log N)^s/N$; see Section~\ref{digtssequ}. \end{itemize} Except for the one-dimensional case, there is a gap between the exponents of $\log N$ in the lower and upper bounds on the star discrepancy of infinite sequences (cf. Eq.~\eqref{bdstdisequ} and Eq.~\eqref{bdlds}), which seems to be very difficult to close. There is a grand conjecture in discrepancy theory that is shared by many colleagues (but it must be mentioned that there are also other opinions; see, e.g., \cite{BL14}): \begin{conjecture}\label{con1} For every $s \in \mathbb{N}$ there exists a $c_s>0$ with the following property: for every $\mathcal{S}$ in $[0,1)^s$ it holds true that $$D_N^{\ast}(\mathcal{S}) \ge c_s \frac{(\log N)^s}{N}\ \ \ \ \mbox{infinitely often.}$$ \end{conjecture} For the $L_p$ discrepancy of infinite sequences with finite $p$ the situation is different. It was widely assumed that the general lower bound of Roth-Schmidt-Pro{\u\i}nov in Eq.
\eqref{lbdlpdiseq} is optimal in the order of magnitude in $N$ but until recently there was no proof of this conjecture (although it was some times quoted as a proven fact). In the meantime there exist explicit constructions of infinite sequences with optimal order of $L_p$ discrepancy in the sense of the general lower bound \eqref{lbdlpdiseq}. These constructions will be presented in Section~\ref{secHOdS}. \section{Discrepancy of digital sequences} In the following we give the definition of digital sequences in prime bases $b$. For the general definition we refer to \cite[Section~4.3]{niesiam}. From now on let $b$ be a prime number and let $\mathbb{F}_b$ be the finite field of order $b$. We identify $\mathbb{F}_b$ with the set of integers $\{0,1,\ldots,b-1\}$ equipped with the usual arithmetic operations modulo $b$. \begin{definition}[Niederreiter 1987] A digital sequence is constructed in the following way: \begin{itemize} \item choose $s$ infinite matrices $C_1,\ldots, C_s \in \mathbb{F}_b^{\mathbb{N} \times \mathbb{N}}$; \item for $n \in \mathbb{N}_0$ of the form $n = n_0 + n_1 b + n_2 b^2+ \cdots$ and $j=1,2,\ldots,s$ compute (over $\mathbb{F}_b$) the matrix-vector products \begin{equation*} C_j \left( \begin{array}{l} n_0\\ n_1\\ n_2\\ \vdots \end{array}\right) =:\left( \begin{array}{l} x_{n,j,1}\\ x_{n,j,2}\\ x_{n,j,3}\\ \vdots \end{array} \right); \end{equation*} \item put \begin{equation*} x_{n,j} = \frac{x_{n,j,1}}{b} + \frac{x_{n,j,2}}{b^2} + \frac{x_{n,j,3}}{b^3}+\cdots \ \ \ \mbox{ and }\ \ \ \boldsymbol{x}_n = (x_{n,1}, \ldots, x_{n,s}). \end{equation*} \end{itemize} The resulting sequence $\mathcal{S}(C_1,\ldots,C_s)=(\boldsymbol{x}_n)_{n \ge 0}$ is called a {\it digital sequence over $\mathbb{F}_b$} and $C_1,\ldots,C_s$ are called the {\it generating matrices} of the digital sequence. \end{definition} \subsection{A metrical result} It is known that almost all digital sequences in a fixed dimension $s$ are low-discrepancy sequences, up to some $\log\log N$-term. The ``almost all'' statement is with respect to a natural probability measure on the set of all $s$-tuples $(C_1,\ldots,C_s)$ of $\mathbb{N} \times \mathbb{N}$ matrices over $\mathbb{F}_b$. For the definition of this probability measure we refer to \cite[p.~107]{LP14}. \begin{theorem}[Larcher 1998, Larcher \& Pillichshammer 2014]\label{thmmetric} Let $\varepsilon>0$. For almost all $s$-tuples $(C_1,\ldots,C_s)$ with $C_j \in \mathbb{F}_b^{\mathbb{N} \times \mathbb{N}}$ the corresponding digital sequences $\mathcal{S}=\mathcal{S}(C_1,\ldots,C_s)$ satisfy $$D_N^*(\mathcal{S}) \lesssim_{b,s,\varepsilon} \frac{(\log N)^s (\log \log N)^{2+\varepsilon}}{N} \ \ \ \forall N \ge 2$$ and $$D_N^*(\mathcal{S}) \ge c_{b,s} \frac{(\log N)^s \log \log N}{N} \ \ \ \mbox{infinitely often.}$$ \end{theorem} The upper estimate has been shown by Larcher in \cite{L98} and a proof for the lower bound can be found in \cite{LP14}. A corresponding result for the sub-class of so-called digital Kronecker sequences can be found in \cite{L95} (upper bound) and \cite{LP14a} (lower bound). These results correspond to metrical discrepancy bounds for classical Kronecker sequences by Beck~\cite{be}. The question now arises whether there are $s$ tuples $(C_1,\ldots,C_s)$ of generating matrices such that the resulting digital sequences are low-discrepancy sequences and, if the answer is {\it yes}, which properties of the matrices guarantee low discrepancy. 
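Before turning to this question, the construction in the above definition can be made concrete with a small sketch. The following Python code is given for illustration only and works under the simplifying assumption of finite $m\times m$ generating matrices (which is exact for the first $b^m$ indices); with identity matrices it reproduces the van der Corput sequence in each coordinate.

\begin{verbatim}
import numpy as np

def digital_sequence_points(gen_matrices, n_points, b=2):
    """Points of a digital sequence over F_b from m x m generating matrices.

    gen_matrices: list of s integer arrays of shape (m, m) with entries in
    {0, ..., b-1}.  Digit expansions are truncated after m digits, which is
    exact as long as n_points <= b**m.
    """
    m = gen_matrices[0].shape[0]
    points = np.empty((n_points, len(gen_matrices)))
    for n in range(n_points):
        # b-adic digits of n:  n = n_0 + n_1*b + ... + n_{m-1}*b^{m-1}
        digits = np.array([(n // b**k) % b for k in range(m)])
        for j, C in enumerate(gen_matrices):
            y = C.dot(digits) % b          # matrix-vector product over F_b
            points[n, j] = np.sum(y / b ** np.arange(1, m + 1))
    return points

# Example: the identity matrix generates the van der Corput sequence in base 2.
I = np.eye(8, dtype=int)
print(digital_sequence_points([I], 8)[:, 0])   # 0, 1/2, 1/4, 3/4, 1/8, ...
\end{verbatim}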
Niederreiter found out that this depends on a certain linear independence structure of the rows of the matrices $C_1,\ldots,C_s$. This leads to the concept of digital $(t,s)$-sequences. \subsection{Digital $(t,s)$-sequences}\label{digtssequ} For $C \in \mathbb{F}_b^{\mathbb{N} \times \mathbb{N}}$ and $m \in \mathbb{N}$ denote by $C(m)$ the left upper $m \times m$ submatrix of $C$. For technical reasons one often assumes that the generating matrices $C_1,\ldots,C_s$ satisfy the following condition: let $C_j = (c_{k,\ell}^{(j)})_{k, \ell \in \mathbb{N}}$, then for each $\ell \in \mathbb{N}$ there exists a $K(\ell) \in \mathbb{N}$ such that $c_{k,\ell}^{(j)} = 0$ for all $k > K(\ell)$. This condition, which is condition (S6) in \cite[p.72]{niesiam}, guarantees that the components of the elements of a digital sequence have a finite digit expansion in base $b$. For the rest of the paper we tacitly assume that this condition is satisfied. (We remark that in order to include new important constructions to the concept of digital $(t,s)$-sequences, Niederreiter and Xing~\cite{NX1996,NX_book} use a truncation operator to overcome the above-mentioned technicalities. Such sequences are sometimes called $(t,s)$-sequences {\it in the broad sense}.) \begin{definition}[Niederreiter] Given $C_1,\ldots,C_s \in \mathbb{F}_b^{\mathbb{N} \times \mathbb{N}}$. If there exists a number $t \in \mathbb{N}_0$ such that for every $m \ge t$ and for all $d_1,\ldots,d_s\ge 0$ with $d_1+\cdots+d_s=m-t$ the \iffalse \begin{center} first $d_1$ rows of $C_1(m)$,\\ first $d_2$ rows of $C_2(m)$,\\ \ldots \\ first $d_s$ rows of $C_s(m)$ \end{center} \fi $$\left.\begin{array}{l} \mbox{first $d_1$ rows of $C_1(m)$,}\\ \mbox{first $d_2$ rows of $C_2(m)$,}\\ \ldots \\ \mbox{first $d_s$ rows of $C_s(m)$,} \end{array} \right\} \ \ \mbox{are linearly independent over $\mathbb{F}_b$,} $$ then the corresponding digital sequence $\mathcal{S}(C_1,\ldots,C_s)$ is called a {\it digital $(t,s)$-sequence over $\mathbb{F}_b$}. \end{definition} The technical condition from the above definition guarantees that every $b^m$-element sub-block $(\boldsymbol{x}_{k b^m},\boldsymbol{x}_{k b^m+1},\ldots,\boldsymbol{x}_{(k+1) b^m-1})=:\mathcal{S}_{m,k}$ of the digital sequence, where $m \ge t$ and $k \in \mathbb{N}_0$, is a $(t,m,s)$-net in base $b$, i.e., every so-called elementary $b$-adic interval of the form $$J=\prod_{j=1}^s \left[\frac{a_j}{b^{d_j}}, \frac{a_j+1}{b^{d_j}}\right) \ \ \ \ \ \ \mbox{with}\ {\rm Volume}(J)=b^{t-m}$$ contains the right share of elements from $\mathcal{S}_{m,k}$, which is exactly $b^t$. For more information we refer to \cite[Chapter~4]{DP2010} and \cite[Chapter~4]{niesiam}. Examples for digital $(t,s)$-sequences are generalized Niederreiter sequences which comprise the concepts of Sobol'-, Faure- and original Niederreiter-sequences, Niederreiter-Xing sequences, \ldots. We refer to \cite[Chapter~8]{DP2010} for a collection of constructions and for further references. An overview of the constructions of Niederreiter and Xing can also be found in \cite[Chapter~8]{NX_book}. It has been shown by Niederreiter~\cite{N87} that every digital $(t,s)$-sequence is a low-discrepancy sequence. 
The following result holds true: \begin{theorem}[Niederreiter 1987]\label{thmNie87} For every digital $(t,s)$-sequence $\mathcal{S}$ over $\mathbb{F}_b$ we have $$D_N^*(\mathcal{S}) \le c_{s,b} \, b^t \, \frac{(\log N)^s}{N}+ O\left(\frac{(\log N)^{s-1}}{N}\right).$$ \end{theorem} Later several authors worked on improvements of the implied quantity $c_{s,b}$, e.g. \cite{FK,K06}. The currently smallest values for $c_{s,b}$ were provided by Faure and Kritzer~\cite{FK}. More explicit versions of the estimate in Theorem~\ref{thmNie87} can be found in \cite{FL12,FL14,FL15}. For a summary of these results one can also consult \cite[Section~4.3]{FKP}. \begin{remark} Theorem~\ref{thmNie87} in combination with the lower bound in Theorem~\ref{thmmetric} shows that the set of $s$-tuples $(C_1,\ldots,C_s)$ of matrices that generate a digital $(t,s)$-sequence is a set of measure zero. \end{remark} Remember that the exact order of optimal star discrepancy of infinite sequences is still unknown (except for the one-dimensional case). From this point of view it might be still possible that Niederreiter's star discrepancy bound in Theorem~\ref{thmNie87} could be improved in the order of magnitude in $N$. However, it has been shown recently by Levin~\cite{Lev} that this is not possible in general. In his proofs Levin requires the concept of $d$-admissibility. He calls a sequence $(\boldsymbol{x}_n)_{n \ge 0}$ in $[0,1)^s$ {\it $d$-admissible} if $$\inf_{n>k \ge0} \|n\ominus k\|_b \|\boldsymbol{x}_n \ominus \boldsymbol{x}_k\|_b \ge b^{-d},$$ where $\log_b \|x\|_b =\lfloor \log_b x\rfloor$ and $\ominus$ is the $b$-adic difference. Roughly speaking, this means that the $b$-adic distance between elements from the sequence whose indices are close is not too small. \begin{theorem}[Levin 2017] Let $\mathcal{S}$ be a $d$-admissible $(t,s)$-sequence. Then $$D_N^{\ast}(\mathcal{S}) \ge c_{s,t,d} \frac{(\log N)^s}{N}\ \ \ \ \mbox{infinitely often.}$$ \end{theorem} In his paper, Levin gave a whole list of digital $(t,s)$-sequences that have the property of being $d$-admissible for certain $d$. This list comprises the concepts of generalized Niederreiter sequences (which includes Sobol'-, Faure- and original Niederreiter-sequences), Niederreiter-Xing sequences, \ldots. For a survey of Levin's result we also refer to \cite{KaSt}. It should also be mentioned that there is one single result by Faure~\cite{fau95} from the year 1995 who already gave a lower bound for a particular digital $(0,2)$-sequence (in dimension 2) which is also of order $(\log N)^2/N$. Levin's results \cite{lev0,Lev} are important contributions to the grand problem in discrepancy theory (cf. Conjecture~\ref{con1}). But they only cover the important sub-class of admissible $(t,s)$-sequences and allow no conclusion for arbitrary (including non-digital) sequences. \subsection{Digital $(0,1)$-sequences over $\mathbb{F}_2$} In this sub-section we say a few words about the discrepancy of digital $(0,1)$-sequence over $\mathbb{F}_2$, because in this case exact results are known. Let $b=2$ and let $I$ be the $\mathbb{N} \times \mathbb{N}$ identity matrix, that is, the matrix whose entries are 0 except for the entries on the main-diagonal which are 1. The corresponding one-dimensional digital sequence $\mathcal{S}(I)$ is the van der Corput sequence in base $2$ and in fact, it is also a digital $(0,1)$-sequence over $\mathbb{F}_2$. 
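For illustration, the star discrepancy of this sequence is easy to evaluate numerically: the following short sketch computes $N D_N^{\ast}(\mathcal{S}(I))$ for the first $N$ points, using the standard formula for the star discrepancy of a one-dimensional point set; for $N$ a power of two it returns the value $1$, in line with Fig.~\ref{fig1} below. The code is only an illustration and is not taken from the cited references.

\begin{verbatim}
def vdc(n, b=2):
    """n-th element of the van der Corput sequence in base b (n = 0, 1, ...)."""
    x, base = 0.0, 1.0 / b
    while n > 0:
        n, digit = divmod(n, b)
        x += digit * base
        base /= b
    return x

def star_discrepancy_1d(points):
    """Exact star discrepancy of a one-dimensional point set in [0,1)."""
    xs = sorted(points)
    N = len(xs)
    return max(max((i + 1) / N - x, x - i / N) for i, x in enumerate(xs))

for N in [2, 3, 4, 8, 16, 31, 32]:
    print(N, N * star_discrepancy_1d([vdc(n) for n in range(N)]))
\end{verbatim}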
The following is known: among all digital $(0,1)$-sequences over $\mathbb{F}_2$ the van der Corput sequence, which is the prototype of all digital constructions and whose star discrepancy is very well studied, has the worst star discrepancy; see \cite[Theorem~2]{P04}. More concretely, for every $\mathbb{N} \times \mathbb{N}$ matrix $C$ which generates a digital $(0,1)$-sequence $\mathcal{S}(C)$ over $\mathbb{F}_2$ we have \begin{equation}\label{estvdcdisc} D_N^*(\mathcal{S}(C)) \le D_N^*(\mathcal{S}(I)) \le \left\{ \begin{array}{l} \left(\frac{\log N}{3 \log 2} +1\right)\frac{1}{N},\\[0.7em] \frac{S_2(N)}{N}, \end{array}\right. \end{equation} where $S_2(N)$ denotes the {\it dyadic sum-of-digits function} of the integer $N$. The first bound on $D_N^*(\mathcal{S}(I))$ is a result of B\'{e}jian and Faure~\cite{befa77}. The factor $1/(3\log 2)$ conjoined with the $\log N$-term is known to be best possible, in fact, $$\limsup_{N \rightarrow \infty} \frac{N D_N^{\ast}(\mathcal{S}(I))}{\log N}= \frac{1}{3 \log 2}.$$ (The corresponding result for van der Corput sequences in arbitrary base can be found in \cite{Fau1981,fau07,K05}.) However, also the second estimate in terms of the dyadic sum-of-digits function, which follows easily from the proof of \cite[Theorem~3.5 on p. 127]{kuinie}, is very interesting. It shows that the star discrepancy of the van der Corput sequence (and of any digital $(0,1)$-sequence) is not always close to the high level of order $\log N/N$. If $N$ has only very few dyadic digits different from zero, then the star discrepancy is very small. For example, if $N$ is a power of two, then $S_2(N)=1$ and therefore $D_N^{\ast}(\mathcal{S}(I))\le 1/N$. The bound in \eqref{estvdcdisc} is demonstrated in Fig.~\ref{fig1}. \begin{figure}[ht] \begin{center} \includegraphics[width=10cm]{graph_disc_vdc2.pdf} \end{center} \caption{$N D_N^{\ast}(\mathcal{S}(I))$ compared with $\frac{\log N}{3 \log 2} +1$ (red line) for $N=2,3,\ldots,32$; if $N$ is a power of two, then $N D_N^{\ast}(\mathcal{S}(I))=1$}\label{fig1} \end{figure} While the star discrepancy of any digital $(0,1)$-sequence over $\mathbb{F}_2$ is of optimal order with respect to \eqref{lpseqschm} this fact is not true in general for the $L_p$ discrepancies with finite parameter $p$. For example, for the van der Corput sequence we have for all $p \in [1,\infty)$ $$\limsup_{N \rightarrow \infty} \frac{N L_{p,N}(\mathcal{S}(I))}{\log N}= \frac{1}{6 \log 2},$$ see \cite{P04}. Hence the $L_p$ discrepancy of the van der Corput sequence is at least of order of magnitude $\log N/N$ for infinitely many $N$. Another example, to be found in \cite{DLP}, is the digital $(0,1)$-sequence generated by the matrix $$U=\left(\begin{array}{cccc} 1 & 1 & 1 & \ldots\\ 0 & 1 & 1 & \ldots\\ 0 & 0 & 1 & \ldots\\ \multicolumn{4}{l}\dotfill \end{array} \right)$$ for which we have, with some positive real $c>0$, $$\limsup_{N \rightarrow \infty} \frac{N L_{2,N}(\mathcal{S}(U))}{\log N} \ge c >0.$$ More information on the discrepancy of digital $(0,1)$-sequences can be found in the survey articles \cite{FK14,FKP} and the references therein. The results in dimension one show that, in general, the $L_p$ discrepancy of digital sequences does not match the general lower bound \eqref{lbdlpdiseq} from Roth-Schmidt-Pro{\u\i}nov. Hence, in order to achieve the assumed optimal order of magnitude $(\log N)^{s/2}/N$ for the $L_p$ discrepancy with digital sequences, if at all possible, one needs more demanding properties on the generating matrices. 
This leads to the concept of higher order digital sequences. \subsection{Digital sequences with optimal order of $L_p$ discrepancy}\label{secHOdS} So-called {\it higher order digital sequences} have been introduced by Dick \cite{D07,D08} in 2007 with the aim to achieve optimal convergence rates for QMC rules applied to sufficiently smooth functions. For the definition of higher order digital sequences and for further information and references we refer to \cite[Chapter~15]{DP2010} or to \cite{DKS}. For our purposes it suffices to consider higher order digital sequences of order 2. We just show how such sequences can be constructed: to this end let $d:=2 s$ and let $C_1, \ldots, C_{d} \in \mathbb{F}_2^{\mathbb{N} \times \mathbb{N}}$ be generating matrices of a digital $(t,d)$-sequence in dimension $d$, for example a generalized Niederreiter sequence. Let $\vec{c}_{j,k}$ denote the $k^{{\rm th}}$ row-vector of the matrix $C_j$. Now define $s$ matrices $E_1,\ldots, E_s$ in the following way: the row-vectors of $E_j$ are given by \begin{equation*} \vec{e}_{j,2 u + v} = \vec{c}_{2 (j-1) + v, u+1} \ \ \ \mbox{for $j \in \{1,2,\ldots,s\}$, $u \in \mathbb{N}_0$ and $v \in \{1,2\}$.} \end{equation*} We illustrate the construction for $s=1$. Then $d=2$ and $$C_1=\left(\begin{array}{c} \vec{c}_{1,1}\\ \vec{c}_{1,2}\\ \vdots \end{array}\right), \ C_2=\left(\begin{array}{c} \vec{c}_{2,1}\\ \vec{c}_{2,2}\\ \vdots \end{array}\right) \ \Rightarrow \ E_1=\left(\begin{array}{c} \vec{c}_{1,1}\\ \vec{c}_{2,1}\\ \vec{c}_{1,2}\\ \vec{c}_{2,2}\\ \vdots \end{array}\right).$$ This procedure is called \emph{interlacing} (here the so-called ``interlacing factor'' is $2$). The following theorem has been shown in \cite{DHMP17a}. \begin{theorem}[Dick, Hinrichs, Markhasin \& Pillichshammer 2017]\label{thm_main2} Assume that $E_1,\ldots,E_s \in \mathbb{F}_2^{\mathbb{N} \times \mathbb{N}}$ are constructed with the interlacing principle as given above. Then for the corresponding digital sequence $\mathcal{S}=\mathcal{S}(E_1,\ldots,E_s)$ we have $$ L_{p,N}(\mathcal{S}) \lesssim_{p,s} 2^{2t} \frac{\left(\log N\right)^{s/2}}{N}\ \ \ \ \mbox{ for all $N\ge 2$ and all $1 \le p < \infty$.} $$ \end{theorem} This theorem shows, in a constructive way, that the lower bound \eqref{lbdlpdiseq} from Roth-Schmidt-Pro{\u\i}nov is best possible in the order of magnitude in $N$ for all parameters $p\in (1,\infty)$. Furthermore, the constructed digital sequences have optimal order of $L_p$ discrepancy simultaneously for all $p \in (1,\infty)$. For $p=2$ there is an interesting improvement, although this improvement requires higher order digital sequences of order 5 (instead of order 2). For such sequences $\mathcal{S}$ it has been shown in \cite{DP14b} that $$L_{2,N}(\mathcal{S}) \lesssim_s \frac{(\log N)^{(s-1)/2}}{N} \, \sqrt{S_2(N)}\ \ \ \ \mbox{ for all $N\ge 2$.}$$ The dyadic sum-of-digit function of $N$ is in the worst-case of order $\log N$ and then the above $L_2$ discrepancy bound is of order of magnitude $(\log N)^{s/2}/N$. But if $N$ has very few non-zero dyadic digits, for example if it is a power of 2, then the bound on the $L_2$ discrepancy becomes $(\log N)^{(s-1)/2}/N$ only. 
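Returning to the interlacing step described above, the following sketch shows how the matrices $E_1,\ldots,E_s$ can be assembled from finite $m\times m$ matrices $C_1,\ldots,C_{2s}$ (so that each $E_j$ has $2m$ rows); it is only an illustrative sketch under this truncation, with placeholder names.

\begin{verbatim}
import numpy as np

def interlace(matrices, d=2):
    """Digit interlacing of generating matrices with interlacing factor d.

    matrices: list of d*s arrays of shape (m, m).  Returns s arrays of shape
    (d*m, m); the rows of E_j alternate between the d matrices of the j-th
    group, mirroring e_{j, d*u+v} = c_{d*(j-1)+v, u+1}.
    """
    m = matrices[0].shape[0]
    s = len(matrices) // d
    interlaced = []
    for j in range(s):
        group = matrices[d * j: d * (j + 1)]
        E = np.empty((d * m, m), dtype=matrices[0].dtype)
        for u in range(m):            # u-th row of every matrix in the group
            for v in range(d):
                E[d * u + v] = group[v][u]
        interlaced.append(E)
    return interlaced
\end{verbatim}

Feeding the interlaced matrices into the digital construction sketched earlier then yields (a truncated version of) an order two digital sequence of the kind appearing in Theorem~\ref{thm_main2}.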
The proof of Theorem~\ref{thm_main2} uses methods from harmonic analysis, in particular the estimate of the $L_p$ norm of the discrepancy function is based on the following Littlewood-Paley type inequality: for $p \in (1,\infty)$ and $f\in L_p([0,1]^s)$ we have \begin{equation}\label{LiPal} \|f\|_{L_p([0,1]^s)} \lesssim_{p,s} \sum_{\boldsymbol{j}\in\mathbb{N}_{-1}^s} 2^{2|\boldsymbol{j}|(1-1/\bar{p})}\left(\sum_{\boldsymbol{m}\in\mathbb{D}_{\boldsymbol{j}}} |\langle f,h_{\boldsymbol{j},\boldsymbol{m}}\rangle|^{\bar{p}}\right)^{2/\bar{p}}, \end{equation} where $\bar{p} = \max(p,2)$, $\mathbb{N}_{-1}=\mathbb{N} \cup \{-1,0\}$, for $\boldsymbol{j}=(j_1,\ldots,j_s)$, $\mathbb{D}_{\boldsymbol{j}}=\mathbb{D}_{j_1}\times \ldots \times \mathbb{D}_{j_s}$, where $\mathbb{D}_j=\{0,1,\ldots,2^j-1\}$, $|\boldsymbol{j}|=\max(j_1,0)+\cdots+\max(j_s,0)$, and, for $\boldsymbol{m} \in \mathbb{D}_{\boldsymbol{j}}$, $h_{\boldsymbol{j},\boldsymbol{m}}(\boldsymbol{x})=h_{j_1,m_1}(x_1)\cdots h_{j_s,m_s}(x_s)$, where $h_{j,m}$ is the $m^{{\rm th}}$ dyadic Haar function on level $j$. See \cite{DHMP17a} and \cite{M13b}. The $L_2$ inner products $\langle f,h_{\boldsymbol{j},\boldsymbol{m}}\rangle$ are the so-called Haar coefficients of $f$. Inequality \eqref{LiPal} is used for the local discrepancy function of digital sequences which then requires tight estimates of the Haar coefficients of the local discrepancy function. For details we refer to \cite{DHMP17a}. With the same method one can also handle the quasi-norm of the local discrepancy function in Besov spaces and Triebel-Lizorkin spaces with dominating mixed smoothness. One reason why Besov spaces and Triebel-Lizorkin spaces are interesting in this context is that they form natural scales of function spaces including the $L_p$-spaces and Sobolev spaces of dominating mixed smoothness (see, e.g., \cite{T10}). The study of discrepancy in these function spaces has been initiated by Triebel \cite{T10,T10a} in 2010. Further results (for finite sequences) can be found in \cite{H10,M13a,M13b,M13c,M15} and (for infinite sequences in dimension one) in \cite{K15}. In \cite[Theorem~3.1 and 3.2]{DHMP17b} general lower bounds on the quasi-norm of the local discrepancy function in Besov spaces and Triebel-Lizorkin spaces with dominating mixed smoothness in the sense of the result of Roth-Schmidt-Pro{\u\i}nov in Eq. \eqref{lbdlpdiseq} are shown. Furthermore, these lower bounds are optimal in the order of magnitude in $N$, since matching upper bounds are obtained for infinite order two digital sequences as constructed above. For details we refer to \cite{DHMP17b}. \subsection{Intermediate norms of the local discrepancy function} While the quest for the exact order of the optimal $L_p$ discrepancy of infinite sequences in arbitrary dimension is now solved for finite parameters $p\in (1,\infty)$ the situation for the cases $p\in \{1,\infty\}$ remains open. In this situation, Bilyk, Lacey, Parissis and Vagharshakyan \cite{BLPV09} studied the question of what happens in intermediate spaces ``close'' to $L_{\infty}$. Two standard examples of such spaces are: \begin{itemize} \item {\it Exponential Orlicz space}: for the exact definition of the corresponding norm $\|\cdot\|_{\exp(L^\beta)}$, $\beta>0$, we refer to \cite{BLPV09,BM15,DHMP17b}. There is an equivalence which shows the relation to the $L_p$ norm, which is stated for any $\beta > 0$, \begin{equation*} \| f \|_{\exp(L^\beta)} \asymp \sup_{p > 1} p^{-\frac{1}{\beta}} \|f \|_{L_p([0,1]^s)}. 
\end{equation*} This equivalence suggests that the study of discrepancy with respect to the exponential Orlicz norm is related to the study of the dependence of the constant appearing in the $L_p$ discrepancy bounds on the parameter $p$. The latter problem is also studied in \cite{skr12}. \item {\it BMO space} (where BMO stands for ``bounded mean oscillation''): the definition of the corresponding semi-norm uses Haar functions and is given as $$\|f\|_{{\rm BMO}^s}^2 =\sup_{U \subseteq [0,1)^s} \frac{1}{\lambda_s(U)} \sum_{\boldsymbol{j} \in \mathbb{N}_0^s} 2^{|\boldsymbol{j}|} \sum_{\boldsymbol{m} \in \mathbb{D}_{\boldsymbol{j}}\atop \textsf{supp}(h_{\boldsymbol{j},\boldsymbol{m}}) \subseteq U}|\langle f, h_{\boldsymbol{j},\boldsymbol{m}}\rangle|^2,$$ where the supremum is extended over all measurable subsets $U$ from $[0,1)^s$. See again \cite{BLPV09,BM15,DHMP17b} and the references therein for more information. \end{itemize} Exponential Orlicz norm and BMO semi-norm of the local discrepancy function for finite point sets have been studied in \cite{BLPV09} (in dimension $s=2$) and in \cite{BM15} (in the general multi-variate case). For infinite sequences we have the following results which have been shown in \cite{DHMP17b}: \begin{theorem}[Dick, Hinrichs, Markhasin \& Pillichshammer 2017]\label{thm_main3} Assume that $E_1,\ldots,E_s \in \mathbb{F}_2^{\mathbb{N} \times \mathbb{N}}$ are constructed with the interlacing principle as given in Section~\ref{secHOdS}. Then for the corresponding digital sequence $\mathcal{S}=\mathcal{S}(E_1,\ldots,E_s)$ we have $$ \|\Delta_{\mathcal{S}_N}\|_{\exp(L^{\beta})} \lesssim_s \frac{(\log N)^{s-\frac{1}{\beta} }}{N} \ \ \ \mbox{ for all $N \ge 2$ and for all $\frac{2}{s-1} \le \beta < \infty$} $$ and \begin{equation}\label{bmo:ubd} \|\Delta_{\mathcal{S}_N}\|_{{\rm BMO}^s} \lesssim_s \frac{(\log N)^{\frac{s}{2}}}{N} \ \ \ \mbox{ for all $N \ge 2$.} \end{equation} \end{theorem} A matching lower bound in the case of exponential Orlicz norm on $\|\Delta_{\mathcal{S}_N}\|_{\exp(L^{\beta})}$ in arbitrary dimension is currently not available and seems to be a very difficult problem, even for finite sequences (see \cite[Remark after Theorem~1.3]{BM15}; for matching lower and upper bounds for finite sequences in dimension $s=2$ we refer to \cite{BLPV09}). On the other hand, the result \eqref{bmo:ubd} for the BMO semi-norm is best possible in the order of magnitude in $N$. A general lower bound in the sense of Roth-Schmidt-Pro{\u\i}nov's result \eqref{lbdlpdiseq} for the $L_p$ discrepancy has been shown in \cite[Theorem~2.1]{DHMP17b} and states that for every $s \in \mathbb{N}$ there exists a $c_s>0$ such that for every infinite sequence $\mathcal{S}$ in $[0,1)^s$ we have \begin{equation}\label{bmo:lowbd} \|\Delta_{\mathcal{S}_N}\|_{{\rm BMO}^s} \ge c_s \frac{(\log N)^{\frac{s}{2}}}{N}\ \ \ \mbox{ infinitely often.} \end{equation} \section{Discussion of the asymptotic discrepancy estimates} We restrict the following discussion to the case of star discrepancy. We have seen that the star discrepancy of digital sequences, and therefore QMC rules which are based on digital sequences, can achieve error bounds of order of magnitude $(\log N)^s/N$. At first sight this seems to be an excellent result. However, the crux of these, in an asymptotic sense, optimal results, lies in the dependence on the dimension $s$. 
If we consider the function $x \mapsto (\log x)^s/x$, then one can observe that this function is increasing up to $x={\rm e}^s$ and only then starts to decrease to 0 with the asymptotic order of almost $1/x$. This means that, in order to have meaningful error bounds for QMC rules, one requires finite sequences with at least ${\rm e}^s$ elements or even more. But ${\rm e}^s$ is already huge, even for moderate dimensions $s$. For example, if $s=200$, then ${\rm e}^s \approx 7.2 \times 10^{86}$, which exceeds the estimated number of atoms in our universe (which is $\approx10^{78}$). As it appears, according to the classical theory with its excellent asymptotic results, QMC rules cannot be expected to work for high-dimensional functions. However, there is numerical evidence that QMC rules can also be used in these cases. The work of Paskov and Traub~\cite{PT95} from 1995 attracted much attention in this context. They considered a real world problem from mathematical finance which resulted in the evaluation of several 360-dimensional integrals and reported on their successful use of Sobol' and Halton sequences in order to evaluate these integrals. Of course, it is now the aim of theory to explain {\it why} QMC rules also work for high-dimensional problems. One stream of research is to take the viewpoint of {\it Information Based Complexity (IBC)}, in which also the dependence of the error bounds (discrepancy in our case) on the dimension $s$ is studied. A first remarkable, and at that time very surprising, result was established by Heinrich, Novak, Wasilkowski and Wo\'{z}niakowski~\cite{HNWW} in 2001. \begin{theorem}[Heinrich, Novak, Wasilkowski \& Wo\'{z}niakowski 2001]\label{thmHNWW} For all $N,s \in\mathbb{N}$ there exist finite sequences $\mathcal{S}_N$ of $N$ elements in $[0,1)^s$ such that $$D_N^{\ast}(\mathcal{S}_N) \lesssim \sqrt{\frac{s}{N}},$$ where the implied constant is absolute, i.e., it depends neither on $s$ nor on $N$. \end{theorem} In 2007 Dick~\cite{D07a} extended this result to infinite sequences (in infinite dimension). In IBC the information complexity is studied rather than direct error bounds. In the case of star discrepancy the {\it information complexity}, which is then also called the {\it inverse of star discrepancy}, is, for some error demand $\varepsilon \in (0,1]$ and dimension $s$, given as $$N^{\ast}(\varepsilon,s)=\min\{N \in \mathbb{N} \ : \ \exists \ \mathcal{S}_N \subseteq [0,1)^s \ \mbox{with $|\mathcal{S}_N|=N$ and $D_N^{\ast}(\mathcal{S}_N) \le \varepsilon$}\}.$$ From Theorem~\ref{thmHNWW} one can deduce that $$N^{\ast}(\varepsilon,s) \lesssim s \varepsilon^{-2}$$ and this property is called {\it polynomial tractability} with $\varepsilon$-exponent $2$ and $s$-exponent 1. In 2004 Hinrichs~\cite{hin} proved that there exists a positive $c$ such that $N^{\ast}(\varepsilon,s) \ge c s \varepsilon^{-1}$ for all $s$ and all small enough $\varepsilon>0$. Combining these results we see that {\it the inverse of the star discrepancy depends (exactly) linearly on the dimension $s$} (which is the programmatic title of the paper \cite{HNWW}). The exact dependence of the inverse of the star discrepancy on $\varepsilon^{-1}$ is still unknown and seems to be a very difficult problem. In 2011 Aistleitner~\cite{Aist} gave a new proof of the result in Theorem~\ref{thmHNWW} from which one can obtain an explicit constant in the star discrepancy estimate.
He proved that there exist finite sequences $\mathcal{S}_N$ of $N$ elements in $[0,1)^s$ such that $D_N^{\ast}(\mathcal{S}_N) \le 10 \sqrt{s/N}$ and hence $N^{\ast}(\varepsilon,s)\le 100 s \varepsilon^{-2}$. Recently Gnewuch and Hebbinghaus (private communication) improved these implied constants to $D_N^{\ast}(\mathcal{S}_N) \le (2.5287\ldots) \times \sqrt{s/N}$ and hence $N^{\ast}(\varepsilon,s)\le (6.3943\ldots)\times s \varepsilon^{-2}$. For a comprehensive introduction to IBC and tractability theory we refer to the three volumes \cite{NW08,NW10,NW12} by Novak and Wo\'{z}niakowski. Unfortunately, the result in Theorem~\ref{thmHNWW} is a pure existence result and until now no concrete point set is known whose star discrepancy satisfies the given upper bound. Motivated by the excellent asymptotic behavior it may be obvious to consider digital sequences also in the context of tractability. This assumption is supported by a recent metrical result for a certain subsequence of a digital Kronecker sequence. In order to explain this result we need some notation: \begin{itemize} \item Let $\mathbb{F}_b((t^{-1}))$ be the field of {\it formal Laurent series} over $\mathbb{F}_b$ in the variable $t$: $$\mathbb{F}_b((t^{-1})) = \left\{ \sum_{i=w}^{\infty} g_i \, t^{-i} ~ : ~ w \in \mathbb{Z}, \forall i: g_i \in \mathbb{F}_b \right\}.$$ \item For $g \in \mathbb{F}_b((t^{-1}))$ of the form $g = \sum_{i=w}^{\infty}g_i \, t^{-i}$ define the ``fractional part'' $$\{g\}:=\sum_{i=\max\{w,1\}}^{\infty} g_i \, t^{-i}.$$ \item Every $n \in \mathbb{N}_0$ with $b$-adic expansion $n=n_0+n_1 b+\cdots + n_r b^{r}$, where $n_i \in \{0,\ldots,b-1\}$, is associated in the natural way with the polynomial $$n \cong n_0 + n_1t + \cdots + n_rt^r \in \mathbb{F}_b[t].$$ \end{itemize} Now a digital Kronecker sequence is defined as follows: \begin{definition} Let $\boldsymbol{f}=(f_1,\ldots,f_s) \in \mathbb{F}_b((t^{-1}))^s$. Then the sequence $\mathcal{S}(\boldsymbol{f})=(\boldsymbol{y}_n)_{n\geq 0}$ given by \begin{align*} \boldsymbol{y}_n:=\{n\boldsymbol{f}\}_{|t=b}=(\{nf_1\}_{|t=b},\ldots,\{nf_s\}_{|t=b}) \end{align*} is called a {\it digital Kronecker sequence over $\mathbb{F}_b$}. \end{definition} It can be shown that digital Kronecker sequences are examples of digital sequences where the generating matrices are Hankel matrices (i.e., constant ascending skew-diagonals) whose entries are the coefficients of the Laurent series expansions of $f_1,\ldots,f_s$; see, e.g., \cite{LN93,niesiam}. Neum\"uller and Pillichshammer~\cite{NP18} studied a subsequence of digital Kronecker sequences. For $\boldsymbol{f} \in \mathbb{F}_b((t^{-1}))^s$ consider $\widetilde{\mathcal{S}}(\boldsymbol{f})=(\boldsymbol{y}_n)_{n \ge 0}$ where $$\boldsymbol{y}_n=\{t^{n}\boldsymbol{f}\}_{|t=b}=(\{t^{n}f_1\}_{|t=b},\ldots,\{t^{n}f_s\}_{|t=b}).$$ With a certain natural probability measure on $\mathbb{F}_b((t^{-1}))^s$ the following metrical result can be shown: \begin{theorem}[Neum\"uller \& Pillichshammer 2018]\label{thmNP18} Let $s \ge 2$. For every $\delta \in (0,1)$ we have \begin{align}\label{estNP} D^*_N(\widetilde{\mathcal{S}}(\boldsymbol{f})) \lesssim_{b,\delta} \sqrt{\frac{s\log s}{N}}\, \log N \ \ \ \mbox{ for all $N \ge 2$} \end{align} with probability at least $1-\delta$, where the implied constant $C_{b,\delta} \asymp_b \log \delta^{-1}$. \end{theorem} The estimate \eqref{estNP} is only slightly weaker than the bound in Theorem~\ref{thmHNWW}. The additional $\log N$-term comes from the consideration of infinite sequences. 
Note that the result holds for all $N \ge 2$ simultaneously. One gets rid of this $\log N$-term when one considers only finite sequences as in Theorem~\ref{thmHNWW}; see \cite[Theorem~3]{NP18}. Furthermore, we remark that Theorem~\ref{thmNP18} corresponds to a result for classical Kronecker sequences which has been proved by L\"obbe~\cite{loeb}. \section{Weighted discrepancy of digital sequences} Another way to explain the success of QMC rules for high-dimensional problems is the study of so-called weighted function classes. This study, initiated by Sloan and Wo\'{z}niakowski~\cite{SW98} in 1998, is based on the assumption that functions depend differently on different variables and groups of variables when the dimension $s$ is large. This different dependence should be reflected in the error analysis. For this purpose Sloan and Wo\'{z}niakowski proposed the introduction of weights that model the dependence of the functions on different coordinate directions. In the context of discrepancy theory this led to the introduction of weighted $L_p$ discrepancy. Here we restrict ourselves to the case of weighted star discrepancy: In the following let $\boldsymbol{\gamma}=(\gamma_1,\gamma_2,\gamma_3,\ldots)$ be a sequence of positive reals, the so-called weights. Let $[s]:=\{1,2,\ldots,s\}$ and for ${\mathfrak u} \subseteq [s]$ put $$\gamma_{{\mathfrak u}}:=\prod_{j \in {\mathfrak u}} \gamma_j.$$ \begin{definition}[Sloan \& Wo\'{z}niakowski 1998] For a sequence $\mathcal{S}$ in $[0,1)^s$ the {\em $\boldsymbol{\gamma}$-weighted star discrepancy} is defined as $$D_{N,{\boldsymbol{\gamma}}}^*(\mathcal{S}):=\sup_{\boldsymbol{\alpha}\in [0,1]^s} \max_{\emptyset\ne {\mathfrak u}\subseteq [s]} \gamma_{\mathfrak u}|\Delta_{\mathcal{S}_N}(\boldsymbol{\alpha}_{\mathfrak u},\boldsymbol{1})|,$$ where for $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_s) \in [0,1]^s$ and for ${\mathfrak u} \subseteq [s]$ we put $(\boldsymbol{\alpha}_{{\mathfrak u}},\boldsymbol{1})=(y_1,\ldots,y_s)$ with $y_j=\alpha_j$ if $j \in {\mathfrak u}$ and $y_j=1$ if $j \not\in {\mathfrak u}$. \end{definition} \begin{remark} If $\gamma_j=1$ for all $j \ge 1$, then $D_{N,{\boldsymbol{\gamma}}}^*(\mathcal{S}) = D_N^*(\mathcal{S}).$ \end{remark} The relation between weighted discrepancy and error bounds for QMC rules is expressed by means of a {\it weighted Koksma-Hlawka inequality} as follows: Let $\mathcal{W}_1^{(1,1,\ldots,1)}([0,1]^s)$ be the Sobolev space of functions defined on $[0,1]^s$ that are once differentiable in each variable, and whose derivatives have finite $L_1$ norm. Consider $$\mathcal{F}_{s,1,\boldsymbol{\gamma}}=\{f \in \mathcal{W}_1^{(1,1,\ldots,1)}([0,1]^s) \ : \ \|f\|_{s,1,\boldsymbol{\gamma}}< \infty\},$$ where $$\|f\|_{s,1,\boldsymbol{\gamma}} =|f(\boldsymbol{1})| + \sum_{\emptyset \not={\mathfrak u} \subseteq [s]} \frac{1}{\gamma_{\mathfrak u}}\left\|\frac{\partial^{|{\mathfrak u}|}}{\partial \boldsymbol{x}_{{\mathfrak u}}}f(\boldsymbol{x}_{{\mathfrak u}},\boldsymbol{1})\right\|_{L_1}.$$ The $\boldsymbol{\gamma}$-weighted star discrepancy of a finite sequence is then exactly the worst-case error of a QMC rule in $\mathcal{F}_{s,1,\boldsymbol{\gamma}}$ that is based on this sequence, see \cite{SW98} or \cite[p.~65]{NW10}. 
More precisely, we have $$\sup_{\|f\|_{s,1,\boldsymbol{\gamma}} \le 1} \left|\int_{[0,1]^s} f(\boldsymbol{x}) \,\mathrm{d} \boldsymbol{x} - \frac{1}{N} \sum_{\boldsymbol{x} \in \mathcal{S}_N} f(\boldsymbol{x})\right|=D_{N,{\boldsymbol{\gamma}}}^*(\mathcal{S}_N).$$ In IBC again the {\it inverse of the weighted star discrepancy} $$N_{\boldsymbol{\gamma}}^{\ast}(\varepsilon,s) := \min\{N \ : \ \exists \, \mathcal{S}_N \subseteq [0,1)^s \ \mbox{with $|\mathcal{S}_N|=N$ and $D_{N,\boldsymbol{\gamma}}^*(\mathcal{S}_N) \le \varepsilon$}\}$$ is studied. The weighted star discrepancy is said to be {\it strongly polynomially tractable} (SPT) if there exist non-negative real numbers $C$ and $\beta$ such that \begin{equation}\label{defspt} N_{\boldsymbol{\gamma}}^{\ast}(\varepsilon,s) \le C \varepsilon^{-\beta}\ \ \ \mbox{ for all $s\in \mathbb{N}$ and for all $\varepsilon \in (0,1)$.} \end{equation} The infimum $\beta^{\ast}$ over all $\beta > 0$ such that \eqref{defspt} holds is called the $\varepsilon$-exponent of strong polynomial tractability. It should be mentioned that there are several other notions of tractability considered in the literature. Examples are polynomial tractability, weak tractability, etc. For an overview we refer to \cite{NW08,NW10,NW12}. In \cite{HPT18} Hinrichs, Tezuka and the author studied tractability properties of the weighted star discrepancy of several digital sequences. \begin{theorem}[Hinrichs, Pillichshammer \& Tezuka 2018]\label{thm_HPT} The weighted star discrepancy of the Halton sequence (where the bases $b_1,\ldots,b_s$ are the first $s$ prime numbers in increasing order) and of Niederreiter sequences achieves SPT with $\varepsilon$-exponent \begin{itemize} \item $\beta^{\ast}=1$, which is optimal, if $$\sum_{j \ge 1} j \gamma_j < \infty, \ \ \ \ \ \ \ \ \ \ \ \mbox{e.g., if $\gamma_j=\frac{1}{j^{2+\delta}}$ with some $\delta>0$;}$$ \item $\beta^{\ast} \le 2$, if $$\sup\limits_{s \ge 1} \max\limits_{\emptyset \not= \mathfrak{u} \subseteq [s]} \prod_{j \in \mathfrak{u}} (j \gamma_j) < \infty\ \ \ \ \ \ \ \ \ \ \ \mbox{e.g., if $\gamma_j=\frac{1}{j}$.}$$ \end{itemize} \end{theorem} This result provides the currently mildest weight condition for a ``constructive'' proof of SPT of the weighted star discrepancy. Furthermore, it is the first ``constructive'' result which does not require that the weights are summable in order to achieve SPT. By a ``constructive'' result we mean in this context that the corresponding point set can be found or constructed by a polynomial-time algorithm in $s$ and in $\varepsilon^{-1}$. To put the result in Theorem~\ref{thm_HPT} into context we recall the currently best ``existence result'' which has been shown by Aistleitner~\cite{Aist2}: \begin{theorem}[Aistleitner] If there exists a $c>0$ such that \begin{equation*} \sum_{j=1}^\infty \exp(-c \gamma_j^{-2}) < \infty \ \ \ \ \ \ \ \ \mbox{e.g., if $\gamma_j=\frac{1}{\sqrt{\log j}}$,} \end{equation*} then the weighted star discrepancy is SPT with $\varepsilon$-exponent $\beta^* \le 2$. \end{theorem} Obviously the condition on the weights in Aistleitner's ``existence'' result is much weaker than for the ``constructive'' result in Theorem~\ref{thm_HPT}. It is now the task to find sequences whose weighted star discrepancy achieves SPT under the milder weight condition. \section{Summary} Digital $(t,s)$-sequences are without doubt the most powerful concept for the construction of low-discrepancy sequences in many settings.
Such sequences are much needed as sample points for QMC integration rules. They have excellent discrepancy properties in an asymptotic sense when the dimension $s$ is fixed and when $N \rightarrow \infty$: \begin{itemize} \item For $p \in [1,\infty)$ there are constructions of digital sequences with $L_p$ discrepancy $$L_p(\mathcal{S}) \lesssim_{s,p} \frac{(\log N)^{s/2}}{N} \ \ \ \ \mbox{ for all $N \ge 2$ and $p\in [1,\infty)$}$$ and this estimate is best possible in the order of magnitude in $N$ for $p \in (1,\infty)$ according to the general lower bound \eqref{lbdlpdiseq}. \item The star discrepancy of digital $(t,s)$-sequences satisfies a bound of the form $$D_N^{\ast}(\mathcal{S}) \lesssim_s \frac{(\log N)^s}{N} \ \ \ \ \mbox{ for all $N \ge 2$}$$ and this bound is often conjectured to be best possible. \item For discrepancy with respect to various other norms, digital sequences achieve very good and even optimal results. \end{itemize} On the other hand, nowadays one is also very much interested in the dependence of discrepancy on the dimension $s$. This is a very important topic, in particular in order to justify the use of QMC in high dimensions. First results suggest that digital sequences may perform very well in this IBC context, too. But many questions here are still open and require further study. One particularly important question is how sequences can be constructed whose discrepancy achieves some notion of tractability. Digital sequences may well be good candidates for this purpose as well.
2,869,038,156,172
arxiv
\section{\label{sec:intro}Introduction} \setlength{\mathindent}{0pt} Collisional depolarization cross sections between an alkali-metal atom and a rare-gas atom play an important role in understanding the global interactions between the two species. Studies of excited-atom collisions with other neutral atoms and molecules are key to understanding energy transfer processes, anisotropic cross sections, and the accurate description of the intermediate molecular properties of the colliding species~\cite{Baylis,Cook93,Lasell97,Wong98,Zhao07,Lin07,Bayram12,Bayram12-2}. Collisional depolarization of the first excited $P_{3/2}$ state of cesium has been studied experimentally using a pump-probe technique~\cite{Bayram2009} and by using incoherent light sources for the optical excitation~\cite{Guiry76,Fricke67}. The depolarization of the second excited $P_{3/2}$ states of Cs I and Rb I in collisions with rare gas atoms has been studied using a level-crossing technique~\cite{Lukaszewski83}. To our knowledge, there are no experimental observations of the disalignment cross section for the third excited state ($8p\,^2P_{3/2}$) of cesium. We anticipate that this is due to the relatively high energy (3.198 eV) of the 8p level with respect to the first ionization limit ($\sim$ 3.89 eV), which makes the typical pump-probe (stepwise excitation) scheme experimentally difficult. Our experimental technique is based on a $\Lambda$-type double-resonance transition. In the $\Lambda$-type scheme the probe laser resonantly induces downward transitions from the upper $8p\,^2P_{3/2}$ level of the pumped transition to the $5d\,^2D_{5/2}$ level. This process is called stimulated emission pumping (SEP). The advantage of the PUMP-SEP scheme is that it can be used to probe either high-lying energy levels of atoms that are difficult to reach by stepwise excitation when the probed level lies close to the ionization limit, or high-lying vibrational-rotational levels of molecules close to the dissociation or ionization limit of the electronic state under study. Thus, in this work, we have used a nanosecond PUMP-SEP technique, which is particularly well suited to the study of the dynamical properties of atomic and molecular systems, e.g., Rydberg energy levels that are relatively inaccessible with a typical stepwise scheme. In this work, we have measured the linear polarization dependence of the $6s\,^2S_{1/2}\rightarrow8p\,^2P_{3/2}\rightarrow5d\,^2D_{5/2}$ transition as a function of argon gas pressure using the PUMP-SEP technique. From the measurement we extracted the alignment-dependent collisional cross section of the cesium $8p\,^2P_{3/2}$ level in collisions with argon atoms. Our result yields a direct measure of the importance of the linear polarization to the alignment-dependent inelastic process in alkali-noble-gas collisions and provides additional insight into the collisional relaxation of the higher-lying energy levels of alkali atoms. An extensive theoretical treatment of collisional depolarization of atomic fluorescence has been developed in Refs.~\cite{RebaneRebane72,Rebane72}. Comprehensive reviews on the interpretation of atomic alignment and linear polarization, and on the influence of spin-orbit coupling on the alignment parameter, are given by Andersen and co-workers~\cite{Andersen97,Andersen}. Theoretical expressions for the rate constants of the anisotropic collisional relaxation of atomic polarization moments in terms of multipole moments are derived in Ref.~\cite{Petrashen93}. 
\section{\label{sec:concept}Experimental approach to measure polarization spectra} In this section we provide a brief overview of the experimental scheme and the arrangement that was used to perform the experiments. The cesium transitions involved in this experiment are illustrated by the partial energy level diagram in Fig.~1, and a schematic overview of the experimental apparatus is shown in Fig.~2. We use a nanosecond pulsed neodymium-doped yttrium aluminum garnet (Nd:YAG) laser, which operates simultaneously at 532 nm and 355 nm with a pulse repetition rate of 20 Hz. This laser is used to drive two home-built grazing-incidence Littman-Metcalf cavity design dye laser oscillators. The output from the third harmonic generator (355 nm) is used to produce a UV pump laser at 387.92 nm to populate the $8p\,^2P_{3/2}$ level, while the output of the second harmonic generator (532 nm) is used to produce the IR probe laser at 894.72 nm to drive the $8p\,^2P_{3/2}\rightarrow5d\,^2D_{5/2}$ transition. The dye laser oscillators operate in a single transverse mode and the output beam is highly linearly polarized through the use of Glan-Thompson calcite prism polarizers having extinction ratios of better than 10$^{-5}$. Both dye lasers are equipped with dye circulating systems to maintain an average power of about 2 mW. \begin{figure}[ht] \begin{center} {$\scalebox{1.60}{\includegraphics*{figure1.eps}}$ } \caption{Partial energy level diagram of Cs I showing the UV-IR PUMP-SEP transitions tuned to resonance at 387.92 nm and 894.72 nm, respectively. Population from the $5d\,^2D_{5/2}$ level exclusively decays to the $6p\,^2P_{3/2}$ level. The dotted line shows the detected fluorescence signal observed at 852.12 nm.} \end{center} \label{fig1} \end{figure} A temperature-controlled liquid crystal variable retarder (LCR) is used to electronically vary the linear polarization direction of the probe laser to be parallel or perpendicular to that of the pump laser. Polarization switching of the LCR is achieved by applying the necessary voltage to the retarder via a computer-controlled liquid crystal digital interface. The beams of the pump and probe pulsed dye lasers are directed collinearly, but in opposite directions, into the interaction region of the cesium cell. A resistively heated nonmagnetic cylindrical aluminum oven was used to generate the desired vapor pressure of atomic Cs in the cell. The oven, which houses the sealed Pyrex cell containing Cs vapor, was wrapped with an aluminum oxide blanket and an insulator to maintain the temperature to better than $\pm 0.01^{\circ}$C via a temperature controller. Cesium vapor cells, 25.4 mm in diameter and 50.8 mm in length, with argon gas pressures ranging up to 133 mbar were prepared using an oil-free vacuum system. The background pressure of the pure Cs cell is about $10^{-4}$ mbar. Once the UV PUMP laser satisfies the resonance condition for the $8p\,^2P_{3/2}$ level, most of the population is transferred to the $5d\,^2D_{5/2}$ level by the IR SEP laser through stimulated emission~\cite{Domiaty94}. The intensity of the cascade fluorescence from the $5d\,^2D_{5/2}$ level, which reaches the ground $6s\,^2S_{1/2}$ level via the $6p\,^2P_{3/2}$ level, was recorded at 852.12 nm using an infrared-sensitive water-cooled photomultiplier tube (PMT) located at right angles to the propagation directions of the lasers. A combination of interference and color glass filters was used in front of the PMT in order to remove background light. 
All the cables used in the experiment were electrically shielded, and the optical table was grounded in order to suppress electronic pick-up and noise on the observed signal. \begin{figure}[th] \begin{center} {$\scalebox{1.20}{\includegraphics*{figure2.eps}}$ } \caption{A schematic view of the experimental apparatus. In the figure GT stands for a Glan-Thompson polarizer, PMT for photomultiplier tube, and LCR refers to liquid crystal retarder.} \end{center} \label{fig2} \end{figure} The amplified output of the PMT signal was sent to a boxcar integrator, which was opened after a 1-ns delay following the laser pulses. The recorded signal collected for each state of laser polarization consisted of $100\times10^{6}$ data points accumulated over 4 seconds. The boxcar operated in a 100-sample averaging mode, where the average single-shot level within the detection gate is digitized. Since the lifetimes of the $5d\,^2D_{5/2}$ and $6p\,^2P_{3/2}$ levels are shorter than the lifetime of the $8p\,^2P_{3/2}$ level (305 ns~\cite{Rad85}), we could distinguish the signal from the spontaneous emission decay of the atoms from the $8p\,^2P_{3/2}$ level to the lower levels. The typical signal size was about $10^{3}$ photons per laser pulse. The digitized signals were stored on a computer using a LabVIEW program while monitoring the size of the signal within the gate-width in real time using a digital oscilloscope operating at 500 MHz with 2 GSa/s. Comparison of the signals detected when the probe polarization angle is $\chi=0$ versus $\chi=\pi/2$ allows the definition of a linear polarization degree. The linear polarization degree is determined from the intensities $I_{\parallel}(\chi=0)$ and $I_{\perp}(\chi = \pi/2)$ according to \begin{equation} P_L=\frac{I_{\parallel}(\chi = 0) - I_{\perp}(\chi = \pi /2)}{I_{\parallel}(\chi = 0) + I_{\perp}(\chi = \pi /2)}. \label{Eq1n} \end{equation} Since an absolute intensity ratio of the signals is sensitive only to the relative polarization directions of the lasers, any variations of the laser intensities with experimental factors such as absorbing medium density, fluorescence background, and the sensitivity of the gated boxcar integrator have a negligible effect on the intensity ratio. For atoms with non-zero nuclear spin, the total electronic angular momentum couples to the nuclear spin moment to produce a new space-fixed total angular momentum $F$. The total electronic angular momentum will then precess about $F$ so that the initially prepared space-fixed frame is altered. This precession results from the hyperfine structure and affects the alignment initially prepared at $t=0$ by the pump laser. In the case of $^{133}$Cs, the coupling of nuclear spin $I=7/2$ with $J=3/2$ in the $8p\,^2P_{3/2}$ level introduces an oscillating time dependence in the alignment. Therefore, the time evolution of the alignment under the influence of the hyperfine structure can be evaluated. $P_L$, whose value depends on the hyperfine energy separations in the probed $8p\,^2P_{3/2}$ level, is the main quantity to be measured in the experiment and strongly depends on the time delay between the pump and probe laser pulses. This means that the excited level can be characterized by an overall population and the axially symmetric electronic alignment tensor component $\langle{A_o}\rangle$. 
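As a simple numerical illustration of Eq.~(\ref{Eq1n}), the polarization degree and its statistical uncertainty can be obtained from repeated shot-averaged intensities as sketched below in Python; the intensity values are made up for illustration and are not the measured data.

```python
import numpy as np

def polarization_degree(I_par, I_perp):
    """P_L = (I_par - I_perp) / (I_par + I_perp)."""
    return (I_par - I_perp) / (I_par + I_perp)

# illustrative shot-averaged intensities in arbitrary units (not measured values)
I_par = np.array([1020.0, 1005.0, 1012.0, 998.0])   # probe angle chi = 0
I_perp = np.array([790.0, 803.0, 785.0, 797.0])     # probe angle chi = pi/2

A, B = I_par.mean(), I_perp.mean()
P_L = polarization_degree(A, B)

# statistical uncertainty from the spread of repeated measurements,
# using dP_L = 2*sqrt((B*dA)^2 + (A*dB)^2) / (A + B)^2
dA = I_par.std(ddof=1) / np.sqrt(I_par.size)
dB = I_perp.std(ddof=1) / np.sqrt(I_perp.size)
dP_L = 2.0 * np.sqrt((B * dA) ** 2 + (A * dB) ** 2) / (A + B) ** 2
print(f"P_L = {P_L:.3f} +/- {dP_L:.3f}")
```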
The quantum mechanical treatment of the detection of alignment and of the polarization of the emitted light in terms of alignment has been given in detail~\cite{Green82,Fano73}, and its application to polarization measurements using two-photon pulsed pump-probe excitation has been described in our earlier work~\cite{Bayram2009,Bayram06}. Thus, the linear polarization in terms of alignment is defined~\cite{Green82} as \begin{equation} P_L=\frac{-3 \langle{A_o}\rangle}{16- \langle{A_o}\rangle}. \label{Eq1} \end{equation} The pump laser excitation on the $6s\,^2S_{1/2}\rightarrow8p\,^2P_{3/2}$ transition creates an initial value of electronic alignment $\langle{A_o}\rangle= -4/5$. In the absence of any perturbations this quantity evolves in time according to $\langle{A_o(t)}\rangle = \langle A_o(0)\rangle g^{(2)}(t)$, where the quantity $g^{(2)}(t)$ is the hyperfine depolarization coefficient, which depends strongly on the temporal widths of the pump and probe laser pulses and on their arrival times in the interaction region of the atomic oven. Measurements of the overlap time of the pulses were made using two fast vacuum photodiodes with 500 ps rise time. At $t=0$, in the absence of any perturbations and systematic error, Eq.~(\ref{Eq1}) with $\langle{A_o}\rangle=-4/5$ gives the theoretical value of the polarization, $P_L=(12/5)/(84/5)=1/7$ (14.29\%). The linear polarization degree evolves in time due to the hyperfine structure in the excited level. We have measured the time evolution of the polarization from $t=0$ to $t=150$ ns in our earlier work~\cite{Bayram2014}. At $t=1.5$ ns the linear polarization for the $6s\,^2S_{1/2}\rightarrow8p\,^2P_{3/2}\rightarrow5d\,^2D_{5/2}$ transition is measured to be 12.4(5)\%. This measurement is in excellent agreement with theory~\cite{Bayram2014,Blum81}. Figure~3 illustrates a typical fluorescence signal immediately after the stimulated emission pump laser arrives in the interaction region. The sharp peak confirms the depletion of the excited-state population by the stimulated emission pump transition, because the cascade fluorescence from the $5d\,^2D_{5/2}$ level populates the $6p\,^2P_{3/2}$ level exclusively. The population of the $6p\,^2P_{3/2}$ level subsequently decays by spontaneous emission to the ground level. This appears as a sharp increase in the signal size of about 13 ns FWHM, compared to the spontaneous emission, which occurs over hundreds of nanoseconds. We set the boxcar gate width to 30 ns to measure the full intensity of the cascade fluorescence from the stimulated emission signal only. \begin{figure}[ht] \centering {$\scalebox{0.35}{\includegraphics*{figure3.eps}}$ } \caption{Typical observed stimulated emission signal when the probe laser transfers the population from the $8p\,^2P_{3/2}$ to the $5d\,^2D_{5/2}$ level. Cascade fluorescence from the $5d\,^2D_{5/2}$ level populates the $6p\,^2P_{3/2}$ level exclusively. The inset shows the stimulated emission signal, which causes a substantial increase in the observed cascade fluorescence when the probe laser beam temporally follows the pump.} \label{signal} \end{figure} \section{Results and Discussions} A linearly polarized pump laser selectively populates the Zeeman sublevels of the Cs $8p\,^2P_{3/2}$ level. After argon is introduced, collisions between excited Cs atoms and ground-level Ar atoms mix the population among the Zeeman sublevels. Using a rate-equation analysis, the population densities were expressed in terms of the total population $N(t)$ and the alignment $\langle{A_o(t)}\rangle$ in our earlier paper~\cite{Bayram2009}. 
Considering collisional mixing among the $M_J=\pm1/2,\pm3/2,\pm5/2$ sublevels, the measured signals for the $6s\,^2S_{1/2} \rightarrow 5d\,^2D_{5/2}$ transition can be written as \begin{equation} I_{\parallel}=\frac{1}{2}N(t)-\frac{1}{4}\langle{A_o(t)}\rangle, \label{integral1} \end{equation} and \begin{equation} I_{\perp}=\frac{1}{2}N(t)+\frac{1}{16}\langle{A_o(t)}\rangle, \label{integral2} \end{equation} where $N(t)=\frac{2}{\gamma}(1-e^{-\gamma t})$ and $\langle{A_o(t)}\rangle=\frac{-8}{5\gamma_a}(1-e^{-\gamma_a t})$. Substituting Eqs.~(\ref{integral1}) and (\ref{integral2}) into Eq.~(\ref{Eq1n}), the linear polarization can readily be expressed in terms of the depolarization cross section and the pressure of the buffer gas as \begin{equation} P_{L}=\frac{1}{3+4Z^{\prime}}, \label{polZ} \end{equation} where \begin{equation} Z^{\prime}=\frac{\gamma_a}{\gamma}~\frac{\left[1-\frac{1}{\gamma~T}(1-e^{-\gamma~T})\right]} {\left[1-\frac{1}{\gamma_a~T}(1-e^{-\gamma_a~T})\right]}. \label{Z} \end{equation} In Eq.~(\ref{Z}), $\gamma$ is the radiative decay rate, $T$ is the overlap time of the pulses, and $\gamma_a$ is defined as $\gamma+\Gamma$, where $\Gamma=\rho_{Ar}k_2$ is the collisional rate. Here $k_{2}=\langle{\sigma_{2}v}\rangle$ is the disalignment rate coefficient, $\rho_{Ar}$ is the argon number density, which is determined by the argon pressure and the cell temperature, and $\sigma_{2}$ is the disalignment cross section. It is assumed that $\langle{\sigma_{2}v}\rangle$ may be factored, since $\sigma_{2}$ typically depends only weakly on the relative velocity of the colliding Cs-Ar atoms, so that $k_2=\sigma_{2}\langle v\rangle$. We denote $\langle v\rangle$ by $\bar{v}_{CsAr}$, the average relative speed of the colliding Cs-Ar atoms over the Maxwell-Boltzmann distribution of relative velocities at the $80^{\circ}$C cesium cell temperature. The left side of Eq.~(\ref{polZ}) is the measured linear polarization value at various argon gas pressures. The measured linear polarization degree as a function of argon gas pressure, for pressures up to 133 mbar, is shown in Fig.~4. The best fit to the data, obtained with a weighted non-linear least-squares fitting program, is shown as the solid line. From this fit, $\sigma_{2}$ was extracted. \begin{figure}[ht] \centering {$\scalebox{0.28}{\includegraphics*{figure4.eps}}$ } \caption{Nonlinear least-squares fit of the measured polarization as a function of argon gas pressure. Vertical error bars represent one standard deviation.} \label{fitting} \end{figure} \\ Collisional depolarization cross sections for the lowest few principal quantum numbers of the Cs I $np\,^2P_{3/2}$ series are summarized in Table I. The experimental results were obtained either by a broadening of level crossing technique~\cite{Lukaszewski83,Minemoto74} or by the pump-stimulated-probe approach described in the present report. Also listed are theoretical determinations of the cross sections. We see in Table I that the cross section increases sharply for the $7p\,^2P_{3/2}$ level in comparison with the $6p\,^2P_{3/2}$ level; the further increase associated with the measurement made here on the $8p\,^2P_{3/2}$ level is much more modest. This is somewhat surprising given the typical sharp variations, with effective principal quantum number, of experimental observables associated with excited atomic levels. 
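As an aside, the structure of the pressure-dependence fit of Eqs.~(\ref{polZ}) and (\ref{Z}) can be illustrated with the short Python sketch below. This is not the analysis code used in this work (a weighted nonlinear least-squares MATLAB routine was used), and the numerical values, noise level, and synthetic data are placeholders chosen only so that the example runs; the collisional rate is written as $\Gamma=b\,p$, and $\sigma_2$ then follows from the fitted $b$ as indicated in the final comment.

```python
import numpy as np
from scipy.optimize import curve_fit

gamma = 1.0 / 305e-9   # radiative decay rate of the 8p 2P3/2 level (1/s), lifetime 305 ns
T = 13e-9              # pump-probe overlap time in seconds (placeholder value)

def Z_prime(gamma_a):
    """Z' evaluated for a given total relaxation rate gamma_a (see Eq. for Z' above)."""
    num = 1.0 - (1.0 - np.exp(-gamma * T)) / (gamma * T)
    den = 1.0 - (1.0 - np.exp(-gamma_a * T)) / (gamma_a * T)
    return (gamma_a / gamma) * num / den

def P_L(pressure, b):
    """Polarization model P_L = 1/(3 + 4 Z'), with collisional rate Gamma = b * pressure."""
    gamma_a = gamma + b * pressure
    return 1.0 / (3.0 + 4.0 * Z_prime(gamma_a))

# synthetic illustration only -- not the measured data of this work
pressure = np.linspace(0.0, 133.0, 10)                       # argon pressure (mbar)
rng = np.random.default_rng(0)
data = P_L(pressure, 2.0e6) + rng.normal(0.0, 0.002, pressure.size)

(b_fit,), cov = curve_fit(P_L, pressure, data, p0=[1.0e6])
# sigma_2 then follows from b via Gamma = rho_Ar * sigma_2 * v_bar,
# with rho_Ar = p / (k_B T_cell), i.e. sigma_2 = b * k_B * T_cell / v_bar
print(f"fitted rate coefficient b = {b_fit:.3e} s^-1 mbar^-1")
```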
However, previous observations of collisional process cross sections have revealed nonmonotonic behavior in some cases. For instance, Gallagher~\cite{Gall94} reported that the cross sections for collisional angular momentum mixing rise approximately as $n^4$ for lower $n$, reach a peak, and then decrease for larger $n$. Other researchers have reported oscillatory cross sections in some cases and large oscillations in the dependence of linewidth on principal quantum number~\cite{Hugon79,Stoicheff80}. These results motivate us to explore disalignment cross section measurements for a wider range of principal quantum numbers; the technique used here is ideally suited for such measurements. \begin{table} [ht] \caption{\label{tab:depolarizationtab1}Collisional cross sections ($\sigma_{2}$) of the Cs $np\,^2P_{3/2}$ levels in collisions with Ar buffer gas. Here, PPS refers to pump-probe spectroscopy and BLC to broadening of level crossing.} \begin{tabular}{l@{}l@{\hspace{5mm}} l@{}l@{\hspace{5mm}} l@{}l@{\hspace{5mm}}l} \hline\noalign{\smallskip} \multicolumn{2}{l@{\hspace{11mm}}}{$n$} & \multicolumn{2}{l@{\hspace{11mm}}}{$\sigma_{2}(\AA^2)$} & \multicolumn{2}{l@{\hspace{11mm}}}{Technique} & \multicolumn{1}{l}{Reference} \\ \hline\noalign{\smallskip} 6 & & 186(58) & & PPS & &\cite{Bayram06} \\ & & 238~~~~~~~& & Theory& &\cite{Okunevich70}\\ 7 & & 730(24) & & BLC & &\cite{Lukaszewski83}\\ & & 610~~~~~~~& & BLC & &\cite{Minemoto74}\\ & & 557~~~~~~~& & Theory& &\cite{Okunevich70}\\ 8 & & 895(57) & & PPS & &This work\\ \hline\noalign{\smallskip} \end{tabular} \end{table} Our result is consistent with the values for the lower $n$ levels. However, future experiments at higher $n$ values could reveal any systematic structure in the $n$ dependence. \section{\label{sec:results}Conclusions} We have introduced a time-resolved double-resonance UV-IR nanosecond PUMP-SEP technique to provide experimental data on the depolarization cross section of the cesium $8p\,^2P_{3/2}$ level from the measurement of the polarization degree of the $6s\,^2S_{1/2}\rightarrow8p\,^2P_{3/2}\rightarrow5d\,^2D_{5/2}$ transition. From the measurements at various gas pressures we extracted the collisional disalignment cross section using a nonlinear least-squares fit to the data and obtained 785(57)$\AA^2$. This work complements and substantiates our earlier studies of the collisional disalignment cross section of the excited states of cesium at lower $n$ values. We anticipate that this technique will be applicable to measurements of collisional energy transfer in higher-lying energy levels of many atoms and in diatomic molecules. \section*{Acknowledgement} Financial support from the National Science Foundation (Grant No. NSF-PHY-1309571) is gratefully acknowledged. The authors would like to thank Greg Reese, from Research Computing Support, for providing the MATLAB programming code to do the weighted nonlinear least-squares fit to the data. We would like to thank Professor Mark Havey of Old Dominion University for valuable discussions.
2,869,038,156,173
arxiv
\section{Approximate Continuous-Discrete Inference} \label{sec:background} \vspace{-0.5em} We begin with a review of approximate continuous-discrete Bayesian filtering and smoothing, inference techniques employed by our proposed model. Consider the following It\^o SDE, \begin{equation} d{\mathbf{z}}_t = {\mathbf{f}}({\mathbf{z}}_t, t) dt + {\mathbf{G}}({\mathbf{z}}_t, t)d{\mathbf{B}}_t,\label{eq:non-linear-sde} \end{equation} where ${\mathbf{z}}_t \in \mathbb{R}^m$ is the state, ${\mathbf{B}}_t \in \mathbb{R}^m$ denotes a Brownian motion with diffusion matrix ${\mathbf{Q}}$, ${\mathbf{f}}(\cdot, t): \mathbb{R}^m \to \mathbb{R}^m$ is the drift function and ${\mathbf{G}}(\cdot, t): \mathbb{R}^m \to \mathbb{R}^{m \times m}$ is the diffusion function. The initial density of the state, $p({\mathbf{z}}_0)$, is assumed to be known and independent of the Brownian motion, ${\mathbf{B}}_t$. The evolution of the marginal density of the state, $p_t({\mathbf{z}}_t)$, is governed by the Fokker-Planck-Kolmogorov (FPK) equation~\citep[Ch.~4]{jazwinski1970stochastic}, \begin{align} \frac{\partial p_t({\mathbf{z}}_t)}{\partial t} = \mathscr{L}^\ast p_t,\label{eq:fpk} \end{align} where $\mathscr{L}^\ast$ is the forward diffusion operator given by \begin{equation} \mathscr{L}^\ast \varphi = -\sum_{i=1}^m\frac{\partial}{\partial z_i}\left[\varphi f_i\right] + \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^m\frac{\partial^2}{\partial z_i \partial z_j}\left[\varphi({\mathbf{G}}{\mathbf{Q}} {\mathbf{G}}^\top)_{ij}\right].\nonumber \end{equation} In practice, we only have access to noisy transformations (called measurements or observations), ${\mathbf{y}}_k \in \mathbb{R}^d$, of the state, ${\mathbf{z}}_t$, at discrete timesteps $t_k \in \{t_0, \dots, t_T\}$. The \emph{continuous-discrete state space model}~\citep[Ch.~6]{jazwinski1970stochastic} is an elegant framework for modeling such time series. \begin{definition}[Continuous-Discrete State Space Model] A continuous-discrete state space model is one where the latent state, ${\mathbf{z}}_t$, follows the continuous-time dynamics governed by \eqref{eq:non-linear-sde} and the measurement, ${\mathbf{y}}_k$, at time $t_k$ is obtained from the measurement model $p({\mathbf{y}}_{k} | {\mathbf{z}}_{t_k})$. \end{definition} In this work, we consider linear Gaussian measurement models, $ {\mathbf{y}}_{k} \sim {\mathcal{N}}({\mathbf{y}}_{k}; {\mathbf{H}}{\mathbf{z}}_{t_k}, {\mathbf{R}}), $ where ${\mathbf{H}} \in \mathbb{R}^{d \times m}$ is the measurement matrix and ${\mathbf{R}} \succeq 0 \in \mathbb{R}^{d \times d}$ is the covariance matrix. Given observations ${\mathcal{Y}}_\tau = \{{\mathbf{y}}_k: t_k \leq \tau\}$, we are interested in answering two types of inference queries: the posterior distribution of the state, ${\mathbf{z}}_t$, conditioned on observations up to time $t$, $p_t({\mathbf{z}}_t | {\mathcal{Y}}_t)$, and the posterior distribution of the state, ${\mathbf{z}}_t$, conditioned on all available observations, $p_t({\mathbf{z}}_t | {\mathcal{Y}}_T)$. These are known as the \emph{filtering} and \emph{smoothing} problems, respectively. The filtering density, $p_t({\mathbf{z}}_t | {\mathcal{Y}}_t)$, satisfies the FPK equation (Eq.~\ref{eq:fpk}) for $t \in [t_k, t_{k+1})$ between observations, with the initial condition $p_t({\mathbf{z}}_t | {\mathcal{Y}}_{t_k})$ at time $t_k$. 
Observations can be incorporated via a Bayesian update, \begin{equation} p_t({\mathbf{z}}_{t_k} | {\mathcal{Y}}_{t_k}) = \frac{p({\mathbf{y}}_{t_k} | {\mathbf{z}}_{t_k})p({\mathbf{z}}_{t_k} | {\mathcal{Y}}_{t_{k-1}})}{p({\mathbf{y}}_k | {\mathcal{Y}}_{t_{k-1}})}.\label{eq:bayesian-update} \end{equation} The smoothing density satisfies a backward partial differential equation related to the FPK equation. We refer the reader to \citet{anderson1972fixed} and \citet[Ch.~10]{sarkka2019applied} for details and discuss a practical approximate filtering procedure in the following (cf. Appendix~\ref{app:smoothing} for smoothing). \subsection{Continuous-Discrete Bayesian Filtering} Solving \eqref{eq:fpk} for arbitrary ${\mathbf{f}}$ and ${\mathbf{G}}$ is intractable; hence, several approximations have been considered in the literature~\citep[Ch.~9]{sarkka2019applied}. The Gaussian assumed density approximation uses a Gaussian approximation, \begin{equation} p_t({\mathbf{z}}_t) \approx {\mathcal{N}}({\mathbf{z}}_t; {\mathbf{m}}_t, {\mathbf{P}}_t),\label{eq:assumed-density-approx} \end{equation} for the solution to the FPK equation, characterized by the time-varying mean, ${\mathbf{m}}_t$, and covariance matrix, ${\mathbf{P}}_t$. Further, linearization of the drift ${\mathbf{f}}$ via Taylor expansion results in the following ODEs that govern the evolution of the mean and covariance matrix, \begin{subequations} \label{eq:linearization-approx} \begin{align} \frac{d{\mathbf{m}}_t}{dt} &= {\mathbf{f}}({\mathbf{m}}_t, t),\label{eq:mean-predict}\\ \frac{d\mathbf{P}_t}{dt} &= {\mathbf{F}}_{{\mathbf{z}}}({\mathbf{m}}_t, t){\mathbf{P}}_t + {\mathbf{P}}_t{\mathbf{F}}^\top_{{\mathbf{z}}}({\mathbf{m}}_t, t) + {\mathbf{D}}({\mathbf{m}}_t, t),\label{eq:cov-predict} \end{align} \end{subequations} where ${\mathbf{F}}_{{\mathbf{z}}}({\mathbf{m}}_t, t)$ is the Jacobian of ${\mathbf{f}}({\mathbf{z}}, t)$ with respect to ${\mathbf{z}}$ at ${\mathbf{m}}_t$ and ${\mathbf{D}}(\cdot, t) = {\mathbf{G}}(\cdot, t){\mathbf{Q}} {\mathbf{G}}^\top(\cdot, t)$. Thus, for $t \in [t_k, t_{k+1})$ between observations, the filter distribution $p_t({\mathbf{z}}_t | {\mathcal{Y}}_t)$ can be approximated as a Gaussian with mean and covariance matrix given by solving \eqref{eq:linearization-approx}, with initial conditions ${\mathbf{m}}_{t_k}$ and ${\mathbf{P}}_{t_k}$ at time $t_k$. This is known as the \emph{prediction step}. The Gaussian assumed density approximation of $p({\mathbf{z}}_{t_k} | {\mathcal{Y}}_{t_{k-1}})$ described above makes the Bayesian update in \eqref{eq:bayesian-update} analytically tractable as $p({\mathbf{y}}_{t_k} | {\mathbf{z}}_{t_k})$ is also a Gaussian distribution with mean ${\mathbf{H}}{\mathbf{z}}_{k}$ and covariance matrix ${\mathbf{R}}$. The parameters, ${\mathbf{m}}_{k}$ and $\mathbf{P}_{k}$, of the Gaussian approximation of $p_t({\mathbf{z}}_{t_k} | {\mathcal{Y}}_{t_k})$ are then given by, \begin{subequations} \label{eq:update-step} \begin{align} {\mathbf{S}}_k &= \mathbf{H}\mathbf{P}_{k}^-\mathbf{H}^\top + \mathbf{R},\\ \mathbf{K}_k &= \mathbf{P}_{k}^-\mathbf{H}^\top{\mathbf{S}}_k^{-1},\label{eq:kalman-gain}\\ {\mathbf{m}}_{k} &= {\mathbf{m}}_{k}^- + \mathbf{K}_k\left({\mathbf{y}}_{k} - \mathbf{H}{\mathbf{m}}_{k}^-\right),\\ \mathbf{P}_{k} &= \mathbf{P}_{k}^- - \mathbf{K}_k{\mathbf{S}}_k\mathbf{K}^\top_k, \end{align} \end{subequations} where ${\mathbf{m}}_{k}^-$ and $\mathbf{P}_{k}^-$ are the parameters of $p_t({\mathbf{z}}_{t_k} | {\mathcal{Y}}_{t_{k-1}})$ given by the prediction step. 
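For concreteness, the prediction and update steps above can be sketched in a few lines of Python. This is a bare-bones illustration only (simple Euler integration of the moment ODEs and a direct matrix inversion, rather than the square-root factors discussed later), not the implementation used by NCDSSM; all names and numerical values are illustrative.

```python
import numpy as np

def predict(m, P, f, F_jac, D, t0, t1, dt=1e-2):
    """Prediction step: Euler integration of the moment ODEs
    dm/dt = f(m, t) and dP/dt = F P + P F^T + D between observations."""
    t = t0
    while t < t1:
        h = min(dt, t1 - t)
        Fz = F_jac(m, t)
        m, P = m + h * f(m, t), P + h * (Fz @ P + P @ Fz.T + D(m, t))
        t += h
    return m, P

def update(m, P, y, H, R):
    """Update step: discrete-time Gaussian (Kalman) update at an observation."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return m + K @ (y - H @ m), P - K @ S @ K.T

# toy usage with a linear drift f(z, t) = F z and constant diffusion
F = np.array([[0.0, 1.0], [-1.0, -0.1]])
f = lambda m, t: F @ m
F_jac = lambda m, t: F
D = lambda m, t: 0.01 * np.eye(2)
H, R = np.array([[1.0, 0.0]]), np.array([[0.05]])
m, P = np.zeros(2), np.eye(2)
m, P = predict(m, P, f, F_jac, D, t0=0.0, t1=0.5)
m, P = update(m, P, y=np.array([0.3]), H=H, R=R)
```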
\eqref{eq:update-step} constitutes the \emph{update step} which is exactly the same as the update step in the Kalman filter for discrete-time linear Gaussian SSMs. The continuous-time prediction step together with the discrete-time update step is sometimes also referred to as the hybrid Kalman filter. As a byproduct, the update step also provides the conditional likelihood terms, $ p({\mathbf{y}}_k | {\mathcal{Y}}_{t_{k-1}}) = {\mathcal{N}}({\mathbf{y}}_k; \mathbf{H}{\mathbf{m}}_{k}^-, {\mathbf{S}}_k), $ which can be combined to give the likelihood of the observed sequence, $ p({\mathcal{Y}}_{t_{T}}) = p({\mathbf{y}}_0)\prod_{k=1}^T p({\mathbf{y}}_k | {\mathcal{Y}}_{t_{k-1}}). $ \section{Conclusion} \label{sec:conclusion} \vspace{-0.5em} In this work, we proposed a model for continuous-time modeling of irregularly-sampled time series. NCDSSM\ improves continuous-discrete SSMs with neural network-based parameterizations of dynamics, and modern inference and learning techniques. Through the introduction of auxiliary variables, NCDSSM\ enables efficient modeling of high-dimensional time series while allowing accurate continuous-discrete Bayesian inference of the dynamic states. Experiments on a variety of low- and high-dimensional datasets show that NCDSSM\ outperforms existing models on time series imputation and forecasting tasks. \section*{Acknowledgements} This research is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-016). We thank Richard Kurle, Fabian Falck, Alexej Klushyn, and Marcel Kollovieh for helpful discussions and feedback. \section{Introduction} State space models (SSMs) provide an elegant framework for modeling time series data. Combinations of SSMs with neural networks have proven effective for various time series tasks such as segmentation, imputation, and forecasting~\citep{krishnan2015deep,fraccaro2017disentangled,rangapuram2018deep,kurle2020deep,ansari2021deep}. However, most existing models are limited to the discrete time (i.e., uniformly sampled) setting, whereas data from various physical and industrial systems in the real world are sometimes only available at irregular (often sparse) intervals. Such systems are best modeled as continuous-time latent processes with irregularly-sampled discrete-time observations. Desirable features of such a time series model include modeling of stochasticity (uncertainty) in the system, and efficient and accurate inference of the system state from potentially high-dimensional observations (e.g., video frames). \begin{figure} \centering \includegraphics[width=0.80\linewidth]{gen_and_infer} \caption{(\textbf{Top}) Generative model of Neural Continuous-Discrete State Space Model. The bold red arrows indicate that the state, ${\mathbf{z}}_t$, evolves continuously in time. The auxiliary variables, ${\mathbf{a}}_k$, and observations, ${\mathbf{y}}_k$, are emitted at arbitrary discrete timesteps $t_k \in \{t_0, t_1, \dots, t_T\}$. (\textbf{Bottom}) Amortized inference for auxiliary variables and continuous-discrete Bayesian inference for states. 
Samples from the amortized variational distribution over auxiliary variables are used as pseudo-observations to condition and perform inference in the continuous-discrete SSM at the bottom.} \label{fig:gen-model-and-inference} \vspace{-2em} \end{figure} Recently, latent variable models based on neural differential equations have gained popularity for continuous-time modeling of time series~\citep{chen2018neural,rubanova2019latent,yildiz2019ode2vae,li2020scalable,liu2020learning,solin2021scalable}. However, these models suffer from limitations. The ordinary differential equation (ODE)-based models employ deterministic latent dynamics and/or encode the entire context window into an initial state, creating a restrictive bottleneck. On the other hand, stochastic differential equation (SDE)-based models use stochastic latent dynamics, but typically perform a variational approximation of the latent trajectories via posterior SDEs. The posterior SDEs incorporate new observations in an ad-hoc manner, potentially resulting in a disparity between the posterior and generative transition dynamics, and a non-Markovian state space. To address these issues, we propose the Neural Continuous-Discrete State Space Model (NCDSSM) that uses discrete-time observations to model continuous-time stochastic Markovian dynamics (Fig.~\ref{fig:gen-model-and-inference}). By using auxiliary variables, NCDSSM{} disentangles recognition of high-dimensional observations from dynamics (encoded by the state)~\citep{fraccaro2017disentangled,kurle2020deep}. We leverage the rich literature on continuous-discrete filtering theory~\citep{jazwinski1970stochastic}, which has remained relatively unexplored in the modern deep learning context. Our proposed inference algorithm only performs amortized variational inference for the auxiliary variables since they enable classic continuous-discrete Bayesian inference~\cite{jazwinski1970stochastic} for the states, using only the generative model. This obviates the need for posterior SDEs and allows incorporation of new observations via a principled Bayesian update, resulting in accurate state estimation. As a result, NCDSSM\ enables online prediction and naturally provides state uncertainty estimates. We propose three dynamics parameterizations for NCDSSM\ (linear time-invariant, non-linear and locally-linear) and a training objective that can be easily computed during inference. We evaluated NCDSSM\ on imputation and forecasting tasks on multiple benchmark datasets. Our experiments demonstrate that NCDSSM\ accurately captures the underlying dynamics of the time series and extrapolates it consistently beyond the training context, significantly outperforming baseline models. From a practical perspective, we found that NCDSSM\ is less sensitive to random initializations and requires fewer parameters than the baselines. In summary, the key contributions of this work are:% \begin{itemize}[noitemsep,nolistsep] \item NCDSSM, a continuous-discrete SSM with auxiliary variables for continuous-time modeling of irregularly-sampled (high dimensional) time series; \item An accurate inference algorithm that performs amortized inference for auxiliary variables and classic Bayesian inference for the dynamic states; \item An efficient learning algorithm and its stable implementation using square root factors; \item Experiments on multiple benchmark datasets demonstrating that NCDSSM\ learns accurate models of the underlying dynamics and extrapolates it consistently into the future. 
\end{itemize} \section{Neural Continuous-Discrete State Space Models} \vspace{-0.5em} In this section, we describe our proposed model: Neural Continuous-Discrete State Space Model (NCDSSM). We begin by formulating NCDSSM\ as a continuous-discrete SSM with auxiliary variables that serve as succinct representations of high-dimensional observations. % We then discuss how to perform efficient inference along with parameter learning and a stable implementation for NCDSSM. \subsection{Model Formulation} \label{sec:model-formulation} NCDSSM\ is a continuous-discrete SSM in which the latent state, ${\mathbf{z}}_t \in \mathbb{R}^m$, evolves in continuous time, emitting linear-Gaussian auxiliary variables, ${\mathbf{a}}_t \in \mathbb{R}^h$, which in turn emit observations, ${\mathbf{y}}_t \in \mathbb{R}^d$. Thus, NCDSSM\ possesses two types of latent variables: (a) the states that encode the hidden dynamics, and (b) the auxiliary variables that can be viewed as succinct representations of the observations. The inclusion of auxiliary variables offers two benefits; (i) it allows disentangling representation learning (or recognition) from dynamics (encoded by ${\mathbf{z}}_t$) and (ii) it enables the use of arbitrary decoders to model the conditional distribution $p({\mathbf{y}}_t | {\mathbf{a}}_t)$. We discuss this further in Section \ref{sec:inference}. Consider the case when we have observations available at discrete timesteps $t_0, \dots, t_T$. Following the graphical model in Fig.~\ref{fig:gen-model-and-inference}, the joint distribution over the states ${\mathbf{z}}_{0:T}$, the auxiliary variables ${\mathbf{a}}_{0:T}$, and the observations ${\mathbf{y}}_{0:T}$ factorises as \begin{align*} p_{\theta}({\mathbf{z}}_{0:T}, &{\mathbf{a}}_{0:T}, {\mathbf{y}}_{0:T}) = \prod_{k=0}^T p({\mathbf{y}}_{k} | {\mathbf{a}}_{k})p({\mathbf{a}}_{k} | {\mathbf{z}}_{k})p({\mathbf{z}}_{k} | {\mathbf{z}}_{{k-1}}), \end{align*} where ${\mathbf{x}}_{0:T}$ denotes the set $\{{\mathbf{x}}_{t_0}, \dots, {\mathbf{x}}_{t_T}\}$ and $p({\mathbf{z}}_{0} | {\mathbf{z}}_{{-1}}) = p({\mathbf{z}}_{0})$. We model the initial (prior) distribution of the states as a multivariate Gaussian distribution, \begin{equation} p({\mathbf{z}}_{0}) = {\mathcal{N}}({\mathbf{z}}_0; \bm{\mu}_0, \bm{\Sigma}_0), \end{equation} where $\bm{\mu}_0 \in \mathbb{R}^m$ and $\bm{\Sigma}_0 \succeq 0 \in \mathbb{R}^{m \times m}$ are the mean and covariance matrix, respectively. The transition distribution of the states, $p({\mathbf{z}}_{k} | {\mathbf{z}}_{{k-1}})$, follows the dynamics governed by the SDE in \eqref{eq:non-linear-sde}. The conditional emission distributions of the auxiliary variables and observations are modeled as multivariate Gaussian distributions given by, \begin{align} p({\mathbf{a}}_{k} | {\mathbf{z}}_{k}) &= {\mathcal{N}}({\mathbf{a}}_{k}; \mathbf{H}{\mathbf{z}}_{k}, {\mathbf{R}}),\label{eq:auxiliary-emission}\\ p({\mathbf{y}}_{k} | {\mathbf{a}}_{k}) &= {\mathcal{N}}({\mathbf{y}}_{k}; f^\mu({\mathbf{a}}_{k}), f^\Sigma({\mathbf{a}}_{k})),\label{eq:observation-emission} \end{align} where ${\mathbf{H}}~\in~\mathbb{R}^{h \times m}$, ${\mathbf{R}}~\succeq~0~\in~\mathbb{R}^{h \times h}$, and $f^\mu$ and $f^\Sigma$ are functions parameterized by neural networks that output the mean and the covariance matrix of the distribution, respectively. 
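As an illustration of how $f^\mu$ and $f^\Sigma$ might be realized in practice, the following PyTorch sketch parameterizes the emission distribution with a shared network and a diagonal covariance; the architecture, layer sizes, and activation choices are assumptions made here for exposition, not necessarily those used in the experiments.

```python
import torch
import torch.nn as nn

class GaussianEmission(nn.Module):
    """Maps an auxiliary variable a_k to the mean and a diagonal covariance
    matrix of p(y_k | a_k), i.e. one possible choice of f^mu and f^Sigma."""

    def __init__(self, aux_dim: int, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(aux_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * obs_dim))
        self.obs_dim = obs_dim

    def forward(self, a):
        mean, raw = self.net(a).split(self.obs_dim, dim=-1)
        var = nn.functional.softplus(raw) + 1e-4    # keep variances positive
        return mean, torch.diag_embed(var)          # f^mu(a), f^Sigma(a)
```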
We use $\theta$ to denote the parameters of the generative model, including SSM parameters $\{\bm{\mu}_0, \bm{\Sigma}_0, {\mathbf{f}}, {\mathbf{Q}}, {\mathbf{G}}, {\mathbf{H}}, {\mathbf{R}}\}$ and observation emission distribution parameters $\{f^\mu, f^\Sigma\}$. We propose three variants of NCDSSM, depending on the parameterization of ${\mathbf{f}}$ and ${\mathbf{G}}$ functions in \eqref{eq:non-linear-sde} that govern the dynamics of the state: \textbf{Linear time-invariant dynamics}~is obtained by parameterizing ${\mathbf{f}}$ and ${\mathbf{G}}$ as \begin{equation} {\mathbf{f}}({\mathbf{z}}_t, t) = {\mathbf{F}}{\mathbf{z}}_t \quad \text{and} \quad {\mathbf{G}}({\mathbf{z}}, t) = {\mathbf{I}},\label{eq:lti-dynamics} \end{equation} respectively, where ${\mathbf{F}} \in \mathbb{R}^{m \times m}$ is a Markov transition matrix and ${\mathbf{I}}$ is the $m$-dimensional identity matrix. In this case, Eqs.\ (\ref{eq:assumed-density-approx}) and (\ref{eq:linearization-approx}) become exact and the ODEs in \eqref{eq:linearization-approx} can be solved analytically using matrix exponentials (cf.~Appendix~\ref{app:algorithms}). Unfortunately, the restriction of linear dynamics is limiting for practical applications. We denote this linear time-invariant variant as NCDSSM-LTI. \textbf{Non-linear dynamics}~is obtained by parameterizing ${\mathbf{f}}$ and ${\mathbf{G}}$ using neural networks. With sufficiently powerful neural networks, this parameterization is flexible enough to model arbitrary non-linear dynamics. However, the neural networks need to be carefully regularized (cf. Appendix~\ref{app:stable-implementation}) to ensure optimization and inference stability. Inference in this variant also requires computation of the Jacobian of a neural network for solving \eqref{eq:linearization-approx}. We denote this non-linear variant as NCDSSM-NL. \textbf{Locally-linear dynamics}~is obtained by parameterizing ${\mathbf{f}}$ and ${\mathbf{G}}$ as \begin{equation} {\mathbf{f}}({\mathbf{z}}_t, t) = {\mathbf{F}}({\mathbf{z}}_t){\mathbf{z}}_t \quad \text{and} \quad {\mathbf{G}}({\mathbf{z}}, t) = {\mathbf{I}},\label{eq:locally-linear-dynamics} \end{equation} respectively, where the matrix ${\mathbf{F}}({\mathbf{z}}_t) \in \mathbb{R}^{m \times m}$ is given by a convex combination of $K$ base matrices $\{{\mathbf{F}}^{(j)}\}_{j=1}^K$, \begin{equation} {\mathbf{F}}({\mathbf{z}}_t) = \sum_{j=1}^K \alpha^{(j)}({\mathbf{z}}_t){\mathbf{F}}^{(j)},\label{eq:locally-linear} \end{equation} and the combination weights, $\alpha({\mathbf{z}}_t)$, are given by \begin{equation} \alpha({\mathbf{z}}_t) = \mathrm{softmax}(g({\mathbf{z}}_t)),\label{eq:locally-linear-mixture-weights} \end{equation} where $g$ is a neural network. Such parameterizations smoothly interpolate between linear SSMs and can be viewed as ``soft'' switching SSMs. Locally-linear dynamics has previously been used for discrete-time SSMs~\citep{karl2016deep,klushyn2021latent}; we extend it to the continuous time setting by evaluating \eqref{eq:locally-linear} continuously in time. Unlike non-linear dynamics, this parameterization does not require careful regularization and its flexibility can be controlled by choosing the number of base matrices, $K$. Furthermore, the Jacobian of ${\mathbf{f}}$ in \eqref{eq:linearization-approx} can be approximated as ${\mathbf{F}}({\mathbf{m}}_t)$, avoiding the expensive computation of the Jacobian of a neural network~\citep{klushyn2021latent}. We denote this locally-linear variant as NCDSSM-LL. 
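A minimal PyTorch sketch of the locally-linear drift is given below; the mixture network $g$, the number of base matrices $K$, and the layer sizes are illustrative hyperparameters rather than the settings used in the experiments.

```python
import torch
import torch.nn as nn

class LocallyLinearDrift(nn.Module):
    """Drift f(z, t) = F(z) z, where F(z) is a softmax-weighted
    combination of K learnable base matrices."""

    def __init__(self, state_dim: int, K: int = 8, hidden: int = 32):
        super().__init__()
        self.bases = nn.Parameter(0.01 * torch.randn(K, state_dim, state_dim))
        self.g = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, K))

    def F(self, z):
        alpha = torch.softmax(self.g(z), dim=-1)                   # combination weights
        return torch.einsum('...k,kij->...ij', alpha, self.bases)  # F(z)

    def forward(self, z, t):
        return torch.einsum('...ij,...j->...i', self.F(z), z)      # f(z, t) = F(z) z
```

As noted above, the Jacobian required in the prediction step can then be approximated by ${\mathbf{F}}({\mathbf{m}}_t)$, avoiding differentiation through $g$.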
\subsection{Inference} \label{sec:inference} Exact inference in the model described above is intractable when the dynamics is non-linear and/or the observation emission distribution, $p({\mathbf{y}}_{k} | {\mathbf{a}}_{k})$, is modeled by arbitrary non-linear functions. In the modern deep learning context, a straightforward approach would be to approximate the posterior distribution over the states and auxiliary variables, $q({\mathbf{z}}_{0:T}, {\mathbf{a}}_{0:T} | {\mathbf{y}}_{0:T})$, using recurrent neural networks (e.g., using ODE-RNNs when modeling in continuous time). However, such parameterizations have been shown to lead to poor optimization of the transition model in discrete-time SSMs, leading to inaccurate learning of system dynamics~\citep{klushyn2021latent}. Alternatively, directly applying continuous-discrete inference techniques to non-linear emission models requires computation of Jacobian matrices and inverses of $d \times d$ matrices (cf. Eq.~\ref{eq:update-step}) which scales poorly with the data dimensionality. The introduction of linear-Gaussian auxiliary variables offers a middle ground between the two options above. It allows efficient use of continuous-discrete Bayesian inference techniques for the inference of states, avoiding fully amortized inference for auxiliary variables and states. Concretely, we split our inference procedure into two inference steps: (i) for auxiliary variables and (ii) for states. \paragraph{Inference for auxiliary variables.} We perform amortized inference for the auxiliary variables, factorizing the variational distribution as, \begin{equation} q_\phi({\mathbf{a}}_{0:T} | {\mathbf{y}}_{0:T}) = \prod_{k=0}^T q({\mathbf{a}}_{k} | {\mathbf{y}}_{k}),\label{eq:auxiliary-inference} \end{equation} where $q({\mathbf{a}}_{k} | {\mathbf{y}}_{k}) = {\mathcal{N}}({\mathbf{a}}_{k}; f_{\phi}^\mu({\mathbf{y}}_{k}), f_{\phi}^\Sigma({\mathbf{y}}_{k}))$ and $f_{\phi}^\mu$, $f_{\phi}^\Sigma$ are neural networks. This can be viewed as the recognition network in a variational autoencoder, per timestep. This flexible factorization permits use of arbitrary recognition networks, thereby allowing arbitrary non-linear emission distributions, $p({\mathbf{y}}_{k} | {\mathbf{a}}_{k})$. \paragraph{Inference for states.} Given the variational distribution $q_\phi({\mathbf{a}}_{0:T} | {\mathbf{y}}_{0:T})$ in \eqref{eq:auxiliary-inference}, we can draw samples, $\tilde{{\mathbf{a}}}_{0:T} \sim q_\phi({\mathbf{a}}_{0:T} | {\mathbf{y}}_{0:T})$, from it. Viewing $\tilde{{\mathbf{a}}}_{0:T}$ as pseudo-observations, we treat the remaining SSM (i.e., the states and auxiliary variables) separately. Specifically, conditioned on the auxiliary variables, $\tilde{{\mathcal{A}}}_\tau = \{\tilde{{\mathbf{a}}}_k: t_k \leq \tau\}$, we can answer inference queries over the states ${\mathbf{z}}_t$ in continuous time. This does not require additional inference networks and can be performed only using the generative model via classic continuous-discrete Bayesian inference techniques in Section~\ref{sec:background}. To infer the filtered density, $p_t({\mathbf{z}}_t | \tilde{{\mathcal{A}}}_t)$, we can use \eqref{eq:linearization-approx} for the prediction step and \eqref{eq:update-step} for the update step, replacing ${\mathbf{y}}_k$ by $\tilde{{\mathbf{a}}}_k$. Similarly, we can use \eqref{eq:smooth-linearization-approx} (Appendix) to infer the smoothed density, $p_t({\mathbf{z}}_t | \tilde{{\mathcal{A}}}_T)$. 
As the inference of states is now conditioned on auxiliary variables, only the inversion of $h \times h$ matrices is required which is computationally feasible as ${\mathbf{a}}_k$ generally has lower dimensionality than ${\mathbf{y}}_k$. Notably, this inference scheme does not require posterior SDEs for inference (as in other SDE-based models; cf. Section~\ref{sec:related-work}) and does not suffer from poor optimization of the transition model as we employ the (generative) transition model for the inference of states. \subsection{Learning} \label{sec:learning} The parameters of the generative model $\{\theta\}$ and the inference network $\{\phi\}$ can be jointly optimized by maximizing the following evidence lower bound (ELBO) of the log-likelihood, $\log p_{\theta}({\mathbf{y}}_{0:T})$, \begin{align} &\log p_{\theta}({\mathbf{y}}_{0:T})\nonumber\\ &\:\geq \mathbb{E}_{q_\phi({\mathbf{a}}_{0:T} | {\mathbf{y}}_{0:T})}\left[\log \frac{\prod_{k=0}^T p_\theta({\mathbf{y}}_{k} | {\mathbf{a}}_{k})p_\theta({\mathbf{a}}_{0:T})}{\prod_{k=0}^T q_\phi({\mathbf{a}}_{k} | {\mathbf{y}}_{k})}\right]\nonumber\\ &\: =: {\mathcal{L}}_{\mathrm{ELBO}}(\theta,\phi). \end{align} The distributions $p_\theta({\mathbf{y}}_{k} | {\mathbf{a}}_{k})$ and $q_\phi({\mathbf{a}}_{k} | {\mathbf{y}}_{k})$ in ${\mathcal{L}}_{\mathrm{ELBO}}$ are immediately available via the emission and recognition networks, respectively. What remains is the computation of $p_\theta({\mathbf{a}}_{0:T})$. Fortunately, $p_\theta({\mathbf{a}}_{0:T})$ can be computed as a byproduct of the inference (filtering) procedure described in Section~\ref{sec:inference}. The distribution factorizes as $$ p({\mathbf{a}}_{0:T}) = p({\mathbf{a}}_0)\prod_{k=1}^T p({\mathbf{a}}_k | {\mathcal{A}}_{t_{k-1}}), $$ where $p({\mathbf{a}}_k | {\mathcal{A}}_{t_{k-1}}) = {\mathcal{N}}({\mathbf{a}}_k; \mathbf{H}{\mathbf{m}}_{k}^-, {\mathbf{S}}_k)$, and ${\mathbf{m}}_{k}^-$ and ${\mathbf{S}}_k$ are computed during the prediction and update steps, respectively. The $p_\theta({\mathbf{a}}_{0:T})$ term can be viewed as a ``prior'' over the auxiliary variables. However, unlike the fixed standard Gaussian prior in a vanilla variational autoencoder, $p_\theta({\mathbf{a}}_{0:T})$ is a learned prior given by the marginalization of the states, ${\mathbf{z}}_t$, from the underlying SSM. Algorithm \ref{alg:learning} summarizes the learning algorithm for a single time series; in practice, mini-batches of time series are sampled from the dataset. \input{algorithms/learning.tex} \subsection{Stable Implementation} \label{sec:stable-implementation} A naive implementation of the numerical integration of ODEs (Eqs.~\ref{eq:linearization-approx} and \ref{eq:smooth-linearization-approx}) and other operations \eqrefp{eq:update-step} results in unstable training and crashing due to violation of the positive definite constraint for the covariance matrices. Commonly employed tricks such as symmetrization, $ {\mathbf{P}} = ({\mathbf{P}} + {\mathbf{P}}^\top)/2, $ and addition of a small positive number ($\epsilon$) to the diagonal elements, $ {\mathbf{P}} = {\mathbf{P}} + \epsilon{\mathbf{I}}, $ did not solve these training issues. Therefore, we implemented our algorithms in terms of square root (Cholesky) factors, which proved critical to the stable training of NCDSSM. Several square root factors' based inference algorithms have been previously proposed~\citep[Ch.~12]{zonov2019kalman,jorgensen2007computationally,kailath2000linear}. 
In the following, we discuss our implementation which is based on \citet{zonov2019kalman}. Further discussion on implementation stability, particularly in the case of non-linear dynamics, can be found in Appendix~\ref{app:stable-implementation}. We begin with a lemma that shows that the square root factor of the sum of two matrices with square root factors can be computed using $\mathrm{QR}$ decomposition. \begin{restatable}{lemma}{sumofsqrts} \label{lemma:sum-sqrt-factors} Let ${\mathbf{A}}$ and ${\mathbf{B}}$ be two $n \times n$ matrices with square root factors ${\mathbf{A}}^{1/2}$ and ${\mathbf{B}}^{1/2}$, respectively. The matrix ${\mathbf{C}} = {\mathbf{A}} + {\mathbf{B}}$ also has a square root factor, ${\mathbf{C}}^{1/2}$, given by \begin{equation*} \Theta, \begin{bmatrix}{\mathbf{C}}^{1/2} & \mathbf{0}_{n \times n}\end{bmatrix}^\top = \mathrm{QR}\left(\begin{bmatrix}{\mathbf{A}}^{1/2} & {\mathbf{B}}^{1/2}\end{bmatrix}^\top\right), \end{equation*} where $\Theta$ is the orthogonal $\mathrm{Q}$ matrix given by $\mathrm{QR}$ decomposition and $\mathbf{0}_{n \times n}$ is an $n \times n$ matrix of zeros. \end{restatable} \paragraph{Prediction step.} The solution of matrix differential equations of the form in \eqref{eq:cov-predict} --- called Lyapunov differential equations --- over $[t_0, t_1]$ is given by~\citep[Corollary~1.1.6]{abou2012matrix} \begin{equation} {\mathbf{P}}_{t_1} = \bm{\Phi}_{t_1}{\mathbf{P}}_{t_0}\bm{\Phi}_{t_1}^\top + \int_{t_0}^{t_1}\bm{\Phi}_{t}{\mathbf{D}}_t\bm{\Phi}_{t}^\top dt,\label{eq:lyapunov-soln} \end{equation} where $\bm{\Phi}_t$, called the fundamental matrix, is defined by \begin{equation} \frac{d\bm{\Phi}_{t}}{dt} = {\mathbf{F}}_{{\mathbf{z}}}({\mathbf{m}}_t, t)\bm{\Phi}_{t} \:\: \text{and} \:\: \bm{\Phi}_{t_0} = {\mathbf{I}}.\label{eq:fundamental-ode} \end{equation} This initial value problem can be solved using an off-the-shelf ODE solver. Let $\{\tilde{\bm{\Phi}}_{1} = {\mathbf{I}}, \tilde{\bm{\Phi}}_{2}, \dots, \tilde{\bm{\Phi}}_{n}\}$ be intermediate solutions of \eqref{eq:fundamental-ode} given by an ODE solver with step size $\eta$, \eqref{eq:lyapunov-soln} can be approximated as \begin{align} {\mathbf{P}}_{t_1} &\approx \tilde{\bm{\Phi}}_{n}{\mathbf{P}}_{t_0}\tilde{\bm{\Phi}}_{n}^\top\nonumber\\ &+ \frac{\eta}{2}\left(\tilde{\bm{\Phi}}_{1}{\mathbf{D}}_{1}\tilde{\bm{\Phi}}_{1}^\top + 2\tilde{\bm{\Phi}}_{2}{\mathbf{D}}_{2}\tilde{\bm{\Phi}}_{2}^\top + \dots + \tilde{\bm{\Phi}}_{n}{\mathbf{D}}_{n}\tilde{\bm{\Phi}}_{n}^\top\right).\label{eq:cov-predict-approx} \end{align} The additions in \eqref{eq:cov-predict-approx} are performed using Lemma \ref{lemma:sum-sqrt-factors} with square root factors $\tilde{\bm{\Phi}}_{n}{\mathbf{P}}_{t_0}^{1/2}$ and $\{\tilde{\bm{\Phi}}_{j}{\mathbf{D}}_{j}^{1/2}\}_{j=1}^{n}$. \paragraph{Update step.} Using similar arguments as in the proof of Lemma \ref{lemma:sum-sqrt-factors} (cf. Appendix~\ref{app:stable-implementation} for details), the update step \eqrefp{eq:update-step} can be performed by the $\mathrm{QR}$ decomposition of the square root factor \begin{equation} \begin{bmatrix} {\mathbf{R}}^{1/2} & {\mathbf{H}}({\mathbf{P}}_k^{-})^{1/2}\\ \mathbf{0}_{m \times d} & ({\mathbf{P}}_k^{-})^{1/2} \end{bmatrix}^\top.\label{eq:update-sqrt-factor} \end{equation} Let $ \begin{bmatrix} {\mathbf{X}} & \mathbf{0}\\ {\mathbf{Y}} & {\mathbf{Z}} \end{bmatrix}^\top $ be the upper triangular $\mathrm{R}$ matrix obtained from the $\mathrm{QR}$ decomposition of (\ref{eq:update-sqrt-factor}). 
The square root factor of the updated covariance matrix, ${\mathbf{P}}_k^{1/2}$, and the Kalman gain matrix, ${\mathbf{K}}_k$, are then given by ${\mathbf{P}}_k^{1/2} = {\mathbf{Z}}$ and ${\mathbf{K}}_k = {\mathbf{Y}}{\mathbf{X}}^{-1}$, respectively. \section{Related Work} \label{sec:related-work} \vspace{-0.5em} Several previous works~\citep{chung2015recurrent,krishnan2015deep,karl2016deep,krishnan2017structured,doerr2018probabilistic} have proposed SSM-like models for discrete-time sequential data, trained via amortized variational inference. Unlike NCDSSM, these models approximate sequential Bayesian inference (i.e., filtering and smoothing) via deterministic RNNs and are limited to the discrete time setting. Bayesian inference for a subset of latent variables combined with amortized inference for others has previously been studied for SSMs. SNLDS~\citep{dong2020collapsed} and REDSDS~\citep{ansari2021deep} perform amortized inference for the states and exact inference for discrete random variables (switches and duration counts) in switching SSMs. KVAE~\citep{fraccaro2017disentangled}, EKVAE~\citep{klushyn2021latent} and ARSGLS~\citep{kurle2020deep} introduce auxiliary variables and perform classic Bayesian filtering and smoothing for the state variables, similar to NCDSSM. However, these models use specific parameterizations of state dynamics and operate on discrete-time sequential data. In contrast, we propose a general framework for continuous-time modeling of irregularly-sampled time series with multiple possible parameterizations of the dynamics.% Since the introduction of NeuralODE~\citep{chen2018neural}, various models based on neural differential equations have been proposed for continuous-time modeling of time series~\citep{rubanova2019latent,de2019gru,yildiz2019ode2vae,li2020scalable,liu2020learning,kidger2020neural,solin2021scalable}. Amongst these, we focus on the latent variable models as they are closely related to SSMs. LatentODE~\citep{rubanova2019latent} encodes the entire context window into an initial state using an encoder (e.g., ODE-RNN) and uses a NeuralODE to model latent dynamics. ODE2VAE~\citep{yildiz2019ode2vae} decomposes the latent state into position and velocity components to explicitly model the acceleration and parameterize the ODE dynamics by Bayesian neural networks to account for uncertainty. LatentSDE~\citep{li2020scalable} uses a posterior SDE in the latent space to infer the latent dynamics together with a prior (generative) SDE in a variational setup. \citet{solin2021scalable} proposed a variant of LatentSDE trained by exploiting the Gaussian assumed density approximation of the non-linear SDE. VSDN~\citep{liu2020learning} uses ODE-RNNs to provide historical information about the time series to the SDE drift and diffusion functions. These existing ODE-based models either use deterministic latent dynamics and/or create a restrictive bottleneck by encoding the entire time series into an initial state. The SDE-based models require posterior SDEs to infer the dynamics; new observations are incorporated in an ad-hoc fashion, potentially resulting in a disparity between posterior and generative dynamics and a non-Markovian state space. 
Contrary to previous models, NCDSSM\ uses (i) stochastic Markovian dynamics, (ii) incorporates observations via a principled Bayesian update, (iii) disentangles recognition from dynamics using auxiliary variables and (iv) performs continuous-discrete Bayesian inference for the state variables (dynamics), obviating the need for posterior SDEs. \section{Experiments} \vspace{-0.5em} In this section, we present empirical results on time series imputation and forecasting tasks. Our primary focus was to investigate the models' ability to capture the underlying dynamics of the time series, gauged by the accuracy of long-term forecasts beyond the training context. We experimented with the three variants of our model described in Section \ref{sec:model-formulation}: NCDSSM-LTI, NCDSSM-NL, and NCDSSM-LL. Our main baselines were LatentODE and LatentSDE, two popular continuous-time latent variable models with deterministic and stochastic dynamics, respectively. We also compared NCDSSM\ against several other baselines for individual experiments. We first discuss experiment results on the low-dimensional bouncing ball and damped pendulum datasets, then move to higher dimensional settings: walking sequences from the CMU Motion Capture (MoCap) dataset, the USHCN daily climate dataset, and two 32x32 dimensional video datasets (Box and Pong). \begin{table*}[!ht] \scriptsize \centering \caption{Imputation and forecasting results for bouncing ball and damped pendulum datasets averaged over 50 sample trajectories. Mean \textpm\ standard deviation are computed over 5 independent runs.} \label{tab:low-dim-results} \resizebox{\textwidth}{!}{\begin{tabular}{clccccccc} \toprule \multirow{2}{*}{Dataset} & \multirow{2}{*}{Model} & \multicolumn{3}{c}{Imputation MSE ($\downarrow$) (\% Missing)} & \multicolumn{4}{c}{Forecast MSE ($\downarrow$) (\% Missing)}\\ \cmidrule(lr){3-5} \cmidrule(lr){6-9} & & 30\% & 50\% & 80\% & 0\% & 30\% & 50\% & 80\% \\ \midrule \multirow{5}{*}[-0.4ex]{\rotatebox{90}{\parbox[c]{1.5cm}{\centering Bouncing Ball}}} & LatentODE~{\scriptsize\citep{rubanova2019latent}} & 0.007 \textpm\ 0.000 & 0.008 \textpm\ 0.001 & 0.011 \textpm\ 0.000 & 0.386 \textpm\ 0.025 & 0.489 \textpm\ 0.133 & 0.422 \textpm\ 0.053 & 0.412 \textpm\ 0.048 \\ & LatentSDE~{\scriptsize\citep{li2020scalable}} & \textbf{0.006 \textpm\ 0.000} & 0.007 \textpm\ 0.000 & 0.011 \textpm\ 0.001 & 0.408 \textpm\ 0.043 & 1.209 \textpm\ 1.115 & 1.567 \textpm\ 2.263 & 0.352 \textpm\ 0.077 \\ \cmidrule{2-9} & NCDSSM-LTI\ & 0.020 \textpm\ 0.001 & 0.026 \textpm\ 0.001 & 0.067 \textpm\ 0.002 & 0.592 \textpm\ 0.106 & 0.557 \textpm\ 0.014 & 0.556 \textpm\ 0.025 & 0.555 \textpm\ 0.022 \\ & NCDSSM-NL\ & \textbf{0.006 \textpm\ 0.000} & \textbf{0.006 \textpm\ 0.000} & \textbf{0.007 \textpm\ 0.000} & \textbf{0.037 \textpm\ 0.018} & 0.036 \textpm\ 0.007 & \textbf{0.041 \textpm\ 0.007} & 0.115 \textpm\ 0.029 \\ & NCDSSM-LL\ & \textbf{0.006 \textpm\ 0.000} & \textbf{0.006 \textpm\ 0.000} & 0.008 \textpm\ 0.001 & \textbf{0.037 \textpm\ 0.028} & \textbf{0.034 \textpm\ 0.016} & 0.049 \textpm\ 0.034 & \textbf{0.076 \textpm\ 0.017} \\ \midrule \multirow{5}{*}[-0.4ex]{\rotatebox{90}{\parbox[c]{1.5cm}{\centering Damped \mbox{Pendulum}}}} & LatentODE~{\scriptsize\citep{rubanova2019latent}} & 0.151 \textpm\ 0.002 & 0.155 \textpm\ 0.002 & 0.206 \textpm\ 0.013 & 0.097 \textpm\ 0.042 & 0.117 \textpm\ 0.001 & 0.119 \textpm\ 0.001 & 0.148 \textpm\ 0.007 \\ & LatentSDE~{\scriptsize\citep{li2020scalable}} & 0.092 \textpm\ 0.076 & 0.148 \textpm\ 0.001 & 0.229 \textpm\ 
0.001 & 0.046 \textpm\ 0.046 & 0.084 \textpm\ 0.058 & 0.147 \textpm\ 0.020 & 0.357 \textpm\ 0.096 \\ \cmidrule{2-9} & NCDSSM-LTI\ & 0.036 \textpm\ 0.001 & 0.057 \textpm\ 0.001 & 0.120 \textpm\ 0.002 & 0.282 \textpm\ 0.084 & 1.017 \textpm\ 1.363 & 1.527 \textpm\ 1.440 & 0.231 \textpm\ 0.050 \\ & NCDSSM-NL\ & \textbf{0.008 \textpm\ 0.000} & \textbf{0.011 \textpm\ 0.000} & \textbf{0.033 \textpm\ 0.002} & \textbf{0.011 \textpm\ 0.004} & 0.011 \textpm\ 0.003 & \textbf{0.012 \textpm\ 0.003} & \textbf{0.034 \textpm\ 0.019} \\ & NCDSSM-LL\ & \textbf{0.008 \textpm\ 0.000} & \textbf{0.011 \textpm\ 0.000} & 0.037 \textpm\ 0.003 & 0.025 \textpm\ 0.030 & \textbf{0.010 \textpm\ 0.001} & 0.020 \textpm\ 0.008 & 0.055 \textpm\ 0.007 \\ \bottomrule \end{tabular}} \vspace{-2.5em} \end{table*} \begin{figure} \includegraphics[width=1\linewidth]{pendulum_all_0.8.pdf} \vspace{-2em} \caption{Predictions from different models on the damped pendulum dataset in the 80\% missing data setting. The ground truth is shown using dashed lines with observed points in the context window (gray shaded region) shown as filled circles. The vertical dashed gray line marks the beginning of the forecast horizon. Solid lines indicate median predictions with 90\% prediction intervals shaded around them. The purple and orange colors indicate observation dimensions. NCDSSM-NL\ and NCDSSM-LL\ are significantly better at forecasting compared to the baselines.} \label{fig:low-dim-forecasts} \vspace{-2em} \end{figure} \vspace{-0.5em} \subsection{Bouncing Ball and Damped Pendulum} \label{sec:exp-low-dim} \vspace{-0.5em} The bouncing ball and damped pendulum datasets have known ground truth dynamics, which facilitates quality assessment of the dynamics learned by a given model. For details on these datasets, please refer to Appendix~\ref{app:datasets}. In brief, the univariate bouncing ball dataset exhibits piecewise-linear dynamics, whilst bivariate damped pendulum dataset~\citep{karl2016deep,kurle2020deep} exhibits non-linear latent dynamics. We trained all the models on 10s/5s sequences (with a discretization of 0.1s) for bouncing ball/damped pendulum with 0\%, 30\%, 50\% and 80\% timesteps missing at random to simulate irregularly-sampled data. The models were evaluated on imputation of the missing timesteps and forecasts of 20s/10s beyond the training regime for bouncing ball/damped pendulum. Table~\ref{tab:low-dim-results} reports the imputation and forecast mean squared error (MSE) for different missing data settings. In summary, the NCDSSM\ models with non-linear and locally-linear dynamics (NCDSSM-NL\ and NCDSSM-LL) perform well across datasets, settings, and random initializations, significantly outperforming the baselines. Furthermore, for these low-dimensional datasets, learning latent representations in the form of auxiliary variables is not required and we can set the recognition and emission functions in \eqref{eq:auxiliary-inference} and \eqref{eq:observation-emission} to identity functions. This results in NCDSSM\ models requiring 2-5 times fewer parameters than LatentODE and LatentSDE (cf. Table~\ref{tab:parameter-comparison} in the Appendix). Fig.~\ref{fig:low-dim-forecasts} shows example predictions from the best performing run of every model for 80\% missing data for the pendulum (cf.~Appendix~\ref{app:additional-results} for other settings). NCDSSM-NL\ and NCDSSM-LL\ generates far better predictions both inside and outside the context window compared to the baselines. 
Ordinary least squares (OLS) goodness-of-fit results in Table~\ref{tab:ols-pendulum} (Appendix) suggest that this performance can be attributed to our models having learnt the correct dynamics; latent states from NCDSSM-NL\ and NCDSSM-LL\ are highly correlated with the ground truth angle and angular velocity for all missingness scenarios. In other words, the models have learnt a Markovian state space which is informative about the dynamics at a specific time. \vspace{-0.5em} \subsection{CMU Motion Capture (Walking)} \label{sec:exp-mocap} \vspace{-0.5em} \begin{table} \scriptsize \centering \caption{Forecasting results for the CMU MoCap walking dataset averaged over 50 sample trajectories with 95\% prediction interval based on the $t$-statistic in parentheses. $^\dagger$Baseline results from \citet{solin2021scalable}.} \label{tab:mocap-results} \resizebox{\columnwidth}{!}{\begin{tabular}{lrr} \toprule \multirow{2}{*}{Model} & \multicolumn{2}{c}{MSE ($\downarrow$)}\\ \cmidrule{2-3} & \multicolumn{1}{c}{$^\dagger$Setup~1} & \multicolumn{1}{c}{Setup~2}\\ \midrule {np}ODE~{\scriptsize\citep{heinonen2018learning}} & 22.96 & \multicolumn{1}{c}{--} \\ {Neural}ODE~{\scriptsize\citep{chen2018neural}} & 22.49 (0.88) & \multicolumn{1}{c}{--} \\ ODE\textsuperscript{2}VAE-KL~{\scriptsize\citep{yildiz2019ode2vae}} & 8.09 (1.95) & \multicolumn{1}{c}{--}\\ LatentODE~{\scriptsize\citep{rubanova2019latent}} & 5.98 (0.28) & 31.62 (0.05) \\ LatentSDE~{\scriptsize\citep{li2020scalable}} & \textbf{4.03 (0.20)} & 9.52 (0.21) \\ LatentApproxSDE~{\scriptsize\citep{solin2021scalable}} & 7.55 (0.05) & \multicolumn{1}{c}{--} \\ \midrule NCDSSM-LTI\ & 13.90 (0.02) & 5.22 (0.02) \\ NCDSSM-NL\ & 5.69 (0.01) & 6.73 (0.02) \\ NCDSSM-LL\ & 9.96 (0.01) & \textbf{4.74 (0.01)} \\ \bottomrule \end{tabular}} \vspace{-2em} \end{table} This dataset comprises walking sequences of subject 35 from the CMU MoCap database containing joint angles of subjects performing everyday activities. We used a preprocessed version of the dataset from \citet{yildiz2019ode2vae} that has 23 50-dimensional sequences of length 300. \begin{figure*} \centering \includegraphics[width=1.0\linewidth]{pong_pred_v2.png} \vspace{-2em} \caption{Sample predictions from NCDSSM-NL\ on the Pong dataset. The top row is the ground truth with some missing observations in the context window. The next two rows show trajectories sampled from NCDSSM-NL{} upto 20 forecast steps. NCDSSM-NL\ is able to both impute and forecast accurately. Best viewed zoomed-in on a computer. More examples in Appendix~\ref{app:additional-results}.} \label{fig:pymunk-predictions} \vspace{-2em} \end{figure*} We tested the models under two setups. Setup 1~~\citep{yildiz2019ode2vae,li2020scalable,solin2021scalable} involves training on complete 300 timestep sequences from the training set and using only the first 3 timesteps as context to predict the remaining 297 timesteps during test time. Although challenging, this setup does not evaluate the model's performance beyond the training context. % Thus, we propose Setup 2 in which we train the model only using the first 200 timesteps. During test time, we give the first 100 timesteps as context and predict the remaining 200 timesteps. The forecast MSE results for both setups are reported in Table~\ref{tab:mocap-results}. NCDSSM-NL\ performs better than all baselines except LatentSDE on Setup~1 while NCDSSM\ models perform significantly better than baselines on Setup~2. 
This showcases NCDSSM's ability to correctly model the latent dynamics, aiding accurate long-term predictions beyond the training context. \vspace{-0.8em} \subsection{USHCN Climate Indicators} \label{sec:exp-ushcn} \vspace{-0.5em} We evaluated the models on the United States Historical Climatology Network (USHCN) dataset that comprises measurements of five climate indicators across the United States. The preprocessed version of this dataset from \citet{de2019gru} contains sporadic time series (i.e., with measurements missing \emph{both} over the time and feature axes) from 1,114 meteorological stations over 4 years. Following \citet{de2019gru}, we trained the models on sequences from the training stations and evaluated them on the task of predicting the next 3 measurements given the first 3 years as context from the held-out test stations. The results in Table~\ref{tab:ushcn-results} show that NCDSSM-NL\ outperforms all the baselines with NCDSSM-LTI\ and NCDSSM-LL\ performing better than most of the baselines. \vspace{-0.5em} \subsection{Pymunk Physical Environments} \label{sec:exp-pymunk} \vspace{-0.5em} Finally, we evaluated the models on two high-dimensional (video) datasets of physical environments used in \citet{fraccaro2017disentangled}, simulated using the Pymunk Physics engine~\cite{Blomqvist_Pymunk_2022}: Box and Pong. The box dataset consists of videos of a ball moving in a 2-dimensional box and the pong dataset consists of videos of a Pong-like environment where two paddles move to keep a ball in the frame at all times. Each frame is a 32x32 binary image. \begin{table} \scriptsize \centering \caption{Forecasting results for the USHCN climate dataset. Mean \textpm\ standard deviation are computed over 5 folds as described in \citet{de2019gru}. $^\dagger$Results from \citet{de2019gru}. 
$^\ddagger$Results from \citet{liu2020learning}.} \label{tab:ushcn-results} \begin{tabular}{lc} \toprule Model & \multicolumn{1}{c}{MSE ($\downarrow$)}\\ \midrule $^\dagger$NeuralODE-VAE~{\scriptsize\citep{chen2018neural}} & 0.83 \textpm\ 0.10 \\ $^\dagger$SequentialVAE~{\scriptsize\citep{krishnan2015deep}} & 0.83 \textpm\ 0.07 \\ $^\dagger$GRU-D~{\scriptsize\citep{che2018recurrent}} & 0.53 \textpm\ 0.06 \\ $^\dagger$T-LSTM~{\scriptsize\citep{baytas2017patient}} & 0.59 \textpm\ 0.11 \\ $^\dagger$GRUODE-B{\scriptsize~\citep{de2019gru}} & 0.43 \textpm\ 0.07\\ $^\ddagger$ODE-RNN~{\scriptsize\citep{rubanova2019latent}} & 0.39 \textpm\ 0.06 \\ $^\ddagger$LatentODE~{\scriptsize\citep{rubanova2019latent}} & 0.77 \textpm\ 0.09 \\ $^\ddagger$LatentSDE~{\scriptsize\citep{li2020scalable}} & 0.74 \textpm\ 0.11 \\ $^\ddagger$VSDN-F (IWAE)~{\scriptsize\citep{liu2020learning}} & 0.37 \textpm\ 0.06 \\ \midrule NCDSSM-LTI\ & 0.38 \textpm\ 0.07 \\ NCDSSM-NL\ & \textbf{0.34 \textpm\ 0.06} \\ NCDSSM-LL\ & 0.37 \textpm\ 0.06 \\ \bottomrule \end{tabular} \vspace{-2.2em} \end{table} \begin{table} \scriptsize \centering \caption{Forecasting results for the Box and Pong datasets averaged over 16 sample trajectories.} \label{tab:pymunk-results} \begin{tabular}{lcc} \toprule \multirow{2}{*}{Model} & \multicolumn{2}{c}{EMD ($\downarrow$)}\\ \cmidrule{2-3} & Box & Pong\\ \midrule LatentODE~{\scriptsize\citep{rubanova2019latent}} & 1.792 & 4.543 \\ LatentSDE~{\scriptsize\citep{li2020scalable}} & 1.925 & 3.505 \\ \midrule NCDSSM-LTI\ & 1.685 & 3.265 \\ NCDSSM-NL\ & 0.692 & \textbf{1.714} \\ NCDSSM-LL\ & \textbf{0.632} & 1.891 \\ \bottomrule \end{tabular} \vspace{-2.2em} \end{table} We trained the models on sequences of 20 frames with 20\% of these frames randomly dropped. At test time, the models were evaluated on forecasts of 40 frames beyond the training context. For evaluation, we treat each image as a probability distribution on the XY-plane and report the earth mover's distance (EMD) between the ground truth and predicted images, averaged over the forecast horizon, in Table~\ref{tab:pymunk-results}. NCDSSM-NL\ and NCDSSM-LL\ significantly outperform baseline models on both box and pong datasets. Fig.~\ref{fig:pymunk-emd} (Appendix) shows the variation of EMD against time for different models. In the context window (0-2s), all models have EMD close to 0; however, in the forecast horizon (2-6s), the EMD rises rapidly and irregularly for LatentODE and LatentSDE but does so gradually for NCDSSM-NL\ and NCDSSM-LL. This indicates that the dynamics models learned by NCDSSM-NL\ and NCDSSM-LL\ are both accurate and robust. Qualitatively, both NCDSSM-LL\ and NCDSSM-NL\ correctly impute the missing frames and the forecasts generated by them are similar to ground truth. Fig.~\ref{fig:pymunk-predictions} shows sample predictions for the pong dataset generated by NCDSSM-NL. In contrast, other models only impute the missing frames correctly, failing to generate accurate forecasts (cf.~Appendix~\ref{app:additional-results}). \section{Proofs} \subsection{Proof of Lemma~\ref{lemma:sum-sqrt-factors}} \sumofsqrts* \begin{proof} Our proof is based on \citet[Thm.~3.2]{zonov2019kalman}. Consider the square root factor $${\mathbf{Y}} = \begin{bmatrix}{\mathbf{A}}^{1/2} & {\mathbf{B}}^{1/2}\end{bmatrix}.$$ Clearly, ${\mathbf{C}} = {\mathbf{Y}}\rmY^\top$; however, we also have ${\mathbf{C}} = {\mathbf{Y}}\Theta\Theta^\top{\mathbf{Y}}^\top$, for any orthogonal matrix $\Theta$. 
Thus, ${\mathbf{Y}}\Theta$ is also a square root factor of ${\mathbf{C}}$. Let $\Theta$ be an orthogonal matrix such that \begin{equation} {\mathbf{Y}}\Theta = \begin{bmatrix}{\mathbf{X}} & \mathbf{0}_{n \times n}\end{bmatrix},\label{eq:sum-mat-sqrt-proof-step1} \end{equation} where ${\mathbf{X}}$ is an $n \times n$ lower triangular matrix. This implies that ${\mathbf{X}}$ is a square root factor of ${\mathbf{C}}$. From \eqref{eq:sum-mat-sqrt-proof-step1}, we further have the following, \begin{align} {\mathbf{Y}} &= \begin{bmatrix}{\mathbf{X}} & \mathbf{0}_{n \times n}\end{bmatrix}\Theta^\top,\\ {\mathbf{Y}}^\top &= \Theta\begin{bmatrix}{\mathbf{X}}^\top \\ \mathbf{0}_{n \times n}\end{bmatrix}, \end{align} where we post-multiply by $\Theta^\top$ in the first step and use the fact that $\Theta\Theta^\top = {\mathbf{I}}$, and transpose both sides in the second step. We have thus expressed ${\mathbf{Y}}^\top$ as the product of an orthogonal matrix, $\Theta$, and an upper triangular matrix, $\begin{bmatrix}{\mathbf{X}} & \mathbf{0}_{n \times n}\end{bmatrix}^\top$. Such a factorization can be performed by $\mathrm{QR}$ decomposition. Thus, we can compute the square root factor ${\mathbf{C}}^{1/2} = {\mathbf{X}}$ via the $\mathrm{QR}$ decomposition of $\begin{bmatrix}{\mathbf{A}}^{1/2} & {\mathbf{B}}^{1/2}\end{bmatrix}^\top$. \end{proof} \input{algorithms/sum-mat-sqrts.tex} \section{Technical Details} \label{app:implementation} \subsection{Continuous-Discrete Bayesian Smoothing} \label{app:smoothing} Several approximate smoothing procedures based on Gaussian assumed density approximation have been proposed in the literature. We refer the reader to \citet{sarkka2013gaussian} for an excellent review of continuous-discrete smoothers. In the following, we discuss the \emph{Type II extended RTS smoother} which is linear in the smoothing solution. According to this smoother, the mean, ${\mathbf{m}}^s_t$, and covariance matrix, ${\mathbf{P}}^s_t$, of the Gaussian approximation to the smoothing density, $p_t({\mathbf{z}}_t | {\mathcal{Y}}_T)$, follow the backward ODEs, \begin{subequations} \label{eq:smooth-linearization-approx} \begin{align} \frac{d{\mathbf{m}}^s_t}{dt} &= {\mathbf{f}}({\mathbf{m}}_t, t) + {\mathbf{C}}({\mathbf{m}}_t, t)({\mathbf{m}}^s_t - {\mathbf{m}}_t),\label{eq:mean-smooth}\\ \frac{d\mathbf{P}^s_t}{dt} &= {\mathbf{C}}({\mathbf{m}}_t, t){\mathbf{P}}^s_t + {\mathbf{P}}^s_t{\mathbf{C}}^\top({\mathbf{m}}_t, t) - {\mathbf{D}}({\mathbf{m}}_t, t),\label{eq:cov-smooth} \end{align} \end{subequations} where (${\mathbf{m}}_t$, ${\mathbf{P}}_t$) is the filtering solution given by \eqref{eq:linearization-approx}, ${\mathbf{C}}({\mathbf{m}}_t, t) = {\mathbf{F}}_{{\mathbf{z}}}({\mathbf{m}}_t, t) + {\mathbf{D}}({\mathbf{m}}_t, t)\mathbf{P}^{-1}_t$ and backward means that the ODEs are solved backwards in time from the filtering solution (${\mathbf{m}}^s_T = {\mathbf{m}}_T$, ${\mathbf{P}}^s_T = {\mathbf{P}}_T$). \subsection{Algorithms} \label{app:algorithms} \input{algorithms/filtering.tex} In this section, we discuss the \emph{stable} filtering and smoothing algorithms used in NCDSSM. Algorithm~\ref{alg:sum-mat-sqrts} provides a utility function --- \textsc{SumMatrixSqrts} --- that uses Lemma~\ref{lemma:sum-sqrt-factors} to compute the square root factor of the sum of two matrices with square root factors. The square root factor version of the continuous-discrete Bayesian filtering algorithm is given in Algorithm~\ref{alg:filtering}. 
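To make the use of Lemma~\ref{lemma:sum-sqrt-factors} concrete, the following is a minimal NumPy sketch of the \textsc{SumMatrixSqrts} utility; it is an illustrative re-implementation based on the description above, not the exact code behind Algorithm~\ref{alg:sum-mat-sqrts}.
\begin{verbatim}
import numpy as np

def sum_matrix_sqrts(A_sqrt, B_sqrt):
    # Stack the two square root factors: Y = [A^{1/2}  B^{1/2}] is n x 2n.
    Y = np.concatenate([A_sqrt, B_sqrt], axis=1)
    # QR-decompose Y^T: Y^T = Theta U with U upper triangular, so that
    # C = Y Y^T = U^T U and U^T is a lower-triangular square root factor of C.
    _, U = np.linalg.qr(Y.T)
    return U.T  # square root factor of C = A + B

# Sanity check on random square root factors.
rng = np.random.default_rng(0)
L1, L2 = np.tril(rng.normal(size=(4, 4))), np.tril(rng.normal(size=(4, 4)))
C_sqrt = sum_matrix_sqrts(L1, L2)
assert np.allclose(C_sqrt @ C_sqrt.T, L1 @ L1.T + L2 @ L2.T)
\end{verbatim}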
Note that the \textsc{Predict} step for linear time-invariant dynamics can be performed analytically using matrix exponentials~\citep[Ch.~6]{sarkka2019applied}. We used the analytic solver for some of our experiments. The Type II RTS smoothing algorithm (Algorithm~\ref{alg:smoothing}) takes the filtered distributions as input and computes the smoothed distribution at every filtered timestep. To compute the smoothed distribution between observed timesteps, we cache the filtered distributions at these timesteps and provide them to the \textsc{Smooth} function together with the filtered distributions at observed timesteps. \input{algorithms/smoothing.tex} \subsection{Stable Implementation (Contd.)} \label{app:stable-implementation} \paragraph{Square Root Factor Measurement Update.} In Section~\ref{sec:stable-implementation}, we discussed a square root factor version of the measurement update step via the $\mathrm{QR}$ decomposition of ${\mathbf{A}}^\top$, where, \begin{equation} {\mathbf{A}} = \begin{bmatrix} {\mathbf{R}}^{1/2} & {\mathbf{H}}({\mathbf{P}}_k^{-})^{1/2}\\ \mathbf{0}_{m \times d} & ({\mathbf{P}}_k^{-})^{1/2} \end{bmatrix}. \end{equation} Let $ \Theta, {\mathbf{U}} = \mathrm{QR}({\mathbf{A}}^\top) $, where \begin{equation} {\mathbf{U}} = \begin{bmatrix} {\mathbf{X}} & \mathbf{0}\\ {\mathbf{Y}} & {\mathbf{Z}} \end{bmatrix}^\top. \end{equation} In the following, we show how ${\mathbf{P}}_k^{1/2} = {\mathbf{Z}}$. Our proof is based on \citet{zonov2019kalman} and we refer the reader to \citet[Appendix~A]{zonov2019kalman} for the proof of ${\mathbf{K}}_k = {\mathbf{Y}}{\mathbf{X}}^{-1}$. \begin{proof} Note that ${\mathbf{A}}$ is a square root factor of \begin{equation} \begin{bmatrix} {\mathbf{R}} + {\mathbf{H}}{\mathbf{P}}_k^{-}{\mathbf{H}}^\top & {\mathbf{H}}{\mathbf{P}}_k^{-}\\ ({\mathbf{P}}_k^{-})^\top{\mathbf{H}}^\top & {\mathbf{P}}_k^{-} \end{bmatrix}.\label{eq:sqrt-update-step-1} \end{equation} Matching the terms in (\ref{eq:sqrt-update-step-1}) with the terms in \begin{equation} {\mathbf{U}}\rmU^\top = \begin{bmatrix} {\mathbf{X}}\rmX^\top & {\mathbf{X}}{\mathbf{Y}}^\top \\ {\mathbf{Y}}{\mathbf{X}}^\top & {\mathbf{Y}}\rmY^\top + {\mathbf{Z}}\rmZ^\top \end{bmatrix}, \end{equation} we get the following equations, \begin{subequations} \begin{align} {\mathbf{X}}\rmX^\top &= {\mathbf{R}} + {\mathbf{H}}{\mathbf{P}}_k^{-}{\mathbf{H}}^\top,\\ {\mathbf{X}}{\mathbf{Y}}^\top &= {\mathbf{H}}{\mathbf{P}}_k^{-},\\ {\mathbf{Y}}{\mathbf{X}}^\top &= ({\mathbf{P}}_k^{-})^\top{\mathbf{H}}^\top,\\ {\mathbf{Y}}\rmY^\top + {\mathbf{Z}}\rmZ^\top &= {\mathbf{P}}_k^{-}.\label{eq:sqrt-update-step-2-d} \end{align} \label{eq:sqrt-update-step-2} \end{subequations} From \eqref{eq:sqrt-update-step-2-d}, we have the following, \begin{subequations} \begin{align} {\mathbf{Y}}\rmY^\top + {\mathbf{Z}}\rmZ^\top &= {\mathbf{P}}_k^{-},\\ {\mathbf{Z}}\rmZ^\top &= {\mathbf{P}}_k^{-} - {\mathbf{Y}}\rmY^\top,\\ {\mathbf{Z}}\rmZ^\top &= {\mathbf{P}}_k^{-} - {\mathbf{Y}}({\mathbf{X}}^\top{\mathbf{X}}^{-\top})({\mathbf{X}}^{-1}{\mathbf{X}}){\mathbf{Y}}^\top,\\ {\mathbf{Z}}\rmZ^\top &= {\mathbf{P}}_k^{-} - {\mathbf{Y}}{\mathbf{X}}^\top({\mathbf{X}}\rmX^{\top})^{-1}{\mathbf{X}}{\mathbf{Y}}^\top, \end{align} \end{subequations} where we introduce ${\mathbf{I}} = ({\mathbf{X}}^\top{\mathbf{X}}^{-\top})({\mathbf{X}}^{-1}{\mathbf{X}})$ in the third step and use the property $({\mathbf{X}}\rmX^{\top})^{-1} = {\mathbf{X}}^{-\top}{\mathbf{X}}^{-1}$ in the last step. 
Substituting values from \eqref{eq:sqrt-update-step-2}, we get, \begin{subequations} \begin{align} {\mathbf{Z}}\rmZ^\top &= {\mathbf{P}}_k^{-} - ({\mathbf{P}}_k^{-})^\top{\mathbf{H}}^\top{\mathbf{S}}_k^{-1}{\mathbf{H}}{\mathbf{P}}_k^{-}\\ {\mathbf{Z}}\rmZ^\top &= {\mathbf{P}}_k^{-} - ({\mathbf{P}}_k^{-})^\top{\mathbf{H}}^\top{\mathbf{S}}_k^{-1}({\mathbf{S}}_k{\mathbf{S}}_k^{-1}){\mathbf{H}}{\mathbf{P}}_k^{-}\\ {\mathbf{Z}}\rmZ^\top &= {\mathbf{P}}_k^{-} - {\mathbf{P}}_k^{-}{\mathbf{H}}^\top{\mathbf{S}}_k^{-1}{\mathbf{S}}_k{\mathbf{S}}_k^{-\top}{\mathbf{H}}({\mathbf{P}}_k^{-})^\top\\ {\mathbf{Z}}\rmZ^\top &= {\mathbf{P}}_k^{-} - {\mathbf{K}}_k{\mathbf{S}}_k{\mathbf{K}}_k^\top \end{align} \end{subequations} where we introduce ${\mathbf{I}} = {\mathbf{S}}_k{\mathbf{S}}_k^{-1}$ in the second step, use the fact that ${\mathbf{S}}_k$ is symmetric in the third step and substitute the value of ${\mathbf{K}}_k$ from \eqref{eq:kalman-gain} in last step. Note that $ {\mathbf{Z}}\rmZ^\top = {\mathbf{P}}_k^{-} - {\mathbf{K}}_k{\mathbf{S}}_k{\mathbf{K}}_k^\top = {\mathbf{P}}_k$; therefore, ${\mathbf{Z}} = {\mathbf{P}}_k^{1/2}$. \end{proof} \paragraph{Regularizing Non-Linear Dynamics.} We now discuss the techniques we employed to regularize the latent dynamics in NCDSSM. Particularly in the case of non-linear dynamics (NCDSSM-NL), regularization is critical for stable training. The drift function, ${\mathbf{f}}$, was parameterized by an MLP in all our experiments. We experimented with the $\tanh$ and $\mathrm{softplus}$ non-linearities. We found that applying the non-linearity after the last layer was important when using $\tanh$. Furthermore, we also initialized the parameters of the last layer to $0$ when using $\tanh$. In the case of experiments with a large time interval (e.g., MoCap and USHCN), application of spectral normalization~\citep{miyato2018spectral} along with the $\mathrm{softplus}$ non-linearity proved critical for stable training. In the following, we present our hypothesis on why spectral normalization stabilizes training. According to \citet[Section~5.2]{oksendal2003stochastic}, one of the conditions for the existence of a unique solution of an SDE is the Lipschitz continuity of the drift function, ${\mathbf{f}}$. Applying spectral normalization regularizes the neural network to be 1-Lipschitz, aiding its solvability using numerical methods. However, spectral normalization is even more important in the case of NCDSSM\ from a practical perspective --- it prevents the numerical explosion of the elements of $\bm{\Phi}_t$ in the prediction step \eqrefp{eq:fundamental-ode}, as discussed below. Consider the case of a fixed Jacobian matrix ${\mathbf{F}}_{\mathbf{z}}$ in an interval $[t_1, t_2]$. In this case, the solution of \eqref{eq:fundamental-ode} is given by \begin{equation} \bm{\Phi}_{t_2} = \exp{({\mathbf{F}}_{\mathbf{z}}(t_2 - t_1))}\bm{\Phi}_{t_1}, \end{equation} where $\exp{({\mathbf{F}}_{\mathbf{z}}(t_2 - t_1))}$ denotes the matrix exponential. For unregularized drifts, the elements of $\exp{({\mathbf{F}}_{\mathbf{z}}(t_2 - t_1))}$ can become arbitrarily large. However, in the case of 1-Lipschitz drift functions (as provided by spectral normalization), the spectral norm of $\exp{({\mathbf{F}}_{\mathbf{z}})}$ is bounded by $\exp(1)$, as shown in Lemma~\ref{lemma:lip-mat-exp}. This controls the growth rate of the elements of fundamental matrix, $\bm{\Phi}_t$. 
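As an illustration of this recipe, a minimal PyTorch sketch of a spectrally normalized drift MLP with the $\mathrm{softplus}$ non-linearity is shown below; the latent dimension and layer width are placeholders, and the drift networks actually used in our experiments are listed in Appendix~\ref{app:exp-details}.
\begin{verbatim}
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

m = 10  # latent state dimension (placeholder)

# Spectral normalization on every linear layer, combined with the 1-Lipschitz
# softplus activation, keeps the drift function 1-Lipschitz.
drift = nn.Sequential(
    spectral_norm(nn.Linear(m, 64)),
    nn.Softplus(),
    spectral_norm(nn.Linear(64, m)),
)

z = torch.randn(1, m)
dz_dt = drift(z)  # drift evaluated at a latent state
\end{verbatim}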
\begin{lemma} \label{lemma:lip-mat-exp} Let ${\mathbf{g}}: \mathbb{R}^m \to \mathbb{R}^m$ be a 1-Lipschitz function and ${\mathbf{J}}_{{\mathbf{g}}}: \mathbb{R}^m \to \mathbb{R}^{m \times m}$ be its Jacobian function. Then, $\|\exp({\mathbf{J}}_{{\mathbf{g}}}({\mathbf{z}}))\|_2 \leq \exp(1)~\forall~{\mathbf{z}} \in \mathbb{R}^m$ where $\|\cdot\|_2$ denotes the spectral norm of a matrix. \end{lemma} \begin{proof} The spectral norm of the Jacobian of a $K$-Lipschitz function is bounded by $K$. Thus, we have, \begin{equation} \|{\mathbf{J}}_{{\mathbf{g}}}({\mathbf{z}})\|_2 \leq 1~\forall~{\mathbf{z}} \in \mathbb{R}^m.\label{eq:lip-jac-bound} \end{equation} Using the power series representation of the matrix exponential, $$ \exp({\mathbf{A}}) = \sum_{k=0}^\infty \frac{{\mathbf{A}}^k}{k!}, $$ we get the following bound on $\|\exp({\mathbf{J}}_{{\mathbf{g}}}({\mathbf{z}}))\|_2$, \begin{equation} \|\exp({\mathbf{J}}_{{\mathbf{g}}}({\mathbf{z}}))\|_2 \leq \sum_{k=0}^\infty \left\|\frac{{\mathbf{J}}_{{\mathbf{g}}}({\mathbf{z}})^k}{k!}\right\|_2 \leq \sum_{k=0}^\infty \frac{\left\|{\mathbf{J}}_{{\mathbf{g}}}({\mathbf{z}})\right\|_2^k}{k!} = \exp(\left\|{\mathbf{J}}_{{\mathbf{g}}}({\mathbf{z}})\right\|_2).\label{eq:mat-exp-spectral-bound} \end{equation} Combining \eqref{eq:mat-exp-spectral-bound} with \eqref{eq:lip-jac-bound}, we get, \begin{equation} \|\exp({\mathbf{J}}_{{\mathbf{g}}}({\mathbf{z}}))\|_2 \leq \exp(\left\|{\mathbf{J}}_{{\mathbf{g}}}({\mathbf{z}})\right\|_2) \leq \exp(1), \end{equation} which completes the proof. \end{proof} For the same reasons as discussed above, we initialized the transition matrices in our linear models to be random orthogonal matrices as orthogonal matrices have spectral norm equal to 1. However, in the case of NCDSSM-LTI\ on the USHCN dataset, this initialization was not sufficient during the initial phase of training and we used a random skew-symmetric matrix instead. The matrix exponential of a skew-symmetric matrix is an orthogonal matrix. We generated a random skew-symmetric matrix as follows, \begin{align*} {\mathbf{F}} &\sim \left[{\mathcal{N}}(0, 1)\right]^{m \times m},\\ {\mathbf{F}} &= \left(\frac{{\mathbf{F}} - {\mathbf{F}}^\top}{2}\right). \end{align*} \paragraph{Fixed Measurement Matrix.} We used a fixed rectangular identity matrix as the auxiliary measurement matrix (${\mathbf{H}}$ in Eq.~\ref{eq:auxiliary-emission}) in our bouncing ball, damped pendulum and CMU MoCap (walking) experiments as it lead to improved learning of dynamics. This parameterization forces the model to learn the static (e.g., position) and dynamic (e.g., velocity) components in separate elements of the latent state, thereby disentangling them~\citep{klushyn2021latent}. \subsection{Imputation and Forecasting} In this section, we describe how to perform imputation and forecasting using a trained NCDSSM. For imputation, the timesteps at which imputation is to be performed are provided to the \textsc{Filter} function during filtering. The filtered distributions are then passed to the \textsc{Smooth} function and (imputed) samples are drawn from the smoothed distributions. For forecasting, filtering is first performed over the context time series. The \textsc{Predict} function is then used up to end of the forecast horizon, starting from the last filtered distribution. Sample forecast trajectories are then drawn from these predicted distributions. 
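Before moving on to the experiment details, the following small NumPy/SciPy snippet numerically illustrates two facts used above: the bound of Lemma~\ref{lemma:lip-mat-exp} for a matrix rescaled to unit spectral norm, and the orthogonality of the matrix exponential of a random skew-symmetric matrix. This is an added sanity check, not part of the original derivations.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
m = 10

# Unit spectral norm implies ||exp(J)||_2 <= exp(1) (cf. Lemma above).
J = rng.normal(size=(m, m))
J = J / np.linalg.norm(J, ord=2)
assert np.linalg.norm(expm(J), ord=2) <= np.exp(1) + 1e-9

# The matrix exponential of a skew-symmetric matrix is orthogonal.
F = rng.normal(size=(m, m))
F = 0.5 * (F - F.T)
Q = expm(F)
assert np.allclose(Q @ Q.T, np.eye(m))
\end{verbatim}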
\section{Experiment Details} \label{app:exp-details} \subsection{Datasets} \label{app:datasets} \paragraph{Bouncing Ball and Damped Pendulum.} The bouncing ball dataset comprises univariate time series of the position of a ball bouncing between two fixed walls, in the absence of dissipative forces. The initial position, $x_0$, and velocity, $v_0$, of the ball are chosen at random, as follows, \begin{align} x_0 &\sim {\mathcal{U}}(-1, 1),\\ v_0 &\sim {\mathcal{U}}(0.05, 0.5) \times {\mathcal{U}}\{-1, 1\}, \end{align} where ${\mathcal{U}}(a, b)$ denotes a uniform distribution on $(a, b)$ and ${\mathcal{U}}\{c_1, \dots, c_k\}$ denotes a uniform categorical distribution on $\{c_1, \dots, c_k\}$. The observed position, $y_k$, is a corrupted version of the true position, $x_k$, \begin{align} y_k \sim {\mathcal{N}}(x_k, 0.05^2). \end{align} Collisions with the walls, located at $-1$ and $+1$, are assumed to be perfectly elastic, i.e., the sign of the velocity gets flipped when the ball hits either of the walls. Thus, the ball exhibits piecewise-linear dynamics. We used the Euler integrator with a step size of 0.1s to simulate the dynamics. The training, validation, and test datasets consist of 5000, 500, and 500 sequences of length 30s each, respectively. The damped pendulum dataset~\citep{karl2016deep,kurle2020deep} comprises bivariate time series of the XY-coordinates of a pendulum oscillating in the presence of a damping force. The non-linear latent dynamics of this dataset is given by, \begin{align} \frac{d\theta_t}{dt} &= \omega_t,\\ \frac{d\omega_t}{dt} &= -\frac{g}{l}\sin(\theta_t) -\frac{\gamma}{m}\omega_t, \end{align} where $\theta_t$ and $\omega_t$ are the angle and angular velocity, respectively, and $g=9.81$, $l=1$, $m=1$, and $\gamma=0.25$ are the acceleration due to gravity, the length of the massless cord of the pendulum, the mass of the pendulum bob, and the damping coefficient, respectively. The initial angle, $\theta_0$, and angular velocity, $\omega_0$, of the pendulum are chosen at random, as follows, \begin{align} \theta_0 &= \pi + \mathrm{clip}\left({\epsilon}, -2, 2\right),\\ \omega_0 &= 4\times\mathrm{clip}\left({\epsilon}, -2, 2\right), \end{align} where ${\epsilon} \sim {\mathcal{N}}(0, 1)$ and $\mathrm{clip}(x, a, b)$ denotes clipping the value of $x$ between $a$ and $b$. The observations are Cartesian coordinates of the pendulum's bob with additive Gaussian noise, ${\mathcal{N}}(0, 0.05^2)$. We used the RK4 integrator to simulate the latent dynamics with a step size of 0.1s. The training, validation, and test datasets consist of 5000, 1000, and 1000 sequences of length 15s each, respectively. \paragraph{CMU Motion Capture (Walking).} The CMU Motion Capture database\footnote{The original CMU MoCap database is available at: \url{http://mocap.cs.cmu.edu}.} comprises time series of joint angles of human subjects performing everyday activities, e.g., walking, running, and dancing. We used walking sequences of subject 35 from this database for our experiments. A preprocessed version of this dataset from \citet{yildiz2019ode2vae} consists of 23 50-dimensional sequences of 300 timesteps each, split into 16 training, 3 validation and 4 test sequences. 
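To make the data-generation recipes above concrete, the following is a minimal NumPy sketch of the damped pendulum simulator (RK4 with a 0.1s step and additive Gaussian observation noise, as described). The Cartesian convention for the bob's position and the use of independent clipped normal draws for the initial condition are assumptions made here for illustration; the actual data-generation scripts may differ in such details.
\begin{verbatim}
import numpy as np

g, l, m, gamma, dt = 9.81, 1.0, 1.0, 0.25, 0.1
rng = np.random.default_rng(0)

def drift(state):
    theta, omega = state
    return np.array([omega, -(g / l) * np.sin(theta) - (gamma / m) * omega])

def rk4_step(state):
    k1 = drift(state)
    k2 = drift(state + 0.5 * dt * k1)
    k3 = drift(state + 0.5 * dt * k2)
    k4 = drift(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Random initial angle and angular velocity (independent draws assumed).
theta0 = np.pi + np.clip(rng.normal(), -2, 2)
omega0 = 4.0 * np.clip(rng.normal(), -2, 2)
state = np.array([theta0, omega0])

observations = []
for _ in range(150):  # 15s at a 0.1s discretization
    xy = np.array([np.sin(state[0]), np.cos(state[0])])  # one possible convention
    observations.append(xy + rng.normal(scale=0.05, size=2))
    state = rk4_step(state)
observations = np.array(observations)  # shape (150, 2)
\end{verbatim}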
\paragraph{USHCN Climate Indicators.} The USHCN Climate dataset\footnote{The original USHCN Climate dataset is available at: \url{https://cdiac.ess-dive.lbl.gov/ftp/ushcn_daily/}.} consists of measurements of five climate indicators --- precipitation, snowfall, snow depth, minimum temperature, and maximum temperature --- across the United States. The preprocessed version of this dataset from \citet{de2019gru} contains sporadic time series from 1,114 meteorological stations with a total of 386,068 unique observations over 4 years, between 1996 and 2000. The timestamps are scaled to lie in $[0, 200]$. The 1,114 stations are split into 5 folds of 70\% training, 20\% validation, and 10\% test stations, respectively. \paragraph{Pymunk Physical Environments.} The Pymunk physical environments datasets are video datasets of physical environments simulated using the Pymunk Physics engine. We used two environments proposed in \citet{fraccaro2017disentangled}: Box and Pong. Each frame of these videos is a 32 $\times$ 32 binary image. The Box dataset consists of videos of a ball moving inside a 2-dimensional box with perfectly elastic collisions with the walls of the box. The Pong dataset consists of videos of a Pong-like environment with a ball and two paddles that move to keep the ball inside the frame. Both datasets consist of 5000 training, 100 validation, and 1000 test videos with 60 frames each. We refer the reader to \citet{fraccaro2017disentangled} for further details on how these datasets are generated\footnote{The scripts for generating Pymunk datasets are available at: \url{https://github.com/simonkamronn/kvae}.}. \subsection{Training and Evaluation Setups} \paragraph{Bouncing Ball and Damped Pendulum.} We trained all the models on the first 10s/5s of the sequences (i.e., 100/50 steps) from the training dataset for the bouncing ball/damped pendulum datasets. We randomly dropped 30\%, 50\%, and 80\% of the training steps for the missing-data experiments. For evaluation, we report the MSE over the missing (for imputation) and the next 200/100 timesteps (for forecast) for the bouncing ball/damped pendulum test datasets. The MSE was averaged over 5 independent runs for 50 sample trajectories. \paragraph{CMU Motion Capture (Walking).} For Setup~1, we trained NCDSSM\ models on complete 300-timestep sequences from the training set. During test time, we evaluated the predictive performance on the next 297 steps with a context of the first 3 steps from the test set. For Setup~2, we trained the models on the first 200 timesteps from sequences in the training set. During test time, we provided the models with a context of the first 100 timesteps from sequences in the test set and evaluated their performance on the next 200 timesteps. We report the MSE averaged over 50 sample trajectories together with 95\% prediction interval based on the $t$-statistic for a single run, as reported in prior works. \paragraph{USHCN Climate Indicators.} We trained NCDSSM\ models under the same setup as \citet{de2019gru} using 4 years of observations from the training stations. During test time, we provided the models with the first 3 years of observations from the test set as context and evaluated their performance on the accuracy of the next 3 measurements. The MSE was computed between the mean of 50 sample forecast trajectories (simulating a point forecast) and the ground truth, averaged over the 5 folds. 
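For clarity, the point-forecast MSE used above amounts to a few lines; the array names in this sketch are placeholders.
\begin{verbatim}
import numpy as np

def point_forecast_mse(sample_forecasts, ground_truth):
    # sample_forecasts: (num_samples, T, d); ground_truth: (T, d)
    point_forecast = sample_forecasts.mean(axis=0)  # mean trajectory as point forecast
    return float(np.mean((point_forecast - ground_truth) ** 2))
\end{verbatim}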
\paragraph{Pymunk Physical Environments.} We trained the models on the first 20 frames of the videos from the training dataset with 20\% of the frames randomly dropped. During test time, we provided the models with a context of 20 frames and evaluated the forecast performance on the next 40 frames. We report the EMD between the predicted and the ground truth frames, averaged over 16 sample trajectories. The EMD was computed using the \texttt{ot.emd2} function from the Python Optimal Transport (POT) library~\citep{flamary2021pot} with the \texttt{euclidean} metric as the cost function. \subsection{Experiment Configurations} We ran all our experiments on 2 machines with 1 Tesla T4 GPU, 16 CPUs, and 64 GB of memory each. In this section, we report training and hyperparameter configurations used in our experiments. We optimized all models using the Adam optimizer with a learning rate of 0.01 for all the datasets except Pymunk physical environments where we used 0.002. We reduced the learning rate exponentially with a decay rate of 0.9 every 500 steps for the bouncing ball, damped pendulum, and CMU MoCap (walking) datasets, every 100 steps for the USHCN climate dataset, and every 3000 steps for the Pymunk physical environments datasets. We trained the models for 5K, 2K, 2.5K, 150, and 100K steps with a batch size of 50, 64, 16, 100, and 32 for the bouncing ball, damped pendulum, CMU MoCap (walking), USHCN climate indicators, and Pymunk physical environments, respectively. For NCDSSM\ models, we used the following auxiliary inference and emission networks for each dataset: \begin{itemize}[noitemsep,nolistsep] \item \textbf{Bouncing Ball, Damped Pendulum, and USHCN Climate Indicators} \begin{itemize}[noitemsep,nolistsep] \item Auxiliary inference network: \texttt{Identity()} \item Emission network: \texttt{Identity()} \end{itemize} \item \textbf{CMU Motion Capture (Walking)} \begin{itemize}[noitemsep,nolistsep] \item Auxiliary inference network: \texttt{Input(d) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear (2$\times$h)} \item Emission network: \texttt{Input(h) $\rightarrow$ 2$\times$[Linear(30) $\rightarrow$ Softplus()] $\rightarrow$ Linear (d)} \end{itemize} \item \textbf{Pymunk Physical Environments} \begin{itemize}[noitemsep,nolistsep] \item Auxiliary inference network: \texttt{Input(1, 32, 32) $\rightarrow$ ZeroPad2d(padding=[0, 1, 0, 1]) $\rightarrow$ Conv2d(1, 32, kernel\_size=3, stride=2) $\rightarrow$ ReLU() $\rightarrow$ 2$\times$[ZeroPad2d(padding=[0, 1, 0, 1]) $\rightarrow$ Conv2d(32, 32, kernel\_size=3, stride=2) $\rightarrow$ ReLU()] $\rightarrow$ Flatten $\rightarrow$ Linear(64) $\rightarrow$ Linear(2$\times$h)} \item Emission network: \texttt{Input(h) $\rightarrow$ Linear(512) $\rightarrow$ 3$\times$[Conv2d(32, 128, kernel\_size=3, stride=1, padding=1) $\rightarrow$ ReLU() $\rightarrow$ PixelShuffle(upscale\_factor=2)] $\rightarrow$ Conv2d(32, 1, kernel\_size=1, stride=1)} \end{itemize} \end{itemize} To ensure good initial estimation of auxiliary variables, we did not update the underlying SSM parameters for the first 100 and 1000 training steps for the CMU MoCap (walking) and Pymunk physical environments datasets, respectively. In the following, we list specific experiment configurations for individual experiments. \subsubsection{LatentODE} We used the RK4 ODE solver to integrate the encoder and drift ODEs with a step size of 0.05 for all datasets. 
\begin{itemize}[noitemsep,nolistsep] \item\textbf{Bouncing Ball} \begin{itemize}[noitemsep,nolistsep] \item Dimension of latent state: 6 \item Dimension of observations: 1 \item Encoder network: ODEGRU with a \texttt{GRUCell(hidden\_units=10)} and ODE drift function \texttt{Input(10) $\rightarrow$ Linear(30) $\rightarrow$ Tanh() $\rightarrow$ Linear(10)} \item Decoder network: \texttt{Input(6) $\rightarrow$ Linear(10) $\rightarrow$ Softplus() $\rightarrow$ Linear(1)} \item ODE drift function: \texttt{Input(6) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(6)} \end{itemize} \item\textbf{Damped Pendulum} \begin{itemize}[noitemsep,nolistsep] \item Dimension of latent state: 6 \item Dimension of observations: 2 \item Encoder network: ODEGRU with a \texttt{GRUCell(hidden\_units=10)} and ODE drift function \texttt{Input(10) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(10)} \item Decoder network: \texttt{Input(6) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(2)} \item ODE drift function: \texttt{Input(6) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(6)} \end{itemize} \item\textbf{CMU Motion Capture (Walking)} \begin{itemize}[noitemsep,nolistsep] \item Dimension of latent state: 10 \item Dimension of observations: 50 \item Encoder network: ODEGRU with a \texttt{GRUCell(hidden\_units=30)} and ODE drift function \texttt{Input(30) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(30)} \item Decoder network: \texttt{Input(10) $\rightarrow$ 2$\times$[Linear(30) $\rightarrow$ Softplus()] $\rightarrow$ Linear(50)} \item ODE drift function: \texttt{Input(10) $\rightarrow$ Linear(30) $\rightarrow$ Softplus() $\rightarrow$ Linear(10)} \end{itemize} \item\textbf{Pymunk Physical Environments} \begin{itemize}[noitemsep,nolistsep] \item Dimension of latent state: 10 \item Dimension of observations: 1024 \item Encoder network: Same CNN encoder base as in the auxiliary inference network in NCDSSM\ models and ODEGRU with a \texttt{GRUCell(hidden\_units=64)} and ODE drift function \texttt{Input(64) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(64)} \item Decoder network: Same CNN decoder as in the emission network in NCDSSM\ models \item ODE drift function: \texttt{Input(10) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(10)} \end{itemize} \end{itemize} \subsubsection{LatentSDE} For LatentSDE experiments, we additionally annealed the KL term in the objective function with a linear annealing schedule from 0 to 1 over 500 steps for all datasets except Pymunk physical environments for which we annealed over 1000 steps. As proposed in \citet{li2020scalable}, we also provided the posterior SDEs with an additional context vector from the encoder to incorporate information from later observations. We used the RK4 ODE solver to integrate the encoder ODEs and the Euler-Maruyama SDE solver to integrate the prior/posterior SDEs with a step size of 0.05 for all datasets. 
\begin{itemize}[noitemsep,nolistsep] \item\textbf{Bouncing Ball} \begin{itemize}[noitemsep,nolistsep] \item Dimension of latent state: 6 \item Dimension of context vector: 3 \item Dimension of observations: 1 \item Encoder network: ODEGRU with a \texttt{GRUCell(hidden\_units=10)} and ODE drift function \texttt{Input(10) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(10)} \item Decoder network: \texttt{Input(6) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(1)} \item Posterior SDE drift function: \texttt{Input(6+3) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(6)} \item Prior SDE drift function: \texttt{Input(6) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(6)} \item Posterior/Prior SDE diffusion function: \texttt{6$\times$[Input(1) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(1)]} \end{itemize} \item\textbf{Damped Pendulum} \begin{itemize}[noitemsep,nolistsep] \item Dimension of latent state: 6 \item Dimension of context vector: 3 \item Dimension of observations: 2 \item Encoder network: ODEGRU with a \texttt{GRUCell(hidden\_units=10)} and ODE drift function \texttt{Input(10) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(10)} \item Decoder network: \texttt{Input(6) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(2)} \item Posterior SDE drift function: \texttt{Input(6+3) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(6)} \item Prior SDE drift function: \texttt{Input(6) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(6)} \item Posterior/Prior SDE diffusion function: \texttt{6$\times$[Input(1) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(1)]} \end{itemize} \item\textbf{CMU Motion Capture (Walking)} \begin{itemize}[noitemsep,nolistsep] \item Dimension of latent state: 10 \item Dimension of context vector: 3 \item Dimension of observations: 50 \item Encoder network: ODEGRU with a \texttt{GRUCell(hidden\_units=30)} and ODE drift function \texttt{Input(30) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(30)} \item Decoder network: \texttt{Input(10) $\rightarrow$ 2$\times$[Linear(30) $\rightarrow$ Softplus()] $\rightarrow$ Linear(50)} \item Posterior SDE drift function: \texttt{Input(10+3) $\rightarrow$ Linear(30) $\rightarrow$ Softplus() $\rightarrow$ Linear(10)} \item Prior SDE drift function: \texttt{Input(10) $\rightarrow$ Linear(30) $\rightarrow$ Softplus() $\rightarrow$ Linear(10)} \item Posterior/Prior SDE diffusion function: \texttt{10$\times$[Input(1) $\rightarrow$ Linear(30) $\rightarrow$ Softplus() $\rightarrow$ Linear(1)]} \end{itemize} \item\textbf{Pymunk Physical Environments} \begin{itemize}[noitemsep,nolistsep] \item Dimension of latent state: 10 \item Dimension of context vector: 4 \item Dimension of observations: 1024 \item Encoder network: Same CNN encoder base as in the auxiliary inference network in NCDSSM\ models and ODEGRU with a \texttt{GRUCell(hidden\_units=64)} and ODE drift function \texttt{Input(64) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(64)} \item Decoder network: Same CNN decoder as in the emission network in NCDSSM\ models \item Posterior SDE drift function: \texttt{Input(10+4) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(10) $\rightarrow$ Tanh()} \item Prior SDE drift function: \texttt{Input(10) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(10) 
$\rightarrow$ Tanh()} \item Posterior/Prior SDE diffusion function: \texttt{10$\times$[Input(1) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(1)]} \end{itemize} \end{itemize} \subsubsection{NCDSSM-LTI} \begin{itemize}[noitemsep,nolistsep] \item \textbf{Bouncing Ball} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 6 \item Dimension of auxiliary variables ($h$): 1 \item Dimension of observations ($d$): 1 \item Integrator: Analytic \end{itemize} \item \textbf{Damped Pendulum} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 6 \item Dimension of auxiliary variables ($h$): 2 \item Dimension of observations ($d$): 2 \item Integrator: Analytic \end{itemize} \item \textbf{CMU Motion Capture (Walking)} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 10 \item Dimension of auxiliary variables ($h$): 6 \item Dimension of observations ($d$): 50 \item Integrator: Analytic \end{itemize} \item \textbf{USHCN Climate Indicators} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 10 \item Dimension of auxiliary variables ($h$): 5 \item Dimension of observations ($d$): 5 \item Integrator: Euler with step size 0.1 \end{itemize} \item \textbf{Pymunk Physical Environments} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 10 \item Dimension of auxiliary variables ($h$): 4 \item Dimension of observations ($d$): 1024 \item Integrator: RK4 with step size 0.05 \end{itemize} \end{itemize} \subsubsection{NCDSSM-NL} We set the diffusion function to ${\mathbf{G}}(\cdot, t) = {\mathbf{I}}$ for all datasets. \begin{itemize}[noitemsep,nolistsep] \item\textbf{Bouncing Ball} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 6 \item Dimension of auxiliary variables ($h$): 1 \item Dimension of observations ($d$): 1 \item Drift function (${\mathbf{f}}$): \texttt{Input(m) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(m)} \item Integrator: RK4 with step size 0.05 \end{itemize} \item\textbf{Damped Pendulum} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 6 \item Dimension of auxiliary variables ($h$): 2 \item Dimension of observations ($d$): 2 \item Drift function (${\mathbf{f}}$): \texttt{Input(m) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(m)} \item Integrator: RK4 with step size 0.05 \end{itemize} \item\textbf{CMU Motion Capture (Walking)} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 10 \item Dimension of auxiliary variables ($h$): 6 \item Dimension of observations ($d$): 50 \item Drift function (${\mathbf{f}}$): \texttt{Input(m) -> SN(Linear(30)) -> Softplus() -> SN(Linear(m))} \item Integrator: RK4 with step size 0.05 \end{itemize} \item\textbf{USHCN Climate Indicators} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 10 \item Dimension of auxiliary variables ($h$): 5 \item Dimension of observations ($d$): 5 \item Drift function (${\mathbf{f}}$): \texttt{Input(m) $\rightarrow$ SN(Linear(64)) $\rightarrow$ Softplus() $\rightarrow$ SN(Linear(m))} \item Integrator: Euler with step size 0.1 \end{itemize} \item\textbf{Pymunk Physical Environments} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 10 \item Dimension of auxiliary variables ($h$): 4 \item Dimension of observations ($d$): 1024 \item Drift function (${\mathbf{f}}$): \texttt{Input(m) $\rightarrow$ Linear(64) $\rightarrow$ Tanh() $\rightarrow$ Linear(m) $\rightarrow$ Tanh()} \item Integrator: RK4 
with step size 0.05 \end{itemize} \end{itemize} \subsubsection{NCDSSM-LL} We set the $\alpha$-network to \texttt{Input(m) $\rightarrow$ Linear(64) $\rightarrow$ Softplus() $\rightarrow$ Linear(K)} for all datasets. \begin{itemize}[noitemsep,nolistsep] \item\textbf{Bouncing Ball} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 6 \item Dimension of auxiliary variables ($h$): 1 \item Dimension of observations ($d$): 1 \item Number of base matrices ($K$): 5 \item Integrator: RK4 with step size 0.05 \end{itemize} \item\textbf{Damped Pendulum} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 6 \item Dimension of auxiliary variables ($h$): 2 \item Dimension of observations ($d$): 2 \item Number of base matrices ($K$): 5 \item Integrator: RK4 with step size 0.05 \end{itemize} \item\textbf{CMU Motion Capture (Walking)} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 10 \item Dimension of auxiliary variables ($h$): 6 \item Dimension of observations ($d$): 50 \item Integrator: RK4 with step size 0.05 \end{itemize} \item\textbf{USHCN Climate Indicators} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 10 \item Dimension of auxiliary variables ($h$): 5 \item Dimension of observations ($d$): 5 \item Number of base matrices ($K$): 10 \item Integrator: Euler with step size 0.1 \end{itemize} \item\textbf{Pymunk Physical Environments} \begin{itemize}[noitemsep,nolistsep] \item Dimension of state ($m$): 10 \item Dimension of auxiliary variables ($h$): 4 \item Dimension of observations ($d$): 1024 \item Number of base matrices ($K$): 10 \item Integrator: RK4 with step size 0.05 \end{itemize} \end{itemize} \section{Additional Results} \label{app:additional-results} Table~\ref{tab:parameter-comparison} shows the number of trainable parameters in each model for different experiments. NCDSSM\ models obtain better performance on every dataset with significantly fewer parameters. Table~\ref{tab:ols-pendulum} shows the goodness-of-fit coefficient ($R^2$) for ordinary least squares regression with the latent states as features, and the ground truth angle and angular velocity as targets. NCDSSM-NL\ and NCDSSM-LL\ models obtain a high $R^2$ coefficient showing that the latent states learned by these models are informative about the true latent state (angle and angular velocity). Figs.~\ref{fig:bb-all-preds} and \ref{fig:pendulum-all-preds} show sample predictions from the \emph{best run} of each model for different missing data settings on the bouncing ball and the damped pendulum datasets, respectively. For the bouncing ball experiment, both LatentODE and LatentSDE learn that the dataset exhibits a zig-zag pattern but are unable to accurately extrapolate it beyond the training context. In the case of damped pendulum, LatentODE and LatentSDE perform well on the low missing data settings (0\% and 30\%) but completely fail on the more challenging settings of 50\% and 80\% missing data. In contrast, NCDSSM-NL\ and NCDSSM-LL\ generate accurate predictions across datasets and missing data settings. Furthermore, while the predictions shown in Figs.~\ref{fig:bb-all-preds} and \ref{fig:pendulum-all-preds} are from the best performing runs of each model, they represent a typical run for NCDSSM-NL\ and NCDSSM-LL. On the other hand, the prediction quality from LatentODE and LatentSDE models varies significantly across random initializations. 
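For reference, the $R^2$ probe reported in Table~\ref{tab:ols-pendulum} can be computed with a few lines of scikit-learn. The sketch below uses hypothetical array names (\texttt{latents}, \texttt{theta}, \texttt{omega}) and is not the exact evaluation script.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression

def ols_r2(latents, target):
    # R^2 of an OLS fit from latent states (N, m) to a scalar target (N,).
    return LinearRegression().fit(latents, target).score(latents, target)

# r2_sin   = ols_r2(latents, np.sin(theta))   # angle in polar form
# r2_cos   = ols_r2(latents, np.cos(theta))
# r2_omega = ols_r2(latents, omega)           # angular velocity
\end{verbatim}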
Fig.~\ref{fig:pymunk-emd} shows the variation of the EMD with time for different models on the box and pong datasets. All models have EMD close to 0 in the context window from 0-2s; however, in the forecast horizon from 2-6s, the EMD rises gradually for NCDSSM-NL\ and NCDSSM-LL\ but rapidly and irregularly for other models. Figs.~\ref{fig:box-all-preds} and \ref{fig:pong-all-preds} show sample predictions from different models on the box and the pong datasets, respectively. NCDSSM-NL\ and NCDSSM-LL\ generate accurate predictions whereas LatentODE and LatentSDE perform significantly worse. \begin{table}[htb] \footnotesize \centering \vspace{-1em} \caption{The number of trainable parameters in every model for different experiments.} \label{tab:parameter-comparison} \begin{tabular}{lrrrrr} \toprule \multirow{2}{*}{Model} & \multicolumn{5}{c}{Number of Parameters}\\ \cmidrule{2-6} & Bouncing Ball & Damped Pendulum & MoCap Walking (Setup 2) & USHCN & Pymunk Environments\\ \midrule LatentODE & 2094 & 3336 & 15454 & -- & 204243 \\ LatentSDE & 5461 & 5557 & 17187 & -- & 208043 \\ \midrule NCDSSM-LTI\ & 63 & 72 & 11080 & 185 & 165911 \\ NCDSSM-NL\ & 859 & 862 & 11620 & 1439 & 167165 \\ NCDSSM-LL\ & 974 & 977 & 12509 & 2439 & 168165 \\ \bottomrule \end{tabular} \end{table} \begin{table}[htb] \footnotesize \centering \vspace{-1em} \caption{Goodness-of-fit coefficient ($R^2$) of ordinary least squares (OLS) regression for the \emph{best run} of each model on the Pendulum dataset. The latent states are treated as features and ground truth angle --- transformed into polar coordinates: $\sin(\text{angle})/\cos(\text{angle})$ --- and angular velocity as targets.} \label{tab:ols-pendulum} \begin{tabular}{lcccccccc} \toprule \multirow{2}{*}{Model} & \multicolumn{4}{c}{$\sin(\text{angle})/\cos(\text{angle})$ $R^2$ ($\uparrow$) (\% Missing)} & \multicolumn{4}{c}{Angular Velocity $R^2$ ($\uparrow$) (\% Missing)}\\ \cmidrule(lr){2-5} \cmidrule(lr){6-9} & 0\% & 30\% & 50\% & 80\% & 0\% & 30\% & 50\% & 80\% \\ \midrule LatentODE & 0.000 / 0.802 & 0.000 / 0.735 & 0.000 / 0.744 & 0.000 / 0.626 & 0.001 & 0.000 & 0.000 & 0.000 \\ LatentSDE & 0.953 / 0.960 & 0.918 / 0.957 & 0.000 / 0.817 & 0.000 / 0.513 & 0.970 & 0.962 & 0.001 & 0.000 \\ \midrule NCDSSM-LTI\ & 0.593 / 0.537 & 0.604 / 0.468 & 0.477 / 0.796 & 0.481 / 0.705 & 0.349 & 0.388 & 0.162 & 0.305 \\ NCDSSM-NL\ & 0.984 / \textbf{0.990} & 0.982 / 0.985 & \textbf{0.973} / 0.976 & \textbf{0.905} / \textbf{0.920} &\textbf{ 0.986} & 0.969 & 0.935 & \textbf{0.859} \\ NCDSSM-LL\ & \textbf{0.986} / 0.989 & \textbf{0.983} / \textbf{0.989} & 0.972 /\textbf{ 0.980} & 0.875 / 0.888 & 0.972 & \textbf{0.978} & \textbf{0.955} & 0.827\\ \bottomrule \end{tabular} \end{table} \begin{figure}[ht] \centering \subfloat[0\% Missing]{\includegraphics[width=0.45\linewidth]{disc_bouncing_ball_all_0.0.pdf}} \subfloat[30\% Missing]{\includegraphics[width=0.45\linewidth]{disc_bouncing_ball_all_0.3.pdf}}\\ \subfloat[50\% Missing]{\includegraphics[width=0.45\linewidth]{disc_bouncing_ball_all_0.5.pdf}} \subfloat[80\% Missing]{\includegraphics[width=0.45\linewidth]{disc_bouncing_ball_all_0.8.pdf}} \caption{Predictions from different models on the bouncing ball dataset for the 0\%, 30\%, 50\%, and 80\% missing data settings. The ground truth is shown using dashed lines with observed points in the context window (gray shaded region) shown as filled circles. The vertical dashed gray line marks the beginning of the forecast horizon. 
Solid lines indicate median predictions with 90\% prediction intervals shaded around them.} \label{fig:bb-all-preds} \end{figure} \begin{figure}[hb] \centering \subfloat[0\% Missing]{\includegraphics[width=0.45\linewidth]{pendulum_all_0.0.pdf}} \subfloat[30\% Missing]{\includegraphics[width=0.45\linewidth]{pendulum_all_0.3.pdf}}\\ \subfloat[50\% Missing]{\includegraphics[width=0.45\linewidth]{pendulum_all_0.5.pdf}} \subfloat[80\% Missing]{\includegraphics[width=0.45\linewidth]{pendulum_all_0.8.pdf}} \caption{Predictions from different models on the damped pendulum dataset for the 0\%, 30\%, 50\%, and 80\% missing data settings. The ground truth is shown using dashed lines with observed points in the context window (gray shaded region) shown as filled circles. The vertical dashed gray line marks the beginning of the forecast horizon. Solid lines indicate median predictions with 90\% prediction intervals shaded around them. The purple and orange colors indicate observation dimensions.} \label{fig:pendulum-all-preds} \end{figure} \begin{figure}[ht] \centering \subfloat{\includegraphics[width=0.5\linewidth]{box_w_dist.pdf}} \subfloat{\includegraphics[width=0.5\linewidth]{pong_w_dist.pdf}} \caption{Variation of EMD over time for the Box (left) and Pong (right) datasets. The EMD rises gradually with time for NCDSSM-LL\ and NCDSSM-NL\ but rapidly and irregularly for other models.} \label{fig:pymunk-emd} \end{figure} \begin{figure*}[hb] \centering \subfloat[Box LatentODE]{\includegraphics[width=0.9\linewidth]{box_latentode_prediction_white.png}}\\ \subfloat[Box LatentSDE]{\includegraphics[width=0.9\linewidth]{box_latentsde_prediction_white.png}}\\ \subfloat[Box NCDSSM-LTI]{\includegraphics[width=0.9\linewidth]{box_auxctkf_prediction_white.png}}\\ \subfloat[Box NCDSSM-NL]{\includegraphics[width=0.9\linewidth]{box_auxctekf_prediction_white.png}}\\ \subfloat[Box NCDSSM-LL]{\includegraphics[width=0.9\linewidth]{box_auxmixturectkf_prediction_white.png}} \caption{Sample predictions from different models on the Box dataset. The top row in each figure is the ground truth with some missing observations in the context window (before the dashed grey line). The next five rows show trajectories sampled from each model. Best viewed zoomed-in on a computer.} \label{fig:box-all-preds} \end{figure*} \begin{figure*}[htb] \centering \subfloat[Pong LatentODE]{\includegraphics[width=0.9\linewidth]{pong_latentode_prediction_white.png}}\\ \subfloat[Pong LatentSDE]{\includegraphics[width=0.9\linewidth]{pong_latentsde_prediction_white.png}}\\ \subfloat[Pong NCDSSM-LTI]{\includegraphics[width=0.9\linewidth]{pong_auxctkf_prediction_white.png}}\\ \subfloat[Pong NCDSSM-NL]{\includegraphics[width=0.9\linewidth]{pong_auxctekf_prediction_white.png}}\\ \subfloat[Pong NCDSSM-LL]{\includegraphics[width=0.9\linewidth]{pong_auxmixturectkf_prediction_white.png}} \caption{Sample predictions from different models on the Pong dataset. The top row in each figure is the ground truth with some missing observations in the context window (before the dashed grey line). The next five rows show trajectories sampled from each model. Best viewed zoomed-in on a computer.} \label{fig:pong-all-preds} \end{figure*}
\section{Mixed-state preparation non-contextuality $\Rightarrow$ 1M$\psi$E} We start by recalling the first definition of M$\psi$E, introduced in \cite{maroney12}. For any two quantum states $|\phi\rangle$ and $|\psi\rangle$, by the definition of a 1M$\psi$E model, the following condition holds: \begin{align} \int _{\Lambda_{\phi}} \mu(\lambda|\psi) d\lambda = |\langle\phi|\psi\rangle|^2 \end{align} where $\Lambda_{\phi}$ is the support of $|\phi\rangle$ in the ontic state space. This model is outcome deterministic, as $\xi(\phi|\lambda, \Pi_{\phi})=1$ for every $\lambda\in\Lambda_{\phi}$. One can then write \begin{align} \label{born2} \int _{\Lambda_{\phi}} \mu(\lambda|\psi) \xi(\phi|\lambda, \Pi_{\phi}) d\lambda =|\langle\phi|\psi\rangle|^2 \end{align} Then, for a non-maximally or simply $\psi$-epistemic model, \begin{align} \label{born3} \int _{\Lambda_{\phi}} \mu(\lambda|\psi) \xi(\phi|\lambda, \Pi_{\phi}) d\lambda \leq \int _\Lambda \mu(\lambda|\psi) \xi(\phi|\lambda, \Pi_{\phi}) d\lambda \end{align} Clearly, the model is 1M$\psi$E only if the equality in Eq.(\ref{born3}) holds. One can define a degree of epistemicity $f(\psi,\phi)$, so that \begin{align} \int _{\Lambda_{\phi}} \mu(\lambda|\psi) d\lambda = f(\psi,\phi) |\langle\psi|\phi\rangle|^{2} \end{align} where $0\leq f(\psi,\phi)\leq 1$. As the name suggests, the degree of epistemicity is maximum (i.e., $f(\psi,\phi)=1$) in a 1M$\psi$E model, where $|\psi\rangle$ and $|\phi\rangle$ are arbitrary states belonging to a $d$-dimensional Hilbert space. Other conceptual features of such a model are discussed in \cite{bal}. In an interesting work, Leifer and Maroney \cite{leifer13} argued that a preparation non-contextual model is 1M$\psi$E, but that the converse is not true due to the counterexample of the KS model in two dimensions. We note here again that the assumption of preparation non-contextuality in \cite{leifer13} concerns mixed states. They proved the following theorem.\\ \textbf{Theorem-1:} \textit{If an ontological model of quantum theory is mixed-state preparation non-contextual then it is 1M$\psi$E.}\\ They started the argument by assuming that an ontological model is non-maximally $\psi$-epistemic and proved that such a model is mixed-state preparation-contextual. We provide a direct and simpler proof of Theorem-1 shortly, after recapitulating the essence of their proof. Consider a maximally mixed qubit state $\rho=\mathcal{I}/2$ prepared via two different decompositions, \begin{eqnarray} \label{ms1} \frac{\mathcal{I}}{2}&=&\frac{1}{2}|\chi\rangle\langle \chi|+ \frac{1}{2}|\chi_{\perp}\rangle\langle \chi_{\perp}|\\ \label{ms2} &=& \frac{1}{2}|\eta\rangle\langle \eta|+ \frac{1}{2}|\eta_{\perp}\rangle\langle \eta_{\perp}| \end{eqnarray} This can be viewed as preparing $\rho=\mathcal{I}/2$ by using two distinct preparation procedures $P$ and $P^{\prime}$, respectively. In an ontological model of quantum theory, the associated epistemic states can be written as \begin{eqnarray} \label{ep1} \mu_{P}(\lambda|\mathcal{I}/2)=(\mu_{\chi}+\mu_{\chi_{\perp}})/2\\ \label{ep2} \mu_{P^{\prime}}(\lambda|\mathcal{I}/2)=(\mu_{\eta}+\mu_{\eta_{\perp}})/2 \end{eqnarray} and $\Lambda_{P}=\Lambda_{\chi}\cup\Lambda_{\chi_{\perp}}$ and $\Lambda_{P^{\prime}}=\Lambda_{\eta}\cup\Lambda_{\eta_{\perp}}$ are the respective ontic state spaces. The assumption of mixed-state preparation non-contextuality implies $\Lambda_{P}=\Lambda_{P^{\prime}}$. 
Now, if the ontological model is \emph{not} 1M$\psi$E, then from Eq.(\ref{born3}) we can write $\int _{\Lambda_{\eta}} \mu(\lambda|\chi) \xi(\eta|\lambda, \Pi_{\eta}) d\lambda < \int _\Lambda \mu(\lambda|\chi) \xi(\eta|\lambda, \Pi_{\eta}) d\lambda$. This means there is a set of ontic states $\Omega$, so that $\Lambda_{\chi}\cap \Omega=\emptyset$, and $\xi(\eta|\lambda)>0$ for all $\lambda\in \Omega$. Since $|\chi\rangle\perp|\chi_{\perp}\rangle$, by definition $\Lambda_{\chi_{\perp}}\cap \Omega=\emptyset$. It is then evident that $\Lambda_{P}\cap \Omega\neq\Lambda_{P^{\prime}}\cap \Omega$, which means $\Lambda_{P}\neq\Lambda_{P^{\prime}}$. Hence, a non-1M$\psi$E model is mixed-state preparation-contextual; in other words, a mixed-state preparation non-contextual model is 1M$\psi$E. They have also argued that the converse does not hold, owing to the existence of the KS model \cite{leifer13}. We note here that in the Leifer-Maroney proof the preparation non-contextuality for pure states is implicitly assumed. However, the proof does not rely on this assumption. Generalization of the above proof to systems of arbitrary dimension can be found in \cite{banik,leiferquanta}. Leifer and Maroney have also argued that a 1M$\psi$E model is KS non-contextual, but the converse does not hold. We now provide a direct and simpler proof of Theorem-1. We start by noting that the normalization condition provides $\int_{\Lambda_{P^{\prime}}} \mu_{P^{\prime}}(\lambda|\frac{\mathcal{I}}{2})d\lambda=1$. Assuming preparation non-contextuality for the mixed state, i.e., $\mu_{P}(\lambda|\frac{\mathcal{I}}{2})=\mu_{P^{\prime}}(\lambda|\frac{\mathcal{I}}{2})$, we have \begin{align} \int_{\Lambda_{P^{\prime}}} \mu_{P}(\lambda|\frac{\mathcal{I}}{2})d\lambda=1 \end{align} which, by using Eq. (\ref{ep1}), takes the form \begin{align} \label{pp} \frac{1}{2}\int_{\Lambda_{P^{\prime}}} \mu(\lambda|\chi) d\lambda + \frac{1}{2} \int_{\Lambda_{P^{\prime}}} \mu(\lambda|\chi_{\perp})d\lambda=1 \end{align} Since $\Lambda_{P^{\prime}}=\Lambda_{\eta}\cup\Lambda_{\eta_{\perp}}$, Eq. (\ref{pp}) can be re-written as \begin{eqnarray} \label{pnc1} &&\int_{\Lambda_{\eta}} \mu(\lambda|\chi) d\lambda + \int_{\Lambda_{\eta_{\perp}}} \mu(\lambda|\chi) d\lambda\\ \nonumber &+& \int_{\Lambda_{\eta}} \mu(\lambda|\chi_{\perp}) d\lambda + \int_{\Lambda_{\eta_{\perp}}} \mu(\lambda|\chi_{\perp})d\lambda=2 \end{eqnarray} As already mentioned, for a $\psi$-epistemic model, $\int _{\Lambda_{\phi}} \mu(\lambda|\psi) d\lambda = f(\psi,\phi) |\langle\psi|\phi\rangle|^{2}$, and for a 1M$\psi$E model $f(\psi,\phi)=1$, where $|\psi\rangle$ and $|\phi\rangle$ are arbitrary. From Eq.(\ref{pnc1}), we then have \begin{eqnarray} \label{pnc2} &&\big[f(\chi,\eta)-f(\chi_{\perp},\eta)\big]|\langle\chi|\eta\rangle|^{2} +f(\chi_{\perp},\eta)\\ \nonumber &+& \big[f(\chi,\eta_{\perp})-f(\chi_{\perp},\eta_{\perp})\big]|\langle\chi|\eta_{\perp}\rangle|^{2} +f(\chi_{\perp},\eta_{\perp})=2 \end{eqnarray} Since $0\leq f\leq 1$ and $|\langle\chi|\eta\rangle|^{2}+|\langle\chi_{\perp}|\eta\rangle|^{2}=1$, the left-hand side of Eq. (\ref{pnc2}) is a sum of two convex combinations, each bounded by unity. Equation (\ref{pnc2}) can therefore be satisfied only if each of the degree-of-epistemicity functions involved equals unity, i.e., $f(\chi,\eta)=f(\chi,\eta_{\perp})=f(\chi_{\perp},\eta)=f(\chi_{\perp},\eta_{\perp})=1$. This in turn proves that the model is 1M$\psi$E by definition. We have thus provided a direct and simpler proof that mixed-state preparation non-contextuality implies 1M$\psi$E. The proof using a qubit system suffices for the purpose of this work, but it can be generalized to any mixed state in a Hilbert space of arbitrary dimension.
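The identity underlying Eqs. (\ref{pnc1}) and (\ref{pnc2}) when all degrees of epistemicity equal unity is simply that the four Born-rule overlaps sum to two, since $\{|\eta\rangle,|\eta_{\perp}\rangle\}$ resolves the identity. A short numerical sketch of this bookkeeping (with randomly drawn qubit bases; purely illustrative and not part of the proof) is the following.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_qubit_basis():
    """Return a random orthonormal basis {|v>, |v_perp>} of C^2."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    v_perp = np.array([-np.conj(v[1]), np.conj(v[0])])
    return v, v_perp

def overlap(a, b):
    """Born-rule overlap |<a|b>|^2."""
    return abs(np.vdot(a, b)) ** 2

chi, chi_perp = random_qubit_basis()
eta, eta_perp = random_qubit_basis()

# With f(psi,phi) = 1 for all pairs (1MpsiE), each integral in Eq. (pnc1)
# equals the corresponding Born-rule overlap, and the four terms sum to 2.
total = (overlap(chi, eta) + overlap(chi, eta_perp)
         + overlap(chi_perp, eta) + overlap(chi_perp, eta_perp))
print(np.isclose(total, 2.0))   # True
\end{verbatim}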
\section{2M$\psi$E $\Rightarrow$ pure-state preparation non-contextuality} Next, we consider the second definition of maximal $\psi$-epistemicity (2M$\psi$E) in an ontological model of quantum theory. Let us recall how the distinguishability of two quantum states in Hilbert space is related to the distinguishability of the corresponding epistemic states in the ontic state space. Consider two arbitrary states $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$ in a $d$-dimensional Hilbert space, prepared by two preparation procedures $P$ and $P^{\prime}$, with associated epistemic states $\mu_{P}(\lambda|\psi_{1})$ and $\mu_{\mathcal{P^{\prime}}}(\lambda|\psi_{2})$ respectively. The reason for explicitly mentioning the preparation procedures $P$ and $P^{\prime}$ will become clear soon. When the respective ontic state spaces $\Lambda_{\psi_{1}}$ and $\Lambda_{\psi_{2}}$ corresponding to the states $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$ have non-zero overlap, we have $\Lambda_{\psi_{1}}\cap\Lambda_{\psi_{2}}\neq\emptyset$. In such a case, $\int _{\Lambda_{\psi_{1}}}\mu_{\mathcal{P^{\prime}}}(\lambda|\psi_{2})d\lambda\neq 0 $ and similarly $\int _{\Lambda_{\psi_{2}}} \mu_{P}(\lambda|\psi_{1})d\lambda\neq 0 $. In order to quantify the overlap between the two epistemic states $\mu_{P}(\lambda|\psi_{1})$ and $\mu_{\mathcal{P^{\prime}}}(\lambda|\psi_{2})$, one can start from the classical trace distance between two classical probability distributions. One can then define an appropriate quantity known as the classical fidelity \cite{leifer14,barrett14,bran, nigg,ring}, \begin{align} \label{lc1} L_{C}(\mu_{\psi_{1}},\mu_{\psi_2})=\int _{\Lambda}\min \{\mu_{P}(\lambda|\psi_{1}), \mu_{\mathcal{P^{\prime}}}(\lambda|\psi_{2})\} \ d\lambda \end{align} By definition, $0\leq L_{C}(\mu_{\psi_{1}},\mu_{\psi_2})\leq 1$, and the value is $1$ ($0$) for identical (disjoint) epistemic states, respectively. This definition implies that the impossibility of discriminating non-orthogonal quantum states would be explained in a natural way if the two quantum states sometimes correspond to the same state of reality. But this explanation is expected to fail if the quantum and classical overlaps are not equal. This fact is utilized in \cite{maroney12, barrett14, leifer14, bran,ring} to show the inability of $\psi$-epistemic models to explain the distinguishability of two quantum states belonging to a Hilbert space of dimension greater than two. In quantum theory, the overlap between two states $|\psi_1\rangle$ and $|\psi_2\rangle$ can be defined as \begin{align} \label{lq1} L_{Q}(\psi_{1},\psi_{2}) = 1 - D_{Q}(\psi_1,\psi_2) \end{align} where $D_{Q}(\psi_1,\psi_2)=\sqrt{1-|\langle\psi_1 |\psi_2\rangle|^{2}}$ is the trace distance. By definition, $0\leq L_{Q}(\psi_{1},\psi_{2})\leq 1$. If $|\psi_{1}\rangle$ and $|\psi_{2}\rangle$ are identical (orthogonal), $L_{Q}(\psi_{1},\psi_{2})$ takes the value $1$ ($0$). Now, since $0\leq L_{Q}(\psi_{1},\psi_{2})\leq 1$ and $0\leq L_{C}(\mu_{\psi_{1}},\mu_{\psi_2})\leq 1$, we may categorize the possible cases as follows. \textbf{(i)} For \emph{any} two quantum states, if $L_{Q}(\psi_{1},\psi_{2})\neq 0$ and $L_{C}(\mu_{\psi_{1}},\mu_{\psi_2})=0$, then the corresponding ontological model is termed \emph{$\psi$-ontic}. In this model, different (even non-orthogonal) quantum states are compatible with distinct ontic states. \textbf{(ii)} If a model is \emph{not} $\psi$-ontic then it is $\psi$-epistemic.
There exist \emph{at least} two quantum states for which $L_{Q}(\psi_{1},\psi_{2})\neq 0$ and $L_{C}(\psi_{1},\psi_{2})\neq 0$ but $L_{Q}(\psi_{1},\psi_{2})\geq L_{C}(\mu_{\psi_{1}},\mu_{\psi_2})$. Such a model is called \emph{$\psi$-epistemic}, in which the epistemic states corresponding to two different quantum states have at least one common $\lambda$, and the quantum state is interpreted as containing information about the real physical state of the system but not as being the reality itself. \textbf{(ii$a$)} A model is defined as 2M$\psi$E if the special case of equality in (ii) is satisfied, i.e., $L_{Q}(\psi_{1},\psi_{2})=L_{C}(\mu_{\psi_{1}},\mu_{\psi_2})$. In such a model, the overlap between two epistemic states fully accounts for the overlap between the two quantum states. From the state-discrimination viewpoint, in a 2M$\psi$E model the discrimination between two non-orthogonal quantum states is completely and quantitatively explained by the discrimination between the corresponding epistemic states. Equipped with these definitions, we are now in a position to demonstrate the connection between the 2M$\psi$E model and preparation non-contextuality for pure states. We prove the following theorem. \\ \textbf{Theorem 2:} \textit{A 2M$\psi$E ontological model of quantum theory is pure-state preparation non-contextual.} \\ The proof of Theorem-2 is straightforward, but its implication is quite interesting. Let two preparation procedures $P$ and ${P^{\prime}}$ prepare the \emph{same} pure state $|\psi\rangle$ in a $d$-dimensional Hilbert space. The associated epistemic states are $\mu_{P}(\lambda|\psi)$ and $\mu_{P^{\prime}}(\lambda|\psi)$ respectively. Now, if $|\psi_1\rangle=|\psi_2\rangle \equiv |\psi\rangle$, from Eq.(\ref{lq1}) we have $D_{Q}(\psi_1,\psi_2)=0$, leading to $ L_{Q}(\psi_{1},\psi_{2}) = 1$. As defined in (ii$a$), a 2M$\psi$E model thus demands \begin{align} \label{proof1} L_{C}(\mu_{P}(\lambda|\psi), \mu_{P^{\prime}}(\lambda|\psi)) = 1 \end{align} This is possible only when $\mu_{P}(\lambda|\psi)=\mu_{\mathcal{P^{\prime}}}(\lambda|\psi)\equiv \mu(\lambda|\psi)$, i.e., preparation non-contextuality for the pure state. This concludes the proof of Theorem-2. The above proof does not require the explicit use of the response function and is valid for any dimension of the Hilbert space. \section{Mixed-state preparation non-contextuality $\centernot\iff$ pure-state preparation non-contextuality} A crucial point to note here is that we have used the \emph{same} notion of the preparation non-contextuality assumption, at the level of mixed states in Theorem-1 and at the level of pure states in Theorem-2. In Theorem-1, we showed that mixed-state preparation non-contextuality $\Rightarrow$ 1M$\psi$E, and in Theorem-2 we proved that 2M$\psi$E $\Rightarrow$ pure-state preparation non-contextuality. If the notion of maximal $\psi$-epistemicity remains equivalent in both the definitions 1M$\psi$E and 2M$\psi$E, we can write the following: mixed-state preparation non-contextuality $\Rightarrow$ pure-state preparation non-contextuality. However, this inference is not in accordance with Theorem-3, whose statement is the following.\\ \textbf{Theorem 3:} \textit{The assumption of preparation non-contextuality in an ontological model of quantum theory does not hold both for mixed-state and pure-state simultaneously.} \\ To prove Theorem-3, a particular example suffices.
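Before turning to the proof, it may help to see the two overlap measures side by side. The following sketch (purely illustrative; the discretized ``epistemic states'' are Gaussians invented for the example and are not derived from any particular ontological model) computes $L_{Q}$ of Eq. (\ref{lq1}) for a pair of qubit states and $L_{C}$ of Eq. (\ref{lc1}) for a pair of overlapping distributions on a one-dimensional ontic space.
\begin{verbatim}
import numpy as np

# Quantum overlap L_Q of Eq. (lq1) for two qubit states.
psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(0.4), np.sin(0.4)])
D_Q = np.sqrt(1.0 - abs(np.vdot(psi1, psi2)) ** 2)   # trace distance
L_Q = 1.0 - D_Q
print("L_Q =", L_Q)

# Classical fidelity L_C of Eq. (lc1) for two toy epistemic states:
# normalized Gaussians on a discretized one-dimensional ontic space.
lam = np.linspace(-10.0, 10.0, 4001)
dlam = lam[1] - lam[0]
mu1 = np.exp(-0.5 * lam ** 2)
mu2 = np.exp(-0.5 * (lam - 1.5) ** 2)
mu1 /= mu1.sum() * dlam
mu2 /= mu2.sum() * dlam
L_C = np.sum(np.minimum(mu1, mu2)) * dlam
print("L_C =", L_C)

# A 2MpsiE model would require L_C to equal L_Q for every pair of states.
\end{verbatim}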
We specifically argue that in any all-versus-nothing proof of preparation contextuality there is an inherent interplay between pure-state and mixed-state preparation non-contextuality in an ontological model. This has hitherto gone unnoticed, and we make it explicit here. Imposing the assumption of preparation non-contextuality at the level of mixed states inevitably requires \emph{preparation contextuality} for pure states, and vice versa. We may also remark that, since any KS proof can be cast into a proof of preparation contextuality, a KS proof would also suffice for our purpose. But this requires a Hilbert space of dimension at least three to run the argument. We use Spekkens's \cite{spek05} proof of preparation contextuality for a qubit system. Let $\{P_{t}\}$ be a set of three preparation procedures in quantum theory (with $t=1,2,3$) preparing the maximally mixed state $\mathcal{I}/2$, so that \begin{align} \label{pt} \frac{\mathcal{I}}{2}=\frac{1}{2}(A_t^{+}+A_t^{-}) \end{align} where $A_{t}^{\pm}=(\mathcal {I}\pm A_{t})/2$ are rank-1 projectors and $A_{t}$ are qubit observables. Now, consider that the same maximally mixed state $\mathcal{I}/2$ is also prepared by two more preparation procedures, $P_4$ and $P_5$, given by \begin{align} \label{pnt} \frac{\mathcal{I}}{2}=\frac{1}{3}\sum_{t=1}^{3}A_t^{+}; \ \ \ \ \frac{\mathcal{I}}{2}=\frac{1}{3}\sum_{t=1}^{3}A_t^{-} \end{align} This requires the qubit observables to satisfy the relation $\sum_{t=1}^{3} A_{t}=0$. In an ontological model, assuming preparation non-contextuality for the epistemic states corresponding to the mixed state $\mathcal{I}/2$ prepared by the five preparations, one can write $\mu_{P_1}(\lambda|\frac{\mathcal{I}}{2})=\mu_{P_2}(\lambda|\frac{\mathcal{I}}{2})=\mu_{P_3}(\lambda|\frac{\mathcal{I}}{2})=\mu_{P_4}(\lambda|\frac{\mathcal{I}}{2})=\mu_{P_5}(\lambda|\frac{\mathcal{I}}{2})\equiv\nu(\lambda|\frac{\mathcal{I}}{2})$. The relevant ontic state spaces are respectively denoted as $\Lambda^{\mathcal{I}/2 }_{{P}_j}$ with $j=1,2,3,4,5$. Clearly, in a preparation non-contextual model, $\Lambda^{\mathcal{I}/2 }_{{P}_1}=\Lambda^{\mathcal{I}/2 }_{{P}_2}=....=\Lambda^{\mathcal{I}/2 }_{P_5}\equiv \Lambda^{\mathcal{I}/2 }$. Using the convexity property and assuming preparation non-contextuality for the mixed state, one can write \begin{eqnarray} \label{ptlambda} \nu(\lambda|\frac{\mathcal{I}}{2})&=&\frac{1}{2}\left( \mu_{P_{t}}(\lambda|A_t^{+}) + \mu_{P_{t}}(\lambda|A_t^{-})\right)\\ \label{p45p} &=&\frac{1}{3}\sum_{t=1}^{3}\mu_{P_4}(\lambda|A_t^{+})\\ \label{p45m} &=&\frac{1}{3}\sum_{t=1}^{3}\mu_{P_5}(\lambda|A_t^{-}) \end{eqnarray} where $t=1,2,3$. Each of the six qubit projectors appears twice in the preparations of $\rho=\mathcal{I}/2$ by these five different preparation procedures. For example, $A_{1}^{+}$ appears in $P_1$ and $P_4$. The assumption of preparation non-contextuality is equally applicable to each of the rank-1 density matrices $A_{t}^{\pm}$ prepared by two distinct preparation procedures. We denote the ontic supports corresponding to these pure states as $\Lambda^{A_{t}^{\pm} }_{{P}_j}$. Now, consider an arbitrary ontic state, say $\lambda^{\ast}\in\Lambda^{\mathcal{I}/2 }$, for which $\nu(\lambda^{\ast}|\mathcal{I}/2 )>0$. Since $A_{t}^{+}$ and $A_{t}^{-}$ are orthogonal, $\Lambda_{P_t}^{A_{t}^{+} }\cap \Lambda_{P_t}^{A_{t}^{-} } =\emptyset$. If we consider $\mu_{P_{t}}(\lambda^{\ast}|{A_t}^{+})>0$ for every $t$, then $\mu_{P_{t}}(\lambda^{\ast}|{A_t}^{-})=0$.
If preparation non-contextuality for every pure state is also assumed (in particular, $\mu_{P_{t}}(\lambda^{\ast}|A_t^{-})=\mu_{P_{5}}(\lambda^{\ast}|A_t^{-})$ for every $t$), then from Eq. (\ref{p45m}) one has $\nu(\lambda^{\ast}|\mathcal{I}/2 )=0$. Since $\nu(\lambda^{\ast}|\mathcal{I}/2 )>0$, we thus have a contradiction. The argument holds for any $\lambda$, and thus a similar contradiction can be found for any such assignment of positive probability. By the word contradiction, we explicitly mean here that if one assumes preparation non-contextuality for pure states, then the epistemic states corresponding to the mixed state become preparation contextual. On the other hand, if one wishes to assume preparation contextuality for pure states (for example, given a $\lambda^{\ast}$, the epistemic states corresponding to the pure states, $\mu_{P_{t}}(\lambda^{\ast}|A_t^{\pm})$, change their support in the preparation procedures $P_{5}$ or $P_{4}$), then preparation non-contextuality for the mixed state can be maintained. Thus, the assumption of preparation non-contextuality cannot be imposed for both pure and mixed states together, which is the statement of Theorem-3. However, there is no restriction on both pure and mixed states being preparation contextual in an ontological model of quantum theory. \section{Summary and discussion} In summary, we provided three theorems concerning the connections between the notions of preparation non-contextuality and maximal $\psi$-epistemicity in an ontological model. We showed that, taken together, these three theorems lead to conceptually inconsistent conclusions. Note that the notion of preparation non-contextuality in an ontological model ($\mu_{P}(\lambda|\rho)=\mu_{P^{\prime}}(\lambda|\rho)$) usually refers to a mixed state $\rho$ prepared by two or more distinct procedures $P$ and $P^{\prime}$. Although this has not hitherto been spelled out explicitly, there is no reason whatsoever to constrain the notion of preparation non-contextuality so that it is not applicable to a pure state $\rho=|\psi\rangle\langle\psi|$ prepared by distinct procedures. As mentioned earlier, maximal $\psi$-epistemicity in an ontological model of quantum theory conceptually implies that the overlap between any two quantum states in Hilbert space is fully accounted for by the overlap of their respective epistemic states in the ontic state space. Interestingly, the same notion of maximal $\psi$-epistemicity is captured by two distinct mathematical definitions, termed here 1M$\psi$E and 2M$\psi$E. In the very first paper on maximal $\psi$-epistemicity by Maroney \cite{maroney12}, 2M$\psi$E was introduced as a noise-tolerant version of 1M$\psi$E. In Theorem-1, using the first mathematical definition of maximal $\psi$-epistemicity, we showed that mixed-state preparation non-contextuality $\Rightarrow$ 1M$\psi$E. This theorem was proved in \cite{leifer13}, but we provided here a simpler and direct proof of it. Using the second mathematical definition of maximal $\psi$-epistemicity, we proved Theorem-2, demonstrating that 2M$\psi$E $\Rightarrow$ pure-state preparation non-contextuality. Now, if the notion of maximal $\psi$-epistemicity is equivalent in both the definitions 1M$\psi$E and 2M$\psi$E, one can infer that mixed-state preparation non-contextuality $\Rightarrow$ pure-state preparation non-contextuality.
However, we argued in Theorem-3 that there are scenarios (for example, any all-versus-nothing proof of preparation contextuality) where the assumption of preparation non-contextuality cannot be imposed on the epistemic states corresponding to both the pure and the mixed states. To maintain preparation non-contextuality for a mixed state one has to allow preparation contextuality for pure states, and vice versa. Thus, the statement of Theorem-3 is in clear disagreement with the conclusion drawn from the first two theorems. One may thus conclude that the notions of maximal $\psi$-epistemicity captured by 1M$\psi$E and 2M$\psi$E are inequivalent, and that 2M$\psi$E is not merely a noise-tolerant version of 1M$\psi$E. We note that it would certainly be interesting if a direct proof of the inequivalence between 1M$\psi$E and 2M$\psi$E could be demonstrated. Finally, we make a few remarks about further implications of our work. Note that a mixed-state preparation non-contextual model and a 1M$\psi$E model are both outcome deterministic \cite{spekkens13, maroney12, bal}. Also, the deterministic response function is a key element for connecting 1M$\psi$E and KS non-contextuality, which eventually proves that mixed-state preparation non-contextuality implies KS non-contextuality \cite{leifer13}. On the other hand, the 2M$\psi$E model does not use the response function, and it remains to be examined whether a 2M$\psi$E model is deterministic or not. However, a pure-state preparation non-contextual model is not expected to be a deterministic theory, as can be understood from the Beltrametti-Bugajski model \cite{bel}. This indicates that there is no obvious connection between 2M$\psi$E and KS non-contextuality. The 2M$\psi$E model can provide an insight into the no-go theorem proposed by Hardy \cite{hardy}. He introduced a $\psi$-ontology theorem by considering a key assumption known as `ontic indifference'. The assumption of ontic indifference states the following: if a pure state $|\psi\rangle$ remains unaffected under a quantum transformation (i.e., $\mathcal{U}|\psi\rangle=|\psi\rangle$), then a suitable transformation in the ontic space can be found such that every ontic state in the support of the epistemic state corresponding to $|\psi\rangle$ remains unaffected by the transformation. Now, by considering $\mathcal{U}|\psi\rangle=|\psi\rangle$ as another preparation procedure for $|\psi\rangle$, a weaker reading of the ontic-indifference assumption could be pure-state preparation non-contextuality, where the epistemic state is assumed to remain unaffected by the transformation. This indicates an interesting connection between the 2M$\psi$E model and Hardy's ontological model satisfying a weaker ontic-indifference assumption, which calls for further study. It could also be interesting to examine the limit on the degree of preparation contextuality (for pure or mixed states) that can be imposed by quantum theory. For example, in the Bell-CHSH scenario, mixed-state preparation non-contextuality implies the locality condition \cite{pusey,tava}. But the quantum violation is restricted to the Cirelson bound and hence puts a limit on the degree of mixed-state preparation contextuality. In the Bell-CHSH scenario, there is no issue of pure-state preparation contextuality because the same pure state does not appear in different contexts. One may quantify the degree of preparation contextuality for pure and mixed states in a suitable scenario.
This requires introducing some new ingredients for which such a degree is meaningful and can provide some interesting conclusions. Such a degree of preparation contextuality can also be compared with the degree of $\psi$-epistemicity. This may provide new insights into the research on $\psi$-epistemic models. Further studies along this line could be an interesting avenue for future research. \section*{Acknowledgments} The author acknowledges support from the project DST/ICPS/QuEST/2018/Q-42.
\section*{Introduction} Consider a fair even-odds game in which we win or lose a proportion $r$ of our capital in every round. As \begin{equation} (1+r)(1-r)=1-r^2,\quad r>0,\label{binf} \end{equation} we reduce our capital by the fraction $r^2$ if we win once and lose once (assuming that every time we stake all our capital). After only two rounds, there is already a $75\%$ probability of being behind, even though the expected value of the game is zero regardless of how many rounds we play. After $M$ rounds of the considered fair game, the median outcome is $(1-r^2)^{M/2}<1$, which is decreasing in $M$. If we have another (identical but independent) game at our disposal, then we can, in every round, split our capital equally between both games. The question is what we intuitively believe the effect on our wealth progression is going to be, comparing a single game (\emph{imbalanced}) with two simultaneous games (\emph{balanced}). For the balanced game, outcomes $(1+0.5r)$ and $(1-0.5r)$ have equal probabilities. But, as the returns of two simultaneous games can net to zero, a new neutral state has been added to the player's space of outcomes. We see the effects in Table \ref{tab:binDyn}. After one round of playing two games simultaneously, we obtain less extreme outcomes, but we leave our win-loss ratio unchanged (with a 25\% probability for winning and losing, respectively, and a 50\% probability of a neutral outcome). After two rounds of two simultaneous games, our probability of being behind has been reduced to $43.75\%$, and we have a probability of $25\%$ of breaking even, while the expected value is still zero. In this simple example, we observe that rebalancing can increase the probability of a positive return. By considering Binomial and Gaussian dynamics, we will now show, with relatively little technical complexity, that the gap in the most likely outcomes of the two strategies continues to widen as time tends to infinity, from which we then infer the rebalancing principle: in the long term, volatility reduction translates into growth. Throughout the discussion, it is important to bear in mind that trading strategies which generate growth through rebalancing (or volatility harvesting) do require specific market dynamics to persist. They are therefore conceptually no different from a simple directional trade -- we are merely betting on market dynamics rather than market direction, and success is not an arbitrage. \section*{Binomial Dynamics}\label{Sec_Bin} We introduce some mathematical notation and relax the previous assumptions slightly. Consider two assets, $A_1$ and $A_2$, with returns given by \begin{equation*} R_{i,j} = \mu+rB_{i,j},\quad 1\leq j\leq M,\ i=1,\,2, \end{equation*} where $M$ denotes the number of time steps considered, and suppose $\mu\in(0,1)$ with $\mu+0.5r=1$ and $\mu-r>0$. Suppose $B_{i,j}\sim \mathcal{B}(1,p)$ are Bernoulli distributed for some parameter $p>0$ and with correlation $\rho=\mathrm{Corr}(B_{1,j},B_{2,j})>0$. Suppose also that there is no serial (intertemporal) correlation in the considered random processes, such that any random variables drawn at different points in time are independent. \begin{table}[b] \caption{\it For $p=0.5$ and $\rho=0$, we see the differences in outcomes between playing a single (imbalanced) or two simultaneous (balanced) games. 
We notice that the balanced player has a significantly lower probability of falling behind due to the zero return he obtains in a round where he simultaneously wins and loses a game.} \centering \begin{tabular}{ c | c | c | c || c} & $\mathbb{P}\big[R > 1\big]$ & $\mathbb{P}\big[R = 1\big]$ & $\mathbb{P}\big[R < 1\big]$ & $\mathbb{P}\big[R \geq 1\big]$ \\ \hline\hline Imbalanced, after $1^{st}$ round & 50\% & 0 & 50\% & 50\%\\ \hline Balanced, after $1^{st}$ round & 25\% & 50\% & 25\% & 75\%\\ \hline\hline Imbalanced, after $2^{nd}$ round & 25\% & 0 & 75\% & 25\%\\ \hline Balanced, after $2^{nd}$ round & 31.25\% & 25\% & 43.75\% & 56.25\%\\ \hline\hline \end{tabular} \label{tab:binDyn} \end{table} If, at every time $j$, we invest fully into either $A_1$ or $A_2$, we engage in what we earlier termed an imbalanced game, whereas, if we spread our allocation equally between $A_1$ and $A_2$, we engage in a balanced game. The expected period return is identical for imbalanced and balanced strategies, but, for an imbalanced portfolio, the probability of negative period returns is given by $$\mathbb{P}\big[R_{i,j} < 1\big] = 1-p,$$ while, for the balanced portfolio, we have \begin{align} \mathbb{P}\big[0.5 R_{1,j} + 0.5 R_{2,j} < 1\big]=&\ \mathbb{P}\big[B_{1,j} = 0 \wedge B_{2,j} =0\big]\nonumber\\ =&\ \mathbb{P}\big[B_{1,j}= 0 \mid B_{2,j} =0\big]\,\mathbb{P}\big[ B_{2,j} =0\big]<1-p,\label{FewDEq}\nonumber \end{align} which holds regardless of $\rho$. \begin{figure}[b] \centering \includegraphics[scale=0.35]{BinSample.png} \caption{The (fitted) densities obtained from 1000 simulated paths of balanced and imbalanced strategies on binomial dynamics. We use $p=0.5$, $\mu=0.98$, $r=0.04$, and $M=250$, which corresponds to one year trading with unbiased daily $\pm 2\%$ dynamics. We use $\rho = 0$, which represents independent assets. The median points of the distributions (which for this choice of $p$ correspond to the most likely outcomes) are $0.9751$ and $0.9512$ for the balanced and imbalanced strategies, respectively.} \label{BinSample} \end{figure} After many rounds of our game, we would expect $pM$ ups and $(1-p)M$ downs. The most likely outcome for the imbalanced strategy is therefore given by \begin{align*} \ (\mu+r)^{pM}\mu^{(1-p)M} = (\mu+r)^{(2p-1)M}(\mu+r)^{(1-p)M}\mu^{(1-p)M} \end{align*} Conversely, for the balanced strategy, the most likely outcome is \begin{align*} (\mu+r)^{\beta_{1}M}(\mu+0.5r)^{\beta_{2}M}\mu^{\beta_{3}M} = (\mu+r)^{(\beta_{1}-\beta_3)M}(\mu+r)^{\beta_3M}\mu^{\beta_{3}M}, \end{align*} where $\beta_1=\mathbb{P}\big[B_{1,j}=B_{2,j}=1\big]$, $\beta_3=\mathbb{P}\big[B_{1,j}=B_{2,j}=0\big]$, and $\beta_2 = 1-\beta_1-\beta_3$. We can write \begin{align*} \rho = \frac{\beta_1-p^{2}}{p(1-p)}, \end{align*} so $\beta_1=p(1-p)\rho+p^2$. By a symmetry argument, we obtain $\beta_3=p(1-p)\rho+(1-p)^2$, and finally $\beta_2=1-\beta_1-\beta_3=2p(1-p)(1-\rho)$. Noting that $\beta_1-\beta_3 = 2p-1$ and $1-p-\beta_3=0.5\beta_2$, we obtain the ratio between the modal values of the imbalanced and balanced strategies as \begin{align*} \big[(\mu +r)\mu\big]^{0.5\beta_2M} = \big[\mu + 0.5r\mu\big]^{0.5\beta_2M} < 1 \end{align*} since $\mu<1$. We conclude that the distribution of the balanced strategy has a mode (the highest point of the distribution, the value most likely to occur) which is higher than that of the imbalanced strategy, and the gap widens as $M$ increases. 
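The densities in Figure \ref{BinSample} can be reproduced with a few lines of simulation. The sketch below (Python/NumPy; written for illustration here and not the code used to produce the figure) draws independent Bernoulli returns with $p=0.5$, $\mu=0.98$, $r=0.04$, $\rho=0$, and $M=250$, and reports the median terminal values of the balanced and imbalanced strategies.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
p, mu, r, M, n_paths = 0.5, 0.98, 0.04, 250, 1000

# Independent Bernoulli draws for the two assets (rho = 0).
B1 = rng.binomial(1, p, size=(n_paths, M))
B2 = rng.binomial(1, p, size=(n_paths, M))
R1 = mu + r * B1          # period returns of asset A1
R2 = mu + r * B2          # period returns of asset A2

imbalanced = np.prod(R1, axis=1)                    # all capital in A1
balanced   = np.prod(0.5 * R1 + 0.5 * R2, axis=1)   # 50/50, rebalanced each period

print("median imbalanced:", np.median(imbalanced))  # close to 0.9512
print("median balanced:  ", np.median(balanced))    # close to 0.9751
\end{verbatim}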
If we consider that the share of outcomes lying near the distribution mode grows as $M$ increases, then the probability of the balanced strategy outperforming the imbalanced strategy also tends to one as $M$ increases. At the same time, the expected value of both strategies is always given by $(\mu+rp)^M$. In Figure \ref{BinSample}, we see (fitted) densities obtained from 1000 simulated paths of balanced and imbalanced strategies on binomial dynamics. We use $p=0.5$, $\mu=0.98$, $r=0.04$, $M=250$, and $\rho = 0$, which corresponds to one year of trading with unbiased daily $\pm 2\%$ dynamics and independent assets. The median points of the distributions (which for this choice of $p$ correspond to the most likely outcomes) are $0.9751$ and $0.9512$ for the balanced and imbalanced strategies, respectively, confirming our theoretical results. \section*{Gaussian Dynamics}\label{Sec_Normal} Consider two assets $A_1$ and $A_2$ with returns given by \begin{equation} R_{i,j} = \mu+\sigma X_{i,j},\quad 1\leq j\leq M,\ i=1,\,2,\label{NormProc} \end{equation} where $X_{i,j}\sim N(0,1)$ are normally distributed with $\mathrm{Corr}(X_{1,j},X_{2,j})=\rho$. As before, we assume that any two random variables drawn at different points in time are independent. We also suppose $A_{1,0}=A_{2,0}=1$. We denote by $P$ a portfolio for which, at the beginning of every time period, all capital is split equally between assets $A_1$ and $A_2$. The period returns of this rebalanced portfolio $P$ are given by \begin{align*} R_{P,j}= \frac{1}{2}R_{1,j} + \frac{1}{2}R_{2,j} =\mu + \frac{1}{2}\sigma X_{1,j} + \frac{1}{2}\sigma X_{2,j}. \end{align*} The portfolio value $P_M$ at time $M$ is then given by \begin{align*} P_M=\prod^M_{j=1}R_{P,j} \end{align*} if we set our starting capital equal to 1. We obtain the expected logarithmic growth rate as \begin{align} \mathbb{E}\Big[\log P_M^{1/M}\Big] =\frac{1}{M}\sum^M_{j=1}\mathbb{E}\log R_{P,j} = \mathbb{E}\log\Big[\mu + \frac{1}{2}\sigma X_1 + \frac{1}{2}\sigma X_2\Big],\label{balancedLogr} \end{align} where $X_1$, $X_2\sim N(0,1)$ with $\mathrm{Corr}(X_1,X_2)=\rho$. The expected logarithmic growth rate of the individual assets $A_1$ and $A_2$ is given by \begin{equation} \mathbb{E}\Big[\log \left(A_{1,M}\right)^{1/M}\Big] = \mathbb{E}\Big[\log \left(A_{2,M}\right)^{1/M}\Big] = \mathbb{E} \log\Big[\mu + \sigma X_1\Big].\label{imbalancedLogr} \end{equation} \subsubsection*{Comparing Growth Rates} Developing expressions \eqref{balancedLogr} and \eqref{imbalancedLogr} in a second order Taylor expansion about zero, we obtain \begin{equation*} \mathbb{E}\Big[\log P_M^{1/M}\Big] = \mu - \frac{1}{2}\sigma^2 + \frac{1}{4}\sigma^2(1-\rho) \end{equation*} and \begin{equation*} \mathbb{E}\Big[\log \left(A_{1,M}\right)^{1/M}\Big] = \mu - \frac{1}{2}\sigma^2 \end{equation*} if we assume that $\sigma>>\mu$. The rebalancing profit in terms of the growth rate differential is then given by \begin{equation} \frac{1}{4}\sigma^2(1-\rho).\label{RebBonusHeuristic} \end{equation} We observe that the rebalancing profit scales inversely with $\rho$. And, for $\rho=1$, the rebalancing profit is zero (as we would expect). Relying on a more detailed outline given by Breiman (1961), we can now formulate the following result. \begin{prop}\label{RebBonusTh} Denote by $\Lambda_{{\rho_1},M}$ and $\Lambda_{{\rho_2},M}$ the time $M$ values of two balanced strategies which differ only in the correlations $\rho_1$ and $\rho_2$, $\rho_1<\rho_2$, of their respectively traded asset pairs, with all other dynamics being equal. 
We have \begin{equation*} \lim_{M\to\infty}\frac{\Lambda_{\rho_1,M}}{\Lambda_{\rho_2,M}}=\infty\quad\text{a.s.}\label{RebBonus_Eq1} \end{equation*} for the two strategies. The special case of an imbalanced strategy is contained by chosing $\rho_2=1$, which tells us that, in the long run, the balanced strategy will almost surely outperform the imbalanced strategy obtained by trading only one of the two assets. \end{prop} We can generalise the Gaussian dynamics in \eqref{NormProc} by considering \begin{equation*} R_{i,j} = \mu_i+\sigma_i X_{i,j} \end{equation*} for $\mu_i$ and $\sigma_i$ depending on $i=1$, $2$. Then, denoting by $\theta\in [0,1]$ the portfolio balance of the two assets, \begin{align*} R_{P,j}=&\ \theta R_{1,j} + (1-\theta)R_{2,j}\\ =&\ \theta\mu_1+(1-\theta)\mu_2 + \theta\sigma_1 X_{1,j} + (1-\theta)\sigma_2 X_{2,j}, \end{align*} and so, developing this expression as before, we obtain \begin{align} \mathbb{E}\Big[\log P_M^{1/M}\Big] =&\ \theta\mu_1+(1-\theta)\mu_2 -\frac{1}{2}\Big[\theta^2\sigma_1^2+(1-\theta)^2\sigma_2^2\Big]-\theta(1-\theta)\sigma_1\sigma_2\rho,\label{GenEq} \end{align} while \begin{equation*} \mathbb{E}\Big[\log \left(A_{i,M}\right)^{1/M}\Big] = \mu_i-\frac{1}{2}\sigma_i^2. \end{equation*} In many cases, a careful choice of $\theta$ can be used to create an expected logarithmic growth rate for the balanced strategy which exceeds that of both individual assets, and the result of Proposition \ref{RebBonusTh} extends to those situations. In particular, a positive expected logarithmic growth rate for the balanced strategy can be achieved even in some situations where both assets individually have negative expected logarithmic growth. \section*{General Market Dynamics} Provided $X_1$ and $X_2$ are identically distributed, we can apply Jensen's inequality to conclude directly that \begin{align} \mathbb{E}\log\Big[\theta X_1 + (1-\theta)X_2\Big]\geq\ \theta\,\mathbb{E}\log X_1 + (1-\theta)\,\mathbb{E}\log X_2 = \mathbb{E}\log X_1 \label{JenIneq} \end{align} without specifying the probabilistic dynamics of our two traded assets any further. Intuitively, \eqref{JenIneq} can be used to explain why the results of Proposition \ref{RebBonusTh} would be expected to hold more generally, and why normality is a sufficient but not necessary prerequisite for the presented results. Caution is required to ensure that the limit $M\to\infty$ can be taken safely, and the Gaussian dynamics as studied in the previous section allow the use of Breiman's (1961) classic results. Details of a comprehensive and general proof, which requires more technicality, can be found in \cite{JoyOfVol}. \section*{Relationship to Kelly's Formula} The Kelly criterion \cite{KellyIR} can be stated as maximising the expected logarithmic growth rate under certain conditions. For two assets in a multi-period model, the optimal weights $w^*_1$ and $w^*_2$ to invest in asset one and two, respectively, are given by \begin{equation} (w^*_1,w^*_2)^T = \Sigma^{-1}(\mu_1,\mu_2)^T,\label{KellyEq1} \end{equation} where $\Sigma$ denotes the $2\times 2$ covariance matrix and $\mu_1$, $\mu_2>0$ denote the expected period returns of our two assets. (We require $\Sigma$ to be invertible.) 
If we denote the determinant of $\Sigma$ by $|\Sigma|$, $|\Sigma|>0$, then we can write \eqref{KellyEq1} as \begin{equation*} (w^*_1,w^*_2)^T = \frac{1}{|\Sigma|} \left( \begin{array}{cc} \sigma^2_2 & \text{-Cov}(X_1,X_2) \\ \text{-Cov}(X_1,X_2) & \sigma^2_1 \\ \end{array} \right) \left( \begin{array}{c} \mu_1\\ \mu_2 \\ \end{array} \right), \end{equation*} which tells us that, as long as $\mu_1=\mu_2$ and $\sigma_1=\sigma_2$, Kelly always commands a balanced strategy with equal amounts invested in assets $A_1$ and $A_2$. (Note that this does not mean that Kelly commands a 50\%-50\% allocation.) While our previous results state that a higher rebalancing speed will guarantee outperformance of otherwise equal strategies in the long run, Kelly's allocation is one that guarantees outperformance of any other strategy in the long run \cite{Breimann}. Given an imbalanced allocation, it is therefore instructive to view a balanced strategy as an improvement step towards Kelly's allocation, the practical difficulty of the latter being the need for a precise understanding of market dynamics. \begin{figure}[b] \centering \includegraphics[width=1\textwidth]{g10Reb1.png} \caption{Rebalancing effect for two asset portfolios in G10, 1 of 5.} \label{g10Reb1} \end{figure} \begin{figure}[b] \centering \includegraphics[width=1\textwidth]{g10Reb2.png} \caption{Rebalancing effect for two asset portfolios in G10, 2 of 5.} \label{g10Reb2} \end{figure} \begin{figure}[b] \centering \includegraphics[width=1\textwidth]{g10Reb3.png} \caption{Rebalancing effect for two asset portfolios in G10, 3 of 5.} \label{g10Reb3} \end{figure} \section*{Real World Data} We use daily WM/Reuters FX data from 1 January 2000 to 5 November 2015 for all G10 USD crosses. We assume the role of a USD based investor. To account for interest rate as well as spot movements, we simulate trading based on historical mid prices of one day forward contracts, with gains and losses reconverted into the base currency at the end of every day. We trade such that, for every currency cross, we go long the first currency and short the second. For every combination of two exchange rates, we create a two asset portfolio, where we compare the performance of a daily balanced two asset portfolio to that of a portfolio with a 50\%--50\% initial allocation that is never rebalanced, which we refer to as the \emph{initial balanced} portfolio. In Figures \ref{g10Reb1} to \ref{g10Reb5}, we see the annualised returns generated by the balanced and initial balanced strategies, respectively, as well as the annualised return differential. We notice that, for 30 out of the 36 crosses, the rebalanced return is greater than the initial balanced return, albeit by very different magnitudes. Considering a daily volatility of $0.75\%$ per currency pair and trading on 250 business days, the total annual rebalancing cost would roughly be $2\times 250\times 0.0075 \times \text{half-spread}$, which, for example, gives annual costs of 3.75 bps and 9.375 bps for assumed spreads of 1 bps and 5 bps, respectively; a calculation which does not yet account for the (relatively cheaper) cost of rolling the underlying position, and which does not yet take into account occasional big movements. If we consider 20 bps as a total annual cost number for very liquid currency pairs and highly efficient execution, a net annual profit of 10 or 20 bps seems feasible for some currency pairs. For many currency pairs, the potential rebalancing profit will be of the same order of magnitude as the required transaction costs. 
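For each currency pair, the comparison described above reduces to the following computation on two aligned daily return series. The sketch assumes that \texttt{r1} and \texttt{r2} hold the daily returns, in base-currency terms, of the two positions; the function and variable names are ours, and the synthetic data at the end are used only to make the example runnable.
\begin{verbatim}
import numpy as np

def rebalancing_effect(r1, r2):
    """Annualised return of a daily rebalanced 50/50 portfolio minus that of
    an initially balanced (never rebalanced) portfolio, given two aligned
    arrays of daily simple returns r1 and r2."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    years = len(r1) / 250.0

    # Daily rebalanced: each day half of the capital earns r1 and half earns r2.
    rebalanced = np.prod(1.0 + 0.5 * r1 + 0.5 * r2)

    # Initial balanced: 50/50 split at the start, then buy and hold.
    initial_balanced = 0.5 * np.prod(1.0 + r1) + 0.5 * np.prod(1.0 + r2)

    annualise = lambda w: w ** (1.0 / years) - 1.0
    return annualise(rebalanced) - annualise(initial_balanced)

# Synthetic example (for illustration only): 15 years of 0.75% daily volatility.
rng = np.random.default_rng(1)
r1 = 0.0075 * rng.standard_normal(250 * 15)
r2 = 0.0075 * rng.standard_normal(250 * 15)
print("annualised return differential:", rebalancing_effect(r1, r2))
\end{verbatim}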
\begin{figure}[b] \centering \includegraphics[width=1\textwidth]{g10Reb4.png} \caption{Rebalancing effect for two asset portfolios in G10, 4 of 5.} \label{g10Reb4} \end{figure} \begin{figure}[b] \centering \includegraphics[width=1\textwidth]{g10Reb5.png} \caption{Rebalancing effect for two asset portfolios in G10, 5 of 5.} \label{g10Reb5} \end{figure} \section*{Conclusion} Showing that exponential growth can be generated in a random market with growth rate zero is an idea originally presented by Shannon (cf.\,\cite{Poundstone}). For $\frac{1}{2}\sigma_1^2\geq\mu_1>\frac{1}{4}\sigma_1^2$, $\mu_2=0$, $\sigma_2=0$, $\rho=0$, and $\theta=0.5$, we obtain the logarithmic growth rate \begin{align*} \mathbb{E}\log P_M^{1/M} = \frac{1}{2}\mu_1 - \frac{1}{8}\sigma^2_1 >0 \end{align*} in \eqref{GenEq} for a strategy which rebalances between an asset with non-positive growth rate and an interest free cash account, and we recover Shannon's strategy from our results. In analogy with Proposition \ref{RebBonusTh}, we have no finite time arbitrage (and therefore no conflict with the fundamental theorem of asset pricing), but we can expect profitability in the long run. Is it surprising, or even contradictory, that exponential growth can be generated from volatility in a zero-growth (but otherwise random) market? Not if we recall that the balanced strategy has a smaller probability of getting lucky (i.e., of profiting from random big moves), and that expected values remain unchanged, as is for example highlighted in Figure \ref{BinSample}. In practice, while rebalancing is not confined to mean-reversion environments, it still relies on continuity of dynamics, which constitutes the strategy's risk, namely that any observed large deviation will provoke doubt as to whether the required equilibrium in the underlying assets is still being assessed correctly. This need for repeated correct assessment of the market environment highlights a property volatility harvesting has in common with many other strategies: success depends on skilful application.
\section{Introduction} Vertical charge transfer in a superlattice subjected to a homogeneous electric field (biased superlattice, BSL, with equipopulated levels) has been under investigation since the 1970s (see Ref. \cite{1} and references in the reviews of Ref. \cite{2}). The stimulated emission in the mid-infrared (IR) and terahertz (THz) spectral regions, caused by the intersubband transitions of electrons under vertical transport through tunnel-coupled cascade structures (monopolar laser effect), has also been investigated. Using this scheme, the viability of both mid-IR and THz lasers has been demonstrated during the previous decade (see Refs. \cite{3,4} and references therein). Recently, a nonequilibrium electron distribution has been observed experimentally \cite{5} and described theoretically \cite{6} for heavily-doped cascade structures, where the effective temperature is determined from the balance equation. To the best of our knowledge, nonequilibrium carriers in low-doped structures have not been considered beyond the balance approach. \begin{figure}[tbp] \begin{center} \includegraphics{condBSL1.eps} \end{center} \par \addvspace{-2 cm} \caption{(a) Band diagram for a BSL of period $Z$ with the Bloch energy $\protect\varepsilon _{B}$ which is comparable to the optical phonon energy, $\hbar \protect\omega _{0}$. (b-d) Schemes of tunneling transitions due to elastic scattering (solid arrows), spontaneous optical phonon emission (dotted arrow), and phonon emission from the active region (vertical dashed arrows) for the cases: (b) $\protect\varepsilon _{B}<\hbar \protect\omega _{0}/2$, (c) $\hbar \protect\omega _{0}/2<\protect\varepsilon _{B}<\hbar \protect\omega _{0}$, and (d) $\hbar \protect\omega _{0}<\protect\varepsilon _{B}$. } \end{figure} In this paper we study the nonequilibrium electron distribution in a biased superlattice (BSL) under vertical current through the Wannier-Stark ladder, which takes place under the condition $2T\ll \varepsilon _{B}$, where $\varepsilon _{B}$ is the Bloch energy and $T$ stands for the tunneling matrix element between adjacent quantum wells (QWs) \cite{7}. Since the parameters of each QW and the conditions for interwell transitions are identical (see Fig. 1a), the level populations over QWs are the same. But the distribution of electrons over the in-plane energy should change essentially due to the interplay between elastic and non-elastic processes (see Figs. 1b-d). In the low-temperature case we only consider the passive region, with energy less than the optical phonon energy, $\hbar \omega _{0}$. For the low-concentration limit, we consider the kinetic equation which takes into account the quasi-elastic scattering caused by acoustic phonons as well as the interwell tunneling due to elastic scattering by disorder and due to optical phonon emission. As a result, we obtain the electron distribution versus the in-plane kinetic energy, which strongly depends on the ratio $\varepsilon _{B}/\hbar \omega _{0}$. In the case of the Bloch-phonon resonance, when $M\varepsilon _{B}=N\hbar \omega _{0}$ with $N$ and $M$ integers, \textit{a partially-inverted distribution}, with maxima at energies $(N/M)\hbar \omega _{0}$, can be realized. The phenomenon of absolute negative conductivity (ANC) of electrons excited near the energy $\hbar \omega _{0}$ was discussed four decades ago \cite{8} and different regimes of ANC (including magnetotransport \cite{9} and transient regimes of the response \cite{10}) were considered. 
Recently, the ANC regime was observed when microwave radiation acts on two-dimensional (2D) electrons in a quantizing magnetic field \cite{11}. As shown below, \textit{a resonant ANC regime} of the in-plane response can be obtained in BSL under the Bloch-phonon resonance conditions. Such a peculiarity appears due to the contribution of the energy interval near $\hbar \omega _{0}$, where an inverted distribution takes place. As a result, the BSL becomes unstable with respect to in-plane fluctuations if the ANC conditions are satisfied. The paper is organized as follows. The basic equations describing the distribution of hot electrons in BSL and the in-plane conductivity are presented in Sec. II. An analytical consideration for the case $\varepsilon _{B}/\hbar \omega _{0}=1/2$ (the second-order Bloch-phonon resonance) is performed in Sec. III. Results of numerical calculations are discussed in Sec. IV. Concluding remarks and the list of assumptions made are given in the last section. In the Appendix, the kinetic equations for different $\varepsilon _{B}/\hbar \omega _{0}$ are presented. \section{Basic equations} Within the tight-binding approximation, the electrons in BSL are characterized by the 2D momentum, $\mathbf{p}$, and the quantum well number, $r=0,\pm 1,\ldots $. Under the in-plane electric field $\mathbf{E}$, the distribution function, $f_{r\mathbf{p}}$, is governed by the system of kinetic equations: \begin{equation} e\mathbf{E}\cdot \frac{\partial f_{r\mathbf{p}}}{\partial \mathbf{p}}=\sum\limits_{k}J_{k}(f|r\mathbf{p}), \label{1} \end{equation} where the collision integrals $J_{k}(f|r\mathbf{p})$ describe the scattering processes caused by the longitudinal optical phonon emission ($k=LO$), the acoustic phonons ($k=ac$), or the static disorder ($k=d$). Below we present these collision integrals and consider the kinetic equation for the distribution functions $f_{r\mathbf{p}}$, which are normalized by the condition $n=(2/V)\sum_{r\mathbf{p}}f_{r\mathbf{p}}$, where $n$ is the 3D concentration, $V$ is the normalization volume, and the factor 2 is due to spin. We also evaluate the lateral current density $\mathbf{I}=(2e/Vm)\sum_{r\mathbf{p}}\mathbf{p}f_{r\mathbf{p}}$ for electrons, with the effective mass $m$, under a weak probe field $\mathbf{E}$. \subsection{Collision integrals} Here we evaluate the collision integrals in Eq. (1) by modifying the general expressions \cite{12} for electrons in BSL, described in the tight-binding approximation by the states $|r\mathbf{p})$, with the energies $\varepsilon _{rp}=r\varepsilon _{B}+\varepsilon _{p}$, where $\varepsilon _{p}=p^{2}/2m$ is the in-plane kinetic energy (see Sec. 3 in Ref. 7). 
For the low-temperature case, when temperature of phonons $T_{ph}\ll \hbar \omega _{0}$, the spontaneous emission of dispersionless optical phonons is described by \begin{equation} J_{LO}(f|r\mathbf{p})=\frac{2\pi }{\hbar }\sum\limits_{r^{\prime }\mathbf{p}% ^{\prime }\mathbf{Q}}|C_{Q}^{(LO)}|^{2}|(r^{\prime }\mathbf{p}^{\prime }|e^{i% \mathbf{Qr}}|r\mathbf{p})|^{2}~\left[ \delta (\varepsilon _{r^{\prime }p^{\prime }}-\varepsilon _{rp}-\hbar \omega _{0})f_{r^{\prime }\mathbf{p}% ^{\prime }}-\delta (\varepsilon _{rp}-\varepsilon _{r^{\prime }p^{\prime }}-\hbar \omega _{0})f_{r\mathbf{p}}\right] , \label{2} \end{equation}% where $|C_{Q}^{(LO)}|^{2}$ is the bulk matrix element for the Fr\"{o}lich interaction, with the vibration mode characterized by the 3D wave vector $% \mathbf{Q}$, and $|(r^{\prime }\mathbf{p}^{\prime }|e^{i\mathbf{Qr}}|r% \mathbf{p})|^{2}$ is the overlap factor. Taking into account the quasielastic energy relaxation caused by the equipopulated acoustic phonons, one obtains the collision integral \begin{equation} J_{ac}(f|r\mathbf{p})=\sum\limits_{r^{\prime }\mathbf{p}^{\prime }}W_{r% \mathbf{p}r^{\prime }\mathbf{p}^{\prime }}(f_{r^{\prime }\mathbf{p}^{\prime }}-f_{r\mathbf{p}})-\frac{1}{2}\sum\limits_{r^{\prime }\mathbf{p}^{\prime }}\Delta W_{r\mathbf{p}r^{\prime }\mathbf{p}^{\prime }}(f_{r^{\prime }% \mathbf{p}^{\prime }}+f_{r\mathbf{p}}). \label{3} \end{equation}% The transition probabilities $W_{r\mathbf{p}r^{\prime }\mathbf{p}^{\prime }}$ and $\Delta W_{r\mathbf{p}r^{\prime }\mathbf{p}^{\prime }}$ are written here within the second order accuracy with respect to the acoustic phonon energy, $\hbar \omega _{Q}$, as follows \begin{eqnarray} W_{r\mathbf{p}r^{\prime }\mathbf{p}^{\prime }} &=&K_{rr^{\prime }}(\mathbf{p}% -\mathbf{p}^{\prime })\delta (\varepsilon _{r^{\prime }p^{\prime }}-\varepsilon _{rp})+\frac{T_{ph}}{2}\Delta K_{rr^{\prime }}(\mathbf{p}-% \mathbf{p}^{\prime })\delta ^{\prime \prime }(\varepsilon _{r^{\prime }p^{\prime }}-\varepsilon _{rp}), \nonumber \\ \Delta W_{r\mathbf{p}r^{\prime }\mathbf{p}^{\prime }} &=&\Delta K_{rr^{\prime }}(\mathbf{p}-\mathbf{p}^{\prime })\delta (\varepsilon _{r^{\prime }p^{\prime }}-\varepsilon _{rp}). \label{4} \end{eqnarray}% Here $\delta ^{\prime }(E)$ and $\delta ^{\prime \prime }(E)$ are the first and second derivatives of the $\delta $-function and the kernels $% K_{rr^{\prime }}$ and $\Delta K_{rr^{\prime }}$ are given by \begin{eqnarray} K_{rr^{\prime }}(\mathbf{p}-\mathbf{p}^{\prime }) &=&\frac{4\pi }{\hbar }% \sum\limits_{\mathbf{Q}}|C_{Q}^{(ac)}|^{2}|(r^{\prime }\mathbf{p}^{\prime }|e^{i\mathbf{Qr}}|r\mathbf{p})|^{2}\frac{T_{ph}}{\hbar \omega _{Q}}~ \nonumber \\ \Delta K_{rr^{\prime }}(\mathbf{p}-\mathbf{p}^{\prime }) &=&\frac{4\pi }{% \hbar }\sum\limits_{\mathbf{Q}}|C_{Q}^{(ac)}|^{2}|(r^{\prime }\mathbf{p}% ^{\prime }|e^{i\mathbf{Qr}}|r\mathbf{p})|^{2}\hbar \omega _{Q} \label{5} \end{eqnarray}% where $C_{Q}^{(ac)}$ is the bulk matrix element for the deformation interaction. We restrict ourselves to the sequential tunneling processes under the condition $T\ll \varepsilon _{B}$. 
Considering only the proportional to $% (T/\varepsilon _{B})^{2}$ corrections to the overlap factors, we use \begin{equation} |(r^{\prime }\mathbf{p}^{\prime }|e^{i\mathbf{Qr}}|r\mathbf{p})|^{2}\simeq \delta _{\mathbf{p}^{\prime }\mathbf{p}+\hbar \mathbf{q}}\Psi _{q_{\bot }d}% \left[ \delta _{rr^{\prime }}+\left( \frac{T}{\varepsilon _{B}}\right) ^{2}\left( \delta _{rr^{\prime }+1}+\delta _{rr^{\prime }-1}\right) \right] , \label{6} \end{equation}% where $\Psi _{q_{\bot }d}=|(0|e^{iq_{\bot }z}|0)|^{2}$ describes the transverse overlap between the ground states of the QWs, $|0)$. Since all QWs are identical and $\varepsilon _{r^{\prime }p^{\prime }}-\varepsilon _{rp}=(r-r^{\prime })\varepsilon _{B}+\varepsilon _{p^{\prime }}-\varepsilon _{p}$, the distribution functions are the same in any QW, i.e. $f_{r\mathbf{p% }}\rightarrow f_{\mathbf{p}}$. Thus, the collision integrals in Eqs. (2) and (3) are independent on $r$ because the summation over $r^{\prime }$ is replaced by $\sum_{\Delta r=\pm 1}\ldots $. The collision integral in Eq. (2) is transformed into \begin{eqnarray} J_{LO}(f|\mathbf{p}) &\simeq &\frac{2\pi }{\hbar }\sum\limits_{r^{\prime }% \mathbf{p^{\prime }}q_{\bot }}|C_{Q}^{(LO)}|^{2}\Psi _{q_{\bot }d}~~~~~ \nonumber \\ &&\times\biggr\{\delta (\varepsilon _{p^{\prime }}-\varepsilon _{p}-\hbar \omega_{0})f_{\mathbf{p}^{\prime }}-\delta (\varepsilon _{p}-\varepsilon _{p^{\prime }}-\hbar\omega _{0})f_{\mathbf{p}} \nonumber \\ &&+\biggr(\frac{T}{\varepsilon _{B}}\biggr)^{2}\sum\limits_{\Delta r=\pm 1}% \biggr[\delta (\Delta r\varepsilon _{B}+\varepsilon _{p^{\prime }}-\varepsilon _{p}-\hbar \omega _{0})f_{\mathbf{p}^{\prime }} \nonumber \\ &&-\delta (\Delta r\varepsilon _{B}+\varepsilon _{p^{\prime }}-\varepsilon _{p}+\hbar \omega _{0})f_{\mathbf{p}}\biggr]\biggr\}, \label{7} \end{eqnarray}% where $\sum_{\Delta r=\pm 1}\ldots $ describes the interwell tunneling with LO-phonon emission and $Q^{2}=|\mathbf{p}-\mathbf{p}^{\prime }|^{2}/\hbar ^{2}+q_{\bot }^{2}$. Below we restrict ourselves to the thin QW case, when $|C_{Q}^{(ac)}|^{2}$ can be replaced by $|C_{q_{\bot }}^{(ac)}|^{2}$. Similar transformations for the acoustic phonon contribution of Eq. (3) give us \begin{eqnarray} J_{ac}(f|\mathbf{p}) &=&K_{ac}\sum\limits_{\mathbf{p}^{\prime }}\biggr[% \delta (\varepsilon _{p^{\prime }}-\varepsilon _{p})~+\sum\limits_{\Delta r=\pm 1}\left( \frac{T}{\varepsilon _{B}}\right) ^{2}\delta (\Delta r\varepsilon _{B}+\varepsilon _{p^{\prime }}-\varepsilon _{p})\biggr](f_{% \mathbf{p}^{\prime }}-f_{\mathbf{p}}) \nonumber \\ &&-\Delta K\sum\limits_{\mathbf{p}^{\prime }}\frac{T_{ph}}{2}\delta ^{\prime \prime }(\varepsilon _{p^{\prime }}-\varepsilon _{p})(f_{\mathbf{p}^{\prime }}-f_{\mathbf{p}})-\frac{1}{2}\delta ^{\prime }(\varepsilon _{p^{\prime }}-\varepsilon _{p})(f_{\mathbf{p}^{\prime }}+f_{\mathbf{p}}).~~~~~ \label{8} \end{eqnarray}% Here we have neglected weak ($\propto \Delta K$) contributions to the tunneling transitions. The kernels in Eq. (5) appear to be momentum independent \begin{eqnarray} K_{ac} &\approx &\frac{4\pi }{\hbar }\sum\limits_{q_{\bot }}|C_{q_{\bot }}^{(ac)}|^{2}\Psi _{q_{\bot }d}\frac{T_{ph}}{\hbar \omega _{q_{\bot }}} \nonumber \\ \Delta K &\approx &\frac{4\pi }{\hbar }\sum\limits_{q_{\bot }}|C_{q_{\bot }}^{(ac)}|^{2}\Psi _{q_{\bot }d}\hbar \omega _{q_{\bot }} \label{9} \end{eqnarray}% due to the narrow QW approximation. The intra- and inter-well scattering caused by the static disorder can be described in a similar way to the elastic ($\propto K_{ac}$) contributions in Eq. 
(8): \begin{equation} J_{d}(f|\mathbf{p})=K_{d}\sum\limits_{\mathbf{p}^{\prime }}\biggr[\delta (\varepsilon _{p^{\prime }}-\varepsilon _{p})~+\sum\limits_{\mathbf{p}% ^{\prime }\Delta r=\pm 1}\left( \frac{T}{\varepsilon _{B}}\right) ^{2}\delta (\Delta r\varepsilon _{B}+\varepsilon _{p^{\prime }}-\varepsilon _{p})% \biggr] (f_{\mathbf{p}^{\prime }}-f_{\mathbf{p}}) \label{10} \end{equation}% Factors $K_{d}$ and $K_{ac}$ determine the departure relaxation rates caused by the elastic scattering mechanisms as $\nu _{d,ac}=K_{d,ac}\sum_{\mathbf{p}% ^{\prime }}\delta (\varepsilon _{p^{\prime }}-\varepsilon _{p})\propto \rho _{2D}$, where $\rho _{2D}$ is the 2D density of states. \subsection{Nonequilibrium distribution} We search for the solution of Eq. (1) in the form $f_{\mathbf{p}}\simeq f_{\varepsilon }+\Delta f_{\mathbf{p}}$, where $f_{\varepsilon }$ describes the lateral heating due to tunneling current and $\Delta f_{\mathbf{p}}$ is the in-plane anisotropic addendum due to the weak field $\mathbf{E}$. We consider the symmetric part of the distribution which is governed by the kinetic equation $\sum_{k}J_{k}(f|\varepsilon )=0$ and satisfies the normalization condition \begin{equation} nZ=\rho _{2D}\int_{0}^{\infty }d\varepsilon f_{\varepsilon } \label{11} \end{equation}% with the layer concentration $nZ$. Averaging Eq. (7) over $\mathbf{p}$-plane and taking into account the energy conservation condition, one obtains the LO-contribution as a finite-difference form: \begin{equation} J_{LO}(f|\varepsilon )=\nu _{\varepsilon +\hbar \omega _{0},\varepsilon }f_{\varepsilon +\hbar \omega _{0}}-\nu _{\varepsilon -\hbar \omega _{0},\varepsilon }f_{\varepsilon }~+\sum\limits_{\Delta r=\pm 1}[\nu _{\varepsilon +\hbar \omega _{0}-\Delta r\varepsilon _{B},\varepsilon }^{t}f_{\varepsilon +\hbar \omega _{0}-\Delta r\varepsilon_{B}}-\nu_{\varepsilon -\hbar \omega _{0}-\Delta r\varepsilon _{B},\varepsilon }^{t}f_{\varepsilon }]. \label{12} \end{equation}% Here the tunneling contributions $\nu _{E,\varepsilon }^{t}=(T/\varepsilon _{B})^{2}\nu _{E,\varepsilon }$ are reduced by the factor $(T/\varepsilon _{B})^{2}$. We introduce the relaxation rate describing the spontaneous emission of LO-phonons as follows \begin{equation} \nu _{E,\varepsilon }=\frac{2\pi }{\hbar }\sum\limits_{\mathbf{p}^{\prime }q_{\bot }}|C_{Q}^{(LO)}|^{2}\Psi _{q_{\bot }d}\delta (\varepsilon _{p^{\prime }}-E). \label{13} \end{equation}% Performing the integration over $\mathbf{p}^{\prime }$-plane one obtains \begin{equation} \nu _{E,\varepsilon }=\theta (E)\alpha \omega _{o}\int\limits_{-\infty }^{\infty }dx\Psi _{x}\sqrt{\frac{\varepsilon _{d}\hbar \omega _{o}}{% (\varepsilon _{d}x^{2}+\varepsilon +E)^{2}-4\varepsilon E}}, \label{14} \end{equation}% where $\alpha $ is the polaron coupling constant and $\varepsilon _{d}=(\hbar /d)^{2}/2m$. This rate is of the order of $\alpha \omega _{o}$. Fig. 2 shows the dimensionless relaxation rate $\nu _{E,\varepsilon }/\alpha \omega _{0}$ versus $\varepsilon /\hbar \omega _{o}$, plotted in the passive region for different $E/\hbar \omega _{o}$\ values and \ for a 60 \AA\ wide QW, when $\varepsilon _{d}/\hbar \omega _{o}\simeq $0.44. Note, that $\nu _{E,\varepsilon }$ appears to be logarithmically divergent if $\ E\rightarrow \varepsilon $. 
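The energy dependence shown in Fig. 2 can be estimated by direct numerical quadrature of Eq. (14). The sketch below (Python/SciPy) uses $\varepsilon _{d}/\hbar \omega _{0}=0.44$ and, purely for illustration, a Gaussian transverse form factor $\Psi _{x}=e^{-x^{2}}$ instead of the actual overlap $|(0|e^{iq_{\bot }z}|0)|^{2}$; it is a rough sketch of the integration, not the calculation used for the figure.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

eps_d = 0.44                       # epsilon_d / (hbar*omega_0) for a 60 A wide QW
Psi = lambda x: np.exp(-x ** 2)    # illustrative transverse form factor (assumption)

def nu_over_alpha_omega0(eps, E):
    """Dimensionless rate of Eq. (14), nu_{E,eps}/(alpha*omega_0);
    the energies eps and E are measured in units of hbar*omega_0."""
    if E <= 0.0:
        return 0.0                 # theta(E) factor
    def integrand(x):
        denom = (eps_d * x ** 2 + eps + E) ** 2 - 4.0 * eps * E
        return Psi(x) * np.sqrt(eps_d / denom)
    val, _ = quad(integrand, -np.inf, np.inf)
    return val

# The rate stays of the order of alpha*omega_0 and grows as eps approaches E,
# reflecting the logarithmic divergence at E -> eps noted above.
for eps in (0.1, 0.5, 0.9):
    print(eps, nu_over_alpha_omega0(eps, E=0.4))
\end{verbatim}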
\begin{figure}[tbp] \begin{center} \includegraphics{condBSL2.eps} \end{center} \par \addvspace{-4 cm} \caption{(Color online) Dimensionless relaxation rate $\protect\nu _{E,% \protect\varepsilon }/\protect\alpha \protect\omega _{0}$ versus $\protect% \varepsilon /\hbar \protect\omega _{0}$ plotted in the passive region for $% E/\hbar \protect\omega _{0}=$ 0, 0.2, 0.4, 0.6, 0.8 and 1. } \end{figure} The intrawell process of quasi-elastic energy relaxation is described by the Fokker-Planck collision integral \cite{12} given by \begin{equation} J_{ac}(f|\varepsilon )\approx \nu _{ac}\bar{\varepsilon}^{2}\frac{d}{% d\varepsilon }\left( \frac{df_{\varepsilon }}{d\varepsilon }+\frac{% f_{\varepsilon }}{T_{ph}}\right) , \label{15} \end{equation}% where $\nu _{ac}$ is the above-introduced departure relaxation rate and $% \bar{\varepsilon}^{2}\simeq (\Delta K/K_{ac})T_{ph}/2$. The elastic tunneling relaxation caused by disorder and acoustic phonons [see \ Eqs. (8) and (10)] is governed by the finite-difference contribution \begin{equation} J_{t}(f|\varepsilon )\simeq \nu _{t}\sum\limits_{\Delta r=\pm 1}\theta (\varepsilon -\Delta r\varepsilon _{B})(f_{\varepsilon -\Delta r\varepsilon _{B}}-f_{\varepsilon }) \label{16} \end{equation}% with the tunneling rate $\nu _{t}=(T/\varepsilon _{B})^{2}(\nu _{d}+\nu _{ac})$. Thus, the distribution $f_{\varepsilon }$ is governed by the equation \begin{equation} J_{ac}(f|\varepsilon )+J_{t}(f|\varepsilon )+J_{LO}(f|\varepsilon )=0. \label{17} \end{equation}% Moreover, in the active region, $\varepsilon >\hbar \omega _{o}$, the main contribution is due to the spontaneous emission of LO-phonons [first and second terms of Eq. (12)]. In the active region, $\varepsilon >\hbar \omega _{0}$, the distribution decreases fast. Thus, in the kinetic equation, one has to take into account the second derivative from Eq. (15) and the spontaneous emission contribution form Eq. (12): \begin{equation} \nu _{ac}\bar{\varepsilon}^{2}\frac{d^{2}f_{\varepsilon }}{d\varepsilon ^{2}}% -\nu _{LO}f_{\varepsilon }=0, \label{18} \end{equation}% where $\nu _{LO}=\nu _{0,\hbar\omega _{0}}$. Using the boundary condition $% f_{\varepsilon \rightarrow \infty }=0$ one obtains the solution $% f_{\varepsilon }\approx f_{\hbar \omega _{0}}\exp [-\lambda _{0}(\varepsilon -\hbar \omega _{0})]$ with $\lambda _{0}=\sqrt{\nu _{LO}/\nu _{ac}}\bar{% \varepsilon}^{-1}$. Next, eliminating the fast spontaneous emission of LO-phonons one obtains the Eq. (17) in the passive region with the boundary condition \begin{equation} \left( {\frac{d}{{d\varepsilon }}+\lambda _{0}}\right) f_{\varepsilon \rightarrow \hbar \omega _{0}-0}=0. \label{19} \end{equation}% Thus, the problem formulated in the passive region takes into account the quasi-elastic energy relaxation described by Eq. (15) and the interwell tunneling transitions shown in Figs. 1b-d. The normalization condition should be restricted over the passive region and Eq. (11) is transformed into $nZ=\rho _{2D}\int_{0}^{\hbar \omega _{0}}d\varepsilon f_{\varepsilon }$% . \subsection{Lateral conductivity} Further, we turn to the description of the linear response given by $\Delta f_{\mathbf{p}}\propto \mathbf{E}$ and consider the current density \begin{equation} \mathbf{I}=\frac{2e}{Zm}\int \frac{d\mathbf{p}}{(2\pi \hbar )^{2}}\mathbf{p}% \Delta f_{\mathbf{p}}. 
\label{20} \end{equation}% The nonsymmetric part of the distribution function $\Delta f_{\mathbf{p}}$ is determined from the linearized kinetic equation \begin{equation} e\mathbf{E}\cdot \frac{\partial f_{\varepsilon }}{\partial \mathbf{p}}% =\sum\limits_{k}J_{k}(\Delta f|\mathbf{p}) \label{21} \end{equation}% where the elastic collision integrals due to $ac$- and $d$-contributions can be replaced by $-\nu _{m}\Delta f_{\mathbf{p}}$. Here $\nu _{m}=\nu _{d}+\nu _{ac}$ is the momentum relaxation rate due to the elastic scattering [see Eqs. (8) and (10)]. The non-elastic momentum relaxation due to the optical-phonon-induced interwell transitions, $J_{LO}(\Delta f|\mathbf{p})$, is given by the $\propto (T/\varepsilon _{B})^{2}$ contribution of Eq. (7). Introducing the energy-dependent function $\chi _{\varepsilon }$ according to $\Delta f_{\mathbf{p}}=(e/m)(\mathbf{E}\cdot \mathbf{p})\chi _{\varepsilon }$, we transform Eq. (21) into the finite-difference equation \begin{equation} \frac{df_{\varepsilon }}{d\varepsilon }=-\nu _{m}\chi _{\varepsilon }+\sum\limits_{\Delta r=\pm 1}[\widetilde{\nu }_{\varepsilon +\hbar \omega _{0}-\Delta r\varepsilon _{B},\varepsilon }^{t}\chi _{\varepsilon +\hbar \omega _{0}-\Delta r\varepsilon _{B}}-\nu _{\varepsilon -\hbar \omega _{0}-\Delta r\varepsilon _{B},\varepsilon }^{t}\chi _{\varepsilon }]. \label{22} \end{equation}% Here $\widetilde{\nu }_{E,\varepsilon }^{t}=(T/\varepsilon _{B})^{2}% \widetilde{\nu }_{E,\varepsilon }$ is the tunneling-induced relaxation rate where \begin{equation} \widetilde{\nu }_{E,\varepsilon }=\frac{2\pi }{\hbar }\sum\limits_{\mathbf{p}% ^{\prime }q_{\bot }}|C_{Q}^{(LO)}|^{2}\Psi _{q_{\bot }d}\cos (\widehat{% \mathbf{p},\mathbf{p}^{\prime }})\delta (\varepsilon -\varepsilon _{p^{\prime }}-E) \label{23} \end{equation}% which uses a similar notation to Eq. (13). Introducing the in-plane conductivity, $\sigma $, according to $\mathbf{I}% =\sigma \mathbf{E}$ we obtain \begin{equation} \sigma =\frac{e^{2}\rho _{2D}}{mZ}\int\limits_{0}^{\hbar \omega _{0}}d\varepsilon \varepsilon \chi _{\varepsilon }. \label{24} \end{equation}% Here we neglect the contribution from $\varepsilon >\hbar \omega _{0}$ because of the smallness of $\chi _{\varepsilon }\simeq \nu _{LO}^{-1}(-df_{\varepsilon }/d\varepsilon )$ in the active region. Under the condition $\nu _{m}\gg \nu _{LO}^{t}$ Eq. (22) gives $\chi _{\varepsilon }\simeq \nu _{m}^{-1}(-df_{\varepsilon }/d\varepsilon )$ and the conductivity takes the form \begin{equation} \sigma =\sigma _{0}\left( 1-\frac{\rho _{2D}\hbar \omega _{0}}{n_{2D}}% f_{\hbar \omega _{0}}\right) ,~~~~~~\sigma _{0}=\frac{e^{2}n_{2D}}{m\nu _{m}Z% }. \label{25} \end{equation}% As a result, $\sigma /\sigma _{0}$ is expressed through the distribution function at $\varepsilon =\hbar \omega _{0}$ and a negative lateral conductivity takes place at $f_{\hbar \omega _{0}}>n_{2D}/\rho _{2D}\hbar \omega _{0}$. \section{Second order Bloch-phonon resonance} Before analyzing the general problem, we consider the simple resonant case $% 2\varepsilon _{B}=\hbar \omega _{0}$, when the distribution can be considered over two intervals: $0<\varepsilon <\hbar \omega _{0}/2$ and $% \hbar \omega _{0}/2<\varepsilon <\hbar \omega _{0}$. Introducing the functions over the interval $0<\varepsilon <\hbar \omega _{0}/2$ according to $f_{1\varepsilon }=f_{\varepsilon }$ and $f_{2\varepsilon }=f_{\varepsilon +\hbar \omega _{0}}$, we transform Eq. (17) [or Eqs. 
(A1) and (A3) in Appendix] into the system \begin{eqnarray} J_{ac}(f_{1}|\varepsilon )+\nu _{t}\left( 2f_{2\varepsilon }-f_{1\varepsilon }\right) +\nu _{\varepsilon +\hbar \omega _{0}/2,\varepsilon }^{t}f_{2\varepsilon } = 0 \label{26} \\ J_{ac}(f_{2}|\varepsilon )+\nu _{t}\left( f_{1\varepsilon }-2f_{2\varepsilon }\right) -\nu _{\varepsilon ,\varepsilon +\hbar \omega _{0}/2}^{t}f_{2\varepsilon } = 0 \nonumber \end{eqnarray}% Here $\nu _{t}\simeq (T/\varepsilon _{B})^{2}\nu _{m}$ is the elastic tunneling rate and $\nu _{E,\varepsilon }^{t}$ was introduced in Eq. (12). The two second-order differential equations (26) should be solved with the boundary condition of Eq. (19), the normalization condition $nZ=\rho _{2D}\int_{0}^{\hbar \omega _{0}/2}d\varepsilon (f_{1\varepsilon }+f_{2\varepsilon })$, as well as the inhomogeneous conditions $f_{1\hbar \omega _{0}/2}=f_{2\varepsilon =0}$ and $(df_{1\varepsilon }/d\varepsilon )_{\hbar \omega _{0}/2}=(df_{2\varepsilon }/d\varepsilon )_{0}$. \begin{figure}[tbp] \begin{center} \includegraphics{condBSL3n.eps} \end{center} \par \addvspace{-4 cm} \caption{(Color online) Distribution function $f_{\protect\varepsilon }$ vs in-plane kinetic energy obtained from the system (26) at $T_{ph}=$4.2 K and 20 K (solid and dashed curves) for (a) $T$=5 meV and $\protect\nu _{m}$=1 ps$% ^{-1}$, (b) $T$=5 meV and $\protect\nu _{m}$=0.5 ps$^{-1}$, (c) $T$=3.5 meV and $\protect\nu _{m}$=1 ps$^{-1}$, and (d) $T$=3.5 meV and $\protect\nu % _{m} $=0.5 ps$^{-1}$. } \end{figure} Fig. 3 shows the distribution function $f_{\varepsilon }$ versus in-plane kinetic energy obtained from the system (26) for different $T_{ph}$, $T$, and $\nu _{m}$. As can be seen, the peaks of $f_{\varepsilon }$ widen as temperature increases, reducing their maxima. The effect of the interwell coupling $\propto T$ and of the elastic scattering $\propto \nu _{m}$ is also evident: the peaks increase as these parameters do. Calculations have been made for a concentration $n_{2D}=10^{9}$ cm$^{-2}$ (or $n=10^{15}$ cm$^{-3}$ if $Z=$100 \AA ), so that $f_{1\varepsilon }\leq 0.1$ and electrons are non-degenerate. \cite{13} For this value, $n_{2D}/\rho _{2D}\hbar \omega _{0}\sim 10^{-3}$. The peaks located at $\hbar \omega _{0}/2$ and $\hbar \omega _{0}$ are of the order of 10$^{-2}$ and 10$^{-3}$, respectively. Therefore, the latter value is large enough to obtain ANC, since $f_{\hbar \omega _{0}}>n_{2D}/\rho _{2D}\hbar \omega _{0}$ according to Eq. (25). In order to magnify the peaks at $\varepsilon /\hbar \omega _{0}$=0.5 and 1, we have limited the vertical axis range. \begin{figure}[tbp] \begin{center} \includegraphics{condBSL4n.eps} \end{center} \par \addvspace{-4 cm} \caption{(Color online) Normalized conductivity $\protect\sigma /\protect% \sigma _{o}$, given by Eq. (25), vs momentum relaxation rate $\protect\nu % _{m}$, for different temperatures and tunneling coupling values ($T_{ph}$ and $T$). Solid line: (4.2 K and 5 meV). Dashed line: (20 K and 5 meV). Dotted line: (4.2 K and 3.5 meV). Dash-dotted line: (20 K and 3.5 meV). } \end{figure} Fig. 4 shows the normalized lateral conductivity $\sigma /\sigma _{0}$ given by Eq. (25) vs momentum relaxation rate $\nu _{m}$, for temperatures $% T_{ph}= $4.2 K and 20 K, and tunneling coupling $T=$ 5 and 3.5 meV. Because the distribution function peak at $\hbar \omega _{0}$ increases as $T$ and $\nu _{m}$ do (see Fig. 3), the lateral conductivity decreases correspondingly, leading to negative values for a wide region of parameters. As we saw in Fig. 
3, the effect of the temperature is opposite to the previous ones. For high temperatures $\sigma $ increases to reach $\sigma _{0}$ and the possibility of having ANC disappears. \section{Numerical results} We turn now to the numerical calculations of the nonequilibrium distribution $f_{\varepsilon }$ governed by Eq. (17), with the boundary condition defined by Eq. (19), and the normalization requirement [the explicit forms of Eq. (17) for the cases $\varepsilon _{B}<\hbar \omega _{0}/2$, $\hbar \omega _{0}/2<\varepsilon _{B}<\hbar \omega _{0}$, and $\hbar \omega _{0}<\varepsilon _{B}$ are given in the Appendix]. We also analyze the lateral conductivity by solving the finite-difference Eq. (22) [see explicit expressions (A2), (A4), and (A6)] and performing the integration in Eq. (24). The calculations below are performed for the GaAs/Al$_{0.3}$Ga$_{0.7}$% As-based SL, formed by 60 \AA\ wide QWs separated by barriers of 32 \AA\ (or 37 \AA ) width, which correspond to the tunneling matrix element $T$= 5 meV (or 3.5 meV). We consider temperatures of $T_{ph}=$4.2 K and 20 K as well as the effect of the elastic scattering variation through the momentum relaxation rate $\nu _{m}$=1 ps$^{-1}$ and 0.5 ps$^{-1}$. It is convenient to use $n_{2D}=10^{9}$ cm$^{-2}$, even though $\sigma /\sigma_0$ does not depend on the concentration. \begin{figure}[tbp] \begin{center} \includegraphics{condBSL5an.eps} \includegraphics{condBSL5bn.eps} \end{center} \par \addvspace{-4 cm} \caption{ (Color online) Distribution function $f_{\protect\varepsilon }$ vs in-plane kinetic energy for different $\protect\varepsilon _{B}/\hbar \protect\omega _{0}$ values (1/3, 1/2, 2/3, and 4/3) in GaAs/Al$_{0.3}$Ga$% _{0.7}$As-based BSL at different temperatures (a) $T_{ph}$=4.2 K, and (b) $% T_{ph}$=20 K. } \end{figure} Fig. 5 displays the distribution function $f_{\varepsilon }$ vs the in-plane kinetic energy for different $\varepsilon _{B}/\hbar \omega _{o}$ values (1/3, 1/2, 2/3, and 4/3) obtained from the general Eq. (17) (see also Appendix). When comparing the case $\varepsilon _{B}/\hbar \omega _{o}=$1/2 of Fig. 5(a,b) with panel (a) in Fig. 3, calculated in the former section for the same parameters, a good agreement is found. As mentioned before, the temperature effect is reflected in a widening and lowering of the peaks. For other $N/M$ values $f_{\varepsilon }$ shows lower relative maxima. \begin{figure}[tbp] \begin{center} \includegraphics{condBSL6n.eps} \end{center} \par \addvspace{-4 cm} \caption{Normalized lateral conductivity vs Bloch energy $\protect% \varepsilon _{B}/\hbar \protect\omega _{0}$ and for different temperature, elastic scattering and tunneling coupling ($T_{ph}$, $\protect\nu _{m}$, and $T$). $(a)$: (4.2 K, 1 ps$^{-1}$, and 5 meV). $(b)$: (20 K, 1 ps$^{-1}$, and 5 meV). $(c)$: (4.2 K, 1 ps$^{-1}$, and 3.5 meV). $(d)$: (4.2 K, 0.5 ps$% ^{-1} $, and 5 meV). } \end{figure} Next we calculate the normalized lateral conductivity by solving Eq. (22) and using the $f_{\varepsilon }$ obtained before. Fig. 6 represents $\sigma /\sigma _{0}$ as a function of the Bloch energy $\varepsilon _{B}/\hbar \omega _{0}$ and for different temperature, elastic scattering and tunneling coupling values. The general behavior shows a pronounced relative minimum located at $% \varepsilon _{B}/\hbar \omega _{0}=$1, followed by other relative minima at 1/2, 1/3, 2/3, 1/4,... (in decreasing order). In the active region, when $% N>M$, these peaks are practically negligible. Depending on the parameters, some of the peaks reach negative values. 
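The final step of this procedure, the evaluation of $\sigma /\sigma _{0}$ from a computed distribution, can be illustrated with a short Python sketch. It takes a trial distribution $f_{\varepsilon }$ on the passive region (an arbitrary stand-in used here only for illustration, not the solution of Eq. (17)), uses $\chi _{\varepsilon }\simeq \nu _{m}^{-1}(-df_{\varepsilon }/d\varepsilon )$, and checks that the integral of Eq. (24) reproduces the closed form of Eq. (25).
\begin{verbatim}
import numpy as np

hw0 = 1.0                                  # energy unit: hbar*omega_0
eps = np.linspace(0.0, hw0, 2001)

def trap(y, x):
    # simple trapezoid rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# trial distribution on the passive region (illustrative only)
f = np.exp(-3.0 * eps) \
    + 0.05 * np.exp(-((eps - 0.5 * hw0) / 0.03)**2) \
    + 5e-3 * np.exp(-((eps - hw0) / 0.03)**2)

norm = trap(f, eps)                        # n_2D / rho_2D from the normalization condition
chi_nu_m = -np.gradient(f, eps)            # nu_m * chi_eps; nu_m cancels in sigma/sigma_0

ratio_integral = trap(eps * chi_nu_m, eps) / norm   # from Eq. (24)
ratio_closed = 1.0 - hw0 * f[-1] / norm             # Eq. (25)

print(ratio_integral, ratio_closed)        # the two values agree up to discretization
# ANC (sigma < 0) occurs when f at hbar*omega_0 exceeds n_2D/(rho_2D*hbar*omega_0)
\end{verbatim}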
To clarify the effect of these parameters we can compare the panels in Fig. 6 in pairs. Thus, comparing panels (a) and (b) we can see the temperature effect, which is similar to the one found before for the distribution function: the peaks are wider and less pronounced, leading to less negative values. Comparing panels (a) and (c), the effect of the tunneling coupling becomes visible: if $T$ decreases (increasing barrier width), the depth of the peak minima is reduced. Finally, an analogous behavior is observed when comparing panels (a) and (d), which shows the elastic scattering effect. Reducing $\nu _{m}$, we obtain a similar reduction of the peaks. One can conclude that the most favorable conditions to get negative conductivity correspond to low temperature and large tunneling coupling and elastic scattering values. \begin{figure}[tbp] \begin{center} \includegraphics{condBSL7n.eps} \end{center} \par \addvspace{-4 cm} \caption{(Color online) Normalized conductivity vs Bloch energy $\protect% \varepsilon _{B}/\hbar \protect\omega _{0}$ around 1/3 (a), 1/2 (b), 2/3 (c), and 1 (d), for $T=$5 meV. Solid lines: $\protect\nu _{m}$=1 ps$^{-1}$ and $T_{ph}=$4.2 K. Dashed lines: $\protect\nu _{m}$=1 ps$^{-1}$ and $% T_{ph}= $20 K. Dotted lines: $\protect\nu _{m}$=0.5 ps$^{-1}$ and $T_{ph}=$% 4.2 K.} \end{figure} In order to detail the shape of the normalized conductivity peaks we present some of them in Fig. 7. We have chosen the most noticeable ones, corresponding to Bloch energy $\varepsilon _{B}/\hbar \omega _{0}$ around 1/3 [panel (a)], 1/2 (b), 2/3 (c), and 1 (d), for $T=$5 meV, with temperature and elastic scattering values corresponding to panels (a), (b) and (d) included in Fig. 6. A breakdown of the peak symmetry is observable as $\varepsilon _{B}/\hbar \omega _{0}$ increases, going from a fairly symmetric peak for $\varepsilon _{B}/\hbar \omega _{0}$ close to 1/3 to a clearly asymmetric peak around 1. \section{Concluding remarks} In summary, we have demonstrated that the negative lateral conductivity regime is possible in low-doped biased superlattices under the Bloch-phonon resonance conditions. When analyzing the dependence of $\sigma $ on the bias voltage, narrow negative peaks take place if $\varepsilon _{B}/\hbar \omega _{0}$ is close to the ratios 1/2, 1/3, 2/3$\ldots $. The ANC regime appears to be most pronounced in the low-temperature region in BSLs with an effective interwell coupling. Next, we list the assumptions used. The main restriction on the ANC regime is the neglect of electron-electron scattering, which would impose a Maxwell distribution with an effective temperature and thereby suppress the high-energy part of the distribution. The condition $\sigma <0$ does not depend on concentration for non-degenerate electrons. Thus, we have used in the calculations $n_{2D}<10^{10}$ cm$^{-2}$, where the electron-electron scattering is not the main scattering process (see Ref. 14, where different systems were considered). The evaluation of the limiting concentration, which requires involving the electron-electron collision integral in Eq. (17), lies beyond the scope of this paper. Another approximation we have made is rather standard. In order to estimate the coefficients of the kinetic equation, we have used a tight-binding approach for the description of the electronic states\cite{7}. The use of the bulk model for phonon dispersion and electron-phonon interaction is a reasonable approximation for the GaAs/AlGaAs-based structures\cite{15}. 
We consider the momentum relaxation due to short-range scattering, neglecting a large-scale potential; the latter contribution requires special attention, in analogy with the case of a single low-doped well. We restrict ourselves to the case of uniform bias fields and equipopulated wells, neglecting possible domain formation (one can avoid instabilities of the vertical current in a short enough BSL\cite{16}). We should also mention that the experimental task of measuring the lateral conductivity of a BSL is not simple, because a complicated contribution from the corner regions is possible. However, instead of dc current measurements, one can use a high-frequency contactless study of the response with a transverse capacitor. In addition, under the instability conditions (if $% \sigma <0$) a direct measurement of the lateral conductivity is not necessary, because the vertical current appears to be unstable. A detailed description of this unstable response requires special consideration. To conclude, a low-doped BSL at low temperature is a suitable structure for realizing the absolute negative conductivity regime. In addition, a similar behavior is possible not only for the BSL under consideration but also for the more complicated tunnel-coupled structures used in quantum cascade lasers. An instability of such a structure in the case of low doping and low temperature is possible and should be investigated further.
\section{Introduction} The problem of geometric Hermite approximation is to estimate a curve from a finite number of its samples, consisting of both points on the curve and their associated, normalized tangent vectors. This problem is fundamental in Computer Aided Geometric Design (CAGD), see, e.g.,~\cite{de1987high, meek1997geometric, xu2001geometric}, but it also appears in various other applied topics such as biomechanical engineering~\cite{Biomechanical_Engineering}, marine biology~\cite{tremblay2006interpolation}, scientific simulations~\cite{vargas2019leapfrog}, CNC machining~\cite{beudaert20115} and more. The primary challenge in geometric Hermite approximation is to incorporate the additional information, given in the form of normalized tangent vectors, for obtaining a better approximation of the sampled curve than approximation based on points only. Moreover, the fact that the curve lies in a high-dimensional space poses additional practical and theoretical difficulties. Our approach provides a unified solution that copes with the above, and generates approximation with many favorable geometric properties. Classical Hermite interpolation deals with the linear problem of interpolating a function to data consisting of its values and its derivatives' values at a finite number of points by a polynomial. In~\cite{merrien1992family}, a family of linear Hermite subdivision schemes is introduced. This work opened the door for solving Hermite-type approximation through refinement~\cite{dubuc2009hermite, dyn1995analysis, han2005noninterpolatory, jeong2017construction, merrien2012hermite}. Although all these subdivision schemes are linear, refinement is the approach taken in this paper to the nonlinear problem of geometric Hermite interpolation. Recent years have given rise to many nonlinear subdivision schemes and, in particular, to operators refining 2D and 3D geometric Hermite data, e.g.,~\cite{aihua2016new, lipovetsky2016weighted, reif2021clothoid}. The design of such subdivision operators is typically based on the reconstruction of geometric objects from a certain class. Such operators were suggested for circle reconstruction~\cite{chalmoviansky2007non, lipovetsky2016weighted, romani2010circle}, ellipse reconstruction~\cite{conti2015ellipse}, local shape-preserving~\cite{costantini2008constrained,manni2010shape} and local fitting of clothoids~\cite{reif2021clothoid}. The relative simplicity and high flexibility in the design of refinement rules allow posing advanced solutions for modern instances of the Hermite problem, e.g., for manifold values~\cite{moosmuller2016c, moosmuller2017hermite}. The latter is an example of the agility of subdivision schemes, perhaps best illustrated when considering the variety of methods that serve as approximation operators over diverse nonlinear settings. Note that although the Euclidean space is linear, the problem of geometric Hermite approximation for data sampled from a curve in an Euclidean space is nonlinear. This is due to the fact that the space of pairs of point-normalized tangent (for short, point-ntangent) is not linear, as it is a subset of $\mathbb{R}^n \times S^{n-1}$. In our approach, we interpret the refinement rules of a subdivision scheme as a method of averaging, as in~\cite{dyn2011convergence, dyn2017manifold, schaefer2008nonlinear}. Thus, generating curves in a nonlinear environment boils down to properly defining and understanding the averaging in the particular space. 
While most of the work in this direction refers to non-Hermite problems, some recent papers such as~\cite{lipovetsky2021subdivision, lipovetsky2016weighted, reif2021clothoid} introduce different techniques for averaging two point-normal pairs in the plane. Therefore, as a first step, we formulate the notion of Hermite average in $\mathbb{R}^{n}$, which encapsulates the concept of averaging two point-ntangent pairs sampled from a curve in $\mathbb{R}^{n}$ for any $n\geq2$. We then propose a nonlinear average in $\mathbb{R}^{n}$ that is based on B\'{e}zier curves and satisfies the requirements of the Hermite average. Determining such a mean is crucial for constructing our subdivision operators. Our paper is an example of how the averaging approach can serve as a fundamental bridge for obtaining approximation operators in general metric spaces, see also, e.g.,~\cite{kels2013subdivision}. The construction of our approximation operators begins with the study of our Hermite average, termed hereafter the B\'{e}zier average. We show that this average satisfies several geometric properties such as invariance under similarity transformations and preservation of lines and circles. Then, we form a two-point interpolatory subdivision scheme by repeatedly applying the B\'{e}zier average as an insertion rule. This subdivision scheme serves as an illustrative example, which demonstrates the benefits of using our approach for constructing approximation operators based on refinement. For example, the geometric subdivision scheme we form inherits the geometric properties of the B\'{e}zier average; it is invariant under similarity transformations, and it reconstructs lines and circles. Furthermore, our general approach of using an appropriate average enables us to obtain a large class of nonlinear subdivision schemes refining geometric Hermite data, by replacing the linear average in the Lane--Riesenfeld subdivision schemes~\cite{lane1980theoretical} by the B\'{e}zier average. As part of the analysis, we prove the convergence of our interpolatory subdivision scheme, namely that it refines geometric Hermite data and its limit is a smooth curve that interpolates both the data points and their normalized tangent vectors. Moreover, the limit normalized tangent vectors are tangent to the limit curve. The proof of convergence of our non-linear subdivision scheme uses an auxiliary result which we validate with computer-aided evidence, combining analysis of multivariate functions and an exhaustive search done by a computer program. The details appear in Appendix A and as complementary software code. One additional advantage of our method is that the new schemes based on the B\'{e}zier average allow us to use a more flexible sampling strategy than typically assumed when applying subdivision schemes to approximate curves from Hermite data~\cite{floater2006parameterization}. We demonstrate this property numerically, emphasizing the approximation capabilities and how it assists in avoiding geometric artifacts, see Section~\ref{sec:examples}. In addition, we present numerical evidence of fourth-order approximation of curves, which is higher than what current theory guarantees for non-uniform sampling~\cite{floater2006parameterization}. This fourth-order approximation of curves generalizes similar results in classical Hermite interpolation of functions. We also compare our schemes numerically with other known methods and illustrate the superiority of techniques based on the B\'{e}zier average. 
The entire set of examples is available for reproducibility as open Python code in~\url{https://github.com/HofitVardi/Hermite-Interpolation}. The paper is organized as follows. Section~\ref{sec:HermiteAveraging} presents the precise statement of the fundamental approximation problem we solve and defines the notion of Hermite averaging. Then, we introduce our B\'{e}zier average and prove that it is a Hermite average. The section is concluded with several geometric properties of this new Hermite average. Section~\ref{sec: subdivision} shows how to form Hermite subdivision schemes based on the B\'{e}zier average. There we offer additional properties of the B\'{e}zier average and use them to prove the convergence of our interpolatory Hermite subdivision scheme. In Section~\ref{sec:examples} we explore our solution to the problem of geometric Hermite interpolation numerically. The numerical examples demonstrate the performance of our Hermite subdivision schemes compared to other approximants. In the last section, conclusions and future work are discussed. This paper is accompanied by three appendices that include the computer-aided evidence mentioned above, proofs of some claims from the main sections, and a further discussion on the B\'{e}zier average. \section{Averaging in the Hermite setting} \label{sec:HermiteAveraging} It is important to place the motivation behind the following discussion in the area of Hermite approximation. We thus begin with some essential notation and definitions, leading to the formulation of the problem of geometric Hermite interpolation. In the following subsections we state a few lemmas and a proposition. The proofs of these results are given in Appendix~\ref{proofs}. \subsection{Notation and definitions} Geometric Hermite data is a sequence \begin{equation} \label{eqn:HermiteData} \left( (p_{j}, v_{j}) \right)_{j \in J}\subseteq\mathbb{R}^{n}\times S^{n - 1} , \end{equation} where $\mathbb{R}^{n}$ is the Euclidean $n$-dimensional space equipped with the Euclidean metric, $ S^{n - 1} \subset \mathbb{R}^{n} $ is the $(n-1)$-sphere equipped with the angular metric, and $J=\{0,1,\ldots,N\}$ where $N \in \mathbb{N}$. We view such a sequence as samples of a $G^1$ curve, consisting of points and tangent directions. Unless otherwise stated, we denote by $ d\left(\cdot ,\cdot\right)$ and $ g\left(\cdot ,\cdot\right)$ the Euclidean and angular metrics on $\mathbb{R}^{n}$ and $ S^{n - 1}$, respectively. Moreover, $\left\langle \cdot ,\cdot\right\rangle$ and $\left\Vert \cdot\right\Vert$ refer to the Euclidean inner product and norm in $\mathbb{R}^{n}$. Let $\begin{pmatrix} p_0\\v_0\\ \end{pmatrix},\begin{pmatrix} p_1\\v_1\\ \end{pmatrix}\in\mathbb{R}^n\times S^{n-1}$ s.t. $p_0\neq p_1$. 
Consider the following functions of $\left(\begin{pmatrix} p_0\\v_0\\ \end{pmatrix},\begin{pmatrix} p_1\\v_1\\ \end{pmatrix}\right)$: The normalized difference vector between $p_0$ and $p_1$ is \begin{equation} \label{u definition} u\left(\begin{pmatrix} p_0\\v_0\\ \end{pmatrix},\begin{pmatrix} p_1\\v_1\\ \end{pmatrix}\right):=\frac{p_1-p_0}{\norm{p_1-p_0}}\in S^{n-1}, \end{equation} The distance between $v_0$ and $v_{1}$ in terms of their angular distance is \begin{equation} \label{theta definition} \theta\left(\begin{pmatrix} p_0\\v_0\\ \end{pmatrix},\begin{pmatrix} p_1\\v_1\\ \end{pmatrix}\right):=g\left(v_0,v_1\right)=\arccos\left(\left\langle v_0,v_1\right\rangle\right)\in [0,\pi], \end{equation} The deviations of $v_0$ and $v_1$ from $u$ (in terms of their angular distance) are \begin{equation} \label{theta j definition} \theta_{j}\left(\begin{pmatrix} p_0\\v_0\\ \end{pmatrix},\begin{pmatrix} p_1\\v_1\\ \end{pmatrix}\right):=g\left(v_j,u\right)= \arccos\left(\left\langle v_j,u\right\rangle\right)\in [0,\pi],\ \ j=0,1. \end{equation} The $L_2$ norm of $\begin{pmatrix} \theta_{0}, \theta_{1} \end{pmatrix}$, regarded as a vector in $\mathbb{R}^{2}$, is \begin{equation} \label{sigma definition} \sigma\left(\begin{pmatrix} p_0\\v_0\\ \end{pmatrix},\begin{pmatrix} p_1\\v_1\\ \end{pmatrix}\right):=\sqrt{\theta_0^2+\theta_1^2}\in [0,\sqrt{2}\pi]. \end{equation} The last quantity was first used in~\cite{reif2021clothoid} in the convergence analysis of a Hermite subdivision scheme. When there is no possibility of ambiguity, we omit the notation of the variables and simply write $u,\theta,\theta_0,\theta_1$ and $\sigma$. We conclude with the definition of the problem we solve. \begin{definition} \label{def: GHI} The problem of geometric Hermite interpolation is to find a $G^{1}$ curve $\widetilde{\gamma} \colon \mathbb{R} \to \mathbb{R}^{n}$ which interpolates the geometric Hermite data $\left( (p_{j},v_j) \right)_{j=1}^N \subseteq\mathbb{R}^{n}\times S^{n - 1}$, see~\eqref{eqn:HermiteData}, where $N\in \mathbb{N}$. We assume the data is sampled from a $G^{1}$ curve $\gamma \colon \mathbb{R} \to \mathbb{R}^{n}$, \begin{equation} \label{eqn:sampled_hermite_data} \gamma(t_j)=p_j , \quad T(\gamma,t_j)=v_j , \quad j = 1,\ldots,N , \end{equation} with $t_j < t_{j+1}$ for all $1 \le j \le N-1$. Here $T$ operates on $G^1$ curves and returns the ntangent to a curve at a specified parameter value. The curve $\widetilde{\gamma}$ should satisfy the interpolation conditions, \begin{equation} \label{interpolation} \widetilde{\gamma}(\rho({\widetilde{t}}_j))= p_j \quad\text{and}\quad T(\widetilde{\gamma},\rho(\widetilde{t}_j))=v_j , \quad j = 1,\ldots,N , \end{equation} where $\rho$ is a parametrization and ${\widetilde{t}}_j < {\widetilde{t}}_{j+1}$ for all $1 \le j \le N-1$. \end{definition} \subsection{Hermite Average} We address the problem of geometric Hermite interpolation through a subdivision process which is defined via an average over $\mathbb{R}^{n}\times S^{n - 1}$. We call this average a Hermite average, define it, and later propose a method to construct such a mean. The new definition illustrates some of the significant differences between the classical and geometric Hermite settings. The classical concept of average is perhaps best demonstrated by the case of positive real numbers, see, e.g.,~\cite{itai2013subdivision}. 
An average is a bivariate function with a weight parameter \[ \operatorname{av} \colon [0,1] \times \left( (0,\infty) \times (0,\infty) \right) \to (0,\infty) , \] For brevity, we use the form $\av{\omega}{x}{y}$ where $\omega\in[0,1]$ is the weight. It is worth noting that in some cases it is essential to extend the weight parameter beyond the $[0,1]$ segment, see e.g.,~\cite{dyn2017global}. The popular value $\omega = 0.5$ leads to the well-known average expressions such as $\frac{x+y}{2}$ or $\sqrt{xy}$, that correspond to the linear $(1-\omega)x+\omega y$ and geometric $x^{1-\omega} y^\omega $ averages. Many other common families of averages exist, for example the $p$-averages $\left( (1-\omega)x^p+\omega y^p \right)^{\frac{1}{p}}$, which include the above linear and geometric means as special cases. This example is itself a special case of a wider family of averages of the form $F^{-1}\left( (1-\omega)F(x)+\omega F(y) \right)$, when $F$ is an appropriate invertible function. The $p$-averages family comes up when using $F(x) = x^p$. For more details, see~\cite{dyn2011convergence}. As we generalize the concept of averaging to more intricate settings, we wish to follow the averages' fundamental properties. Namely, the basic properties satisfied by the above bivariate function $\operatorname{av}$, \begin{enumerate} \item Identity on the diagonal: $\av{\omega}{x}{x}=x$. \item Symmetry: $\av{\omega}{x}{y}=\av{1-\omega}{y}{x}$. \item End points interpolation: $\av{0}{x}{y}=x$ and $\av{1}{x}{y}=y$. \item Boundedness: $\min\{x,y\}\leq \av{\omega}{x}{y} \leq\max\{x,y\}$. \end{enumerate} Note that the last property, unlike the first three, requires ordering (partial ordering). This harsh requirement can be relaxed by using a metric and modifying the last property to ensure metric-related boundedness. Considering the classical case and the absence of a ``natural'' metric in the Hermite setting, we define a Hermite average by adjusting the first three properties listed above. Recall that a pair of elements in our domain is viewed as two samples of a regular, differentiable curve. Thus, there is a native hierarchy associated with the direction of the curve. Under this interpretation, it makes sense to consider an average that is sensitive to orientation. It also calls for a different understanding of the diagonal since an element $\left(p,v\right)\in \mathbb{R}^n\times S^{n-1}$ cannot represent two distinct, yet close enough samples of a regular curve. Therefore, we refer to the diagonal utilizing the notion of limit. The limit we consider is the most fundamental approximation as one approaches a point on a curve. Finally, we treat the symmetry as reversing the curve and so the orientation of the points. Next is the formal definition. 
\begin{definition} \label{Hermite average} We term $H$ a Hermite average over $ X \subseteq \left(\mathbb{R}^{n}\times S^{n-1}\right)^2$ if $H:[0,1]\times X \to \left(\mathbb{R}^{n}\times S^{n-1}\right)$ and the following holds for any $ \omega\in [0,1]$ and $\left(\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix} \right)\in X$, \begin{enumerate} \item Identity on the ``limit diagonal": \begin{equation*} H_{\omega }\left(\begin{pmatrix} p \\ v \\ \end{pmatrix},\begin{pmatrix} p+tv \\ v \\ \end{pmatrix}\right)\xrightarrow{t\longrightarrow 0^{ + }}\begin{pmatrix} p\\ v\\ \end{pmatrix} .\end{equation*} \item Symmetry with respect to orientation: \[ \text{If} \,\, H_{\omega }\left( \begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix} \right) =\begin{pmatrix} p \\ v \\ \end{pmatrix} \, \text{then} \, \, H_{1 - \omega }\left( \begin{pmatrix} p_{1} \\ - v_{1} \\ \end{pmatrix},\begin{pmatrix} p_{0} \\ - v_{0} \\ \end{pmatrix} \right) =\begin{pmatrix} p \\ - v \\ \end{pmatrix}. \] \item End points interpolation: \[ H_{j}\left(\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}\right) =\begin{pmatrix} p_{j} \\ v_{j} \\ \end{pmatrix}, \quad j=0,1.\] \end{enumerate} \end{definition} \begin{remark} \label{metric} In the presence of an appropriate metric $\tilde{d}$, the classical boundedness property can emerge as \[ \tilde{d}\left(\begin{pmatrix} p_{j} \\ v_{j} \\ \end{pmatrix},\begin{pmatrix} p_{\omega } \\ v_{\omega } \\ \end{pmatrix}\right)\leq \tilde{d}\left(\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix}, \begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}\right), \quad j=0,1, \] where $\begin{pmatrix} p_{\omega } \\ v_{\omega } \\ \end{pmatrix}=H_\omega\left(\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}\right).$ \end{remark} \subsection{B\'{e}zier Average} In this section we introduce the ``B\'{e}zier average". While the definition considers a weighted average for any weight $ \omega \in\left[0,1\right]$, we mainly focus on the special case $ \omega = \frac{1}{2}$, which is required for our refinements. We start our construction with an \textit{admissible} pair $\begin{pmatrix} \begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix} \end{pmatrix}$ that satisfies the following two conditions \begin{equation} \label{eqn:conditions_on_pts} \begin{split} p_0 &\neq p_1, \\ v_0=v_1=u \text{ or } v_0,v_1,u &\text{ are pair-wise linearly independent,} \end{split} \end{equation} where $u$ is defined in \eqref{u definition}.\\ The full reasoning behind the conditions of~\eqref{eqn:conditions_on_pts} is revealed in the following discussion. Our next step is to generate the B\'{e}zier curve, which we denote by $ b\left(t\right) $, based on the following control points: \begin{equation} \label{control points} p_{0}, p_{0} + \alpha v_{0}, p_{1} - \alpha v_{1}, p_{1} . \end{equation} Here, \begin{equation} \label{eqn:alpha} \alpha:=\frac{d\left(p_{0},p_{1}\right)}{3\cos^{2}\left(\frac{\theta_{0} + \theta_{1}}{4}\right)}, \end{equation} and $\theta_0,\theta_1$ are as defined in~\eqref{theta j definition}. Note that the second condition of~\eqref{eqn:conditions_on_pts} guarantees that~\eqref{eqn:alpha} is well-defined since $\theta_0 + \theta_1<2\pi$. 
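To make the construction concrete, the following short Python sketch (a minimal illustration, independent of the accompanying repository) assembles $\alpha$ of~\eqref{eqn:alpha} and the control points~\eqref{control points} from an admissible pair. Applied to two point-ntangent samples of a quarter of the unit circle, it recovers the familiar offset $\alpha\approx 0.5523$ used when approximating a quarter circle by a cubic B\'{e}zier curve.
\begin{verbatim}
import numpy as np

def control_points(p0, v0, p1, v1):
    """The parameter alpha and the four Bezier control points of the construction."""
    p0, v0, p1, v1 = map(np.asarray, (p0, v0, p1, v1))
    d = np.linalg.norm(p1 - p0)
    u = (p1 - p0) / d                                        # normalized difference vector
    theta0 = np.arccos(np.clip(np.dot(v0, u), -1.0, 1.0))    # deviation of v0 from u
    theta1 = np.arccos(np.clip(np.dot(v1, u), -1.0, 1.0))    # deviation of v1 from u
    alpha = d / (3.0 * np.cos((theta0 + theta1) / 4.0)**2)
    return alpha, (p0, p0 + alpha * v0, p1 - alpha * v1, p1)

# two samples of the unit circle, a quarter turn apart
alpha, cps = control_points([1, 0], [0, 1], [0, 1], [-1, 0])
print(alpha)   # ~0.5523, the classical offset for a cubic Bezier quarter circle
print(cps)
\end{verbatim}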
The B\'{e}zier curve $b(t),\ t\in [0,1]$ is given explicitly by \begin{equation} \label{bezier curve} \begin{split} b\left(t\right) & = \left(1 - t\right)^{3}p_{0} + 3t\left(1 - t\right)^{2}\left(p_{0} + \alpha v_{0}\right) + 3t^{2}\left(1 - t\right)\left(p_{1} - \alpha v_{1}\right) + t^{3}p_{1} \\ & =\left(t - 1\right)^{2}\left(2t + 1\right)p_{0} + t^{2}\left(3 - 2t\right)p_{1} + 3t\left(1 - t\right)\alpha\left(\left(1 - t\right)v_{0} - tv_{1}\right), \end{split} \end{equation} and its derivative is thus, \begin{equation} \label{tangents curve} \frac{d}{dt}b\left(t\right) = 6t\left(1 - t\right)\left(p_{1} - p_{0}\right) + 3\left(3t - 1\right)\left(t - 1\right)\alpha v_{0} + 3t\left(3t - 2\right)\alpha v_{1}. \end{equation} \begin{remark} \label{remark: linearly independence} Note that if $ u,v_{0},v_{1}$ are linearly independent as vectors in $\mathbb{R}^{n}$ then $ \frac{d}{dt}b(t)\neq 0$ for any $ t\in\left(0,1\right)$.\ \ Moreover, it can be shown that if two of those vectors are linearly dependent while the third is independent of each one of them, then $ \frac{d}{dt}b(t)\neq 0$ for any $ t\in\left(0,1\right)$.\ \ As well, it can be verified that if $ v_{0} = v_{1} =u$ then $ \frac{d}{dt}b(t) = p_{1} - p_{0}=d\left(p_0,p_1\right)u$. \end{remark} We are now ready to present the B\'{e}zier average. \begin{definition} \label{def: average def} The B\'{e}zier average of $\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix}$ and $\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}$ with weight $ \omega$ is defined as the point and its normalized tangent vector of the B\'{e}zier curve \eqref{bezier curve} at $t=\omega$. Namely, \begin{equation} \label{eqn:bezierAvg} B_{\omega }\left(\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}\right) = \begin{pmatrix} p_{\omega } \\ v_{\omega } \\ \end{pmatrix} , \end{equation} where $ p_{\omega } = b\left(\omega\right)$ and $ v_{\omega } =\frac{b^{'}(\omega )}{\norm{b^{'}(\omega)}}$ (or $ v_{\omega } = 0$ if $ b^{'}\left(\omega\right) = 0$). \end{definition} In the spirit of Definition~\ref{def: GHI}, the above means that if $b(t),\ t\in [0,1]$ of~\eqref{bezier curve} is a regular curve, that is $\frac{d}{dt}b(t)$ does not vanish, then $v_{\omega}=T(b(\omega),\omega)$ for any $\omega\in[0,1]$. Figure~\ref{fig:average} demonstrates the procedure of averaging according to Definition~\ref{def: average def}. In particular, we present two examples that correspond to two different cases. On the left the B\'{e}zier curve which we sample according to Definition~\ref{def: average def} is convex, while on the right it is non-convex. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{average1.pdf} \quad \includegraphics[width=0.45\textwidth]{average2.pdf} \caption{Computing B\'{e}zier average in two examples. In each example the data is in black, the averaged element is in blue, the control polygon and its associated B\'{e}zier curve are in gray.} \label{fig:average} \end{figure} Definition~\ref{def: average def} does not guarantee that the tangent vector is non-vanishing. That is to say, one can find examples of two pairs of Hermite data where $ B_{\omega }\left(\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}\right) =\begin{pmatrix} p_{\omega } \\ 0 \\ \end{pmatrix}\not\in \mathbb{R}^n\times S^{n-1}$. However, as is stated in the next lemma, this is not the case for $ \omega = \frac{1}{2}$. 
\begin{lemma} \label{non vanishing vectors} Under the notation of Definition~\ref{def: average def}, \begin{equation} p_{\frac{1}{2}} =\frac{1}{2}\left(p_{0} + p_{1}\right) +\frac{3}{8}\alpha\left(v_{0} - v_{1}\right), \label{mid-point} \end{equation} \begin{equation} v_{\frac{1}{2}} =\frac{p_{1} - p_{0} - 0.5\alpha\left(v_{0} + v_{1}\right)}{\norm{ p_{1} - p_{0} - 0.5\alpha\left(v_{0} + v_{1}\right)} }. \label{mid-vector} \end{equation} In particular, $ v_{\frac{1}{2}} \neq 0$. \end{lemma} Next we claim that the B\'{e}zier average (Definition~\ref{def: average def}) is a Hermite average (Definition~\ref{Hermite average}) over $X$. For the B\'{e}zier average we define $X$ to be the set of all pairs in $\left(\mathbb{R}^{n}\times S^{n-1}\right)^2$ satisfying conditions~\eqref{eqn:conditions_on_pts} and producing non-vanishing averaged vectors for any weight in $[0,1]$. Note that by Remark~\ref{remark: linearly independence} $X$ is not empty. A discussion about the set $X$ is deferred to Appendix~\ref{app:subsec_conjecture}. \begin{prop} \label{Bezier average is Hermite average} The B\'{e}zier average is a Hermite average over $X$. \end{prop} \begin{remark} \label{alternative average} The value of $ \alpha$ we chose for the definition of the B\'{e}zier average is suggested in [4] in order to approximate circular arcs in $\mathbb{R}^{2}$ by B\'{e}zier curves. In case $n=2$ and under certain assumptions about the input, this choice coincides with the choice of $\alpha=\frac{d\left(p_{0},p_{1}\right)}{3\cos^{2}\left(\frac{\theta}{4}\right)}$. The latter choice of $\alpha$ yields the average defined in \cite{lipovetsky2021subdivision}, which can also be shown to be a Hermite average over a suitable set. \end{remark} \subsection{Geometric properties of the B\'{e}zier average} We conclude this section by introducing three additional properties of the B\'{e}zier average, which are of great value in the context of generating curves. The first property shows that averaging two point-ntangent pairs is invariant under similarities appropriate for Hermite data. This essential property of the B\'{e}zier average follows from the invariance of B\'{e}zier curves under affine transformations. \begin{lemma} [Similarity invariance] \label{lemma: similarity invariance} The B\'{e}zier average is invariant under similarities in the following sense: \begin{enumerate} \item The B\'{e}zier average is invariant under isometries applied to both $\mathbb{R}^{n}$ and $S^{n-1}$. That is, given an isometry $\Phi:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{n}$, \begin{equation} \label{similarity1} \tilde{\Phi}\left(B_\omega\left(\begin{pmatrix} p_0\\ v_0\\ \end{pmatrix},\begin{pmatrix} p_1\\ v_1\\ \end{pmatrix}\right)\right)=B_\omega\left(\tilde{\Phi}\begin{pmatrix} p_0\\ v_0\\ \end{pmatrix},\tilde{\Phi}\begin{pmatrix} p_1\\ v_1\\ \end{pmatrix}\right), \end{equation} where $\tilde{\Phi}\begin{pmatrix} p\\ v\\ \end{pmatrix}:=\begin{pmatrix} \Phi(p)\\ \Phi(v)-\Phi(0)\\ \end{pmatrix}$. \item The B\'{e}zier average is invariant under scaling: if $B_\omega\left(\begin{pmatrix} p_0\\ v_0\\ \end{pmatrix},\begin{pmatrix} p_1\\ v_1\\ \end{pmatrix}\right)=\begin{pmatrix} p_\omega\\v_\omega\\ \end{pmatrix}$, then for any $a>0$, \begin{equation} \label{similarity 2} B_\omega\left(\begin{pmatrix} a\cdot p_0\\ v_0\\ \end{pmatrix},\begin{pmatrix} a\cdot p_1\\ v_1\\ \end{pmatrix}\right)=\begin{pmatrix} a\cdot p_\omega\\ v_\omega\\ \end{pmatrix}. 
\end{equation} \end{enumerate} \end{lemma} The following two results show two important geometric properties of the average. \begin{lemma}[Lines preservation] \label{line reconstruction lemma} Let $ \begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}$ be two geometric Hermite samples from a straight line, $L_u$. Then, the B\'{e}zier average coincides with the linear average of each component, namely, \begin{equation} \label{lines reconstraction} B_{\omega } \left( \begin{pmatrix} p_{0} \\ v_0\\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix} \right) =\begin{pmatrix} \left(1 - \omega\right)p_{0} + \omega p_{1} \\ u \\ \end{pmatrix}, \quad \omega \in [0,1]. \end{equation} \end{lemma} Note that the average in~\eqref{lines reconstraction} is a point-ntangent pair from the line $L_u$. \begin{lemma}[Circles preservation] \label{circle reconstruction lemma} Assume $\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}$ are samples of a circle and its ntangents. Then, their average at $\omega=\frac{1}{2}$ consists of the mid-point and its ntangent of the circle-arc, defined by $p_{0}$, $p_{1}$ and $v_{0}$, $v_{1}$. \end{lemma} \section{Subdivision schemes based on the B\'{e}zier Average} \label{sec: subdivision} In the previous section we describe the concept of Hermite average and introduce such an average --- the B\'{e}zier Average. We also present some of the main properties of the B\'{e}zier average. This essential building block is the first step in constructing curves' approximations. The next step is to use an approximation operator, which can be defined in terms of binary averages, and replace the binary average by a Hermite average. As approximation operators, we use subdivision schemes, which enjoy simple implementation and are used extensively for modeling, approximation, and multiscale representations of curves. As an illustrative method, the main subdivision scheme we utilize is the interpolatory two-point scheme. This simple method demonstrates the application of the B\'{e}zier average and how the average's properties are inherited by the approximation process. We define the new refinement rules and prove the convergence of this subdivision scheme for any initial admissible data. \subsection{From Hermite average to Hermite subdivision schemes} \label{subsec: IHB} We start with the following lemma, which, together with Lemma~\ref{non vanishing vectors}, asserts that the process of midpoint averaging can be performed repeatedly. \begin{lemma} \label{lemma: reaveraging} Assume the pair $\begin{pmatrix} \begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix} \end{pmatrix}$ satisfies conditions~\eqref{eqn:conditions_on_pts}. Then, in the notations of Definition~\ref{def: average def}, the two pairs $\left(\begin{pmatrix} p_0\\v_0\\ \end{pmatrix},\begin{pmatrix} p_{\frac{1}{2}}\\v_{\frac{1}{2}}\\ \end{pmatrix}\right)$ and $\left(\begin{pmatrix} p_{\frac{1}{2}}\\v_{\frac{1}{2}}\\ \end{pmatrix},\begin{pmatrix} p_1\\v_1\\ \end{pmatrix}\right)$ also satisfy conditions~\eqref{eqn:conditions_on_pts}. \end{lemma} The proof of Lemma~\ref{lemma: reaveraging} is given in Appendix~\ref{proofs}. Lemma~\ref{lemma: reaveraging} allows us to design a Hermite interpolatory subdivision scheme utilizing the B\'{e}zier average with $\omega=\frac{1}{2}$ as an insertion rule. 
Namely, for initial Hermite data of the form $\begin{pmatrix} p_{j}^{0} \\ v_{j}^{0} \\ \end{pmatrix}$, ${j \in \mathbb{Z}} $, we define for all $k=0,1,2,\ldots$: \begin{equation} \label{eqn:IHB_scheme} \begin{pmatrix} p_{2j}^{k + 1} \\ v_{2j}^{k + 1} \\ \end{pmatrix} =\begin{pmatrix} p_{j}^{k} \\ v_{j}^{k} \\ \end{pmatrix} , \quad \begin{pmatrix} p_{2j + 1}^{k + 1} \\ v_{2j + 1}^{k + 1} \\ \end{pmatrix} = B_{\frac{1}{2}}\begin{pmatrix} \begin{pmatrix} p_{j}^{k} \\ v_{j}^{k} \\ \end{pmatrix},\begin{pmatrix} p_{j + 1}^{k} \\ v_{j + 1}^{k} \\ \end{pmatrix} \end{pmatrix}, \quad j\in \mathbb{Z}. \end{equation} We refer to the subdivision scheme~\eqref{eqn:IHB_scheme} as the interpolatory Hermite-B\'{e}zier scheme, in short \textit{IHB-scheme}. We view the IHB scheme as a modification of the piecewise linear subdivision scheme, which is also known as Lane-Riesenfeld of order $1$ subdivision scheme (LR1). This scheme serves as an illustrating example for our general method where we replace the linear average in any LR scheme of order $m\ge 1$, by the B\'{e}zier average with $\omega=\frac{1}{2}$. The modified scheme refines Hermite data and we term it Hermite-B\'{e}zier Lane-Riesenfeld of order $m$ (HB-LRm). The refinement step of this scheme is given in Algorithm~\ref{alg:HB_LRm}. For the HB-LRm scheme with $m>1$, we obtain a subdivision scheme which is not interpolatory, namely, the interpolation conditions~\eqref{interpolation} are not fulfilled. However, we expect the HB-LRm schemes to approximate the curve, as seen visually in the numerical examples in Section~\ref{sec:examples}. \begin{algorithm}[ht] \caption{The refinement step of the Hermite-B\'{e}zier Lane-Riesenfeld of order $m$} \label{alg:HB_LRm} \begin{algorithmic}[1] \REQUIRE The Hermite data to be refined $ \{ \left( p_i, v_i \right) \}_{i \in \mathbb{Z}}$. The order $m$. \ENSURE The refined data. \FOR{$i \in \mathbb{Z}$} \STATE $Q_{2i}^{0} \gets \begin{pmatrix} p_{i} \\ v_{i} \\ \end{pmatrix} $ \STATE $Q_{2i+1}^{0} \gets \begin{pmatrix} p_{i} \\ v_{i} \\ \end{pmatrix} $ \ENDFOR \FOR{$j=1$ \TO $m$} \FOR{$i \in \mathbb{Z}$} \STATE $ Q_{2i}^{j} \gets Q_{i}^{j-1} $ \STATE $Q_{2i+1}^{j} \gets B_\frac{1}{2}\left(Q_{i}^{j-1},Q_{i+1}^{j-1}\right) $ \ENDFOR \ENDFOR \\ \RETURN $\{Q_{i}^{m} \}_{i \in \mathbb{Z}}$ \end{algorithmic} \end{algorithm} As a first demonstration, we show the repeated refinements of two elements in $\mathbb{R}^{n}\times S^{n-1}$ by the IHB-scheme, as appear on Figure~\ref{fig:average inserting} and Figure~\ref{fig:3D average inserting} for $n=2$ and $n=3$, respectively. These figures visually reveal some properties, in particular convergence of the IHB scheme, which we present and show in the sequel. \begin{figure} \centering \includegraphics[width=0.19\textwidth]{convex_0_iterations.pdf}\hfill \includegraphics[width=0.19\textwidth]{convex_1_iterations.pdf}\hfill \includegraphics[width=0.19\textwidth]{convex_2_iterations.pdf}\hfill \includegraphics[width=0.19\textwidth]{convex_3_iterations.pdf}\hfill \includegraphics[width=0.19\textwidth]{convex_6_iterations.pdf} \includegraphics[width=0.19\textwidth]{0_iterations.pdf}\hfill \includegraphics[width=0.19\textwidth]{1_iterations.pdf}\hfill \includegraphics[width=0.19\textwidth]{2_iterations.pdf}\hfill \includegraphics[width=0.19\textwidth]{3_iterations.pdf}\hfill \includegraphics[width=0.19\textwidth]{6_iterations.pdf} \caption{Two examples of applying the IHB-scheme in $\mathbb{R}^{2}$. 
From left to right: initial data, one iteration, two iterations, three iterations, six iterations.} \label{fig:average inserting} \end{figure} \begin{figure} \centering \begin{subfigure}[t]{0.28\textwidth} \includegraphics[width=\textwidth]{3D0.pdf} \caption{initial data} \end{subfigure} \quad\quad \begin{subfigure}[t]{0.28\textwidth} \includegraphics[width=\textwidth]{3D1.pdf} \caption{1 iteration} \end{subfigure} \quad\quad \begin{subfigure}[t]{0.28\textwidth} \includegraphics[width=\textwidth]{3D2.pdf} \caption{2 iterations} \end{subfigure} \begin{subfigure}[t]{0.28\textwidth} \includegraphics[width=\textwidth]{3D3.pdf} \caption{3 iterations} \end{subfigure}\quad\quad \begin{subfigure}[t]{0.28\textwidth} \includegraphics[width=\textwidth]{3D6.pdf} \caption{6 iterations} \end{subfigure} \caption{Example of the performance of the IHB-scheme in $\mathbb{R}^{3}$.} \label{fig:3D average inserting} \end{figure} Next, we present geometric properties of the HB-LRm schemes with $m \ge 1$ which are a direct consequence of geometric properties of the B\'{e}zier average, as given in Lemma~\ref{lemma: similarity invariance}, Lemma~\ref{line reconstruction lemma}, and Lemma~\ref{circle reconstruction lemma}. In particular, the first part of the following result shows that preservation of geometric objects by the average becomes reconstruction of these objects by the modified subdivision schemes. \begin{thm} The HB-LRm schemes of order $m \ge 1$ reconstruct lines and circles. For convergent HB-LRm schemes, their limits are invariant under isometries and scaling transformations. \end{thm} From here and until the end of this section, we focus our attention on the IHB-scheme (HB-LR1). Next, we present auxiliary results on the B\'{e}zier average and then present the convergence analysis of the IHB-scheme. \subsection{Contractivity of the B\'{e}zier Average} Let $\left(\begin{pmatrix} p_0\\v_0\\ \end{pmatrix},\begin{pmatrix} p_1\\v_1\\ \end{pmatrix}\right)\in\left(\mathbb{R}^n\times S^{n-1}\right)^2$ be given. As in the previous section, we always assume that $p_0\neq p_1$ and we denote by $p_\frac{1}{2}$ and $v_\frac{1}{2}$ the point and vector obtained by the B\'{e}zier average with weight $\frac{1}{2}$ applied to the given data. The following two lemmas play a central role in the convergence analysis of the IHB-scheme. \begin{lemma} \label{points distance contraction lemma} If $ \theta_{0} + \theta_{1}\leq \pi$ then $d\left(p_{\frac{1}{2}},p_{j}\right)\leq d\left(p_{0},p_{1}\right), \quad j = 0,1$. Furthermore, if $ \theta_{0} + \theta_{1}<\pi$ then there exists $\mu \in (0,1)$ such that $ d\left(p_{\frac{1}{2}},p_{j}\right)\leq \mu d\left(p_{0},p_{1}\right)$. \end{lemma} The proof of Lemma~\ref{points distance contraction lemma} is given in Appendix~\ref{proofs}. \begin{lemma} \label{angles contraction lemma} If $ \sigma\begin{pmatrix} \begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix} \end{pmatrix}\leq\frac{3\pi }{4}$. 
Then, \[ \max \Bigg\{ \sigma\begin{pmatrix} \begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{\frac{1}{2}} \\ v_{\frac{1}{2}} \\ \end{pmatrix} \end{pmatrix},\sigma\begin{pmatrix} \begin{pmatrix} p_{\frac{1}{2}} \\ v_{\frac{1}{2}} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix} \end{pmatrix} \Bigg\} \leq \sqrt{0.9}\sigma\begin{pmatrix} \begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix} \end{pmatrix}.\] \end{lemma} We do not provide a formal proof of Lemma~\ref{angles contraction lemma}. Instead, we give in Appendix~\ref{app:sigma proof} a comprehensive discussion where we show that proving the lemma is equivalent to showing the positivity of an explicit trigonometric function. Then, we introduce numerical indications for the positivity of this function and an outline for a proof based on a further exhaustive computer search. \begin{remark} The above two lemmas shed light on our exploration of a natural metric; see, e.g., Remark~\ref{metric}. On the one hand, Lemma~\ref{points distance contraction lemma} guarantees the boundedness of the Euclidean metric over a subset of $\left(\mathbb{R}^{n}\times S^{n - 1}\right)^2$. On the other hand, even though $ \sigma :X \to [0,\sqrt{2}\pi]$ is not a metric over $\mathbb{R}^{n}\times S^{n - 1}$, it gives a measure of how far an ordered couple of point-vector pairs is from being sampled from a line. It is the result of Lemma~\ref{angles contraction lemma} that the B\'{e}zier average yields two pairs that are ``closer" to samples from a line than the original pair. In addition, a metric $\tilde{d}$ that satisfies the so-called metric property, \[ \tilde{d}\left(\begin{pmatrix} p_{j} \\ v_{j} \\ \end{pmatrix},\begin{pmatrix} p_{\frac{1}{2} } \\ v_{\frac{1}{2} } \\ \end{pmatrix}\right)= \frac{1}{2} \tilde{d}\left(\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix}, \begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}\right), \quad j=0,1. \] would lead to the convergence of the HB-LRm subdivision schemes of any order $m \ge 1$ (see Algorithm~\ref{alg:HB_LRm}), which can be shown by the technique in~\cite{dyn2017global}. \end{remark} \subsection{G1 Convergence of the IHB-scheme} First, we recall several definitions and notation needed in the convergence analysis. Initial control points $ P^{0} = \left\{ p_{j}^{0}\right\}_{j\mathbb{\in Z}} $ is a sequence of points in $\mathbb{R}^d$. A subdivision scheme refining points $ \mathcal{S}$ refines $P^{0} $ and generate the control points $\mathcal{S}^k(P^0)= P^{k} = \left\{ p_{j}^{k}\right\}_{j\mathbb{\in Z}}$, $k>0$. Associating the point $p^k_j$ with the parameter value $ t^k_j=2^{-k}j$, we define at each level the piecewise linear interpolant to the points $(t^k_j,p^k_j),\ j\in \mathbb Z$, $f_k$ (also known as the \textit{control polygon at level $k$}). With these notions we can define the convergence of $\mathcal S.$ We term $\mathcal{S}$ convergent if the sequence $\left\{ f_{k}\right\}_{k\geq 0}$ converges uniformly in the $ L_{\infty }$ norm. In addition, we follow the definition given in~\cite{dyn2012geometric} regarding G1 convergence of a subdivision scheme refining points, and say that $\mathcal{S}$ is G1 convergent if it is convergent and there exists a continuously varying directed tangent along its limit curve. 
A Hermite subdivision scheme is termed $G1$ convergent if it is convergent as a points refining scheme and is convergent as tangents refining scheme such that the limit tangents are tangent to the limit curve, see e.g.,~\cite{dyn2012geometric,lipovetsky2021subdivision, reif2021clothoid}. Note that the normalized tangents are points on the $S^{n-1}$ sphere, and that the definition of convergence can be generalized to manifold-valued data if instead of piecewise linear interpolants we use piecewise geodesic interpolants (see Definition~3.5. in~\cite{dyn2017manifold}). By Theorem 3.6. in~\cite{dyn2017manifold}, sufficient conditions for a subdivision scheme refining points $\mathcal{S}$ to be convergent are displacement$-$safe and a contractivity factor $ \mu \in (0,1)$. The first requires a bound between two control polygons of consecutive refinement levels, and it trivially holds for interpolatory schemes. The second one means that \[ \Delta(\mathcal{S}(P^k)) \le \mu \Delta(P^k), \quad \text{ where } \quad \Delta(P) = \sup_j d(p_{j}^{k}, p_{j+1}^{k}) . \] Back to the Hermite setting, with the initial data $ P^{0} =\left(\begin{pmatrix} p_{j}^0 \\ v_{j}^0 \\ \end{pmatrix}\right)_{j\mathbb{\in Z}}$ and with $P^{k}=\left(\begin{pmatrix} p_{j}^{k} \\ v_{j}^{k} \\ \end{pmatrix}\right)_{j\mathbb{\in Z}}$ the refined Hermite data at the $k$-th refinement level, and its associated \begin{equation} \sigma^{(k)} = \sup_{j}\sigma\left(\begin{pmatrix} p_{j}^{k} \\ v_{j}^{k} \\ \end{pmatrix},\begin{pmatrix} p_{j + 1}^{k} \\ v_{j + 1}^{k} \\ \end{pmatrix}\right) . \end{equation} In addition, we define the piecewise geodesic interpolant of the tangent vectors at the $k$-th refinement level $\left\{\left(v_{j}^{k}\right)\right\}_{j\mathbb{\in Z}}$ as \begin{equation} PG_{k}\left(t\right) = M_{t2^{k} - j}\left(v_{j}^{k},v_{j + 1}^{k}\right), \quad t\in\left[2^{ - k}j,2^{ - k}\left(j + 1\right)\right) , \end{equation} where $ M_{\omega }$ is the sphere geodesic average, namely $ M_{\omega }(u,v) $ is the unique point on the great circle connecting two non-antipodal points $u$ and $v$, that divides the geodesic distance between $u$ and $v$ in a ratio of $1-\omega$ when measuring from $u$ and $\omega$ when measuring from $v$. Then, we conclude: \begin{thm} The IHB-scheme is $ G^{1}$ convergent whenever the initial data satisfies $ \sigma^{\left(0\right)}\leq\frac{3\pi }{4}$. \end{thm} \begin{proof} Let $ P^{0}=\left(\begin{pmatrix} p_{j}^0 \\ v_{j}^0 \\ \end{pmatrix}\right)_{j\mathbb{\in Z}}$ be the initial Hermite type data with $ \sigma^{\left(0\right)}\leq\frac{3\pi }{4}$. We begin with the convergence of the ntangents. To this end, we prove that $\left\{ PG_{k}\left(t\right)\right\}_{k\mathbb{\in N}}$ is a Cauchy series for all $ t$. The proof follows closely the proof of Theorem~3.6. in~\cite{dyn2017manifold}. Recall $\theta$ of~\eqref{theta definition} and $\theta_{0}$ and $\theta_{1} $ as defined in~\eqref{theta j definition}. Then, we observe that for any $\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}\in\mathbb{R}^{n}\times S^{n - 1}$, \begin{align*} \theta = g\left(v_{0},v_{1}\right) &\leq \theta_{0} + \theta_{1} = \sqrt{\theta_{0}^{2} + 2\theta_{0}\theta_{1} + \theta_{1}^{2}} \\ & \leq\sqrt{2\left(\theta_{0}^{2} + \theta_{1}^{2}\right)} =\sqrt{2}\sigma\left(\begin{pmatrix} p_{0} \\ v_{0} \\ \end{pmatrix},\begin{pmatrix} p_{1} \\ v_{1} \\ \end{pmatrix}\right) . 
\end{align*} Let $t\in\left[2^{ - k}j,2^{ - k}\left(j + 1\right)\right)$, $ j\in\mathbb{Z}$. Then, $g\left(PG_{k}\left(t\right),PG_{k + 1}\left(t\right)\right)$ is bounded by \begin{align*} & g\left(PG_{k}\left(t\right),v_{j}^{k}\right) + g\left(v_{j}^{k},PG_{k + 1}\left(t\right)\right) \\ & = g\left(PG_{k}\left(t\right),v_{j}^{k}\right) + g\left(v_{2j}^{k + 1},PG_{k +1}\left(t\right)\right) \\ & \leq g\left(v_{j}^{k},v_{j + 1}^{k}\right) + g\left(v_{2j}^{k + 1},v_{2j + 1}^{k + 1}\right) + g\left(v_{2j+1}^{k + 1},v_{2j + 2}^{k + 1}\right) \\ &\leq\sqrt{2}\cdot \sigma^{\left(k\right)} + 2\sqrt{2}\cdot \sigma^{\left(k + 1\right)}\leq 3\sqrt{2}\cdot \mu^{k}\sigma^{\left(0\right)} \end{align*} with $\mu =\sqrt{0.9}$. The last inequality follows from Lemma~\ref{angles contraction lemma}. Thus $\left\{ PG_{k}\right\}_{k\in\mathbb{N}}$ converges uniformly and the IHB-scheme is convergent as a refining scheme of ntangents. We now consider the IHB-scheme as a point refining scheme. It is interpolatory and hence displacement-safe. To derive a contractivity factor we consider levels $k$ with $k$ large enough, such that $\sup_{j}\left(\theta_{0}^{\left(j\right)} + \theta_{1}^{\left(j\right)}\right)\leq\frac{2\pi }{3}$, where $\theta_i^{\left(j\right)}=\theta_i\left(\begin{pmatrix} p_j^{k}\\v_j^{k}\\ \end{pmatrix},\begin{pmatrix} p_{j+1}^{k}\\v_{j+1}^{k}\\ \end{pmatrix}\right)$ for $i=0,1$. By Lemma~\ref{points distance contraction lemma}, \[ d\left(p_{2j}^{k + 1},p_{2j + 1}^{k + 1}\right),d\left(p_{2j + 1}^{k + 1},p_{2j + 2}^{k + 1}\right)\leq\frac{5}{6}d\left(p_{j}^{k},p_{j + 1}^{k}\right) , \] with $\gamma =\frac{2\pi }{3}$ in the proof of Lemma~\ref{points distance contraction lemma}. It follows that the IHB-scheme has a contractivity factor of $\frac{5}{6}$ as a point refining scheme, and thus we deduce the convergence of the points. Finally, we observe that since $ \sigma^{\left(k\right)}\xrightarrow{k\longrightarrow \infty }0$, both the angle between $ v_{j}^{k}$ and $ p_{j + 1}^{k} - p_{j}^{k}$ and the angle between $ v_{j}^{k}$ and $ p_{j}^{k} - p_{j - 1}^{k}$ approach zero when $ k$ approaches infinity. This means that the limit tangents are tangent to the limit curve. We conclude that the IHB-scheme is G1 convergent for data satisfying $ \sigma^{\left(0\right)}\leq\frac{3\pi }{4}$. \end{proof} \begin{remark} \label{remark: different modifications} \phantom{The following} \begin{enumerate}[label=(\roman*)] \item Different modifications of LR1 can be obtained by using other Hermite averages, for example as done in~\cite{lipovetsky2021subdivision} in the 2D case. \item Similar arguments, as in the proofs of this section, lead to an extension of the convergence result in~\cite{lipovetsky2021subdivision} to a wider class of initial data and to any dimension. \end{enumerate} \end{remark} \section{Numerical examples} \label{sec:examples} This section provides 2D and 3D examples of interpolating or approximating curves based on geometric Hermite samples. Specifically, we test the performance of the IHB-scheme and the HB-LR3 scheme (the modification of the LR3 algorithm based on the B\'{e}zier average). We then compare these results to approximations by related methods, such as other modifications of LR1; see Remark~\ref{remark: different modifications}. Note that all the examples of this section are available as Python code, for reproducibility and for providing a further point of view, in~\url{https://github.com/HofitVardi/Hermite-Interpolation}.
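As a complement to that repository (and not taken from it), the following minimal Python sketch implements the spherical geodesic average $M_\omega$ used in the definition of $PG_k$ and throughout the analysis above. The weight convention is chosen here so that $M_0(u,v)=u$ and $M_1(u,v)=v$, which is consistent with $PG_k$ reproducing $v_j^k$ at $t=2^{-k}j$; the formula is the standard spherical linear interpolation between non-antipodal unit vectors.
\begin{verbatim}
import numpy as np

def geodesic_average(u, v, w):
    """Point on the great circle through the unit vectors u and v whose
    geodesic distance from u is w times the distance between u and v."""
    u = np.asarray(u, float); v = np.asarray(v, float)
    u = u / np.linalg.norm(u); v = v / np.linalg.norm(v)
    theta = np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return u
    return (np.sin((1.0 - w) * theta) * u + np.sin(w * theta) * v) / np.sin(theta)

def PG(k, t, tangents):
    """Piecewise geodesic interpolant of the level-k tangents, where
    tangents[j] is associated with the parameter value j * 2**(-k)."""
    j = int(np.floor(t * 2**k))
    return geodesic_average(tangents[j], tangents[j + 1], t * 2**k - j)

# Midpoint of the arc between two unit tangents in the plane.
print(geodesic_average([1, 0], [0, 1], 0.5))
\end{verbatim}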
\subsection{Comparison between different modifications of LR1} In the first example, we compare the IHB-scheme with the modification of LR1 in~\cite{lipovetsky2021subdivision}; see also Remark~\ref{alternative average} and Remark~\ref{remark: different modifications}. We apply both schemes to data sampled from spirals in $\mathbb{R}^{2}$ and $\mathbb{R}^{3}$. Figure~\ref{fig:spiral} shows the limit curves obtained from samples with increasing density. As demonstrated in the figure, for a high density of samples, the performances of the two schemes are almost identical. However, as the number of samples decreases, the performance of the IHB-scheme is superior. Note that as mentioned in Remark~\ref{alternative average}, in the 2D case and for a sufficiently dense sampling, the two methods coincide. This example indicates similar behavior in 3D. \begin{figure}[hbt!] \centering \includegraphics[width=0.26\textwidth]{spiral_reconstruction4_iter.pdf} \quad\quad \includegraphics[width=0.26\textwidth]{spiral_reconstruction8_iter.pdf} \quad\quad \includegraphics[width=0.26\textwidth]{spiral_reconstruction16_iter.pdf} \\ \vspace{\baselineskip} \includegraphics[width=0.3\textwidth]{3Dspiral4_iter.pdf} \quad \includegraphics[width=0.3\textwidth]{3Dspiral8_iter.pdf} \quad \includegraphics[width=0.3\textwidth]{3Dspiral16_iter.pdf} \caption{Comparison between two modifications of LR1. Spiral approximation in 2D (upper row) and 3D (lower row) by the IHB-scheme (black), by the modification of LR1 as in~\cite{lipovetsky2021subdivision} (blue), initial data and spirals (gray).} \label{fig:spiral} \end{figure} \subsection{Geometric versus linear Hermite interpolation} We interpolate geometric Hermite data, sampling the $2D$ curve $\gamma(t)=(t,\sin{t})$, by the IHB-scheme and by a linear Hermite subdivision scheme. The latter subdivision was introduced in~\cite{merrien1992family}, and we term it Merrien-scheme. The Merrien-scheme is based on cubic Hermite interpolation, and uses point-tangent pairs with tangents that are not normalized as its initial data. Nevertheless, from the reasoning we present next, we compare the above two schemes when applied to geometric Hermite data, with different sampling rates. \begin{figure}[!hbp] \centering \begin{subfigure}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{Schemes_comparison_2_0pi_v2.pdf} \caption*{$h=2\pi$} \end{subfigure} \quad\quad \begin{subfigure}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{Schemes_comparison_1_11pi_v2.pdf} \caption*{$h=\frac{10}{9}\pi$} \end{subfigure} \\ \begin{subfigure}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{Schemes_comparison_1_0pi_v2.pdf} \caption*{$h=\pi$} \end{subfigure} \quad\quad \begin{subfigure}[t]{0.33\textwidth} \includegraphics[width=\textwidth]{Schemes_comparison_0_66pi_v2.pdf} \caption*{$h=\frac{2}{3}\pi$} \end{subfigure} \\ \begin{subfigure}[t]{0.65\textwidth} \includegraphics[width=\textwidth]{Schemes_comparison_zoomed.pdf} \caption*{$h=0.05\pi$} \end{subfigure} \caption{Hermite Interpolation for varying sampling densities by the IHB-scheme (black) and by Merrien-scheme (red), both applied to geometric Hermite data. The sampled curve $\gamma(t)=(t,\sin{t})$ and the initial data are gray. Here $h$ is the distance between consecutive sampling points in the parameter domain.} \label{fig:Schemes comparison} \end{figure} For approximating a function by subdivision schemes, we assume that the points where we sample the function are equidistant. 
For curve approximation, the samples have to be close to equidistant with respect to the arc-length parameterization; see, e.g.,~\cite{floater2006parameterization}. To imitate such an ideal sampling in the Hermite setting, one must consider the distance between the points and the tangents' magnitude. Therefore, to show the effect of sampling, we fix the tangents to be normalized and apply different sampling rates in the current comparison. Figure~\ref{fig:Schemes comparison} demonstrates the way the linear Merrien-scheme and the IHB-scheme address sparse sampling (top and middle left sub-figures) and dense sampling (bottom sub-figure). We observe that the sampling rate significantly affects the approximation quality of the linear scheme. In particular, sparse samples yield a ``flattened'' curve, and dense samples generate unnecessary loops. On the other hand, the B\'{e}zier average, being adaptive to the distance between the two points, provides a remedy. \subsection{The contribution of the tangent directions to the approximation} In the following example, we test the contribution of the tangent directions to the quality of the approximation. This contribution is clear when sampling a periodic curve in parameter distances which are equal to its period. In this case, the only reasonable approximation in the absence of tangent directions is a straight line. On the other hand, our scheme, when applied to Hermite data sampled from the curve $\gamma(t)=(t,\sin{t})$ in parameter distances which are equal to its period, preserves the oscillations of $\gamma$ (see the first sub-figure of Figure~\ref{fig:naive vectors}). Note that this cannot be the case when the sampled points are extremum points. \begin{figure}[!htp] \centering \begin{subfigure}[t]{0.35\textwidth} \includegraphics[width=\textwidth]{naive_vectors2_0pi_v2.pdf} \caption*{$h=2\pi$} \end{subfigure}\quad\quad \begin{subfigure}[t]{0.35\textwidth} \includegraphics[width=\textwidth]{naive_vectors1_11pi_v2.pdf} \caption*{$h=\frac{10}{9}\pi$} \end{subfigure}\\ \begin{subfigure}[t]{0.35\textwidth} \includegraphics[width=\textwidth]{naive_vectors1_0pi_v2.pdf} \caption*{$h=\pi$} \end{subfigure}\quad\quad \begin{subfigure}[t]{0.35\textwidth} \includegraphics[width=\textwidth]{naive_vectors0_66pi_v2.pdf} \caption*{$h=\frac{2}{3}\pi$} \end{subfigure}\\ \caption{Non-interpolatory approximation by LR3 applied to initial point values only (orange), the HB-LR3 applied to the same point values with estimated tangent directions (dashed line) and the HB-LR3 applied to the corresponding Hermite data (solid black). The sampled curve $\gamma=(t,\sin{t})$ and the initial points are in gray. $h$ represents the distance between parameter values corresponding to consecutive samples. } \label{fig:naive vectors} \end{figure} We demonstrate the contribution of the tangent directions in two ways: how knowledge of the initial tangent directions affects the quality of the approximation, and how the Hermite setting can be effective even in the absence of information on the initial tangent directions. We compare the HB-LR3 scheme, as discussed in Section~\ref{subsec: IHB}, with the linear LR3 scheme. We do so by applying the HB-LR3 once to real Hermite data sampled from $\gamma=(t,\sin t)$, and once to Hermite data obtained from data consisting of points only, by an algorithm which estimates a tangent direction at a point using the point and its immediate neighboring points; see, e.g.,~\cite{yang2006normal} and Remark~\ref{rem:guesing_tangents} below.
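A minimal sketch of this estimation step, which is made precise in Remark~\ref{rem:guesing_tangents} below, is the following; it reuses the spherical average from the earlier sketch, the weight convention of which is an assumption here and may differ from the one used in~\cite{lipovetsky2016weighted}.
\begin{verbatim}
import numpy as np

def estimate_tangents(points):
    """Estimate unit tangent directions at the interior points from the
    positions only, as a weighted spherical average of the two normalized
    chord directions (boundary points would need one-sided estimates)."""
    points = np.asarray(points, float)
    tangents = []
    for j in range(1, len(points) - 1):
        back, fwd = points[j] - points[j - 1], points[j + 1] - points[j]
        u, v = back / np.linalg.norm(back), fwd / np.linalg.norm(fwd)
        w = np.linalg.norm(back) / (np.linalg.norm(back) + np.linalg.norm(fwd))
        theta = np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
        if np.isclose(theta, 0.0):
            tangents.append(u)
        else:
            m = (np.sin((1 - w) * theta) * u + np.sin(w * theta) * v) / np.sin(theta)
            tangents.append(m / np.linalg.norm(m))
    return np.array(tangents)

# Points sampled from (t, sin t); the estimates roughly follow (1, cos t).
pts = [(t, np.sin(t)) for t in np.linspace(0.0, np.pi, 5)]
print(estimate_tangents(pts))
\end{verbatim}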
The results of our experiments are presented in Figure~\ref{fig:naive vectors} and indicate the clear superiority of the approximation based on the right tangent directions. Moreover, in the absence of such information, the HB-LR3 scheme still produces a better approximation than the linear LR3 scheme, which in some cases fails to describe the curve under the limited available data. Note that this comparison follows a similar experiment done in \cite{lipovetsky2016weighted} and also indicates the limitation in estimating the tangent directions; see, e.g., the bottom left subfigure. \begin{remark} \label{rem:guesing_tangents} The estimation of tangent directions from the initial points is explained below. Given points $\{p_{j}\}_{j\in\mathbb{Z}}\subseteq\mathbb{R}^n$, the estimated tangent direction $\tilde{v}_j$ at the point $p_j$ is the weighted geodesic average of the two normalized vectors $\frac{p_{j} - p_{j-1}}{\left\Vert p_{j} - p_{j-1}\right\Vert }$ and $\frac{p_{j+1} - p_{j}}{\left\Vert p_{j+1} - p_{j}\right\Vert}$ with weight $\omega=\frac{\left\Vert p_{j} - p_{j-1}\right\Vert}{\left\Vert p_{j} - p_{j-1}\right\Vert+\left\Vert p_{j+1} - p_{j}\right\Vert}$. The computation of the tangent direction follows the computation of a ``naive normal'' in \cite{lipovetsky2016weighted}. \end{remark} \subsection{Approximation order} \label{subsec:app_order} The last two examples emphasize visually the superior quality of the outputs of the HB-LRm schemes with $m=1$ and $m=3$ as approximants. Namely, the distance between the sampled curve and the curves generated by the above two HB-LRm schemes is smaller than the distance obtained by the other methods. In this section, we further examine the approximation quality via the concept of approximation order: how small is the distance between the approximation and the sampled curve as a function of the distance between the equidistant parameter values corresponding to the sampled data, as this distance becomes small. In the functional setting, classical Hermite interpolation to the data $(x_i,f^{(j)}(x_i)),\ i,j=0,1$, gives fourth-order approximation over $[x_0,x_1]$ if $\frac{d^4f}{dx^4}$ is continuous on $[x_0,x_1]$. Namely, under these conditions the error decays like $C (x_1-x_0)^4$, for some constant $C$, when $x_1 -x_0$ tends to zero. In particular, methods like piecewise cubic interpolation are known to have a fourth-order approximation; see, e.g.,~\cite{conti2015ellipse}. Nevertheless, this claim is more limited when it comes to curves, as it depends on a particular choice of samples. Specifically, when one samples a curve in the non-Hermite setting, the sampling must be taken close to equidistant with respect to the arc-length parameterization. In the Hermite setting, we must also take into account the tangents to form the analogous ``equidistant sampling''; see~\cite{floater2006parameterization}. In our numerical tests we observe that the IHB-scheme preserves the classical fourth-order approximation, independently of any particular sampling policy. We relate this observation to the fact that the B\'{e}zier average depends on the distance between the points and the angle between the tangent directions, and believe that due to this property of our average, the IHB-scheme is more robust to sampling. Consider a curve $\gamma$ and its approximation $\tilde{\gamma}$.
A common method to measure the approximation error is the Hausdorff distance between the two curves, \begin{equation} \label{eqn:hausdorf_dist} D(\gamma,\tilde{\gamma})=\max\left(\sup_{p\in\gamma}\inf_{q\in\tilde{\gamma}} d(p,q),\sup_{q\in\tilde{\gamma}}\inf_{p\in\gamma} d(q,p)\right). \end{equation} Given a set of Hermite samples $\left( (p_{j},v_j) \right)_{j\in J}$ of $\gamma$, as in~\eqref{eqn:sampled_hermite_data}, we denote by $h$ the maximal distance between consecutive samples. One may choose the parametric distance between adjacent samples if a parameterization is known. A natural parameterization is the arc length, which also plays a significant role in the context of approximation order, see~\cite{floater2006parameterization}. As we usually do not have access to parameterization, a possible solution is to set $h = \max_{j \in J} d(p_j,p_{j+1})$. With $h$, we denote the approximation based on the samples by $\gamma_h$. For estimating the approximation order, we consider the approximation error as a function of $h$, \begin{equation*} e(h) = D(\gamma,\gamma_h). \end{equation*} Our ansatz is that when $h$ is small, the error function behaves as $C h^\beta$, for some constants $C$ and $\beta$. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{order.pdf} \caption{Order of approximation. A log-log plot of the maximal error of approximation to the curve~\eqref{eqn:gamma_example}, as obtained by IHB (solid) and by piecewise linear interpolation (dashed), as a function of the parametric distance between consecutive samples.} \label{fig:order} \end{figure} For simplicity, we consider in our numerical test a functional curve $\gamma$, \begin{equation} \label{eqn:gamma_example} \gamma(t) = (t, P(t)) \in \mathbb{R}^2 , \end{equation} where $P$ is a polynomial. In this case, we sample $\gamma$ equidistantly according to the parameter $t$, that is $h$ is taken with respect to the first coordinate. Then, we use the maximal pointwise distance at the second coordinate of $\gamma$ as a simpler form for the error. Next, to estimate numerically $\beta$, we use a log-log plot, presenting $\log(e(h))$ as a function of $\log(h)$ for decreasing values of $h$. In view of our ansatz we expect to obtain a line whose slope is an estimate of $\beta$. Figure~\ref{fig:order} presents the above log-log plot. This figure shows a nearly straight line with a slope equal to $4$. In other words, $e(h) \approx h^4$, that is $\beta=4$. For comparison, we also compute and plot the maximal error of the approximation of $\gamma$ by the linear LR1. The latter yields a line of slope $2$ in accordance with the known rate of approximation of LR1. \section{Conclusions and future work} This paper proposes a method for geometric Hermite interpolation by a G1 convergent subdivision scheme. The scheme is a part of a more general approach for addressing the problem of geometric Hermite approximation by modifying the Lane-Riesenfeld family of subdivision schemes~\cite{lane1980theoretical} by replacing the linear binary average in these schemes with an appropriate Hermite average. We give a general definition for such an average and design a specific example, the B\'{e}zier average. We also point to the problem of finding a proper metric over $\mathbb{R}^n\times S^{n-1}$. By having such a metric, we aim to identify the subspace consisting of Hermite samples of curves and adapt the property of metric-boundedness with respect to a given Hermite average. 
Finally, we mention the particular benefit that might arise once a metric that fulfills the so-called metric-property with respect to a given Hermite average is available. Such a Hermite-metric can significantly facilitate the convergence analysis of the modified LRm schemes. We show how geometric properties, like circle-preserving of a Hermite average, are inherited by the adapted HB-LRm subdivision schemes, and present numerically the advantage in using a geometric method for approximating curves from their Hermite samples. While we treat the problem of approximating curves in $\mathbb{R}^n$, the generality of our approach naturally leads to the solution of a more general problem, namely to the approximation of curves in certain non-linear spaces by a similar method. For example, the definition of the B\'{e}zier average has a natural and intrinsic generalization that fits inputs from the tangent bundle of some Riemannian manifolds. The latter is the subject of an ongoing research by the authors.
\section{Introduction} When materials absorb or release heat, their temperature varies in general. However, if a phase change occurs in materials, then the temperature only slightly varies, even though a large amount of energy is stored or released. Only after the phase change is over does the temperature begin to rise or fall significantly. Therefore, materials with a phase change, or phase-change materials (PCMs), are of great interest in the applications where there is demand for thermal energy storage with a high density (within a small temperature range) and/or where a temperature level needs to be maintained. Examples are solar energy storage \cite{Ken07}, space heating and cooling of buildings \cite{Cab11,Kuz11,Soa13}, cold storage applications \cite{Cab12}, data storage applications \cite{Wutt11}, and industrial applications in textiles and clothing systems \cite{SO12}. It is well known that during a phase change between two phases (such as melting/freezing between a solid and liquid phase) the enthalpy vs.~temperature plot shows a sudden jump, while the heat capacity vs.~temperature plot shows a distinct peak. The presence of such rounded jumps and peaks is attributed mostly to non-equilibrium effects; if heat exchange were carried out quasistatically and the studied sample were macroscopically \emph{large}, the jumps and peaks would become infinitely sharp. In some experiments, however, heat capacity peaks keep their finite width even at rather slow heating rates \cite{Red09,Sch09}. For example, when adiabatic scanning calorimetry (ASC) is applied, very slow scanning rates can be achieved (down to $\SI{0.5}{mK.min^{-1}}$) so that thermodynamic equilibrium of the investigated samples is ensured \cite{Glor11a}. This suggests that finite jumps and peaks need not be a pure non-equilibrium phenomenon, but it should be possible to obtain them even within an equilibrium approach. In this paper we wish to present such an approach and demonstrate that it can predict rounded jumps/peaks from experiments with very good accuracy. The approach is based on the observation that the crystalline state of PCMs has usually a polycrystalline structure, being composed of many single-crystalline grains some of which have just few tens of nanometers in diameter \cite{Wutt11}. We thus propose to interpret an experimentally measured jump/peak as a superposition of many contributions coming from the individual grains (see Section~\ref{sec: poly}). Due to finite-size effects, the jumps/peaks from the \emph{small} grains are sharp, yet of finite width. In addition, they are mutually shifted. Therefore, when they are superimposed, the so obtained result can fit experimental data with very good precision (see Section~\ref{sec: appl}). The starting point of our approach is a microscopic theory \cite{BoKo95} from which enthalpy jumps and heat capacity peaks in a single grain can be obtained (see Section~\ref{sec: single}). It should be noted that lately there has been a number of studies of PCMs using various microscopic techniques, such as molecular dynamics simulations \cite{Par09,Par10,Par10a,Ell11,Ell12,Ell13,Ell14,Anan14}, density-functional calculations \cite{Jon08,Jon09,Jon09a,Jon11,Jon14}, a cellular automata approach \cite{Wri08}, classical nucleation theory simulations \cite{Bur12}, and a statistical theory of crystallization \cite{Pet13}. 
Most of these works focus on specific materials (one or more of the alloys Ge$_2$Sb$_2$Te$_5$, Sb$_2$Te$_3$, GeTe, AgInSbTe, and Ga-Sb) due to their practical importance in digital memory technologies. As a specific material, we shall consider a paraffin-based PCM called Rubitherm RT~$27$ in which a change between a solid and liquid phase is used to store/release thermal energy in various civil engineering applications. Its enthalpy and heat capacity were measured in \cite{Glor11} using adiabatic scanning calorimetry. Thus, it should be plausible to apply a quasistatic approach to describe these results in which the enthalpy has a single distinct jump and the heat capacity has a single distinct peak. The phase-change temperature was determined to be $\SI{27.3}{\celsius}$ for heating and $\SI{27.2}{\celsius}$ for cooling. We shall focus on the heating part of the temperature dependences (the corresponding experimental data are shown in Fig.~\ref{fig: data} and listed in Table~\ref{tab: RT data}), because, on closer inspection, they are more representative than those for the cooling run. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig1}\\ \caption{(a) The specific enthalpy and (b) the specific heat capacity of Rubitherm RT~$27$ obtained from ASC measurements performed in \cite{Glor11}. Either thick line represents more than $21$ thousand data points. The inset in (b) shows the heat capacity near its two foot regions.} \label{fig: data} \end{figure} \begin{table} \footnotesize \centering \begin{tabular}{lllll} \hline \hline Quantity & Symbol & Value & Unit & Reference \\ \hline Phase change temperature & $T_\pc$ & $300.5$ & K & \cite{Glor11} \\ Enthalpy change between $T_\pc \pm \SI{5}{K}$& $\De h$ & $165$ & $\si{kJ.kg^{-1}}$ & \cite{Glor11} \\ Solid phase density & $\rho_s$ & $880$ & $\si{kg.m^{-3}}$ & Producer \\ Liquid phase density & $\rho_l$ & $760$ & $\si{kg.m^{-3}}$ & Producer \\ \hline \hline \end{tabular} \caption{The basic properties of Rubitherm RT~$27$ (produced by Rubitherm GmbH, Germany).} \label{tab: RT data} \end{table} \section{Single-crystalline PCMs: Extremely sharp peaks and jumps} \label{sec: single} We shall consider a phase change that occurs between two phases. Then a jump in the specific enthalpy, $h$, is expected to interpolate between the specific enthalpies, $h_1$ and $h_2$, of the two phases; i.e., \begin{subequations} \label{eq: h,c single} \begin{equation} \label{eq: h single} h = h_1 + (h_2 - h_1) \, \eta, \end{equation} where the quantity $0 < \eta < 1$ describes the precise form of the interpolation. Since $\eta = (h-h_1)/(h_2-h_1)$, it has the meaning of a normalized enthalpy and is dimensionless. It is further expected that a peak in the heat capacity, $c_p$, is the sum of the excess and baseline heat capacities, \begin{equation} c_p = c_\ex + c_\bs. \end{equation} Similarly to $h$, the baseline capacity should interpolate between the heat capacities, $c_1$ and $c_2$, of the two phases, \begin{equation} c_\bs = c_1 + (c_2 - c_1) \, \eta', \end{equation} where $0 < \eta' < 1$ is the corresponding interpolation function. On the other hand, the excess capacity has the shape of a peak, for it is associated with the phase change itself. It may be written as the product \begin{equation} c_\ex = c_0 \ga, \end{equation} \end{subequations} where $c_0$ is its maximal value (about two orders of magnitude larger than the single-phase capacities $c_1$ and $c_2$), and $0 < \ga \leq 1$ is a dimensionless quantity describing the peak in $c_\ex$. 
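Although the interpolation functions $\eta$, $\eta'$ and the peak function $\ga$ are specified only by the microscopic theory discussed next, the decomposition in Eqs.~\eqref{eq: h,c single} can already be applied to measured data. The following Python sketch (ours, not part of the original analysis) extracts the baseline and excess heat capacities and the latent heat from arrays of measured $T$, $h$, and $c_p$; the temperatures bounding the phase-change region are assumed inputs, and the identification $\eta' \approx \eta$ adopted below is used.
\begin{verbatim}
import numpy as np

def split_excess(T, h, cp, T_lo, T_hi):
    """Separate measured cp into baseline and excess parts.
    Data below T_lo / above T_hi are treated as pure phase 1 / phase 2."""
    low, high = T < T_lo, T > T_hi
    # single-phase enthalpies: quadratic fits; heat capacities: linear fits
    h1 = np.poly1d(np.polyfit(T[low], h[low], 2))(T)
    h2 = np.poly1d(np.polyfit(T[high], h[high], 2))(T)
    c1 = np.poly1d(np.polyfit(T[low], cp[low], 1))(T)
    c2 = np.poly1d(np.polyfit(T[high], cp[high], 1))(T)
    eta = np.clip((h - h1) / (h2 - h1), 0.0, 1.0)   # normalized enthalpy
    c_bs = c1 + (c2 - c1) * eta                     # baseline heat capacity
    c_ex = cp - c_bs                                # excess heat capacity
    latent_heat = np.trapz(c_ex, T)                 # area of the excess peak
    return eta, c_bs, c_ex, latent_heat
\end{verbatim}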
At present there is no universal microscopic theory of phase changes for realistic models of materials that would predict jumps in the enthalpy and peaks in the heat capacity as given in Eqs.~\eqref{eq: h,c single}. Nevertheless, for simplified models, called lattice gases, such a general theory was already developed \cite{BoKo90,BoKo95}. It is appropriate only for processes in which temperature changes are performed quasistatically. Moreover, since lattice gases are suitable for the description of changes between crystalline phases, a PCM that we can thus describe must have a perfect, single-crystal microstructure. This is plausible for a solid phase of the studied PCM, but it is somewhat approximative for a liquid phase. If we invoke the theory from \cite{BoKo95}, then the results from Eqs.~\eqref{eq: h,c single} can be indeed obtained. Namely, it follows that the two interpolating functions are identical, $\eta \approx \eta'$, and can be both approximated by the function $J(x) = (1+\tanh x)/2$, while the peak function $\ga$ can be approximated by the function $P(x) = \cosh^{-2} x$. The functions $J$ and $P$ are similar in shape to the Gaussian error function and bell curve, respectively, but they approach their limiting values at a slower, exponential rate. The shorthand $x$ and the maximal value $c_0$ are given as \begin{equation} \label{eq: xT, c0, DT0} x = 2 \, \frac{T - T_\mx}{\De T_0}, \quad \De T_0 = \frac{4 k_B T_\pc^2}{\ell m}, \qquad \qquad c_0 = \frac \ell{\De T_0} = \frac{\ell^2 m}{4 k_B T_\pc^2}, \end{equation} where $T_\pc$ is the temperature of the phase change, $\ell = h_2(T_\pc) - h_1(T_\pc)$ is the specific latent heat associated with the change, $m$ is the sample mass (assumed to be constant), and $k_B$ is the Boltzmann constant. Note that $c_0$ and $\ell$ is equal to the \emph{height} and \emph{area} of the excess heat capacity peak, respectively, and $\De T_0 = \ell / c_0$ corresponds to its \emph{half-width}. The temperature $T_\mx$ is the maximum position of the total heat capacity $c_p$. It is slightly shifted with respect to the phase-change temperature due to surface effects (the influence of the surroundings), \begin{equation} \label{eq: Tm} T_\mx - T_\pc \approx \frac{\sig S}{\ell m} \; T_\pc. \end{equation} Here $\sig$ is the difference between the specific (per unit area) surface free energies of the two phases and $S$ is the surface size of the sample, both evaluated at $T = T_\pc$. Using the experimental data from Fig.~\ref{fig: data} for Rubitherm RT~$27$, we may use quadratic fits to determine the enthalpies $h_1$ and $h_2$ and linear fits to determine the heat capacities $c_1$ and $c_2$, and then calculate the normalized enthalpy $\eta$, baseline heat capacity $c_\bs$, and excess heat capacity $c_\ex$ (see Fig.~\ref{fig: data norm}). \begin{figure} \centering \includegraphics[width=\linewidth]{Fig2}\\ \caption{(a) The single-phase enthalpies $h_1$ and $h_2$ (the dotted lines) determined for Rubitherm RT~$27$ by fitting the data from Fig.~\ref{fig: data}(a) by quadratic polynomials. (b) The corresponding normalized enthalpy $\eta$. (c) The single-phase capacities $c_1$ and $c_2$ (the dotted lines) determined for Rubitherm RT~$27$ by fitting the data from Fig.~\ref{fig: data}(b) by linear polynomials, and the corresponding baseline heat capacity $c_\bs$ calculated for $\eta$ from part (b). (d) The excess heat capacity obtained as the difference $c_p - c_\bs$. 
The inset shows its two foot regions in greater detail.} \label{fig: data norm} \end{figure} The latter has the height $\SI{144.4}{kJ.kg^{-1}.K^{-1}}$, half-width $\SI{0.64}{K}$, and area $\SI{138.5}{kJ.kg^{-1}}$. In the above theoretical results these should coincide with $c_0$, $\De T_0$, and $\ell$, respectively. This might be perhaps true for samples of just few nanometers in size, such as for nano-encapsulated PCMs \cite{Peng13}. However, for samples of few micrometers in diameter, Eq.~\eqref{eq: xT, c0, DT0} with $T_\pc$ and $\rho$ from Table~\ref{tab: RT data} predicts a peak that is about \emph{eight orders of magnitude sharper and taller} than the one observed experimentally (if the latent heat is kept unchanged). In fact, the same conclusion follows for any PCM for which the heat capacity peak has the height $c_0$, half-width $\De T_0$, and area (latent heat) $\ell$ of orders $\SI{100}{kJ.kg^{-1}.K^{-1}}$, $\SI{1}{K}$, and $\SI{100}{kJ.kg^{-1}}$, respectively. Therefore, the theoretical description based on Eqs.~\eqref{eq: h,c single} and \eqref{eq: xT, c0, DT0} cannot be used to accurately reproduce various experimental data, and a more sophisticated approach must be adopted. \section{Polycrystalline PCMs: Wide peaks and jumps} \label{sec: poly} The main reason why Eqs.~\eqref{eq: h,c single} and \eqref{eq: xT, c0, DT0} yield results that may be inconsistent with experiment is the assumption that a PCM has a perfect, single-crystalline microstructure. If we consider a PCM that is polycrystalline, consisting of a number of single-crystal grains, then we will be able to fit experimental data with theoretical results with very good precision. A single-crystalline PCM is a special case when there is just one grain. \subsection{Model of polycrystalline PCMs} The grains, $G$, may be of various sizes and their surroundings may affect them in different ways. For simplicity, we will assume that the grains are of spherical shape and mutually independent (non-interacting) and that possible effects of void spaces between the grains are neglected. Then the enthalpy and heat capacity of a PCM sample is the sum of the enthalpies and heat capacities coming from its individual grains. The specific enthalpy and capacity may thus be expressed as the weighted averages, $h = \sum_G w_G h_G$ and $c_p = \sum_G w_G c_G$, of the grain specific enthalpies, $h_G$, and capacities, $c_G$, respectively. The weight of a given grain $w_G = m_G/m$ is equal to the fraction of its mass in the sample. Applying Eqs.~\eqref{eq: h,c single} -- \eqref{eq: Tm} to $h_G$ and $c_G$ (with the sample mass $m$ and sample surface $S$ replaced by the grain mass, $m_G$, and grain surface, $S_G$, respectively), we get \begin{subequations} \label{eq: h,c poly} \begin{equation} \label{eq: h poly} h \approx h_1 + (h_2 - h_1) \, J_\av, \qquad c_p = c_\ex + c_\bs \end{equation} with \begin{equation} c_\ex \approx c_0 \, P_\av, \qquad c_\bs \approx c_1 + (c_2 - c_1) \, J_\av. \end{equation} \end{subequations} These results have the same form as for single-crystalline samples: \begin{itemize} \item[(a)] the specific enthalpy interpolates between the single-phase specific enthalpies $h_1$ and $h_2$; \item[(b)] the specific heat capacity is the sum of the excess and baseline heat capacities; \item[(c)] the excess heat capacity is the product of $c_0$ and a dimensionless peak function; \item[(d)] the baseline capacity interpolates between the single-phase specific heat capacities $c_1$ and $c_2$, similarly to the enthalpy. 
\end{itemize} This time, however, the interpolation is described by the average $J_\av = \sum_G w_G J(x_G)$ of the grain jump functions $J(x_G)$. In addition, the capacity peak is described by the average $P_\av = \sum_G w_G^2 P(x_G)$ of the products $w_G P(x_G)$ of the weights and gain peak functions $P(x_G)$, because $c_\ex = \sum_G w_G [ c_{0G} P(x_G) ]$ with $c_{0G} = \ell^2 m_G / 4 k_B T_\pc^2 = c_0 (m_G/m) = c_0 w_G$. In the special case of a single-crystalline PCM, there is only one grain $G$ with $w_G = 1$, and Eq.~\eqref{eq: h,c poly} reduces back to Eqs.~\eqref{eq: h,c single}. We anticipate that Eq.~\eqref{eq: h,c poly} can predict \emph{much wider and smaller} heat capacity peaks and much wider enthalpy jumps for polycrystalline PCMs than for single-crystalline PCMs. Indeed, if a PCM is composed of many grains, then the jump and peak functions $J(x_G)$ and $P(x_G)$ from different grains are mutually shifted and of various widths, depending on the grain sizes and surface effects. Therefore, when multiplied by the (usually very small) terms $w_G$ and $w_G^2$, respectively, and summed together, the resulting averages $J_\av$ and $P_\av$ could be much wider and, in the latter case, much smaller. Moreover, the positions of $J(x_G)$ and $P(x_G)$ for different grains are inversely proportional to the grain diameter ($T_\mx - T_\pc \propto S_G/m_G \propto 1/d$). Hence, $J(x_G)$ and $P(x_G)$ are unevenly distributed in a given temperature range, so that their averages $J_\av$ and $P_\av$ and, therefore, the enthalpy jumps and heat capacity peaks are expected to be \emph{asymmetric} in general, in agreement with experimental data. In the following it is sufficient to focus on the excess heat capacity $c_\ex$, because the baseline heat capacity $c_\bs$ and enthalpy $h$ are obtained from the average $J_\av$ by Eq.~\eqref{eq: h,c poly}, and the latter can be calculated from $c_\ex$ by integration, \begin{equation} \label{eq: cJ from c_exc} J_\av \approx \frac1{\ell} \int_0^T c_\ex (T) \, dT, \end{equation} as can be easily verified. To evaluate the excess heat capacity $c_\ex$, we shall rewrite it in a more convenient form, using the PCM density, $\rho$, and grain diameter, $d$, both evaluated at the phase-change temperature $T_\pc$. We express the grain mass and surface as $m_G = \rho V_G = \pi \rho d^3 / 6$ and $S_G = \pi d^2$, respectively. Then the variations in the grain heat capacities $c_G$ between various grains are only due to the grain diameter $d$ and its surface free energy difference $\sig$ (see Eqs.~\eqref{eq: xT, c0, DT0} and \eqref{eq: Tm}). So, if we classify the grains according to their values of $d$ and $\sig$, we may express the excess heat capacity as a double sum, \begin{equation} \label{eq: c_exc double av} c_\ex = c_0 \sum_{i=1}^n N_i \, \Big( \frac{d_i}D \Big)^6 P_i, \qquad P_i = \sum_{j=1}^{N_i} \nu_{ij} \, P (x_{ij}), \end{equation} where $D$ is a diameter of the PCM sample at $T_\pc$. The first sum is over all grain diameters $d_1, \dots, d_n$. The second sum is over all values $\sig_1, \dots, \sig_{N_i}$ of the surface free energy differences in the grains of a fixed diameter $d_i$; the number of these grains is denoted as $N_i$. The quantity $\nu_{ij}$ is the fraction of the grains of diameter $d_i$ whose value of the surface free energy difference is $\sig = \sig_j$. The shorthand $x_{ij}$ stands for $x$ evaluated for a grain with a diameter $d_i$ and $\sig = \sig_j$. 
Since the numbers $N_i$ and weights $\nu_{ij}$ are unknown for Rubitherm RT~$27$, we shall consider simple forms of these weights to obtain an explicit formula for $c_\ex$ from Eq.~\eqref{eq: c_exc double av}. \subsection{Surface effects} We will assume that the grains are created in a random process so that the boundary conditions for various grains are irregular, which is why $\sig$ changes from one grain to another. We let $\sig_0$ denote the mean value of $\sig$. In addition, since we consider $\sig$ to be random and related to the grain boundary, we shall assume that its fluctuation (standard deviation) is inversely proportional to the square root of the number $M_i$ of atoms lying on the grain boundary, $\De\sig_i \propto 1/\sqrt{M_i}$. Thus, $\De\sig_i = b_0/d_i$, where $b_0 > 0$ is a constant. Taking the values $\sig_1, \dots, \sig_{N_i}$ to be equally spread, for simplicity, we may approximate $\nu_{ij}$ for a grain of diameter $d_i$ by the Gaussian form, \begin{equation} \label{eq: la Gauss} \nu_{ij} \approx \la_i (\sig_j) \, \de\sig_i, \qquad \la_i (\sig) = \frac1{\sqrt{2\pi} \, \De\sig_i} \, e^{- \frac12 \, \big( \frac{\sig - \sig_0}{\De\sig_i} \big)^2}, \end{equation} where $\de\sig_i = (\sig_{N_i} - \sig_1)/(N_i-1)$ is the distance between two adjacent values $\sig_j$. According to Eqs.~\eqref{eq: xT, c0, DT0} and \eqref{eq: Tm}, the half-width and maximum positions of the grain peaks $P(x_{ij})$ in the average $P_i$ from Eq.~\eqref{eq: c_exc double av} are given by \begin{equation} \De T_i = \frac{24 k_B T_\pc^2}{\pi \ell \rho d_i^3}, \qquad T_\mx^{ij} = \Big( 1 + \frac{6\sig_j}{\ell \rho d_i} \Big) T_\pc. \end{equation} While the half-width is the same for all peaks $P(x_{ij})$ in $P_i$, their maximum positions vary proportionally to $\sig_j$. Thus, the peaks with different $\sig_j$ are spread over a range between the temperatures $T_\mx^{i1}$ and $T_\mx^{iN_i}$ corresponding to the maximal and minimal value $\sig_1$ and $\sig_{N_i}$, respectively. The dominant peaks $P(x_{ij})$ in the average $P_i$ are, however, those with a high weight $\nu_{ij}$; i.e., the peaks corresponding to $\sig_j$ between $\sig_0 - \De\sig_i$ and $\sig_0 + \De\sig_i$. These are spread over a narrower range of half-width \begin{equation} \De\tau_i = \frac{6 \De\sig_i}{\ell \rho d_i} \, T_\pc. \end{equation} The average $P_i$ strongly depends on the ratio of the half-widths $\De T_i$ and $\De\tau_i$. Indeed, if $\De\tau_i$ is much smaller than $\De T_i$, there are only small shifts between the grain peaks $P(x_{ij})$, and the average $P_i$ is practically the same as a peak function for a single grain of diameter $d_i$. On the other hand, if $\De\tau_i$ is much larger than $\De T_i$, the grain peaks $P(x_{ij})$ are spread over a wide temperature range, and the average $P_i$ is \emph{much wider and smaller} than a grain peak (see Fig.~\ref{fig: aver}). Namely \cite{MPH15}, \begin{equation} \label{eq: Pi} P_i \approx \frac{\De T_i}{\sqrt{2\pi} \; \De\tau_i} \, e^{- y_i^2/2 }, \qquad y_i = \frac{T - T_\mx^i}{\De\tau_i} = \frac{d_i}{b_0} \, \Big( \frac{\ell \rho d_i}6 \, \frac{T-T_\pc}{T_\pc} - \sig_0 \Big), \end{equation} provided the ratio \begin{equation} \label{eq: cond} \frac{\sqrt{2\pi} \; \De\tau_i}{\De T_i} = \Big( \frac\pi2 \Big)^{3/2} \, \frac{b_0 d_i}{k_B T_\pc} \gg 1. \end{equation} Here $T_\mx^i$ is the maximal temperature $T_\mx$ for a grain of diameter $d_i$ taken at the mean value $\sig = \sig_0$. 
Thus, while every grain peak $P(x_{ij})$ has the height $1$ and half-width $\De T_i$, the average $P_i$ has, according to Eq.~\eqref{eq: Pi}, the height $\De T_i / \sqrt{2\pi} \; \De\tau_i \ll 1$ and half-width about $\sqrt{2\pi} \; \De\tau_i \gg \De T_i$. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig22}\\ \caption{(a) The grain peak functions $P(x_{ij})$ for $N_i = 200$ (every ninth peak is shown). The value $\sqrt{2\pi} \; \De\tau_i$ is $11.9$ times larger than the half-width $\De T_i$ of all grain peaks. (b) The same grain peaks multiplied by the weights $\nu_{ij}$ from Eq.~\eqref{eq: la Gauss}. A maximal value $\la_0$ of the products is indicated. (c) The average $P_i$ of the grain peaks obtained numerically (the full line) and from the approximation from Eq.~\eqref{eq: Pi} (the dotted line). The maximal value (height) is $P_i^\mx = \De T_i / \sqrt{2\pi} \; \De\tau_i$. (d) The average $P_i$ is smaller and wider than a grain peak function by the factor $P_i^\mx$.} \label{fig: aver} \end{figure} \subsection{Peak in the excess heat capacity: the final formula } To get the excess heat capacity, it remains to perform the averaging over the grain diameters $d_1, \dots, d_n$. The simplest case is that the diameters are equally spread and that there is an equal number of grains of a given diameter, \begin{equation} N_i = \const = \frac{D^3}{d_0^3}, \qquad d_0^3 = \sum_{i=1}^n d_i^3, \end{equation} where we used that $\sum_i N_i (\pi d_i^3/6)$ must be equal to the total PCM volume $\pi D^3/6$. Then Eqs.~\eqref{eq: c_exc double av} and \eqref{eq: Pi} yield \begin{equation} \label{eq: c_exc explicit} c_\ex \approx \frac{\ell^2 \rho}{6\sqrt{2\pi} \; b_0 d_0^3 T_\pc} \, \sum_{i=1}^n d_i^5 \, e^{- y_i^2/2}. \end{equation} Thus, the excess heat capacity is a sum of peaks whose maxima are located at $T_\mx^i \propto 1/d_i$. Thus, as $d_i$ increases, these positions get closer and closer to the phase-change temperature $T_\pc$, but their mutual distances are not equal. Consequently, $c_\ex$ is a sum of unevenly distributed peaks and will in general be \emph{asymmetric}. This is a pure finite-size effect. The special case when $c_\ex$ is symmetric can occur only if all peaks have the same position, $\sig_0 = 0$. \section{Results and discussion} \label{sec: appl} Let us apply the above theoretical results to fit the experimental data for Rubitherm RT~$27$ plotted in Fig.~\ref{fig: data}. Since the data for the heat capacity are quite oscillating, especially near its maximum, let us consider also their averaged version that is much smoother and should be more representative (see Figs.~\ref{fig: data av} and \ref{fig: data norm av}). In the fitting procedure presented below we choose the minimal grain diameter $d_1 = \SI{10}{nm}$ and number of different grain sizes $n = 300$. The maximal grain diameter will be allowed to attain a range of values, $d_n = \SI{0.1}{\micro m}$, $\SI{0.15}{\micro m}$, $\dots$, $\SI{0.5}{\micro m}$, to observe the sensitivity of the results to this parameter. In addition, the sample density at $T_\pc$ will be estimated as an average of the solid and liquid densities, $\rho = (\rho_s + \rho_l)/2 = \SI{820}{kg.m^{-3}}$ (see Table~\ref{tab: RT data}). Thus, there are four parameters in Eq.~\eqref{eq: c_exc explicit} for $c_\ex$ that remain to be fitted to the data: the phase-change temperature $T_\pc$, specific latent heat $\ell$, mean value $\sig_0$, and width $b_0$. 
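Before turning to the fitting of these four parameters, we note that Eq.~\eqref{eq: c_exc explicit} is straightforward to evaluate numerically once they are chosen. The following Python sketch (illustrative only; the parameter values in the example call are placeholders of a plausible order of magnitude, not the fitted ones) uses the grain-size range and density quoted above.
\begin{verbatim}
import numpy as np

def c_ex(T, T_pc, ell, sigma0, b0, d_min=10e-9, d_max=0.1e-6, n=300, rho=820.0):
    """Excess heat capacity of Eq. (c_exc explicit): a weighted sum of
    Gaussian peaks, one per grain diameter d_i. SI units throughout."""
    T = np.atleast_1d(np.asarray(T, float))
    d = np.linspace(d_min, d_max, n)          # equally spread grain diameters
    d0_cubed = np.sum(d**3)
    # y_i = (d_i/b0) * (ell*rho*d_i/6 * (T - T_pc)/T_pc - sigma0)
    y = (d / b0) * (ell * rho * d / 6.0 * (T[:, None] - T_pc) / T_pc - sigma0)
    pref = ell**2 * rho / (6.0 * np.sqrt(2.0 * np.pi) * b0 * d0_cubed * T_pc)
    return pref * np.sum(d**5 * np.exp(-0.5 * y**2), axis=1)

# Placeholder parameters: T_pc ~ 303.7 K, ell ~ 138.5 kJ/kg, sigma0 < 0
# (peak below T_pc), b0 large enough for the Gaussian approximation to hold.
T = np.linspace(295.0, 306.0, 500)
peak = c_ex(T, T_pc=303.7, ell=1.385e5, sigma0=-0.02, b0=3.0e-11)
print(T[np.argmax(peak)], peak.max())   # maximum position (K), height (J/(kg K))
\end{verbatim}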
To determine them, four independent properties of $c_\ex$ taken from the experimental data must be fitted by theoretical expressions. We shall proceed as follows. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig3}\\ \caption{The original data from Fig.~\ref{fig: data} (lines) and the averaged data (dots) obtained for (a) the specific enthalpy and (b) the specific heat capacity of Rubitherm RT~$27$. In (c) and (d) the averaged data for the specific heat capacity near the peak maximum and near the two peak foots, respectively, are shown in detail.} \label{fig: data av} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig3b}\\ \caption{(a) The single-phase enthalpies $h_1$ and $h_2$ (the dashed lines) determined for Rubitherm RT~$27$ by fitting the averaged data from Fig.~\ref{fig: data av}(a) by quadratic polynomials. (b) The corresponding normalized enthalpy $\eta$. (c) The single-phase capacities $c_1$ and $c_2$ (the dashed lines) determined for Rubitherm RT~$27$ by fitting the averaged data from Fig.~\ref{fig: data av}(b) by linear polynomials, and the corresponding baseline heat capacity $c_\bs$ calculated for the interpolating function $\eta$ from part (b). (d) The excess heat capacity obtained as the difference $c_p - c_\bs$.} \label{fig: data norm av} \end{figure} First, we consider the \emph{area} under the peak exhibited by $c_\ex$. Since $c_\ex$ is an average of the grain peaks all of which have the area equal to $\ell$ (see Section~\ref{sec: single}), the peak of $c_\ex$ has also the area equal to $\ell$. Calculating the peak area for the original data in Fig.~\ref{fig: data norm}(d) and averaged data in Fig.~\ref{fig: data norm av}(d), we get \begin{equation} \ell = \SI{138.5}{kJ.kg^{-1}}, \qquad \ell = \SI{139.2}{kJ.kg^{-1}}, \end{equation} respectively, which is about $84$ \% of the total enthalpy change in the range between $\SI{295.5}{K}$ and $\SI{305.5}{K}$ (see Table~\ref{tab: RT data}). The reason of this discrepancy is that the excess heat capacity decreases to zero in both foot regions, and so it has a smaller area than the total heat capacity. Second, we consider the \emph{maximum position}, $T^*$, of the peak exhibited by $c_\ex$. If we express $T^*$ in a form similar to $T_\mx$, \begin{equation} \label{eq: T*} T^* = \Big( 1 + \frac{6\sig_0}{\ell \rho d^*} \Big) \, T_\pc, \end{equation} where $d^*$ is a suitable diameter, then the condition $d c_\ex (T^*)/dT = 0$ for the maximum may be rewritten as \begin{equation} \label{eq: L* eq} \sum_{i=1}^n d_i^8 (d_i - d^*) \, e^{- z_i^2/2} = 0, \qquad z_i = y_i (T^*) = \frac{\sig_0 d_i}{b_0} \, \Big( \frac{d_i}{d^*} - 1 \Big). \end{equation} The diameter $d^*$ is the solution to this equation. It depends only on the ratio $r = \sig_0 / b_0$ (and not on particular experimental data). This dependence can be calculated numerically and is plotted in Fig.~\ref{fig: dst}. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig4}\\ \caption{(a) The size $d^*$ (relative to $d_n$) specifying the maximum position $T^*$ of the excess heat capacity in dependence on the ratio $r = \sig_0 / b_0$ calculated for $d_n = \SI{0.1}{\micro m}$, $\SI{0.15}{\micro m}$, $\dots$, $\SI{0.5}{\micro m}$ (the lines from bottom to top). In (b) and (c) the size $d^*$ for negative ratios $\sig_0 / b_0$ and its the maximal and minimal values, respectively, are shown in detail.} \label{fig: dst} \end{figure} Third, we consider the \emph{asymmetry} factor, $0 < \al < 1$, of the peak in $c_\ex$. 
It is introduced as the ratio of the area under the peak that lies below the maximum position $T^*$ to the peak's total area $\ell$. Its value for the data in Fig.~\ref{fig: data norm}(d) and their averaged version in Fig.~\ref{fig: data norm av}(d) is $\al = 0.884$ and $\al = 0.821$, respectively. A theoretical expression for $\al$ follows from Eq.~\eqref{eq: c_exc explicit}, \begin{equation} \label{eq: al} \al \equiv \frac1\ell \int_0^{T^*} c_\ex (T) \, dT \approx \frac1{2 d_0^3} \sum_{i=1}^n d_i^3 \Big( \erf \frac{z_i}{\sqrt2} + 1 \Big), \end{equation} where $\erf$ is the Gauss error function. For the already obtained dependence $d^*(r)$ we now calculate the theoretical dependence of $\al$ on the ratio $r$. It is plotted in Fig.~\ref{fig: al}. Fitting these theoretical results to the experimental value of $\al$, we get the ratio $r$ for each diameter $d_n$, as is shown in Fig.~\ref{fig: FitProc2}(a). Note that the ratio $r$ is negative. This is when the grain peaks of which $c_\ex$ is a sum are shifted below the phase-change temperature, leading to $c_\ex$ with most of its area lying below its maximum position ($\al > 1/2$). A positive ratio $r$ would correspond to $c_\ex$ with most of its area lying above its maximum position ($\al < 1/2$). \begin{figure} \centering \includegraphics[width=\linewidth]{Fig5}\\ \caption{(a) The asymmetry factor $\al$ in dependence on the ratio $r = \sig_0 / b_0$ calculated for the size $d^*$ from Fig.~\ref{fig: dst}. (b) The asymmetry factor $\al$ for negative ratios $\sig_0 / b_0$ and near its maximal values. The experimental values of $\al$ for the original and averaged data are shown as dashed lines.} \label{fig: al} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig6}\\ \caption{(a) The ratio $r$ for various choices of $d_n$ as obtained by fitting the theoretical dependence $\al(r)$ from Fig.~\ref{fig: al}(b) to the experimental value of $\al$. (b) The product $b_0 T_\pc$ calculated from fitting the height of the peak $H$ to its experimental value. (c) The width $b_0$ calculated from $b_0 T_\pc$ and $T_\pc$. (d) The mean value $\sig_0$ obtained from (The results for the original/averaged data are depicted as squares/circles.)} \label{fig: FitProc2} \end{figure} Fourth, we consider the \emph{height} of the excess heat capacity peak for which Eq.~\eqref{eq: c_exc explicit} yields \begin{equation} \label{eq: cP*} H = c_\ex (T^*) \approx \frac{\ell^2 \rho}{6\sqrt{2\pi} \; b_0 d_0^3 T_\pc} \, \sum_{i=1}^n d_i^5 \, e^{- z_i^2/2}. \end{equation} Using the already determined ratio $r$ and dependence $d^*(r)$, this formula yields the peak height in the form $H = \const / b_0 T_\pc$ for each $d_n$. The experimental value $H = \SI{144.40}{kJ.kg^{-1}.K^{-1}}$ and $H = \SI{120.54}{kJ.kg^{-1}.K^{-1}}$ taken from the data in Fig.~\ref{fig: data norm}(d) and averaged data in Fig.~\ref{fig: data norm av}(d), respectively, yield the product $b_0 T_\pc$ plotted in Fig.~\ref{fig: FitProc2}(b). We may now use the experimental value of the maximum positions $T^* = \SI{300.4}{K}$ and $T^* = \SI{300.3}{K}$ for the original and averaged data to calculate the phase-change temperature from the expression $T_\pc = T^* - 6r(b_0 T_\pc) / \ell \rho d^*$ (see Eq.~\eqref{eq: T*}) and the already determined parameters. 
This yields practically the same values for all chosen diameters $d_n$ (the differences between $T_\pc$ for various $d_n$ do not exceed \SI{2}{mK}), \begin{equation} T_\pc = \SI{303.7}{K} \; (\SI{30.5}{\degreeCelsius}) \qquad \text{and} \qquad T_\pc = \SI{303.8}{K} \; (\SI{30.7}{\degreeCelsius}), \end{equation} respectively. Note that these values of $T_\pc$ are higher than the phase-change temperature determined in \cite{Glor11} by more than $\SI{3}{K}$ (see Table~\ref{tab: RT data}). The determined values of $r$, $b_0 T_\pc$, and $T_\pc$ yield the width $b_0 = (b_0 T_\pc)/T_\pc$ and mean value $\sig_0 = b_0 r$. They are plotted in Fig.~\ref{fig: FitProc2}(c) and (d). Finally, we must verify the condition from Eq.~\eqref{eq: cond} to see whether our theoretical formulas can actually be applied. From the fitted values of $b_0$ and $T_\pc$ we conclude that the ratio $\sqrt{2\pi}\;\De\tau_i / \De T_i$ has the lowest value $157.6$ (for $d_n = \SI{0.1}{\micro m}$ and $d_i = d_1 = \SI{10}{nm}$), so that the condition is indeed satisfied. Knowing the four parameters $T_\pc$, $\ell$, $\sig_0$, and $b_0$, we obtain the excess heat capacity $c_\ex$ from Eq.~\eqref{eq: c_exc explicit}, the jump function $J_\av$ from Eq.~\eqref{eq: cJ from c_exc}, and the heat capacity $c_p$ and enthalpy $h$ from Eq.~\eqref{eq: h,c poly}. The latter are plotted in Figs.~\ref{fig: Fits cp} and \ref{fig: Fits h}. They are practically identical for all chosen diameters $d_n$, so only the plots for $d_n = \SI{0.1}{\micro m}$ are shown. The agreement between the theoretical results and experimental data is very good. \begin{figure} \centering \includegraphics[width=\linewidth]{Fig7}\\ \caption{The comparison of the experimental data on the specific heat capacity with theoretical results (the dashed lines) calculated from the fitted parameters obtained for the original data (parts (a) and (b)) and averaged data (parts (c) and (d)). The theoretical results are plotted for $d_n = \SI{0.1}{\micro m}$ (the other diameters yield practically the same peaks).} \label{fig: Fits cp} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{Fig8}\\ \caption{The comparison of the experimental data on the specific enthalpy with theoretical results (the dashed lines) calculated from the fitted parameters obtained for the original data (parts (a) and (b)) and averaged data (parts (c) and (d)). The theoretical results are plotted for $d_n = \SI{0.1}{\micro m}$ (the other diameters yield practically the same peaks).} \label{fig: Fits h} \end{figure} \section{Conclusions} We presented a quasistatic approach to describe the temperature dependence of the specific enthalpy $h$ and heat capacity $c_p$ of the paraffin-based PCM Rubitherm~RT~27. We used experimental data for the heating run that were measured by adiabatic scanning calorimetry, in which thermodynamic equilibrium of the investigated samples can be ensured. If the PCM were a single crystal, a microscopic theory of first-order phase transitions in finite systems would predict heat capacity peaks that are much sharper and taller (by several orders of magnitude) than those measured in experiments. Therefore, we assumed that the PCM has a polycrystalline structure and modeled it as a large ensemble of small single-crystal grains. Then we were able to obtain theoretical results for $h$ and $c_p$ that could be fitted to experimental data with very good precision. We used only four fitting parameters, including the specific latent heat $\ell$ and phase-change temperature $T_\pc$.
Their values were adjusted from four characteristics of the excess heat capacity peak (its area, maximum position, height, and asymmetry). The key points of our approach may be summarized as follows. \begin{enumerate} \item We provided a procedure to separate the baseline and excess heat capacities for a phase change between two phases, using the experimental data on the enthalpy and heat capacity. \item The presented equilibrium approach predicts an asymmetric jump and peak in the enthalpy and heat capacity, respectively, as a result of finite-size effects. \item The specific latent heat $\ell$ was identified with the area of the peak in the excess heat capacity. For the considered PCM it formed $84$ \% of the total enthalpy change in the range between $\SI{295.5}{K}$ and $\SI{305.5}{K}$. \item We determined the phase-change temperature $T_\pc$ from the height and maximum position of the peak in the excess heat capacity. Its value was higher by $\SI{3.2}{K}$ than the quoted one. \end{enumerate} Since the microscopic structure of the studied material was not taken into account in depth, our results are quite robust and could be applied to other PCMs. In addition, our results can be extended to the phase changes with coexistence of more than two phases, using the necessary modifications to the description of the single grain behavior. This may be a topic for a future investigation. A weak point of our approach is the obtained value of the phase-change temperature that is rather shifted from the maximum position of the measured heat capacity peak. This is a consequence of taking the mean value $\sig_0$ to be fixed for all grain sizes. We may improve the results by considering $\sig_0$ to be varying with $L$ over a range of values. Then the half-width of this range would be an additional, fifth fitting parameter that must be determined from an additional property of the excess heat capacity peak. In this sense, the presented approach uses a minimal number of fitting parameters. Another weak point is the tacit assumption that the PCM phases are crystalline, which may be a crude approximation, especially for the liquid phase. In addition, it is farfetched to apply the theory of phase changes from \cite{BoKo95} to the behavior of grains in a real PCM. Nevertheless, the precise shapes of the jump and peak functions $J$ and $P$ associated with the individual grains are not essential in the final results (see Eq.~\eqref{eq: c_exc explicit}), because such a detailed information is lost after their averages $J_\av$ and $P_\av$ over the grains are taken. Finally, the presented approach is restricted to the quasistatic regime. Thus, non-equilibrium effects are not considered, even though they may have an additional effect on the shape and position of the enthalpy jumps and heat capacity peaks. \section*{Acknowledgements} The research in this paper was supported by the Czech Science Foundation, Project No.~P105/12/G059, and by the VEGA project No.~1/0162/15. The authors would like to thank Prof.~Christ Glorieux and Dr.~Jan Leys from the Catholic University of Leuven, Belgium, for providing experimental data.
\section{Motivation, Basics} The $min$-algebra $\N^\infty=\N\sqcup\{+\infty\}$, together with the binary operation of taking the minimum, is our main motivation. \begin{defn} By a tropical space, we mean a pointed set $(T,\infty)$ together with a $minimum$-function $\min=\min_T:T\times T\to T$ satisfying $$\min (t,\infty)=\min(\infty,t)=t\textrm{ for all }t\in T,$$ and $\min(t,t)=t$ for all $t\in T$. \end{defn} Notice that under this operation, $T$ turns into a monoid in which every element is idempotent. The tropical space $T$ is called commutative if $\min(t,t')=\min(t',t)$ for all $t,t'\in T$. Next, we make it clear what we mean by a tropical module, tropical algebra, etc. Let $R$ be a commutative ring. By a left tropical $R$-module $T$ we mean a tropical space $T$ which has the structure of a left $R$-module such that the $R$-action fixes $\infty$, i.e. $r\infty=\infty$. Similarly, we can define a tropical algebra $T$ to be a tropical space $T$ together with a multiplication which is compatible with the tropical structure.\\ Notice that we may consider the category of these objects, say the category of tropical $R$-modules, together with the forgetful functor $$\textrm{RMod}_\tro\longrightarrow \textrm{RMod}.$$ Next, we show how it is possible to construct examples of these spaces. This of course reduces to finding reasonable ways of defining the $min$-function. \begin{note} We note that it is possible to have a dual tropical structure given by taking the \textit{maximum} of two given elements. We will use this in some of our important examples. \end{note} \subsection{Examples} \subsubsection{Tropicalising normed spaces} Let us write $\R_{\geqslant}$ for the half line of nonnegative real numbers. Let $(M,|\ |)$ be a space with a norm function $|\ |:M\to\R_{\geqslant}$. Notice that any norm gives rise to a metric, and any metric space $(M,d)$ with a chosen point $0\in M$ can be turned into a normed space by setting $|v|=d(v,0)$ for any $v\in M$. Given a norm, define $m:M\times M\to \R_{\geqslant}$ by $$m(a,b)=\min(|a|,|b|)$$ where the right hand side is the minimum function of the real line. Notice that if we choose $M=\R$ with the usual metric on $\R$, then $m(a,b)=\min(a,b)$. Now, we may define $\min=\min_M:M\times M\to M$ by $\min(a,a)=a$ and $$\min(a,b)=\left\{ \begin{array}{ll} a&\textrm{if }m(a,b)=|a|,\\ b&\textrm{if }m(a,b)=|b|. \end{array}\right.$$ For a given normed space $M$, we define the \textit{tropicalisation} of $M$ by $M^\tro=M\sqcup\{\infty\}$ with the generalised $min$-function satisfying $$\min(m,\infty)=\min(\infty,m)=m\textrm{ for all }m\in M$$ and $\min(\infty,\infty)=\infty$. Hence we obtain a tropicalisation functor $$Trop:\textrm{Normed-Space}\longrightarrow \textrm{Trop-Space}.$$ Notice that the category of normed spaces contains the category of finite-dimensional vector spaces inside it.\\ The fact that the tropicalisation of $M$ depends on the norm, or say the metric, shows that this can be used as an invariant of metric spaces, and not necessarily as an invariant of topological spaces. The reason is that it is possible to choose different metrics, yielding the same topology on a space. \begin{note} We note that the tropicalisation functor is in fact defined from the category of totally ordered sets into the category of tropical sets, as the notion of order naturally tells us how to choose the smaller of two given elements.
\end{note} \subsubsection{Tropicalising Modules with projections} Notice that on the real line one has $$\min(a,b)=\frac{a}{2}+\frac{b}{2}-\frac{1}{2}|a-b|.$$ We use this as an analogy to define $\min$ on a class of modules. Suppose we have a ring $R$ which includes $\frac{1}{2}$, say $R=\Z[\frac{1}{2}], \mathbb{Q},\R,\C$, etc. Let $M$ be an $R$-module together with a projection, i.e. a mapping $\tau:M\to M$ such that $\tau^2=\tau$. We then define $$\min{_M^\tau(a,b)}=\frac{1}{2}(a+b-\tau(a-b)).$$ This then defines a $min$-function on $M$. Note that one may restrict to choosing $\tau$ to be an automorphism of $R$-modules, but for the moment we shall not impose this restriction. We can consider the tropicalisation $M^\tro_\tau=M\sqcup\{\infty\}$ with the generalised $min$-function defined as $$\min{_{M^\tro_\tau}(m,\infty)}=\min{_{M^\tro_\tau}(\infty,m)}=m\textrm{ for all }m\in M,$$ and $\min{_{M^\tro_\tau}(\infty,\infty)}=\infty$. This then defines the tropicalisation functor $$Trop:\textrm{Rmod}^{\textrm{pro}}\longrightarrow \textrm{Trop-Space}$$ where the left hand side is the category of $R$-modules with projections. If we give the structure of a monoid to $M^\tro_\tau$ by $\min{_{M^\tro_\tau}}$ and define the multiplication on $M^\tro_\tau$ by generalising the addition operation of $M$, then we obtain an example of a tropical algebra. More precisely, we define $\oplus,\odot$ by $$\begin{array}{ll} m\oplus m'=\min{_{M^\tro_\tau}(m,m')} \textrm{ for all }m,m'\in M^\tro_\tau,\\ \\ m\odot m'=\left\{ \begin{array}{ll} m+m' &\textrm{ if }m,m'\in M\\ \infty &\textrm{ if }m=\infty,\textrm{ or }m'=\infty. \end{array}\right. \end{array}$$ Notice that here $\odot$ is commutative. We will usually drop $\odot$ from our notation. This then gives the structure of a semi-ring to $(M^\tro_\tau,\oplus,\odot)$. Observe that the inclusion $$M\to M^\tro_\tau$$ acts like the \textit{exponential map} as it sends addition to multiplication. If we choose $\tau$ to be an automorphism, then we can give the structure of an $R$-module to $M^\tro_\tau$ by requiring that the action fixes $\infty$ and that on the points of $M^\tro_\tau\setminus\{\infty\}$ it acts in the same way as on $M$. This then makes the inclusion map $M\to M^\tro_\tau$ into a map of $R$-modules.\\ Finally, we note that it is possible to define a ``dual'' tropical structure on $M$. For motivation, notice that in $\R$, we have $$\max(a,b)=\frac{1}{2}(a+b+|b-a|).$$ One may then fix a projection $\tau:M\to M$ with $\tau^2=\tau$ and define $$\max{_M^\tau(a,b)}=\frac{1}{2}(a+b+\tau(a-b)).$$ It is then possible to proceed as we did with the $min$-function. \section{Tropical Euclidean and Grassmannian Spaces} I would like to approach these topics with the hat of a topologist on. These are quite familiar objects, and it will be interesting to see what they look like geometrically.\\ In topology, there are different approaches to defining Grassmannian spaces. Let us work with $\R$ for a moment. We consider $\R$ with its usual addition and multiplication. 
The Grassmannian $G_k(\R^{n+k})$ may be viewed as the space of $k$-dimensional subspaces of $\R^{n+k}$, which can be described as a quotient of another space, namely the Stiefel space $V_k(\R^{n+k})$ of $k$-frames in $\R^{n+k}$, together with the quotient map $$V_k(\R^{n+k})\to G_k(\R^{n+k}).$$ Another way of defining $G_k(\R^{n+k})$ is to consider it as the quotient $$\frac{O(n+k)}{O(n)\times O(k)}$$ where $O(n)$ is the orthogonal group, the group of isometries of $\R^n$.\\ The key element, in any of these approaches, is the algebraic structure of $\R^n$ as a ``set with addition'' together with its structure as an $\R$-module. On the other hand, the standard $\R$-module structure on $\R^n$ is obtained by using the multiplication operation $\R\times\R\to\R$.\\ We consider the tropicalisation of the real line and determine a model for it, which is more familiar to a topologist. We then consider the ways that one can define the multiplication operation on the tropical line, granted that the addition is given by the tropical structure. \subsection{A Model for $(\R^\tro)^n$} To begin with, notice that $\R^\tro$ as a set is in a one to one correspondence with $\R_{\geqslant}$, where by the latter we mean the nonnegative real numbers. We fix such a correspondence as follows. First, notice that there is a homeomorphism of topological spaces $f:\R_{>}\to\R$ defined by $$f(x)=-\ln x$$ where $\R_{>}$ stands for the open half line of positive real numbers. We extend this to $f:\R_{\geqslant}\to\R^\tro$ by setting $$f(0)=\infty.$$ This then enables us to get a one to one correspondence $$f^{\times n}=(f,\ldots,f):\R_{\geqslant}^n\to(\R^\tro)^n$$ where $X^n$ denotes the $n$-fold Cartesian product of $X$ with itself. Note that it is quite straightforward to see that the category of tropical spaces is closed under the Cartesian product. \begin{note} For a topologist the object $\R_{\geqslant}^n$ is a familiar one. This is the space that is used as the model space to define \textit{manifolds with corners}, where $$\partial\R_{\geqslant}^n=\R_{\geqslant}^n-\mathrm{interior}(\R_\geqslant^n)$$ corresponds to different types of (potential) singularities that such a manifold can have, whereas the interior corresponds to the smooth parts of such manifolds. \end{note} \begin{rmk} Notice that the set $\R^\tro$ has two components, hence the set $(\R^\tro)^n$ will have $2^n$ components. We may write this as $$(\R^\tro)^n=\coprod_{0\leqslant k\leqslant n}(\R^k)^{\sqcup {n\choose k}},$$ where by $(\R^k)^{\sqcup {n\choose k}}$ we mean the ${n\choose k}$-fold $\coprod$-product of $\R^k$ with itself. We can make this more precise. For instance, in $(\R^\tro)^2$ we have one copy of $\R^2$, two copies of $\R$ and one copy of $\R^0$ in the following way: $\R^2$ corresponds to itself, one copy of $\R$ corresponds to $\{\infty\}\times\R$, another copy of $\R$ corresponds to $\R\times\{\infty\}$, and $\R^0$ corresponds to $(\infty,\infty)$. Under the correspondence of $(\R^\tro)^2$ with $\R_{\geqslant}^2$ we see that $\R^2$ corresponds to the interior of $\R_{\geqslant}^2$, $\R\times\{\infty\}$ corresponds to the $x$-axis without the origin, $\{\infty\}\times\R$ corresponds to the $y$-axis without the origin, and $(\infty,\infty)$ corresponds to the origin. \end{rmk} Our next objective is to define an action on $\R^\tro$ and make sense of $k$-dimensional subspaces of $(\R^\tro)^{n+k}$. Here, we have to choose which object we would like to act on $\R^\tro$. We may choose $\R$ to act on $\R^\tro$, or we may choose $\R^\tro$ to act on $\R^\tro$. 
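The correspondence above is concrete enough to check numerically. The following Python sketch is only an illustration, not part of the formal development (it represents the point $\infty$ by the floating-point infinity); it implements $f(x)=-\ln x$ together with its inverse $g(t)=e^{-t}$, and verifies on a few sample points that the decreasing map $f$ sends the maximum of two nonnegative reals to the generalised minimum on $\R^\tro$, which is exactly the compatibility used in the next subsection.
\begin{verbatim}
import math

INF = math.inf  # stands in for the tropical point "infinity"

def f(x):
    """The model map f : R_{>=0} -> R^trop, f(x) = -ln(x), with f(0) = infinity."""
    return INF if x == 0.0 else -math.log(x)

def g(t):
    """The inverse map g : R^trop -> R_{>=0}, g(t) = exp(-t), with g(infinity) = 0."""
    return 0.0 if t == INF else math.exp(-t)

def trop_min(s, t):
    """Generalised min-function on R^trop; infinity acts as the identity element."""
    return min(s, t)

# f is decreasing, so it sends the maximum on R_{>=0} to the minimum on R^trop,
# and g sends it back.
for a, b in [(0.0, 2.0), (0.5, 3.0), (1.0, 1.0), (0.0, 0.0)]:
    assert f(max(a, b)) == trop_min(f(a), f(b))
    assert abs(g(trop_min(f(a), f(b))) - max(a, b)) < 1e-12
\end{verbatim}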
\subsubsection{A Tropical Structure on $\R^n_\geqslant$} We consider the ``dual'' tropical structure on $\R_{\geqslant}$ given by taking the maximum of two given values. This operation has the element $0\in\R_{\geqslant}$ as its minimum, and hence as its identity element. This then induces a tropical structure on $\R_{\geqslant}^n$ given by $$v\oplus v'=(\max(v_1,v'_1),\ldots,\max(v_n,v'_n))$$ where $v,v'\in \R_{\geqslant}^n$. In this case $0\in\R_{\geqslant}^n$ corresponds to the minimum element, while its image under $f^{\times n}:\R_{\geqslant}^n\to(\R^\tro)^n$ corresponds to the maximum element, i.e. $(\infty,\ldots,\infty)$. Notice that $f$ and its inverse $g$, defined below, are both decreasing functions, hence they respect the tropical structure. Observe that the positive real line $\R_>$ is a group under multiplication. In fact $\R_\geqslant$ is the tropicalisation of $\R_>$ viewed as a group under multiplication. This then enables us to define the multiplication $\R_\geqslant\times\R_\geqslant\to\R_\geqslant$ to be the usual multiplication of two real numbers, i.e. $$(r,s)\longmapsto rs.$$ It is straightforward to see that $f$ and $g$ become maps of tropical (semi)rings.\\ Our next goal is to make it clear what we mean by the action of real numbers on these spaces. \subsubsection{Flows on $\R_{\geqslant}^n$} The one to one correspondence $$f^{\times n}:\R_{\geqslant}^n\to(\R^\tro)^n$$ motivates us, and provides us with a tool, to define an action of $\R$ on $\R_{\geqslant}^n$. Let us write $g$ for the inverse of $f$, i.e. $g:\R^\tro\to\R_{\geqslant}$ is given by $$\begin{array}{lll} g(x) &=&e^{-x}\\ g(\infty)&=&0. \end{array}$$ Recall from the previous section that we have the tropicalisation of any module with a projection. In this case, $\R^\tro$ is the same as the tropicalisation of $\R$ with $\tau$ given by the norm function. This then shows that it is possible to have a multiplication $\odot:\R^\tro\times\R^\tro\to\R^\tro$ given by $$t\odot t'=\left\{\begin{array}{ll} t+t' &\textrm{if }t,t'\in\R,\\ \infty &\textrm{if }t=\infty,\textrm{ or }t'=\infty. \end{array}\right.$$ We use this to define the action of the additive group $\R$ on $\R^\tro$ as $\R\times\R^\tro\to\R^\tro$ given by $$(r,t)\longmapsto r\odot t.$$ We use $g$ to define an action $\R\times\R_{\geqslant}\to\R_\geqslant$ by requiring that $g$ respects this action, i.e. $g$ has to be a map of $\R$-modules. This induces the action $\R\times\R_\geqslant\to\R_\geqslant$ given by $$(r,v)\longmapsto e^{-r}v.$$ Notice that here we consider the action of $(\R,+)$ on $\R_\geqslant$. It is again clear that the mappings $f,g$ respect these actions. It is then quite clear how to define the corresponding actions of $\R$ on $(\R^\tro)^n$ and $\R_\geqslant^n$ respectively. This is done by defining the action component-wise. We investigate this in the next part, where we look at the analogues of $k$-planes in these spaces.\\ \begin{note} Finally, we explain the title of this section. The word \textit{flow} reminds a differential topologist of the action of the real numbers under addition on a (smooth) manifold. The above is only one particular flow that we use, and we call it the ``standard flow'' on $\R_\geqslant$.\\ It is possible to consider a more general setting. Notice that we have defined the mappings $f$ and $g$ in a way that they respect the tropical (semi)ring structures. Hence, having any action of the additive group $(\R,+)$ on $\R_\geqslant$ will determine a corresponding action on $\R^\tro$, hence on $\R_\geqslant^n$ and $(\R^\tro)^n$ respectively. 
Note that in the case of the standard flow the point $0\in\R_\geqslant^n$ corresponds to a singular point of the flow. \end{note} \subsection{Tropical Projective Spaces} The study of tropical projective spaces, i.e. the space of lines in the tropical Euclidean spaces, seems to be the first natural step in attempt to understand the tropical Grassmannian spaces. It turns out that, using our model for $(\R^\tro)^n$, it is an easy task to identify $\tP^n$ as a set where $\tP^n$ is the set of all lines in $(\R^\tro)^{n+1}$ which ``pass'' through the origin. We state the result. \begin{lmm} There is a one to one correspondence $$\tP^n\to\Delta^n$$ where $\Delta^n$ is the standard $n$-simplex, i.e. $$\Delta^n=\{(x_1,\ldots,x_{n+1})\in\R^{n+1}:x_i\geqslant 0,\sum x_i=1\}.$$ \end{lmm} This is quite straightforward to see, when we use $\R^{n+1}_\geqslant$ as our model for $(\R^\tro)^{n+1}$. For instance, let $n=1$. Then we need to identify all lines in $\R^{n+1}_\geqslant$. Notice that in $\R^{n+1}_\geqslant$ any point together with the ``origin'' determines a line. Let $(a,b)\in \R^2_\geqslant$. Then the line passing through this point and ``reaching'' the origin is determined by the orbit of this point under the action of real line, i.e. all points $e^{-r}(a,b)=(e^{-r}a,e^{-r}b)$ where $r\in\R$. Notice that $e^{-r}$ is nothing but a positive real number. Hence, the orbit of $(a,b)$ is the line passing $(a,b)$ and the origin. However, notice that this will never reach the origin, and the origin will be a limit point for this line when $r$ tends to $+\infty$. This then shows that the set $\{(a,b):a>0,b>0,a+b=1\}$ is in one to one correspondence with these line. We need to identify the lines that correspond to the cases with $a=0$ and $b=0$. The $x$-axis and the $y$-axis then give these two end points. Notice that this look like two point compactification of the real line.\\ For cases $n>2$ a similar approach gives the result. We only note that the boundary of the simplex will correspond to the lines on $\partial\R^{n+1}_\geqslant$ where as its interior points correspond to $\mr{interior}(\R^{n+1}_\geqslant)$. \begin{rmk} The tropical projective space does not seem to be a tropical space, however its correspondence an standard simplex may be an evidence that it is a kind of a variety(?!). \end{rmk} \subsection{Subspaces in Tropical Euclidean Spaces} We like to look at the tropical version of the Grassmannian spaces. We note that there is some work on this from an algebric-geometric point of view such as\\ \\ David Speyer, Bernd Sturmfels \textit{The tropical Grassmannian} Adv. Geom. Vol.4 No.3 pp389--411, 2004\\ But I am not aware of the contents of this work, so I don't make any comment. We choose to work with our model $\R^n_\geqslant$ to study the Grassmannian objects. The Grassmannian space $G_k(\R^{n+k})$ is the set of $k$-planes in $\R^{n+k}$. We take this approach and look at the $k$-planes in $\R^{n+k}_\geqslant$. \subsubsection{Linear independence in the tropical sense} In the Euclidean space, it is quite straightforward to identify the $k$-dimensional subspaces of $\R^{n+k}$, i.e. the space spanned by $k$ linear independent vector $\{v_1,\ldots,v_k:v_i\in\R^{n+k}\}$.\\ In tropical space $(\R^\tro)^{n+k}$ two vectors $(t_1,\ldots,t_n),(t'_1,\ldots,t'_n)$ are linearly dependent in the tropical sense if and only if there exist $r\in\R$ such that $$t_1+r=t'_1,\ldots,t_n+r=t'_n.$$ However, our model is much easier to use. 
More precisely, two vectors $(t_1,\ldots,t_n),(t'_1,\ldots,t'_n)\in\R^{n+k}_\geqslant$ are linearly dependent if there is a real number $r$ such that $$(t_1,\ldots,t_n)=e^{-r}(t'_1,\ldots,t'_n)$$ i.e. $$t_1=e^{-r}t'_1,\ldots,t_n=e^{-r}t'_n.$$ This tells that two vectors in $\R^{n+k}_\geqslant$ are linear independent if one is not multiple of the other one, similar to the notion of the linear independent in the Euclidean sense. However, the notion of linear combination is quite different in two cases.\\ Next, we have two identify what is meant by the space spanned by $k$ linear independent vectors $v_1,\ldots,v_k\in\R^{n+k}_\geqslant$ in the tropical. The addition in $\R^{n+k}_\geqslant$ defined in previous sections shows that spanning in tropical sense is given by $$\mr{Span}^\tro\{v_1,\ldots,v_k\}=\mr{interior}(\mr{Cone}\{v_1,\ldots,v_k\})$$ where $\{v_1,\ldots,v_k\}$ is an arbitrary set of $k$ vectors in $\R^{n+k}_\geqslant$ and the cone $\mr{Cone}\{v_1,\ldots,v_k\}$ is the cone taken in usual Euclidean sense. For instance, for $k=2$ the cone on two vectors is the area between the two lines determined by two vectors. By a $k$-subspace $C\subset\R^{n+k}_\geqslant$ we mean a cone which is span of $k$ independent vectors, i.e. $$C=\mr{Span}^\tro\{v_1,\ldots,v_k\}$$ where $\{v_1,\ldots,v_k\}$ is a linearly independent set.\\ Notice that there is not a precise notion of dimension here. The reason is that not every point in $\R^{n+k}_\geqslant$ is a linear combination of finite number of vectors. The reason for this lies in the way that the tropical addition, and scalar multiplication on $\R^{n+k}_\geqslant$ are defined. The following observation provides us with a framework to look at this. \begin{lmm} Let $u_1,\ldots,u_n\in\R^n$ denote the standard Euclidean basis elements, i.e. $u_1=(1,0,\ldots,0),\ldots,u_n=(0,\ldots,0,1)$. We then have $$v\in\mr{interior}(\R^n_\geqslant)\Longleftrightarrow v\in\mr{Span}^\tro\{u_1,\ldots,u_n\},$$ where $\mr{interior}(\R^n_\geqslant)=\R^n_\geqslant-\partial\R^n_\geqslant$. The space $\partial\R^n_\geqslant$ is characterised by $$(x_1,\ldots,x_n)\in \partial\R^n_\geqslant\Longleftrightarrow x_t=0\textrm{ for some }1\leqslant t\leqslant n.$$ \end{lmm} \begin{rmk} We note that according to this lemma, the space $\R^n_\geqslant$ is not finitely generated as an $(\R,+)$-set. The reason for this is that we don't have $\infty$ in $\R$. Later on, we will consider the action of $\R^\tro$ on this set, where $\R^n_\geqslant$ becomes an $\R^\tro$-module. \end{rmk} Notice that a vector $v$ is in $\mr{interior}(\R^n_\geqslant)$ if all of its component, written in the usual Euclidean basis, are positive. Observe that in particular, if we choose any vector, $u_i$ such as in the above lemma, then under the correspondence $\R^n_\geqslant\to(\R^\tro)^n$ we can see that $$\mr{Span}^\tro\{u_i\}\simeq\R.$$ Moreover, let us write $$\mr{Span}^\tro\{\widehat{u}_i\}\simeq\{\infty\}.$$ The above lemma together with the notation that just introduced our allows us to formally rewrite the decomposition of $\R^n_\geqslant$ corresponding to the one given by Remark 2.2. The result reads as following. 
\begin{crl} The space $\R^n_\geqslant$ has the following decomposition as $$\begin{array}{lll} \mr{Span}^\tro\{u_1,\ldots,u_n\}\sqcup\\ \\ \mr{Span}^\tro\{u_1,\ldots,u_{n-1},\widehat{u}_n\}\sqcup\cdots\sqcup \mr{Span}^\tro\{\widehat{u}_1,u_2,\ldots,u_n\}\sqcup\\ \\ \mr{Span}^\tro\{u_1,\ldots,u_{n-2},\widehat{u}_{n-1},\widehat{u}_n\}\sqcup\cdots\sqcup\mr{Span}^\tro\{\widehat{u}_1,\widehat{u}_2,u_3,\ldots,u_n\}\sqcup\\ \\ \sqcup \cdots \sqcup\\ \\ \mr{Span}^\tro\{u_1,\widehat{u}_2,\ldots,\widehat{u}_n\}\sqcup\cdots\sqcup\mr{Span}^\tro\{\widehat{u}_1,\widehat{u}_2, \ldots,\widehat{u}_{n-1},u_n\}\\ \\ \sqcup \{(0,0,\ldots,0)\}. \end{array}$$ Here $\widehat{u}_i$ means that the vector $u_i$ is not in the set. Moreover, under the correspondence $$\R^n_\geqslant\to(\R^\tro)^n$$ the space $\mr{Span}^\tro\{u_1,\ldots,u_{i-1},\widehat{u}_i,\ldots,\widehat{u}_j,u_{j+1}\ldots u_n\}$ maps to $$\begin{array}{l} \mr{Span}^\tro\{u_1\}\times\cdots\times\mr{Span}^\tro\{u_{i-1}\}\times\mr{Span}^\tro\{\widehat{u}_i\}\cdots\times\\ \\ \mr{Span}^\tro\{\widehat{u}_j\}\times\mr{Span}^\tro\{u_{j+1}\}\times\cdots\times\mr{Span}^\tro\{u_n\}\end{array}$$ which is the same as $$\R\times\cdots\times\R\times\{\infty\}\times\cdots\times\{\infty\}\times\R\times\cdots\times\R$$ where for each $u_k$ in the spanning set we obtain a copy of $\R$ at $k$th position, and for each $\widehat{u}_i$ we obtain a copy of $\{\infty\}$ at the $i$th position. \end{crl} Finally, we have a little observation which will be important later on when we consider the generalised tropical Grassmannians. \begin{lmm} Let $v_1,\ldots,v_k\in\R^n_\geqslant$ be linearly independent with $k>1$. Let $v\in\mr{Span}^\tro\{v_{\alpha_1},\ldots,v_{\alpha_j}\}$ where $1\leqslant\alpha_1,\ldots,\alpha_j\leqslant k$ and $j<k$. Then $v\not\in\mr{Span}^\tro\{v_1,\ldots,v_k\}$. In particular, $v_i\not\in \mr{Span}^\tro\{v_{\alpha_1},\ldots,v_{\alpha_j}\}$. \end{lmm} This is easy to see, as if $v\in\mr{Span}^\tro\{v_{\alpha_1},\ldots,v_{\alpha_j}\}$ then $v$ has to belong to boundary of the $k$-subspace determined by $v_1,\ldots,v_k$. The result then follows from Corollary 2.7. \subsubsection{Relating $G_k(\R^n_\geqslant)$ to configuration spaces} We like relate the Grassmannian space $G_k(\R^n_\geqslant)$ to some configuration spaces. The mapping fails to be an isomorphism. But it at least provides a tool which presumably will help to analyse, and calculate more sophisticated algebraic invariants of these spaces. We start by recalling the analogous construction in homotopy. Let us fix an arbitrary basis for $\R^n$. Let $V\subset\R^n$ be a $k$-dimensional subspace. We then can choose a basis for it, say $\{v_1,\ldots,v_k\}$. The fact that there are linearly independent means that they give rise to $k$ distinct lines, hence defines a mapping $$G_k(\R^n)\to F(P^{n-1},k),$$ where given any set $X$ we define the set of configuration of $n$ point in $X$ as $$F(X,n)=\{(x_1,\ldots,x_n):x_i\in X, i\neq j\Longrightarrow x_i\neq x_j\}.$$ The above mapping fails to be a homeomorphism as it is possible to choose $k$ distinct points in $P^{n-1}$, or say $k$ distinct vectors, which are not necessarily linearly independent.\\ We write $G_k(\R^n_\geqslant)$ for the set of all $k$-subspaces of $\R^n_\geqslant$. Assume that we have a $k$-subspace $C=\mr{Span}^\tro\{v_1,\ldots,v_k\}\subset\R^n_\geqslant$. Let $l_1,\ldots, l_k$ be $k$ distinct lines determined by $v_1,\ldots,v_k$ respectively. 
This determines a mapping $$G_k(\R^n_\geqslant)\longrightarrow F(\tP^{n-1},k).$$ Notice that $\tP^{n-1}$ is the same as $\Delta^{n-1}$. The fact that $v_1,\ldots,v_k$ are linearly independent in the tropical sense, implies that non of these vectors falls into the cone generated by the other ones. If we use $v_1,\ldots,v_k$ to denote those $k$ points in $\Delta^{n-1}$ that correspond to these lines, we obtain a convex set, where here convex means convex as a subset of $\R^n$ with its usual metric.\\ On the other hand, if we choose any convex set in $\Delta^{n-1}$ with $k$ vertices we obtain $k$ vector in $\R^n_\geqslant$ which are independent in the tropical sense. This completes the proof of the following observation. \begin{thm} There is an isomorphism of sets $$G_k(\R^n_\geqslant)\to F(\Delta^{n-1},k)^\mr{convex}$$ where $F(\Delta^{n-1},k)^\mr{convex}$ refers to a subset of $F(\Delta^{n-1},k)$ whose points are in one to one correspondence with convex subset of $\Delta^{n-1}$ with $k$ vertices. \end{thm} \subsection{$\R^n_\geqslant$ as an $\R^\tro$-module} Recall from previous sections that $\R\times \R^n_\geqslant\to\R^n_\geqslant$ gives $\R^n_\geqslant$ structure of an $(\R,+)$-set. This action is not compatible when we consider the field of real lines, with its usual addition and multiplication. However, it is possible to obtain structure of an $\R^\tro$-module on $\R^n_\geqslant$.\\ First, define $\R^\tro\times\R_\geqslant\to\R_\geqslant$ by $$\begin{array}{ll} (r,t) &\longmapsto e^{-r}t,\\ (+\infty,t)&\longmapsto 0. \end{array}$$ Recall that $\R^\tro$ has a (semi)ring structure when regarded as $(\R\cup\{+\infty\},\min,+)$ whereas $\R_\geqslant$ has the tropical structure when regarded as $(\R_\geqslant,\max,\cdot)$ where $\cdot$ denotes the usual product. We then define the action $\R^\tro\times\R^n_\geqslant\to \R^n_\geqslant$ to be the component-wise action, i.e. $$\begin{array}{ll} (r,(t_1,\ldots,t_n)) &\longmapsto (e^{-r}t_1,\ldots,e^{-r}t_n),\\ (+\infty,(t_1,\ldots,t_n))&\longmapsto(0,\ldots,0). \end{array}$$ This definition is very similar to the previous one. It does not change the notion of the linear independence. However, there is slight difference in the notion of span. We write $\mr{Span}^{\mr{Trop}}$ to distinguish it from $\mr{Span}^\tro$. \begin{lmm} Suppose $v_1,\ldots,v_k\in\R^n_\geqslant$. Then $$\mr{Span}^{\mr{Trop}}\{v_1,\ldots,v_k\}=\mr{Cone}\{v_1,\ldots,v_k\}.$$ \end{lmm} Hence, a slight change in the ground set acting on $\R^n_\geqslant$, i.e. replacing $\R$ with $\R^\tro$ has the effect that it adds the \textit{limit points} of a cone to it. As a corollary $\R^n_\geqslant$ becomes finitely generated over $\R^\tro$. We have the following. \begin{crl} Suppose $u_1,\ldots,u_n$ denote the usual basis elements for $\R^n$. We then have $$\R^n_\geqslant=\mr{Span}^{\mr{Trop}}\{u_1,\ldots,u_n\}.$$ Here, any point on $\partial\R^n_\geqslant$ will have tropical coordinates which are formed of real numbers, and $+\infty$. Moreover, $\{u_1,\ldots,u_n\}$ is the only set of vectors satisfying this property, i.e. if there is any set of vectors $\{v_1,\ldots,v_n\}$ such that $$\R^n_\geqslant=\mr{Span}^{\mr{Trop}}\{v_1,\ldots,v_n\},$$ then each $v_i$ will be a re-scaling of of $u_j$ for unique $1\leqslant j\leqslant n$. \end{crl} As an example, consider $\R^2_\geqslant$ and consider the point $(1,0)$ which is on its boundary. The Corollary 2.7 implies that it can not be written as any linear combination of two vectors in $\R^2_\geqslant$ when regarded as an $\R$-set. 
However, as an $\R^\tro$-module, we may write $$(1,0)=e^0(1,0)+\infty(0,1)$$ i.e. as a vector in $\R^2_\geqslant$ the point $(1,0)$ may be written as the column vector $$\left [\begin{array}{c} 1\\ +\infty \end{array}\right ].$$ \subsubsection{The space $G_k^\mr{Trop}(\R^n_\geqslant)$} Likewise the space $G_k(\R^n_\geqslant)$ we define $G_k^\mr{Trop}(\R^n_\geqslant)$ to be the set of all $k$-subspaces in $\R^n_\geqslant$ when regarded as an $\R^\tro$-module. We say $C\subseteq\R^n_\geqslant$ is a $k$-subspace if there are $k$ linearly independent vector $v_1,\ldots,v_k\in \R^n_\geqslant$ such that $$C=\mr{Span}^{\mr{Trop}}\{v_1,\ldots,v_k\}.$$ We may refer to $G_k^\mr{Trop}(\R^n_\geqslant)$ as the generalised tropical Grassmannian space. Notice that there is a one to one correspondence $$G_k(\R^n_\geqslant)\longrightarrow G_k^\mr{Trop}(\R^n_\geqslant)$$ given by $$C\longmapsto \overline{C}$$ where $\overline{C}$ denotes the closure of $C$, i.e. $\overline{C}=C\cup\partial C$. The inverse mapping $$G_k^\mr{Trop}(\R^n_\geqslant)\longrightarrow G_k(\R^n_\geqslant)$$ given by $$C\longmapsto\mr{interior}{C}=C-\partial C.$$ Accordingly we obtain the following description of $G_k^\mr{Trop}(\R^n_\geqslant)$. \begin{thm} There is a one to one correspondence $$G_k^\mr{Trop}(\R^n_\geqslant)\longrightarrow F(\Delta^{n-1},k)^\mr{convex}.$$ \end{thm} \begin{rmk} Before proceeding further, we like to draw the reader's attention to an essential difference between $G_k(\R^n_\geqslant)$ and $G_k^\mr{Trop}(\R^n_\geqslant)$ in one hand and their Euclidean analogous $G_k(\R^n)$ on the other hand. In the Euclidean space $\R^n$ any set of $n$-linearly independent set will span $\R^n$, however in $\R^n_\geqslant$ either as a $(\R,+)$ or as an $\R^\tro$-module the only option for such a choice is provided by the standard basis. Although, according to Lemma 2.6 as an $(\R,+)$-set this it is not possible to generate all of $\R^n_\geqslant$ in tropical sense.\\ For instance, consider $\R^3_\geqslant$ and let $C\in G_k(\R^3_\geqslant)$ be defined as $$C=\mr{Span}^\tro\{(1,0,0),(0,1,0),(1,1,2)\}.$$ This is a $3$-subspace in $\R^3_\geqslant$ and yet it is not equal to $\R^3_\geqslant$. We note that all of those three vectors defining $C$ are linearly independent. We can also consider to $\overline{C}\in G_3^\mr{Trop}(\R^3_\geqslant)$ where $C\neq\R^3_\geqslant$.\\ A consequence of this is that we may choose another vector $v\in\R^3_\geqslant$ which does not belong to $C$, i.e the set $$\{(1,0,0),(0,1,0),(1,1,2),v\}$$ is a linearly independent set in the tropical sense. This determines a cone formed by $4$ vectors which can not be generated by any $3$ vectors. Hence we obtain a $4$-subspace in $\R^3_\geqslant$ giving rise to an element of $G_4(\R^3_\geqslant)$.\\ In general, we may choose $k>n$ when we consider $G_k(\R^n_\geqslant)$. In fact $k$ can be any arbitrary number.\\ \end{rmk} It is quite interesting to see how a $k$-subspace in $\R^n_\geqslant$ maps under the tropical isomorphism $$\R^n_\geqslant\to(\R^\tro)^n.$$ Recall from previous sections that $(\R^\tro)^n$ is disjoint union of $2^n$ copies of $\R^m$ with $0\leqslant m\leqslant n$. Now assume that $C\in G_k(\R^n_\geqslant)$, i.e. $$C=\mr{interior}(\mr{Cone}\{v_1,\ldots,v_k\})$$ where $v_1,\ldots,v_k\in\R^n_\geqslant$ are linearly independent(in the tropical sense). Notice that in this case $C$ is given by the interior of the cone, which implies that $C\subset\mr{interior}(\R^n_\geqslant)$. 
We now that $\mr{interior}(\R^n_\geqslant)$ maps to $\R^n\subset(\R^\tro)^n$. Moreover, note that the mapping $f:\R^n_\geqslant\to(\R^\tro)^n$ is continuous when restricted to $\mr{interior}(\R^n_\geqslant)$. This implies that $C$ also maps into $\R^n\subset(\R^\tro)^n$.\\ Next, we consider $C\in G_k^\mr{Trop}(\R^n_\geqslant)$ and its image under the $f:\R^n_\geqslant\to (\R^\tro)^n$. Notice that if $C\in G_k^\mr{Trop}(\R^n_\geqslant)$ then it is a closed cone, where by being closed we mean closed as a set in $\R^n_\geqslant$ viewed as a topological space with its topology inherited from the standard topology on $\R^n$.\\ If $C\subset\mr{interior}(\R^n_\geqslant)$ then according to the previous case, all of $C$ maps to only one component of $(\R^\tro)^n$, namely to $\R^n$.\\ Another possibility is that $C\cap\partial\R^n_\geqslant\neq\phi$. In this case then the image of $C$ under $f:\R^n_\geqslant\to(\R^\tro)^n$ will land in more than one factor of $(\R^\tro)^n$ viewed as a disjoint union. The following provides us with an example. \begin{exm} Consider the space $\R^3_\geqslant$, together with vectors $v_1=(1,0,1)$ and $v_2=(0,1,1)$. This determines $$C=\mr{Span}^\mr{Trop}\{v_1,v_2\}$$ as an element of $G_2^\mr{Trop}(\R^3_\geqslant)$. Let $$L_1=\{(t,0,t):t>0\},\textrm{ }L_2=\{(0,t,t):t>0\},$$ i.e. $L_i\cup\{(0,0,0)\}$ is the line determined by $v_i$. It is then clear that $$\partial C=L_1\cup L_2.$$ Under the correspondence $f:\R^3_\geqslant\to(\R^\tro)^3$ we have $$\begin{array}{lll} f(L_1)&\subset&\R\times\{\infty\}\times\R\\ f(L_2)&\subset&\{\infty\}\times\R\times\R\\ f(0,0,0) & = & (\infty,\infty,\infty). \end{array}$$ The image of $C$ under $f$ is an example of a $\mathbf{2}$-subspace in $(\R^\tro)^3$.\\ \end{exm} In general, suppose $C\in G_k^\mr{Trop}(\R^n_\geqslant)$ with $C=\mr{Span}^\mr{Trop}\{v_1,\ldots,v_k\}$. If $C\cap\partial\R^n_\geqslant\neq\phi$ then there are $\alpha_i\in\{1,\ldots k\}$ with $v_{\alpha_i}\in\partial\R^n_\geqslant$. Recall from Corollary 2.8 that $\R^n_\geqslant$ has a decomposition as $$\begin{array}{lll} \mr{Span}^\tro\{u_1,\ldots,u_n\}\sqcup\\ \\ \mr{Span}^\tro\{u_1,\ldots,u_{n-1},\widehat{u}_n\}\sqcup\cdots\sqcup \mr{Span}^\tro\{\widehat{u}_1,u_2,\ldots,u_n\}\sqcup\\ \\ \mr{Span}^\tro\{u_1,\ldots,u_{n-2},\widehat{u}_{n-1},\widehat{u}_n\}\sqcup\cdots\sqcup\mr{Span}^\tro\{\widehat{u}_1,\widehat{u}_2,u_3,\ldots,u_n\}\sqcup\\ \\ \sqcup \cdots \sqcup\\ \\ \mr{Span}^\tro\{u_1,\widehat{u}_2,\ldots,\widehat{u}_n\}\sqcup\cdots\sqcup\mr{Span}^\tro\{\widehat{u}_1,\widehat{u}_2, \ldots,\widehat{u}_{n-1},u_n\}\\ \\ \sqcup \{(0,0,\ldots,0)\}. \end{array}$$ In this decomposition the first factor, i.e. $\mr{Span}^\tro\{u_1,\ldots,u_n\}$ corresponds to $\mr{interior}(\R^n_\geqslant)$, and the other factors correspond to $\partial\R^n_\geqslant$. Hence, each of $v_{\alpha_i}$ will belong to one, and only one, of the factors in the above decomposition. For instance, assume $$v_{\beta_1},v_{\beta_2},\ldots,v_{\beta_j}\in\mr{Span}^\tro\{u_1,\ldots,\widehat{u}_i,\ldots,\widehat{u}_j,\ldots,u_n\}=:S.$$ Notice that apart from $\{(0,0,\ldots,0)\}$, any other factor in the above decomposition is an open set, when viewed as a subspace of $\R^n$ with its usual topology. 
This then implies that $$\mr{Span}^\mr{Trop}\{v_{\beta_1},v_{\beta_2},\ldots,v_{\beta_j}\}\subset\mr{Span}^\tro\{u_1,\ldots,\widehat{u}_i,\ldots,\widehat{u}_j,\ldots,u_n\}.$$ Applying Corollary 2.8 shows that $\mr{Span}^\mr{Trop}\{v_{\beta_1},v_{\beta_2},\ldots,v_{\beta_j}\}$ maps into the factor of $(\R^\tro)^n$ corresponding to $S$. This then completely determines where $C\cap\partial\R^n_\geqslant$ maps under the tropical isomorphism $f:\R^n_\geqslant\to(\R^\tro)^n$. This concludes our notes on the generalised tropical Grassmannian spaces. \subsubsection{$\mathbf{k}$-subspaces of $(\R^\tro)^n$.} We would like to analyse those subspaces of $(\R^\tro)^n$ which are in the images of $G_k^\mr{Trop}(\R^n_\geqslant)$ and map to more than one factor in the disjoint union decomposition for $(\R^\tro)^n$.\\ We define what is meant by a $\mathbf{k}$-subspace in $(\R^\tro)^n$. Recall that an example of a $\mathbf{2}$-subspace was given in the previous section.\\ In order to proceed, we need to fix an order on the disjoint union decomposition for $(\R^\tro)^n$. Let $I=(i_1,\ldots,i_n)$ with each of its entries belonging to $\{0,1\}$. By the set $(\R^\tro)^n_I$ we mean a product of copies of the real line $\R$ and the singleton $\{\infty\}$ in the following way: if $i_j=1$ then we have a copy of $\R$, and if $i_j=0$ then we have a copy of $\{\infty\}$. For example, for $n=2$, we have $(\R^\tro)^2_{(1,0)}=\R\times\{\infty\}$. It is then clear that $$(\R^\tro)^n=\bigsqcup_{I\in\{0,1\}^n}(\R^\tro)^n_I.$$ Moreover, let $|I|=1+\sum_{j=1}^{n} i_j\,2^{\,n-j}$, i.e. one plus the natural number whose binary expansion is $I$. We will then refer to $(\R^\tro)^n_I$ as the $|I|$-th factor of $(\R^\tro)^n$. This also induces an order on these sequences by $$I>J\Longleftrightarrow |I|>|J|.$$ This is the same as the lexicographic order on the sequences $I$, and it induces an order on the factors of $(\R^\tro)^n$ as follows $$(\R^\tro)^n_I\leqslant (\R^\tro)^n_J \Longleftrightarrow I\leqslant J.$$ Next, let $k>0$ and let $\mathbf{k}=(k_1,\ldots,k_{2^n})\in\Z^{2^n}$ be a sequence of nonnegative integers, with the leftmost nonzero entry equal to $k$. Consider a collection of spaces $\mathbf{K}=\{K_i:1\leqslant i\leqslant 2^n\}$ where $K_i$ is a subspace of the $i$-th factor of $(\R^\tro)^n$ with $$K_i=\mr{Span}^\tro\{v^i_1,\ldots,v^i_{k_i}\}$$ and $\{v^i_1,\ldots,v^i_{k_i}\}$ being a linearly independent set in the tropical sense. Moreover, we set the span of the empty set to be the empty set. We call $\mathbf{K}$ a $\mathbf{k}$-subspace if there is a $k$-subspace $C\in G_k^\mr{Trop}(\R^n_\geqslant)$ such that $f(C)$ maps into $(\R^\tro)^n$ with its image in the different factors of $(\R^\tro)^n$ being given by the $K_i$'s. \begin{rmk} It is possible to give a more explicit account of the above calculations. In order to do this, we need to label the different components of $\R^n_\geqslant$. It is similar to what we did above. Let $I$ be a sequence of length $n$ whose entries are either $1$ or $0$. Let $u_1,\ldots,u_n$ denote the usual Euclidean basis for $\R^n$. Let $(\R^n_\geqslant)_I\subset\R^n_\geqslant$ be given by a product of copies of the open real half line $\R_>$ and the singleton $\{0\}$, where for $i_j=1$ we have a copy of $\R_>$ at the $j$th position, and for $i_j=0$ we have a copy of $\{0\}$ at the $j$th position. For example, in the case of $n=2$ we have $(\R^2_\geqslant)_{(1,0)}=\R_>\times\{0\}$ which is the $x$-axis without $\{(0,0)\}$. We refer to $(\R^n_\geqslant)_I$ as the $|I|$-th component of $\R^n_\geqslant$. 
Notice that $(\R^n_\geqslant)_I=\mr{Span}^\tro\{e_1,\ldots,e_n\}$ where $e_j=u_j$ if $i_j=1$ and $e_j=\widehat{u}_j$ if $i_j=0$. Hence, according to Corollary 2.8 we have $$\R^n_\geqslant=\bigcup_{I\in\{0,1\}^n}(\R^n_\geqslant)_I.$$ It is now evident that the $|I|$-th component of $\R^n_\geqslant$ maps homeomorphically onto the $|I|$-th factor of $(\R^\tro)^n$.\\ Now assume that $C\in G_k^\mr{Trop}(\R^n_\geqslant)$ with $C\cap\partial\R^n_\geqslant\neq\phi$. Let $C_I=C\cap(\R^n_\geqslant)_I$ be the $|I|$-th face of $C$. Then $C_I$ maps homeomorphically into the $|I|$-th factor of $(\R^\tro)^n$ under the tropical isomorphism $$\R^n_\geqslant\to (\R^\tro)^n.$$ \end{rmk} \subsection{Tropical Matrices, Tropical Orthogonal Matrices} An alternative description of Grassmannian spaces is provided by viewing them as the orbit space of the action of a product of two orthogonal groups on a larger one. This is our main motivation to study ``tropical matrices''.\\ Let $M_{m,n}(\R_\geqslant)$ be the set of all $m\times n$ matrices whose entries are elements of $\R_\geqslant$. This set admits the structure of a semi-ring induced by the addition and multiplication in $\R_\geqslant$. More precisely, for $A\in M_{m,n}(\R_\geqslant)$ let $A_{ij}$ denote the $(i,j)$ entry of $A$. Then for $A,B\in M_{m,n}(\R_\geqslant)$ we define the addition, denoted by $\oplus$, by $$\begin{array}{lllll} (A\oplus B)_{ij} & = & A_{ij}\oplus B_{ij} &=& \max(A_{ij},B_{ij}). \end{array}$$ If $A\in M_{m,p}(\R_\geqslant)$ and $B\in M_{p,n}(\R_\geqslant)$, then the multiplication, denoted by $\odot$, is defined by $$\begin{array}{lllll} (A\odot B)_{ij} & = & \oplus_{k=1}^p A_{ik}\odot B_{kj} & = &\max(A_{ik}B_{kj}:1\leqslant k\leqslant p). \end{array}$$ Under these operations, the identity element for $\oplus$ is the zero matrix, whereas the identity element for $\odot$ is the identity matrix. Moreover, notice that $M_{m,n}(\R_\geqslant)$ is an $\R^\tro$-module.\\ One may ask whether there are invertible matrices in $M_{n,n}(\R_\geqslant)$. The answer is positive, and examples are provided by the set of all diagonal matrices whose diagonal does not contain any zero entry. These form a ``universal example'' of such matrices, as the following theorem confirms by giving a complete classification of all invertible matrices. \begin{thm} Let $A\in M_{n,n}(\R_\geqslant)$ be a matrix with a right $\odot$-inverse, i.e. there exists $B\in M_{n,n}(\R_\geqslant)$ such that $A\odot B=I$. Then there exist $\sigma\in\Sigma_n$ and a diagonal matrix $D=(D_1,\ldots,D_n)\in M_{n,n}(\R_\geqslant)$ whose diagonal entries are nonzero, such that $$A=(D_{\sigma^{-1}(1)},\ldots,D_{\sigma^{-1}(n)}).$$ Here $\Sigma_n$ is the permutation group on $n$ letters. Moreover, the matrix $B$ is determined by $$B_{ij}=\left\{\begin{array}{ll} 0&\textrm{if }A_{ji}=0\\ \frac{1}{A_{ji}}&\textrm{if }A_{ji}\neq 0. \end{array} \right.$$ \end{thm} Notice that according to this theorem any matrix with a right inverse also has a left inverse, and they are the same. This is again straightforward to see once we observe that $$\begin{array}{lllll} (A\odot B)_{ii}&=&\max(A_{ik}B_{ki}:1\leqslant k\leqslant n)&=&1,\\ \\ (A\odot B)_{ij}&=&\max(A_{ik}B_{kj}:1\leqslant k\leqslant n)&=&0\textrm{ \ for }i\neq j. \end{array}$$ For instance, let $i=1$. The fact that $(A\odot B)_{11}=1$ implies that there exists $k$ such that $A_{1k}=\frac{1}{B_{k1}}\neq 0$. Combining $A_{1k}\neq 0$ together with $(A\odot B)_{ij}=0$ for $i\neq j$ implies that $B_{kj}=0$ for $j\neq i$. 
Applying this for each row of $A\odot B$ completes the proof.\\ Next, we consider the problem of determining tropical orthogonal matrices. Recall that in the Euclidean case, an orthogonal matrix in $O(n)$ is an $n\times n$ matrix each of whose columns, viewed as a vector in $\R^n$, has unit norm, i.e. lies on the $(n-1)$-sphere $S^{n-1}=\{x\in\R^n:||x||=1\}$, and is perpendicular to all other columns.\\ We may define the tropical inner product $\langle -,- \rangle_\tro:\R^n_\geqslant\times\R^n_\geqslant\to\R^\tro$ by $$ \langle v,w \rangle_\tro =\oplus_{i=1}^nv_i\odot w_i=\max(v_iw_i:1\leqslant i\leqslant n) $$ with $v=(v_1,\ldots,v_n)$ and $w=(w_1,\ldots,w_n)$. In particular, we have the tropical norm on $\R^n_\geqslant$ given by $$||x||_\tro=(\max(x_i^2:1\leqslant i\leqslant n))^{1/2}.$$ For instance, the tropical circle is given by $$\begin{array}{lll} S^1_\tro & = & \{x\in\R^2_\geqslant:||x||_\tro=1\}\\ & = & \{(1,x_2)\in\R^2_\geqslant:x_2\leqslant 1\}\cup\{(x_1,1)\in\R^2_\geqslant:x_1\leqslant 1\}. \end{array}$$ More generally, the tropical $n$-sphere is given by $$\begin{array}{lll} S^n_\tro & = & \{(x_1,\ldots,x_{n+1})\in\R^{n+1}_\geqslant:||x||_\tro=1\}\\ \\ & = & \bigcup_{i=1}^{n+1}\{(x_1,\ldots,x_{n+1})\in\R^{n+1}_\geqslant:x_i=1, j\neq i\Longrightarrow x_j\leqslant 1\}. \end{array}$$ Moreover, these spaces admit a tropical structure. \begin{lmm} The tropical sphere $S^n_\tro$ together with the maximum operation, inherited from $\R^{n+1}_\geqslant$, is a tropical space with the identity element for this operation given by $$(1,1,\ldots,1).$$ \end{lmm} It is quite tempting to see what the analogue of the orthogonal group will be. It is quite easy to determine the form of such matrices; the reason is provided by the following lemma. \begin{lmm} Let $A=(A_1,\ldots,A_n)\in M_{n,n}(\R_\geqslant)$ where $A_i$ denotes the $i$-th column of $A$. Suppose $||A_i||_\tro=1$ and $\langle A_i,A_j\rangle_\tro=0$ for $i\neq j$. Then $A$ has the same columns as the identity matrix. \end{lmm} Let $O(n)^\tro$ be the set of all matrices identified by the above lemma, i.e. the set of all tropical orthogonal matrices. \begin{lmm} Let $\Sigma_n$ denote the permutation group on $n$ letters. Let the action $\Sigma_n\times M_{m,n}(\R_\geqslant)\to M_{m,n}(\R_\geqslant)$ be given by $$(\sigma,(A_1,\ldots,A_n))\longmapsto (A_{\sigma^{-1}(1)},\ldots,A_{\sigma^{-1}(n)}),$$ where $A=(A_1,\ldots,A_n)$ is an arbitrary $m\times n$ matrix written in column form. Then $O(n)^\tro$ is given by the orbit of the identity matrix under the action $\Sigma_n\times M_{n,n}(\R_\geqslant)\to M_{n,n}(\R_\geqslant)$. \end{lmm} Notice that there is an inclusion, in fact a map of monoids, $$O(n)^\tro\longrightarrow GL(\R_\geqslant,n).$$ The above lemma tells us that the action $O(n)^\tro\times M_{n,n}(\R_\geqslant)\to M_{n,n}(\R_\geqslant)$ given by tropical matrix multiplication only permutes the rows of a given matrix. Let us write $GL(\R_\geqslant,n)$ for the set of all tropical $n\times n$ invertible matrices. For $A\in GL(\R_\geqslant,n+k)$ we write $$A=\left(\begin{array}{ll} A_{nn} & B\\ C & A_{kk}\end{array}\right)$$ where $A_{nn}$ is an $n\times n$ block and $A_{kk}$ is a $k\times k$ block. This allows one to define the action of $O(n)^\tro\times O(k)^\tro$ on $GL(\R_\geqslant,n+k)$. One would then guess that there should be a one to one correspondence $$\frac{GL(\R_\geqslant,n+k)}{O(n)^\tro\times O(k)^\tro}\longrightarrow G_k(\R^{n+k}_\geqslant).$$ Finally, notice that $O(n)^\tro$ is a monoid under the tropical matrix multiplication. 
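Before moving on, it may help to see the semi-ring operations above in action. The following Python sketch is purely illustrative and only assumes the definitions of $\oplus$ and $\odot$ given above: it implements the entrywise tropical addition and the max-times product on matrices over $\R_\geqslant$, and checks on a $3\times 3$ example that a column-permuted diagonal matrix with nonzero diagonal entries has the two-sided inverse $B_{ij}$ described in the classification theorem.
\begin{verbatim}
def trop_add(A, B):
    """Entrywise tropical addition: (A (+) B)_ij = max(A_ij, B_ij)."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def trop_mul(A, B):
    """Max-times product: (A (.) B)_ij = max over k of A_ik * B_kj."""
    return [[max(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A column permutation of a diagonal matrix with nonzero diagonal entries ...
A = [[0.0, 2.0, 0.0],
     [0.0, 0.0, 4.0],
     [0.5, 0.0, 0.0]]

# ... and the inverse from the theorem: B_ij = 1/A_ji if A_ji != 0, else 0.
B = [[1.0 / A[j][i] if A[j][i] != 0.0 else 0.0 for j in range(3)] for i in range(3)]

I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
assert trop_mul(A, B) == I and trop_mul(B, A) == I   # two-sided inverse

# The tropical sum simply takes entrywise maxima, e.g. with the identity matrix:
print(trop_add(A, I))   # [[1.0, 2.0, 0.0], [0.0, 1.0, 4.0], [0.5, 0.0, 1.0]]
\end{verbatim}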
\subsubsection{Idempotents} Suppose $M$ is an arbitrary monoid. An element $a\in M$ is idempotent if $a^2=a$. We consider the problem of determining the idempotents in the monoid $M_{n,n}(\R_\geqslant)$. The result reads as follows. \begin{lmm} Let $A$ be an idempotent $n\times n$ matrix with entries in $\R_\geqslant$. Then $A$ satisfies the following conditions: $$\begin{array}{lll} A_{ii}&\leqslant 1\\ A_{ik}A_{ki}&\leqslant\min(A_{ii},A_{kk}) &\textrm{if }i\neq k. \end{array}$$ \end{lmm} The proof is straightforward once we consider the diagonal elements. The equation $A\odot A=A$ implies that $$\max(A_{ik}A_{ki}:1\leqslant k\leqslant n)=A_{ii}.$$ For instance, this implies that $A_{ii}^2\leqslant A_{ii}$, which means that $A_{ii}\leqslant 1$. The other inequality is obtained in a similar fashion, by comparing the equations for $A_{ii}$ and $A_{kk}$. \subsubsection{Stabilisation} We consider the idea of infinite dimensional tropical orthogonal matrices. Notice that for any $m,n$ there is a mapping $M_{m,n}(\R_\geqslant)\to M_{m+1,n+1}(\R_\geqslant)$ given by $$A\longmapsto\left(\begin{array}{ll} A & 0\\ 0 & 1\end{array}\right).$$ In particular, this induces a mapping $O(n)^\tro\to O(n+1)^\tro$. We then define the analogue of the infinite orthogonal group by $$O^\tro=\colim O(n)^\tro.$$ This object inherits a monoid structure induced from the monoid structure on the finite dimensional tropical orthogonal monoids. One then hopes that this will give a characterisation of the infinite dimensional Grassmannians. \subsubsection{Comments on Tropical Bundle Theory} The algebraic topology of fibre bundles with a given topological space $F$ as the fibre is understood in terms of the classifying space of the group of automorphisms of $F$.\\ By analogy, one may consider a fibre bundle theory of surjections $E\to B$ whose fibres are given by copies of the tropical space $\R^n_\geqslant$. This then makes it quite reasonable to consider the classifying space $BO(n)^\tro$ of the tropical orthogonal monoids $O(n)^\tro$, where these are monoids under the tropical matrix multiplication. The classifying space functor is defined for monoids (in fact for topological monoids, which carry a weaker structure). Hence, one may observe that the $\R^n_\geqslant$-bundles are classified in terms of mappings into $BO(n)^\tro$.\\ The interest in such a theory, and in the theory of associated characteristic classes, seems to come from the theory of singularities. We note that a canonical example of a tropical bundle will be the ``tangent bundle'' of a tropical manifold. This then motivates one to claim that the classification of these singularities might be done in terms of the characteristic classes of the associated tangent bundle.\\ \\ Hadi Zare, The School of Mathematics, Manchester University, Manchester, UK M13 9PL, \textit{email: [email protected]} \end{document}
\subsection{Classifier} \label{subsec:classifier} Firstly, we split the data into 80\% for training and 20\% for later evaluation. Similarly to the works mentioned in this document, we built a humor classifier, but for the Spanish language. Such works used Support Vector Machine (SVM) and a Multinomial version of Naïve Bayes (MNB). However, more machine learning techniques are tried here: Decision Trees (DT), k Nearest Neighbors (kNN) and a Gaussian version of Naïve Bayes (GNB). Tweets are tokenized using Freeling~\cite{padro12}. Also, a larger set of features was implemented, which is described below.\footnote{The codebase for the classifier and the corpus built can be found in \url{https://github.com/pln-fing-udelar/pghumor}.} \begin{description} \item[Adult slang:] Here we count the relative number of tokens in the tweets which appear in a previously built dictionary about adult slang. This dictionary contains 132 words, and it was built using bootstrapping, in a similar manner to~\textcite{mihalcea2005bootstrapping}, with a seed of 21 words. Dictionary-lookup features are computed with this formula (where the multiset intersection is used): \[ featureValue(tweet) = \frac{|tweet \cap dictionary|}{\sqrt{|tweet|}} \] A small illustrative sketch of this computation is given just before the Humor Detection subsection below. \item[Animal presence:] In this case we compare against a handcrafted dictionary about animals. This dictionary contains 103 names, including typical typographic misspellings and grammatical mistakes. \item[Antonyms:] Given a tweet, this feature counts the relative number of pairs of antonyms existing in it. The WordNet~\cite{wordnet} antonymy relationship and the Spanish language enrichment provided by the Multilingual Central Repository~\cite{mcr} are used for this. This feature was discarded since, after performing Recursive Feature Elimination~\cite{guyon2002gene} (RFE), we found that the classification worsened. \item[Dialog:] This feature only establishes whether a tweet is a dialog. \item[Exclamations:] The relative number of exclamation marks is counted. \item[First and Second person:] These two features try to capture verbs conjugated in the first and second persons and nouns and adjectives which agree with such conjugations (in Spanish, nouns and adjectives express gender and number at the end of the word). \item[Hashtags:] The number of hashtags in the tweet is counted. It is suspected that the higher this number is, the more informal the tweet is. Thus, it is more likely to be humorous. \item[Keywords:] A dictionary of 43 words commonly found in jokes was built by hand, based on intuition, and tweets are checked against it. \item[Links:] This feature counts the number of links contained in a tweet. \item[Negation:] Here we count the relative number of times the word ``no'' appears in the tweet. It was removed after running RFE\@. \item[Non-Spanish words:] The relative number of non-Spanish words is counted. It was discarded after running RFE\@. \item[Out of vocabulary:] The idea behind this is to keep a record of the relative count of words not found in dictionaries. These are four features based on the combination of the dictionaries used: Freeling, Freeling-Google~\footnote{\url{https://www.google.com}}, Freeling-Wiktionary~\footnote{\url{https://www.wiktionary.org}} and Wiktionary. \item[Questions-answers:] One interesting attribute for tweets is the number of questions and answers that are present, one after another. 
\item[Topic distance:] The idea is to check if a tweet is somewhat near to a joke category in \emph{Chistes.com}, or whether it is closer to a Wikipedia's sentence, from Wikicorpus~\cite{reese10}. This is carried out using a Multinomial Naïve Bayes classifier together with the Bag of Words technique. \item[Uppercase words:] The relative amount of words completely in uppercase is counted. \end{description} \section{Computational Humor} \label{sec:computational-humor} Computational Humor is a recent field of study about recognizing and generating humor through automatic processing. The task of language understanding is rather hard, and so are tasks related to humor. Furthermore, humor entails the usage of figurative language, which obviously makes language handling harder. Humor by itself is not a clearly determined concept. According to Real Academia Española\footnote{\url{http://dle.rae.es/}}, humor is defined as a way of presenting reality, highlighting the comic or ridiculous side. As for comedy, it is a kind of drama meant to cause laughter. However, what causes laughter? There are several theories which try to answer this question, and consequently attempt to find what humor is. A report on the state of the art about Humor and Computational Humor~\cite{so63066} enumerates some of them. The main ideas of these theories are described hereinafter. Readers will notice that these ideas are similar, in spite of putting the focus on different attributes. \textcite{gruner2000game} develops a theory which claims that humor is related to superiority feelings, asserting that there is always a winner in every joke. \textcite{freud, Minsky80jokesand} state that humor is about relieving repressed feelings. In this case, laughter relieves the stress caused by taboo topics, such as death, marriage or sex. The Theory of the Incongruity Resolution~\cite{rutter1997stand} claims that two objects are presented under the same concept, with details applying to both and with similarities, but as narration progresses it turns out that only one is possible. Furthermore, we have The Semantic Script Theory of Humor and The General Theory of Verbal Humor~\cite{attardo1991script},~\cite{ ruch1993toward}. They state that humor is about two scripts which come into conflict with each other, where there are two opposed subjects contrasted, such as big vs small, death vs life, normal vs abnormal, among others. Let us introduce an example\footnote{Taken from \url{https://twitter.com/chistetipico/status/430549009812291584}. It has been slightly adapted to maintain an appropriate language.}: \begin{displayquote} \centering --- Nada es imposible. --- A ver, tocate la espalda con la rodilla, mente positivista. \begin{center} \scriptsize --- Nothing is impossible. --- Seriously? Touch your back with your knee, you positivist mind. \end{center} \end{displayquote} Following the Superiority Theory, the reader is the winner when he laughs at the positive person, feeling superior as the latter lose the dispute. According to the Relief Theory, we laugh with the purpose of releasing tension, which in this case can be provoked by talking about the limits of life, such as when saying ``nothing is impossible''. The Theory of the Incongruity Resolution also applies here due to the fact that there is ambiguity; with ``nothing is impossible'' the example implies that all your dreams may come true, but the person is answered as if the statement was literal. 
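Returning for a moment to the feature list in the Classifier subsection above, the following Python sketch illustrates, under toy assumptions, how a dictionary-lookup feature value is computed with the formula given for the Adult slang feature. The whitespace tokenizer and the three-word animal dictionary used here are made up for the example only; the actual system tokenizes with Freeling and uses the dictionaries described in each item.
\begin{verbatim}
import math
from collections import Counter

def dictionary_feature(tokens, dictionary):
    """|tweet intersect dictionary| / sqrt(|tweet|), the tweet seen as a multiset of tokens."""
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[word] for word in dictionary)  # occurrences of dictionary words
    return hits / math.sqrt(len(tokens))

# Toy example: a hypothetical animal dictionary and a whitespace-tokenized tweet.
animal_dictionary = {"perro", "gato", "loro"}
tokens = "mi perro le dijo a mi gato que el loro era un perro raro".split()
print(dictionary_feature(tokens, animal_dictionary))  # 4 hits / sqrt(14) ~= 1.07
\end{verbatim}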
\subsection{Humor Detection} The concrete goal of this research is to classify tweets written in Spanish as humorous or not humorous. In order to accomplish this, jokes need to be completely expressed within the text, and no further information must be required (apart from contextual information). Since Twitter allows only brief publications --- no more than 140 characters --- we freely assume the text to be a unit: either the whole tweet is humorous, or it is not. \subsection{State of the Art} \label{sec:state-of-the-art} We did not find any attempt to automatically recognize humor for Spanish. Notwithstanding, \textcite{Mihalcea:2005:MCL:1220575.1220642, so63066} built humor detectors for English making use of \emph{one-liners}, i.\,e.,{} texts of approximately fifteen words. Supervised learning was used to produce an outcome --- humorous or not humorous content --- based on features which might reflect certain properties that humor should satisfy. Furthermore, \textcite{DBLP:conf/tsd/ReyesBR09, DBLP:journals/pdln/ReyesRMT09} have gathered and studied features specific to humor, without having the objective of creating a recognizer. A concise compilation of the features presented in these works is shown below: \begin{description} \item[Adult Slang:] According to~\textcite{Mihalcea:2005:MCL:1220575.1220642}, adult slang is popular in jokes. Let us remember that the Relief Theory states that laughter releases stress caused by taboo subjects, and adult slang could be one. WordNet Domains~\cite{strapparava2004wordnet} can be used to search for words tagged with the domain ``Sexuality'' in potentially humorous texts. \item[Alliteration:] This is about the repetition of phonemes in a text. It is a generalization of the rhyme. As stated in~\cite{Mihalcea:2005:MCL:1220575.1220642}, structural and phonetic properties of jokes are at least as important as their content. \item[Ambiguity:] It may be explained by the Incongruity Resolution Theory that ambiguity plays an important role, as it gives more than one interpretation to texts. \textcite{conf/wilf/SjoberghA07, Basili02parsingengineering, DBLP:conf/tsd/ReyesBR09} mention different ways to measure it, such as counting the number of meanings of the words that appear or counting the number of possible syntax trees. \item[Antonymy:] Following the Semantic Script Theory of Humor, we could look for opposed terms in texts, and that is how this feature is supported. The idea is to take into account pairs of antonym words mentioned in texts. Wordnet~\cite{wordnet} is useful since it is a lexical database which contains antonyms for English words, among other relations. \item[Keywords:] There are certain words that are more used in humorous contexts than in normal situations~\cite{conf/wilf/SjoberghA07}. An example of these are words related to animal contexts, lawyers, etc. \item[Language model perplexity:] In~\textcite{DBLP:conf/tsd/ReyesBR09} a language model is built from narrative texts, and perplexity\footnote{Perplexity is a measurement of how well a probability model predicts a sample. Low perplexity indicates the probability model is good at predicting the sample. It is defined as $2^{- \frac{1}{n} \sum_{i=1}^n \log_2 p(x_i)}$, where $x_1, \ldots, x_n$ are the sample data and $p(x_i)$ is the probability assigned to each one.} is used as a feature. Humorous texts have a higher perplexity than those which are not humorous. 
\item[Negativity:] There is a certain kind of humor which tends to have negative connotations~\cite{Mihalcea_characterizinghumour:}~\cite{DBLP:journals/pdln/ReyesRMT09}. It can be about denying, such as when saying ``no'', ``don't'' or ``never'', when talking about subjects with negative polarity such as ``bad'', ``illegal'' or ``wrong'' or when it is related to words referring to stressful subjects, such as ``alcohol'' or ``lie''. \item[People-centered words:] Humorous texts are constantly referring to scenarios related to people, with dialogues and references such as ``you'', ``I'', ``woman'' and ``my''. This is supported by~\textcite{Mihalcea_characterizinghumour:, journals/ci/MihalceaS06}. \end{description} \textcite{Mihalcea:2005:MCL:1220575.1220642} used the features Adult Slang, Alliteration and Antonymy, while \textcite{conf/wilf/SjoberghA07} focused on Alliteration, Ambiguity, Keywords and People-centered words. Both studies collected humorous one-liners from the Internet. \textcite{conf/wilf/SjoberghA07} employed only the British National Corpus (BNC) as negative samples whereas \textcite{Mihalcea:2005:MCL:1220575.1220642} additionally used proverbs and news headlines from Reuters. In both works they tried with Naïve Bayes and Support Vector Machine classifiers, resulting in no significant difference between these techniques. On one hand, \textcite{Mihalcea:2005:MCL:1220575.1220642} achieved their best accuracy with headlines: 96.85\%, while they reached 84.82\% with proverbs and 79.15\% with the BNC\@. Alliteration proved to be the most accurate feature. On the other hand, \textcite{conf/wilf/SjoberghA07} achieved an accuracy of 85.40\%, with Keywords being the most useful. \Cref{tab:comparison} summarizes the main differences and compares both studies. \begin{table} \caption{Comparison of the approach of both works. The results are not directly comparable as they use different corpora.} \begin{tabular}{>{\Centering}m{1.8cm} >{\Centering}m{5.0cm} >{\Centering}m{5.0cm}} \toprule & \multicolumn{1}{c}{\textcite{Mihalcea:2005:MCL:1220575.1220642}} & \multicolumn{1}{c}{\textcite{conf/wilf/SjoberghA07}} \\ \midrule Negative samples & BNC sentences, news headlines and proverbs & Other sentences from BNC \\ \midrule Accuracy & 96.95\% with headlines, 79.15\% with the BNC and 84.82\% with the proverbs & 85.40\% \\ \midrule Features & Adult Slang, Alliteration and Antonymy & Alliteration, Ambiguity, Keywords and People-centered words \\ \bottomrule \end{tabular} \label{tab:comparison} \end{table} \section{Conclusions} \label{sec:conclusions} A crowdsourced corpus has been assembled, which serves the purpose of this work and could be useful for future research. It contains over 30,000 annotations for 16,488 tweets, coming from humorous accounts, and it also counts with 22,875 sourced from non-humorous accounts. Uses of such corpus include analyzing its data, as well as performing tasks similar to the work described herein. We have built a classifier which outperforms the baselines outlined. Support Vector Machine proved to be the best technique. It has a precision of 83.6\%, a recall of 68.9\%, a $F_1$ score of 75.5\% and an accuracy of 92.5\%. Nevertheless, it must be highlighted that the corpus built does not depict a great variety of humor. Hence, some features perform well in this work but might not perform so well in another context. 
As a future work, more complex features could be crafted, such as trying to detect wordplay and puns, ambiguity, perplexity against some language model, inter alia. Other Machine Learning techniques could also be tried. It would be interesting if we take advantage of the star ranking people provided; maybe this can also suggest how funny a joke is. As a harder task, humor generation could be tackled. Finally, it could be studied how the influence of humor varies between different social contexts, depending on gender, age, interest areas, mood, etc. \section{Proposal} \subsection{Corpus} \label{subsec:corpus} Our first goal is to build a corpus with samples of humorous and non-humorous tweets. Based on~\textcite{Mihalcea:2005:MCL:1220575.1220642}, we choose to use non-humorous sample tweets that fall into the following topics: news, reflections and curious facts. For humorous samples, we extracted tweets from accounts which appeared after having searched for the keyword ``chistes'' (``jokes'' in Spanish). In total, 16,488 tweets were extracted from humorous accounts and 22,875 from non-humorous. The two groups are composed of 9 Twitter accounts each, with the non-humorous containing 3 of each topic. The amount of tweets in each topic is similar. We tagged all tweets from news, reflections and curious facts as non-humorous, as random sampling showed that there was no humor in them. Conversely, not all tweets that were extracted from a humorous account were in fact humorous. Many of them were used to increase their number of followers, to express their opinion about a fact or to support a cause through retweets. A crowdsourced web\footnote{\url{http://clasificahumor.com}} and a mobile\footnote{\url{https://play.google.com/store/apps/details?id=com.clasificahumor.android}} annotation was carried out in order to tag all tweets from humorous accounts. In order to obtain as many annotations as possible, we wanted to keep it simple. Therefore, we showed random tweets to annotators (avoiding duplicates), providing no instructions, and let them implicitly define what humor is. In addition, the user interface was simple, as shown in \cref{fig:page}. The users could either provide a ranking of humor between one and five, express that the tweet was not humorous or skip it. \begin{figure} \includegraphics[frame]{page.png} \caption{Page used to annotate tweets, with an example tweet on screen.} \label{fig:page} \end{figure} In total, 33,531 annotations were achieved, after filtering some of them that occurred in a short time lapse in the same session and with the same tag. About half of the labels were non-humorous, while the other half was divided approximately between the five rankings. A histogram of the annotations is shown in \cref{fig:histogram}. Regarding the agreement among annotators, the Fleiss' Kappa measurement for tweets with 2 annotations\footnote{Note that Kappa assumes a fixed number of annotators. For this reason, we measure it with 2 and 6, in order to give an idea of the agreement having a value with many tweets but few annotators, and other value with few tweets but many annotators.} is 0.416 and for those with 6 annotations it is 0.325. \begin{figure} \includegraphics{histogram.png} \caption{Histogram of annotations. Note that most tweets have few annotations.} \label{fig:histogram} \end{figure} Based on this analysis, we have to decide which tweets are considered humorous. 
Let us define the tweets considered humorous as \emph{positives} and the ones considered as non-humorous as \emph{negatives}. The decision consisted in marking as positives those tweets whose ratio of humorous annotations is greater than or equal to 0.6 and as negatives those lower than or equal to 0.3. The rest are considered as \emph{doubtful}. The criterion of giving a 0.1 handicap to the positives was thereby performed, as they are obtained from humorous accounts. This may be seen as if the source is giving its opinion too. Additionally, those tweets with no annotations fall into the category of doubtful. \Cref{subfig:decision} illustrates the proportions of each category. To sum up, 5,952 tweets are considered positive. The rest of the tweets obtained from humorous accounts are not taken into account, even though the negatives can also be used. The corpus composition is shown in \cref{subfig:tweets_by_tag}. \begin{figure} \centering \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\textwidth]{decision.png} \caption{Graph showing the percentage of tweets from humorous accounts in each category.} \label{subfig:decision} \end{subfigure}\hfill \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[width=\textwidth]{tweets_by_tag.png} \caption{Pie displaying the ratio between positives and negatives in the corpus, after the decision was made.} \label{subfig:tweets_by_tag} \end{subfigure} \caption{} \end{figure} \section{Experimental Evaluation} \label{sec:evaluation} Provided that our work is the only one using this corpus, and even the only one with the goal of classifying humor in Spanish, we cannot directly compare it with any other work. Hence, we developed two baselines to compare it with, aiming them to be simple ideas which could be crafted to face this task. The first one (BL1) is a Multinomial Naïve Bayes classifier combined with Bag of Words similarly to the Topic Distance feature. The second one (BL2) is a classifier which predicts all tweets with the most likely outcome, \emph{non-humorous}, having a frequency of almost 83\%. A comparison using mainly the $F_1$ score is intended. We want to pay attention to the positives (the humorous) but also granting the same degree of importance to false positives and false negatives. Nonetheless, we take advantage of the runs in order to also pay attention to other measurements. The results are shown in \cref{tab:results}. \begin{table} \centering \caption{Results obtained with the different techniques over the test set. 
NPV, TNR and Neg.\ F1 refer to Precision, Recall and $F_1$ score, respectively, when reversing the roles positive-negative.} \begin{tabular}{c r r r r r r r} \toprule & \multicolumn{1}{c}{Precision} & \multicolumn{1}{c}{Recall} & \multicolumn{1}{c}{$F_1$} & \multicolumn{1}{c}{NPV} & \multicolumn{1}{c}{TNR} & \multicolumn{1}{c}{Neg.\ $F_1$} & \multicolumn{1}{c}{Accuracy} \\ \midrule BL1 & 0.617 & 0.846 & 0.714 & 0.966 & 0.892 & 0.714 & 0.885 \\ BL2 & N/A & 0.000 & N/A & 0.830 & 1.000 & 0.907 & 0.830 \\ \midrule SVM & 0.836 & 0.689 & \textbf{0.755} & 0.938 & 0.972 & \textbf{0.955} & \textbf{0.925} \\ DT & 0.665 & 0.675 & 0.670 & 0.933 & 0.930 & 0.932 & 0.889 \\ GNB & 0.575 & \textbf{0.782} & 0.663 & \textbf{0.952} & 0.882 & 0.915 & 0.865 \\ MNB & \textbf{0.848} & 0.600 & 0.703 & 0.923 & \textbf{0.978} & 0.950 & 0.914 \\ kNN & 0.813 & 0.663 & 0.730 & 0.934 & 0.969 & 0.951 & 0.917 \\ \bottomrule \end{tabular} \label{tab:results} \end{table} The best results are obtained with SVM, even in terms of accuracy. Also, kNN shows satisfactory output. These two approaches outperform the baselines, with the former clearly surpassing the latter. Meanwhile, GNB and DT have poor precision, although GNB certainly does a better job among these two and has the best recall. The confusion matrix for SVM is shown in \cref{tab:matrix}. \begin{table} \centering \caption{Confusion matrix for SVM classifier with respect to the test set} \begin{tabular}{c r r} \toprule & Positive & Negative \\ \midrule Positive & 842 & 381 \\ \midrule Negative & 165 & 5805 \\ \bottomrule \end{tabular} \label{tab:matrix} \end{table} \section{Introduction} The human being as a species is characterized by laughter. Humor, which is a potential cause of laughter, is an essential component of human communication. Not only does it allow people to feel comfortable, but also produces a cozier environment. While humor has been studied from a psychological, cognitive~\cite{humorJournal} and even linguistic~\cite{raskin1985semantic} standpoint, its study from a computational viewpoint is still an area to be explored within Computational Linguistics. There exist some previous works~\cite{so63066}; however, a humor characterization that allows its automatic recognition and generation is far from being specified, particularly for the Spanish language. Identifying humor in a text can be seen as an intermediate step for the resolution of more complex tasks. It would be interesting to generate jokes, or humor in general, based on the knowledge of which attributes enrich texts in a better way. Another appealing use case is to exploit the outcome of a humor detector to decide automatically if a text span can be taken seriously or not. On the other hand, by way of a more direct use, humor identification can be used to find jokes on Twitter, to search for potentially funny tweets about certain trending topic or to search for humorous answers to comments on the social network. We address herein the problem of detecting humor in Spanish tweets. It should be noted that this is different from trying to recognize humor in arbitrary texts, due to tweets' length. Here it could be assumed that tweets are either humorous or not, but not both, because they are brief (up to 140 characters). This is not always the case in others texts, as jokes could only exist in some parts but not on the whole text. Another advantage considered is that there are plenty of tweets available to analyze. 
Since there is no clear definition of what humor is, how can we detect something that is in principle vaguely stated? We explore different ideas, and we finally decide to let people define it themselves by voting on tweets through a web page and an Android app, in which they can label a tweet as humorous or not humorous. Once we have defined which tweets are humorous, we tackle the problem of humor detection using a supervised learning approach. In other words, we infer a function that identifies humor from labeled data. We use several techniques such as Support Vector Machine, Nearest Neighbors, Decision Trees and Naïve Bayes. In order to build a set of features, we first study the state of the art of the Computational Humor area, focusing on recognition and on the Spanish language. In \cref{sec:computational-humor} we present the humor detection problem and its state of the art, including features studied in previous works. In \cref{subsec:corpus} we show the corpus built for this purpose and in \cref{subsec:classifier} we describe the classifier used. Afterwards, we present an experimental evaluation in \cref{sec:evaluation} and finally the conclusions in \cref{sec:conclusions}.
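To make the kind of supervised pipeline described above concrete, the following is a minimal sketch (not the system used in this work) combining a plain bag-of-words representation with a linear Support Vector Machine in scikit-learn; the tweets, labels and parameters below are hypothetical placeholders.

\begin{verbatim}
# Minimal sketch of a supervised humor classifier (hypothetical toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Placeholder corpus: 1 = humorous, 0 = non-humorous.
tweets = [
    "que hace una abeja en el gimnasio? zumba!",
    "por que el libro de matematicas esta triste? tiene muchos problemas",
    "el gobierno anuncio hoy nuevas medidas economicas",
    "manana se espera lluvia en la region norte",
    "la vida es aquello que pasa mientras haces otros planes",
    "un dato curioso: los pulpos tienen tres corazones",
]
labels = [1, 1, 0, 0, 0, 0]

# Bag of words + linear SVM, one of the techniques mentioned above.
model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(tweets, labels)
print(classification_report(labels, model.predict(tweets)))
\end{verbatim}

In practice one would replace the toy lists by the annotated corpus, hold out a test set, and add the humor-specific features discussed earlier on top of the bag-of-words representation.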
\section{Introduction} \label{sect:intro} The purpose of the present paper is to identify the mirror B-model objects that enable us to solve certain graph enumeration problems. We consider simple and orbifold Hurwitz numbers, by giving a graph enumeration formulation for these numbers. We then show that the mirror of these counting problems are constructed from the \emph{edge-contraction operations} of \cite{OM3} applied to orbifold Hurwitz numbers for the case of \emph{genus $0$ and one-marked point}. Edge-contraction operations provide an effective method for graph enumeration problems. It has been noted in \cite{DMSS} that the Laplace transform of edge-contraction operations on many counting problems corresponds to the topological recursion of \cite{EO2007}. In this paper, we examine the construction of mirror B-models corresponding to the simple and orbifold Hurwitz numbers. In general, enumerative geometry problems, such as computation of Gromov-Witten type invariants, are often solved by studying a corresponding problem on the \emph{mirror dual} side. The effectiveness of the mirror method relies on complex analysis and holomorphic geometry technique that is available on the mirror B-model side. The question we consider in this paper is the following: \begin{quest} How do we find the mirror of a given enumerative problem? \end{quest} We give an answer to this question for a class of graph enumeration problems that are equivalent to counting orbifold Hurwitz numbers. The key is the edge-contraction operations. The base case, or the case for the ``moduli space'' ${\overline{\mathcal{M}}}_{0,1}$, of the edge contraction in the counting problem identifies the mirror dual object, and a universal mechanism of complex analysis, known as the \textbf{topological recursion} of \cite{EO2007}, solves the B-model side of the counting problem. The solution is a collection of generating functions of the original counting problem for all genera. Bouchard and Mari\~no \cite{BM} conjectured that generating functions for simple Hurwitz numbers could be calculated by the topological recursion of \cite{EO2007}, based on the spectral curve identified as the \textbf{Lambert curve} \begin{equation} \label{Lambert} x = y e^{-y}. \end{equation} Here, the notion of spectral curve is the mirror dual object for the counting problem. They arrived at the mirror dual by a consideration of mirror symmetry of \emph{open} Gromov-Witten invariants of toric Calabi-Yau threefolds \cite{BKMP1}. The mirror geometry of a toric Calabi-Yau threefold is completely determined by a plane algebraic curve known as the \emph{mirror curve}. The Lambert curve \eqref{Lambert} appears as the infinite framing number limit of the mirror curve of ${\mathbb{C}}^3$. The Hurwitz number conjecture of \cite{BM} was then solved in a series of papers by one of the authors \cite{EMS,MZ}, using the Lambert curve as a \emph{given} input. Since conjecture is true, the Lambert curve \eqref{Lambert} \emph{should be} the mirror B-model for Hurwitz numbers. But why? In \cite{EMS,MZ}, we did not attempt to give any explanation. The emphasis of our current paper is to prove that the mirror dual object is simply a consequence of the ${\overline{\mathcal{M}}}_{0,1}$ case of the edge-contraction operation on the original counting problem. The situation is similar to several cases of Gromov-Witten theory, where the mirror is constructed by the genus $0$ Gromov-Witten invariants themselves. 
To illustrate the idea, let us consider the number $T_d$ of connected \emph{trees} consisting of \emph{labeled} $d$ nodes (or vertices). The initial condition is $T_1 = 1$. The numbers satisfy a recursion relation \begin{equation} \label{tree} (d-1) T_d = {\frac{1}{2}} \sum_{\substack{a+b=d\\ a,b\ge 1}} ab \binom{d}{a}T_aT_b. \end{equation} A tree of $d$ nodes has $d-1$ edges. The left-hand side counts how many ways we can eliminate an edge. When an edge is eliminated, the tree breaks down into two disjoint pieces, one consisting of $a$ labeled nodes, and the other $b=d-a$ labeled nodes. The original tree is restored by connecting one of the $a$ nodes on one side to one of the $b$ nodes on the other side. The equivalence of counting in this elimination process gives \eqref{tree}. From the initial value, the recursion formula generates the tree sequence $1,1,3,16,125,1296,\dots$. We note, however, that \eqref{tree} does not directly give a closed formula for $T_d$. To find one, we introduce a generating function, or a \textbf{spectral curve} \begin{equation} \label{tree y} y = y(x) := \sum_{d=1}^\infty \frac{T_d}{(d-1)!} x^d. \end{equation} In terms of the generating function, \eqref{tree} becomes equivalent to \begin{equation} \label{tree diff} \left(x^2\circ \frac{d}{dx}\circ \frac{1}{x} \right) y = {\frac{1}{2}} x\frac{d}{dx} y^2 \Longleftrightarrow \frac{dx}{dy} = \frac{x(1-y)}{y}. \end{equation} The initial condition is $y(0) = 0$ and $y'(0)=1$, which allows us to solve the differential equation uniquely. Lo and behold, the solution is exactly \eqref{Lambert}. To find the formula for $T_d$, we need the \emph{Lagrange Inversion Formula}. Suppose that $f(y)$ is a holomorphic function defined near $y=0$, and that $f(0)\ne 0$. Then the inverse function of $ x = \frac{y}{f(y)} $ near $x=0$ is given by \begin{equation} \label{LIF} y = \sum_{k=1}^\infty \left.\left(\frac{d}{dy} \right)^{k-1} \big(f(y)^k\big)\right|_{y=0}\frac{x^k}{k!}. \end{equation} The proof is elementary and requires only Cauchy's integration formula. Since $f(y)=e^y$ in our case, we immediately obtain Cayley's formula $T_d = d^{d-2}$. The point we wish to make here is that the real problem behind the scene is not tree-counting, but \emph{simple Hurwitz numbers}. This relation is understood by the correspondence between trees and ramified coverings of ${\mathbb{P}}^1$ by ${\mathbb{P}}^1$ of degree $d$ that are simply ramified except for one total ramification point. When we look at the dual graph of a tree, elimination of an edge becomes contracting an edge, and this operation precisely gives a \emph{degeneration formula} for counting problems on ${\overline{\mathcal{M}}}_{g,n}$. The base case for the counting problem is $(g,n) = (0,1)$, and the recursion \eqref{tree} is the result of the edge-contraction operation for simple Hurwitz numbers associated with ${\overline{\mathcal{M}}}_{0,1}$. In this sense, the Lambert curve \eqref{Lambert} is the \emph{mirror dual} of simple Hurwitz numbers. The paper is organized as follows. In Section~\ref{sect:Hurwitz}, we present combinatorial graph enumeration problems, and show that they are equivalent to counting of simple and orbifold Hurwitz numbers. In Section~\ref{sect:spectral}, the spectral curves of the topological recursion for simple and orbifold Hurwitz numbers (the mirror objects to the counting problems) are constructed from the edge-contraction formulas for $(g,n) = (0,1)$ invariants. 
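As a concrete check of \eqref{tree} and \eqref{LIF} (added here for convenience; it is not needed for the argument), take $d=4$: the recursion gives
$$
3\,T_4 = \frac{1}{2}\left( 1\cdot 3\binom{4}{1} T_1T_3 + 2\cdot 2\binom{4}{2} T_2^2 + 3\cdot 1\binom{4}{3} T_3T_1\right) = \frac{1}{2}(36+24+36)=48,
$$
so $T_4=16=4^{4-2}$, in agreement with Cayley's formula. Likewise, with $f(y)=e^y$ in \eqref{LIF} we have $\left(\frac{d}{dy}\right)^{k-1}e^{ky}\big|_{y=0}=k^{k-1}$, so comparing with \eqref{tree y} gives $\frac{T_d}{(d-1)!}=\frac{d^{d-1}}{d!}$, that is, $T_d=d^{d-2}$.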
\section{Orbifold Hurwitz numbers as graph enumeration} \label{sect:Hurwitz} Mirror symmetry provides an effective tool for counting problems of Gromov-Witten type invariants. The question is how we construct the mirror, given a counting problem. Although there is so far no general formalism, we present a systematic procedure for computing orbifold Hurwitz numbers in this paper. The key observation is that the edge-contraction operations for $(g,n)=(0,1)$ identify the mirror object. The topological recursion for simple and orbifold Hurwitz numbers is derived as the Laplace transform of the cut-and-join equation \cite{BHLM, EMS, MZ}, where the spectral curves are identified by the consideration of mirror symmetry of toric Calabi-Yau orbifolds \cite{BHLM,BM, FLZ,FLZ2}. In this section we give a purely combinatorial graph enumeration problem that is equivalent to counting orbifold Hurwitz numbers. We then show in the next section that the edge-contraction formula restricted to the $(g,n)=(0,1)$ case determines the spectral curve and the differential forms $W_{0,1}$ and $W_{0,2}$ of \cite{BHLM}. These quantities form the mirror objects for the orbifold Hurwitz numbers. \subsection{Cell graphs} To avoid unnecessary confusion, we use the terminology \emph{cell graphs} in this article, instead of the more common ribbon graphs. Ribbon graphs naturally appear for encoding complex structures of a topological surface (see, for example, \cite{K1992, MP1998}). Our purpose in using these graphs is the degeneration of stable curves, and we label the vertices, instead of the \emph{faces}, of a ribbon graph. \begin{Def}[Cell graphs] A connected \textbf{cell graph} of topological type $(g,n)$ is the $1$-skeleton of a cell-decomposition of a connected closed oriented surface of genus $g$ with $n$ labeled $0$-cells. We call a $0$-cell a \emph{vertex}, a $1$-cell an \emph{edge}, and a $2$-cell a \emph{face}, of the cell graph. We denote by $\Gamma_{g,n}$ the set of connected cell graphs of type $(g,n)$. Each edge consists of two \textbf{half-edges} connected at the midpoint of the edge. \end{Def} \begin{rem} \begin{itemize} \item The \emph{dual} of a cell graph is a ribbon graph, or Grothendieck's dessin d'enfant. We note that we label vertices of a cell graph, which corresponds to face labeling of a ribbon graph. Ribbon graphs are also called by different names, such as embedded graphs and maps. \item We identify two cell graphs if there is a homeomorphism of the surfaces that brings one cell-decomposition to the other, keeping the labeling of $0$-cells. The only possible automorphisms of a cell graph come from cyclic rotations of half-edges at each vertex. \end{itemize} \end{rem} \begin{Def}[Directed cell graph] A \textbf{directed cell graph} is a cell graph for which an arrow is assigned to each edge. An arrow is the same as an ordering of the two half-edges forming an edge. The set of directed cell graphs of type $(g,n)$ is denoted by $\vec{\Gamma}_{g,n}$. \end{Def} \begin{rem} A directed cell graph is a \emph{quiver}. Since our graph is drawn on an oriented surface, a directed cell graph carries more information than its underlying quiver structure. The tail vertex of an arrowed edge is called the \emph{source}, and the head of the arrow the \emph{target}, in the quiver language. \end{rem} An effective tool in graph enumeration is the use of edge-contraction operations. Often edge contraction leads to an inductive formula for counting problems of graphs.
\begin{Def}[Edge-contraction operations] \label{def:ECO} There are two types of \textbf{edge-contraction operations} applied to cell graphs. \begin{itemize} \item \textbf{ECO 1}: Suppose there is a directed edge $\vec{E}=\overset{\longrightarrow}{p_ip_j}$ in a cell graph $\gamma\in \vec{\Gamma}_{g,n}$, connecting the tail vertex $p_i$ and the head vertex $p_j$. We \emph{contract} $\vec{E}$ in $\gamma$, and put the two vertices $p_i$ and $p_j$ together. We use $i$ for the label of this new vertex, and call it again $p_i$. Then we have a new cell graph $\gamma'\in \vec{\Gamma}_{g,n-1}$ with one fewer vertex. In this process, the topology of the surface on which $\gamma$ is drawn does not change. Thus genus $g$ of the graph stays the same. \begin{figure}[htb] \includegraphics[height=1in]{figECO1gen.pdf} \caption{Edge-contraction operation ECO 1. The edge bounded by two vertices $p_i$ and $p_j$ is contracted to a single vertex $p_i$. } \label{fig:ECO 1} \end{figure} \item We use the notation $\vec{E}$ for the edge-contraction operation \begin{equation}\label{ECO1} \vec{E}:\vec{\Gamma}_{g,n}\owns \gamma\longmapsto \gamma'\in \vec{\Gamma}_{g,n-1}. \end{equation} \item \textbf{ECO 2}: Suppose there is a directed loop $\vec{L}$ in $\gamma\in\vec{\Gamma}_{g,n}$ at the $i$-th vertex $p_i$. Since a loop in the $1$-skeleton of a cell decomposition is a topological cycle on the surface, its contraction inevitably changes the topology of the surface. First we look at the half-edges incident to vertex $p_i$. Locally around $p_i$ on the surface, the directed loop $\vec{L}$ separates the neighborhood of $p_i$ into two pieces. Accordingly, we put the incident half-edges into two groups. We then break the vertex $p_i$ into two vertices, $p_{i_1}$ and $p_{i_2}$, so that one group of half-edges is incident to $p_{i_1}$, and the other group to $p_{i_2}$. The order of the two vertices is determined by placing the loop $\vec{L}$ \emph{upward} near the vertex $p_i$. Then we name the new vertex on its left by $p_{i_1}$, and on its right by $p_{i_2}$. Let $\gamma'$ denote the possibly disconnected graph obtained by contracting $\vec{L}$ and separating the vertex into two distinct vertices labeled by $i_1$ and $i_2$. \begin{figure}[htb] \includegraphics[height=1.1in]{figECO2gen.pdf} \caption{Edge-contraction operation ECO 2. The contracted edge is a loop $\vec{L}$ of a cell graph. Place the loop so that it is upward near the vertex $p_i$ to which $\vec{L}$ is attached. The vertex $p_i$ is then broken into two vertices, $p_{i_1}$ on the left, and $p_{i_2}$ on the right. Half-edges incident to $p_i$ are separated into two groups, belonging to two sides of the loop near $p_i$. } \label{fig:ECO 2} \end{figure} \item If $\gamma'$ is connected, then it is in $\vec{\Gamma}_{g-1,n+1}$. The loop $\vec{L}$ is a \textit{loop of handle}. We use the same notation $\vec{L}$ to indicate the edge-contraction operation \begin{equation}\label{ECO2-1} \vec{L}:\vec{\Gamma}_{g,n}\owns \gamma\longmapsto \gamma'\in \vec{\Gamma}_{g-1,n+1}. \end{equation} \item If $\gamma'$ is disconnected, then write $\gamma'=(\gamma_1,\gamma_2)\in \vec{\Gamma}_{g_1,|I|+1} \times \vec{\Gamma}_{g_2,|J|+1}$, where \begin{equation}\label{disconnected} \begin{cases} g=g_1+g_2\\ I\sqcup J = \{1,\dots,\widehat{i},\dots,n\} \end{cases}. \end{equation} The edge-contraction operation is again denoted by \begin{equation}\label{ECO2-2} \vec{L}:\vec{\Gamma}_{g,n}\owns \gamma\longmapsto (\gamma_1,\gamma_2)\in \vec{\Gamma}_{g_1,|I|+1}\times \vec{\Gamma}_{g_2,|J|+1}.
\end{equation} In this case we call $\vec{L}$ a \textit{separating loop}. Here, vertices labeled by $I$ belong to the connected component of genus $g_1$, and those labeled by $J$ are on the other component of genus $g_2$. Let $(I_-,i,I_+)$ (resp. $(J_-,i,J_+)$) be the reordering of $I\sqcup \{i\}$ (resp. $J\sqcup \{i\}$) in the increasing order. Although we give the labels $i_1,i_2$ to the two vertices created by breaking $p_i$, since they belong to distinct graphs, we can simply use $i$ for the label of $p_{i_1}\in \gamma_1$ and the same $i$ for $p_{i_2}\in \gamma_2$. The arrow of $\vec{L}$ encodes the ordering of the two vertices $p_{i_1}$ and $p_{i_2}$. \end{itemize} \end{Def} \begin{rem} The use of directed cell graphs enables us to define edge-contraction operations, keeping track of the vertex labeling. We refer to \cite{OM6} for the actual motivation for quiver cell graphs. Since our main concern is enumeration of graphs, the extra data of directed edges does not play any role. In what follows, we deal with cell graphs without directed edges. The edge-contraction operations are defined with a choice of direction, but the counting formula we derive does not depend on this choice. \end{rem} \begin{rem} Let us define $m(\gamma)=2g-2+n$ for a graph $\gamma\in \Gamma_{g,n}$. Then every edge-contraction operation reduces $m(\gamma)$ exactly by $1$. Indeed, for ECO 1, we have $$ m(\gamma') = 2g -2 +(n-1) = m(\gamma)-1. $$ The ECO 2 applied to a loop of handle produces $$ m(\gamma') = 2(g-1)-2+(n+1) = m(\gamma)-1. $$ For a separating loop, we have $$ \begin{matrix} &2g_1-2+|I|+1 \\ {+)}&{2g_2-2+|J|+1} \\ &\overline{2g_1+2g_2-4+|I|+|J|+2} &=\; \;2g-2+n-1. \end{matrix} $$ \end{rem} \subsection{$r$-Hurwitz graphs} We choose and fix a positive integer $r$. The decorated graphs we wish to enumerate are the following. \begin{Def}[$r$-Hurwitz graph] An $r$-\textbf{Hurwitz graph} $(\gamma,D)$ of type $(g,n,d)$ consists of the following data. \begin{itemize} \item $\gamma$ is a connected cell graph of type $(g,n)$, with $n$ labeled vertices. \item $|D|=d$ is divisible by $r$, and $\gamma$ has $m=d/r$ unlabeled faces and $s$ unlabeled edges, where \begin{equation} \label{s} s= 2g-2+\frac{d}{r} + n. \end{equation} \item $D$ is a configuration of $d=rm$ unlabeled dots on the graph subject to the following conditions: \begin{enumerate} \item The set of $d$ dots is grouped into $m$ subsets of $r$ dots, each of which is equipped with a cyclic order. \item Every face of $\gamma$ has $r$ cyclically ordered dots. \item These dots are clustered near vertices of the face. At each corner of the face, say at Vertex $i$, the dots are ordered according to the cyclic order that is consistent with the orientation of the face, which is chosen to be counter-clockwise. \item Let $\mu_i$ denote the total number of dots clustered at Vertex $i$. Then $\mu_i>0$ for every $i=1,\dots,n$. Thus we have an ordered partition \begin{equation} \label{ordered partition} d = \mu_1+\cdots+\mu_n. \end{equation} In particular, the number of vertices satisfies $0< n\le d$. \item Suppose an edge $E$ connecting two distinct vertices, say Vertex $i$ and $j$, bounds the same face twice. Let $p$ be the midpoint of $E$. The polygon representing the face has $E$ twice on its perimeter, hence the point $p$ appears also twice. We name them as $p$ and $p'$. Which one we call $p$ or $p'$ does not matter.
Consider a path on the perimeter of this polygon starting from $p$ and ending at $p'$ according to the counter-clockwise orientation. Let $r'$ be the total number of dots clustered around vertices of the face, counted along the path. Then it satisfies \begin{equation} \label{dot condition} 0<r'<r. \end{equation} For example, not all $r$ dots of a face can be clustered at a vertex of degree $1$. In particular, for the case of $r=1$, the graph $\gamma$ has no edges bounding the same face twice. \end{enumerate} \end{itemize} An \textbf{arrowed} $r$-Hurwitz graph $(\gamma,\vec{D})$ has, in addition to the above data $(\gamma,D)$, an arrow assigned to one of the $\mu_i$ dots from Vertex $i$ for each index $1\le i \le n$. \end{Def} The counting problem we wish to study is the number ${\mathcal{H}}_{g,n}^r(\mu_1,\dots,\mu_n)$ of arrowed $r$-Hurwitz graphs for a prescribed ordered partition \eqref{ordered partition}, counted with the automorphism weight. The combinatorial data corresponds to an object in algebraic geometry. Let us first identify what the $r$-Hurwitz graphs represent. We denote by ${\mathbb{P}}^1[r]$ the $1$-dimensional orbifold modeled on ${\mathbb{P}}^1$ that has one stacky point $\left[0 \big/\big({\mathbb{Z}}/(r)\big)\right]$ at $0\in {\mathbb{P}}^1$. \begin{ex} The base case is ${\mathcal{H}}_{0,1}^r(r)=1$ (see Figure~\ref{fig:H01}). This counts the identity morphism ${\mathbb{P}}^1[r] \overset{\sim}{\longrightarrow} {\mathbb{P}}^1[r]$. \end{ex} \begin{figure}[htb] \centerline{\epsfig{file=figH01r.pdf, width=0.7in}} \caption{The graph has only one vertex and no edges. All $r$ dots are clustered around this unique vertex, with an arrow attached to one of them. Because of the arrow, there is no automorphism of this graph. } \label{fig:H01} \end{figure} \begin{Def}[Orbifold Hurwitz cover and Orbifold Hurwitz numbers] An \emph{orbifold Hurwitz cover} $f:C\longrightarrow {\mathbb{P}}^1[r]$ is a morphism from an orbifold $C$ that is modeled on a smooth algebraic curve of genus $g$ that has \begin{enumerate} \item $m$ stacky points of the same type as the one on the base curve that are all mapped to $\left[0 \big/\big({\mathbb{Z}}/(r)\big)\right]\in{\mathbb{P}}^1[r]$, \item arbitrary profile $(\mu_1,\dots,\mu_n)$ with $n$ labeled points over $\infty\in {\mathbb{P}}^1[r]$, \item and all other ramification points are simple. \end{enumerate} If we replace the target orbifold by ${\mathbb{P}}^1$, then the morphism is a regular map from a smooth curve of genus $g$ with profile $(\overbrace{r,\dots,r}^{m})$ over $0\in{\mathbb{P}}^1$, labeled profile $(\mu_1,\dots,\mu_n)$ over $\infty\in{\mathbb{P}}^1$, and a simple ramification at any other ramification point. The Euler characteristic condition \eqref{s} of the graph $\gamma$ gives the number of simple ramification points of $f$ through the Riemann-Hurwitz formula. The automorphism-weighted count of the topological types of such covers is denoted by $H_{g,n}^r (\mu_1,\dots,\mu_n)$. These numbers are referred to as \emph{orbifold Hurwitz numbers}. When $r=1$, they count the usual simple Hurwitz numbers. \end{Def} The counting of the topological types is the same as counting actual orbifold Hurwitz covers such that all simple ramification points are mapped to one of the $s$-th roots of unity $\xi^1,\dots,\xi^s$, where $\xi = \exp(2\pi i/s)$, if all simple ramification points of $f$ are labeled. Indeed, such a labeling is given by elements of the cyclic group $\{\xi^1,\dots,\xi^s\}$ of order $s$.
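For completeness (a short verification added here), the edge count \eqref{s} is exactly the Riemann--Hurwitz count of the simple ramification points: for a degree-$d$ cover with profile $(\overbrace{r,\dots,r}^{m})$ over $0$, profile $(\mu_1,\dots,\mu_n)$ over $\infty$, and $s$ simple ramification points,
$$
2g-2 = -2d + (d-m) + (d-n) + s, \qquad\text{hence}\qquad s = 2g-2+m+n = 2g-2+\frac{d}{r}+n .
$$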
Let us construct an edge-labeled Hurwitz graph from an orbifold Hurwitz cover with fixed branch points on the target as above. We first review the case of $r=1$, i.e., the simple Hurwitz covers. Our graph is essentially the same as the dual of the \emph{branching graph} of \cite{OP}. \subsection{Construction of $r$-Hurwitz graphs} First we consider the case $r=1$. Let $f:C\longrightarrow {\mathbb{P}}^1$ be a simple Hurwitz cover of genus $g$ and degree $d$ with labeled profile $(\mu_1,\dots,\mu_n)$ over $\infty$, unramified over $0\in{\mathbb{P}}^1$, and simply ramified over $B=\{\xi^1,\dots,\xi^s\} \subset {\mathbb{P}}^1$, where $\xi = \exp(2\pi i/s)$ and $s=2g-2+d+n$. We denote by $R = \{p_1,\dots,p_s\}\subset C$ the labeled simple ramification points of $f$, which is mapped bijectively onto $B$ by $f:R\longrightarrow B$. We choose a labeling of $R$ so that $f(p_\alpha)= \xi^\alpha$ for every $\alpha=1,\dots,s$. On ${\mathbb{P}}^1$, plot $B$ and connect each element $\xi^\alpha\in B$ with $0$ by a straight line segment. We also connect $0$ and $\infty$ by a straight line $z=t \exp(\pi i/s)$, $0\le t\le \infty$. Let $*$ denote the configuration of the $s$ line segments. The inverse image $f^{-1}(*)$ is a cell graph on $C$, for which $f^{-1}(0)$ forms the set of vertices. We remove all inverse images $f^{-1}(\overline{0\xi^\alpha})$ of the line segment $\overline{0\xi^\alpha}$ from this graph, except for the ones that end at one of the points $p_\alpha\in R$. Since $p_\alpha$ is a simple ramification point of $f$, the line segment ending at $p_\alpha$ extends to another vertex, i.e., another point in $f^{-1}(0)$. We denote by $\gamma^{\vee}$ the graph after this removal of line segments. We define the edges of the graph to be the connected line segments at $p_\alpha$ for some $\alpha$. We use $p_\alpha$ as the label of the edge. The graph $\gamma^{\vee}$ has $d$ vertices, $s$ edges, and $n$ faces. An inverse image of the line $\overline{0\infty}$ is a ray starting at a vertex of the graph $\gamma^{\vee}$ and ending at one of the points in $f^{-1}(\infty)$, which is the center of a face. We place a dot on this line near the vertex. The edges of $\gamma^{\vee}$ incident to a vertex are cyclically ordered counter-clockwise, following the natural cyclic order of $B$. Let $p_\alpha$ be an edge incident to a vertex, and $p_\beta$ the next one at the same vertex according to the cyclic order. We denote by $d_{\alpha\beta}$ the number of dots in the span of two edges $p_\alpha$ and $p_\beta$, which is $0$ if $\alpha<\beta$, and $1$ if $\beta<\alpha$. Now we consider the dual graph $\gamma$ of $\gamma^{\vee}$. It has $n$ vertices, $d$ faces, and $s$ edges still labeled by $\{p_1,\dots,p_s\}$. At the angled corner between the two adjacent edges labeled by $p_\alpha$ and $p_\beta$ in this order according to the cyclic order, we place $d_{\alpha\beta}$ dots. The data $(\gamma,D)$ consisting of the cell graph $\gamma$ and the dot configuration $D$ is the Hurwitz graph corresponding to the simple Hurwitz cover $f:C\longrightarrow {\mathbb{P}}^1$ for $r=1$. It is obvious that what we obtain is an $r=1$ Hurwitz graph, except for the condition (5) of the configuration $D$, which requires an explanation. The dual graph $\gamma^{\vee}$ for $r=1$ is the \emph{branching graph} of \cite{OP}. Since $|B|=s$ is the number of simple ramification points, which is also the number of edges of $\gamma^{\vee}$, the branching graph cannot have any loops.
This is because two distinct powers of $\xi$ in the range $1,\dots,s$ cannot be the same. This fact is reflected in the condition that $\gamma$ has no edge that bounds the same face twice. This explains the condition (5) for $r=1$. \begin{rem} If we consider the case $r=1, g=0$ and $n=1$, then $s=d-1$. Hence the graph $\gamma^\vee$ is a connected tree consisting of $d$ nodes (vertices) and $d-1$ \emph{labeled} edges. Except for $d=1,2$, every vertex is uniquely labeled by incident edges. The tree counting of the Introduction is relevant to Hurwitz numbers in this way. \end{rem} Now let us consider an orbifold Hurwitz cover $f:C\longrightarrow {\mathbb{P}}^1[r]$ of genus $g$ and degree $d=rm$ with labeled profile $(\mu_1,\dots,\mu_n)$ over $\infty$, $m$ isomorphic stacky points over $\left[0 \big/\big({\mathbb{Z}}/(r)\big)\right]\in{\mathbb{P}}^1[r]$, and simply ramified over $B=\{\xi^1,\dots,\xi^s\} \subset {\mathbb{P}}^1[r]$, where $s=2g-2+m+n$. By $R = \{p_1,\dots,p_s\}\subset C$ we indicate the labeled simple ramification points of $f$, which is again mapped bijectively onto $B$ by $f:R\longrightarrow B$. We choose the same labeling of $R$ so that $f(p_\alpha)= \xi^\alpha$ for every $\alpha=1,\dots,s$. On ${\mathbb{P}}^1[r]$, plot $B$ and connect each element $\xi^\alpha\in B$ with the stacky point at $0$ by a straight line segment. We also connect $0$ and $\infty$ by a straight line $z=t \exp(\pi i/s)$, $0\le t\le \infty$, as before. Let $*$ denote the configuration of the $s$ line segments. The inverse image $f^{-1}(*)$ is a cell graph on $C$, for which $f^{-1}(0)$ forms the set of vertices. We remove all inverse images $f^{-1}(\overline{0\xi^\alpha})$ of the line segment $\overline{0\xi^\alpha}$ from this graph, except for the ones that end at one of the points $p_\alpha\in R$. We denote by $\gamma^{\vee}$ the graph after this removal of line segments. We define the edges of the graph to be the connected line segments at $p_\alpha$ for some $\alpha$. We use $p_\alpha$ as the label of the edge. The graph $\gamma^{\vee}$ has $m$ vertices and $s$ edges. The inverse image of the line $\overline{0\infty}$ forms a set of $r$ rays at each vertex of the graph $\gamma^{\vee}$, connecting $m$ vertices and $n$ centers $f^{-1}(\infty)$ of faces. We place a dot on each of these rays near the corresponding vertex. These dots are cyclically ordered according to the orientation of $C$, which we choose to be counter-clockwise. The edges of $\gamma^{\vee}$ incident to a vertex are also cyclically ordered in the same way. Let $p_\alpha$ be an edge incident to this vertex, and $p_\beta$ the next one according to the cyclic order. We denote by $d_{\alpha\beta}$ the number of dots in the span of two edges $p_\alpha$ and $p_\beta$. Let $\gamma$ denote the dual graph of $\gamma^{\vee}$. It now has $n$ vertices, $m$ faces, and $s$ edges still labeled by $\{p_1,\dots,p_s\}$. At the angled corner between the two adjacent edges labeled by $p_\alpha$ and $p_\beta$ in this order according to the cyclic order, we place $d_{\alpha\beta}$ dots, again cyclically ordered as on $\gamma^{\vee}$. The data $(\gamma,D)$ consisting of the cell graph $\gamma$ and the dot configuration $D$ is the $r$-Hurwitz graph corresponding to the orbifold Hurwitz cover $f:C\longrightarrow {\mathbb{P}}^1[r]$. We note that $\gamma^\vee$ can have loops, unlike the case of $r=1$. Let us place $\gamma^\vee$ locally on an oriented plane around a vertex. The plane is locally separated into $r$ sectors by the $r$ rays $f^{-1}(\overline{0\infty})$ at this vertex.
There are $s$ half-edges coming out of the vertex at each of these $r$ sectors. A half-edge corresponding to $\xi^\alpha$ cannot be connected to another half-edge corresponding to $\xi^\beta$ in the same sector, for the same reason as in the case $r=1$. But it can be connected to another half-edge of a different sector corresponding again to the same $\xi^\alpha$. In this case, within the loop there are some dots, representing the rays of $f^{-1}(\overline{0\infty})$ in between these half-edges. The total number of dots in the loop cannot be $r$, because then the half-edges being connected are in the same sector. Thus the condition (5) is satisfied. \begin{ex} Theorem~\ref{thm:ECF} below shows that $$ {\mathcal{H}}_{0,2}^2(3,1) = \frac{9}{2}. $$ This is the weighted count of the number of $2$-Hurwitz graphs of type $(g,n,d) =(0,2,4)$ with an ordered partition $4 = 3+1$. \begin{figure}[htb] \label{fig:H02} \includegraphics[width=2in]{figH02map.pdf} \caption{Hurwitz covers counted in ${\mathcal{H}}_{0,2}^2(3,1)$ have two orbifold points, two simple ramification points, and one ramification point of degree $3$. } \end{figure} \begin{figure}[htb] \includegraphics[height=0.6in]{figH02graph.pdf} \caption{There are two $2$-Hurwitz graphs. The count is $3/2$ for the graph on the left, taking the automorphism into account, and $3$ for the one on the right. The total is thus $9/2$.} \label{fig:H02graph} \end{figure} In terms of formulas, the $2$-Hurwitz cover corresponding to the graph on the left of Figure~\ref{fig:H02graph} is given by $$ f(x) = \frac{(x-1)^2(x+1)^2}{x}. $$ To make the images of the simple ramification points sit on $\pm 1$, we need to divide $f(x)$ by $f(i/\sqrt{3})$, where $x=\pm i/\sqrt{3}$ are the simple ramification points. The $2$-Hurwitz cover corresponding to the graph on the right of Figure \ref{fig:H02graph} is given by $$ f(x) = \frac{(x-1)^2(x+1)^2}{x-a}, $$ where $a$ is a real number satisfying $|a|>\sqrt{3}/2$. The real parameter $a$ changes the topological type of the $2$-Hurwitz cover. For $-\frac{\sqrt{3}}{2}<a<\frac{\sqrt{3}}{2}$, the graph is the same as on the left, and for $|a|>\frac{\sqrt{3}}{2}$, the graph becomes the one on the right. \end{ex} \subsection{The edge-contraction formulas} \begin{Def}[Edge-contraction operations] The edge-contraction operations (ECOs) on an arrowed $r$-Hurwitz graph $(\gamma,\vec{D})$ are the following procedures. Choose an edge $E$ of the cell graph $\gamma$. \begin{itemize} \item \textbf{ECO 1}: We consider the case that $E$ is an edge connecting two distinct vertices Vertex $i$ and Vertex $j$. We can assume $i<j$, which induces a direction $i\overset{E}{\longrightarrow} j$ on $E$. Let us denote by $F_+$ and $F_-$ the faces bounded by $E$, where $F_+$ is on the left side of $E$ with respect to the direction. We now contract $E$, with the following additional operations: \begin{enumerate} \item Remove the original arrows at Vertices $i$ and $j$. \item Put the dots on $F_\pm$ clustered at Vertices $i$ and $j$ together, keeping the cyclic order of the dots on each of $F_\pm$. \item Place a new arrow on the largest dot on the corner at Vertex $i$ of Face $F_+$ with respect to the cyclic order. \item If there are no dots on this particular corner, then place an arrow on the first dot we encounter according to the counter-clockwise rotation from $E$ centered at Vertex $i$. \end{enumerate} \end{itemize} The new arrow at the joined vertex allows us to recover the original graph from the new one.
\begin{figure}[htb] \includegraphics[height=0.9in]{figECO1.pdf} \caption{After contracting the edge, a new arrow is placed on the dot that is the largest (according to the cyclic order) around Vertex $i$ in the original graph, and on the face incident to $E$ which is on the left of $E$ with respect to the direction $i\rightarrow j$. The new arrow tells us where the break is made in the original graph. If there are no dots on this particular face, then we go around Vertex $i$ counter-clockwise and find the first dot in the original graph. We place an arrow on this dot in the new graph after contracting $E$. Here again the purpose is to identify which of the $\mu_i$ dots come from the original Vertex $i$. } \label{fig:ECO1} \end{figure} \begin{itemize} \item \textbf{ECO 2}: This time $E$ is a loop incident to Vertex $i$ twice. We contract $E$ and separate the vertex into two new ones, as in ECO 2 of Definition~\ref{def:ECO}. The additional operations are: \begin{enumerate} \item The contraction of a loop does not change the number of faces. Separate the dots clustered at Vertex $i$ according to the original configuration. \item Look at the new vertex at which the original arrow is placed. We keep the same name $i$ for this vertex. The other vertex is named $i'$. \item Place a new arrow on the dot at the corner at the new Vertex $i$ that was the largest in the original corner with respect to the cyclic order. \item If there are no dots on this particular corner, then place an arrow on the first dot we encounter according to the counter-clockwise rotation from $E$ centered at Vertex $i$, on the side of the old arrow. \item We do the same operation for the new Vertex $i'$, and put a new arrow on a dot. \item Now remove the original arrow. \end{enumerate} \end{itemize} \begin{figure}[htb] \includegraphics[height=0.9in]{figECO2.pdf} \caption{New arrows are placed so that the original graph can be recovered from the new one. } \label{fig:ECO2} \end{figure} \end{Def} Although cumbersome, it is straightforward to show the following. \begin{lem} The edge-contraction operations preserve the set of $r$-Hurwitz graphs. \end{lem} An application of the edge-contraction operations is the following counting recursion formula. \begin{thm}[Edge-Contraction Formula] \label{thm:ECF} The numbers of arrowed Hurwitz graphs satisfy the following edge-contraction formula. \begin{equation} \label{ECF} \begin{aligned} &\left(2g-2+\frac{d}{r}+ n\right) {\mathcal{H}}_{g,n}^r(\mu_1,\dots,\mu_n) \\ &\qquad = \sum_{i< j}\mu_i\mu_j {\mathcal{H}}_{g,n-1}^r (\mu_1,\dots,\mu_{i-1}, \mu_i+\mu_j,\mu_{i+1},\dots, \widehat{\mu_j},\dots,\mu_n) \\ &\qquad+ {\frac{1}{2}} \sum_{i=1}^n \mu_i \sum_{\substack{\alpha+\beta=\mu_i\\ \alpha, \beta\ge 1}} \left[{\mathcal{H}}_{g-1,n+1}^r (\alpha,\beta,\mu_1,\dots, \widehat{\mu_i},\dots,\mu_n) \phantom{\sum_{\substack{g_1+g_2=g\\ I\sqcup J = \{1,\dots,\hat{i},\dots,n\}}}} \right. \\ &\qquad+ \left. \sum_{\substack{g_1+g_2=g\\ I\sqcup J = \{1,\dots,\hat{i},\dots,n\}}} {\mathcal{H}}_{g_1,|I|+1}^r (\alpha,\mu_I) {\mathcal{H}}_{g_2,|J|+1}^r (\beta,\mu_J) \right]. \end{aligned} \end{equation} Here, $\widehat{\;\;}$ indicates the omission of the index, and $\mu_I = (\mu_i)_{i\in I}$ for any subset $I\subset\{1,2,\dots,n\}$. \end{thm} \begin{rem} The edge-contraction formula (ECF) is a recursion with respect to the number of edges $$ s = 2g-2+\frac{\mu_1+\cdots+\mu_n}{r} +n. $$ Therefore, it calculates all values of ${\mathcal{H}}_{g,n}^r(\mu_1,\dots,\mu_n)$ from the base case ${\mathcal{H}}_{0,1}^r(r)$.
However, it does not determine the initial value itself, since $s=0$. We also note that the recursion is not for ${\mathcal{H}}_{g,n}^r$ as a function in $n$ integer variables. \end{rem} \begin{proof} The counting is done by applying the edge-contraction operations. The left-hand side of \eqref{ECF} shows the choice of an edge, say $E$, out of $ s = 2g-2+\frac{d}{r}+n $ edges. The first line of the right-hand side corresponds to the case that the chosen edge $E$ connects Vertex $i$ and Vertex $j$. We assume $i<j$, and apply ECO~1. The factor $\mu_i\mu_j$ indicates the removal of two arrows at these vertices (Figure~\ref{fig:ECO1}). When the edge $E$ we have chosen is a loop incident to Vertex $i$ twice, we apply ECO~2. The factor $\mu_i$ is the removal of the original arrow (Figure~\ref{fig:ECO2}). The second and third lines on the right-hand side correspond to whether $E$ is a handle-cutting loop or a separating loop. The factor ${\frac{1}{2}}$ is there because of the symmetry between $\alpha$ and $\beta$ of the partition of $\mu_i$. This completes the proof. \end{proof} \begin{thm}[Graph enumeration and orbifold Hurwitz numbers] The graph enumeration and the orbifold Hurwitz numbers are related by the following formula: \begin{equation} \label{equivalence} {\mathcal{H}}_{g,n}^r(\mu_1,\dots,\mu_n) = \mu_1\mu_2\cdots\mu_n H_{g,n}^r(\mu_1,\dots,\mu_n). \end{equation} \end{thm} \begin{proof} The simplest orbifold Hurwitz number is $H_{0,1}^r(r)$, which counts double Hurwitz covers with the same profile $(r)$ at both $0\in {\mathbb{P}}^1$ and $\infty\in {\mathbb{P}}^1$. There is only one such map $f:{\mathbb{P}}^1\longrightarrow {\mathbb{P}}^1$, which is given by $f(x) = x^r$. Since the map has automorphism group ${\mathbb{Z}}/(r)$, we have $H_{0,1}^r(r) = 1/r$. Thus \eqref{equivalence} holds for the base case. We notice that \eqref{ECF} is exactly the same as the cut-and-join equation of \cite[Theorem~2.2]{BHLM}, after multiplying the orbifold Hurwitz numbers by $\mu_1\cdots\mu_n$. Since the initial value is the same, and the formulas are recursions based on $s=2g-2+\frac{d}{r}+ n$, \eqref{equivalence} holds by induction. This completes the proof. \end{proof} \section{Construction of the mirror spectral curves for orbifold Hurwitz numbers} \label{sect:spectral} In the earlier work on simple and orbifold Hurwitz numbers in connection with the topological recursion \cite{BHLM,BM,DLN,EMS,MZ}, the spectral curves are determined by the infinite framing limit of the mirror curves to toric Calabi-Yau (orbi-)threefolds. The other ingredients of the topological recursion, the differential forms $W_{0,1}$ and $W_{0,2}$, are calculated by the Laplace transform of the $(g,n)=(0,1)$ and $(0,2)$ cases of the ELSV \cite{ELSV} and JPT \cite{JPT} formulas. Certainly the logic is clear, but why these choices are the right ones is not well explained. In this section, we show that the edge-contraction operations themselves determine all the mirror ingredients, i.e., the spectral curve, $W_{0,1}$, and $W_{0,2}$. The structure of the story is the following. The edge-contraction formula \eqref{ECF} is an equation among different values of $(g,n)$. When restricted to $(g,n)= (0,1)$, it produces an equation on ${\mathcal{H}}_{0,1}^r(d)$ as a function in one integer variable.
The generating function of ${\mathcal{H}}_{g,n}^r(\mu_1,\dots,\mu_n)$ is reasonably complicated, but it can be expressed rather nicely in terms of the generating function of the $(0,1)$-values ${\mathcal{H}}_{0,1}^r(d)$, which is essentially the spectral curve of the theory. The edge-contraction formula \eqref{ECF} itself has the Laplace transform that can be calculated in the spectral curve coordinate. Since \eqref{ECF} contains $(g,n)$ on each side of the equation, to make it a genuine recursion formula for functions with respect to $2g-2+n$ in the stable range, we need to calculate the generating functions of ${\mathcal{H}}_{0,1}^r(d)$ and ${\mathcal{H}}_{0,2}^r(\mu_1,\mu_2)$, and make the rest of \eqref{ECF} free of unstable terms. The result is the topological recursion of \cite{BHLM,EMS}. Let us now start with the restricted \eqref{ECF} on $(0,1)$ invariants: \begin{equation} \label{ECF01} \left(\frac{d}{r} -1 \right){\mathcal{H}}_{0,1}^r (d) ={\frac{1}{2}} d \sum_{\substack{\alpha+\beta=d\\\alpha,\beta\ge 1}} {\mathcal{H}}_{0,1}^r (\alpha){\mathcal{H}}_{0,1}^r (\beta). \end{equation} At this stage, we introduce a generating function \begin{equation} \label{H01generating} y=y(x)=\sum_{d=1}^\infty {\mathcal{H}}_{0,1}^r (d) x^d. \end{equation} In terms of this generating function, \eqref{ECF01} is a differential equation \begin{equation} \label{ECFdiff} \left(x^{r+1} \circ\frac{d}{dx}\circ \frac{1}{x^r}\right) y = {\frac{1}{2}} r x \frac{d}{dx} y^2, \end{equation} or simply $$ \frac{y'}{y}-ry'=\frac{r}{x}. $$ Its unique solution is $$ Cx^r= y e^{-ry} $$ with a constant of integration $C$. As we noted in the previous section, the recursion \eqref{ECF} does not determine the initial value ${\mathcal{H}}_{0,1}^r(d)$. For our graph enumeration problem, the values are \begin{equation} \label{H01initial} {\mathcal{H}}_{0,1}^r(d) = \begin{cases} 0 \qquad 1\le d<r; \\ 1 \qquad d = r, \end{cases} \end{equation} which determine $C=1$. Thus we find \begin{equation} \label{r-Lambert} x^r = y e^{-ry}, \end{equation} which is the $r$-Lambert curve of \cite{BHLM}. This is indeed the spectral curve for the orbifold Hurwitz numbers. \begin{rem} We note that $r{\mathcal{H}}_{0,1}^r(rm)$ satisfies the same recursion equation \eqref{ECF01} for $r=1$, with a different initial value. Thus essentially orbifold Hurwitz numbers are determined by the usual simple Hurwitz numbers. \end{rem} \begin{rem} If we define $T_d = (d-1)! {\mathcal{H}}_{0,1}^{r=1}(d)$, then \eqref{ECF01} for $r=1$ is equivalent to \eqref{tree}. This is the reason we consider the tree recursion as the spectral curve for simple and orbifold Hurwitz numbers. \end{rem} For the purpose of performing analysis on the spectral curve \eqref{r-Lambert}, let us introduce a global coordinate $z$ on the $r$-Lambert curve, which is an analytic curve of genus $0$: \begin{equation} \label{z} \begin{cases} x = x(z) := z e^{-z^r}\\ y =y(z) := z^r. \end{cases} \end{equation} We denote by $\Sigma\subset {\mathbb{C}}^2$ this parametric curve. Let us introduce the generating functions of general ${\mathcal{H}}_{g,n}^r$, which are called \emph{free energies}: \begin{equation} \label{FgnHur} F_{g,n}(x_1,\dots,x_n):= \sum_{\mu_1,\dots,\mu_n\ge 1} \frac{1}{\mu_1\cdots \mu_n} {\mathcal{H}}_{g,n}^r (\mu_1,\dots,\mu_n)\prod_{i=1}^n x_i ^{\mu_i}. \end{equation} We also define the exterior derivative \begin{equation} \label{WgnHur} W_{g,n}(x_1,\dots,x_n):= d_1\cdots d_n F_{g,n} (x_1,\dots,x_n), \end{equation} which is a symmetric $n$-linear differential form. 
By definition, we have \begin{equation} \label{F01} y=y(x) = x\frac{d}{dx} F_{0,1}(x). \end{equation} The topological recursion requires the spectral curve, $W_{0,1}$, and $W_{0,2}$. From \eqref{WgnHur} and \eqref{F01}, we have \begin{equation} \label{W01Hur} W_{0,1}(x) = y\frac{dx}{x} = y d\log(x). \end{equation} \begin{rem} For many examples of topological recursion such as the ones considered in \cite{DMSS}, we often define $W_{0,1} = ydx$, which is a holomorphic $1$-form on the spectral curve. For Hurwitz theory, due to \eqref{F01}, it is more natural to use \eqref{W01Hur}. \end{rem} As a differential equation, we can solve \eqref{F01} in a closed formula on the spectral curve $\Sigma$ of \eqref{z}. Indeed, the role of the spectral curve is that the free energies, i.e., $F_{g,n}$'s, are actually analytic functions defined on $\Sigma^n$. Although we define $F_{g,n}$'s as a formal power series in $(x_1,\dots,x_n)$ as generating functions, they are analytic, and the domain of analyticity, or the \emph{Riemann surface} in the classical sense, is the spectral curve $\Sigma$. The coordinate change \eqref{z} gives us \begin{equation} \label{xd/dx} x\frac{d}{d x} = \frac{z}{1-rz^r} \frac{d}{d z}, \end{equation} hence \eqref{F01} is equivalent to $$ z^{r-1}(1-rz^{r}) = \frac{d}{dz} F_{0,1} \big(x(z)\big). $$ Since $z=0\Longrightarrow x=0\Longrightarrow F_{0,1}(x)=0$, we find \begin{equation} \label{F01(z)} F_{0,1}\big(x(z)\big) = \frac{1}{r} z^r -{\frac{1}{2}} z^{2r}. \end{equation} The calculation of $F_{0,2}$ is done similarly, by restricting \eqref{ECF} to the $(g,n)=(0,1)$ and $(0,2)$ terms. Assuming that $\mu_1+\mu_2 = mr$, we have \begin{multline} \label{ECF02} \frac{\mu_1+\mu_2}{r}\,{\mathcal{H}}_{0,2}^r(\mu_1,\mu_2) \\ =\mu_1\mu_2 {\mathcal{H}}_{0,1}^r (\mu_1+\mu_2) + \mu_1 \sum_{\substack{\alpha+\beta=\mu_1\\\alpha,\beta>0}} {\mathcal{H}}_{0,1}^r(\alpha){\mathcal{H}}_{0,2}^r(\beta,\mu_2) + \mu_2 \sum_{\substack{\alpha+\beta=\mu_2\\\alpha,\beta>0}} {\mathcal{H}}_{0,1}^r(\alpha){\mathcal{H}}_{0,2}^r(\mu_1,\beta). \end{multline} As a special case of \cite[Lemma~4.1]{BHLM}, this equation translates into a differential equation for $F_{0,2}$: \begin{multline} \label{PDEF02x} \frac{1}{r}\left(x_1\frac{\partial}{\partial x_1} + x_2\frac{\partial}{\partial x_2}\right) F_{0,2}(x_1,x_2) \\ = \frac{1}{x_1-x_2}\left( x_1^2\frac{\partial}{\partial x_1}F_{0,1}(x_1)- x_2^2\frac{\partial}{\partial x_2}F_{0,1}(x_2) \right) - \left( x_1\frac{\partial}{\partial x_1}F_{0,1}(x_1)+ x_2\frac{\partial}{\partial x_2}F_{0,1}(x_2) \right) \\ + \left( x_1\frac{\partial}{\partial x_1}F_{0,1}(x_1) \right) \left( x_1\frac{\partial}{\partial x_1}F_{0,2}(x_1,x_2) \right) + \left( x_2\frac{\partial}{\partial x_2}F_{0,1}(x_2) \right) \left( x_2\frac{\partial}{\partial x_2}F_{0,2}(x_1,x_2) \right). \end{multline} Writing $x_i = x(z_i)$ and using \eqref{xd/dx}, \eqref{PDEF02x} becomes simply \begin{equation} \label{PDEF02z} \frac{1}{r}\left( z_1\frac{\partial}{\partial z_1}+ z_2\frac{\partial}{\partial z_2} \right) F_{0,2}\big(x(z_1),x(z_2)\big) = \frac{x_1 z_1^r-x_2 z_2^r}{x_1-x_2} -(z_1^r+z_2^r) \end{equation} on the spectral curve $\Sigma$. This is a linear partial differential equation of the first order with analytic coefficients in the neighborhood of $(0,0)\in {\mathbb{C}}^2$, hence by the Cauchy-Kovalevskaya theorem, it has a unique analytic solution around the origin of ${\mathbb{C}}^2$ for any Cauchy problem.
Since the only analytic solution to the homogeneous equation $$ \left( z_1\frac{\partial}{\partial z_1}+ z_2\frac{\partial}{\partial z_2} \right) f(z_1,z_2)=0 $$ is a constant, the initial condition $F_{0,2}(0,x_2) = F_{0,2}(x_1,0) = 0$ determines the unique solution of \eqref{PDEF02z}. \begin{prop} We have a closed formula for $F_{0,2}$ in the $z$-coordinates: \begin{equation} \label{F02z} F_{0,2}\big(x(z_1),x(z_2)\big) = \log\frac{z_1-z_2}{x(z_1)-x(z_2)} -(z_1^r+z_2^r). \end{equation} \end{prop} \begin{proof} We first note that $\log\frac{z_1-z_2}{x(z_1)-x(z_2)}$ is holomorphic around $(0,0)\in {\mathbb{C}}^2$. That \eqref{F02z} is a solution to \eqref{PDEF02z} is a straightforward calculation, which can be verified as follows: \begin{align*} &\left( z_1\frac{\partial}{\partial z_1}+ z_2\frac{\partial}{\partial z_2} \right) \log\frac{z_1-z_2}{x(z_1)-x(z_2)} \\ &= \frac{z_1-z_2}{z_1-z_2} -\frac{z_1e^{-z_1^r}(1-rz_1^r)- z_2e^{-z_2^r}(1-rz_2^r)}{x_1-x_2} \\ &= 1 -\frac{x_1-x_2}{x_1-x_2} +r\frac{x_1z_1^r-x_2z_2^r}{x_1-x_2} =r\frac{x_1z_1^r-x_2z_2^r}{x_1-x_2}. \end{align*} Since $ F_{0,2}\big(x(0),x(z_2)\big) =\log e^{z_2^r} - z_2^r = 0, $ \eqref{F02z} is the desired unique solution. \end{proof} In \cite{BHLM}, the functions \eqref{F01(z)} and \eqref{F02z} are derived by directly computing the Laplace transform of the JPT formulas \cite{JPT} \begin{equation} \label{JPT} \begin{aligned} &H_{0,1}^r(d) = \frac{d^{\lfloor \frac{d}{r}\rfloor-2}} {\lfloor \frac{d}{r}\rfloor!}, \\ &H_{0,2}^r (\mu_1,\mu_2) = \begin{cases} r^{{\langle} \frac{\mu_1}{r}{\rangle} +{\langle} \frac{\mu_2}{r} {\rangle}} \frac{1}{\mu_1+\mu_2} \frac{\mu_1^{\lfloor \frac{\mu_1}{r}\rfloor} \mu_2^{\lfloor \frac{\mu_2}{r}\rfloor} } {\lfloor \frac{\mu_1}{r}\rfloor! \lfloor \frac{\mu_2}{r}\rfloor!} \qquad \mu_1+\mu_2 \equiv 0 \mod r \\ 0\hskip1.96in \text{otherwise}. \end{cases} \end{aligned} \end{equation} Here, $q = \lfloor q\rfloor +{\langle} q{\rangle}$ gives the decomposition of a rational number $q\in {\mathbb{Q}}$ into its floor and the fractional part. We have thus recovered \eqref{JPT}, i.e., the $(0,1)$ and $(0,2)$ cases of the ELSV formula for the orbifold Hurwitz numbers, from the edge-contraction formula alone. \begin{ack} The paper is based on a series of lectures by M.M.\ at Mathematische Arbeitstagung 2015, Max-Planck-Institut f\"ur Mathematik in Bonn. The authors are grateful to the American Institute of Mathematics in California, the Banff International Research Station, the Institute for Mathematical Sciences at the National University of Singapore, Kobe University, Leibniz Universit\"at Hannover, the Lorentz Center for Mathematical Sciences, Leiden, Max-Planck-Institut f\"ur Mathematik in Bonn, and Institut Henri Poincar\'e, Paris, for their hospitality and financial support during the authors' stay for collaboration. The research of O.D.\ has been supported by GRK 1463 \emph{Analysis, Geometry, and String Theory} at Leibniz Universit\"at Hannover and MPIM. The research of M.M.\ has been supported by NSF grants DMS-1309298, DMS-1619760, DMS-1642515, and NSF-RNMS: Geometric Structures And Representation Varieties (GEAR Network, DMS-1107452, 1107263, 1107367). \end{ack} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
2,869,038,156,182
arxiv
\section{Introduction}\label{s1} Difference equations play a role in various contexts of mathematical physics (see e.g. \cite{FR} and references therein). We are interested in the application to the form factor program in the exact integrable 1+1 dimensional field theory, which was formulated in 1978 by one of the authors (M.K.) and Weisz \cite{KW}. Form factors are matrix elements of local operators ${\cal O}(x)$ $$ F(i\pi-\theta)=\langle\,}\newcommand{\ra}{\,\rangle p'\,|\,{\cal O}(0)\,|\, p\ra $$ where $p'p=m^2\cosh\theta$. Difference equations for these functions are obtained by Watson's equations \cite{Wa} \begin{equation}\label}\newcommand{\ee}{\end{equation}{1.2} F(\theta)=S(\theta)\,F(-\theta)~,~~~ F(i\pi-\theta)=F(i\pi+\theta) \ee where $S$ is the $S$-matrix. For several models these equations have been solved in \cite{KW} and many later publications (see e.g. \cite{Sm,ff} and references therein). Generalized form factors are matrix elements for many particle states. For generalized form factors Watson's equations lead typically to matrix difference equations, which can be solved by a generalized Bethe Ansatz, also called "off-shell Bethe Ansatz". The conventional Bethe Ansatz introduced by Bethe \cite{Be} is used to solve eigenvalue problems. The algebraic formulation, which is also used in this article, was worked out by Faddeev and coworkers (see e.g. \cite{TF}). The "off-shell Bethe Ansatz" has been introduced by one of the authors (H.B.) to solve the Knizhnik-Zamolodchikov equations which are differential equations. In \cite{Re} a variant of this technique has been formulated to solve matrix difference equations of the form $$ f(x_1,\dots,x_i+2,\dots,x_n)=Q(x_1,\dots,x_n,;i)\,f(x_1\dots,x_i,\dots,x_n) ~,~~(i=1,\dots,n) $$ where $f({\underline x})$ is a vector valued function and the $Q({\underline x},i)$ are matrix valued functions to be specified later. For higher rank internal symmetry groups the nested version of this Bethe Ansatz has to be applied. The nested Bethe Ansatz to solve eigenvalue problems was introduced by Yang \cite{Y} and further developed by Sutherland \cite{Su} (see also \cite{KR} for the algebraic formulation). The very interesting generalization of this technique, which is applicable to difference equations, is developed in this article for the $SU(N)$ - symmetry group. This generalization demonstrates the power of the Bethe Ansatz even more beautifully than the conventional applications. Other methods to solve such matrix difference equations have been discussed in \cite{Sm,Ma,Na,TV}. In part II we will solve the $U(N)$ case, which will be used in \cite{BFK} to solve the form factor problem for the $SU(N)$ chiral Gross-Neveu model. The article is organized as follows. In Section \ref{s2} we recall some well known results concerning the $SU(N)$ R-matrix, the monodromy matrix and some commutation rules. In Section \ref{s3} we introduce the nested generalized Bethe Ansatz to solve a system of $SU(N)$ difference equations and present the solutions in terms of "Jackson-type Integrals". The proof of the main theorem avoids the decomposition of the monodromy matrix, as used e.g. in \cite{Re}. Instead we introduce a new type of monodromy matrix fulfilling a new type of Yang-Baxter relation and which is adapted to the difference problem. In particular this yields an essential simplification of the proof of the main theorem. In Section \ref{s4} we prove the highest weight property of the solutions and calculate the weights. 
Section \ref{s5} contains some examples of solutions of the $SU(N)$ difference equations. \section{The $SU(N)$ - R-matrix}\label{s2} \setcounter{equation}{0} Let ${V^{1\dots n}}$ be the tensor product space \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.1}{V^{1\dots n}}=V_1\otimes\dots\otimes V_n\ee where the vector spaces $V_i\cong{\bf C}^N,~(i=1,\dots,n)$ are considered as fundamental (vector) representation spaces of $SU(N)$. It is straightforward to generalize the results of this paper to the case where the $V_i$ are vector spaces for other representations. We denote the canonical basis vectors by \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.2} \,|\,\alpha_1\dots\alpha_n\ra\in {V^{1\dots n}},~~(\alpha_i=1,\dots,N). \ee A vector $v^{1\dots n}\in {V^{1\dots n}}$ is given in term of its components by \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.3} v^{1\dots n}=\sum_{\underline\alpha}\,|\,\alpha_1\dots\alpha_n\ra\, v^{\alpha_1,\dots,\alpha_n}. \ee A matrix acting in ${V^{1\dots n}}$ by is denoted by \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.4}{A_{1\dots n}}~:~{V^{1\dots n}}\to{V^{1\dots n}}.\ee The $SU(N)$ spectral parameter dependent R-matrix \cite{BKKW} acts on the tensor product of two (fundamental) representation spaces of $SU(N)$. It may be written and depicted as \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.5} R_{12}(x_1-x_2)=b(x_1-x_2)\,{\bf 1}_{12}+c(x_1-x_2)\,P_{12}~~=~~ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength2.7mm \begin{picture}}\newcommand{\ep}{\end{picture}(5,4) \put(1,0){\line(1,1){4}} \put(5,0){\line(-1,1){4}} \put(0,.5){$\scriptstyle x_1$} \put(5,.5){$\scriptstyle x_2$} \ep \ea ~~~~:\,V^{12}\to V^{21}, \ee where $P_{12}$ is the permutation operator. Here and in the following we associate a variable (spectral parameter) $x_i\in{\bf C}$ to each space $V_i$ which is graphically represented by a line labeled by $x_i$ (or simply by $i$). The components of the R-matrix are \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.6} R_{\alpha\beta}^{\delta\gamma}(x_1-x_2)= \delta_{\alpha\gamma}\delta_{\beta\delta}\,b(x_1-x_2)+ \delta_{\alpha\delta}\delta_{\beta\gamma}\,c(x_1-x_2)~~=~ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength2mm \begin{picture}}\newcommand{\ep}{\end{picture}(6,6) \put(1,1){\line(1,1){4}} \put(5,1){\line(-1,1){4}} \put(.5,.1){$\scriptstyle\alpha$} \put(5,.1){$\scriptstyle\beta$} \put(5,5.3){$\scriptstyle\gamma$} \put(.5,5.3){$\scriptstyle\delta$} \put(0,2){$\scriptstyle x_1$} \put(4.8,2){$\scriptstyle x_2$} \ep \ea~, \ee and the functions \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.7} b(x)=\frac x{x-2/N},~~c(x)=\frac{-2/N}{x-2/N} \ee are obtained as the rational solution of the Yang-Baxter equation $$R_{12}(x_1-x_2)\,R_{13}(x_1-x_3)\,R_{23}(x_2-x_3) =R_{23}(x_2-x_3)\,R_{13}(x_1-x_3)\,R_{12}(x_1-x_2), $$ where we have employed the usual notation \cite{Y}. This relation is depicted as $$ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength4mm \begin{picture}}\newcommand{\ep}{\end{picture}(9,4) \put(0,1){\line(1,1){3}} \put(0,3){\line(1,-1){3}} \put(2,0){\line(0,1){4}} \put(4.3,2){$=$} \put(6,0){\line(1,1){3}} \put(6,4){\line(1,-1){3}} \put(7,0){\line(0,1){4}} \put(.2,.5){$\scriptstyle 1$} \put(1.3,0){$\scriptstyle 2$} \put(3,.2){$\scriptstyle 3$} \put(5.5,.2){$\scriptstyle 1$} \put(7.2,0){$\scriptstyle 2$} \put(8.4,.4){$\scriptstyle 3$} \ep~~. 
\ea $$ The "unitarity" of the R-matrix reads and may depicted as $$R_{21}(x_2-x_1)\,R_{12}(x_1-x_2)=1~:~~~~~ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength3mm \begin{picture}}\newcommand{\ep}{\end{picture}(9,4) \put(1,0){\line(1,1){2}} \put(3,0){\line(-1,1){2}} \put(1,2){\line(1,1){2}} \put(3,2){\line(-1,1){2}} \put(7,0){\line(0,1){4}} \put(9,0){\line(0,1){4}} \put(4.5,1.7){$=$} \put(.2,0){$\scriptstyle 1$} \put(3.2,0){$\scriptstyle 2$} \put(6.2,0){$\scriptstyle 1$} \put(8.2,0){$\scriptstyle 2$} \ep \ea~. $$ As usual we define the monodromy matrix (with ${\underline x}=x_1,\dots,x_n$) \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.8} \T{0}({\underline x},x_0)=R_{10}(x_1-x_0)\,R_{20}(x_2-x_0)\dots R_{n0}(x_n-x_0)= \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength2.5mm \begin{picture}}\newcommand{\ep}{\end{picture}(10,4) \put(0,2){\line(1,0){10}} \put(2,0){\line(0,1){4}} \put(4,0){\line(0,1){4}} \put(8,0){\line(0,1){4}} \put(1,0){$\scriptstyle 1$} \put(3,0){$\scriptstyle 2$} \put(7,0){$\scriptstyle n$} \put(9,1){$\scriptstyle 0$} \put(5,1){$\scriptstyle\dots$} \ep \ea \ee as a matrix acting in the tensor product of the "quantum space" ${V^{1\dots n}}$ and the "auxiliary space" $V_0$ (all $V_i\cong{\bf C}^N$). The Yang-Baxter algebra relations \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.9} \T{a}({\underline x},x_a)\,\T{b}({\underline x},x_b)\,R_{ab}(x_a-x_b) =R_{ab}(x_a-x_b)\,\T{b}({\underline x},x_b)\,\T{a}({\underline x},x_a) \ee imply the basic algebraic properties of the sub-matrices w.r.t the auxiliary space defined by \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.10} {T_{1\dots n}}^\alpha_\beta({\underline x},x)\equiv \left(\matrix{{A_{1\dots n}}({\underline x},x)&{B_{1\dots n}}_\beta({\underline x},x)\cr {C_{1\dots n}}^\alpha({\underline x},x)&{D_{1\dots n}}^\alpha_\beta({\underline x},x)}\right)~. \ee The indices $\alpha,\beta$ on the left hand side run from 1 to $N$ and on the right hand side from 2 to $N$. The commutation rules which we will need later are \begin{equation}\label}\newcommand{\ee}{\end{equation}{2.11} {B_{1\dots n}}_\alpha({\underline x},x')\,{B_{1\dots n}}_\beta({\underline x},x) ={B_{1\dots n}}_{\beta'}({\underline x},x)\,{B_{1\dots n}}_{\alpha'}({\underline x},x')\, R^{\alpha'\beta'}_{\beta\alpha}(x-x'), \ee \begin{eqnarray}\label}\newcommand{\ean}{\end{eqnarray}{2.12}\nonumber {A_{1\dots n}}({\underline x},x')\,{B_{1\dots n}}_\beta({\underline x},x) &=&\frac1{b(x'-x)}\,{B_{1\dots n}}_\beta({\underline x},x)\,{A_{1\dots n}}({\underline x},x')\\ &&-\frac{c(x'-x)}{b(x'-x)}\,{B_{1\dots n}}_\beta({\underline x},x')\,{A_{1\dots n}}({\underline x},x) \ean and \begin{eqnarray}\label}\newcommand{\ean}{\end{eqnarray}{2.13}\nonumber {D_{1\dots n}}_\gamma^{\gamma'}({\underline x},x')\,{B_{1\dots n}}_\beta({\underline x},x) &=&\frac1{b(x-x')}\,{B_{1\dots n}}_{\beta'}({\underline x},x)\,{D_{1\dots n}}^{\gamma'}_{\gamma''}({\underline x},x')\, R^{\gamma''\beta'}_{\beta\gamma}(x-x')\\ &&-\frac{c(x-x')}{b(x-x')}\, {B_{1\dots n}}_\gamma({\underline x},x')\,{D_{1\dots n}}^{\gamma'}_\beta({\underline x},x). 
\ean \section{The $SU(N)$ - difference equation}\label{s3} \setcounter{equation}{0} Let $$ {f^{1\dots n}}({\underline x})=~~ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength4mm \begin{picture}}\newcommand{\ep}{\end{picture}(4,3) \put(2,1){\oval(4,2)} \put(2,1){\makebox(0,0){$f$}} \put(1,2){\line(0,1){1}} \put(3,2){\line(0,1){1}} \put(0,2.5){$\scriptstyle x_1$} \put(3.2,2.5){$\scriptstyle x_n$} \put(1.4,2.5){$\dots$} \ep \ea ~~\in {V^{1\dots n}} $$ be a vector valued function of ${\underline x}=x_1,\dots,x_n$ with values in ${V^{1\dots n}}$. The components of this vector are denoted by $$f^{\alpha_1\dots\alpha_n}({\underline x})~,~~(1\le\alpha_i\le N).$$ \begin{cond}\label{cond} The following symmetry and periodicity conditions of the vector valued function ${f^{1\dots n}}({\underline x})$ are supposed to be valid: \begin{itemize} \item[\rm(i)] The symmetry property under the exchange of two neighboring spaces $V_i$ and $V_j$ and the variables $x_i$ and $x_j$, at the same time, is given by \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.1} f^{\dots ji\dots}(\dots,x_j,x_i,\dots)= R_{ij}(x_i-x_j)\,f^{\dots ij\dots}(\dots,x_i,x_j,\dots). \ee \item[\rm(ii)] The {\bf system of matrix difference equations} holds \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.2} \fbox{\rule[-3mm]{0cm}{8mm} $ {f^{1\dots n}}(\dots,x_i+2,\dots)=Q_{1\dots n}({\underline x};i)\,{f^{1\dots n}}(\dots,x_i,\dots) ~,~~(i=1,\dots,n)$ } \ee where the matrices $Q_{1\dots n}({\underline x};i)\in End({V^{1\dots n}})$ are defined by \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.3} Q_{1\dots n}({\underline x};i)=R_{i+1i}(x_{i+1}-x_i')\dots R_{ni}(x_n-x_i')\, R_{1i}(x_1-x_i)\dots R_{i-1i}(x_{i-1}-x_i) \ee with $x'_i=x_i+2$. \end{itemize} \end{cond} The Yang-Baxter equations for the R-matrix guarantee that these conditions are compatible. The shift of $2$ in eq.~(\ref{3.2}) could be replaced by an arbitrary $\kappa$. For the application to the form factor problem, however, it is fixed to $2$ because of crossing symmetry. 
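Numerically, the compatibility just mentioned is easy to illustrate. The following sketch is only an illustration for small $N$ and $n$ (all function and variable names are ours): it builds the R-matrix of eqs.~(\ref{2.5})--(\ref{2.7}), verifies the Yang-Baxter equation and unitarity, assembles the matrices $Q_{1\dots n}({\underline x};i)$ of eq.~(\ref{3.3}), and confirms that shifting $x_i$ and $x_j$ by $2$ in either order gives the same operator product, which is the consistency required for the system (\ref{3.2}).
\begin{verbatim}
# Numerical sketch: R-matrix (2.5)-(2.7), Yang-Baxter equation, unitarity,
# and compatibility of the shift matrices Q of eq. (3.3).  Illustration only.
import numpy as np

N, n = 3, 3                          # SU(N), n quantum spaces
Id2 = np.eye(N * N)
P = np.zeros((N * N, N * N))         # permutation operator on C^N (x) C^N
for a in range(N):
    for b in range(N):
        P[a * N + b, b * N + a] = 1.0

def R(x):
    """R(x) = b(x)*1 + c(x)*P with b, c as in eq. (2.7)."""
    return (x * Id2 - (2.0 / N) * P) / (x - 2.0 / N)

def embed(M, i, j, m):
    """Operator M on V_i (x) V_j, extended by the identity to m spaces."""
    T = M.reshape(N, N, N, N)        # T[ai, aj, bi, bj] = <ai aj| M |bi bj>
    D = N ** m
    out = np.zeros((D, D))
    for col in range(D):
        ket = list(np.unravel_index(col, [N] * m))
        for ai in range(N):
            for aj in range(N):
                amp = T[ai, aj, ket[i], ket[j]]
                if amp:
                    bra = ket.copy(); bra[i], bra[j] = ai, aj
                    out[np.ravel_multi_index(bra, [N] * m), col] += amp
    return out

x = [0.31, -0.77, 1.23]              # generic spectral parameters

# Yang-Baxter equation and unitarity
R12 = embed(R(x[0] - x[1]), 0, 1, 3)
R13 = embed(R(x[0] - x[2]), 0, 2, 3)
R23 = embed(R(x[1] - x[2]), 1, 2, 3)
print(np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12))        # True
print(np.allclose(R(x[1] - x[0]) @ R(x[0] - x[1]), Id2))    # True

def Q(xs, i):
    """Q_{1..n}(x; i) of eq. (3.3), with x'_i = x_i + 2 (index i is 0-based)."""
    op = np.eye(N ** n)
    for k in range(i + 1, n):        # R_{k i}(x_k - x'_i)
        op = op @ embed(R(xs[k] - (xs[i] + 2.0)), k, i, n)
    for k in range(i):               # R_{k i}(x_k - x_i)
        op = op @ embed(R(xs[k] - xs[i]), k, i, n)
    return op

# Compatibility of the difference equations (3.2): shifting x_i and x_j
# by 2 in either order gives the same result.
i, j = 0, 2
xi = list(x); xi[i] += 2.0           # x with x_i -> x_i + 2
xj = list(x); xj[j] += 2.0           # x with x_j -> x_j + 2
print(np.allclose(Q(xj, i) @ Q(x, j), Q(xi, j) @ Q(x, i)))  # True
\end{verbatim}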
Conditions \ref{cond} (i) and (ii) may be depicted as \begin{eqnarray*}}\newcommand{\eea}{\end{eqnarray*} {\rm(i)}&&~~~~~~ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength3mm \begin{picture}}\newcommand{\ep}{\end{picture}(8,4) \put(4,1){\oval(8,2)} \put(4,1){\makebox(0,0){$f$}} \put(1,2){\line(0,1){2}} \put(3,2){\line(0,1){2}} \put(5,2){\line(0,1){2}} \put(7,2){\line(0,1){2}} \put(1.5,2.5){$\scriptstyle\dots$} \put(5.5,2.5){$\scriptstyle\dots$} \ep \ea ~~=~~ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength3mm \begin{picture}}\newcommand{\ep}{\end{picture}(8,4) \put(4,1){\oval(8,2)} \put(4,1){\makebox(0,0){$f$}} \put(1,2){\line(0,1){2}} \put(3,2){\line(1,1){2}} \put(5,2){\line(-1,1){2}} \put(7,2){\line(0,1){2}} \put(1.5,2.5){$\scriptstyle\dots$} \put(5.5,2.5){$\scriptstyle\dots$} \ep~, \ea\\ {\rm(ii)}&&~~~~~~ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength3mm \begin{picture}}\newcommand{\ep}{\end{picture}(8,5) \put(4,2){\oval(7,2)} \put(4,2){\makebox(0,0){$f$}} \put(2,3){\line(0,1){2}} \put(4,3){\line(0,1){2}} \put(6,3){\line(0,1){2}} \put(2.5,3.5){$\scriptstyle\dots$} \put(4.5,3.5){$\scriptstyle\dots$} \ep \ea ~~=~~ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength3mm \begin{picture}}\newcommand{\ep}{\end{picture}(8,5) \put(4,2){\oval(7,2)} \put(4,2){\makebox(0,0){$f$}} \put(2,3){\line(0,1){2}} \put(6,3){\line(0,1){2}} \put(2.5,3.5){$\scriptstyle\dots$} \put(4.5,3.5){$\scriptstyle\dots$} \put(2,3){\oval(4,2)[t]} \put(4,3){\oval(8,6)[b]} \put(6,3){\oval(4,2)[tr]} \put(6,5){\oval(4,2)[bl]} \ep \ea \eea with the graphical rule that a line changing the "time direction" changes the spectral parameters $x\to x\pm 1$ as follows $$ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength3mm \begin{picture}}\newcommand{\ep}{\end{picture}(12,2) \put(2,0){\oval(2,4)[t]} \put(9,2){\oval(2,4)[b]} \put(0,0){$\scriptstyle x$} \put(3.3,0){$\scriptstyle x-1$} \put(7,1){$\scriptstyle x$} \put(10.3,1){$\scriptstyle x+1$} \ep~. \ea $$ The $Q_{1\dots n}({\underline x};i)$ fulfill the commutation rules \eqa{3.4}{ Q_{1\dots n}(\dots x_i\dots x_j+2\dots;i)\, Q_{1\dots n}(\dots x_i\dots x_j\dots;j)~~~~~~~~~~~~\\ =Q_{1\dots n}(\dots x_i+2\dots x_j\dots;j)\, Q_{1\dots n}(\dots x_i\dots x_j\dots;i). } The following Proposition is obvious \begin{prop}\label{p3.1} Let the vector valued function ${f^{1\dots n}}({\underline x})\in{V^{1\dots n}}$ fulfill Condition \ref{cond} (i), then Conditions \ref{cond} (ii) for all $i=1,\dots,n$ are equivalent to each other and also equivalent to the following periodicity property under cyclic permutation of the spaces and the variables \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.5} f^{12\dots n}(x_1,x_2,\dots,x_n+2)=f^{n1\dots n-1}(x_n,x_1,\dots,x_{n-1}). \ee \end{prop} \begin{rema} The equations (\ref{3.1},\ref{3.5}) imply Watson's (\ref{1.2}) equations for the form factors \cite{BFK}. \end{rema} For later convenience we write the matrices \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.6} Q_{1\dots n}({\underline x};i)={\rm tr}_0\,\Tq{0}({\underline x};i) \ee as the trace of of a new type of monodromy matrices where to the horizontal line two different spectral parameters are associated, namely one to the right hand side and the other one to the left hand side. However, both are related to a spectral parameter of one of the vertical lines. 
This new monodromy matrix is given by the following \begin{defi} For $i=1,\dots,n$ \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.7} \Tq{0}({\underline x};i)=R_{10}(x_1-x_i)\dots R_{i-10}(x_{i-1}-x_i)\,P_{i0}\, R_{i+10}(x_{i+1}-x'_i)\dots R_{n0}(x_n-x'_i) \ee $$~~=~~ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength10mm \begin{picture}}\newcommand{\ep}{\end{picture}(6,1) \put(1,0){\line(0,1){1}} \put(2.5,0){\line(0,1){1}} \put(3.5,0){\line(0,1){1}} \put(5,0){\line(0,1){1}} \put(0,0){\oval(6,1)[tr]} \put(6,1){\oval(6,1)[bl]} \put(.7,0){$\scriptstyle x_1$} \put(2.7,0){$\scriptstyle x_i$} \put(2.7,.9){$\scriptstyle x'_i$} \put(4.62,0){$\scriptstyle x_n$} \put(.5,.6){$\scriptstyle x_i$} \put(5.5,.65){$\scriptstyle x'_i$} \put(1.5,.7){$\dots$} \put(4,.7){$\dots$} \ep \ea $$ with $x'_i=x_i+2$. \end{defi} Note that for $i=n$ one has simply $\Tq{0}({\underline x};n) =\T{0}({\underline x},x_n)$ since $R(0)$ is the permutation operator $P$. The new type of monodromy matrix fulfills a new type of Yang-Baxter relation. Instead of eq.~(\ref{2.9}) we have for $i=1,\dots,n$ \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.8} \Tq{a}({\underline x};i)\,\T{b}({\underline x},u)\,R_{ab}(x'_i-u) =R_{ab}(x_i-u)\,\T{b}({\underline x}',u)\,\Tq{a}({\underline x};i) \ee with ${\underline x}'=x_1,\dots,x'_i,\dots,x_n$ and $x'_i=x_i+2$. This relation follows from the Yang-Baxter equation for the R-matrix and the obvious relation for the permutations operator $P$ $$ P_{ia}\,R_{ib}(x_i-u)\,R_{ab}(x'_i-u)=R_{ab}(x_i-u)\,R_{ib}(x'_i-u)\,P_{ia}. $$ Correspondingly to eq.~(\ref{2.10}) we introduce (suppressing the indices $1\dots n$) \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.9} {T^Q}^\alpha_\beta({\underline x};i)\equiv \left(\matrix{A^Q({\underline x};i)&{B^Q}_\beta({\underline x};i)\cr {C^Q}^\alpha({\underline x};i)&{D^Q}^\alpha_\beta({\underline x};i)}\right)~. \ee with the commutation rules with respect to the usual $A,B,C,D$ \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.10} A^Q({\underline x};i)\,B_b({\underline x},u)=\frac1{b(x_i'-u)}\,B_b({\underline x}',u)\,A^Q({\underline x};i) -\frac{c(x_i'-u)}{b(x_i'-u)}\,{B^Q}_b({\underline x};i)\,A({\underline x},u), \ee \begin{eqnarray}\label}\newcommand{\ean}{\end{eqnarray}{3.11}\nonumber {D^Q}_a({\underline x};i)\,B_b({\underline x},u) &=&\frac1{b(u-x_i')}\,B_b({\underline x}',u)\,{D^Q}_a({\underline x};i)\,R_{ba}(u-x'_i)\\ &&-\frac{c(u-x_i)}{b(u-x_i)}\,{B^Q}_b({\underline x};i)\,D_a({\underline x},u)\,P_{ab}. \ean The system of difference equations (\ref{3.2}) can be solved by means of a generalized ("off-shell") nested Bethe Ansatz. 
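Before writing down the first level, the new Yang-Baxter relation (\ref{3.8}) can also be checked numerically. The following sketch is again only an illustration and reuses the helpers \texttt{N}, \texttt{R}, \texttt{P} and \texttt{embed} (and \texttt{numpy} as \texttt{np}) from the sketch given after Condition \ref{cond}; the quantum spaces are the sites $0,\dots,n-1$ and the two auxiliary spaces $a$, $b$ are appended at the end.
\begin{verbatim}
# Numerical check of the modified Yang-Baxter relation (3.8); reuses
# N, R(x), P and embed(...) from the previous sketch.  Illustration only.
n_q = 2                              # number of quantum spaces
a, b = n_q, n_q + 1                  # positions of the auxiliary spaces
m = n_q + 2
x = [0.4, -1.1]                      # quantum spectral parameters
u = 0.9                              # auxiliary spectral parameter
i = 0                                # distinguished space of T^Q (0-based)
xp = x[i] + 2.0
x_shift = list(x); x_shift[i] = xp   # underline{x}' of eq. (3.8)

def T(aux, xs, u0):
    """Monodromy matrix (2.8): R_{1,aux}(x_1-u0) ... R_{n,aux}(x_n-u0)."""
    op = np.eye(N ** m)
    for k in range(n_q):
        op = op @ embed(R(xs[k] - u0), k, aux, m)
    return op

def TQ(aux, xs, i):
    """Monodromy matrix (3.7): the permutation P is inserted at space i."""
    op = np.eye(N ** m)
    for k in range(i):
        op = op @ embed(R(xs[k] - xs[i]), k, aux, m)
    op = op @ embed(P, i, aux, m)
    for k in range(i + 1, n_q):
        op = op @ embed(R(xs[k] - (xs[i] + 2.0)), k, aux, m)
    return op

Rab = lambda v: embed(R(v), a, b, m)
lhs = TQ(a, x, i) @ T(b, x, u) @ Rab(xp - u)
rhs = Rab(x[i] - u) @ T(b, x_shift, u) @ TQ(a, x, i)
print(np.allclose(lhs, rhs))         # True
\end{verbatim}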
The first level is given by the \begin{beth}\label{be} \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.12} {f^{1\dots n}}({\underline x})=\sum_{\underline u}~{B_{1\dots n}}_{\beta_m}({\underline x},u_m)\dots{B_{1\dots n}}_{\beta_1}({\underline x},u_1)\, {\Omega^{1\dots n}}~g^{\beta_1\dots\beta_m}({\underline x},{\underline u}) \ee $$ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength4mm \begin{picture}}\newcommand{\ep}{\end{picture}(4,3) \put(2,1){\oval(4,2)} \put(2,1){\makebox(0,0){$f$}} \put(1,2){\line(0,1){1}} \put(3,2){\line(0,1){1}} \put(0,2.5){$\scriptstyle x_1$} \put(3.2,2.5){$\scriptstyle x_n$} \put(1.5,2.5){$\scriptstyle\dots$} \ep \ea ~~=\sum_{\underline u}~~\begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength4mm \begin{picture}}\newcommand{\ep}{\end{picture}(10,6) \put(8,1){\oval(4,2)} \put(8,1){\makebox(0,0){$g$}} \put(1,2){\oval(12,2)[tr]} \put(1,2){\oval(16,6)[tr]} \put(2,2){\line(0,1){4}} \put(3,2.5){$\scriptstyle \dots$} \put(3,5.5){$\scriptstyle \dots$} \put(7.5,2.5){$\scriptstyle \dots$} \put(1.2,3.5){$\scriptstyle \vdots$} \put(5,2){\line(0,1){4}} \put(1,5.5){$\scriptstyle x_1$} \put(5.2,5.5){$\scriptstyle x_n$} \put(6,3.2){$\scriptstyle u_1$} \put(9.2,3.2){$\scriptstyle u_m$} \put(1.8,1.2){$\scriptstyle 1$} \put(4.8,1.2){$\scriptstyle 1$} \put(.3,2.8){$\scriptstyle 1$} \put(.3,4.8){$\scriptstyle 1$} \ep \ea $$ where summation over $\beta_1,\dots,\beta_m$ is assumed and ${\Omega^{1\dots n}}\in{V^{1\dots n}}$ is the reference state defined by ${C_{1\dots n}}^\beta\,{\Omega^{1\dots n}}=0$ for $1<\beta\le N$. The summation over ${\underline u}$ is specified by \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.13} {\underline u}=(u_1,\dots,u_m)=(\tilde u_1-2l_1,\dots,\tilde u_m-2l_m) ~,~~~l_i\in{\bf Z}, \ee where the $\tilde u_i$ are arbitrary constants. \end{beth} The reference state is \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.14}{\Omega^{1\dots n}}=\,|\, 1\dots 1\ra,\ee a basis vector with components $\Omega^{\alpha_1\dots\alpha_n}=\prod_{i=1}^n\delta_{\alpha_i1}$. It is an eigenstate of ${A_{1\dots n}}$ and ${D_{1\dots n}}$ \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.15} {A_{1\dots n}}({\underline x},u){\Omega^{1\dots n}}={\Omega^{1\dots n}}~,~~ {D_{1\dots n}}^\alpha_\beta({\underline x},u){\Omega^{1\dots n}}={\Omega^{1\dots n}}\,\delta_{\alpha\beta}\prod_{i=1}^nb(x_i-u). \ee The sums (\ref{3.12}) are also called "Jackson-type Integrals" (see e.g. \cite{Re} and references therein). Note that the summations over $\beta_i$ run only over $1<\beta_i\le N$. Therefore the $g^{\beta_1\dots\beta_m}$ are the components of a vector $g^{1\dots m}$ in the tensor product of smaller spaces ${V^{(1)}}^{1\dots m}=V^{(1)}_1\otimes\dots\otimes V^{(1)}_m$ with $V^{(1)}_{i}\cong{\bf C}^{N-1}$. \begin{defi}\label{d3.2} Let the vector valued function ${{f^{(1)}}^{1\dots m}}({\underline u})\in{{V^{(1)}}^{1\dots m}}$ be given by \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.16} g^{1\dots m}({\underline x},{\underline u})=\prod_{i=1}^n\prod_{j=1}^m\psi(x_i-u_j)\, \prod_{1\le i<j\le m}\tau(u_i-u_j)\,{{f^{(1)}}^{1\dots m}}({\underline u}) \ee where the functions $\psi(x)$ and $\tau(x)$ fulfill the functional equations \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.17} b(x)\,\psi(x)=\psi(x-2)~,~~~ \frac{\tau(x)}{b(x)}=\frac{\tau(x-2)}{b(2-x)}. 
\ee \end{defi} Using the definition of $b(x)$ (\ref{2.7}) we get the solutions of eqs.~(\ref{3.17}) \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.18} \psi(x)=\frac{\Gamma(1-\frac1N+\frac x2)}{\Gamma(1+\frac x2)} ,~~~\tau(x)=\frac{x\,\Gamma(\frac1N+\frac x2)} {\Gamma(1-\frac1N+\frac x2)} \ee where the general solutions are obtained by multiplication with arbitrary periodic functions with period 2. Just as $g^{1\dots m}({\underline x},{\underline u})$ also the vector valued function ${{f^{(1)}}^{1\dots m}}({\underline u})$ is an element of the tensor product of the smaller spaces $V_i\cong{\bf C}^N$ $${{f^{(1)}}^{1\dots m}}({\underline u})\in{{V^{(1)}}^{1\dots m}}.$$ We say ${{f^{(1)}}^{1\dots m}}({\underline u})$ fulfills Conditions \ref{cond} (i)$^{(1)}$ and (ii)$^{(1)}$ if eqs.~(\ref{3.1}) and (\ref{3.2}) hold in this space. We are now in a position to formulate the main theorem of this paper. \begin{theo}\label{t3.1} Let the vector valued function ${f^{1\dots n}}({\underline x})$ be given by the Bethe Ansatz \ref{be} and let $g^{1\dots m}({\underline x},{\underline u})$ be of the form of Definition \ref{d3.2}. If in addition the vector valued function ${{f^{(1)}}^{1\dots m}}({\underline u})\in{{V^{(1)}}^{1\dots m}}$ fulfills the Conditions \ref{cond} {\rm(i)$^{(1)}$ and (ii)$^{(1)}$}, then also ${f^{1\dots n}}({\underline x})\in{V^{1\dots n}}$ fulfills the Conditions \ref{cond} {\rm(i) and (ii)}, i.e. ${f^{1\dots n}}({\underline x})$ is a solution of the set of difference equations (\ref{3.2}). \end{theo} \begin{rema} For $SU(2)$ (see e.g.\cite{Re}) the problem is already solved by \ref{3.16} since then $f^{(1)}$ is a constant. \end{rema} {\bf Proof:} Condition \ref{cond} (i) follows directly from the definition and the normalization of the R-matrix (\ref{2.5}) $$R_{ij}(x_i-x_j)\,\Omega^{\dots ij\dots}=\Omega^{\dots ij\dots},$$ the symmetry of $g^{1\dots m}({\underline x},{\underline u})$ given by eq.~(\ref{3.16}) under the exchange of $x_1,\dots,x_n$ and $$ B_{\dots ji\dots,\beta}(\dots x_j,x_i\dots,u)\,R_{ij}(x_i-x_j)= R_{ij}(x_i-x_j)\,B_{\dots ij\dots,\beta}(\dots x_i,x_j\dots,u) $$ which is a consequence of the Yang-Baxter relations for the R-matrix. Because of Proposition \ref{p3.1} it is sufficient to prove Condition \ref{cond} (ii) only for $i=n$ $$ Q({\underline x};n)\,f({\underline x})= {\rm tr}_a\,T^Q_a({\underline x};n)\,f({\underline x})=f({\underline x}')~,~~({\underline x}'=x_1,\dots,x'_n=x_n+2). $$ where the indices $1\dots n$ have been suppressed. For the first step we apply a technique quite analogous to that used for the conventional algebraic Bethe Ansatz which solves eigenvalue problems. We apply the trace of $T^Q_a({\underline x};n)$ to the vector $f({\underline x})$ as given by eq.~(\ref{3.12}) and push $A^Q({\underline x};n)$ and $D^Q_a({\underline x};n)$ through all the $B$'s using the commutation rules (\ref{3.10}) and (\ref{3.11}). 
Again with ${\underline x}'=x_1,\dots,x'_n=x_n+2$ we obtain \begin{eqnarray*}}\newcommand{\eea}{\end{eqnarray*} A^Q({\underline x};n)\,B_{b_m}({\underline x},u_m)\dots B_{b_1}({\underline x},u_1) =B_{b_m}({\underline x}',u_m)\dots B_{b_1}({\underline x}',u_1) \prod_{j=1}^m\frac1{b(x'_n-u_j)}~~~~~~\\\times A^Q({\underline x};n)+{\rm uw}_A \eea \begin{eqnarray*}}\newcommand{\eea}{\end{eqnarray*} D^Q_a({\underline x};n)\,B_{b_m}({\underline x},u_m)\dots B_{b_1}({\underline x},u_1) =B_{b_m}({\underline x}',u_m)\dots B_{b_1}({\underline x}',u_1) \prod_{j=1}^m\frac1{b(u_j-x_n)}~~~~~~\\ \times D^Q_a({\underline x};n)\,R_{b_1a}(u_1-x'_n)\dots R_{b_ma}(u_m-x'_n)+{\rm uw}_{D_a} \eea The "wanted terms" written explicitly originate from the first term in the commutations rules (\ref{3.10}) and (\ref{3.11}); all other contributions yield the "unwanted terms". If we insert these equations into the representation (\ref{3.12}) of $f({\underline x})$ we find that the wanted contribution from $A^Q$ already gives the desired result. The wanted contribution from $D^Q$ applied to $\Omega$ gives zero. The unwanted contributions can be written as as difference which vanishes after summation over the $u$'s. These three facts can be seen as follows. We have $$A^Q({\underline x};n)\,\Omega=\Omega~, ~~~D^Q_a({\underline x};n)\,\Omega=0$$ which follow from eq.~(\ref{3.15}) since $T^Q({\underline x};n)=T({\underline x},x_n)$ and $b(0)=0$. The defining relation of $\psi(x)$ (\ref{3.17}) implies that the wanted term from $A$ yields $f({\underline x}')$. The commutation relations (\ref{3.10}), (\ref{3.11}), (\ref{2.12}) and (\ref{2.13}) imply that the unwanted terms are proportional to a product of $B$-operators, where exactly one $B_{b_j}({\underline x},u_j)$ is replaced by $B^Q_{b_j}({\underline x};n)$. Because of the commutation relations of the $B$'s (\ref{2.11}) and the symmetry property given by Condition \ref{cond} (i)$^{(1)}$ of $g^{1\dots m}({\underline x},{\underline u})$ it is sufficient to consider only the unwanted terms for $j=m$ denoted by uw$_A^m$ and uw$_D^m$. They come from the second term in (\ref{3.10}) if $A^Q({\underline x};n)$ is commuted with $B_{b_m}({\underline x},u_m)$ and then the resulting $A({\underline x},u_m)$ pushed through the other $B$'s taking only the first terms in (\ref{2.12}) into account and correspondingly for $D^Q_a({\underline x};u_m)$. $$ {\rm uw}_A^m=-\frac{c(x'_n-u_m)}{b(x'_n-u_m)}\,B^Q_{b_m}({\underline x};m)\dots B_{b_1}({\underline x},u_1)\prod_{j<m}\frac1{b(u_m-u_j)}\,A({\underline x},u_m) $$ $$ {\rm uw}_{D_a}^m=-\frac{c(u_m-x_n)}{b(u_m-x_n)}\,B^Q_{b_m}({\underline x};m)\dots B_{b_1}({\underline x},u_1)\prod_{j<m}\frac1{b(u_j-u_m)}\,D_a({\underline x},u_m) \,T^{Q(1)}_{b_1\dots b_m,a}({\underline u};m) $$ where $T^{Q(1)}$ is the new type of monodromy matrix \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.18a} T^{Q(1)}_{b_1\dots b_m,a}({\underline u};m)= R_{b_1a}(u_1-u_m)\dots R_{b_{m-1}a}(u_{m-1}-u_m)\,P_{b_ma} \ee analogous to (\ref{3.6}) whose trace over the auxiliary space $V^{(1)}_a$ yields the shift operator $Q^{(1)}({\underline u};m)$. 
With $D_a({\underline x},u_m)\Omega={\bf 1}_a\,\prod_{i=1}^n b(x_i-u_m)\,\Omega$ (see (\ref{3.15})), by the assumption $$ Q^{(1)}({\underline u};m)\,f^{(1)}({\underline u})=f^{(1)}({\underline u}')~,~~~ ({\underline u}'=u_1,\dots,u'_m=u_m+2) $$ and the defining relations (\ref{3.17}) of $\psi(x)$ and $\tau(x)$, we obtain $$ {\rm tr}_a\,{\rm uw}_{D_a}^m({\underline u})\,\Omega\,g({\underline x},{\underline u})= -{\rm uw}_A^m({\underline u}')\,\Omega\,g({\underline x},{\underline u}') $$ where $c(-x)/b(-x)=-c(x)/b(x)$ has been used. Therefore the sum of all unwanted terms yield a difference analog of a total differential which vanishes after summation over the $u$'s. Iterating Theorem \ref{t3.1} we obtain the nested generalized Bethe Ansatz with levels $k=1,\dots,N-1$. The Ansatz of level $k$ reads \begin{eqnarray}\label}\newcommand{\ean}{\end{eqnarray}{3.19} \lefteqn{{f^{(k-1)}}^{1\dots n_{k-1}}\left({\underline x}^{(k-1)}\right) =\sum_{{\underline x}^{(k)}}\,B^{(k-1)}_{1\dots n_{k-1}\beta_{n_k}} \left({\underline x}^{(k-1)},x^{(k)}_{n_k}\right)\dots}\\ &&~~~\dots B^{(k-1)}_{1\dots n_{k-1}\beta_1}\left({\underline x}^{(k-1)},x^{(k)}_1 \right)\,{\Omega^{(k-1)}}^{1\dots n_{k-1}}~ {g^{(k-1)}}^{\beta_1\dots\beta_{n_k}}\left({\underline x}^{(k-1)},{\underline x}^{(k)}\right) \nonumber \ean The functions $f^{(k)}$ and $g^{(k)}$ are vectors with $$ {f^{(k)}}^{1\dots n_k},{g^{(k-1)}}^{1\dots n_k}\in {V^{(k)}}^{1\dots n_k} =V^{(k)}_1\otimes\dots\otimes V^{(k)}_{n_k} ~,~~~~(V^{(k)}_{i}\cong{\bf C}^{N-k}). $$ The basis vectors of these spaces are $\,|\,\alpha_1\dots\alpha_{n_k}\ra^{(k)} \in{V^{(k)}}^{1\dots n_k}$ and $k<\alpha_i\le N$. Analogously to Definition \ref{d3.2} we write \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.20} {g^{(k-1)}}^{1\dots n_k}({\underline x}^{(k-1)},{\underline x}^{(k)}) =\prod_{i=1}^{n_{k-1}}\prod_{j=1}^{n_k}\psi(x_i^{(k-1)}-x_j^{(k)})\, \prod_{1\le i<j\le n_k}\tau(x^{(k)}_i-x^{(k)}_j)\, {f^{(k)}}^{1\dots n_k}({\underline x}^{(k)}) \ee where the functions $\psi(x)$ and $\tau(x)$ fulfill the functional equations (\ref{3.17}) with the solutions (\ref{3.18}). Then the start of the iteration is given by a $k_{max}\le N$ with \begin{equation}\label}\newcommand{\ee}{\end{equation}{3.21} {f^{(k_{max}-1)}}^{1\dots n_{n_{k_{max}-1}}}=\,|\, k_{max}\dots k_{max}\ra~,~~~ {\rm and}~~n_k=0~~{\rm for}~k\ge k_{max} \ee which is the reference state of level $k_{max}-1$ and trivially fulfills the Conditions \ref{cond}. \begin{coro}\label{c3.2} The system of $SU(N)$ matrix difference equations (\ref{3.2}) is solved by the nested Bethe Ansatz (\ref{3.19}) with (\ref{3.20}), (\ref{3.21}) and ${f^{1\dots n}}({\underline x})={f^{(0)}}^{1\dots n}({\underline x})$. \end{coro} \section{Weights of generalized $SU(N)$ Bethe vectors}\label{s4} \setcounter{equation}{0} In this section we analyze some group theoretical properties of generalized Bethe states. We calculate the weights of the states and show that they are highest weight states. The first result does not depend on any restriction to the states. On the other hand the second result is not only true for the conventional Bethe Ansatz, which solves an eigenvalue problem and which is well known, but also, as we will show, for the generalized one which solves a difference equation (or a differential equation). By asymptotic expansion of the $R$-matrix and the monodromy matrix $T$ (cf. 
eqs.(\ref{2.5}) and(\ref{2.8})) we get for $u\to\infty$ \begin{eqnarray}\label}\newcommand{\ean}{\end{eqnarray}{4.1} R_{ab}(u)&=&{\bf 1}_{ab}-\frac2{Nu}\,P_{ab}+ O(u^{-2})\\ \T{a}({\underline x},u)&=&{\bf 1}_{1\dots n,a}+\frac2{Nu}\,M_{1\dots n,a} + O(u^{-2}). \ean Explicitly we get from eq.~(\ref{2.8}) \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.3} M_{1\dots n,a}=P_{1a}+\dots+P_{na} \ee where the $P$'s are the permutation operators. The matrix elements of $M_{1\dots n,a}$ as a matrix in the auxiliary space are the $su(N)$ Lie algebra generators. In the following we will consider only operators acting in the fixed tensor product space $V={V^{1\dots n}}$ of (\ref{2.1}); therefore we will omit the indices $1\dots n$. In terms of matrix elements in the auxiliary space $V_a$ the generators act on the basis states as \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.4} M^{\alpha'}_\alpha\,|\,\alpha_1,\dots,\alpha_i,\dots,\alpha_n\ra=\sum_{i=1}^n \delta_{\alpha'\alpha_i}\,|\,\alpha_1,\dots,\alpha,\dots,\alpha_n\ra. \ee The Yang-Baxter relations (\ref{2.9}) yield for $x_a\to\infty$ \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.5}[M_a+P_{ab},T_b(x_b)]=0\ee and if additionally $x_b\to\infty$ \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.6}[M_a+P_{ab},M_b]=0\ee or for the matrix elements \begin{eqnarray}\label}\newcommand{\ean}{\end{eqnarray}{4.7} [M^{\alpha'}_\alpha,T^{\beta'}_\beta(u)]&=& \delta_{\alpha'\beta}\,T^{\beta'}_\alpha(u) -\delta_{\alpha\beta'}\,T^{\alpha'}_\beta(u)\\~\label{4.8} [M^{\alpha'}_\alpha,M^{\beta'}_\beta]&=& \delta_{\alpha'\beta}\,M^{\beta'}_\alpha -\delta_{\alpha\beta'}\,M^{\alpha'}_\beta. \ean Equation (\ref{4.8}) represents the structure relations of the $su(N)$ Lie algebra and (\ref{4.7}) the $SU(N)$-covariance of $T$. In particular the transfer matrix is invariant \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.9} [M^{\alpha'}_\alpha,{\rm tr}\,T(u)]=0. \ee We now investigate the action of the lifting operators $M^{\alpha'}_\alpha$ ($\alpha'>\alpha$) to generalized Bethe vectors. \begin{lemm}\label{l4.1} Let $F[g]({\underline x})\in V$ be a Bethe Ansatz vector given in terms of a vector $g({\underline x},{\underline u})\in V^{(1)}\cong{\bf C}^{(N-1)\otimes m}$ by \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.10} F[g]({\underline x},{\underline u})=B_{\beta_m}({\underline x},u_m)\dots B_{\beta_1}({\underline x},u_1)\,\Omega~ g^{\underline\beta}({\underline x},{\underline u}) \ee with ${\underline\beta}=\beta_1,\dots,\beta_m$. Then $M^{\alpha'}_\alpha\,F[g]$ is of the form \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.11} M^{\alpha'}_\alpha\,F[g]=\cases{ \sum_{j+1}^mB_{\beta_m}\dots\delta_{\alpha'\beta_m}\dots B_{\beta_1}\, \Omega~G_j^{\underline\beta}({\underline x},{\underline u})&\rm for $\alpha'>\alpha=1$\cr F[{M^{(1)}}^{\alpha'}_\alpha\,g]&\rm for $\alpha'>\alpha>1$,} \ee where the ${M^{(1)}}^{\alpha'}_\alpha$ are the $su(N-1)$ generators represented in $V^{(1)}$ (analogously to (\ref{4.3})) and \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.12} G_m({\underline x},{\underline u})=\left(\frac1{\prod_{j=1}^mb(u_m-u_j)}- \frac{\prod_{i=1}^nb(x_i-u_m)}{\prod_{j=1}^mb(u_j-u_m)}\, Q^{(1)}({\underline u};m)\right)g({\underline x},{\underline u}). 
\ee The operator $Q^{(1)}({\underline u};m)\in End(V^{(1)})$ is a next level Q-matrix given by the trace \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.13}\nonumber Q^{(1)}({\underline u};m)={\rm tr}_a\,T^{Q(1)}_{a}({\underline u};m) \ee (see eq.~(\ref{3.18a})). The other $G_j$ are obtained by Yang-Baxter relations. \end{lemm} {\bf Proof:} First we consider the case $\alpha=1$. The commutation rule (\ref{4.7}) reads for $\beta'=1$ and $\alpha'\to\alpha$ $$[M^\alpha_1,B_\beta(u)]=\delta_{\alpha\beta}\,A(u)-D^\alpha_\beta(u).$$ We commute $M^\alpha_1$ through all the $B$'s of (\ref{4.10}) and use $M^\alpha_1\,\Omega=0$ for $\alpha>1$ (cf.~(\ref{4.4})). The $A$'s and $D$'s appearing are also commuted through all the $B$'s using the commutation rules (\ref{2.12}) and (\ref{2.13}). In each summand exactly one $B$-operator disappears. Therefore the result is of the form of eq.~(\ref{4.11}). Contributions to $G_m$ arise when we commute $M^\alpha_1$ through $B_{\beta_m}(u_m)$ and then push the $A(u_m)$ and $D(u_m)$ through the other $B$'s ($j<m$), only taking the first terms of (\ref{2.12}) and (\ref{2.13}) into account. All other terms would contain a $B(u_m)$ and would therefore contribute to one of the other $G_j~(j<m)$. Finally we apply $A(u_m)$ and $D(u_m)$ to $\Omega$ $$A(u_m)\,\Omega=\Omega~,~~~ D^\alpha_\beta(u_m)\,\Omega=\delta_{\alpha\beta}\, \prod_{i=1}^nb(x_i-u_m)\,\Omega $$ and get eq.~(\ref{4.12}). For $\alpha'>\alpha>1$ we again use the commutation rule (\ref{4.7}) $$[M^{\alpha'}_\alpha,B_\beta(u)]=\delta_{\alpha'\beta}\,B_\alpha(u)$$ and get $$ M^{\alpha'}_\alpha\,B_\beta(u_m)\dots B_\beta(u_1) =B_{\beta_m}(u_m)\dots B_{\beta_1}(u_1)\,M^{\alpha'}_\alpha\\ +B_{\beta'_m}(u_m)\dots B_{\beta'_1}(u_1)\, {M^{(1)}}^{{\underline\beta}',\alpha'}_{{\underline\beta},\alpha} $$ with $M^{(1)}_{1\dots m,a}=P^{(1)}_{1a}+\dots+P^{(1)}_{ma}$ analogously to (\ref{4.3}). Because of $M^{\alpha'}_\alpha\,\Omega=0$ for $\alpha'>1$ (cf.~(\ref{4.4})) we get eq.~(\ref{4.11}) The diagonal elements of $M$ are the weight operators $W_\alpha=M^\alpha_\alpha$, they act on the basis vectors in $V$ as \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.14} W_\alpha\,|\,\alpha_1,\dots,\alpha_n\ra= \sum_{i=1}^n\delta_{\alpha_i\alpha}\,\,|\,\alpha_1,\dots,\alpha_n\ra \ee which follows from ${P_i}^{\alpha'}_\alpha\,|\,\alpha_i\ra= \delta_{\alpha\alpha_i}\,\,|\,\alpha'\ra$. In particular we get for the Bethe Ansatz reference state (\ref{3.14}) \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.15}W_\alpha\,\Omega=\delta_{\alpha1}\,n\,\Omega.\ee \begin{lemm}\label{l4.2} Let $F[g]\in{V^{1\dots n}}$ be as in Lemma \ref{l4.1}. Then \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.16} W_\alpha\,F[g]=\cases{ (n-m)\,F[g]&\rm for $\alpha=1$\cr F[W^{(1)}_\alpha\,g]&\rm for $\alpha>1$,} \ee where the $W^{(1)}_\alpha$'s are the $su(N-1)$ weight operators acting in $V^{(1)}$, i.e. the diagonal elements of generator matrix ${M^{(1)}}^{\alpha'}_\alpha$ (analogously to (\ref{4.3})). \end{lemm} {\bf Proof:} By means of the commutation relation (\ref{4.7}) for $\alpha'=\alpha=\beta'=1,~\beta>1$ $$[W_1,B_\beta]=-B_\beta$$ we commute $W_1$ through all $m~B$'s of eq.~(\ref{4.10}) and with (\ref{4.15}) we get the first equation. 
For the second equation we again use (\ref{4.7}) now for $\alpha'=\alpha>1,~\beta'=1,~\beta>1$ $$[W_\alpha,B_\beta]=\delta_{\alpha\beta}\,B_\beta.$$ Again commuting $W_\alpha$ through all the $B$'s of eq.~(\ref{4.10}) we get with eq.~(\ref{4.15}) \begin{eqnarray*}}\newcommand{\eea}{\end{eqnarray*} W_\alpha\,B_{\beta_m}\dots B_{\beta_1}\,\Omega~g^{\beta_1\dots\beta_m} &=&B_{\beta_m}\dots B_{\beta_1}\,\left(W_\alpha+\sum_{i=1}^m \delta_{\beta_i\alpha}\right)\Omega~g^{\beta_1\dots\beta_m}\\ &=&B_{\beta_m}\dots B_{\beta_1}\,\Omega~ \Big(W^{(1)}_\alpha\,g\Big)^{\beta_1\dots\beta_m} \eea which concludes the proof. \begin{theo} Let the vector valued function $f({\underline x})\in V$ be given by the BETHE ANSATZ \ref{be} fulfilling the assumptions of Theorem \ref{t3.1}. If in addition $f^{(1)}$ is a highest weight vector and an eigenvector of the weight operators with \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.20} W^{(1)}_\alpha\,f^{(1)}=w^{(1)}_\alpha\,f^{(1)}, \ee then also $f$ is a highest weight vector \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.17} M^{\alpha'}_\alpha\,f=0~,~~(\alpha'>\alpha) \ee and an eigenvector of the weight operators \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.18} W_\alpha\,f=w_\alpha\,f~,~~~ w_\alpha=\cases{ n-m &\rm for $\alpha=1$\cr w^{(1)}_\alpha &\rm for $\alpha>1$} \ee with \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.19} w_\alpha\ge w_\beta~,~~~(1\le\alpha<\beta\le N). \ee \end{theo} {\bf Proof:} To prove the highest weight property we apply Lemma \ref{l4.1}. By assumption $f^{(1)}$ fulfills the difference equation $$f^{(1)}(u_1,\dots,u_m+2)=Q^{(1)}({\underline u};m)\,f^{(1)}(u_1,\dots,u_m).$$ Together with eq.~(\ref{3.16}), (\ref{3.17}) and (\ref{4.12}) we obtain after summation $\sum_{u_m}G_m({\underline x},{\underline u})=0$, if $u_m=\tilde u_m-2l_m~(l_m\in{\bf Z})$. The same is true for the other $G_i$ in eq.~(\ref{4.11}), since $g$ fulfills the symmetry property of Condition \ref{cond} (i) and thereby $F[g]({\underline x},{\underline u})$ of eq.~(\ref{4.10}) is symmetric with respect to the $u_i$. Therefore in eq.~(\ref{4.12}) we have $M^{\alpha'}_\alpha\,f=0$ for $\alpha'>\alpha>1$ and for $\alpha'>\alpha=1$ by assumption on $f^{(1)}$. The weights of $f$ follow from Lemma \ref{l4.2} and also by assumption on $f^{(1)}$. From the commutation rule (\ref{4.8}) and ${M^\beta_\alpha}^\dagger= M_\beta^\alpha$ follows $$ 0\le M^\beta_\alpha\,M_\beta^\alpha =M_\beta^\alpha\,M^\beta_\alpha+W_\alpha-W_\beta $$ which implies (\ref{4.19}). Since the states $f^{(k_{max}-1)}$ of eq.~(\ref{3.21}) are highest weight states in $V^{(k_{max}-1)}$ with weight $w^{(k_{max}-1)}_{k_{max}}=n_{k_{max}-1}$ we have the \begin{coro} If $f({\underline x})$ is a solution of the system of $SU(N)$ matrix difference equations (\ref{3.2}) $$ f(\dots,x_i+2,\dots)=Q({\underline x};i)\,f(\dots,x_i,\dots)~,~~(i=1,\dots,n) $$ given by the generalized nested Bethe Ansatz of Corollary \ref{c3.2}, then $f$ is a highest weight vector with weights \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.21} w=(w_1,\dots,w_N)=(n-n_1,n_1-n_2,\dots,n_{N-2}-n_{N-1},n_{N-1}), \ee where $n_k$ is the number of $B^{(k-1)}$ operators in the Bethe Ansatz of level $k,~(k=1,\dots,N-1)$. Further non-highest weight solutions of (\ref{3.2}) are given by \begin{equation}\label}\newcommand{\ee}{\end{equation}{4.22} f^{\alpha'}_\alpha=M^{\alpha'}_\alpha\,f~,~~(\alpha'<\alpha). 
\ee \end{coro} The interpretation of eq.~(\ref{4.21}) is that each $B^{(k)}$-operator reduces $w_k$ and lifts a $w_l~(l>k)$ by one. \section{Examples}\label{s5} \setcounter{equation}{0} From a solution of the matrix difference equations (\ref{3.2}) one gets a new solution by multiplication of a scalar function which is symmetric with respect to all variables $x_i$ and periodic with period $2$. Therefore the solutions of the following examples may be multiplied by such functions. \begin{exam} The simplest example is obtained for $k_{max}=1$ which means the trivial solution of the difference equations $${f^{1\dots n}}={\Omega^{1\dots n}}$$ The weights of ${f^{1\dots n}}$ are $w=(n,0,\dots,0)$. \end{exam} In the language of spin chains this case corresponds to the ferro-magnetic ground state. \begin{exam}\label{e5.2} For the case $k_{max}=1$ and $n^{(1)}=1$ the solution reads $$ {f^{1\dots n}}({\underline x})=\sum_u~{B_{1\dots n}}_{,\beta}({\underline x},u)\,{\Omega^{1\dots n}}~g^\beta({\underline x},u). $$ with $u=\tilde u-2l~(l\in{\bf Z},~\tilde u$ an arbitrary constant) and $$ g^\beta({\underline x},u)=\delta_{\beta2}\prod_{i=1}^n\psi(x_i-u). $$ The weights of this vector ${f^{1\dots n}}$ are $w=(n-1,1,0,\dots,0)$. The action of the creation operator ${B_{1\dots n}}_\beta(x,y;u)$ on the reference state is easily calculated with help of eqs.~(\ref{2.6}) (\ref{2.8}) and (\ref{2.10}). \end{exam} As a particular case of this example we determine explicitly the solution for the following \begin{exam}\label{e5.3} The action of the $B$-operator on the reference state for the case of $n=2$ of Example \ref{e5.2} yields $$ B_{12,\beta}(x,y;u)\mi11\ra= c(x-u)\,b(y-u)\,\,|\,\beta1\ra+c(y-u)\,\mi1\beta\ra. $$ Therefore we obtain $$ f^{12}(x,y)=\sum_u\,\psi(x-u)\,\psi(y-u)\{ c(x-u)\,b(y-u)\,\,|\,\beta1\ra+c(y-u)\,\mi1\beta\ra\} $$ with $u=\tilde u-2l,~(l\in{\bf Z})$. Using the expressions for the functions $b,~c,~\psi$ given by eqs.~(\ref{2.7}) and (\ref{3.18}) we get up to a constant $$ f^{12}(x,y)=\Big( \sin\pi\left(\scriptstyle\frac{x-\tilde u}2-\frac1N\right)\, \sin\pi\left(\scriptstyle\frac{y-\tilde u}2-\frac1N\right)\, \Gamma\left(\scriptstyle\frac{y-x}2-\frac1N\right)\, \Gamma\left(\s1+\frac{x-y}2-\frac1N\right)\Big)^{-1} (\mi21\ra-\mi12\ra). $$ This solution could also be obtained by means of the method used in \cite{KW}, namely by diagonalization of the R-matrix. One obtains the scalar difference equations $$ f_-(x,y)=R_-(x-y)\,f_-(y,x)~,~~~f_-(x,y)=f_-(y,x+2) $$ with the eigenvalue $R_-(x)=(x+2/N)(x-2/N)$ of the antisymmetric tensor representation. \end{exam} \begin{exam}\label{e3.1} Next we consider for $N>2$ the case of the quantum space $V_{123}=V_1\otimes V_2\otimes V_3$ and the case that the nested Bethe Ansatz has only two levels with two creation operators in the first level and one in the second level. This means $k_{max}=3,~n=3,~n^{(1)}=2,~n^{(2)}=1$ and the weights $w=(1,1,1,0,\dots,0)$. The first level Bethe Ansatz is given by $$ f^{123}(x,y,z)=\sum_{u,v}~B_{123,\beta}(x,y,z;v)\,B_{123,\alpha}(x,y,z;u) \,\Omega^{123}~g^{\alpha\beta}(x,y,z;u,v). $$ where the summation is specified by $u=\tilde u-2k,\,v=\tilde u-2l,~(k,l\in{\bf Z})$. By eq.~(\ref{3.16}) $g^{12}$ is related to the next level function ${f^{(1)}}^{12}$ by $$ g^{12}(x,y,z;u,v)=\prod_{x_i=x,y,z}\prod_{u_j=u,v}\psi(x_i-u_j) \,\tau(u-v)\,{f^{(1)}}^{12}(u,v). 
$$ The second level Bethe Ansatz reads $$ {f^{(1)}}^{12}(u,v)=\sum_w\,B^{(1)}_{12\gamma}(u,v;w)\, {\Omega^{(1)}}^{12}~{g^{(1)}}^{\gamma}(u,v;w) $$ where $w=\tilde w-2m,~(m\in{\bf Z})$. The second level reference state is ${\Omega^{(1)}}^{12}=\,|\, 22\ra^{(1)}\in {V^{(1)}}^{12}$. Again according to eq.~(\ref{3.16}) $$ {g^{(1)}}^\gamma(u,v;w)=\psi(u-w)\,\psi(v-w)\,{f^{(2)}}^\gamma, $$ with ${f^{(2)}}^\gamma=\delta_{\gamma3}.$ Similar as in Example \ref{e5.3} the action of the operators $B$ and $B^{(1)}$ on their reference states may be calculated. \end{exam} For this example the two level nested Bethe Ansatz may be depicted as $$ \begin{array}}\newcommand{\ea}{\end{array}{c} \unitlength3,5mm \begin{picture}}\newcommand{\ep}{\end{picture}(10,5) \put(5,1){\oval(2,4)[tr]} \put(6,1){\oval(2,6)[tr]} \put(5,1){\oval(8,2)[tr]} \put(1,3){\line(1,0){4}} \put(1,4){\line(1,0){5}} \put(2,2){\line(0,1){3}} \put(3,2){\line(0,1){3}} \put(4,2){\line(0,1){3}} \put(2.1,4.5){$\scriptstyle x$} \put(3.1,4.5){$\scriptstyle y$} \put(4.1,4.5){$\scriptstyle z$} \put(6,2.5){$\scriptstyle u$} \put(7.1,2.5){$\scriptstyle v$} \put(9.1,1.5){$\scriptstyle w$} \put(1.8,1.2){$\scriptstyle 1$} \put(2.8,1.2){$\scriptstyle 1$} \put(3.8,1.2){$\scriptstyle 1$} \put(5.8,.2){$\scriptstyle 2$} \put(6.8,.2){$\scriptstyle 2$} \put(8.8,.2){$\scriptstyle 3$} \put(.3,2.8){$\scriptstyle 1$} \put(.3,3.8){$\scriptstyle 1$} \put(4.3,1.8){$\scriptstyle 2$} \ep \ea $$ \\[1cm] {\bf Acknowledgment:} The authors have profited from discussions with A.~Fring, R.~Schra\-der, F.~Smirnov and A.~Belavin.
2,869,038,156,183
arxiv
\section{Introduction} Human oriented technology has a central role in computer vision and can greatly advance daily-life related applications. For example, face verification for surveillance~\cite{face} and clothing parsing for fashion search~\cite{clothliu}. One of the most fundamental human oriented techniques is the well-known \emph{human pose estimation} (HPE) in 2D images. In general, HPE could facilitate many applications, e.g., action recognition~\cite{action}, image segmentation~\cite{ladicky}, etc. However, it is difficult to accurately estimate the human pose in unconstrained environments, especially in the presence of vision occlusions and background clutters. To tackle the challenges, it is well-recognized that the contextual information (e.g., clothing attributes) is useful, as illustrated in Figure~\ref{fig:eg}. As a consequence, the so-called \emph{context modeling}, which is to model properly the contextual information possibly existing in images, is widely regarded as a promising direction for HPE. A variety of approaches have been proposed and investigated in the literature over several years, e.g.,~\cite{songchun,ladicky,shen2014unified}. In~\cite{songchun}, it was proposed a model that encourages high contrast between background and foreground. Ladicky et al.~\cite{ladicky} combined together pose estimation and image segmentation, aiming to take the advantages of joint learning. In~\cite{shen2014unified}, a unified structured learning procedure was adopted to predict human pose and garment attribute simultaneously. \begin{figure}[tbp] \centering \subfloat[] { \includegraphics[width=0.3\linewidth]{eg_a.pdf} } \subfloat[] { \includegraphics[width=0.3\linewidth]{eg_b.pdf} } \subfloat[] { \includegraphics[width=0.3\linewidth]{eg_c.pdf} } \caption{ \textbf{Examples to demonstrate the benefit of integrating clothing attributes into HPE.} In the three results of HPE, all human poses in (b) and (c) are correct except lower arms. we can assume that (c) is incorrect based on the great appearance difference between left and right lower arm, but there is slight appearance difference in (b). If we know the clothing attribute type, e.g. the sleeve type or color, we can remove (b) based on the inconsistent color between the upper and lower arms. Finally, we get the correct estimation (a). } \label{fig:eg} \end{figure} While effectual, the existing approaches require to label lots of contextual messages for training, and thus they are time-consuming and impractical. In this paper, we shall introduce a \emph{latent} clothing attribute approach for HPE. Our approach formulates the HPE problem by extending the pictorial structure framework~\cite{ps1,ps2} and, in particular, models the clothing attributes as \emph{latent variables}. Comparing to the previous approaches that rely on label information, our latent approach, in sharp contrast, requires no explicit labels of the clothing attributes and can therefore be executed in an efficient way. We define some clothing attributes and build their connections with human parts (e.g., sleeve with arms). Some domain specific features, including \emph{pose-specific} features and \emph{pose-attribute} features, are designed to describe the connections. We utilize the latent structured support vector machines (LSSVM) for the training procedure, where the attribute values are initialized by a simple K-Means clustering algorithm. 
Then the model parameters are learnt by employing a relabel strategy, which minimizes the objective function of LSSVM in an ``alternating direction'' manner. More precisely, we perform an iterative scheme to train the model: Given the (latent) clothing attributes, we perform a dynamic programming algorithm to find a suboptimal solution for human pose; Given the human pose, we seek the optimal attribute values by performing a greedy search on the attribute space. We empirically show that our approach can achieve the state-of-the-art performance on two benchmarks. In summary, the contributions of this paper are three-folds: (1) We establish a latent clothing attribute approach that can implicitly utilize clothing attributes to enhance HPE. (2) We propose some domain specific features to describe the connections between human parts and clothing attributes. (3) We introduce an efficient algorithm to solve the optimization problem which is indeed challenging due to the presence of latent variables. \section{Related Work} As aforementioned, HPE is a difficult problem, especially in unconstrained scenes. Some of the researchers studied the problem under the context of 3D scenery~\cite{burenius20133d,ics14cvpr}. In the work of~\cite{burenius20133d}, they extended the popular 2D pictorial structure~\cite{ps1,ps2} to 3D images and employed the new framework to model view point, joint angle, etc. Shotton et al.~\cite{shotton2013real} proposed a real time algorithm for estimating the 3D human pose, striving for making the technique practical in real world applications. Most studies (including this work) on HPE focus on 2D static images. In the early works, the human part was often modeled by oriented template. Although straightforward, the oriented templates may not properly handle the fore-shortening of the objects~\cite{nips06,cvpr10,Daniel}. In~\cite{deva11}, an advanced representation scheme was proposed to model the oriented human parts. The new model is formulated as a mixture of non-oriented components, each of which is attributed with a ``type''. Interestingly, the new model can approximate the fore-shortening by tuning the adjacent components in a spring structure. Some work tried to incorporate ``side'' techniques, e.g., image segmentation, to enhance HPE. In~\cite{eccv10}, a variety of image features, e.g., boundary response and region segmentation, were utilized to produce more reliable HPE results. In~\cite{songchun}, the background was modeled as a Gaussian distribution. In~\cite{mixing}, the authors present a two-stage approximate scheme to improve the accuracy of estimating lower arms in videos. The algorithm was imposed to output the candidates with high contrast to the surroundings. Besides of the shape feature which is very discriminative, the appearance feature (e.g. color, texture) is also important for HPE~\cite{bmvc09}. Generally, the appearance feature is actually a description of the clothing. As illustrated in Figure~\ref{fig:eg}, there is a strong correlation between human pose and clothing attribute. Some previous work such as~\cite{clotheccv,clothliu,poselets,junchi} utilized the result of HPE to predict the clothing attribute or retrieve similar garments. Other methods (e.g.,~\cite{cloth12,shen2014unified}) attempted to refine the clothing parsing by HPE and, in turn, refine HPE by clothing parsing. However, this requires a large annotation for clothing. In our work, it is not required to manually annotate the attributes as we take them as latent variables. 
There is some work that has investigated clothing attributes in the tasks other than HPE. In~\cite{clothrec}, Liu et al. aimed to recommend garment for specific scenes. To bridge the gap between the low-level image evidence and the garment recommendation, they integrated an attribute-level representation that propagates semantic messages to the recommendation system. In~\cite{action}, similar attribute techniques as ours were used for action recognition. However, there is a key difference: In~\cite{action}, the attribute is used as a middle level prior and the high level task was facilitated by the knowledge of attribute; In our work, the attribute is modeled in a unified manner with human pose. Our model takes a relabel strategy to alternatively optimize the variables of the attribute and pose. \begin{figure}[tbp] \centering \includegraphics[width=\textwidth]{frame.pdf} \caption{ \textbf{Overview of our approach.} } \label{fig:frame} \end{figure} \section{HPE with Latent Clothing Attributes} We summarize the pipeline of our approach in Figure~\ref{fig:frame}. First, we take a pre-processing step to detect potential human parts in the image. This step allows us to have a search space with manageable size. Then, we extract the domain specific features to characterize the human pose and clothing attributes. Finally, we utilize the LSSVM to actualize our attribute aware human pose model and present an efficient inference algorithm to find an approximate optimal solution to LSSVM. Note that our model can reveal the clothing attributes, and thus humans with similar attribute values will be grouped together (i.e., clustering human by their clothing attributes). \begin{table} \centering \caption{The configuration of clothing attributes} \begin{tabular}{|c|c|c|c|} \hline Attribute & Human parts & Features & Number of values \\ \hline Sleeve & All arms & Color Histogram & 3 \\ \hline Neckline & Torso + Head & HOG & 4\\ \hline Pattern & Torso & LBP~\cite{lbp} & 5\\ \hline \end{tabular} \label{tb:attr} \end{table} Before introducing the proposed approach in detail, we would like to introduce some notations. We write $I$ for an image. A human part is represented as a bounding box $(x, y, s, \theta)$, where $(x, y)$ is the coordinate, $s$ is the size and $\theta$ is the rotation. To obtain an input space with manageable size, we use the existing HPE method~\cite{deva11} to produce 40 candidates for each human part. Thus, the input space $\mathcal{X}$ of our approach is defined as: \begin{equation} \mathcal{X} = \left\{ \mathbf{x}|\mathbf{x} = ( \mathbf{b}_1, \mathbf{b}_2, \cdots, \mathbf{b}_m ) \right\}, \end{equation} where $m$ is the number of human upper-body parts ($m = 6$ in this work), and $\mathbf{b}_i$ denotes the candidate ensemble for the $i$-th human part (there are 40 candidates in each $\mathbf{b}_i$). The output space of human pose is defined as as follows: \begin{equation} \mathcal{P} = \{\mathbf{p}| \mathbf{p}=(p_1,p_2,\cdots,p_m), \forall i, 1\leq p_i \leq 40\}, \end{equation} where $p_i$ is a positive integer that indicates the index of the estimated candidate. We aim to integrate clothing attributes into HPE task, striving for capturing the strong correlation between human parts and clothing attributes. We consider three types of attributes in this work, including ``Neckline'', ``Pattern'' and ``Sleeve''. Each attribute has multiple styles, e.g., short sleeve and long sleeve for the ``Sleeve'' attribute. 
Heuristically, for each $r$-th attribute ($r=1,2,3$), the number of attribute values, $T_r$, are determined as in Table~\ref{tb:attr} (see the last column). Then the output space of the latent clothing attributes is as follows: \begin{equation} \mathcal{A} = \left\{ \mathbf{a}|\mathbf{a} = (a_1, a_2, \cdots, a_n), \forall r, 1 \leq a_r \leq T_r \right\}. \end{equation} where $n$ is the number of clothing attributes ($n = 3$ in this work), and $a_r$ is the label for the $r$-th attribute. Note here that it has no specific consideration to choose the value for $a_r$, e.g., $a_1 = 1$ may mean short sleeve or long sleeve. In this work it is an unsupervised clustering procedure that recognizes the clothing attributes. Finally, the task of jointly estimating clothing attribute and human pose is formulated as follows: \begin{equation} f: \mathcal{X} \rightarrow \mathcal{Y}, \label{eq:task_func} \end{equation} where $\mathcal{Y}$ is the output space given by \begin{equation} \mathcal{Y} = \left\{ \mathbf{y}|\mathbf{y} = (\mathbf{p},\mathbf{a}), \mathbf{p} \in \mathcal{P}, \mathbf{a} \in \mathcal{A} \right\}. \end{equation} Regarding the prediction function $f$, we presume that there is a score function $S$ which measures the fitness between any input-output pair $(\mathbf{x}, \mathbf{y})$ such that: \begin{equation} \label{eq:S(x,y;beta)} S(\mathbf{x}, \mathbf{y};\beta) = \langle \beta, J(\mathbf{x}, \mathbf{y}) \rangle \end{equation} where $\langle \cdot \rangle$ denotes the inner product between two vectors, $J(\cdot, \cdot)$ is the feature representation, and $\beta$ is an unknown weight vector. In this way, the mapping function $f$ in Eq.~\ref{eq:task_func} can be written as: \begin{equation} f(\mathbf{x}; \mathbf{\beta}) = \argmax_{\mathbf{y} \in \mathcal{Y} } S(\mathbf{x}, \mathbf{y}; \mathbf{\beta}) \label{eq:score} \end{equation} This is a latent structured learning problem, where the latent variables are clothing attributes. Our learning procedure is motivated by~\cite{dpm}, which employs a relabel strategy to increasingly improve the prediction of latent variables. Yet before proceeding to the training pipeline, we firstly introduce the design of the domain-specific features, as shown in the next section. \subsection{Feature Representation} \label{sec:feature} The joint feature representation is an important component in structured learning~\cite{svm-struct}. We define the joint feature function $J(\mathbf{x}, \mathbf{y})$ by using two types of features, including {\em pose-specific} features denoted by $j_p(\mathbf{x}, \mathbf{p})$, and {\em pose-attribute} features denoted by $j_{pa}(\mathbf{x}, \mathbf{y});$ that is, \begin{equation} \label{eq:feature} \langle \mathbf{\beta}, J(\mathbf{x},\mathbf{y}) \rangle = \langle \mathbf{\beta}_p, j_p(\mathbf{x}, \mathbf{p}) \rangle + \langle \mathbf{\beta}_{pa}, j_{pa}(\mathbf{x},\mathbf{y}) \rangle \end{equation} In the following, we present our techniques used to design each type of feature. \subsubsection{Pose-specific Features} Given an input sample $\mathbf{x}$, we use the Histogram of Oriented Gradients (HOG)~\cite{hog} to describe the shape of a candidate and consider the deformation constraint between two connected parts: \begin{equation} j_p(\mathbf{x}, \mathbf{p}) = \sum_{i=1}^m hog(\mathbf{x}, p_i) + \sum_{(i, j) \in E_p} d(\mathbf{x}, p_i, p_j), \end{equation} where $E_p$ is the set of connected limbs. 
The design of the deformation feature $d(\mathbf{x}, p_i, p_j)$ involves some basic geometry constraints between connected parts, including relative position, rotation and distance of part candidate $p_i$ with respect to $p_j$, which is computed as $[x_j - x_i, y_j - y_i, (x_j - x_i)^2, (y_j - y_i)^2]$~\cite{deva11}. \subsubsection{Pose-Attribute Features} Now we try to integrate the clothing attributes into our model. Notice that an attribute is only associated with some of the human parts). For a given attribute $r$, we denote the human parts associated with it as $r_p$ and the corresponding configuration as $P_r$. The detailed inter-dependency between human parts and clothing attributes is shown in the second column of Table~\ref{tb:attr}. According to the work~\cite{clothliu}, for different attributes, different low-level features should be used to achieve good performance. The specific features used for each clothing attribute can be found in the third column in Table~\ref{tb:attr}. Formally, the pose-attribute features are defined as: \begin{equation} \label{eq:j_pa} j_{pa}(\mathbf{x}, \mathbf{y}) = \sum_{r=1}^n \Psi(\mathbf{x}, P_r, a_r) \end{equation} where $\Psi(\mathbf{x}, P_r, a_r)$ denotes the features extracted from the human part $\textbf{x}$, with the configuration $P_r$ and the attribute label $a_r$. \begin{algorithm} \caption{Structured Learning with Latent SVM} \begin{algorithmic}[1] \REQUIRE Positive samples, negative samples, initial model $\beta$, number of relabel iteration $t_1$, number of hard negative mining iteration $t_2$. \ENSURE Final Model ${\beta}^*$. \STATE Initialize the final model: ${\beta}^* = \beta$. \STATE Let the negative sample set $F_n = \emptyset$. \FOR{ relabel = 1 to $t_1$ } \STATE Let the positive sample set $F_p = \emptyset$. \STATE Add positive samples to $F_p$. \FOR{ iter = 1 to $t_2$ } \STATE Add negative samples to $F_n$. \STATE ${\beta}^* := \mathrm{Pegasos}({\beta}^*, F_p \bigcup F_n)$.\\ \STATE Remove easy negative samples: \\ Remove the samples whose feature vector $v$ satisfying $\langle{\beta}^*, v \rangle < -1$ from $F_n$. \ENDFOR \ENDFOR \end{algorithmic} \label{alg:train} \end{algorithm} Similar to~\cite{shen2014unified}, the pose-attribute feature is designed by an outer product of low-level features and an identity vector. We first convert the clothing attribute label $a_r$ to a $T_r$-dimensional vector, denoted as $L(a_r)$, one element of which is assigned with valued ``1'' and all others are set to be ``0''. From Table~\ref{tb:attr}, the low-level feature descriptors of the $r$-th clothing attribute depend on two aspects: 1) the corresponding human parts and 2) the feature type (denoted by $F_r$ and has been specified in Table~\ref{tb:attr}). We use $F_r(P_r)$ to denote features of the $r$-th clothing attribute associated with the part configuration $P_r$. Then our pose-attribute feature $\Psi(\mathbf{x}, P_r, a_r)$ is designed as follows: \begin{equation} \Psi_{pa}(\mathbf{x}, P_r, a_r) = F_r(P_r) \otimes L(a_r) \end{equation} where the ``$\otimes$'' operator represents the (vectorized) outer product of two vectors. \subsection{Structured Learning with Latent SVM} Now we consider the problem of learning the prediction mapping $f$, given a collection of images labeled with human part locations. This is the type of data available in the all standard benchmark dataset for human pose estimation. Note that clothing attributes have no labels, and we treat them as latent variables. 
We describe a framework for initializing the structure of a joint model and learning all parameters. Parameter learning is done by constructing a LSSVM training problem. We train the LSSVM using the relabel approach (details will be described later) together with the data-mining (hard negative mining), and we use Pegasos~\cite{pegasos} for the online update to solve the problem of huge space for negative samples. \begin{algorithm} \caption{Inference for Clothing Attributes} \begin{algorithmic}[1] \REQUIRE A sample $\mathbf{x}$, Model parameter $\beta$ , Human parts label $\mathbf{p}$ \ENSURE optimal clothing attributes value $\mathbf{a^*}$ \STATE let $T_r$ is the number of $r$-th clothing attribute type \FOR {r:= 1 \textbf{to} 3} \STATE select the attribute value which has highest score:\\ $\mathbf{a}_r = \argmax_{1 \leq r \leq T_r} \langle \beta_{pa}^r, j_{pa}(\mathbf{x}, P_r, a_r) \rangle $ \ENDFOR \end{algorithmic} \label{alg:attr} \end{algorithm} \subsubsection{Objective Function} We aim to learn the fitness function $S(\mathbf{x}, \mathbf{y}; \beta)$ defined in Eq.~\eqref{eq:S(x,y;beta)}, which can later be used for joint estimation (see Eq.~\eqref{eq:score}). Given a positive training sample $(\mathbf{x}, \mathbf{y})$, we expect $S(\mathbf{x}, \mathbf{y}; \beta) \geq 1$. On the other hand, if a training sample $(\mathbf{x}, \mathbf{y})$ is negative, the output of the fitness function is required to be less than $-1$. In this way, given a training set $D = \{ (\mathbf{x}_1, \mathbf{y}_1, z_1), \cdots, (\mathbf{x}_q, \mathbf{y}_q, z_q) \}$, where $z_k \in \{1, -1\}$ indicates the $k$-th sample is positive or not, we can optimize the following objective function to solve $\beta$: \begin{equation} \begin{split} \min_{\beta} \frac{1}{2} \|\beta\|^2 + C \sum_{k=1}^{q}\max(0, 1 - z_k S(\mathbf{x}_k, \mathbf{y}_k; \beta)). \label{eq:latent} \end{split} \end{equation} \subsubsection{Initialization} Since the clothing attributes are latent variables, we can only access the label of human pose. To start up, we take a relabel strategy to update the positive samples (more accurately, the clothing attribute labels) and the weight vector $\beta$ in an alternative manner. There are many ways to initialize the latent variables. One can randomly assign labels for training samples which may be unstable. In our work, we first use the groundtruth of human pose to extract low-level features (see Table~\ref{tb:attr}) for each attribute. Then we perform a $K$-Means clustering algorithm to obtain the center of each attribute value, where $K$ is exactly the number of attribute values we defined in Table~\ref{tb:attr}. In this way, the initial label for the clothing attribute can be determined by the closest center. Now all of the labels have been generated, we can solve Problem~\eqref{eq:latent} to obtain the initial weight vector $\beta$ (line 1 in Algorithm~\ref{alg:train}). \subsubsection{Relabel Strategy} As the initial clothing attribute labels are not accurate, we employ a relabel strategy to update the attribute labels. That is, given the model parameter $\beta$ and human pose, we predict the clothing attribute by maximizing the fitness function $S(\mathbf{x}, \mathbf{y}; \beta)$, which is shown in Algorithm~\ref{alg:attr}. Note that according to the design of our joint feature $J(\mathbf{x}, \mathbf{y})$, the pose-specific features are irrelevant for the inference of attributes. 
From Eq.~\eqref{eq:j_pa}, we know that there is no interaction between different attributes since $j_{pa}$ is summation of $n$ separate attributes associated features. Therefore, we can perform an efficient greedy search for each attribute to obtain a local optima (line 2--4 in Algorithm~\ref{alg:attr}). \begin{algorithm} \caption{Approximate Inference for Clothing Attribute Aware HPE Task} \begin{algorithmic}[1] \REQUIRE A sample $\mathbf{x}$, Model parameter $\beta$. \ENSURE Optimal estimation $\mathbf{y}^*$ and score $S*$. \STATE Set $\mathbf{y}^* = \emptyset$. \STATE Set the optimal score $S^* = -\infty$. \STATE Initialize the parts estimation $\mathbf{p}_0$. \REPEAT \STATE Compute the local optimal clothing attributes $\mathbf{a}_t$. \STATE Compute the local optimal human pose $\mathbf{p}_t$. \STATE Compute the local score: $S = S(\mathbf{x}, \mathbf{y}_t; \beta)$. \IF{$S > S^*$} \STATE $S^* = S$, $\mathbf{y}^* = \mathbf{y}_t $ \ENDIF \UNTIL{$S^*$ not change} \end{algorithmic} \label{alg:inference} \end{algorithm} \subsubsection{Hard Negative Mining} For a recognition or detection task, one can obtain a positive sample set with manageable size. However, there is a huge space for the negative samples. Actually, it is not possible for enumerate \emph{all} negative samples. Thus, it is important to feed an algorithm with ``hard'' negative samples for efficiency and memory cost. In line 6--10 of Algorithm~\ref{alg:train}, we perform hard negative mining~\cite{dpm} to obtain valuable negative samples. This schema will call the inference algorithm~\ref{alg:inference} (see Section~\ref{subsec:inference}). More concretely, given an input sample $\mathbf{x}$ and weight vector $\beta$, we launch Algorithm~\ref{alg:inference} to find the optimal estimation $\mathbf{y}^*$. If $z \cdot S^*$ is less than $-1$ (a threshold we set), $\mathbf{x}$ is considered hard. The searching procedure on $\mathbf{x}$ will be stopped only when the $S^*$ is greater than $-1$ (the $\mathbf{y}^*$ produced by the previous step is removed from the search space). After collecting all the hard negative samples, we update $\beta$ with Pegasos solver~\cite{pegasos} (line 8 in Algorithm~\ref{alg:train}). Then we use the updated $\beta$ to perform a shrinkage step to remove the easy negatives from the hard negative set $F_n$. \begin{figure}[tbp] \centering \includegraphics[width=0.9\textwidth]{whole-model.pdf} \caption{ \textbf{Nodes with numbers from 0 to 5 are the human part variable and those 6 to 8 are clothing attributes. Colored nodes are the potentials.} } \label{fig:graph} \end{figure} \subsection{Inference} \label{subsec:inference} In Figure~\ref{fig:graph}, we represent our problem as a factor graph $\mathcal{G}$, where the rectangle node denotes a human part, the circle node with double boundaries denotes a clothing attribute. As our original problem is a cyclic graph, it cannot be optimized exactly and efficiently. Therefore, in Algorithm~\ref{alg:inference}, we propose an iterative algorithm to search for an approximate solution. Our algorithm receives a sample x, the model parameter $\beta$ as inputs and outputs a local optima for human parts and clothing attribute. In each iteration, by fixing the attributes, the inference can be performed on a tree structure, which can be optimized with a dynamic programming~\cite{ps2}. When the human parts are fixed, an efficient greedy search schema for clothing attribute is employed (see Algorithm~\ref{alg:attr}). 
\begin{algorithm} \caption{Inference for Human Pose} \begin{algorithmic}[1] \REQUIRE A sample $\mathbf{x}$, Model parameter $\beta$ , Clothing attributes value $\mathbf{a}$ \ENSURE optimal human parts estimation $\mathbf{p^*}$ \STATE set the optimal human parts estimation $\mathbf{p^*} = \emptyset$ \STATE set the node 0 as the root node \FOR{ each candidate $\mathbf{p}_i$ of node $i$ } \STATE set $m(\mathbf{p}_i) = \langle \beta_p^i, \phi_p(\mathbf{x}, p_i) \rangle + \langle \beta_{pa}^r, \Psi_{pa}(\mathbf{x}, P_r, a_r) \rangle$ \ENDFOR \FOR{ each candidate $\mathbf{p}_j$ of parent node $j$ and $\mathbf{p}_i$ of child node $i$ } \STATE set $l(\mathbf{p}_i, \mathbf{p}_j) = \langle \beta_p^{ij}, \psi_p(\mathbf{x}, p_i, p_j) \rangle$ \IF{ $i$ is a leaf node } \STATE $B_i(\mathbf{p}_j) = \max_{\mathbf{p}_i} (m(\mathbf{p}_i) + l(\mathbf{p}_i, \mathbf{p}_j) )$ \ELSE \STATE $B_i(\mathbf{p}_j) = \max_{\mathbf{p}_i} (m(\mathbf{p}_i) + l(\mathbf{p}_i, \mathbf{p}_j) + \sum_{v \in C_i} B_v(\mathbf{p}_i) )$ \ENDIF \ENDFOR \STATE select the best candidate for the root node: \\ $\mathbf{p}_0^* = \arg \max_{\mathbf{p}_0} ( m(\mathbf{p}_0) + \sum_{v \in C_0} B_v(\mathbf{p}_0) )$ \FOR{ each parent-child pair ($\mathbf{p}_j^*, \mathbf{p}_i$) } \STATE $\mathbf{p}_i^* = \arg \max_{\mathbf{p}_i} B_i(\mathbf{p}_j^*)$ \ENDFOR \end{algorithmic} \label{alg:ps} \end{algorithm} \subsubsection{Inference for Human Pose} We elaborate the inference procedure of human pose by extending the pictorial structure framework. In Figure~\ref{fig:graph}, we denote our score with colored nodes, with purple and red ones denoting the appearance and deformation scores. The main extension for the traditional PS model is the cyan nodes, which denoting the score to measure the fitness of human pose and clothing attribute (called pose-attribute score). Therefore, we propose the human pose inference procedure in Algorithm~\ref{alg:ps}. We denote the children nodes as $C_i$ for a node $i$. We compute the appearance and pose-attribute scores in line 3--5. In line 7, we compute the deformation score for each parent-child pair node $i$ and $j$. In the line 8--12, we compute conventional message passing procedure by dynamic programming~\cite{ps1}. Then we perform a top-down process to find the best candidate for each human part in line 14--17. \section{Experiments} \label{sec:exp} \subsection{Datasets} We evaluate our approach using the Buffy dataset~\cite{fer08} and the DL (daily life) dataset. The Buffy Dataset contains 748 pose-annotated video frames from Buffy TV show. This dataset is presented as a benchmark for HPE task. The DL dataset contains 997 daily life photos collected from the Flickr website. We annotate the human pose for this dataset. Compared with Buffy, the DL dataset has more various clothing attribute values. In order to obtain quantitative evaluation results for attributes, we manually annotate the clothing attributes for Buffy and DL. There is a standard partition of Buffy for training and testing, where the training set consists of 472 images and the remaining are used for testing. For the DL dataset, we select randomly 297 images for training and use the remaining 700 images for testing. 
\begin{table} \centering \caption{Comparison with State-of-the-art Algorithms on the Buffy Dataset} \begin{tabular}{|c|c|c|c|c|c|} \hline Method & Torso & Upper arms & Lower arms & Head & Total \\ \hline Andriluka et al.~\cite{cvpr09} & 90.7 & 79.3 & 41.2 & 95.5 & 73.5 \\ \hline Sapp et al.~\cite{eccv10} & \textbf{100} & 95.3 & 63.0 & 96.2 & 85.5 \\ \hline Yang and Ramanan~\cite{deva11} & \textbf{100} & 96.6 & 70.9 & \textbf{99.6} & 89.1 \\ \hline Our Approach & \textbf{100} & \textbf{97.1} & \textbf{78.4} & 99.1 & \textbf{91.6} \\ \hline \end{tabular} \label{tb:buffy} \end{table} \begin{table} \centering \caption{Comparison with State-of-the-art Algorithms on the DL Dataset} \begin{tabular}{|c|c|c|c|c|c|} \hline Method & Torso & Upper arms & Lower arms & Head & Total \\ \hline Andriluka et al.~\cite{cvpr09} & 97.0 & 91.7 & 84.5 & 94.0 & 90.6 \\ \hline Sapp et al.~\cite{eccv10} & \textbf{100} & 88.5 & 78.0 & 87.6 & 86.8 \\ \hline Yang and Ramanan~\cite{deva11} & 99.8 & 95.7 & 87.5 & 95.6 & 93.6 \\ \hline Our Approach & \textbf{100} & \textbf{97.2} & \textbf{91.3} & \textbf{99.1} & \textbf{95.7} \\ \hline \end{tabular} \label{tb:dl} \end{table} \subsection{Baselines and Metric} We compare our approach with three state-of-the-art algorithms: Andriluka et al.~\cite{cvpr09}, Sapp et al.~\cite{eccv10}, Yang and Ramanan~\cite{deva11}. For the HPE results, we evaluate them with a standardized evaluation protocol based on the probability of correct pose (PCP)~\cite{fer09}, which measures the percentage of correctly localized human parts. For the clothing attributes results, we evaluate them with a standardized metric (F1 score) of clustering task. We use the $K$-Means clustering results as our baseline for clothing attributes. First we use the groundtruth of human pose to obtain the clustering center for each attribute value. Then we perform $K$-Means clustering under a given pose, which is produced by either the state-of-the-art HPE algorithms or the groundtruth. \begin{figure}[tbp] \centering \includegraphics[width=0.9\textwidth]{compare.pdf} \caption{ \textbf{Comparison of our approach with Yang and Ramanan~\cite{deva11} } Yang and Ramanan~\cite{deva11} produces incorrect estimation (the 1st and 3rd) for upper and lower arms, while our latent clothing attribute approach produces correct. } \label{fig:compare} \end{figure} \subsection{Results} Figure~\ref{fig:result} shows some exemplar HPE results produced by our approach. We provide the PCP evaluation results on Buffy and DL in Table~\ref{tb:buffy} and Table~\ref{tb:dl} respectively. For the Buffy dataset, Table~\ref{tb:buffy} shows that our approach consistently outperforms Yang and Ramanan~\cite{deva11} which is a recently established algorithm. It is expected that the most difficult parts to estimate are the lower arms. Surprisingly, the improvement on the lower arms of our approach achieves 7.5 percent higher than Yang and Ramanan, possibly because of the integration of the sleeve attribute. For the DL dataset, our algorithm consistently outperforms all the competing baselines since the photos in DL are collected from daily life and have richer clothing attributes than Buffy. 
\begin{table} \centering \caption{F1 scores for clothing attributes results on Buffy} \begin{tabular}{|c|c|c|c|c|} \hline HPE & Sleeve & Neckline & Pattern & Total \\ \hline Andriluka et al.~\cite{cvpr09} + $K$-Means & 24.1 & 26.6 & 34.2 & 28.3 \\ \hline Sapp et al.~\cite{eccv10} + $K$-Means & 22.9 & 27.9 & 40.5 & 30.4 \\ \hline Yang and Ramanan~\cite{deva11} + $K$-Means & 38.3 & 25.7 & 22.6 & 28.9\\ \hline Groundtruth + $K$-Means & 34.7 & 36.1 & 39.5 & 36.8\\ \hline Our Approach & \textbf{55.6} & \textbf{68.8} & \textbf{80.8} & \textbf{68.4} \\ \hline \end{tabular} \label{tb:f1_buffy} \end{table} \begin{table} \centering \caption{F1 scores for clothing attributes results on DL} \begin{tabular}{|c|c|c|c|c|} \hline HPE & Sleeve & Neckline & Pattern & Total \\ \hline Andriluka et al.~\cite{cvpr09} + $K$-Means & 27.5 & 31.7 & 27.6 & 28.9 \\ \hline Sapp et al.~\cite{eccv10} + $K$-Means & 34.9 & 30.5 & 23.8 & 29.7 \\ \hline Yang and Ramanan~\cite{deva11} + $K$-Means & 43.2 & 28.6 & 35.8 & 35.9 \\ \hline Groundtruth + $K$-Means & 31 & 29.8 & 26.1 & 28.9 \\ \hline Our Approach & \textbf{57.2} & \textbf{60.3} & \textbf{74.7} & \textbf{64.1} \\ \hline \end{tabular} \label{tb:f1_dl} \end{table} As we also aim to reveal the clothing attribute, we show some results in Figure~\ref{fig:sleeve} for Buffy and DL, where we arrange the images with same attribute value into one group (i.e. clustering humans by their clothing attributes). In the top pane of Figure~\ref{fig:sleeve}, we group humans by the sleeve attribute. The performance under the F1 score is demonstrated in Table~\ref{tb:f1_buffy} and ~\ref{tb:f1_dl}. Surprisingly, our approach enjoys a significant improvement on both datasets, mainly because of the relabel strategy and the iterative update role for our model parameter. Note that the result of ``$K$-means + Groundtruth'' provides the initial labels for the clothing attributes. In this way, we examine the effectiveness of our relabel strategy. \begin{figure*}[tbp] \centering \includegraphics[width=\textwidth]{attr.pdf} \caption{ \textbf{Examples grouped on sleeve from Buffy and neckline from DL.} The first row of the top panel (sleeve) shows the sleeveless type, the second is long type, while the first row of the bottom panel (neckline) shows the pointed type, the second is round type. The right two columns are the incorrect results.} \label{fig:sleeve} \end{figure*} \section{Conclusion} \label{sec:conclude} Inspired by the strong correlation between human pose and clothing attributes, we propose a latent clothing attribute approach for HPE, incorporating the clothing attributes into the traditional HPE model as latent variables. Compared with previous work~\cite{shen2014unified}, our formulation is more suitable for practical applications as we do not need to annotate the clothing attributes. We utilize the LSSVM to learn all the parameters by employing a relabel strategy. To start up, we take a simple $K$-Means step to initialize the latent variables and then update the model and the clothing attributes in an alternative manner. Finally, we propose an approximate inference schema to iteratively find an increasingly better solution. The experimental results justify the effectiveness of our relabel strategy and show the state-of-the-art performance for HPE. \begin{figure} \centering \includegraphics[width=\textwidth]{result.pdf} \caption{ \textbf{Visualization of pose results produced by our algorithm on the Buffy and DL datasets.} The top two panels are from Buffy and the others are from DL. 
We use the oriented bounding box to denote the pose estimation. The first panel of each dataset are correct results, while the second panel are incorrect results. The bounding box with red color denote the incorrect estimation.} \label{fig:result} \end{figure} \bibliographystyle{splncs}
2,869,038,156,184
arxiv
\section{Introduction} Vehicular communications are deemed as integral component for the Intelligent Transportation Systems (ITS) that allow automobiles to stay connected with their surroundings as well as remote entities. They aim to provide anytime-anywhere connectivity to enable a wide range of critical and convenient services for vehicles. Indeed, the emergence of sophisticated vehicular communication technologies, \textit{i.e.}, cellular vehicle-to-everything (C-V2X) or Internet of Connected Vehicle (IoCV) \cite{chetlur2019coverage, al2020uav}, are forecast to effectively contribute in paving the way to support a plethora of essential applications including safety and non-safety related such as HD Maps, autonomous driving, 4K Video streaming, virtual and augmented reality, to name a few. Such applications urge for strict requirements such as higher throughput, lower latency, and massive connectivity. However, in the context of vehicular environment, it is strenuous to offer seamless communication experiences with ubiquitous connectivity and to enhance the quality of service. Technically speaking, in many areas where large objects, \textit{i.e.}, high-rise buildings or trucks, appear, it is very probable that wireless links between terrestrial infrastructures and vehicles face frequent disturbances. Hence, the service quality falls below the desirable levels and sometimes for extended periods of time. Moreover, certain regions, where obstacles severely block Line of Sight (LoS), are permanently out of coverage which, here, are dubbed as dark zones. Expanding wireless coverage to unserved areas translates to dramatic raise in costs. Meanwhile, recently, reconfigurable intelligent surfaces (RIS) have been recognized as a key promising technology for achieving cost- and energy-efficient communications via smartly reshaping the wireless propagation environment \cite{wu2019towards}. RIS is composed of a number of passive low-cost elements, each of which has the ability to independently tune the phase-shift of the incident radio waves. By adequately configuring the phase-shifts with the assistance of the RIS controller, the reflected signals can be constructively added \cite{Elhattab2020Reconfigurable}. Thus, the received signal strength can be improved at the point of interest. Consequently, by leveraging RIS, an indirect LoS wireless communication link can be provided for vehicles travelling in a dark zone; \textit{i.e}. a road where large buildings block the LoS of the Road Side Unit (RSU) \cite{di2020hybrid}. It is assumed that those vehicles are requesting service and the RSU is interested in maximizing the quality-of-service (QoS) for the passing by vehicles. To this end, the RSU operator may need to jointly optimize the RSU resource scheduling and RIS element coefficients (passive beamforming) such that the minimum average bit rates of vehicles is maximized. In addition, despite that a few works have addressed RIS phase-shift configuration in vehicular networks, only non-practical phase-shift RIS case is considered where RIS elements can have continuous element tuning. However, due to limited hardware, phase-shift elements of RIS can only have limited number of values \cite{zhang2020reconfigurable, wu2019beamforming}. Leveraging RIS in highly dynamic environments similar to vehicular communications implies a multitude of challenges. 
First, vehicles constantly change their position, hence, the distance between the RIS and vehicles is varying over time and that would highly affect the channel quality between them. Second, the RSU has limited resources in terms of the number of available wireless channels. Thereby, the RSU needs to optimize the radio scheduling while considering the mobility of vehicles which makes the problem more challenging especially when accounting for multi-user scenarios \cite{alwazani2020intelligent}. Third, vehicles move at different and varying speeds, that is, vehicles have various residence times. Considering the same service amounts for all the vehicles passing by the dark zones will deteriorate the performance of low speed vehicles. Subsequently, maximizing the minimum average bit rates provided to navigating vehicles regardless of their sojourn times should be considered. Fourth, the arrival times and speed of upcoming vehicles are not available, in practice, upfront to the RSU operator which makes the problem further more intricate. Finally and most importantly, discrete RIS phase-shift matrix configuration is a well-known problem which is generally hard to be solved especially in a context where the phase-shift matrix of the RIS together with wireless scheduling are jointly optimized. To the best of our knowledge, this work is the first to consider practical/discrete RIS in vehicular networks where the mobility of vehicles together with the environment uncertainties are addressed. To this end, to tackle the aforementioned challenges, an intelligent solution approach is proposed, namely Deep Reinforcement Learning, along with effective optimization technique based on block coordinate descent (BCD). The contributions of this work can be summarized as follows: \begin{itemize} \item A system model is presented that leverages discrete phase-shift RIS technology to extend and enhance RSU communication. Precisely, a RSU provides service for vehicles passing through a blocked zone indirectly by employing a RIS where the mobility of vehicles and future arrivals are considered. \item We investigate the joint vehicle scheduling and passive beamforming in RIS-empowered vehicular communication. This framework is formulated as an optimization problem with the goal of maximizing the minimum achievable bit rate for the vehicles passing through the dark zone. However, the formulated problem ends up to mixed integer non-convex problem, which is known to be difficult to solve. \item In order to tackle this challenge, we decouple the formulated problem into two sub-problems; wireless scheduling sub-problem and phase-shift matrix optimization sub-problem. Then, we resort to solve the first sub-problem via Deep Reinforcement Learning (DRL). To do so, the Markov Decision Process (MDP) is defined to be solved via DRL algorithm. Further, we propose BCD to solve the second sub-problem. We also demonstrate the robustness of our BCD algorithm. And, the computational complexity of the proposed algorithms are analyzed. \item Two case studies are carried out. The first one is to investigate how recent vehicular technologies can enable RIS integration with vehicular communications through obtaining precise vehicle positioning. Also, another study explores the area of RIS placement to optimize the overall network performance. 
\item Several extensive simulation based experiments are conducted using Simulation of Urban MObility (SUMO) to validate the effectiveness of our solution method and to compare with counterpart methods. \end{itemize} The remaining of this paper is organized as follows. Section \ref{sec:related-work} presents the major contributions that have been done in similar contexts. In Section \ref{sec:system-model}, we discusses our system model. Section \ref{sec:mathematical-formulation} formulates the problem mathematically along with the objective function. Next, Section \ref{sec:solution-approach} explains our solution approach in details. In section \ref{section:case-study}, two case studies are discussed regarding RIS placement and vehicle positioning. Then, section \ref{sec:numerical-result} shows our numerical results. Finally, we sum up the paper in Section \ref{sec:conclusion}. \textit{Notations:} Vectors are denoted by bold-face italic letters. $\diag(x)$ denotes a diagonal matrix whose diagonal element is the corresponding element in $x$. $\mathbb{C}^{M \times N}$ denotes a complex matrix of $M \times N$. For any matrix $M$, $M^H$ and $M^T$ denote its conjugate transpose and transpose, respectively. $Pr(A \mid B)$ denotes the probability of event $A$ given event $B$. \section{Related Work} \label{sec:related-work} Lately, high research efforts have been devoted towards investigating the introduction of the RIS to vehicular networks. In \cite{chen2020resource}, the authors studied resource allocation of RIS-aided vehicular communications where they aim to maximize vehicle to base station link quality while guaranteeing vehicle to vehicle communications. The authors of \cite{wang2020outage} provided analysis for outage probability in RIS-enabled vehicular networks. This paper derives an expression of outage probability showing that RIS can reduce the outage probability for vehicles in its vicinity. The analysis also proves that higher density roads increase outage probability since passing vehicles can block the communication links. In \cite{dampahalage2020intelligent}, the authors proposed RIS-aided vehicular networks while considering two scenarios to estimate the channels. The first one is by assuming fixed channel estimation within a coherence time. While the second one neglects the small scale fading based on the fact that vehicular positions can be realised in advance. \cite{you2020channel} considered constraint discrete phase-shift RIS with two challenges; channel estimation and passive beamforming. Another body of works on RIS deals with practical considerations of discrete RIS elements. In \cite{wu2019beamforming}, the authors introduced a finite number of phase-shift elements of RIS where the power is minimized while maintaining certain signal-to-interference-plus-noise ratio threshold. \cite{xu2020reconfigurable} proved how discrete phase-shift RIS is able to achieve high performance with minimum required number of phase quantization levels. This work shows that 3 levels are enough to attain the full diversity order. In addition, the authors of \cite{di2020practical} also worked on practical RIS where multiple users are served in parallel. The objective of this work is to maximize the sum rate where the continuous digital beamforming and discrete RIS beamforming are done. \cite{tang2020mimo} proposed RIS to assist multiple-input multiple-output (MIMO) systems with 2-bit phase-shift elements. 
In \cite{yuan2020intelligent}, the authors proposed utilizing RIS in cognitive radio systems yielding improved spectral efficiency and energy efficiency. \cite{yan2020passive} maximized the achievable sum rate of multi-users while the RIS sends information via controlling the reflecting modulation. \cite{hu2020location} proposed a new location-based RIS where users' locations are not perfectly known. Hence, the angle between the users and RIS are estimated to configure the beams of the RIS and trasmitter. In addition, some other works also leverage RIS for security purposes, for example \cite{ai2020secure} suggested that RIS can help in alleviating security breaches related to eavesdropping. In \cite{mensi2020physical}, the authors studied the security issues related to eavesdropping attacks under different circumstances including active and passive relays (RIS). As opposed to the previous papers, this work accounts for vehicles mobility where vehicles constantly change their position with time. Additionally, as time progresses, new vehicles arrive to the concerned area while others depart. This process of birth-and-death vehicles brings many uncertainties to the context which are hard to cope with. Thus, we aim to find a solution approach that can handle the dynamic nature of this context besides anticipating the upcoming arrivals and other hidden information about the environment. Accounting for these two objectives will help the RSU-RIS to better decide when and how to serve vehicles during their residence time. Moreover, unlike many existing works in the literature, we propose to use practical/discrete RIS. \section{System Model} \label{sec:system-model} We consider a particular road segment with no direct connectivity via a RSU as depicted in Fig. \ref{fig:system_model}. The line of sight (LoS) is assumed to be blocked by an obstacle, \textit{i.e.,} a high building \cite{long2020reflections}. We also consider a predefined time horizon of length $N$ which encompasses several smaller time slots, $[0,1,.., n,...,N]$. Meanwhile, we assume a flow of vehicles indexed by $v$ is navigating and requesting communication services from the RSU located at $(x_R, y_R, z_R)$ where $x_R, y_r$ are the Cartesian coordinates and $z_R$ is height of the infrastructure. The vehicles are moving at different and varying speeds, therefore, at each time slot $n \in N$, vehicle $v$ location is denoted by $(x_v^n, y_v^n, z_v)$. In order to provide uninterrupted service, the network operator leverages an RIS equipped with $M$ elements, which is situated on a building and possesses a strong LoS with both the moving vehicles passing by the dark zone and the RSU. Here, we denote the RIS location by $(x_I, y_I, z_I)$. The RSU operator aims to satisfy the vehicles by providing favourable quality of service. \begin{figure} \centerline{\includegraphics[scale=0.7]{system_model.png}} \caption{System Model} \label{fig:system_model} \end{figure} The RSU is assumed to have a number of channels $C$ to be scheduled for the vehicles \cite{wang2011ieee}\footnote{For simplicity, we assume each vehicle can only be served via one channel.}. In case when there are several vehicles present, the RSU has to determine how to schedule its resources and tune the RIS elements. Further, due to the mobility nature of the vehicular environment the distances between the RIS and vehicles change as time progresses. Meaning, the network operator has to take into consideration that link quality degrades as vehicles moving far from the RIS. 
Unlike prior work which deals with continuous phase-shift RIS, we assume a realistic scenario of a RIS where phase-shift coefficients are discrete. This scenario is more practical since it accounts for the real-world hardware limitations. However, discrete RIS is more challenging due to the additional constraint of discrete phase-shift. Moreover, the RIS consists of $M$ elements, $[1,...,m,...,M]$, each of which is controlled via $b$ bits. Hence, each one can be tuned to one of $2^b$ different angles. \subsection{Communication Model} In the proposed model, we consider a uniform linear array (ULA) RIS \cite{long2020reflections}. In addition, similar to the RSU, the RIS is assumed to have a certain height, $z_I$. The communication links between RSU and RIS and that between RIS and vehicle $v$ are assumed to have a dominant line-of-sight (LoS). Thus, these communication links experience small-scale fading which are modeled as Rician fading with pure LoS components \cite{abdullah2020hybrid, samir2020optimizing}. Consequently, the channel gain between the RSU and RIS, $\boldsymbol{h}_{I,R} \in \mathbb{C}^{M \times 1}$, can be formulated as follows. \begin{equation} \boldsymbol{h}_{I,R} = \underbrace{\sqrt{\rho (d_{I, R})^{-\alpha}}}_{\mathrm{path~loss}} \underbrace{\sqrt{\dfrac{K}{1+K}} \boldsymbol{\bar{h}}_{I, R}^\mathrm{LoS}}_{\mathrm{Rician~fading}}, \end{equation} where $\rho$ is the average path loss power gain at reference distance $d_0=1$m. Also, $K$ is the Rician factor and $\boldsymbol{\bar{h}}_{I, R}^\mathrm{LoS}$ is the deterministic LoS component which can be defined as follows \begin{equation} \small \label{eq:channel-gain-I-R} \boldsymbol{\bar{h}}_{I, R}^{\mathrm{LoS}} = \underbrace{\Bigg[1, e^{-j\frac{2\pi}{\lambda} d \phi_{I, R}},..., e^{-j\frac{2\pi}{\lambda} (M-1) d \phi_{I, R}}\Bigg]^{T}}_{\mathrm{array~response}}, \forall n \in N, \end{equation} where $d_{I, R}$ is the Euclidean distance between the RIS and RSU and can be computed from $\sqrt{(x_I - x_R)^2 + (y_I - y_R)^2 + (z_I - z_R)^2}$. $\phi_{I, R}=\dfrac{x_I - x_R}{d_{I, R}}$ is cosine of the angle of arrival of signal from RSU to RIS. $d$ is the separation between RIS elements and $\lambda$ is the carrier wavelength. Similarly, we can compute the channel gain between the RIS and vehicles which is denoted by $\boldsymbol{h}_{I,v}^n \in \mathbb{C}^{M \times 1}$ as in Eq (\ref{eq:channel-gain-I-v}). \begin{equation} \label{eq:channel-gain-I-v} \boldsymbol{h}_{I, v}^n = \underbrace{\sqrt{\rho (d_{I, v}^n)^{-\alpha}}}_{\mathrm{path~loss}} \underbrace{\sqrt{\dfrac{K}{1+K}} \bar{h}_{I, v}^{n~\mathrm{LoS}}}_{\mathrm{Rician~fading}}, \forall v, n \in N, \end{equation} \begin{equation} \small \begin{split} \boldsymbol{\bar{h}}_{I, v}^{n~\mathrm{LoS}} = \underbrace{\Bigg[1, e^{-j\frac{2\pi}{\lambda} d \phi_{I, v}^n},..., e^{-j\frac{2\pi}{\lambda} (M-1) d \phi_{I, v}^n}\Bigg]^{T}}_{\mathrm{array~response}}, \\ \forall v, n \in N, \end{split} \end{equation} where $d_{I, v}^n$ is the euclidean distance between the RIS and vehicle $v$ at time slot $n$ and $\phi_{I, v}^n=\dfrac{x_I - x_v^n}{d_{I, v}^n}$. Finally, we assume the channel is completely blocked between the RSU and vehicles in that zone similar to \cite{long2020reflections}\footnote{In this work, we assume that the channel gain between RIS and vehicles is fixed within one time slot.}. 
Denote the phase-shift matrix of the RIS in the $n$th time slot as $\boldsymbol{\theta}^n = \diag\{e^{j\theta_1^n},..., e^{j\theta_M^n}\}$, where $\theta_m^n$ is the phase-shift of the $m$th reflecting element $m = 1, 2, · · · , M$. Due to the hardware limitations, the phase-shift can only be selected from a finite set of discrete values. Specifically, the set of discrete values for each reflecting RIS element can be given as $\theta_m^n \in \Omega = \{0, \frac{2\pi}{Q}, \dots, \frac{2\pi(Q - 1)}{Q}\}$, where $Q = 2^b$ and $b$ is the number of bits that control the number of available phase-shifts for the RIS elements. Hence, the signal to noise ratio (SNR) is: \begin{equation} \label{eq:snr} \lambda^n_v = \dfrac{P\abs{\boldsymbol{h}_{I, R}^H \boldsymbol{\theta}^n \boldsymbol{h}_{I, v}^n}^2}{\sigma^2}, \forall v, n \in N, \end{equation} where $P$ is the transmission power of the RSU and $\sigma^2$ is the thermal noise power. Then, we can compute $\bold{h}_{I, R}^H \boldsymbol{\theta}^n \boldsymbol{h}_{I, v}^n$ based on Eq (\ref{eq:channel-gain-I-R}) and Eq (\ref{eq:channel-gain-I-v}). \begin{equation} \label{eq:irs-channel-gain-expression} \begin{split} \boldsymbol{h}_{I, R}^H \boldsymbol{\theta}^n \bold{h}_{I, v}^n = \dfrac{\rho \dfrac{K}{K+1}}{\sqrt{(d_{I,i}^n)^{\alpha}} \sqrt{(d_{I,R})^{\alpha}}} \times ~~~~~~~~~~~~~~~~~~~~\\ \sum_{m=1}^M e^{j(\theta_m^n + \frac{2\pi}{\lambda} (m-1) d \phi^n_{I,v} - \frac{2\pi}{\lambda} (m-1) d \phi_{I,R})}, \forall v, n \in N. \end{split} \end{equation} Now, instantaneous bit rate given to each vehicle is calculated as. \begin{equation} l_v^n = j_v^n \log_2(1 + \lambda^n_v), \forall v, n, \end{equation} where $j_v^n \in [0,1]$ is a decision variable to schedule the resources of RSU to vehicle $v$ at time slot $n$. Hence, $j_v^n=1$ means vehicle $v$ is served at time slot $n$ and 0 otherwise. Now, the average bit rate each vehicle receives throughout its sojourn time can be computed by the following. \begin{equation} z_v = \dfrac{1}{H_v}\sum_{n=1}^N l_v^n, \forall v, \end{equation} where $H_v$ is the residence time of vehicle $v$ in the dark zone. Next, we formally define our problem as: \textbf{Definition 1} \textit{Assume a flow of vehicles travelling through a dark zone. The vehicles are demanding connection to a remote RSU. Meanwhile, a RIS is deployed at specific point, i.e., on a building, where it possesses a strong LoS with the RSU and the dark zone. The RIS has a certain number of elements where the operator can tune their coefficients to provide service for vehicles in order to enhance channel gains and improve bit rates. During a certain time horizon (encompassing multiple time slots), what is the best RSU wireless scheduling and phase-shift configuration for the RIS elements such that the minimum average bit rate provided to vehicles is maximized.} \section{Mathematical Formulation} \label{sec:mathematical-formulation} In this section we formulate the problem of RSU wireless scheduling and RIS element tuning mathematically. Let $A_v$ and $D_v$ denote the arrival time and departure time of vehicle $v$, respectively. The notations used in this corresponding are listed in Table \ref{table:mathematical_notations}. 
\label{mathmatical_formulation} \begin{table}[t] \caption{Mathematical notation} \begin{center} \begin{tabular}{|c|p{6cm}|} \hline \rowcolor{lightgray} \multicolumn{2}{|c|}{Parameters} \\ \hline $x_I, y_I, z_I$& RIS location \\ \hline $x_R, y_R, z_R$& RSU location \\ \hline $x_v^n, y_v^n, z_v^n$& Vehicle $v$ location at time slot $n$ \\ \hline $N$& Time horizon consists of smaller time slots. \\ \hline $V^n$& Set of available vehicles during time slot $n$ \\ \hline $C$& Number of RSU channels \\ \hline $H_v$& Vehicle $v$ residence time \\ \hline $\phi_{I,R}$& Angle of arrival at RIS from RSU \\ \hline $\phi_{I,v}^n$& Angle of arrival between the RIS and vehicle $v$ at time slot $n$ \\ \hline $\alpha$& Path loss exponent \\ \hline $P$& Transmission power of the RSU \\ \hline $\rho$ & Median of the mean path gain at reference distance = 1m \\ \hline $\sigma$ & Thermal noise power \\ \hline $b$ & Number of control bits for the RIS elements\\ \hline $Q$ & Number of RIS phase-shift patterns \\ \hline \rowcolor{lightgray} \multicolumn{2}{|c|}{Variables} \\ \hline $j_v^n$& 1: if vehicle $v$ is scheduled for service by the RSU at $n$ and 0 otherwise \\ \hline $\theta^n_m$& RIS element $m$ phase-shift angle at time slot $n$\\ \hline \end{tabular} \label{table:mathematical_notations} \end{center} \end{table} The optimization problem alongside the objective function can be mathematically written as follows. For the sake of clarity, let $\boldsymbol{\Theta} = \{\boldsymbol{\theta}^1, \boldsymbol{\theta}^2, \dots, \boldsymbol{\theta}^N\}$ and $J=\{j_v^n, \forall v, n \in N\}$. \begin{maxi!}|s|[2] {\boldsymbol{\Theta}, J} {\{\min z_v\} \label{eq:objective}} {\label{eq:Example1}} {} \addConstraint{}{\sum_{v=1}^{V^n} j_v^n \leq C, \forall n \in N, \label{eq:con1}} \addConstraint{}{j_v^n \in [0,1], \forall n \in N, v, \label{eq:one-vehicle}} \addConstraint{}{j_v^n \leq \max(n - A_v,0), \forall n \in N, v, \label{eq:con2}} \addConstraint{}{j_v^n \leq \max(D_v - n,0), \forall n \in N, v, \label{eq:con3}} \addConstraint{}{\theta_m^n \in \Omega, \forall n \in N, m \in M. \label{eq:con4}} \end{maxi!} Here, the objective function, Eq (\ref{eq:objective}), is max-min which translates to maximizing the minimum average bit rate. Constraint (\ref{eq:con1}) ensures that the number of channels scheduled to vehicles is no more than that available at the RSU. Constraint (\ref{eq:one-vehicle}) allows vehicles to be served via one channel only. Constraints (\ref{eq:con2}) and (\ref{eq:con3}) make sure that vehicle can only be served via RIS while it is within the area of the dark zone. Finally, constraint (\ref{eq:con4}) restrains the number of phase-shift values. Now, the problem is non-convex due to the discrete RIS element phase-shift optimization. Also, the phase-shift matrix is hard to be solved. For instance, if the phase-shift is tuned to optimally serve the first vehicle, the other ones might receive less quality and vise versa. Furthermore, in this problem, it is hard to eliminate the coupling relationship between phase-shift configuration and wireless scheduling. In addition, the information of vehicles such as their arrival, speed, and departure, are unknown in advance. Due to the dynamic nature of the environment, it is impractical to assume such information is given. Hence, a effective solution mechanism has not only to deal with the difficulties of such problem, but it has also to predict for the hidden parameters. 
In order to address the above challenges, we resort to Deep Reinforcement Learning (DRL) with multi-binary action space to find a policy that maximizes the minimum average bit rate for vehicles. However, if DRL is used to solve for the two decisions of resource scheduling and phase-shift matrix, the action space will be equal to all the possible combinations of wireless scheduling and phase-shift patterns for $M$ elements which is unbearably large. Such massive action space would increase the DRL agent difficulty to learn. Similar to \cite{lee2020deep, zhang2020distributional}, a more practical solution approach can be realised by delegating one decision to an optimization technique while dedicating the second one to machine learning based approach. In particular, the DRL agent first determines which vehicles are going to be served at time slot $n$. While, BCD algorithm is invoked to configure the phase-shift matrix such that the service offered to the scheduled vehicles is optimized. Next, the solution approach, in details, will be discussed. \section{Solution Approach} \label{sec:solution-approach} The solution approach for joint resource scheduling and passive beamforming is presented in this section. First, we decompose the aforementioned problem into two sub-problems, the first sub-problem is due to the resource scheduling and second one corresponds to the phase-shift matrix of the RIS. The information and mobility of the upcoming vehicles are unknown in advance. That is, solving the first sub-problem is quite challenging. Hence, we resort to DRL to observe the environment and tackle multi-user RSU scheduling. Next, the RIS elements are tuned based on Block Coordinate Descent (BCD) \cite{bai2020latency, he2020reconfigurable, abeywickrama2020intelligent}. The details of our solution methodology are laid out in the next sections. \subsection{DRL for Wireless Scheduling} For the DRL, we need to define a Markov Decision Process tuple $\langle\boldsymbol{S}, \boldsymbol{A}, \gamma , R, G\rangle$ that represents the environment. First, let us define four new notations; $f^n, \forall n \in N$ which denotes the current minimum average bit rate until time slot $n$, $k_v^n, \forall n \in N, v$ denotes the speed of vehicle $v$ at time slot $n$, $z_v^n \forall n \in N, v$ is the current average bit rate of vehicle $v$ until time slot $n$, $\eta$ is the largest number of vehicles existing simultaneously, and $U$ is the number of possible actions. Now, the MDP is defined as follows: \begin{itemize} \item $\boldsymbol{S}$ is the state space where its size is large as it contains unbounded parameters of real numbers. The system state $s^n \in \boldsymbol{S}$ is a vector that indicates current minimum service provided up to $n-1$, the speeds of the existing vehicles ($k_v^n, \forall v$) in that time slots, their cumulative average bit rates up to $n$ ($z_v^n, \forall v$), and their locations ($x_v^n, \forall v$). The state $s$ can be expressed as: \begin{equation} \{f^n, \underbrace{k_1^n, z_1^n, x_1^n}_{v=1}, \underbrace{k_2^n, z_2^n, x_2^n}_{v=2}, ... , \underbrace{k_{\eta}^n, z_{\eta}^n, x_{\eta}^n}_{v=\eta}\}, \forall n. \end{equation} \item $\boldsymbol{A}$ is the action space where the action taken for each time slot $n$ is $a^n \in \boldsymbol{A}$. $a^n$ is a binary vector of size $\eta$. Also, the sum of vector $a^n$ should be equal to $C$ (to enforce constraint \eqref{eq:con1}). For example, if $a^n[0]=1$, then the first existing vehicle is being served at time slot $n$ and so forth. 
The number of actions can be computed by $U = \eta!/(C! (\eta-C)!)$. The possible combinations of action vectors is similar to the example below. % \begin{equation} \underbrace{\{\underbrace{0}_1,0,1,...,\underbrace{1}_{\eta}\}}_{1},\underbrace{\{0,1,1,...,1\}}_{2},...,\underbrace{\{1,1,0,...,0\}}_{U} \end{equation} \item $R$ is the discounted cumulative reward produced after executing every step $n \in N$ and it is defined as follows: % \begin{equation} R = \sum_{n=1}^{N} \gamma^{n-1} r^n, \label{eq:cumulative_reward} \end{equation} Here, the step reward $r^n$ is computed as follows. During the beginning of the operational phase, $r^n=0$ until the first vehicle departs. For the first vehicle, the step reward is equal to its average bit rate. Henceforth, whenever any vehicle leaves the dark zone, the step reward is given as a penalty if and only if that vehicle has received less average bit rate than the other vehicles which left previously. It is worth noting that since the agent seeks to maximize the minimum average bit rate, it does not count reward if a vehicle received higher bit rate than others. \item $G$ denotes state transition probabilities which is the probability of being in state $s'$ after applying action $a^n$ in state $s^n$. The probability of transition from one state to another depends only on the current state as in Eq (\ref{eq:mdp_proof}). \begin{align} \small \begin{split}\label{eq:mdp_proof} Pr(s^{n+1} = s' \mid s^n, a^n) {}& = Pr(f^{n+1} \mid f^{n}, a^n)\\ & \times Pr(k_v^{n+1} \mid k_v^n) \\ & \times Pr(x_v^{n+1} \mid x_v^n, k_v^n) \\ & \times Pr(z_v^{n+1} \mid z_v^{n}, a^n). \\ \end{split} \end{align} That is, the probability of having minimum average bit rate of $f^{n+1}$ depends on the current value $f^n$ and the action taken at that time slot $a^n$. The probability of vehicle having certain speed, $k_v^{n+1}$, depends on its current speed. Moreover, the probability of vehicle $v$ being in next location, $x_v^{n+1}$ depends on its current location ($x_v^n$) and its speed ($k_v^n$). Finally, the probability of vehicle having certain average bit rate ($z_v^{n+1}$) depends only on the current one ($z_v^{n}$) and the action taken at that time slot. \end{itemize} \textbf{Remark} \textit{selecting an action is a non-trivial task for the problem explained above. Actually, since our objective is to maximize the minimum average bit rate for all vehicles, it is not easy to decide which vehicles to serve at each time slot. For instance, a vehicle has just entered may have plenty of time to be served later while a vehicle near the end of the road segment may have no much time to receive service. In contrary, a vehicle located at the end is way far than those vehicles near the RIS. Hence, the latter can receive much higher bit rate if selected to be served. Moreover, if the RSU postpones the service for one vehicle, other vehicles may arrive, therefore, that vehicle will have less chances to be served later. In addition, in our work, we consider multi-user communication where more than one vehicle might be scheduled by the RSU simultaneously which makes the action space more complicated. Hence, the agent needs to interact with the environment and try different actions and scheduling policies in order to figure out which one attains the best cumulative reward.} For DRL, we exploit PPO to develop our agent as laid out in Algorithm \ref{algorithm:ppo_algorithm}. First, the agent initializes random sampling policy and value function for the neural networks as in lines 3 and 4. 
Then, in each epoch, the agent observes the environment which consists of the set of vehicles and their information, minimum average bit rate achieved up to $n$. Then at each time slot $n$, the agent selects an action which is a binary vector that determine which set of vehicles will be served via the RSU. Based on that action, the BCD algorithm is then invoked to configure the phase-shift matrix in order to maximize the channel gain. Eventually, the time step reward is worked out which has three cases. First, if no vehicle has departed yet, $r^n=0$. Second, if the very first vehicle departs, the reward is set to its average bit rate. Third, the consecutive vehicles leaving the area will be accounted as a penalty if and only if their average bit rate is less than $f^n$ when they have departed. After gathering the set of samples and computing the rewards, the agent works out the advantage function (line 19), $\hat{A}$, which is defined as the resultant of subtracting the expected value function from the actual reward. $\hat{A}$ is the estimated advantage function or relative value of the selected action. It helps the system to understand how good it is preforming based on its normal estimate function value \cite{bohn2019deep}. Our agent is based on proximal policy optimization which usually is implemented in Actor-Critic framework, where more objective functions are added to the surrogate objective. Based on \cite{shokry2020leveraging}, the complexity of connected network with $P$ layers is $O(\sum_{p=0}^P n_p n_{p-1})$ where $n_p$ denotes the total number of neurons in layer $p$. \begin{center} \begin{algorithm}[t] \small \caption{Proposed DRL for Scheduling} \label{algorithm:ppo_algorithm} \begin{algorithmic}[1] \State \textbf{Inputs:} $N$, $v$, Learning Rate, $\gamma$, $\epsilon$. \State \textbf{Outputs:} RSU resource scheduling and $\boldsymbol{\theta}^n$. \State Initial policy $\pi$ with random parameter $\theta$ and threshold $\epsilon$ \State Initial value function $V$ with random parameters $\phi$ \For{each episode $k \in \{0, 1, 2,...\}$} \For{$n:\{0, 1, 2,..., N\}$} \State Observe state $f^n, k_v^n, z_v^n, x_v^n, \forall v \in V^n$. \State Select action $a^n$ from $\pi_{\theta_{old}}$ \State Assign channel to vehicle $v$ if it is scheduled to be served. \State Configure RIS phase-shift matrix using Algorithm \ref{algortihm:BCD}. \If{Vehicle $v$ is the first one to leave} \State Set $r^n = z_v$ \ElsIf{Vehicle $v$ departed and $z_v < f^n$} \State Set $r^n = f^n - z_v$ \Else \State $r^n = 0$ \EndIf \EndFor \State Compute advantage estimate $\hat{A}$ for all epochs. \State Optimize surrogate loss function using Adam optimizer. \State Update policy $\pi_{\theta_{old}} \gets \pi_{\theta}$. \EndFor \end{algorithmic} \end{algorithm} \end{center} \subsection{BCD for RIS Phase-Shift Coefficients} Block coordinate descent (BCD) has been proposed in the literature to solve for RIS phase-shift matrix \cite{abeywickrama2020intelligent, he2020reconfigurable}. In this correspondence, we aim to leverage BCD to maximize the sum of immediate sum of bit rates of all vehicles selected to be served at time slot $n$. \begin{equation} \label{eq:max-sum-bcd} \sum_{v=1}^{V^n} j_v^n l_v^n, \forall n. \end{equation} To do so, Algorithm \ref{algortihm:BCD} receives the action selected by DRL in Algorithm \ref{algorithm:ppo_algorithm}, $J^n$. Once the decision is taken by the agent, the BCD is then called to optimize the phase-shift matrix in iterative way. 
In each iteration, a sequence of block optimization procedures are performed. In each one, all elements are fixed while one is optimized by checking all its possible values, $2^b$. The one that maximizes the objective will be selected. After that, the next element will be selected to optimize and so forth. This operation is iterated until Eq \eqref{eq:max-sum-bcd} has converged. In practice, Algorithm \ref{algorithm:ppo_algorithm} needs one iteration to surpass 95\% threshold of its maximum performance. Hence, this algorithm is pretty robust in dealing with the phase-shift coefficients. \begin{center} \begin{algorithm}[h] \small \caption{BCD to Tune the RIS Phase-Shift Matrix} \label{algortihm:BCD} \begin{algorithmic}[1] \State \textbf{Inputs:} $J^n$. \State \textbf{Outputs:} $\boldsymbol{\theta^n}$ \While{Eq (\ref{eq:max-sum-bcd}) not converged} \For{$m=1,...,M$} \State Fix $m', \forall m' \neq m, m' \in M$ \State Set $\theta_m^n = \underset{\Omega}{\argmax}~ $ Eq(\ref{eq:max-sum-bcd}) \EndFor \State Obtain Eq (\ref{eq:max-sum-bcd}) \EndWhile \end{algorithmic} \end{algorithm} \end{center} Concerning the complexity of Algorithm \ref{algortihm:BCD}, it is $O(I M 2^b)$ where $I$ stands for the number of iterations until Eq \eqref{eq:max-sum-bcd} converges. In details, there are three loops in this Algorithm; first is the number of iterations, second is the number of RIS elements, and third is the number of angles available to control each element. An experiment is conducted to study the BCD performance and the results are shown in Fig. \ref{fig:bcd-iterations}. In this experiment, we vary the number of RIS elements from 25 to 100 elements. Moreover, we try different number of users ($C$) and control bits ($b$). Based on the outcomes, we can approximate the complexity of Algorithm \ref{algortihm:BCD} to $O(M 2^b)$. The complexity can further be approximated to $O(M2^3) = O(8M)$ based on the fact that a $b$ of 3 is enough \cite{zhang2020reconfigurable}\footnote{Note that, based on our experiments, we found that $b = 2$ is enough to achieve high performance in our context as demonstrated in Section \ref{sec:numerical-result}.}. \begin{figure} \centerline{\includegraphics[scale=0.25]{bcd_iterations.png}} \caption{BCD convergence over iterations with different RIS elements ($M$), users ($C$), and quantization levels ($b$).} \label{fig:bcd-iterations} \end{figure} \section{Case Study: RIS Placement and Vehicle Positioning} \label{section:case-study} In the literature, RIS is often proposed to tune static wireless environments such as user equipment or IoT devices. However, implementing RIS in dynamic medium is far more challenging owing to the highly sensitivity of RIS phase-shift alignment. Therefore, we carry out two studies to address the practical RIS placement and the impacts of vehicle positioning accuracy in the context of RIS-assist vehicular communications. \subsection{RIS Placement} In this section, we discuss the issue of placing RIS at different places. We statistically study how placing the RIS at different locations can actually improve or worsen the overall performance. Then, we will see what is the optimal location to situate the RIS. We start off by a hypothesis stating that the optimal RIS placement is the closest one to the RSU. This hypothesis is based on initial observations that indicate the shorter the distance between the RIS and RSU, the best channel gain can be achieved. 
In order to prove our claim, we derive it mathematically and then back it up with simulation experiments. \textbf{Theorem 1} Given $x_I < x_I^{'}, x_R < x_I, x_R < x_v, \forall v$, the inequality $z_v(x_I) > z_v(x_I^{'})$ holds for all $x_I < x_I^{'}$. \textit{Proof:} Here, for simplicity, we take an RIS and place it at different points to serve a single vehicle, as shown in Fig. \ref{fig:irs-placement-illustration}. Hence, the RIS elements will always be tuned to maximize the channel gain for that vehicle. Fortunately, for a single user the optimal phase shifts can be obtained in closed form \cite{li2020reconfigurable}: \begin{equation} \theta_m^n =\frac{2\pi}{\lambda} (m-1) d \phi^n_{I,v} + \frac{2\pi}{\lambda} (m-1) d \phi_{I,R}, \forall m \in M, n. \end{equation} With the RIS phase shifts cancelling out those of the RIS--RSU and RIS--vehicle links, Eq (\ref{eq:irs-channel-gain-expression}) can be rewritten as: \begin{equation} \label{eq:irs-placement-derived-eq-2} \boldsymbol{h}_{I, R}^H \boldsymbol{\theta}^n \boldsymbol{h}_{I, v}^n = \dfrac{\rho \dfrac{K}{1+K} M}{\sqrt{(d_{I,v}^n)^{\alpha}} \sqrt{(d_{I,R})^{\alpha}}}. \end{equation} Since $\rho$, $M$, and $\alpha$ are constant, the only factors that remain variable are $d_{I,v}^n$ and $d_{I,R}$, which denote the RIS--vehicle and RIS--RSU distances. We can also notice that Eq (\ref{eq:irs-placement-derived-eq-2}) is a decreasing function of these distances. Now, we need to prove that $z_v(x_I) > z_v(x_I^{'}), \forall x_I < x_I^{'}$. To do so, let us assume the vehicle has $H_v = N$ (a single vehicle on the road). Then $z_v(x_I) > z_v(x_I^{'})$ holds whenever the sum of bit rates received throughout $H_v$ is larger, i.e.: \begin{equation} \small \label{eq:proof-1} \begin{split} l_v^1(x_I) + l_v^2(x_I)+ ... + l_v^N(x_I) > l_v^1(x_I^{'}) + \\ l_v^2(x_I^{'}) + ... + l_v^N(x_I^{'}), \forall x_I < x_I^{'}. \end{split} \end{equation} Next, for clarity, let $Y = \dfrac{P \rho^2 \bigg(\dfrac{K}{1+K}\bigg)^2 M^2}{\sigma^2}$, which is constant. Hence, Eq \eqref{eq:proof-1} can be rewritten as: \begin{equation} \label{eq:simplification-1} \small \begin{split} \log_2(1 + \dfrac{Y}{(d_{I,v}^1)^{\alpha} (d_{I,R})^{\alpha}}) + \log_2(1 + \dfrac{Y}{(d_{I,v}^2)^{\alpha} (d_{I,R})^{\alpha}}) + ... \\ + \log_2(1 + \dfrac{Y}{(d_{I,v}^N)^{\alpha} (d_{I,R})^{\alpha}}) > \log_2(1 + \dfrac{Y}{(d_{I,v}^{'1})^{\alpha} (d^{'}_{I,R})^{\alpha}}) + \\ \log_2(1 + \dfrac{Y}{(d_{I,v}^{'2})^{\alpha} (d^{'}_{I,R})^{\alpha}}) + ... + \log_2(1 + \dfrac{Y}{(d_{I,v}^{'N})^{\alpha} (d^{'}_{I,R})^{\alpha}}) \\ , \forall x_I < x_I^{'}. \end{split} \end{equation} \begin{figure}[t] \centerline{\includegraphics[scale=0.25]{irs-placement-illustration.png}} \caption{RIS placement with the corresponding channel gain (shown in the background) in 1D space.} \label{fig:irs-placement-illustration} \end{figure} Now, only the distances affect the sum of bit rates over $N$. To simplify the expressions, let us further assume a one-dimensional environment. Hence, $d_{I,R} = \abs{x_I - x_R}$ and $d_{I,v}^n=\abs{x_v^n - x_I}$. We can also assume $\alpha = 1$. Therefore, since absolute values are multiplicative: \begin{equation} \label{eq:simplification-2} \begin{split} d_{I,R} \times d_{I,v}^n = \abs{x_I x_v^n - x_I^2 - x_R x_v^n + x_R x_I}. \end{split} \end{equation} As illustrated in Fig.~\ref{fig:irs-placement-proof}(a), Eq (\ref{eq:simplification-2}) attains its lowest value when $x_I = x_R$ or $x_I = x_v^n$.
The first condition can always be satisfied, during the entire time horizon ($\forall n \in N$) and for all vehicles, as long as $x_I$ and $x_R$ are fixed. However, since the vehicles are driving, the second condition, $x_I = x_v^n$, only holds for one time slot $n$ and for a specific vehicle. Therefore, with $H_v > 1$, we can confidently say: \begin{equation} \small \begin{split} \sum_{n=1}^{N} \log_2(1 + \dfrac{Y}{(d_{I,v}^n)^{\alpha} (d_{I,R})^{\alpha}}) > \sum_{n=1}^{N} \log_2(1 + \dfrac{Y}{(d_{I,v}^{'n})^{\alpha} (d^{'}_{I,R})^{\alpha}}), \\ \forall x_I < x_I^{'}. \end{split} \end{equation} \begin{figure} \centering \begin{tabular}{@{}c@{}} \includegraphics[scale=0.26]{irs-placement-proof.png} \\[\abovecaptionskip] \small (a) Eq (\ref{eq:simplification-1}) curve with varying $x_I$ values. \end{tabular} \vspace{\floatsep} \begin{tabular}{@{}c@{}} \includegraphics[scale=0.25]{irs-placement-results.png} \\[\abovecaptionskip] \small (b) Multi-user scenario with varying $x_I$ values. \end{tabular} \caption{Simulation results for Eq (\ref{eq:simplification-1}) and RIS placement.} \label{fig:irs-placement-proof} \end{figure} Consequently, the left side is greater than the right side, which completes the proof. In Theorem 1, we have shown that, for a single vehicle, the ideal place for the RIS is as close as possible to the RSU. However, in practice, there exist several constraints that force the RIS to be distant from the RSU. For example, the LoS has to be clear between the RIS and the RSU and between the RIS and the vehicles it serves. Otherwise, the wireless links would be highly disturbed and the RIS would lose its functionality. In line with our proof and discussion above, we carry out three experiments to examine the achievable minimum average bit rate in a multi-user scenario. We set $x_R=0$ and $x_I \in \{10, 20, 30, 40, 50\}$. Vehicles are generated by SUMO. The outcomes are displayed in Fig. \ref{fig:irs-placement-proof} (b). One can observe that with the RIS closer to the RSU, the performance is much higher. However, the performance degrades dramatically as the RIS moves away from the RSU. \subsection{Vehicle Positioning Precision} One of the challenging issues in vehicular communications is vehicle positioning in such a highly dynamic environment. Hence, in the context of RIS-assisted vehicular communication, inaccurate vehicle positioning might lead to severe consequences that negatively impact the channel gain. Thus, we attempt to understand whether it is possible to leverage the emerging technologies related to vehicle positioning, such as 3D-LIDAR (Light Detection and Ranging), the Global Positioning System, etc., for accurately estimating real vehicle positions to enable RIS-aided vehicular communications. First, based on the channel gain expression laid out in Eq \eqref{eq:irs-channel-gain-expression}, the impact of inaccurate vehicle positioning on the RIS-based system performance is examined. Let $\Delta$ be the error in vehicle positioning; then the estimated position is $x_v^n \pm \Delta$. It is worth noting that, in this context, a small error ($\Delta$) in vehicle positioning will not have a significant impact on the cascaded channel (RSU-RIS-vehicle) path loss. Thus, we only study the phase-shift angle deviation (the difference between the accurate and estimated angles of arrival). Next, we highlight the components in Eq \eqref{eq:irs-channel-gain-expression} that are affected by inaccurate vehicle positioning.
The next expression combines the RIS element phase shifts with the two angles of arrival, $\phi_{I,R}$ and $\phi_{I,v}^n$, at the real position. \begin{equation} \label{eq:vehicle_positioning_1} \small \begin{split} \theta_m^n + \frac{2\pi}{\lambda} (m-1) d \dfrac{x_I - x_v^n}{\sqrt{(x_I - x_v^n)^2 + Y}} - \frac{2\pi}{\lambda} (m-1) d \phi_{I,R}, \\ \forall m \in M, n \in N. \end{split} \end{equation} Next, we formulate the same expression at the estimated position. \begin{equation} \label{eq:vehicle_positioning_2} \small \begin{split} \theta_m^n + \frac{2\pi}{\lambda} (m-1) d \dfrac{x_I - x_v^n \pm \Delta}{\sqrt{(x_I - x_v^n \pm \Delta)^2 + Y}} - \\ \frac{2\pi}{\lambda} (m-1) d \phi_{I,R}, \forall m \in M, n \in N. \end{split} \end{equation} Subtracting the estimated expression from the real one (Eq \eqref{eq:vehicle_positioning_1} - Eq \eqref{eq:vehicle_positioning_2}), we end up with the following. \begin{equation} \label{eq:vehicle-position} \small \begin{split} \frac{2\pi}{\lambda} (m-1) d (\dfrac{x_I - x_v^n}{\sqrt{(x_I - x_v^n)^2 + Y}} - \dfrac{x_I - x_v^n \pm \Delta}{\sqrt{(x_I - x_v^n \pm \Delta)^2 + Y}}) \\, \forall m \in M, n \in N. \end{split} \end{equation} Eq (\ref{eq:vehicle-position}) clearly shows that the cosine of the angle of arrival between the RIS and the vehicles is affected by the error in vehicle positioning, and that this impact occurs for all RIS elements. How much this deviation from the real position affects the performance is what we answer next. In order to show the impact of the error $\Delta$, an experiment is conducted to see how the bit rate changes while varying the value of $\Delta$ for various vehicle positions (the distance from the vehicle to the RIS). The results are displayed in Fig. \ref{fig:time-slot-length} and indicate that the RIS can keep up to 90\% of its performance with an error $\Delta$ ranging from 20 to 100 centimeters, depending on the vehicle position. It is observed that vehicles distant from the RIS are less impacted by $\Delta$, as $\phi_{I,v}^n$ is less influenced by $\Delta$ when the distance between the vehicle and the RIS is larger. One can also see in this figure that as $\Delta$ grows, the bit rate decreases. Yet, the RIS elements can be tuned with plausible $\Delta$ values to maintain most of the original performance expected from the RIS deployment. According to \cite{humphreys2020deep}, vehicular carrier-phase differential Global Navigation Satellite System (GNSS) positioning can estimate vehicle positions with an accuracy of less than 17 centimeters and a success rate of up to 95\%. As shown in Fig. \ref{fig:time-slot-length}, the minimum tolerance of vehicle positioning (to achieve 90\% of the performance) is 20 centimeters, which is compatible with GNSS precision. Note that, for simplicity, in this work, $x_v^n$ is assumed to be accurately estimated.
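To make the effect of $\Delta$ more concrete, the short sketch below evaluates the per-element phase deviation of Eq.~\eqref{eq:vehicle-position} for a few distances and error values; the carrier wavelength, element spacing, and the constant under the square root are illustrative assumptions, not the system parameters.
\begin{verbatim}
import numpy as np

# Phase deviation of one RIS element caused by a positioning error Delta,
# following the difference of direction cosines in Eq. (vehicle-position).
# All numerical values below are illustrative assumptions.
wavelength = 0.01              # e.g., a 30 GHz carrier -> 1 cm wavelength
d_spacing = wavelength / 2     # inter-element spacing
x_I, Y_sq = 10.0, 20.0 ** 2    # RIS x-position and squared offset term

def phase_deviation(x_v, delta, m):
    """Deviation (radians) of the m-th element's phase for an error +delta."""
    cos_true = (x_I - x_v) / np.sqrt((x_I - x_v) ** 2 + Y_sq)
    cos_est = (x_I - x_v + delta) / np.sqrt((x_I - x_v + delta) ** 2 + Y_sq)
    return 2 * np.pi / wavelength * (m - 1) * d_spacing * (cos_true - cos_est)

for x_v in [20.0, 50.0]:                 # a near and a far vehicle
    for delta in [0.2, 1.0]:             # 20 cm and 1 m positioning errors
        dev = phase_deviation(x_v, delta, m=100)   # outermost element
        print(f"x_v={x_v:4.0f}  Delta={delta:.1f} m  deviation={dev:+.2f} rad")
# The deviation is smaller for the farther vehicle at the same Delta.
\end{verbatim}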
\begin{figure*}[htbp] \centering \subfloat[]{\includegraphics[scale=0.19]{vehicle_positioning_20.png}} \subfloat[]{\includegraphics[scale=0.19]{vehicle_positioning_30.png}} \subfloat[]{\includegraphics[scale=0.19]{vehicle_positioning_40.png}} \subfloat[]{\includegraphics[scale=0.19]{vehicle_positioning_50.png}} \caption{Effects of inaccurate vehicle positioning on the bit rate at different positions ((a) $x_v=20$ (b) $x_v=30$ (c) $x_v=40$ (d) $x_v=50$).} \label{fig:time-slot-length} \end{figure*} \section{Simulation and Evaluation} \label{sec:numerical-result} \subsection{Simulation Setup} As mentioned earlier, we use SUMO to mimic the vehicular environment. Two flows of vehicles are generated: one with normal speed (maximum speed 50 km/h) and the other with slow speed (maximum speed 30 km/h). For the deep reinforcement learning agent, three linear layers are used, with tanh as the activation function for the hidden layers and softmax for the output layer. The hidden layers contain 64 units each, and the Adam optimizer is used to minimize the loss function. The learning rate is set to 0.002, $\gamma$ to 0.08, and the clipping threshold to 0.02. The results were averaged over 500 tests. The remaining parameters used in our study are listed in Table \ref{table:expermint-parameters} (unless otherwise indicated). \begin{table}[htbp]\small \caption{Simulation Parameters} \begin{center} \begin{tabular}{|c|c|} \hline \rowcolor{lightgray} Parameter& Value \\ \hline Road segment length& 100 m\\ \hline Arrival rate& 0.2 Veh/sec\\ \hline $\sigma^2$ & $- 110$ dBm \\ \hline $K$& 10 dB \cite{abdullah2020hybrid} \\ \hline $\alpha$& 4\\ \hline $P$& 20 dBm \\ \hline $C$& 3 \\ \hline $\rho$ & 10 dBm\\ \hline $M$ & 100 \\ \hline $b$ & 2 bits \\ \hline $x_I, y_I, z_I$& 10, 20, 10 \\ \hline $x_R, y_R, z_R$& 0, 40, 10 \\ \hline $y_v^n, z_v^n$& 20, 1 \\ \hline \end{tabular} \label{table:expermint-parameters} \end{center} \end{table} \subsection{Numerical Results} First, we examine the behaviour of the DRL agent. As illustrated in Fig. \ref{fig:result_convergence}, the cumulative reward, which here represents the minimum average bit rate, increases markedly as the agent is exposed to more epochs/iterations. One can note that after around 7000 iterations, the system starts to converge. \begin{figure} \centerline{\includegraphics[scale=0.25]{result_convergence.png}} \caption{Convergence over time.} \label{fig:result_convergence} \end{figure} In order to validate the performance of the proposed algorithm, it is compared with three other benchmarks, as follows: \begin{itemize} \item \textbf{Greedy Scheduling with BCD (GS-BCD)}: In this scheme, the vehicle scheduling sub-problem is solved with a greedy algorithm, while the passive beamforming sub-problem is solved using the proposed BCD scheme. The greedy algorithm works as follows: at each time slot $n \in N$, it ranks the set of vehicles $V^n$ based on their average bit rate achieved up to $n$ ($z_v^n$); those with the lowest average bit rates are then scheduled to be served in the following time slot (a short illustrative sketch of this ranking step is given later in this subsection). \item \textbf{Random Scheduling with BCD (RS-BCD)}: In this scheme, the vehicles are randomly scheduled at each time slot, while the proposed BCD is used for obtaining the passive beamforming at the RIS. \item \textbf{DRL with Random Phase-Shift Matrix (DRL-RPS)}: The proposed DRL algorithm is used to schedule the RSU resources without any optimization of the RIS phase shifts (random values are used for the RIS elements' phase shifts).
\end{itemize} It is worthwhile to compare with these baseline methods, as they show how the performance behaves if one of the two sub-problems is solved via an alternative widely-used method, such as greedy or random, while the other is solved with the same method we propose. The effect of the number of RIS elements $M$ is studied first. As demonstrated in Fig. \ref{results_no_elements}, with a small number of elements, the achievable minimum average bit rate is very limited. However, as more elements are incorporated, the gain starts to grow gradually, especially for our proposed solution approach. Another insight one can notice is the gap between the proposed solution and the other methods over different $M$ values. It is very apparent that this gap widens as $M$ increases. With $M=150$, the difference between the proposed method and the greedy algorithm is about 17\%. Meanwhile, GS-BCD also performs well compared to the other two methods. The reason is that GS-BCD prioritizes those vehicles with low average bit rates. In addition, as GS-BCD leverages BCD, it also benefits from a good RIS configuration. RS-BCD, on the other hand, comes in third place with a clear gap from GS-BCD, as it does not take low-bit-rate vehicles into account. Finally, DRL-RPS attains very poor performance, which indicates that without a proper RIS configuration the performance remains poor even if the wireless scheduling is done carefully. \begin{figure} \centerline{\includegraphics[scale=0.25]{results_no_elements.png}} \caption{Effect of the number of RIS elements $M$ on network performance.} \label{results_no_elements} \end{figure} In our next experiment, we vary the value of $b$ for a practical RIS. When $b$ is high, more phase values are available for the configuration, which is better for RIS tuning. As can be seen in Fig. \ref{results_control_bit}, a larger $b$ means better performance. Yet, one can also notice that a $b$ of 2 or 3 almost obtains the highest gain. These behaviours have also been highlighted in other works related to non-dynamic environments \cite{di2020hybrid}. In the same figure, we can see that the proposed solution always achieves the highest performance regardless of $b$. Indeed, the difference from GS-BCD can reach up to 19\%. In the meantime, RS-BCD could only achieve a minimal gain with larger $b$ and remained below a minimum average bit rate of 0.6 in all scenarios. In contrast to the other methods, DRL-RPS was unable to add any gain with larger $b$. That is due to the fact that this method does not consider RIS element tuning in the first place. Hence, higher $b$ values may also mean a higher probability of failing to align with the vehicles, as the set of available angles for the RIS elements becomes larger. \begin{figure} \centerline{\includegraphics[scale=0.25]{results_control_bit.png}} \caption{Effects of the discrete quantization levels.} \label{results_control_bit} \end{figure} Next, the impact of road density on the network is studied. Here, we vary the arrival rates of vehicles, which results in more or fewer vehicles being present simultaneously within the road segment. Intuitively, with a small arrival rate, the RSU and RIS can better serve the vehicles. As seen in Fig. \ref{fig:results_density}, the minimum average bit rate is slightly above 1.4 bps/Hz. However, as the road segment becomes denser, the value degrades to approximately 1 bps/Hz.
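For reference before the discussion below, the greedy ranking step used by the GS-BCD baseline (as described in the benchmark list above) can be sketched as follows; the variable names are illustrative only.
\begin{verbatim}
def greedy_schedule(z, num_channels):
    """GS-BCD scheduling step.

    z: dict mapping vehicle id -> average bit rate achieved up to slot n.
    Returns the ids of the vehicles with the lowest average bit rates,
    which are served in the next time slot (ties broken arbitrarily).
    """
    ranked = sorted(z, key=z.get)        # ascending by achieved bit rate
    return ranked[:num_channels]

# Example: C = 3 channels, five vehicles currently on the road segment.
z_n = {"v1": 1.8, "v2": 0.4, "v3": 1.1, "v4": 0.2, "v5": 0.9}
print(greedy_schedule(z_n, num_channels=3))   # -> ['v4', 'v2', 'v5']
\end{verbatim}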
For the other methods, we can observe similar behaviours, except for GS-BCD, whose gain seems to saturate at very low arrival rates. That is because GS-BCD does not consider the distance between the RIS and the vehicles. It always assigns the resources to those with the lowest $z_v^n$, regardless of their location or speed. Therefore, the wireless resources might be wasted on far vehicles instead of serving nearer ones. Also, selecting two or more vehicles with a relatively large gap between them reduces the efficiency of the RIS in serving both of them, since the RIS has to maximize the sum of immediate bit rates, which is undesirable in such a scenario\footnote{Note that the average bit rate achieved up to $n$, denoted by $z_v^n$, does not necessarily reflect the ultimate average bit rate of vehicle $v$. It only represents the average bit rate up to $n$, which depends on the time elapsed since vehicle $v$'s arrival. Intuitively, this value decreases as the elapsed time increases, and this issue is not taken into consideration by GS-BCD.}. This especially appears when fewer vehicles exist on the road and the RSU often schedules distant vehicles. In contrast, RS-BCD attains a steep increase in the minimum average bit rate at low road density, since there are fewer vehicles and the probability of a vehicle being served is much higher. This impact is much less significant for DRL-RPS due to the misalignment of the phase shifts with the vehicles. \begin{figure} \centerline{\includegraphics[scale=0.25]{results_density.png}} \caption{Minimum average bit rate over different vehicle arrival rates.} \label{fig:results_density} \end{figure} Finally, one of the insightful indices to use in such max-min problems is Jain's fairness index \cite{sediq2013optimal}. The formula for this index is: \begin{equation} \dfrac{(\sum_{v=1}^{V} z_v)^2}{V \sum_{v=1}^{V} (z_v)^2} \end{equation} We then conduct an experiment to see the levels of Jain's fairness attained by the four algorithms. We can notice in Fig. \ref{fig:results_jain} that our proposed solution achieves the highest level of fairness in comparison to the other methods. One can also see that all four methods, in fact, obtain high levels in general. GS-BCD attempts to reduce the discrepancies in average bit rates among all vehicles and hence enhances fairness. RS-BCD, on the other hand, schedules the resources at random with a uniform distribution, which also improves fairness. Finally, DRL-RPS leverages our DRL agent, which indeed tries to maximize the original max-min objective. That is, all the methods are able to maintain reasonable levels of fairness among the vehicles. Nevertheless, only our proposed solution approach achieves the highest fairness levels while notably maximizing the minimum average bit rate, by considering the coupling effects of the two sub-problems. \begin{figure} \centerline{\includegraphics[scale=0.25]{results_jain.png}} \caption{Comparison of Jain's fairness ($M=100$).} \label{fig:results_jain} \end{figure} \section{Conclusion} \label{sec:conclusion} We have investigated the integration of RIS with vehicular communications. The core of this work has revolved around a system model that employs an RIS to provide favourable wireless experiences for vehicles travelling in a dark zone. The RIS has demonstrated high competence in establishing indirect links between the RSU and the vehicles.
Throughout this study, we have also seen that DRL is an appealing solution to cope with the highly dynamic nature of such an environment and that it can adapt to various road conditions and RIS options. In addition, BCD was leveraged to provide efficient yet robust solutions for the RIS phase-shift matrix. In the numerical results, the performance of our solution method has been analyzed thoroughly by comparing it with other benchmarks. Aside from that, we have also carried out a study on RIS placement to attain optimized wireless communication with the RSU and non-static receivers. In terms of future work, we will extend this work by considering wireless resource allocation, where the spectrum can be allocated to each vehicle based on its individual needs. As a result, the RIS phase-shift configuration method can also be updated to consider the various link qualities of each specific vehicle, depending on the allocated wireless resources. \bibliographystyle{IEEEtran}
\section{Introduction} Recent development of hardware and software for computation and communication has opened up the possibility of large scale control systems, whose components are spatially distributed over large areas. The necessity to use communication and energy-supply resources ``parsimoniously'' has given rise to rapidly growing theories of control under limited data-rate~\cite{MatveevSavkinBook} and event-triggered control~\cite{Tabuada:2007,MazoCao:2014}. Many control and coordination algorithms, facing communication and computational constraints, have been inspired by natural phenomena, discovered long before the ``network boom'' in control. Early studies of the phenomenon of synchronous flashing in large populations of male fireflies in the dark~\cite{Buck:1938} have disclosed a vision-based distributed protocol, enabling fireflies to synchronize their internal clocks: \emph{``each individual apparently took his cue to flash from his more immediate neighbors, so that the mass flash took the form of a very rapid chain of overlapping flashes...''} \cite[p. 310]{Buck:1938}. In a similar way the claps of many hands synchronize into rhythmic applause \cite{NedaVicsek:2000}. Later works revealed the role of such event-based interactions, referred to as the \emph{pulse coupling}, in synchronization of neural networks~\cite{IzhikevichBook}, in particular, the cells of cardiac~\cite{Peskin} and circadian~\cite{WinfreeBook} pacemakers. Self-synchronizing networks of biological pulse-coupled oscillators (PCO) have inspired efficient algorithms for \emph{clock synchronization} in wireless networks~\cite{HongScaglione:2005,Pagliari:2011,WangDoyle:2012,WangNunezDoyle:2012,WangNunezDoyle:2013}, substantially reducing communication between the nodes. The influential papers \cite{MirolloStrogatz:1990, Kuramoto:1991}, addressing the dynamics of PCO networks, attracted extensive attention from applied mathematicians, physicists and engineers, since ensembles of PCO give an instructive model of self-organization in complex systems, composed of very simple units. Each unit of the ensemble is a system, which operates in a small vicinity of a stable \emph{limit cycle} and is naturally represented by a scalar \emph{phase} variable~\cite{SacreSepulchre:2014}. An oscillator's phase varies in a bounded interval; upon achieving its maximum, the phase is reset to the minimal value. At this time the oscillator fires an event, e.g. emitting electric pulse or other stimulus. The length of these pulses is usually neglected since they are very short, compared to the oscillators' periods. Unlike Kuramoto networks and other \emph{diffusively} coupled oscillator ensembles~\cite{BulloSurvey:2014,ProCao:2017-1}, the interactions of PCO are event-triggered. The effect of a stimulus from a neighboring oscillator on an oscillator's trajectory is modeled by a phase shift, characterized by the nonlinear \emph{phase response curve} (PRC) mapping \cite{Canavier:2010,SacreSepulchre:2014}. In spite of significant interest in dynamics of PCO networks, the relevant mathematical results are very limited. Assuming that the oscillators are weakly coupled, the hybrid dynamics of PCO networks can be approximated by the Kuramoto model~\cite{Kuramoto:1991,WangDoyle:2012,WangNunezDoyle:2013} that has been thoroughly studied~\cite{BulloSurvey:2014}. 
The analytic results for general couplings are mostly confined to networks with special graphs~\cite{MirolloStrogatz:1990,GoelErmentrout:2002,LuckenYanchuk:2012,WangNunezDoyle:2015}, providing a fixed order of the oscillators' firing. In recent papers~\cite{WangNunezDoyle:2012,WangNunezDoyle:2015-1} synchronization criteria over general \emph{strongly connected} graphs have been obtained, assuming that oscillators' PRC maps are \emph{delay-advance}~\cite{IzhikevichBook} and the deviations between the initial phases are less than a half of the oscillators' period. The main idea of the proof in \cite{WangNunezDoyle:2012,WangNunezDoyle:2015-1} is the \emph{contracting property} of the network dynamics under the assumption of delay-advance PRC, enabling one to use the maximal distance between the phases (the ensemble's ``diameter'') as a Lyapunov function; this approach is widely used in the analysis of Kuramoto networks~\cite{SchmidtPapaAllgower:2012,Antonis}. In this paper, we further develop the approach from~\cite{WangNunezDoyle:2012,WangNunezDoyle:2015-1}, relaxing the strong connectivity assumption to the existence of a directed spanning tree (or \emph{root} node) in the interaction graph, which is also necessary for synchronization. Also, unlike~\cite{WangDoyle:2012,WangNunezDoyle:2013} the delay-advance PRC maps are not restricted to be piecewise-linear and can be heterogeneous. Both extensions are important. Biological oscillator networks are usually ``densely'' connected (so the strong connectivity assumption is not very restrictive), but the piecewise linearity of PRC maps is an impractical condition. In clock synchronization problems the PRC map can be chosen piecewise-linear, but the requirement of strong connectivity excludes many natural communication graphs (e.g. the star-shaped graph with the single ``master'' clock and several ``slaves''). The results have been partly reported in the conference paper~\cite{ProCao15}. The paper is organized as follows. Preliminary Section~\ref{sec.prelim} introduces technical concepts and notation. The mathematical model of PCO networks is introduced in Section~\ref{sec.oscill}. Main results are formulated in Section~\ref{sec.main} and confirmed in Section~V by numerical simulations. Section~\ref{sec.concl} concludes the paper. \section{Preliminaries and notation}\label{sec.prelim} Given $t_0\in\r$ and a function $f(\cdot)$, defined at least on the interval $(t_0-\ve_0;t_0)$ for $\ve_0>0$ sufficiently small, let $f(t_0-)\dfb\lim\limits_{t\to t_0,t<t_0}f(t)$. If $f(t_0-)=f(t_0)$, we say $f(\cdot)$ is \emph{left-continuous} at $t_0$. The limit $f(t_0+)$ and right-continuity are defined similarly. A function $f:[0;+\infty)\to\r$ is \emph{piecewise continuous}, if it is continuous at any $t\ge 0$ except for a sequence $\{t_n\}_{n=1}^{\infty}$, such that $t_n\to\infty$ and at each of the points $t_n$ the left and right limits $f(t_n-)$, $f(t_n+)$ exist. We denote the unit circle on the complex plane by $\mathbb{S}^1=\{z\in\c:|z|=1\}$. Given $\vp\in\r$, $e^{\imath\vp}=\cos\vp+\imath\sin\vp\in\mathbb{S}^1$. Here $\mathbf{i}$ stands for the imaginary unit, $\imath^2=-1$. A (directed) \emph{graph} is a pair $(V,E)$, where $V$ and $E\subseteq V\times V$ are finite sets, whose elements are referred to as the \emph{nodes} and \emph{arcs} respectively. A \emph{walk} in the graph is a sequence of nodes $v_1,v_2,\ldots,v_k$, where consecutive nodes are connected by arcs $(v_i,v_{i+1})\in E$. 
A \emph{root} is a node, from which the walks to all other nodes exist. A graph having a root is called \emph{rooted} (this is equivalent to the existence of a \emph{directed spanning tree}); a graph in which any node is a root is called \emph{strongly connected}. \section{The problem setup}\label{sec.oscill} An \emph{oscillator} with frequency $\omega>0$ (or, equivalently, period $T=2\pi/\omega$) is a dynamical system $\dot x(t)=f(x(t))$ with an exponentially stable $T$-periodic limit cycle $x^0(t)=x^0(t+T)$. Any solution $x(t)$, staying in the cycle's basin of attraction, converges as $t\to\infty$ to the function $x^0(\theta(t)/\omega)$. Here $\theta(t)\in [0;2\pi)$ is a piecewise-linear function, referred to as \emph{phase} and treated as ``a normalized time, evolving on the unit circle''~\cite{SacreSepulchre:2014}. The phase grows linearly until it reaches $2\pi$ and then is reset: \begin{gather} \dot\theta(t)=\om\quad\text{while $\theta(t-)<2\pi$},\label{eq.freq-single}\\ \theta(t+)=0\quad\text{if $\theta(t-)=2\pi$}\label{eq.reset-single}. \end{gather} In this paper we deal with \emph{ensembles} of multiple oscillators~\eqref{eq.freq-single}, whose interactions are \emph{event triggered}. Upon resetting, an oscillator fires an \emph{event} by sending out some \emph{stimulus} such as a short electric pulse or message. If an oscillator receives a stimulus from one of its neighbors, its phase jumps \be\label{eq.shift-single} \theta(t+)=\Psi(\theta(t-))\mod 2\pi,\quad \Psi(\theta)\dfb\theta+c\Phi(\theta), \ee after which the ``free run''~\eqref{eq.freq-single} continues. Typically it is assumed that $\Phi(0)=\Phi(2\pi)=0$ so that if an oscillator is triggering an event at time $t$, then the stimuli received from the remaining oscillators do not violate~\eqref{eq.reset-single}. The map $\Psi:[0;2\pi]\to\r$ is referred to as the oscillator's \emph{phase transition curve} (PTC)~\cite{IzhikevichBook}. The PTC is determined as in~\eqref{eq.shift-single} by the map $\Phi:[0;2\pi]\to \r$, referred to as the \emph{phase response (or resetting) curve} (PRC)~\cite{Canavier:2010,IzhikevichBook}, and the scalar \emph{coupling gain} $c>0$. In networks of biological oscillators, the PRC maps depend on the stimuli waveforms and the gain $c$ depends on the stimulus' intensity~\cite{BrownMoehlisHolmes:2004,IzhikevichBook,Canavier:2010,SacreSepulchre:2014}. In time synchronization problems~\cite{HongScaglione:2005,WangNunezDoyle:2012,WangNunezDoyle:2013} the PRC map $\Phi$ and the coupling gain $c$ are the parameters to be designed. Henceforth we assume\footnote{Dealing with ``weakly coupled'' PCO networks ($c\approx 0$)~\eqref{eq.shift-many-single} is often replaced by the additive rule $\theta(t+)=\theta(t)+kc\Phi(\theta(t))\mod 2\pi$, enabling one to approximate the PCO network by the Kuramoto model~\cite{Kuramoto:1991}.}, following~\cite{LuckenYanchuk:2012}, that $k>1$ simultaneous events, affecting an oscillator, superpose as follows \be\label{eq.shift-many-single} \theta(t+)=\Psi^{k}(\theta(t-))\mod 2\pi,\; \Psi^{k}\dfb\underbrace{\Psi\circ\Psi\circ\ldots\circ\Psi}_{\text{$k$ times}}. \ee Taking $\Psi^0(\theta)\dfb\theta$,~\eqref{eq.shift-many-single} holds for $k=0$: if the neighbors fire no events, the phase is continuous unless it has reached $2\pi$. Note that $\theta(t+)<2\pi$ at any point; in particular, the oscillator cannot be forced to fire due to its neighbors' stimuli. 
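As a small illustration (not part of the model itself), the jump rules~\eqref{eq.shift-single} and~\eqref{eq.shift-many-single} can be sketched in code for one oscillator; the PRC $\Phi(\theta)=-\sin\theta$ and the gain $c=0.4$ are assumptions used only for this example (the same choice reappears in the simulations of Section~V).
\begin{verbatim}
import math

TWO_PI = 2 * math.pi

def prc(theta):
    """A delay-advance phase response curve (illustrative choice)."""
    return -math.sin(theta)      # negative on (0, pi), positive on (pi, 2*pi)

def ptc(theta, c=0.4):
    """Phase transition curve Psi(theta) = theta + c * Phi(theta)."""
    return theta + c * prc(theta)

def apply_stimuli(theta, k, c=0.4):
    """Superposition rule: k simultaneous stimuli compose Psi k times (mod 2*pi)."""
    for _ in range(k):
        theta = ptc(theta, c)
    return theta % TWO_PI

# A phase in (0, pi) is delayed toward 0; a phase in (pi, 2*pi) is advanced:
print(apply_stimuli(math.pi / 2, k=2))       # < pi/2: delayed toward 0
print(apply_stimuli(3 * math.pi / 2, k=1))   # > 3*pi/2: advanced toward 2*pi
\end{verbatim}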
At the points of discontinuity one can define $\theta(t)$ arbitrarily; for definiteness, we suppose that $\theta(t)=\theta(t-)\in [0;2\pi]$. We also allow the initial phase $\theta(0)=2\pi$: the oscillator fires an event and is immediately reset to $0$. \subsection{Mathematical model of the PCO network} Consider a group of $N>1$ oscillators of the same period $T=2\pi/\omega$ and PTC mappings $\Psi_1(\theta),\ldots,\Psi_N(\theta)$, corresponding to PRC maps $\Phi_i$ and coupling gains $c_i>0$. The vector of oscillators' phases is denoted by $\bar\theta(t)\dfb(\theta_i(t))_{i=1}^N\in [0;2\pi]^N$. The interactions among the oscillators are encoded by a graph $G=(V,E)$, whose nodes are in one-to-one correspondence with oscillators $V=\{1,\ldots,N\}$. The arc $(j,i)$ exists if and only if oscillator $j$ influences oscillator $i$; we denote $N_i\dfb\{j:(j,i)\in E\}$ to denote the set of oscillators, affecting oscillator $i$; it is convenient to assume that $i\in N_i\,\forall i$. The dynamics of the PCO network is as follows \begin{gather} \dot{\bar\theta}(t)=(\omega,\ldots,\omega)\quad\text{when $I(\bar\theta(t))=\emptyset$,}\label{eq.freq}\\ \bar\theta(t+)=\bar\Psi(\bar\theta(t))\mod 2\pi\quad\text{if $I(\bar\theta(t))\ne\emptyset$}\label{eq.shift},\\ \bar\Psi(\theta_1,\ldots,\theta_N)\dfb \left(\Psi_1^{k_1}(\theta_1),\ldots,\Psi_N^{k_N}(\theta_N)\right)\label{eq.psi-bar},\\ I(\bar\theta)\dfb\{j:\theta_j=2\pi\},\quad k_i=k_i(\bar\theta)\dfb\left|I(\bar\theta)\cap N_i\right|\label{eq.indices}. \end{gather} Here $|\cdot|$ denotes the cardinality of a set. The phases obey~\eqref{eq.freq-single} until some oscillators fire; $I(\bar\theta(t))\ne\emptyset$ stands for the set of their indices. Oscillator $i$ is affected by $k_i\ge 0$ firing neighbors, and its phase jumps in accordance with~\eqref{eq.shift-many-single}. If $k_i=0$ then $\theta_i(t)<2\pi$ (since $i\in N_i$) and $\theta_i(\cdot)$ is continuous at $t$. \begin{definition}\label{def.solution} A function $\bar\theta:\Delta\to [0;2\pi]^N$ is said to be a solution to the system~\eqref{eq.freq},~\eqref{eq.shift} on the interval $\Delta\subseteq [0;\infty)$ if the following conditions hold \begin{enumerate} \item on any \emph{compact} interval $\Delta'\subseteq \Delta$ only a finite number of events are fired $\left|\Delta'\cap\left\{t:I(\bar\theta(t))\ne\emptyset\right\}\right|<\infty$; \item the function $\bar\theta(t)$ is left-continuous and obeys~\eqref{eq.freq} at any $t\ge 0$ except for the points where some oscillators fire; at such points $\bar\theta(t)$ switches in accordance with~\eqref{eq.shift}. \end{enumerate} \end{definition} \begin{remark}\label{rem.definitions-difference} Our definition of a solution is more restrictive than the definitions in~\cite{WangNunezDoyle:2015,WangNunezDoyle:2015-1}, which replace the discontinuous mapping $\bar\Psi$ in~\eqref{eq.shift} by an outer-semicontinuous~\cite{Goebel:2009} \emph{multi-valued} map. Unlike the ``generalized'' solutions from~\cite{WangNunezDoyle:2015,WangNunezDoyle:2015-1}, the solution from Definition~\ref{def.solution} is \emph{uniquely} determined by its initial condition $\bar\theta(0)$ and depends continuously on it. \end{remark} Our goal is to establish conditions, under which the solution $\bar\theta(t)$ to the system~\eqref{eq.freq},~\eqref{eq.shift} exists on $[0;\infty)$ and the oscillators' phases become~\emph{synchronous} in the following sense. 
\begin{definition}\label{def.sync} The phases $\theta_i(\cdot)$ ($i\in 1:N$) synchronize if \be\label{eq.sync} e^{\imath(\theta_i(t)-\theta_j(t))}\xrightarrow[t\to\infty]{} 1\Longleftrightarrow e^{\imath\theta_i(t)}-e^{\imath\theta_j(t)}\xrightarrow[t\to\infty]{} 0. \ee \end{definition} \subsection{Assumptions} In this subsection, we formulate two assumptions adopted throughout the paper. The first of these assumptions implies an important contraction property of the hybrid dynamics~\eqref{eq.freq},\eqref{eq.shift}. \begin{assum}\label{ass.psi} The mappings $\Psi_i$ are continuous on $[0;2\pi]\setminus\{\pi\}$, satisfying the conditions $\Psi_i(0)=0$, $\Psi_i(2\pi)=2\pi$ and \be \Psi_i(\theta)\in (0;\theta)\,\forall \theta\in (0;\pi),\quad \Psi_i(\theta)\in(\theta;2\pi)\,\forall \theta\in (\pi;2\pi). \een \end{assum} \begin{figure} \center \begin{subfigure}[t]{0.4\linewidth} \begin{tikzpicture}[scale=0.4, baseline=(A.base)] \draw (-2.5,0) -- (2.5,0); \draw (0,-2.5) -- (0,2.5); \draw (0,0) circle (2cm); \draw (0,2) [red, very thick] arc (90:30:2cm); \draw (0,2) [->,red, very thick] arc (90:60:2cm); \node [label=above right:\small $\theta_i(t)$] at (0,2) [line width=0.05mm,myrad={0.02}{black}] {}; \node [label=right:\color{blue}\small $\theta_i(t+)$] at (1.732,1) [line width=0.05mm,myrad={0.02}{blue}] {}; \node [label=right:{\small $2\pi=\theta_{j}(t)$} ] (A) at (2,0) [line width=0.05mm,myrad={0.02}{gray}] {}; \end{tikzpicture} \end{subfigure} \begin{subfigure}[t]{0.4\linewidth} \begin{tikzpicture}[scale=0.4, baseline=(A.base)] \draw (-2.5,0) -- (2.5,0); \draw (0,-2.5) -- (0,2.5); \draw (0,0) circle (2cm); \draw (0,-2) [red, very thick] arc (-90:-45:2cm); \draw (0,-2) [->,red, very thick] arc (-90:-60:2cm); \node [label=below right :\small $\theta_i(t)$] at (0,-2) [line width=0.05mm,myrad={0.02}{black}] {}; \node [label=right:\color{blue}\small $\theta_i(t+)$] at (1.732,-1) [line width=0.05mm,myrad={0.02}{blue}] {}; \node [label=right:{\small $2\pi=\theta_{j}(t)$} ] (A) at (2,0) [line width=0.05mm,myrad={0.02}{gray}] {}; \end{tikzpicture} \end{subfigure} \caption{Illustration to Assumption~\ref{ass.psi}: the jump~\eqref{eq.shift-single} decreases the distance between the oscillator $i$ and its firing neighbor $j$.}\label{fig.d1} \end{figure} Assumption~\ref{ass.psi} is illustrated by~Fig.~\ref{fig.d1}. The $i$th ``clock'' is \emph{delayed} by the phase jump~\eqref{eq.shift-many-single} if it is ahead of its firing neighbors (Fig.~\ref{fig.d1}, left part) and \emph{advanced} if it is behind them (Fig.~\ref{fig.d1}, right part). Such operations do not lead to ``overshoots'': a ``retarding'' oscillator cannot overrun its neighbors and become ``advancing'', and vice versa. A firing oscillator is not influenced by the others' events since $\Psi_i^{k_i}(2\pi)\mod 2\pi=0$. Assumption~\ref{ass.psi} holds, in particular, for PCOs with coupling gains $c_i\in (0;1)$ and piecewise-linear PRC maps \be\label{eq.prc-lin} \Phi_1(\theta)=\ldots=\Phi_N(\theta)=\begin{cases} -\theta,&\theta\in [0;\pi)\\ 2\pi-\theta,&\theta\in (\pi;2\pi]\\ \text{any}, &\theta=\pi. \end{cases} \ee Such a choice of the PRC map appears to be the most natural in time synchronization problems~\cite{WangDoyle:2012,WangNunezDoyle:2013,WangNunezDoyle:2015,WangNunezDoyle:2015-1}. More generally, the PRC map $\Phi(\theta)$ is called \emph{delay-advance}~\cite{WangNunezDoyle:2012} if $\Phi(\theta)<0$ for $\theta\in (0;\pi)$ and $\Phi(\theta)>0$ when $\theta\in(\pi;2\pi)$. 
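For the piecewise-linear PRC~\eqref{eq.prc-lin}, the validity of Assumption~\ref{ass.psi} for any gain $c\in(0;1)$ can also be checked numerically; the following sketch is an illustration only and is not used anywhere in the analysis.
\begin{verbatim}
import numpy as np

def prc_piecewise(theta):
    """Piecewise-linear PRC of Eq. (prc-lin)."""
    return np.where(theta < np.pi, -theta, 2 * np.pi - theta)

def satisfies_assumption_1(c):
    """Check Psi in (0, theta) on (0, pi) and Psi in (theta, 2*pi) on (pi, 2*pi)."""
    theta = np.linspace(1e-6, 2 * np.pi - 1e-6, 100000)
    theta = theta[np.abs(theta - np.pi) > 1e-6]   # Psi may be discontinuous at pi
    psi = theta + c * prc_piecewise(theta)
    lower = theta < np.pi
    ok_delay = np.all((psi[lower] > 0) & (psi[lower] < theta[lower]))
    ok_advance = np.all((psi[~lower] > theta[~lower]) & (psi[~lower] < 2 * np.pi))
    return bool(ok_delay and ok_advance)

print(satisfies_assumption_1(c=0.5))   # True for any gain in (0, 1)
print(satisfies_assumption_1(c=1.5))   # False: the phase jump overshoots
\end{verbatim}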
Mathematical models of natural oscillators with delay-advance PRC include, but are not limited to, ``isochron clocks''~\cite{GoelErmentrout:2002} and the Andronov-Hopf oscillator~\cite{IzhikevichBook}. Assumption~\ref{ass.psi} holds for sufficiently small $c_i>0$ if $\Phi_i$ are delay-advance and \ben \inf_{\theta\in (0;\pi)}\frac{\Phi_i(\theta)}{\theta}>-\infty\quad\text{and}\quad \sup_{\theta\in (\pi;2\pi)}\frac{\Phi_i(\theta)}{2\pi-\theta}<\infty\quad\forall i. \een To introduce our second assumption, restricting the oscillators to be ``partially synchronous'', we need a technical definition. \begin{definition}\label{def.diam} An \emph{arc} of $\S^1$ is a closed connected subset $L\subseteq\S^1$. Given a vector of phases $\bar\theta=(\theta_i)_{i=1}^N$, its \emph{diameter} $d(\bar\theta)$ is the length of the shortest arc containing the set $\{e^{\imath\theta_i}\}_{i=1}^N$. \end{definition} The definition of the diameter is illustrated by Fig.~\ref{fig.d}: one of the two shortest arcs containing the phases is drawn in red. \begin{figure} \center \begin{tikzpicture}[scale=0.4] \draw (-2.5,0) -- (2.5,0); \draw (0,-2.5) -- (0,2.5); \draw (0,0) circle (2cm); \draw (0,-2) [red, very thick] arc (-90:135:2cm); \node [label=above:\small $\theta_1$] at (-1.414,1.414) [line width=0.05mm,myrad={0.02}{black}] {}; \node [label=above:\small $\theta_2$] at (1.414,1.414) [line width=0.05mm,myrad={0.02}{black}] {}; \node [label=below right:\small $\theta_3$] at (0,-2) [line width=0.05mm,myrad={0.02}{black}] {}; \end{tikzpicture} \caption{$\bar\theta=(\pi/4,3\pi/4,3\pi/2)$, $d(\bar\theta)=5\pi/4$}\label{fig.d} \end{figure} \begin{assum}\label{ass.initial} The initial phases of the oscillators are ``partially synchronized'', satisfying the inequality \be\label{eq.initial} d(\bar\theta(0))<\pi. \ee \end{assum} \begin{remark}\label{rem.no-synchronize} The ``partial synchronization'' Assumption~\ref{ass.initial} can be relaxed in some special situations~\cite{MirolloStrogatz:1990,GoelErmentrout:2002}, but generally cannot be fully discarded. The simplest example is a network of $N=2$ coupled oscillators, whose PRC maps $\Phi_1,\Phi_2$ satisfy the condition $\Phi_1(\pi)=\Phi_2(\pi)=0$. Then the solution, starting at $(\theta_1(0),\theta_2(0))=(0;\pi)$, is $T$-periodic and $d(\bar\theta(t))\equiv \pi$. Conditions similar to~\eqref{eq.initial} are often adopted to prove the synchronization of diffusively coupled oscillators~\cite{BulloSurvey:2014}. \end{remark} \section{Main result}\label{sec.main} We start by establishing basic properties of the dynamical network~\eqref{eq.freq},~\eqref{eq.shift} (Subsect.~\ref{subsec.basic}) and then prove the main result of the paper, ensuring synchronization (Subsect.~\ref{subsec.synch}). Our method extends the idea of the diameter Lyapunov function, used to prove stability of multi-agent coordination protocols~\cite{Moro:05}, to the hybrid system~\eqref{eq.freq},~\eqref{eq.shift}. We show that the diameter $d(\bar\theta(t))$ of the oscillator ensemble is non-increasing and, furthermore, there exists a period $T_N$, independent of the initial condition, such that $d(\bar\theta(T_N))-d(\bar\theta(0))<0$ unless $d(\bar\theta(0))=0$. The key idea is to establish a LaSalle-type result for the hybrid system~\eqref{eq.freq},~\eqref{eq.shift} and the Lyapunov function $d(\cdot)$, stating that any solution converges to the synchronous manifold $\{\bar\theta\in [0;2\pi]^N:d(\bar\theta)=0\}$.
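For reference, the diameter of Definition~\ref{def.diam}, which serves as the Lyapunov function throughout this section, can be computed as in the sketch below (an illustration only, not used in the proofs); the example reproduces the value shown in Fig.~\ref{fig.d}.
\begin{verbatim}
import numpy as np

def diameter(phases):
    """Length of the shortest arc of the unit circle containing all phases."""
    p = np.sort(np.mod(phases, 2 * np.pi))
    gaps = np.diff(np.concatenate([p, [p[0] + 2 * np.pi]]))  # circular gaps
    return 2 * np.pi - gaps.max()

# The example of Fig. (d): (pi/4, 3*pi/4, 3*pi/2) has diameter 5*pi/4.
print(diameter([np.pi / 4, 3 * np.pi / 4, 3 * np.pi / 2]))   # ~3.927
\end{verbatim}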
In the existing literature~\cite{WangNunezDoyle:2012,WangNunezDoyle:2015-1}, this is done via a straightforward estimation of the diameter's decrease $d(\bar\theta(T_N))-d(\bar\theta(0))$, employing the special structure of PRC maps and the strong connectivity of the graph. We extend these results to the case of rooted graphs and general delay-advanced PRC maps, deriving the mentioned LaSalle-type result from the continuity of the trajectory with respect to the initial condition. \subsection{Basic properties of the solutions}\label{subsec.basic} We first show existence and uniqueness of solutions to the system~\eqref{eq.freq},~\eqref{eq.shift} and establish their basic properties. \begin{theorem}\label{thm.exist} Under Assumption~\ref{ass.psi}, for \emph{any} initial condition $\bar\theta(0)\in [0;2\pi]^N$ the following statements hold: \begin{enumerate} \item the solution to~\eqref{eq.freq},~\eqref{eq.shift} exists on $[0;\infty)$ and is unique; \item if some oscillator fires two consecutive events at instants $t'>0$ and $t''>t'$ respectively, then $t''-t'>T/2$; \suspend{enumerate} If the initial condition satisfies the inequality~\eqref{eq.initial}, then \resume{enumerate} \item the diameter function $d(t)\dfb d(\bar\theta(t))$ is non-increasing; \item let $L(t)=L(\bar\theta(t))$ be the arc of the minimal length, containing $\{e^{\imath\theta_j(t)}\}_{j=1}^N$, then $L(t)\subseteq e^{\imath\om (t-t_0)}L(t_0)$ and $L(t_0+)\subseteq L(t_0)$ whenever $t>t_0\ge 0$; \item for any $s\ge 0$ each oscillator fires on $(s;s+3T/2)$. \end{enumerate} \end{theorem} \begin{remark} The problem of solution existence has been studied in~\cite{WangNunezDoyle:2015} (Proposition~4) and~\cite{WangNunezDoyle:2015-1} (Proposition~1), using the general framework of hybrid systems theory~\cite{Goebel:2009}. However, as discussed in Remark~\ref{rem.definitions-difference}, these results do not imply the existence of solutions in the sense of Definition~1. The proofs of Theorem~1 in~\cite{WangNunezDoyle:2012} and Theorem~1 in \cite{WangNunezDoyle:2015-1} contain in fact statements~3) and 4) for special PRC maps~\eqref{eq.prc-lin}. However, the proof of Theorem~\ref{thm.exist} for general delay-advance oscillators seems not to be available in the literature. \end{remark} The proof of Theorem~\ref{thm.exist} relies on the following proposition, proved in Appendix A. \begin{proposition}\label{prop.uniqueness} For a vector $\bar\xi\in [0;2\pi]^N$, denote \[ \bar\xi^+\dfb\bar\Psi(\bar\xi)\mod 2\pi,\quad \delta_0\dfb T-\om^{-1}\max_i\xi_i^+>0. \] Then on the interval $\Delta_0=[0;\delta_0)$ the system~\eqref{eq.freq},~\eqref{eq.shift} has a unique solution with the initial condition $\bar\theta(0)=\bar\xi$. On $(0;\delta_0)$ no events are fired (events at time $t=0$ are possible). \end{proposition} \begin{IEEEproof}[Proof of Theorem~\ref{thm.exist}] We start with proving the implication: if the system has a solution (defined on some interval) then for this solution statement 2) holds. We are going to prove a more general fact: if a solution $\bar\theta(\cdot)$ exists on $[t';t]$, where $t'<t$ and $0\le\theta_i(t'+)\le\pi-\om(t-t')$ for some $i$, then \be\label{eq.aux1} 0<\theta_i(t)\le \theta_i(t'+)+\om(t-t')\le\pi. \ee In particular, if $\theta_i(t'+)=0$ and $t-t'\le T/2$, then $\theta_i(t)\le\pi$ and thus oscillator $i$ cannot fire at time $t$. To prove~\eqref{eq.aux1}, recall that by Definition~\ref{def.solution} only a \emph{finite} number of events are fired between $t'$ and $t$. 
Denote the corresponding instants $t_1<\ldots<t_n$. Since $\theta_i(t_1)=\theta_i(t'+)+\om(t_1-t')\in [0;\pi)$ and thus $0\le\theta_i(t_1+)\le\theta_i(t_1)$. Iterating this procedure for $t_2,\ldots,t_n$, one shows that $0\le\theta_i(t_n+)\le \om(t_n-t')+\theta_i(t'+)\le\pi$, which entails~\eqref{eq.aux1} since $\theta_i(t)=\theta_i(t_n+)+\om(t-t_n)$. To prove statement~1), we invoke Proposition~\ref{prop.uniqueness}, showing that the solution exists and is unique on $\Delta=[0;\delta)$ for $\delta>0$ is sufficiently small. Consider the \emph{maximal} interval $\Delta=[0;\delta)$ with this property. We are going to show that $\delta=\infty$. Suppose on the contrary that $\delta<\infty$. Statement 2) shows that each oscillator fires a \emph{finite} number of events (at most $\lceil 2\delta/T\rceil$) on $\Delta$. Denoting the \emph{last} event instant by $t_*<\delta$, the phases obey~\eqref{eq.freq} on $(t_*,\delta)$ and hence the limit $\bar\theta(\delta)\dfb\bar\theta(\delta-)\in [0;2\pi]^N$ is defined. Applying Proposition~\ref{prop.uniqueness} to $\bar\xi\dfb \bar\theta(\delta)$, the solution is prolonged uniquely to $[\delta;\delta+\ve)$ for small $\ve>0$ and one arrives at a contradiction. Statement 1) is proved. Statements 3) and 4) are proved analogously to the inequality~\eqref{eq.aux1}. If $d(t)<\pi$ at the instant when some oscillators fire, then $L(t+)\subseteq L(t)$ and thus $d(t+)\le d(t)$ thanks to Assumption~\ref{ass.psi} since the new phases $\theta_i(t+)$ belongs to $L(t)$ (see Fig.~\ref{fig.d1}). Considering any interval $[t';t]$ (where $t'<t$) and the instants of events $t_1<\ldots<t_n\le t$, one has \be\label{eq.aux0} \begin{split} L(t)=e^{\imath\om(t-t_n)}L(t_n+)\subseteq e^{\imath\om(t-t_n)}L(t_n)\subseteq \\ \subseteq e^{\imath\om(t-t_{n-1})}L(t_{n-1})\subseteq\ldots\subseteq e^{\imath\om(t-t_{1})}L(t_{1})\subseteq\\\subseteq e^{\imath\om(t-t')}L(t'). \end{split} \ee It remains to prove statement 5). Retracing the proof of~\eqref{eq.aux1}, one proves that if $\theta_i(s)\in (\pi;2\pi)\,\forall s\in [t',t)$ then \be\label{eq.aux2} \theta_i(t)\ge \theta_i(t')+\om(t-t'). \ee Hence if $\theta_i(t')\in (\pi;2\pi)$, oscillator $i$ fires on $(t';t'+T/2)$. For any $s\ge 0$ there exists such $\tau\in [0;T)$ that $L(s+\tau)=e^{\imath\om\tau}L(s+)\subseteq \{e^{\imath\theta}:\theta\in (\pi;2\pi)\}$ (Fig.~\ref{fig.rot}). Thus $\theta_i(s+\tau)\in (\pi;2\pi)$ for any $i$, and therefore each oscillator fires during the interval $(s+\tau;s+\tau+T/2)\subseteq (s;s+3T/2)$. 
\end{IEEEproof} \begin{figure} \center \begin{subfigure}[t]{0.4\linewidth} \begin{tikzpicture}[scale=0.4, baseline=(A.base)] \draw (-2.5,0) -- (2.5,0); \draw (0,-2.5) -- (0,2.5); \draw (0,0) circle (2cm); \draw (0,2) [red, very thick] arc (90:-30:2cm); \node [label=above right:\small $\theta_1(s)$] at (0,2) [line width=0.05mm,myrad={0.02}{black}] {}; \node [label=right:\color{blue}\small $\theta_2(s)$] at (1.732,1) [line width=0.05mm,myrad={0.02}{blue}] {}; \node [label=right:{\small $\theta_3(s)$} ] at (1.732,-1) [line width=0.05mm,myrad={0.02}{gray}] {}; \node (A) at (2,0) {}; \node [label=below:\small $L(s)$] at (0,-2) {}; \end{tikzpicture} \end{subfigure} \begin{subfigure}[t]{0.4\linewidth} \begin{tikzpicture}[scale=0.4, baseline=(A.base)] \draw (-2.5,0) -- (2.5,0); \draw (0,-2.5) -- (0,2.5); \draw (0,0) circle (2cm); \draw (-1.732,-1) [red, very thick] arc (-150:-30:2cm); \node at (1.732,-1) [line width=0.05mm,myrad={0.02}{black}] {}; \node at (0,-2) [line width=0.05mm,myrad={0.02}{blue}] {}; \node at (-1.732,-1) [line width=0.05mm,myrad={0.02}{gray}] {}; \node (A) at (2,0) {}; \node [label=below:{\small $e^{\imath\om\tau}L(s),\;\tau\in (0;T)$}] at (0,-2) {}; \end{tikzpicture} \end{subfigure} \caption{Illustration to the proof of statement 4): rotation by some angle $\om\tau\in(0;2\pi)$ brings $L(s)$ to the lower half-plane.}\label{fig.rot} \end{figure} \begin{remark} Statements 2) and 5) of Theorem~\ref{thm.exist} show that under Assumptions~\ref{ass.psi} and~\ref{ass.initial} the time elapsed between two \emph{consecutive} events, fired by the same oscillator, lies strictly between $T/2$ and $3T/2$. Both bounds are tight and cannot be relaxed, as demonstrated by the following example. \end{remark} \begin{exam} Consider a network of two oscillators ($N=2$) with $T=2\pi$, PRC map~\eqref{eq.prc-lin} and gain $c\in(0;1)$, whose graph contains the only arc $2\mapsto 1$. Starting at $\theta_1(0)=0$ and $\theta_2(0)=\theta_*>\pi$, oscillator $2$ fires at time $t_*\dfb2\pi-\theta_*<\pi$ and $\theta_1(t_*+)=(1-c)t_*$. Hence the next event is fired by oscillator $1$ at time $t_*+2\pi-(1-c)t_*=2\pi+c(2\pi-\theta_*)$. If $\theta_*<\pi$, then $t_*>\pi$ and $\theta_1(t_*+)=(1-c)t_*+c2\pi$. Thus oscillator $1$ fires the next event at $t=t_*+(1-c)\theta_*=2\pi-c\theta_*$. When $c\approx 1$ and $\theta_*\approx\pi$ the time elapsed between two events of oscillator $1$ can be arbitrarily close to both $T/2$ and $3T/2$. \end{exam} Henceforth we confine ourselves to the trajectories satisfying Assumption~\ref{ass.initial}. It appears that such trajectories \emph{continuously} depend on the initial conditions in the following sense. For a given solution $\bar\theta(t)$, let $t_{ik}=t_{ik}[\bar\theta(0)]$ stand for the time instant when oscillator $i$ fires its $k$th event. \begin{lemma}\label{lem.contin} Suppose that Assumption~\ref{ass.psi} holds. Consider a sequence of solutions $\bar\theta^{(n)}(t)$ such that $\bar\theta^{(n)}(0)\xrightarrow[n\to\infty]{}\bar\theta(0)$, where $d[\bar\theta(0)]<\pi$. Then $t^{(n)}_{ik}\dfb t_{ik}[\bar\theta^{(n)}(0)]\xrightarrow[n\to\infty]{} t_{ik}$. Furthermore, $\bar\theta^{(n)}(t)\xrightarrow[n\to\infty]{}\bar\theta(t)$ whenever $t\ne t_{ik}\,\forall i,k$. \end{lemma} To prove Lemma~\ref{lem.contin}, we use a technical proposition, which is based on Assumption~\ref{ass.psi} and proved in Appendix. 
\begin{proposition}\label{prop.positive-time} For any $d_*<\pi$ and $\delta>0$ there exists $\tau=\tau(d_*,\delta)>0$ such that if $\theta_i(0)\le 2\pi-\delta$ and $d(\bar\theta(0))<d_*$, then oscillator $i$ fires at no earlier than $t=\tau$ (i.e. $t_{i1}\ge\tau$). \end{proposition} Proposition~\ref{prop.positive-time} has the following corollary, entailing that the ``leading'' oscillators, whose initial phases are sufficiently close to the maximal one, fire earlier than the remaining oscillators. \begin{corollary}\label{cor.fire-first} For any $d_*<\pi$, $\delta>0$ there exists $\ve=\ve(\delta,d_*)>0$ with the following property: for the phases satisfying the condition $\theta_1(0)\ge\ldots\ge\theta_m(0)>\theta_1(0)-\ve$ and $\theta_{m+1}(0),\ldots,\theta_N(0)\le \theta_1(0)-\delta$, oscillators $1,\ldots,m$ fire earlier than the remaining ones; moreover, $t_*\le t_{i1}<t_*+\om^{-1}\ve<t_{j1}$ whenever $1\le i\le m<j\le N$. Here $t_*\dfb T-\om^{-1}\theta_1(0)$ stands for the instant of first event. \end{corollary} \begin{IEEEproof} Obviously, oscillator $1$ fires at time $t_{11}=t_*$, and the phases obey~\eqref{eq.freq} until its event. Since $\theta_j(t_*)\le 2\pi-\delta$ for $j>m$, one has $t_{j1}>t_*+\tau$, where $\tau=\tau(\delta,d_*)$ is defined in Proposition~\ref{prop.positive-time}. Choosing $\ve<\min(\om\tau,\pi)$, one notices that $\theta_j(t_*)\ge 2\pi-\ve$ for $i=1,\ldots,m$. Using~\eqref{eq.aux2}, oscillator $i$ fires at time $t_{i1}\le t_*+\om^{-1}\ve<t_{j1}$, which ends the proof. \end{IEEEproof} We are now ready to prove Lemma~\ref{lem.contin}. \begin{IEEEproof}[Proof of Lemma~\ref{lem.contin}] For the solution $\bar\theta(t)$, let $\tau_1<\tau_2<\ldots<\tau_n$ be the instants when some oscillators fire, i.e. $I_j\dfb I(\bar\theta(\tau_j))\ne\emptyset$. Without loss of generality, one may assume that $I_1=\{1,\ldots,m\}$, i.e. $\theta_1(0)=\ldots=\theta_m(0)>\theta_j(0)$ for any $j>m$. Notice first that $\bar\theta^{(n)}(t)\xrightarrow[n\to\infty]{}\bar\theta(t)$ for any $t\in [0;\tau_1)$. Indeed, $\tau_1=T-\om^{-1}\theta_1(0)>t$ implies that $T-\om^{-1}\max_i\theta_i^{(n)}(0)>t$ for large $n$, and hence $\theta_i^{(n)}(t)=\theta_i^{(n)}(0)+\om t\to \theta_i(0)+\om t=\theta_i(t)$ for any $i$. Applying Corollary~\ref{cor.fire-first}, one proves that $t_{i1}^{(n)}\to \tau_1$ for any $i\le m$ and $t_*^{(n)}\dfb\max_{i\le m}t_{i1}^{(n)}<\min_{j>m}t_{j1}^{(n)}$. Using~\eqref{eq.aux1}, one shows that $0\le \theta_i^{(n)}(t_*^{(n)}+)\le \om(t_*^{(n)}-t_{i1}^{(n)})\to 0=\theta_i(\tau_1+)$ for any $i\le m$. The same holds for the remaining phases $\theta_j^{(n)}(t_*^{(n)}+)\to\theta_j(\tau_1+)$ (where $j>m$) since the cumulative effect of $m$ events, separated by infinitesimally small time periods, is the same as that of $m$ simultaneous events. Thus we have proved that $\bar\theta^{(n)}(t_*^{(n)}+)\xrightarrow[n\to\infty]{} \bar\theta(\tau_1+)$. We now can iterate this procedure, replacing $\bar\theta(0)$ and $\bar\theta^{(n)}(0)$ with, respectively, $\bar\theta(\tau_1+)$ and $\bar\theta^{(n)}(t_*^{(n)}+)$. One shows that $\bar\theta^{(n)}(t)\xrightarrow[n\to\infty]{}\bar\theta(t)$ for any $t\in (\tau_1,\tau_2)$ and for large $n$ the group of oscillators with indices from $I_2$ fires their events at times converging to $\tau_2$. The value of the $n$th state $\bar\theta^{(n)}$ after the last of these events converges to $\bar\theta(\tau_2+)$, and so on. 
\end{IEEEproof} \subsection{Synchronization criterion}\label{subsec.synch} Up to now, we have not assumed any connectivity properties required to ensure the oscillators' synchronization. The \emph{minimal} assumption of this type is the existence of a root (or, equivalently, a directed spanning tree) in the interaction graph $G$. In a graph without roots there exist two non-empty subsets of nodes, which have no incoming arcs and thus are ``isolated'' from each other and the remaining graph~\cite[Theorem 5]{Moro:05}. Obviously, the corresponding two groups of oscillators are totally independent of each other and thus do not synchronize. The following theorem shows that under Assumptions~\ref{ass.psi} and~\ref{ass.initial} rootedness is sufficient for the synchronization~\eqref{eq.sync}. \begin{theorem}\label{thm.fix} Suppose that Assumptions~\ref{ass.psi} and~\ref{ass.initial} hold, and the interaction graph $G$ is \emph{rooted}. Then the phases synchronize in the sense of~\eqref{eq.sync}. \end{theorem} For \emph{strongly} connected interaction graphs and special PRC maps Theorem~\ref{thm.fix} has been established in \cite{WangNunezDoyle:2012,WangNunezDoyle:2015-1}. The fundamental property of the dynamics~\eqref{eq.freq},~\eqref{eq.shift} (see the proofs of Theorem~1 in~\cite{WangNunezDoyle:2012} and Theorem~1 in~\cite{WangNunezDoyle:2015-1}) is the ``contraction'' of the minimal arc containing the phases after each ``full round'' of the oscillators' firing. As soon as each of the $N$ oscillators has fired (some of them can fire twice), the diameter of the ensemble is decreased. This property, however, \emph{does not} hold for a general rooted graph, as shown by the following example. \emph{Example 2.} Consider $N=3$ oscillators with the period $T=2\pi$~s that are connected in a chain $1\mapsto 2\mapsto 3$; thus $1$ is a root node, yet the graph is not strongly connected. Suppose that the oscillators start with $\theta_1(0)=0$, $\theta_2(0)=\theta_3(0)=\theta_0<\pi$. The events fired by oscillators $2$ and $3$ at the instant $t_1=2\pi-\theta_0$ do not affect oscillator $1$, and hence $\theta_1(t_1+)=\theta_1(t_1)=2\pi-\theta_0$. The latter oscillator fires at time $t_2=2\pi$, after which one has $\theta_1(t_2+)=0$, $\theta_2(t_2+)=\Psi(\theta_0)\in (0;\theta_0)$ and $\theta_3(t_2+)=\theta_0$. Thus after the full round of firing the diameter remains equal to $\theta_0$. Considering a similar chain of $N>3$ oscillators, its diameter in fact may remain unchanged even after $(N-2)$ full rounds of firing (each oscillator has fired at least $N-2$ times). It appears, however, that after $N-1$ ``full rounds'' of firing the diameter always decreases, which is the key idea of the proof of Theorem~\ref{thm.fix}. \begin{lemma}\label{lem.shrink} Under the assumptions of Theorem~\ref{thm.fix}, let $T_N\dfb 3T(N-1)/2$, and thus on $[0;T_N]$ each oscillator fires at least $(N-1)$ events. Then $d( \bar\theta(T_N))<d(\bar\theta(0))$ unless $d(\bar\theta(0))=0$. \end{lemma} \begin{IEEEproof} Introducing the shortest arc $L(t)$ from Theorem~\ref{thm.exist}, consider the sets of its endpoints $\{e^{\imath\theta_j(t)}:j\in J_-(t)\}$ and $\{e^{\imath\theta_j(t)}:j\in J_+(t)\}$. The shortest turn from the phases, indexed by $J_-(t)$, to those indexed by $J_+(t)$ is counterclockwise, see Fig.\ref{fig.j}.
A closer look at the proof of statements 2 and 3 in Theorem~\ref{thm.exist} reveals that at any time $t_*$, when some oscillators fire, the following alternatives are possible: \begin{enumerate}[A)] \item none of the ``extremal'' oscillators from $J_-(t_*)\cup J_+(t_*)$ is affected by the events; in this case $J_-(t_*+)=J_-(t_*)$, $J_+(t_*+)=J_+(t_*)$ and $d(t_*+)=d(t_*)$; \item some of the ``extremal'' oscillators are affected, however $d(t_*+)=d(t_*)$; this implies that $J_-(t_*+)\subseteq J_-(t_*)$, $J_+(t_*+)\subseteq J_+(t_*)$ and one of these inclusions is strict; \item some of the ``extremal'' oscillators are affected, and the diameter is decreased: $d(t_*+)<d(t_*)$. \end{enumerate} Notice that during the ``full round'' of events (each oscillator fires at least once) the second or the third alternative must take place. Indeed, suppose that $J_-$ and $J_+$ remain constant during such a round. The graph's rootedness implies~\cite[Theorem 5]{Moro:05} that at least one of the corresponding sets of nodes has an arc coming from outside. That is, a node $j\in J_-$ (or $j\in J_+$) exists, having a neighbor $i\in N_j$ beyond $J_-$ (respectively, beyond $J_+$). At the instant $t$ when oscillator $i$ fires, $\theta_i(t)=2\pi$ and thus $\theta_j(t)\not\in\{0;2\pi\}$, since otherwise $\theta_i(t)$ would also be an endpoint. Thus either $L(t+)\subsetneq L(t)$ and $d(t+)<d(t)$, or $\theta_j(t+)$ is not an endpoint of $L(t+)$. On each interval of length $3T/2$ all oscillators fire. Assuming that $d(T_N)=d(0)>0$, we have $|J_-(T_N)|+|J_+(T_N)|\le |J_-(0)|+|J_+(0)|-(N-1)\le 1$, thus arriving at a contradiction. The lemma is proved. \begin{figure} \center \begin{tikzpicture}[scale=0.4] \draw (-2.5,0) -- (2.5,0); \draw (0,-2.5) -- (0,2.5); \draw (0,0) circle (2cm); \draw (0,-2) [red, very thick] arc (-90:45:2cm); \node [label=above right:$\theta_2(t);\theta_4(t);\theta_6(t)$] at (1.414,1.414) [circle,draw,fill, minimum size=1mm] {}; \node [label=below right:$\theta_3(t);\theta_5(t)$] at (0,-2) [circle,draw,fill, minimum size=1mm] {}; \node [label=below right:$\theta_1(t)$] at (2,0) [circle,draw,fill, minimum size=1mm] {}; \end{tikzpicture} \caption{Example: $L$ is drawn red, $J_-=\{3,5\}$, $J_+=\{2,4,6\}$}\label{fig.j} \end{figure} \end{IEEEproof} \begin{corollary}\label{cor.strict-decrease} For any constants $d_1,d_2$ such that $0<d_1<d_2<\pi$ there exists $\ve=\ve(d_1,d_2)>0$ such that $d(\bar\theta(T_N))-d(\bar\theta(0))\le -\ve$ for any solution with $d_1\le d(\bar\theta(0))\le d_2$. \end{corollary} \begin{IEEEproof} Assume, on the contrary, that a sequence of solutions $\bar\theta^{(n)}(t)$ exists such that $d_1\le d(\bar\theta^{(n)}(0))\le d_2$, however $d(\bar\theta^{(n)}(T_N))-d(\bar\theta^{(n)}(0))\ge -1/n$. Since the set $\{\bar\theta\in [0;2\pi]^N:d_1\le d(\bar\theta)\le d_2\}$ is compact, one may assume, without loss of generality, that the limit $\bar\theta_0\dfb\lim_{n\to\infty}\bar\theta^{(n)}(0)$ exists. Consider the solution $\bar\theta(t)$ with the initial condition $\bar\theta(0)=\bar\theta_0$. Arbitrarily close to $T_N$ there exists a time instant $t_0\le T_N$ such that none of the oscillators fires at $t_0$ and $d(\bar\theta(t_0))=d(\bar\theta(T_N))$. Thanks to Lemma~\ref{lem.contin}, one has $\bar\theta^{(n)}(t_0)\to \bar\theta(t_0)$ as $n\to\infty$, and thus $d(\bar\theta(T_N))=d(\bar\theta(t_0))\ge d(\bar\theta(0))\ge d_1>0$ (the first inequality follows by passing to the limit in $d(\bar\theta^{(n)}(t_0))\ge d(\bar\theta^{(n)}(T_N))\ge d(\bar\theta^{(n)}(0))-1/n$), arriving at a contradiction with Lemma~\ref{lem.shrink}. \end{IEEEproof} The proof of Theorem~\ref{thm.fix} is now immediate.
\begin{IEEEproof}[Proof of Theorem~\ref{thm.fix}] Since the diameter is non-increasing, the limit $d_1\dfb\lim_{t\to\infty}d(\bar\theta(t))$ exists. It suffices to prove that $d_1=0$. Suppose, on the contrary, that $d_1>0$. Denoting $d_2\dfb d(\bar\theta(0))$, one has $d_1\le d(\bar\theta(t))\le d_2$ for any $t$ due to Theorem~\ref{thm.exist}. Corollary~\ref{cor.strict-decrease} implies that $0\le d(\bar\theta(kT_N))\le d_2-k\ve$ for any $k\ge 1$, where $\ve>0$ is a constant, arriving at a contradiction. Hence the oscillators synchronize~\eqref{eq.sync}. \end{IEEEproof} \section{Numerical simulations} In this section, we confirm the result of Theorem~\ref{thm.fix} by a numerical test. We simulate a network of $N=4$ identical oscillators, whose natural frequency is $\omega=1$\,rad/s (and the period $T=2\pi$\,s), starting with phases $\theta_1=\pi/2$, $\theta_2=0.3\pi$, $\theta_3=0.03\pi$ and $\theta_4=0.9\pi$, thus $d(\bar\theta(0))=0.87\pi<\pi$. We have simulated the dynamics of the oscillators under the interaction graph shown in Fig.~\ref{fig.graph}. Notice that the graph in Fig.~\ref{fig.graph} is rooted but \emph{not} strongly connected because the phase of the ``leading'' oscillator $1$ is unaffected by the others. \begin{figure}[h] \center \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm, thick, scale=0.5,main node/.style={circle,fill=gray!20,draw,font=\sffamily\Large\bfseries,scale=0.5}] \node[main node] (1) {1}; \node[main node] (2) [below right of=1,node distance=3cm] {2}; \node[main node] (3) [above right of=2, node distance=3cm] {3}; \node[main node] (4) [below of=2, node distance=3cm] {4}; \path[every node/.style={font=\sffamily\small}] (1) edge node [left] {$e_1$} (2) (2) edge [bend left] node [left] {$e_2$} (3) (3) edge [bend left] node [right] {$e_3$} (2) (2) edge [bend left] node [right] {$e_4$} (4) (4) edge [bend left] node [left] {$e_5$} (2); \end{tikzpicture} \caption{The network topology}\label{fig.graph} \end{figure} Two numerical tests have been carried out. \textbf{Test 1} deals with identical oscillators, having the delay-advanced PRC $\Phi(\theta)=-\sin\theta$ (Fig.~\ref{fig.fi}a) and the gain $c=0.4$. \textbf{Test 2} deals with a heterogeneous network, where oscillators $2$--$4$ have identical PRC maps $\Phi_2(\theta)=\Phi_3(\theta)=\Phi_4(\theta)=-\sin\theta$ yet different gains $c_2=0.4$, $c_3=0.5$, $c_4=0.6$. Furthermore, the leading oscillator $1$ has the gain $c_1=0.6$ and the following piecewise-linear PRC map (Fig.~\ref{fig.fi}b) \be\label{eq.piecewise-lin} \Phi_1(\theta)=\begin{cases} -\theta,&\theta\in [0;\pi/2)\\ \theta-\pi,&\theta\in [\pi/2;3\pi/2]\\ 2\pi-\theta,&\theta\in (3\pi/2;2\pi]. \end{cases} \ee \begin{figure}[h] \center \begin{subfigure}[b]{0.49\columnwidth} \includegraphics[width=\columnwidth]{phi.eps} \caption{$\Phi(\theta)=-\sin\theta$} \end{subfigure} \begin{subfigure}[b]{0.49\columnwidth} \includegraphics[width=\columnwidth]{phi_new.eps} \caption{Piecewise-linear PRC~\eqref{eq.piecewise-lin}} \end{subfigure} \caption{Two delay-advanced PRC maps}\label{fig.fi} \end{figure} In both numerical examples the oscillators synchronize, i.e.~\eqref{eq.sync} holds. The corresponding dynamics of the oscillators' phases $\theta_1$ (blue), $\theta_2$ (orange), $\theta_3$ (green) and $\theta_4$ (red) are shown in Fig.~\ref{fig.phase}. Fig.~\ref{fig.event} illustrates the corresponding event diagrams: the point $(t,i)$ on the plot in Fig.~\ref{fig.event} (where $t\ge 0$ and $i\in 1:4$) indicates that the $i$th oscillator fires an event at time $t$.
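To make the simulation setup easy to reproduce, the following Python fragment gives a minimal event-driven sketch of Test~1. It is \emph{not} the code used to produce the figures below, and it assumes that a firing node resets its own phase to zero while each of its out-neighbours $j$ undergoes the jump $\theta_j\mapsto\theta_j+c_j\Phi_j(\theta_j)$ (our reading of the phase-shift dynamics~\eqref{eq.shift}):
\begin{verbatim}
import numpy as np

# Event-driven sketch of Test 1: N = 4, omega = 1 rad/s,
# PRC Phi(theta) = -sin(theta), all gains c_j = 0.4.
# Assumed jump rule: theta_j <- theta_j + c_j*Phi_j(theta_j)
# whenever an in-neighbour of oscillator j fires.
omega = 1.0
c = np.array([0.4, 0.4, 0.4, 0.4])
theta = np.pi * np.array([0.5, 0.3, 0.03, 0.9])       # initial phases
out_neighbours = {0: [1], 1: [2, 3], 2: [1], 3: [1]}  # arcs e_1,...,e_5

def prc(x):                          # delay-advanced PRC of Test 1
    return -np.sin(x)

t, t_end, events = 0.0, 100.0, []
while t < t_end:
    dt = (2 * np.pi - theta).min() / omega   # time to the next firing
    t += dt
    theta += omega * dt
    firing = np.where(theta >= 2 * np.pi - 1e-12)[0]
    for i in firing:
        events.append((t, i + 1))
        for j in out_neighbours[i]:
            if j not in firing:      # simultaneously firing nodes just reset
                theta[j] += c[j] * prc(theta[j])
    theta[firing] = 0.0              # reset the oscillators that fired
print(events[-6:])
\end{verbatim}
Under these assumptions the sketch can be used to check the qualitative behaviour reported below, namely that the firing events of all four oscillators eventually become (almost) simultaneous.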
\begin{figure}[h] \center \begin{subfigure}[b]{0.49\columnwidth} {\includegraphics[width=\columnwidth]{phase04.eps}} \caption{Test 1} \end{subfigure} \begin{subfigure}[b]{0.49\columnwidth} {\includegraphics[width=\columnwidth]{case1_phases.eps}} \caption{Test 2} \end{subfigure} \caption{Dynamics of the phases $\theta_i(t)$}\label{fig.phase} \end{figure} \begin{figure}[h] \center \begin{subfigure}[b]{0.49\columnwidth} {\includegraphics[width=\columnwidth]{events04.eps}} \caption{Test 1} \end{subfigure} \begin{subfigure}[b]{0.49\columnwidth} {\includegraphics[width=\columnwidth]{case1_events.eps}} \caption{Test 2} \end{subfigure} \caption{The diagrams of events.}\label{fig.event} \end{figure} Finally, Fig.~\ref{fig.circle} illustrates synchronization of phases on the unit circle $S^1$: plots (a)-(d) correspond to Test~1, and (e)-(h) illustrate the solutions obtained in Test~2. \begin{figure}[h] \center \begin{subfigure}[b]{0.24\columnwidth} {\includegraphics[width=\columnwidth,height=\columnwidth]{c0.eps}} \caption{$t=0$s} \end{subfigure} \begin{subfigure}[b]{0.24\columnwidth} {\includegraphics[width=\columnwidth,height=\columnwidth]{c04_22.eps}} \caption{$t=22$s} \end{subfigure} \begin{subfigure}[b]{0.24\columnwidth} {\includegraphics[width=\columnwidth,height=\columnwidth]{c04_42.eps}} \caption{$t=42$s} \end{subfigure} \begin{subfigure}[b]{0.24\columnwidth} {\includegraphics[width=\columnwidth,height=\columnwidth]{c04_100.eps}} \caption{$t=100$s} \end{subfigure} \begin{subfigure}[b]{0.24\columnwidth} {\includegraphics[width=\columnwidth,height=\columnwidth]{c0.eps}} \caption{$t=0$s} \end{subfigure} \begin{subfigure}[b]{0.24\columnwidth} {\includegraphics[width=\columnwidth,height=\columnwidth]{case1_t22.eps}} \caption{$t=22$s} \end{subfigure} \begin{subfigure}[b]{0.24\columnwidth} {\includegraphics[width=\columnwidth,height=\columnwidth]{case1_t42.eps}} \caption{$t=42$s} \end{subfigure} \begin{subfigure}[b]{0.24\columnwidth} {\includegraphics[width=\columnwidth,height=\columnwidth]{case1_t100.eps}} \caption{$t=100$s} \end{subfigure} \caption{Phases on $S^1$ at four time instants: the plots on top are for Test 1 and the plots on the bottom are for Test 2.}\label{fig.circle} \end{figure} \section{Conclusions and future works}\label{sec.concl} In this paper, we have examined the dynamics of networks of pulse-coupled oscillators of the \emph{delay-advance} type. The models, studied in this paper, describe some biological networks~\cite{GoelErmentrout:2002,IzhikevichBook} and naturally arise in problems of synchronization of networked clocks~\cite{WangDoyle:2012,WangNunezDoyle:2012}. We have proved that the oscillators get synchronized if the maximal distance between the initial phases is less than $\pi$ and the interaction graph is static and rooted (has a directed spanning tree), which is the \emph{minimal} possible connectivity assumption. An extension to time-varying \emph{repeatedly} rooted graphs is also possible. An important problem, which is beyond the scope of this paper and remains open even for strongly connected graphs, is synchronization under \emph{general} initial conditions. The existing results deal mainly with all-to-all or cyclic graphs \cite{MirolloStrogatz:1990,DrorCanavier:1999,GoelErmentrout:2002,LuckenYanchuk:2012,WangNunezDoyle:2015} which guarantee some ordering of the oscillators' events and global contraction of the return map. 
For instance, as was noticed in~\cite{WangNunezDoyle:2015-1}, for the PRC map \eqref{eq.prc-lin}, the coupling gain $0.5\le c\le 1$ and the complete interaction graph, the diameter of the ensemble becomes less than $\pi$ after the first event \emph{independently} of the initial condition. Another result, reported in~\cite{WangNunezDoyle:2015-1}, ensures synchronization over ``strongly rooted'' (star-shaped) and connected bidirectional graphs. However, as noticed in Remark~\ref{rem.no-synchronize}, in general the phases of pulse-coupled oscillators do not synchronize and can, e.g., split into several clusters~\cite{LuckenYanchuk:2012}; similar effects may occur due to communication delays and negative (repulsive) couplings~\cite{MalladaTang:2013}. Even more complicated is the problem of synchronization between oscillators of different periods. One of the first results in this direction has been obtained in the recent paper~\cite{NunezWangTeelDoyle:2016}. \bibliographystyle{IEEEtran}
2,869,038,156,186
arxiv
\section{Introduction} \label{sec:introduction} The mathematical problem of optimal mass transport has a long history dating back to its introduction in \textcite{Monge1781}, with key contributions by \textcite{Kantorovich1942} and \textcite{Kantorovich1957}. It has recently received increased interest due to numerous applications in machine learning; see, e.g., the recent overview of \textcite{Kolouri2017} and the references therein. In a nutshell, the (discrete) problem of optimal transport in its Kantorovich form is to compute for given mass distributions ${{a}}$ and ${{b}}$ with equal mass a transport plan, i.e., an assignment of how much mass of ${{a}}$ at some point should be moved to another point to match the mass in ${{b}}$. This should be done in a way such that some transport cost (usually proportional to the amount of mass and dependent on the distance) is minimized. This leads to a linear optimization problem which has been well studied, but its application in machine learning has been problematic due to large memory requirement and long run time. Recently, \textcite{Cuturi2013} proposed a method that overcomes the memory requirement by so-called entropic regularization that has found broad applications; see, e.g., \textcite{Carlier2017, Cuturi2014, Frogner2015}. The resulting iteration resembles the so-called Sinkhorn--Knopp method from \textcite{Sinkhorn1967} for matrix balancing and allows for a simple and efficient implementation. \subsection{Our contribution} \label{sec:contribution} In this work, we show that the Sinkhorn--Knopp method can be viewed as an approximate Newton method and derive a full Newton method for entropically regularized optimal transport problems that is demonstrated to perform significantly better for small entropic regularization parameters. Here, compared to \textcite{Cuturi2013}, the key idea is to apply a logarithmic transform to the variables. This paper is organized as follows. In \cref{sec:sinkhorn_newton_method}, we state the Kantorovich formulation of optimal transport together with its dual which serves as the basis of the derived algorithm. Afterwards, we establish local quadratic convergence and discuss the relation of the proposed Newton method to the Sinkhorn--Knopp iteration. The performance and parameter dependence of the proposed method are illustrated with numerical examples in \cref{sec:numerical_examples}. \Cref{sec:proof} contains the proof of the key estimate for quadratic convergence, and \cref{sec:conclusion} concludes the paper. \subsection{Notation} \label{sec:notation} In the following, $\mathbb{1}_{n}$ represents the $n$-dimensional vector with all ones and $\mathbb{1}_{n, m}$ refers to the $n\times m$ matrix with all ones. Moreover, $\Sigma_{n} \coloneqq \{{{a}}\in\mathbb{R}^{n}_{+} : \mathbb{1}_{n}^{\top}{{a}} = 1\}$ denotes the probability simplex in $\mathbb{R}^{n}_{+}$ whose elements are called \emph{probability vectors}, or equivalently, \emph{histograms}. For two histograms ${{a}}\in\Sigma_{n}$ and ${{b}}\in\Sigma_{m}$, \begin{equation} \label{eq:admissible_transport_plans} {{U}}({{a}}, {{b}}) \coloneqq \{{{P}}\in\mathbb{R}^{n\times m}_{+} : {{P}}\mathbb{1}_{m} = {{a}}, \ {{P}}^{\top}\mathbb{1}_{n} = {{b}}\} \end{equation} is the set of admissible \emph{coupling matrices}. In the context of optimal transport, the elements of ${{U}}({{a}}, {{b}})$ are also referred to as \emph{transport plans}. 
Histograms ${{a}}$ and ${{b}}$ can be viewed as mass distributions, and an entry ${{P}}_{ij}$ of a transport plan ${{P}}\in{{U}}({{a}}, {{b}})$ can be interpreted as the amount of mass moved from ${{a}}_i$ to ${{b}}_j$. We refer to the Frobenius inner product of two matrices ${{P}}, {{P}}'\in\mathbb{R}^{n\times m}$ as $\langle {{P}}, {{P}}' \rangle \coloneqq \sum_{ij}{{P}}_{ij}{{P}}'_{ij}$. At the same time, $\langle{{a}}, {{a}}'\rangle \coloneqq \sum_{i}{{a}}_{i}{{a}}_{i}'$ denotes the standard dot product of two vectors ${{a}}, {{a}}'\in\mathbb{R}^{n}$. Finally, $\diag({{a}})\in\mathbb{R}^{n\times n}$ is defined as the diagonal matrix with $\diag({{a}})_{ii} \coloneqq {{a}}_{i}$ and $\diag({{a}})_{ij} \coloneqq 0$ for $i \neq j$, and ${{a}}\odot{{a}}'\coloneqq \diag({{a}}){{a}}'$ is the Hadamard product (i.e., the component-wise product) of ${{a}}$ and ${{a}}'$. \section{Sinkhorn--Newton method} \label{sec:sinkhorn_newton_method} In this section we derive our Sinkhorn--Newton method. We start by introducing the problem of entropically regularized optimal transport in \cref{sec:problem_setting}. Afterwards, in \cref{sec:algorithm}, we present our approach, which is essentially applying Newton's method to the optimality system associated with the transport problem and its dual, before we discuss its local quadratic convergence in \cref{sec:convergence}. In \cref{sec:relation_sinkhorn_knopp}, we finally establish a connection between our Newton iteration and the Sinkhorn--Knopp type iteration introduced by \textcite{Cuturi2013}. \subsection{Problem setting} \label{sec:problem_setting} Let ${{a}}\in\Sigma_{n}$ and ${{b}}\in\Sigma_{m}$ be given histograms together with a non-negative cost matrix ${{C}}\in\mathbb{R}^{n\times m}$. The entropically regularized Kantorovich problem of optimal mass transport between ${{a}}$ and ${{b}}$ is \begin{equation} \label{eq:kantorovich_entropically} \tag{P$_{\epsilon}$} \inf_{{{P}} \in {{U}}({{a}}, {{b}})} \ \langle{{C}}, {{P}}\rangle + \epsilon \langle{{P}}, \log{{P}} - \mathbb{1}_{n,m}\rangle, \end{equation} where the logarithm is applied componentwise to ${{P}}$ and $\epsilon > 0$ is the regularization strength. The variables ${{P}}_{ij}$ indicate how much of ${{a}}_{i}$ ends up in ${{b}}_{j}$, while ${{C}}_{ij}$ is the corresponding transport cost per unit mass. Abbreviating ${{K}}\coloneqq \exp(-{{C}} / \epsilon)$, standard convex duality theory leads us to the dual problem \begin{equation} \label{eq:dual_problem} \tag{D$_{\epsilon}$} \sup_{{{f}}\in\mathbb{R}^{n}, {{g}}\in\mathbb{R}^{m}} - \langle{{a}}, {{f}}\rangle - \langle{{b}}, {{g}}\rangle - \epsilon \langle \mathrm{e}^{-{{f}} / \epsilon}, {{K}} \mathrm{e}^{-{{g}} / \epsilon}\rangle, \end{equation} where ${{f}}$ and ${{g}}$ are the dual variables and the exponential function is applied componentwise. The problems \eqref{eq:kantorovich_entropically} and \eqref{eq:dual_problem} are linked via the optimality conditions \begin{subequations} \begin{align} {{P}} &= \diag(\mathrm{e}^{-{{f}} / \epsilon}){{K}}\diag(\mathrm{e}^{-{{g}} / \epsilon})\label{eq:oc_primal_dual}\\ {{a}} &= \diag(\mathrm{e}^{-{{f}} / \epsilon}){{K}}\mathrm{e}^{-{{g}} / \epsilon}\label{eq:oc_source_mass_conservation}\\ {{b}} &= \diag(\mathrm{e}^{-{{g}} / \epsilon}){{K}}^{\top}\mathrm{e}^{-{{f}} / \epsilon}\label{eq:oc_sink_mass_conservation}. \end{align} \label{eq:oc}% \end{subequations} The first condition \eqref{eq:oc_primal_dual} connects the optimal transport plan with the dual variables. 
The conditions \eqref{eq:oc_source_mass_conservation} and \eqref{eq:oc_sink_mass_conservation} simply reflect the feasibility of ${{P}}$ for \eqref{eq:kantorovich_entropically}, i.e., for the mass conservation constraints in \eqref{eq:admissible_transport_plans}. \subsection{Algorithm} \label{sec:algorithm} Finding dual vectors ${{f}}$ and ${{g}}$ that satisfy \eqref{eq:oc_source_mass_conservation} and \eqref{eq:oc_sink_mass_conservation} is equivalent to finding a root of the function \begin{equation} \label{eq:oc_function} F({{f}}, {{g}}) \coloneqq \begin{pmatrix*}[l] {{a}} - \diag(\mathrm{e}^{-{{f}} / \epsilon}){{K}}\mathrm{e}^{-{{g}} / \epsilon}\\ {{b}} - \diag(\mathrm{e}^{-{{g}} / \epsilon}){{K}}^{\top}\mathrm{e}^{-{{f}} / \epsilon} \end{pmatrix*}, \end{equation} i.e., to solving $F({{f}}, {{g}}) = 0$. A Newton iteration for this equation is given by \begin{equation} \label{eq:newton_iteration} \begin{pmatrix} {{f}}^{k+1}\\ {{g}}^{k+1} \end{pmatrix} = \begin{pmatrix} {{f}}^{k}\\ {{g}}^{k} \end{pmatrix} - {{J}}_{F}({{f}}^{k}, {{g}}^{k})^{-1}F({{f}}^{k}, {{g}}^{k}). \end{equation} The Jacobian matrix of $F$ is \begin{equation} \label{eq:jacobian} {{J}}_{F}({{f}}, {{g}}) = \frac{1}{\epsilon} \begin{bmatrix} \diag({{P}}\mathbb{1}_{m}) &{{P}}\\ {{P}}^{\top} &\diag({{P}}^{\top}\mathbb{1}_{n}) \end{bmatrix}, \end{equation} where we used \eqref{eq:oc_primal_dual} to simplify the notation. Performing the Newton step \eqref{eq:newton_iteration} requires finding a solution of the linear equation system \begin{equation} \label{eq:newton_step_les} {{J}}_{F}({{f}}^{k}, {{g}}^{k})\left( \begin{matrix} \delta{{f}}\\ \delta{{g}} \end{matrix}\right) = -F({{f}}^{k}, {{g}}^{k}). \end{equation} The new iterates are then given by \begin{subequations} \begin{align} {{f}}^{k+1} &= {{f}}^{k} + \delta{{f}}\\ {{g}}^{k+1} &= {{g}}^{k} + \delta{{g}}. \end{align} \label{eq:dual_variables_update}% \end{subequations} If one is only interested in the optimal transport plan, then it is actually not necessary to keep track of the dual iterates ${{f}}^{k}$ and ${{g}}^{k}$ after initialization (in our subsequent experiments, we use ${{f}}^{0} = {{g}}^{0} = 0$ and hence, ${{P}}^{0} = {{K}}$). This is true because \eqref{eq:newton_step_les} can be expressed entirely in terms of \begin{equation} \label{eq:p_k} {{P}}^{k} \coloneqq \diag(\mathrm{e}^{-{{f}}^{k} / \epsilon}){{K}}\diag(\mathrm{e}^{-{{g}}^{k} / \epsilon}), \end{equation} and thus, using \eqref{eq:dual_variables_update} and \eqref{eq:p_k}, we obtain the multiplicative update rule \begin{equation} \label{eq:primal_variables_update} \begin{aligned}[b] {{P}}^{k + 1} &= \diag(\mathrm{e}^{-[{{f}}^{k} + \delta{{f}}] / \epsilon}){{K}}\diag(\mathrm{e}^{-[{{g}}^{k} + \delta{{g}}] / \epsilon})\\ &= \diag(\mathrm{e}^{-\delta{{f}} / \epsilon}){{P}}^{k}\diag(\mathrm{e}^{-\delta{{g}} / \epsilon}). \end{aligned} \end{equation} In this way, we obtain an algorithm which only operates with primal variables, see \cref{alg:sinkhorn-newton-primal}. In applications where the storage demand for the plans ${{P}}^{k}$ is too high and one is only interested in the optimal value, there is another form which does not form the plans ${{P}}^{k}$, but only the dual variables ${{f}}^{k}$ and ${{g}}^{k}$ and which can basically operate matrix-free. We sketch it as \cref{alg:sinkhorn-newton-dual} below. 
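For concreteness, one step of the primal iteration (\cref{alg:sinkhorn-newton-primal} below) can be sketched in a few lines of Python. This is an illustration only: the variable names are ours, and a dense minimum-norm solve is used in place of the preconditioned CG method discussed in \cref{sec:convergence}.
\begin{verbatim}
import numpy as np

# One step of the primal Sinkhorn--Newton iteration (cf. Algorithm 1).
# Sketch only: a dense least-squares (minimum-norm) solve replaces the
# preconditioned CG method used in the actual experiments.
def sinkhorn_newton_step(P, a, b, eps):
    n, m = P.shape
    ak, bk = P.sum(axis=1), P.sum(axis=0)      # approximate histograms
    J = np.block([[np.diag(ak), P],
                  [P.T, np.diag(bk)]]) / eps   # (singular) Jacobian
    rhs = np.concatenate([ak - a, bk - b])
    delta = np.linalg.lstsq(J, rhs, rcond=None)[0]
    df, dg = delta[:n], delta[n:]
    # multiplicative update diag(e^{-df/eps}) P diag(e^{-dg/eps})
    return np.exp(-df / eps)[:, None] * P * np.exp(-dg / eps)[None, :]
\end{verbatim}
Starting from ${{P}}^{0} = \exp(-{{C}}/\epsilon)$ and iterating this map until the marginals of ${{P}}^{k}$ match ${{a}}$ and ${{b}}$ up to the desired tolerance realizes the primal variant; the minimum-norm solution automatically selects the update orthogonal to the kernel of the Jacobian identified in \cref{lem:jacobian_sym_pos_def} below.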
\begin{algorithm}[tb] \caption{Sinkhorn-Newton method in primal variable}\label{alg:sinkhorn-newton-primal} \begin{algorithmic}[1] \STATE \textbf{Input:} ${{a}}\in\Sigma_{n}$, ${{b}}\in\Sigma_{m}$, ${{C}}\in\mathbb{R}^{n\times m}$ \STATE \textbf{Initialize:} ${{P}}^{0} = \exp(-{{C}} /\epsilon)$, set $k=0$ \REPEAT \STATE Compute approximate histograms \[ {{a}}^{k} = {{P}}^{k}\mathbb{1}_{m},\quad {{b}}^{k} = ({{P}}^{k})^{\top}\mathbb{1}_{n}. \] \STATE Compute updates $\delta{{f}}$ and $\delta{{g}}$ by solving \[ \frac{1}{\epsilon} \begin{bmatrix} \diag({{a}}^{k}) &{{P}}^{k}\\ ({{P}}^{k})^{\top} &\diag({{b}}^{k}) \end{bmatrix} \begin{bmatrix} \delta{{f}}\\\delta{{g}} \end{bmatrix} = \begin{bmatrix} {{a}}^{k}-{{a}}\\ {{b}}^{k}-{{b}} \end{bmatrix}. \] \STATE Update ${{P}}$ by \[ {{P}}^{k + 1} = \diag(\mathrm{e}^{-\delta{{f}} / \epsilon}){{P}}^{k}\diag(\mathrm{e}^{-\delta{{g}} / \epsilon}). \] \STATE $k\leftarrow k+1$ \UNTIL{some stopping criterion is fulfilled} \end{algorithmic} \end{algorithm} \begin{algorithm}[tb] \caption{Sinkhorn-Newton method in dual variables}\label{alg:sinkhorn-newton-dual} \begin{algorithmic}[1] \STATE \textbf{Input:} ${{a}}\in\Sigma_{n}$, ${{b}}\in\Sigma_{m}$, function handle for application of ${{K}}$ and ${{K}}^{\top}$ \STATE \textbf{Initialize:} ${{f}}^{0}\in\mathbb{R}^{n}$, ${{g}}^{0}\in\mathbb{R}^{m}$, set $k=0$ \REPEAT \STATE Compute approximate histograms \[ {{a}}^{k} = \mathrm{e}^{-{{f}}^{k} / \epsilon}\odot {{K}}\mathrm{e}^{-{{g}}^{k} /\epsilon},\quad {{b}}^{k} = \mathrm{e}^{-{{g}}^{k} / \epsilon}\odot {{K}}^{\top}\mathrm{e}^{-{{f}}^{k} /\epsilon}. \] \STATE Compute updates $\delta{{f}}$ and $\delta{{g}}$ by solving \[{{M}} \begin{bmatrix} \delta{{f}}\\\delta{{g}} \end{bmatrix} = \begin{bmatrix} {{a}}^{k}-{{a}}\\ {{b}}^{k}-{{b}} \end{bmatrix} \] where the application of ${{M}}$ is given by \[ {{M}} \begin{bmatrix} \delta{{f}}\\\delta{{g}} \end{bmatrix} = \frac1\epsilon \begin{bmatrix} {{a}}^{k}\odot\delta{{f}} + \mathrm{e}^{-{{f}}^{k}/\epsilon}\odot {{K}}(\mathrm{e}^{-{{g}}^{k}/\epsilon}\odot\delta{{g}})\\ {{b}}^{k}\odot\delta{{g}} + \mathrm{e}^{-{{g}}^{k}/\epsilon}\odot {{K}}^{\top}(\mathrm{e}^{-{{f}}^{k}/\epsilon}\odot\delta{{f}}) \end{bmatrix}. \] \STATE Update ${{f}}$ and ${{g}}$ by \[ {{f}}^{k + 1} = {{f}}^{k} + \delta{{f}},\quad {{g}}^{k + 1} = {{g}}^{k} + \delta{{g}}. \] \STATE $k\leftarrow k+1$ \UNTIL{some stopping criterion is fulfilled} \end{algorithmic} \end{algorithm} \subsection{Convergence and numerical aspects} \label{sec:convergence} In the following, we first argue that \eqref{eq:newton_step_les} is solvable. Then we show that the sequence of Newton iterates converges locally at a quadratic rate as long as the optimal transport plan satisfies ${{P}} \geq c\cdot\mathbb{1}_{n, m}$ for some constant $c > 0$. \begin{lemma} \label{lem:jacobian_sym_pos_def} For ${{f}}\in\mathbb{R}^{n}$ and ${{g}}\in\mathbb{R}^{m}$, the Jacobian matrix ${{J}}_{F}({{f}}, {{g}})$ is symmetric positive semi-definite, and its kernel is given by \begin{equation} \label{eq:jacobian_kernel} \ker \left[{{J}}_{F}({{f}}, {{g}})\right] = \spann\left\{ \left(\begin{matrix*}[l] \hphantom{-}\mathbb{1}_{n}\\ -\mathbb{1}_{m} \end{matrix*}\right) \right\}. \end{equation} \end{lemma} \begin{proof} The matrix is obviously symmetric.
For arbitrary ${\varphi}\in\mathbb{R}^{n}$ and ${\gamma}\in\mathbb{R}^{m}$, we obtain from \eqref{eq:jacobian} that \begin{equation} \label{eq:jacobian_pos_semi_def} \left(\begin{matrix} {\varphi}^{\top} &{\gamma}^{\top} \end{matrix}\right) {{J}}_{F}({{f}}, {{g}}) \left(\begin{matrix} {\varphi}\\ {\gamma} \end{matrix}\right) = \frac{1}{\epsilon}\sum_{ij}{{P}}_{ij}({\varphi}_{i} + {\gamma}_{j})^{2} \geq 0, \end{equation} which holds with equality if and only if we have ${\varphi}_{i} + {\gamma}_{j} = 0$ for all $i, j$. \end{proof} Hence, the system \eqref{eq:newton_step_les} can be solved by a conjugate gradient (CG) method. To see that, recall that the CG method iterates on the orthogonal complement of the kernel as long as the initial iterate $(\delta{{f}}^0,\delta{{g}}^0)$ is chosen from this subspace, in this case with $\mathbb{1}_{n}^{\top}\delta{{f}}^{0} = \mathbb{1}_{m}^{\top}\delta{{g}}^{0}$. Furthermore, the Newton matrix can be applied matrix-free in an efficient manner as soon as the multiplication with ${{K}} = \exp(-{{C}}/\epsilon)$ and its transpose can be done efficiently, see \cref{alg:sinkhorn-newton-dual}. This is the case, for example, if ${{C}}_{ij}$ only depends on $i-j$, and thus multiplication with ${{K}}$ amounts to a convolution. A cheap diagonal preconditioner is provided by the matrix \begin{equation} \label{eq:preconditioner} \frac{1}{\epsilon} \begin{bmatrix} \diag({{P}}^{k}\mathbb{1}_{m}) & 0\\ 0 &\diag([{{P}}^{k}]^{\top}\mathbb{1}_{n}) \end{bmatrix}. \end{equation} According to \textcite[Thm.~2.3]{Deuflhard2011}, we expect local quadratic convergence as long as \begin{equation} \label{eq:newton_condition} \norm{{{J}}_{F}({{y}}^{k})^{-1}[{{J}}_{F}({{y}}^{k}) - {{J}}_{F}({\eta})]({{y}}^{k} - {\eta})} \leq \omega \norm{{{y}}^{k} - {\eta}}^{2} \end{equation} holds for all ${\eta}\in\mathbb{R}^{n}\times\mathbb{R}^{m}$ and $k\in\mathbb{N}$, with an arbitrary norm and some constant $\omega > 0$ in a neighborhood of the solution. Here, we abbreviated ${{y}}^{k} \coloneqq ({{f}}^{k}, {{g}}^{k})$. \begin{theorem} \label{thm:newton_constant} For any $k\in\mathbb{N}$ with ${{P}}_{ij}^{k} > 0$, \eqref{eq:newton_condition} holds in the $\ell_{\infty}$-norm for \begin{equation} \label{eq:newton_constant} \omega \leq (\mathrm{e}^{\frac1\epsilon}-1)\left(1 + 2\mathrm{e}^{\frac1\epsilon}\frac{\max\big\{\norm{{{P}}^{k}\mathbb{1}_{m}}_{\infty}, \, \norm{[{{P}}^{k}]^{\top}\mathbb{1}_{n}}_{\infty}\big\}}{\min_{ij}{{P}}_{ij}^{k}}\right) \end{equation} when $\norm{{{y}}^{k}-{\eta}}_{\infty}\leq 1$. \end{theorem} We postpone the proof of \cref{thm:newton_constant} to \cref{sec:proof}. \begin{remark} In fact, one can show that necessarily $\omega\geq\mathrm{e}^{\frac1\epsilon}-1$. Indeed, if ${{y}}^k-\eta=({\varphi},0)\in\mathbb{R}^n\times\mathbb{R}^n$, then one can explicitly compute \begin{equation*} {{J}}_{F}({{y}}^{k})^{-1}[{{J}}_{F}({{y}}^{k}) - {{J}}_{F}({\eta})]({{y}}^{k} - {\eta}) =((\mathrm{e}^{{\varphi}/\epsilon}-1){\varphi},0), \end{equation*} where the exponential and the multiplication are pointwise (the calculation is detailed in the proof of \cref{thm:newton_constant}). \end{remark} Hence, if $({{f}}^{0}, {{g}}^{0})$ is chosen sufficiently close to a solution of $F({{f}}, {{g}}) = 0$, then the contraction property of Newton's method shows that the sequence of Newton iterates $({{f}}^{k}, {{g}}^{k})$, and hence ${{P}}^{k}$, remains bounded.
If the optimal plan satisfies ${{P}}^{*} \geq c\cdot\mathbb{1}_{n, m}$ for some $c > 0$, we can therefore expect local quadratic convergence of Newton's method. \subsection{Relation to Sinkhorn--Knopp} \label{sec:relation_sinkhorn_knopp} Substituting ${{u}}\coloneqq \mathrm{e}^{-{{f}} / \epsilon}$ and ${{v}} \coloneqq \mathrm{e}^{-{{g}} / \epsilon}$ in \eqref{eq:oc} shows that the optimality system can be written equivalently as \begin{subequations} \begin{align} {{P}} &= \diag({{u}}){{K}}\diag({{v}})\label{eq:oc_primal_dual_subst}\\ {{a}} &= \diag({{u}}){{K}}{{v}}\label{eq:oc_source_mass_conservation_subst}\\ {{b}} &= \diag({{v}}){{K}}^{\top}{{u}}\label{eq:oc_sink_mass_conservation_subst}. \end{align} \end{subequations} In order to find a solution of \eqref{eq:oc_source_mass_conservation_subst}--\eqref{eq:oc_sink_mass_conservation_subst}, one can apply the Sinkhorn--Knopp algorithm \cite{Sinkhorn1967} as recently proposed in \textcite{Cuturi2013}. This amounts to alternating updates in the form of \begin{subequations} \begin{align} {{u}}^{k+1} &\coloneqq \diag({{K}}{{v}}^{k})^{-1}{{a}}\label{eq:sinkhorn_1}\\ {{v}}^{k+1} &\coloneqq \diag({{K}}^{\top}{{u}}^{k+1})^{-1}{{b}}.\label{eq:sinkhorn_2} \end{align} \label{eq:sinkhorn_updates}% \end{subequations} In \eqref{eq:sinkhorn_1}, ${{u}}^{k+1}$ is updated such that ${{u}}^{k+1}$ and ${{v}}^{k}$ solve \eqref{eq:oc_source_mass_conservation_subst}, and in the subsequent \eqref{eq:sinkhorn_2}, ${{v}}^{k+1}$ is updated such that ${{u}}^{k+1}$ and ${{v}}^{k+1}$ form a solution of \eqref{eq:oc_sink_mass_conservation_subst}. If we proceed analogously to \cref{sec:algorithm} and derive a Newton iteration to find a root of the function \begin{equation} \label{eq:oc_function_subst} G({{u}}, {{v}}) \coloneqq \begin{pmatrix*}[l] \diag({{u}}){{K}}{{v}} - {{a}}\\ \diag({{v}}){{K}}^{\top}{{u}} - {{b}} \end{pmatrix*}, \end{equation} then the associated Jacobian matrix is \begin{equation} \label{eq:oc_function_jacobian_subst} {{J}}_{G}({{u}}, {{v}}) = \begin{pmatrix*}[l] \diag({{K}}{{v}}) &\diag({{u}}){{K}}\\ \diag({{v}}){{K}}^{\top} &\diag({{K}}^{\top}{{u}}) \end{pmatrix*}. \end{equation} Neglecting the off-diagonal blocks in \eqref{eq:oc_function_jacobian_subst} and using the approximation \begin{equation} \label{eq:oc_function_jacobian_subst_approx} \hat{{J}}_{G}({{u}}, {{v}}) = \begin{pmatrix*}[l] \diag({{K}}{{v}}) & 0\\ 0 &\diag({{K}}^{\top}{{u}}) \end{pmatrix*} \end{equation} to perform the Newton iteration \begin{equation} \label{eq:newton_iteration_subst} \begin{pmatrix} {{u}}^{k+1}\\ {{v}}^{k+1} \end{pmatrix} = \begin{pmatrix} {{u}}^{k}\\ {{v}}^{k} \end{pmatrix} - \hat{{J}}_{G}({{u}}^{k}, {{v}}^{k})^{-1}G({{u}}^{k}, {{v}}^{k}) \end{equation} leads us to the parallel updates \begin{subequations} \begin{align} {{u}}^{k+1} &\coloneqq \diag({{K}}{{v}}^{k})^{-1}{{a}}\label{eq:sinkhorn_newton_1}\\ {{v}}^{k+1} &\coloneqq \diag({{K}}^{\top}{{u}}^{k})^{-1}{{b}}.\label{eq:sinkhorn_newton_2} \end{align} \label{eq:sinkhorn_updates_newton}% \end{subequations} Hence, we see that a Sinkhorn--Knopp step \eqref{eq:sinkhorn_updates} simply approximates one Newton step \eqref{eq:sinkhorn_updates_newton} by neglecting the off-diagonal blocks and replacing ${{u}}^{k}$ by ${{u}}^{k+1}$ in \eqref{eq:sinkhorn_newton_2}.
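Written out, the alternating updates \eqref{eq:sinkhorn_updates} require only a few lines. The following Python fragment is again a plain sketch for illustration; the timed comparison in the next section uses MATLAB implementations of both methods.
\begin{verbatim}
import numpy as np

# Plain sketch of the Sinkhorn--Knopp iteration for entropic transport.
def sinkhorn_knopp(C, a, b, eps, iters=1000):
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)   # u^0 = v^0 = 1
    for _ in range(iters):
        u = a / (K @ v)       # enforce the source (row) constraint
        v = b / (K.T @ u)     # enforce the sink (column) constraint
    return u[:, None] * K * v[None, :]        # plan diag(u) K diag(v)
\end{verbatim}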
In our experience, neither the Newton iteration for $G({{u}},{{v}})= 0$ (which seems to work for the less general problem of matrix balancing; see \textcite{Knight2013}) nor the version of Sinkhorn--Knopp in which ${{v}}^{k+1}$ is updated using ${{u}}^{k}$ instead of ${{u}}^{k+1}$ converge. \section{Numerical examples} \label{sec:numerical_examples} We illustrate the performance of the Sinkhorn--Newton method and its behavior by several examples. We note that a numerical comparison is not straightforward as there are several possibilities to tune the method, depending on the structure at hand and on the specific goals. As illustrated in \cref{sec:algorithm}, one could take advantage of fast applications of the matrix ${{K}}$ or use less memory if one does not want to store ${{P}}$ during the iteration. Here we focus on the comparison with the usual (linearly convergent) Sinkhorn iteration. Thus, we do not aim for greatest overall speed but for a fair comparison between the Sinkhorn--Newton method and the Sinkhorn iteration. To that end, we observe that one application of the Newton matrix~\eqref{eq:jacobian} amounts to one multiplication with ${{P}}$ and ${{P}}^{\top}$ each, two coordinate-wise products and sums of vectors. For one Sinkhorn iteration we need one multiplication with ${{K}}$ and ${{K}}^{\top}$ and two additional coordinate-wise operations. Although \cref{alg:sinkhorn-newton-dual} looks a little closer to the Sinkhorn iteration, we still compare \cref{alg:sinkhorn-newton-primal}, as we did not exploit any of the special structure in ${{K}}$ or ${{P}}$. All timings are reported using MATLAB (R2017b) implementations of the methods on an Intel Xeon E3-1270v3 (four cores at 3.5\,GHz) with 16\,GB RAM. The code used to generate the results below can be downloaded from \url{https://github.com/dirloren/sinkhornnewton}. In all our experiments, we address the case $m = n$ and the considered histograms are defined on equidistant grids $\{{{x}}_{i}\}_{i = 1}^{n} \subset [0, 1]^{d}$ with $d = 2$ (in \cref{sec:comparison_sinkhorn,sec:dependence_regularization_strength}) and $d = 1$ (in \cref{sec:dependence_problem_dimension}), respectively. Throughout, the cost is chosen as quadratic, i.e., ${{C}}_{ij} \coloneqq \norm{{{x}}_{i} - {{x}}_{j}}_{2}^{2}$. Our Sinkhorn--Newton method is implemented according to \cref{alg:sinkhorn-newton-primal} and using a preconditioned CG method. The iteration is terminated as soon as the maximal violation of the constraints $\norm{{{a}}^{k} - {{a}}}_{\infty}$ and $\norm{{{b}}^{k} - {{b}}}_{\infty}$ drops below some threshold. If applicable, the same termination criterion is chosen for the Sinkhorn method, which is initialized with ${{u}}^{0} = {{v}}^{0} = \mathbb{1}_{n}$. \subsection{Comparison with Sinkhorn--Knopp} \label{sec:comparison_sinkhorn} \begin{figure*} \begin{subfigure}{0.45\linewidth} \centering \input{2d_errors_its.tikz} \caption{Errors over iterations (CG for Newton).} \label{fig:errors:its} \end{subfigure} \hspace{1cm} \begin{subfigure}{0.45\linewidth} \centering \input{2d_errors_time.tikz} \caption{Errors over run time in seconds.} \label{fig:errors:time} \end{subfigure} \caption{Performance of Sinkhorn (S) and Newton (N) iterations measured by constraint violation (viol.), distance to optimal transport cost (cost) and distance to optimal transport plan (plan).} \label{fig:errors} \end{figure*} We first address the comparison of Sinkhorn--Newton with the classical Sinkhorn iteration.
For this purpose, we discretize the unit square $[0, 1]^{2}$ using a $20\times 20$ equidistant grid $\{{{x}}_{i} = ({{x}}_{i1}, {{x}}_{i2})\}_{i = 1}^{400}\subset [0, 1]^{2}$ and take \begin{subequations} \begin{align} \label{eq:a_b_test} \tilde {{a}}_{i} &\coloneqq \mathrm{e}^{-36([{{x}}_{i1} - \frac13]^{2} - [{{x}}_{i2} -\frac13]^{2})} + 10^{-1},\\ \tilde {{b}}_{j} &\coloneqq \mathrm{e}^{-9([{{x}}_{j1} - \frac23]^{2} - [{{x}}_{j2} -\frac23]^{2})} + 10^{-1}, \end{align} \end{subequations} which are then normalized to unit mass by setting \begin{equation} \label{eq:a_b_test_2} {{a}}\coloneqq \frac{\tilde {{a}}}{\sum_{i}\tilde{{a}}_{i}} \quad \text{and} \quad {{b}}\coloneqq \frac{\tilde {{b}}}{\sum_{j}\tilde{{b}}_{j}}. \end{equation} The entropic regularization parameter is set to $\epsilon \coloneqq 10^{-3}$ and in case of Sinkhorn-Newton, the CG method is implemented with a tolerance of $10^{-13}$ and a maximum number of $34$ iterations. Moreover, the threshold for the termination criterion is chosen as $10^{-13}$. \Cref{fig:errors} shows the convergence history of the constraint violation for both iterations together with the error in the unregularized transport cost $\abs{\langle{{C}}, {{P}}^{k} - {{P}}^{*}\rangle}$, where ${{P}}^{*}$ denotes the final transport plan, and the error in the transport plan $\norm{{{P}}^{k} - {{P}}^{*}}_{1}$. In \cref{fig:errors:its}, we compare the error as a function of the iterations, where we take the total number of CG iterations for Sinkhorn--Newton to allow for a fair comparison (since both a Sinkhorn and a CG step have comparable costs, dominated by the two dense matrix--vector products ${{K}}{{v}}$, ${{K}}^{\top}{{u}}$ and ${{P}}^{\top}\delta{{f}}$, ${{P}}\delta{{g}}$, respectively). It can be seen clearly that with respect to all error measures, Sinkhorn converges linearly while Sinkhorn--Newton converges roughly quadratically, as expected, with Sinkhorn--Newton significantly outperforming classical Sinkhorn for this choice of parameters. The same behavior holds if the error is measured as a function of runtime; see \cref{fig:errors:time}. \subsection{Dependence on the regularization strength} \label{sec:dependence_regularization_strength} \begin{figure*} \begin{subfigure}{0.45\linewidth} \centering \input{mnist_e0.5_its.tikz} \caption{$\gamma=0.5$ (CG iterations)} \label{fig:mnist:05:its} \end{subfigure} \hspace{1cm} \begin{subfigure}{0.45\linewidth} \centering \input{mnist_e0.5_time.tikz} \caption{$\gamma=0.5$ (run time in seconds)} \label{fig:mnist:05:time} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \input{mnist_e0.1_its.tikz} \caption{$\gamma=0.1$ (CG iterations)} \label{fig:mnist:01:its} \end{subfigure} \hspace{1cm} \begin{subfigure}{0.45\linewidth} \centering \input{mnist_e0.1_time.tikz} \caption{$\gamma=0.1$ (run time in seconds)} \label{fig:mnist:01:time} \end{subfigure} \begin{subfigure}{0.45\linewidth} \centering \input{mnist_e0.01_its.tikz} \caption{$\gamma=0.01$ (CG iterations)} \label{fig:mnist:001:its} \end{subfigure} \hspace{1cm} \begin{subfigure}{0.45\linewidth} \centering \input{mnist_e0.01_time.tikz} \caption{$\gamma=0.01$ (run time in seconds)} \label{fig:mnist:001:time} \end{subfigure} \caption{Constraint violation for different offsets $\gamma$ and different regularization parameters $\epsilon$.} \label{fig:mnist} \end{figure*} The second example addresses the dependence of the Sinkhorn--Newton method on the problem parameters. 
In particular, we consider the dependence on $\epsilon$ and on the minimal value of ${{a}}$ and ${{b}}$ (via the corresponding transport plans ${{P}}$), since these enter into the convergence rate estimate \eqref{eq:newton_constant}. Here, we take an example which is also used in \textcite{Cuturi2013}: computing the transport distances between different images from the MNIST database, which contains $28\times 28$ images of handwritten digits. We consider these as discrete distributions of dimension $28^{2} = 784$ on $[0, 1]^{2}$, to which we add a small offset $\gamma$ before normalizing to unit mass as before. Here, the tolerance for both the Newton and the CG iteration is set to $10^{-12}$, and the maximum number of CG iterations is fixed at $66$. The entropic regularization parameter $\epsilon$ is chosen as multiples of the median of the cost (which is $q_{50} = 0.2821$ in this case). \Cref{fig:mnist} shows the convergence history for different offsets $\gamma\in\{0.5, 0.1, 0.01\}$ and $\epsilon\in q_{50}\cdot\{1, 0.1, 0.01, 0.005\}$, where we again report the constraint violation both as a function of CG iterations and of the run time in seconds. Comparing \crefrange{fig:mnist:05:its}{fig:mnist:001:time}, we see that as $\epsilon$ decreases, an increasing number of CG iterations is required to achieve the prescribed tolerance. However, the convergence seems to be robust in $\epsilon$ at least for larger values of $\epsilon$ and only moderately deteriorate for $\epsilon \leq 0.01q_{50}$. \subsection{Dependence on the problem dimension} \label{sec:dependence_problem_dimension} \begin{figure*} \begin{subfigure}{0.45\linewidth} \centering \input{discr_its.tikz} \caption{Errors over CG iterations.} \label{fig:discr:its} \end{subfigure} \hspace{1cm} \begin{subfigure}{0.45\linewidth} \centering \input{discr_time.tikz} \caption{Errors over run time in seconds.} \label{fig:discr:time} \end{subfigure} \caption{Constraint violation for different mesh sizes $n$} \label{fig:discr} \end{figure*} We finally address the dependence on the dimension of the problem. For this purpose, we discretize the unit interval $[0, 1]$ using $n$ equidistant points ${{x}}_{i}\in [0, 1]$ and take \begin{subequations} \begin{align} \tilde {{a}}_{i} &= \mathrm{e}^{-100({{x}}_{i} - 0.2)^{2}} + \mathrm{e}^{-20\abs{{{x}}_{i} - 0.4}} + 10^{-2},\\ \tilde {{b}}_{j} &= \mathrm{e}^{-100({{x}}_{j} - 0.6)^{2}} + 10^{-2}, \end{align}% \end{subequations} which are again normalized to unit mass to obtain ${{a}}$ and ${{b}}$. The regularization parameter is fixed at $\epsilon = 10^{-3}$. Moreover, the inner and outer tolerances are here set to $10^{-10}$, and the maximum number of CG iterations is coupled to the mesh size via $\lceil n / 12\rceil$. \Cref{fig:discr} shows the convergence behavior of Sinkhorn--Newton for $n\in\{1000, 2000, 4000, 8000\}$. As can be seen from \cref{fig:discr:its}, the behavior is nearly independent of $n$; in particular, the number of CG iterations required to reach the prescribed tolerance stays almost the same. (This is also true for the Newton method itself with $21, 22, 23$ and $23$ iterations.) Since each CG iteration involves two dense matrix--vector products with complexity ${\cal O}(n^{2})$, the total run time scales quadratically; see \cref{fig:discr:time}. \section{Proof of Theorem \labelcref{thm:newton_constant}} \label{sec:proof} For the sake of presentation, we restrict ourselves to the case $m = n$ here. 
However, in the end, we suggest how the proof can be generalized to the case $m \neq n$. To estimate~\eqref{eq:newton_condition} and in particular ${{J}}_{F}({{y}}^{k}) - {{J}}_{F}({\eta})$ for ${{y}}^{k} = ({{f}}^{k}, {{g}}^{k})$ and ${\eta} = ({\alpha},{\beta})\in\mathbb{R}^{n}\times\mathbb{R}^{n}$ we observe that \begin{equation*} {{J}}_{F}({{y}}^{k})-{{J}}_{F}({\eta}) = \frac1\epsilon\left[\mathrm{e}^{\frac{-c_{ij}-{{f}}^{k}_{i}-{{g}}^{k}_{j}}\epsilon} - \mathrm{e}^{\frac{-c_{ij}-{\alpha}_{i}-{\beta}_{j}}\epsilon}\right]_{ij} = \left[{{P}}_{ij}^{k}(1 - \mathrm{e}^{\frac{{{f}}^{k}_{i}-{\alpha}_{i} + {{g}}^{k}_{j}-{\beta}_{j}}\epsilon})\right]_{ij}. \end{equation*} To keep the notation concise, we abbreviate ${\psi} = ({\varphi}, {\gamma}) \coloneqq {{y}}^{k} - {\eta}$ and also write ${{y}}$ and ${{P}}$ for ${{y}}^{k}$ and ${{P}}^{k}$, respectively. Then, we compute \begin{equation*} \begin{aligned} {{J}}_{F}({{y}})^{-1}[{{J}}_{F}({{y}}) - {{J}}_{F}({\eta})]({{y}} - {\eta}) &=\ {{J}}_{F}({{y}})^{-1} \begin{bmatrix} \left(\sum_{j}{{P}}_{ij}(\mathrm{e}^{({\varphi}_{i} + {\gamma}_{j})/\epsilon}-1)({\varphi}_{i} + {\gamma}_{j})/\epsilon\right)_{i}\\ \left(\sum_{i}{{P}}_{ij}(\mathrm{e}^{({\varphi}_{i} + {\gamma}_{j})/\epsilon}-1)({\varphi}_{i} + {\gamma}_{j})/\epsilon\right)_{j} \end{bmatrix}\\ &=\ {{J}}_{F}({{y}})^{-1} \begin{bmatrix} \left(\sum\limits_{j}{{P}}_{ij}\sum\limits_{k=2}^\infty\frac1{(k-1)!\epsilon^k}\sum\limits_{l=0}^k{k\choose l}{\varphi}_{i}^l{\gamma}_{j}^{k-l}\right)_{i}\\ \left(\sum\limits_{i}{{P}}_{ij}\sum\limits_{k=2}^\infty\frac1{(k-1)!\epsilon^k}\sum\limits_{l=0}^k{k\choose l}{\varphi}_{i}^l{\gamma}_{j}^{k-l}\right)_{j} \end{bmatrix}\\ &=\ \sum\limits_{k=2}^{\infty}\sum\limits_{l=0}^{k}\binom{k}{l} \frac{1}{(k-1)!\epsilon^{k}} {{J}}_{F}({{y}})^{-1} \begin{bmatrix} \left(\sum_{j}{{P}}_{ij}{\varphi}_{i}^l{\gamma}_{j}^{k-l}\right)_{i}\\ \left(\sum_{i}{{P}}_{ij}{\varphi}_{i}^l{\gamma}_{j}^{k-l}\right)_{j} \end{bmatrix} \end{aligned} \end{equation*} where all exponents are applied componentwise. Now we first treat only the summands for $l=0$ and $l=k$. For those terms \eqref{eq:jacobian} immediately implies \begin{equation*} \sum\limits_{k=2}^{\infty}\sum\limits_{l=0,k}\binom{k}{l} \frac{1}{(k-1)!\epsilon^{k}} {{J}}_{F}({{y}})^{-1} \begin{bmatrix} \left(\sum_{j}{{P}}_{ij}({\varphi}_{i}^{k} + {\gamma}_{j}^{k})\right)_{i}\\ \left(\sum_{i}{{P}}_{ij}({\varphi}_{i}^{k} + {\gamma}_{j}^{k})\right)_{j} \end{bmatrix}\label{eq:deuflhard-est-l=0-k} =\sum\limits_{k=2}^{\infty}\sum\limits_{l=0,k} \frac{1}{(k-1)!\epsilon^{k-1}} \begin{bmatrix} {\varphi}^{k}\\{\gamma}^{k} \end{bmatrix} =\begin{bmatrix} (\mathrm{e}^{{\varphi}/\epsilon}-1){\varphi}\\(\mathrm{e}^{{\gamma}/\epsilon}-1){\gamma} \end{bmatrix}, \end{equation*} which has supremum norm bounded by $(\mathrm{e}^{\frac1\epsilon}-1)\norm{({\varphi},{\gamma})}_{\infty}^2$ for all $\norm{({\varphi},{\gamma})}_{\infty}\leq1$. For all other summands (i.e. $1\leq l\leq k-1$), we write \begin{equation*} \begin{bmatrix} {\alpha}\\ {\beta} \end{bmatrix} \coloneqq \frac{\binom{k}{l}}{(k-1)!\epsilon^{k}}{{J}}_{F}({{y}})^{-1} \begin{bmatrix} \left(\sum_{j}{{P}}_{ij}{\varphi}_{i}^{l}{\gamma}_{j}^{k-l}\right)_{i}\\ \left(\sum_{i}{{P}}_{ij}{\varphi}_{i}^{l}{\gamma}_{j}^{k-l}\right)_{j} \end{bmatrix}. 
\end{equation*} Using \eqref{eq:jacobian} again, it follows that \begin{equation*} \label{eq:A1} \underbrace{\begin{bmatrix} \diag({{P}}\mathbb{1}_{n}) &{{P}}\\ {{P}}^{\top} &\diag({{P}}^{\top}\mathbb{1}_{n}) \end{bmatrix}}_{\eqqcolon {{A}}} \begin{bmatrix} {\alpha}\\ {\beta} \end{bmatrix} = \frac{\binom{k}{l}}{(k-1)!\epsilon^{k-1}} \begin{bmatrix*}[l] \diag({{P}}{\gamma}^{k-l}){\varphi}^{l}\\ \diag({{P}}^{\top}{\varphi}^{l}){\gamma}^{k-l} \end{bmatrix*}, \end{equation*} and we aim to estimate $\norm{{\alpha}}_{\infty}$ and $\norm{{\beta}}_{\infty}$ by $\norm{{\varphi}}_{\infty}$ and $\norm{{\gamma}}_{\infty}$. By \cref{lem:jacobian_sym_pos_def}, the matrix ${{A}}$ has a one-dimensional kernel spanned by ${{q}} \coloneqq (\mathbb{1}_{n}^{\top}, -\mathbb{1}_{n}^{\top})^{\top}$, and a solution $({\alpha}, {\beta})$ in the orthogonal complement is also a solution to \begin{equation*} \underbrace{({{A}} + \Delta{{q}}{{q}}^{\top})}_{{{B}}} \begin{bmatrix} {\alpha}\\ {\beta} \end{bmatrix} =\frac{\binom{k}{l}}{(k-1)!\epsilon^{k-1}} \begin{bmatrix*}[l] \diag({{P}}{\gamma}^{k-l}){\varphi}^{l}\\ \diag({{P}}^{\top}{\varphi}^{l}){\gamma}^{k-l} \end{bmatrix*} \end{equation*} for any $\Delta > 0$. From \textcite{Varah1975}, we know that the $\ell_{\infty}$-norm of the inverse matrix of ${{B}}$ is estimated by \begin{equation} \textstyle \label{eq:A2} \norm{{{B}}^{-1}}_{\infty} \leq \left[\min_{i}\left( \abs{{{B}}_{ii}} - \sum_{j\neq i}\abs{{{B}}_{ij}} \right)\right]^{-1}. \end{equation} In this case, we calculate for $i = 1,\dots,n$ that \begin{equation*} \abs{{{B}}_{ii}} - \sum_{\substack{1\leq j\leq 2n\\ j\neq i}}\abs{{{B}}_{ij}} = \sum\limits_{1\leq j\leq n}{{P}}_{ij} + \Delta - (n - 1)\Delta - \sum\limits_{1\leq j\leq n}\abs{{{P}}_{ij} -\Delta}. \end{equation*} For any $\Delta \leq \min_{ij}{{P}}_{ij}$, this leads to \begin{equation*} \abs{{{B}}_{ii}} - \sum_{\substack{1\leq j\leq 2n\\ j\neq i}}\abs{{{B}}_{ij}} = 2\Delta. \end{equation*} Similarly, we get that \begin{equation*} \abs{{{B}}_{n + i, n + i}} - \sum_{\substack{1\leq j\leq 2n\\ j\neq n + i}}\abs{{{B}}_{n + i, j}} = 2\Delta. \end{equation*} Choosing $\Delta \coloneqq \min_{ij}{{P}}_{ij}$, we thus obtain that \begin{equation*} \norm{{{B}}^{-1}}_{\infty} \leq \left[2\min_{ij}{{P}}_{ij}\right]^{-1}. \end{equation*} Using that \begin{equation*} \norm{\diag({{P}}^{\top}{\varphi}^l){\gamma}^{k-l}}_{\infty}\leq\norm{{{P}}^{\top}\mathbb{1}_{n}}_{\infty}\norm{{\varphi}}_{\infty}^{l}\norm{{\gamma}}_{\infty}^{k-l}, \end{equation*} and similarly for $\diag({{P}}{\gamma}^{k-l}){\varphi}^l$, finally gives \begin{equation*} \norm{({\alpha}, {\beta})}_{\infty} \leq \frac{\binom{k}{l}}{(k-1)!\epsilon^{k-1}}M \norm{{\varphi}}_{\infty}^{l}\norm{{\gamma}}_{\infty}^{k-l}\label{eq:deuflhard-est-other-l} \end{equation*} with \begin{equation*} M = \frac{\max\{\norm{{{P}}\mathbb{1}_{n}}_{\infty}, \, \norm{{{P}}^{\top}\mathbb{1}_{n}}_{\infty}\}}{2\min_{ij}{{P}}_{ij}}.
\end{equation*} Hence we obtain \begin{multline*} \left\|\sum\limits_{k=2}^{\infty}\sum\limits_{l=1}^{k-1}\binom{k}{l} \frac{1}{(k-1)!\epsilon^{k}} {{J}}_{F}({{y}})^{-1} \begin{bmatrix} \left(\sum_{j}{{P}}_{ij}{\varphi}_{i}^l{\gamma}_{j}^{k-l}\right)_{i}\\ \left(\sum_{i}{{P}}_{ij}{\varphi}_{i}^l{\gamma}_{j}^{k-l}\right)_{j} \end{bmatrix}\right\|_{\infty}\\ \begin{aligned} &\leq\ M\sum\limits_{k=2}^{\infty}\sum\limits_{l=1}^{k-1}\binom{k}{l} \frac{1}{(k-1)!\epsilon^{k-1}}\norm{{\varphi}}_{\infty}^{l}\norm{{\gamma}}_{\infty}^{k-l}\\ &=\ M\bigg[\left(\mathrm{e}^{(\norm{{\varphi}}_{\infty}+\norm{{\gamma}}_{\infty})/\epsilon}-1\right)(\norm{{\varphi}}_{\infty}+\norm{{\gamma}}_{\infty})\\ &\quad-\left(\mathrm{e}^{\norm{{\varphi}}_{\infty}/\epsilon}-1\right)\norm{{\varphi}}_{\infty}-\left(\mathrm{e}^{\norm{{\gamma}}_{\infty}/\epsilon}-1\right)\norm{{\gamma}}_{\infty}\bigg]\\ &\leq\ 2\mathrm{e}^{\frac1\epsilon}M(\mathrm{e}^{\frac1\epsilon}-1)\norm{({\varphi},{\gamma})}_{\infty}^2 \end{aligned} \end{multline*} for all $\norm{({\varphi},{\gamma})}_{\infty}\leq1$. In summary, we obtain \begin{equation*} \norm{{{J}}_{F}({{y}})^{-1}[{{J}}_{F}({{y}}) - {{J}}_{F}({\eta})]({{y}} - {\eta})}_{\infty} \leq(1+2\mathrm{e}^{\frac1\epsilon}M)(\mathrm{e}^{\frac1\epsilon}-1)\norm{{{y}} - {\eta}}_{\infty}^2, \end{equation*} as desired. To generalize this to the case $m\neq n$, one can take ${{q}} = (\Delta_{1}\mathbb{1}_{n}, -\Delta_{2}\mathbb{1}_{m})$ for $\Delta_{1}\neq -\Delta_{2}$ and $\Delta_{1}\Delta_{2} = \Delta$ and choose $\Delta_{1}$ to equilibrate the lower bounds for the first $n$ and the last $m$ rows of ${{B}}$. \section{Conclusion} \label{sec:conclusion} We have proposed a Newton iteration to solve the entropically regularized discrete optimal transport problem. Different from related Newton-type approaches for matrix balancing, our method iterates on the logarithm of the scalings, which seems to be necessary for robust convergence in the optimal transport setting. Numerical examples show that our algorithm is a robust and efficient alternative to the more commonly used Sinkhorn--Knopp algorithm, at least for small regularization strength. \printbibliography \end{document}
2,869,038,156,187
arxiv
\section{Gravity with Anisotropic Scaling} The central idea of \cite{mqc,lif} is a minimalistic one: to formulate quantum gravity as a quantum field theory, with the spacetime metric as the elementary field, in the standard path-integral language. Quantum field theory (QFT) has emerged from the 20th century as the universal language for understanding systems with many degrees of freedom, ranging from high-energy particle physics to condensed matter, statistical physics and more. Before giving up on QFT for quantum gravity, it makes sense to apply its full machinery to this problem, without prior restrictions such as microscopic relativistic invariance. The novelty of \cite{mqc,lif} is that gravity is combined with the idea of anisotropic scaling of spacetime, more familiar from condensed matter, and characterized by \begin{equation} {\bf x}\to b{\bf x},\qquad t\to b^z t. \end{equation} Here $z$ is an important observable, the ``dynamical critical exponent,'' associated with a given fixed point of the renormalization group (RG). Systems with many different values of $z$ are known, for example in dynamical critical phenomena or quantum criticality. It is natural to ask whether one can construct theories with anisotropic scaling and with propagating gravitons. Why? A consistent theory of gravity with anisotropic scaling can be potentially useful for a number of possible applications: \begin{itemize} \item[(i)] Phenomenology of gravity in our Universe of $3+1$ macroscopic dimensions.\vspace{-.1in} \item[(ii)] New gravity duals for field theories in the context of the AdS/CFT correspondence; in particular, duals for a broader class of nonrelativistic QFTs.\vspace{-.1in} \item[(iii)] Gravity on worldsheets of strings and worldvolumes of branes. \vspace{-.1in} \item [(iv)] Mathematical applications to the theory of the Ricci flow on Riemannian manifolds \cite{mqc}.\vspace{-.1in} \item [(v)] IR fixed points in condensed matter systems, with emergent gravitons (new phases of algebraic bose liquids) \cite{eme}.\vspace{-.1in} \item[(vi)] Relativistic gravity and string theory in asymptotically anisotropic spacetimes \cite{aci}; \end{itemize} and possibly others. Note that only application (i) is subjected to the standard observational tests of gravity, while the others are only constrained by their mathematical consistency. And of course, applications (i--vi) aside, this system can serve as a useful theoretical playground for exploring field-theory and path-integral methods for quantum gravity. This approach shares some philosophical background with the idea of asymptotic safety, initiated in \cite{weinberg} and experiencing a resurgence of recent interest. Both approaches are equally minimalistic, suggesting that gravity can find its UV completion as a quantum field theory of the fluctuating spacetime metric, without additional degrees of freedom or a radical departure from standard QFT. While both approaches look for a UV fixed point, they differ in the nature of the proposed fixed point: In asymptotic safety, one benefits from maintaining manifest relativistic invariance, and pays the price of having to look for a nontrivial, strongly coupled fixed point. In gravity with anisotropic scaling, one gives up Lorentz invariance as a fundamental symmetry at short distances, and looks for much simpler, perhaps Gaussian or at least weakly coupled fixed points in the UV. 
The price to pay, if one is interested in application (i), is the need to explain how the experimentally extremely well-tested Lorentz symmetry emerges at long distances. Such Gaussian fixed points of gravity with $z>1$ can also serve as IR fixed points in condensed matter systems, as shown for $z=2$ and $z=3$ in \cite{eme}. This may be important because it leads to new phases of algebraic bose liquids, and gives a new mechanism for making gapless excitations technically natural in condensed matter. Implications for quantum gravity are less dramatic: The gapless excitations at the IR fixed points of \cite{eme} are linearized gravitons, only allowed to interact in a way which respects linearized diffeomorphism invariance. Hence, this lattice model is not a theory of emergent gravity with nonlinear diffeomorphism symmetries. \subsection{The minimal theory} In our construction of Lifshitz-type gravity, we assume that the spacetime manifold $M$ carries the additional structure of a codimension-one foliation ${\cal F}$, by $D$-dimensional leaves $\Sigma$ of constant time. We will use coordinate systems $(t,{\bf x}\equiv x^i)$, $i=1,\ldots, D$, adapted to ${\cal F}$. Perhaps the simplest relevant example of systems with Lifshitz-type anisotropic scaling is the Lifshitz scalar theory with $z=2$, \begin{equation} \label{slifs} S=\frac{1}{2}\int dt\,d^D{\bf x}\left\{\dot\phi^2-(\Delta\phi)^2\right\}, \end{equation} with $\Delta$ the spatial Laplacian. Compared to the relativistic scalar in the same spacetime dimension, the Lifshitz scalar has an improved UV behavior. The scaling dimension of $\phi$ changes to $[\phi]=(D-2)/2$, and consequently the (lower) critical dimension also shifts from the relativistic $1+1$ to $2+1$ when $z=2$. The most ``primitive'' theory of gravity similar to (\ref{slifs}) would describe the dynamics of the spatial metric $g_{ij}({\bf x},t)$, invariant under time-independent spatial diffeomorphisms. Because of the lack of (time-dependent) gauge invariance, this model would propagate not only the tensor polarizations of the graviton, but also the vector and the scalar. This ``primitive'' theory becomes more interesting when we make it gauge invariant under foliation-preserving diffeomorphisms ${\rm Diff}(M,{\cal F})$, generated by \begin{equation} \label{difgen} \delta t=f(t),\qquad \delta x^i=\xi^i(t,{\bf x}). \end{equation} The minimal multiplet of fields now contains, besides $g_{ij}$, also the lapse function $N$ and the shift vector $N_i$. Since the lapse and shift play the role of gauge fields of ${\rm Diff}(M,{\cal F})$, we can assume that they inherit the same dependence on spacetime as the corresponding generators (\ref{difgen}): While $N_i(t,{\bf x})$ is a spacetime field, $N(t)$ is only a function of time, constant along $\Sigma$. Making this assumption about the lapse function gives rise to the minimal theory of gravity with anisotropic scaling, sometimes referred to as the ``projectable'' theory \cite{lif}. (For its brief review and some phenomenological applications, see \cite{shinjire}.) The dynamics of the projectable theory is described by the most general action which respects the ${\rm Diff}(M,{\cal F})$ symmetry.
At the lowest orders in time derivatives, the action is given by \begin{equation} \label{smint} S=\frac{2}{\kappa^2}\int dt\,d^D{\bf x}\,\sqrt{g}\,N\left(K_{ij}K^{ij}-\lambda K^2 -{\cal V}\right), \end{equation} where \begin{equation} K_{ij}\equiv\frac{1}{2N}\left(\dot g_{ij}-\nabla_i N_j-\nabla_j N_i\right) \end{equation} is the extrinsic curvature of $\Sigma$, $K=g^{ij}K_{ij}$, $\lambda$ is a dimensionless coupling, and the potential term ${\cal V}$ is an arbitrary ${\rm Diff}(\Sigma)$-invariant local scalar functional built out of $g_{ij}$, its Riemann tensor and the spatial covariant derivatives, but no time derivatives. Which terms in ${\cal V}$ are relevant will depend on our choice of $z$ at short distances. Terms with $2z$ spatial derivatives have the same classical scaling dimension as the kinetic term, and their quadratic part defines the Gaussian fixed point. Terms with fewer derivatives represent relevant deformations of the theory. They induce a classical RG flow, which can lead to an IR fixed point, with the isotropic $z=1$ scaling in the deep infrared regime. As usual in effective field theory, terms of higher order in derivatives, or involving additional time derivatives, are of higher dimension and therefore superficially irrelevant around the UV fixed point. Compared to general relativity, the minimal model is different in three interconnected ways: It has one fewer gauge symmetry per spacetime point, its field multiplet has one fewer field component per spacetime point (since $N$ is independent of $x^i$), and it propagates a scalar graviton polarization in addition to the standard tensor polarizations, at least around flat spacetime. While the number of gauge symmetries and field components may not be observable, the number of propagating graviton polarizations is. \subsection{The nonprojectable case} Another possibility is to insist on matching the field content of general relativity, and promote the lapse $N$ to a spacetime field. This is the ``nonprojectable'' theory \cite{mqc,lif}. If we postulate the same ${\rm Diff}(M,{\cal F})$ gauge symmetry as in the projectable case, the generic action will contain new terms, constructed from the new ingredient $a_i\equiv\partial_iN/N$. The general theory with such new terms is sometimes referred to in the literature as the ``healthy extension'' of the nonprojectable theory. This is a misnomer -- indeed, the basic rules of effective field theory clearly instruct us to include all terms compatible with the postulated symmetries, since such terms would otherwise be generated by quantum corrections. Hence, including all terms compatible with the gauge symmetry should not be called a ``healthy extension'' of the nonprojectable theory; it is just the correct implementation of the assumptions of the nonprojectable theory. (For a recent review of the nonprojectable theory, see \cite{bpsgbh}.) In contrast, leaving the $a_i$-dependent terms artificially out deserves to be called an ``unhealthy reduction'' of the nonprojectable theory. Such an unhealthy reduction could only be justified if it is protected by additional symmetries. However, a closer analysis of the unhealthy reduction indeed reveals difficulties with the closure of the constraint algebra and no new gauge symmetry \cite{emil,henx}, possibly with the interesting exception of the deep infrared limit \cite{rest}.
The nonprojectable model may be described in terms of the same field content as general relativity, but the scalar graviton polarization is still present in its physical spectrum. In Part~2, we discuss a mechanism proposed in \cite{gen}, which eliminates the scalar graviton, by enlarging the gauge symmetry to ``nonrelativistic general covariance.'' \subsection{Entropic origin of gravity?} There is another concept originally introduced in \cite{mqc,lif} which has caused some level of confusion in the literature: the ``detailed balance'' condition. This concept has its roots in nonequilibrium statistical mechanics and dynamical critical phenomena. Oversimplifying slightly, the theory is said to be in detailed balance if the potential in (\ref{smint}) is of a special form, effectively a square of the equations of motion associated with a (Euclidean-signature) theory in $D$ dimensions with some action $W$. For example, the Lifshitz scalar (\ref{slifs}) is in detailed balance, with $W=\frac{1}{2}\int d^D{\bf x}\,\partial_i\phi\partial_i\phi$. In \cite{mqc,lif}, this condition was suggested simply as a technical trick, which can possibly reduce the number of independent couplings in ${\cal V}$, if one can show that detailed balance is preserved under renormalization (which is the case in many nongravitational examples in condensed matter). If one is interested in getting close to general relativity with a small cosmological constant at long distances, detailed balance would clearly have to be broken, at least in the minimal theory. If that breaking happens only at the level of relevant deformations, the restrictive power of the detailed balance condition can still be useful for constraining the terms whose dimension equals that of the kinetic term. Is it possible that the detailed balance condition could play a more physical role in our understanding of gravity? While this question remains open, one intriguing analogy seems worth pointing out: When gravity with anisotropic scaling satisfies the detailed balance condition, its path integral in imaginary time is formally analogous to the Onsager-Machlup theory of nonequilibrium thermodynamics \cite{om1,om2}. In this path-integral formulation of nonequilibrium systems, a collection of thermodynamic variables $\Phi^a$ is governed by the Onsager-Machlup action, given -- up to surface terms -- by \begin{equation} S=\frac{1}{2}\int dt\,d^D{\bf x}\left\{\dot\Phi^aL_{ab}\dot\Phi^b+ \frac{\delta W}{\delta\Phi^a}L^{ab}\frac{\delta W}{\delta\Phi^b}\right\}. \end{equation} Here the Onsager kinetic coefficients $L_{ab}$ represent a metric on the $\Phi^a$ space, $\frac{\delta W}{\delta\Phi^a}$ are interpreted as entropic forces, and the action $W$ itself plays the role of entropy! This analogy leads to a natural speculation, implicit in \cite{mqc,lif}, that the nature of gravity with anisotropic scaling is somehow entropic. It would be interesting to see whether this analogy can be turned into a coherent framework in which some of the intriguing recent ideas about the entropic origin of gravity \cite{everlinde} (also \cite{jacobson}) and cosmology \cite{easson} can be made more precise. \subsection{Causal dynamical triangulations and the spectral dimension of spacetime} In the study of quantum field theory, it is often useful to construct the system by a lattice regularization, and study the approach to the continuum limit using computer simulations.
In the context of quantum gravity, it is natural to define the lattice version by summing over random triangulations of spacetime. This approach works well in spacetimes of two Euclidean dimensions, where the system can be solved exactly in terms of matrix models. However, extending this success to higher dimensions has proven frustratingly difficult, with random triangulations typically yielding branched polymers or other phases with fractional numbers of macroscopic dimensions in the continuum limit. In the past few years, a major breakthrough on this front has begun to emerge in the causal dynamical triangulations (CDT) approach to lattice gravity (see \cite{ajlre} for a review). In the CDT approach, the pathological continuum phases are avoided by changing the lattice rules slightly: The random triangulations that contribute to the partition sum are constrained to respect a preferred foliation of spacetime by fixed (imaginary) time slices. This seemingly innocuous change of the rules turns out to be relevant, in the technical RG sense: It leads to a different continuum limit, with much more attractive physical properties. The macroscopic dimension of spacetime in this continuum limit appears to be four, as is indicated by the measurement of the so-called ``spectral dimension'' $d_s$ of spacetime \cite{ajl} at the long distance limit in the lattice simulation (with $d_s= 4.02\pm 0.1$ reported in \cite{ajl}). This is a promising and exciting result, suggesting that perhaps for the first time, we might be close to sensible continuum results in lattice quantum gravity! Clearly, the relevant change of the rules that makes all the difference in the lattice implementation, namely that spacetime is equipped with a preferred foliation structure, is very similar to the starting point of the analytic approach to quantum gravity with anisotropic scaling. It is natural to conjecture that {\it the CDT formulation of quantum gravity represents a lattice version of Lifshitz gravity with anisotropic scaling}. The first nontrivial piece of evidence for this conjecture was presented in \cite{spdim}. One of the surprises of \cite{ajl} was not only that $d_s\approx 4$ at long distances, but also that at shorter distances, before the lattice artifacts kick in, $d_s$ undergoes a smooth crossover to $d_s\approx 2$. How can the effective dimension of spacetime change continuously from four at long distances to two at short distances? An analytic explanation was offered in \cite{spdim}: The spectral dimension is a precisely defined geometric quantity, and it can be calculated systematically in the continuum approach to quantum gravity with anisotropic scaling. In the mean-field approximation around the flat spacetime, the result is \cite{spdim} \begin{equation} \label{spd} d_s=1+\frac{D}{z}. \end{equation} Hence, if the gravity theory flows from a $z=3$ UV fixed point to a $z=1$ IR fixed point, the qualitative crossover of $d_s$ observed in \cite{ajl} is reproduced. The topological dimension of spacetime is always four, but the {\it spectral\/} dimension changes because of the anisotropic scaling at short distances. This argument can be turned around, leading to a prediction: For example, in $2+1$ dimensions (not studied in \cite{ajl}), the value of $z$ required at the UV fixed point for power-counting renormalizability and UV completeness is $z=2$, while the theory still flows to $z=1$ in the IR. 
The Lifshitz gravity formula (\ref{spd}) then predicts that the CDT formulation of $2+1$ gravity should find a crossover from $d_s=3$ at long distances to $d_s=2$ at short distances. This prediction was beautifully confirmed in the CDT lattice approach in \cite{benedetti}. \subsection{Phases of gravity} Additional evidence for the conjecture relating CDT lattice gravity and the continuum gravity with anisotropic scaling comes from comparing the phase diagrams of the two approaches. Recall first the phase structure of the Lifshitz scalar, first investigated in \cite{michelson}. Including the relevant deformations, and a $\phi^4$ self-interaction for stabilization, the theory is given by \begin{equation} \label{sldef} S=\frac{1}{2}\int dt\,d^D{\bf x}\left\{\dot\phi^2-(\Delta\phi)^2 -\mu^2\partial_i\phi\partial_i\phi-m^4\phi^2-\lambda\phi^4\right\}. \end{equation} With $\lambda>0$, depending on the values of $m^4$ and $\mu^2$, the theory can be in three phases. At positive $\mu^2$, we obtain the standard disordered and uniformly ordered phases, in which the vacuum expectation value of $\phi$ either vanishes or takes a constant nonzero value. At negative $\mu^2$, a new, spatially modulated phase appears: In this phase, the vacuum condensate of $\phi$ is a periodic function along a spontaneously chosen spatial direction. The phase transition lines meet in the tricritical $z=2$ point at $\mu=m=0$. In the mean field approximation, the three phase transition lines joining at the tricritical point share a common tangent there \cite{michelson}; this feature is erased by quantum corrections, and for tricritical Lifshitz points with multi-component order parameters. \begin{figure}[tbp] \centering \includegraphics[width=2in]{x1.eps} \caption{The mean-field phase diagram of the Lifshitz scalar theory (\ref{sldef}).} \label{fig0} \end{figure} In the CDT approach to quantum gravity in $3+1$ dimensions, three phases have also been observed, referred to as A, B and C \cite{ajlre}. One appears to give rise to a macroscopic de~Sitter-like universe, while the other two attracted less attention at first, until recently \cite{ambjornhl}. The lines of phase transitions between these phases meet at a tricritical point, whose properties have not been explored in much detail on the lattice yet. The phase diagram of gravity with anisotropic scaling \cite{hmtz} exhibits the same qualitative structure, with several phases organized around a multicritical point (see \cite{hmtz} for details). For simplicity, we illustrate this by considering the case of the projectable theory in $2+1$ dimensions, where the generic power-counting renormalizable potential is \begin{equation} {\cal V}=\alpha R^2-\beta R +\gamma. \end{equation} Up to a sign, the value of $\alpha$ can be absorbed into a rescaling of space versus time. The remaining sign determines whether we are in real or imaginary time. The terms with the $\beta$ and $\gamma$ couplings, which roughly play the role of the (inverse) Newton constant and the cosmological constant, represent relevant deformations. In the mean-field approximation, the phases are classified by assuming the FRW ansatz for the metric (with compact spatial slices $\Sigma=S^2$, as in CDT), and finding the vacuum solutions by solving the Friedmann equation. It turns out that -- as in the Lifshitz scalar -- there are three phases, which meet at the tricritical $z=2$ point with $\beta=\gamma=0$.
Amusingly, the phase transition lines also share a common tangent at the tricritical point, just as in the case of the Lifshitz scalar. We again expect that quantum corrections will modify this behavior, without changing the qualitative structure of the phase diagram. \begin{figure}[tbp] \centering \includegraphics[width=2in]{x2.eps} \caption{The mean-field phase diagram of gravity with anisotropic scaling in $2+1$ dimensions with the compact spatial slices $\Sigma=S^2$.} \label{fig2} \end{figure} The nature of the three phases can be analyzed in real or imaginary time. In real time, Phase~I corresponds to a global de~Sitter-like spacetime, Phase~II describes a recollapsing cosmology with a big bang and a big crunch, while Phase~III breaks time reversal spontaneously, with an expanding big-bang cosmology or a contracting cosmology with a big crunch. In imaginary time, Phase~I yields a compact geometry on $S^3$ much like the shape found in \cite{benedetti}, Phase~II is a Euclidean bounce, and in Phase~III, there are no solutions satisfying our maximally symmetric FRW ansatz. This similarity between the phase diagram of quantum gravity with anisotropic scaling and the phase diagram found in the CDT approach represents further evidence \cite{hmtz} for our conjecture that these two approaches to quantum gravity are intimately related. Another universal lesson emerging from our analysis of the phase structure of quantum gravity with anisotropic scaling in \cite{hmtz} is that spatially modulated phases of gravity should be possible. \section{General Covariance in Gravity with Anisotropic Scaling} In order to eliminate the extra scalar polarization of the graviton, gravity with anisotropic scaling which enjoys an extended gauge invariance was proposed in \cite{gen}. The gauge symmetry in question is an extension of the foliation-preserving diffeomorphisms by an Abelian gauge symmetry, and can be interpreted as a nonrelativistic form of general covariance. The number of independent symmetries per spacetime point is the same as in general relativity, with the Abelian symmetry playing the role of linearized spacetime-dependent time reparametrizations. This extended symmetry preserves the preferred spacetime foliation and the privileged role of time, but eliminates the scalar polarization of the graviton. \subsection{Fields and symmetries} We start with the minimal projectable theory reviewed in Part~1. It was noticed in \cite{lif} that at $\lambda=1$, this theory exhibits in the linearized approximation around flat spacetime an enhanced symmetry, which acts only on the shift vector, \begin{equation} \label{alphax} \delta N_i=\partial_i\alpha, \end{equation} with $\alpha({\bf x})$ a time-independent local symmetry generator. Promoting this symmetry to a spacetime-dependent gauge symmetry of the full nonlinear theory will lead to our desired nonrelativistic general covariance. Extending (\ref{alphax}) to a gauge symmetry requires new fields beyond the minimal gravity multiplet $g_{ij}$, $N_i$ and $N(t)$. Already at the linearized level, we need to introduce a new field $A$ which transforms under $\alpha({\bf x},t)$ as the time component of an Abelian gauge field. In the interacting theory, this transformation rule becomes \begin{equation} \delta A=\dot\alpha -N^i\partial_i\alpha. 
\end{equation} The new field $A$, and the new gauge symmetry $\alpha$, have an elegant and geometric interpretation \cite{gen} in the context of a nonrelativistic $1/c$ expansion of relativistic gravity: $A$ is simply the subleading term in the $1/c$ expansion of the relativistic lapse function, and $\alpha$ is the subleading, linearized part of spacetime-dependent time reparametrizations. Unfortunately, in dimensions greater than $D=2$, this is not the whole story. When $D>2$, the linearized symmetry (\ref{alphax}) does not extend to a symmetry of the interacting, nonlinear theory, and therefore cannot be straightforwardly gauged. In order to fix this obstruction, a new field $\nu$ was introduced in \cite{gen}. This ``Newton prepotential'' transforms as a Goldstone field, \begin{equation} \delta\nu=\alpha. \end{equation} The introduction of the Newton prepotential allows (\ref{alphax}) to be extended to a symmetry of the nonlinear theory, which can then be gauged by the standard coupling to the gauge field $A$. Unlike the rest of the gravity multiplet, the Newton prepotential does not appear to have a natural geometric interpretation in terms of the $1/c$ expansion in the metric formulation of relativistic gravity. \subsection{The Lagrangian and Hamiltonian formulations} The systematic construction of an action invariant under the extended gauge symmetries leads to the following minimal theory \cite{gen}, \begin{equation} \label{fullact} S=\frac{2}{\kappa^2}\int dt\,d^D{\bf x}\,\sqrt{g}\left\{N\left[K_{ij}K^{ij}-K^2-{\cal V} +\nu\,\Theta^{ij}\vphantom{g^{ik}}\left(2K_{ij}+\nabla_i\nabla_j\nu\right) \right]-A\,(R-2\Omega)\right\}. \end{equation} Here $\Theta^{ij}$ is a short-hand for $\Theta^{ij}=R^{ij}-\frac{1}{2}g^{ij}R+ \Omega g^{ij}$, and $\Omega$ is a new relevant coupling constant, of the same dimension as the cosmological constant $\Lambda$. It controls the scalar curvature of the spatial slices in the preferred foliation ${\cal F}$ of spacetime, and it makes sense to refer to $\Omega$ as the ``second cosmological constant.'' The form of the potential ${\cal V}$ is again unconstrained by the symmetries, just as in the minimal theory. The theory can also be rewritten in the Hamiltonian formalism \cite{gen}, which offers a more systematic way of studying the gauge symmetry structure and counting the number of propagating degrees of freedom without having to resort to sometimes unreliable linearizations around a chosen background. The Hamiltonian constraint algebra exhibits an intriguing mixture of first- and second-class constraints, and confirms that the theory propagates only the tensor graviton polarizations. The scalar graviton mode is seen as a gauge artifact of nonrelativistic general covariance. \subsection{Comparing to general relativity in the infrared} Since the spectrum of propagating gravitons -- and gravitational waves -- in the long-distance limit of our gravity with nonrelativistic general covariance matches that of general relativity, it is natural to extend this comparison to the long-distance limits of the full nonlinear theories. Some first steps in this direction were made in \cite{gen}. First, a simple conceptual argument implies that the Schwarzschild spacetime is an exact solution of the infrared limit of our theory. This bodes well for the standard tests, since it suggests that in the infrared regime, the $\beta$ and $\gamma$ parameters of the PPN formalism take their relativistic value, equal to one.
The equation of motion associated with the variation of $A$ constrains the spatial scalar curvature to be constant, $R=2\Omega$. At first, it might seem that this equation would be difficult to reconcile with the existence of interesting cosmological solutions. However, this issue can be avoided in several different ways, and interesting cosmological solutions can be found \cite{gen}. In fact, the theory has a phenomenologically attractive feature: it seems to prefer cosmologies whose preferred spatial slices are flat. Perhaps the biggest challenge for this program is to explain why the infrared limit should exhibit Lorentz invariance, to the high level of accuracy required by observations. While the theory may naturally flow to $z=1$ at long distances, different species of low-energy probes may experience distinct effective limiting speeds of propagation, not equal to the speed of light. Setting all these speeds equal to $c$ would represent a rather unpleasant amount of fine tuning. While this problem remains unsolved in the theory with nonrelativistic general covariance as well, it is intriguing that -- unlike in the minimal theory -- global Lorentz symmetries of the flat spacetime can be embedded into the extended gauge symmetry of our generally covariant theory \cite{gen}. \section{Conclusions} If one's agenda is to construct a theory with anisotropic scaling which resembles general relativity in the infrared, the generally covariant model of \cite{gen} appears to be a step in the right direction, since its extended gauge symmetry eliminates the scalar graviton from the theory, leaving only the physical tensor polarizations. The resulting infrared limit has the Schwarzschild geometry as an exact solution, suggesting that the theory is likely compatible with the standard solar-system tests. The price paid is the introduction of the rather mysterious Newton prepotential $\nu$ in \cite{gen}. This field does not appear to have a clear geometric interpretation in the $1/c$ expansion of the standard metric formulation of relativistic gravity. Moreover, the introduction of $\nu$ leads to a new proliferation of gauge invariant terms that can appear in the action, both for pure gravity and in its coupling to matter \cite{patrick,dasilva}. Clearly, a better understanding of the role of the Newton prepotential is desirable before one can seriously discuss detailed phenomenological constraints on models of gravity with nonrelativistic general covariance. \acknowledgments It is a pleasure to thank Kevin Grosvenor, Charles Melby-Thompson, Cenke Xu and Patrick Zulkowski for enjoyable collaborations on the topics discussed in this paper. This paper is based on the invited talks delivered at the {\it GR 19 Conference}, Ciudad de M\'exico, July 2010. I wish to thank Cedric Defayett, Fay Dowker, Don Marolf and Shiraz Minwalla for their invitation and hospitality. This work has been supported by NSF Grant PHY-0855653, by DOE Grant DE-AC02-05CH11231, and by the BCTP. \bibliographystyle{JHEP}
\section{Introduction} \label{intro} The roundworm \textit{C. elegans} is commonly used in neuroscience because the connectome (the connectivity between all 302 neurons of the nervous system) has been entirely mapped, the genome has been sequenced, and genetic manipulations are relatively trivial \cite{white1986structure, Genome2012}. Combining these features with behaviour analysis allows investigations into the relationship between genes, neurons and systems. Although it is relatively simple for a human observer to learn typical movement and body shape patterns, quantifying anomalous behaviours can be a painstakingly slow and inaccurate process. There is a great need for automated detection, tracking and quantification of worms and their behaviours in neurogenetic research. For sophisticated behavioural analyses, the ability to distinguish the head of the worm from its tail is required. For instance, whether the worm is crawling forward or backward can be determined by comparing head and tail locations in a sequential series of video frames. Nematodes will reverse the direction of crawling when encountering an aversive stimulus (‘escape’ behavior), which is an often-used metric for quantifying behavioral responses \cite{huang2006machine}. Although commercial and open tracking software exists, these packages are either proprietary, outdated, no longer supported, need manual tuning to work properly for different scales and lighting conditions, or suffer from a combination of these drawbacks. Here, we investigate an approach to detect the head and the tail of worms that generalizes well under different conditions. The creation of a broadly applicable method for automatic behaviour analysis will increase reproducibility in biological research. \subsection{Literature Survey} Manually curated features are extensively used for head-tail detection. Early methods for discrimination involved image thresholding and relied on differences in the brightness of the head compared to the tail, in addition to the change in frame-to-frame distance \cite{geng2004automatic, huang2006machine}. Worm Tracker 2.0 built upon these earlier methods by taking the largest connected component in the given image after an initial image thresholding step \cite{yemini2013database}. The worm endpoints are located as sharp, convex angles of the shape contour. Then lateral motion and grayscale intensity features are used as input for linear discriminant analysis to identify an endpoint as head or tail. However, the threshold to detect large, convex angles can differ between imaging conditions, and this method is susceptible to noise and intensity variations on the edges, as shown in Figure \ref{fig:worm_img}. \citet{wang2013track} used a similar approach and designated the sharpest corner as the tail and the second sharpest corner as the head. They used error-checking mechanisms to ensure that curves at other locations of the worm are not mistaken for either the head or the tail. Preceding frames are also used to detect the head in the current frame, which simplifies the process. However, the thresholding algorithms that both of these methods use are sensitive to brightness variations. Further, errors could propagate from previous frames to future frames. In addition, the error-checking mechanisms require parameters to be set manually.
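To make this classical pipeline concrete, the following Python sketch illustrates the thresholding-and-contour-angle idea using OpenCV. It is only a rough illustration under our own assumptions (the function name, the fixed angle threshold and the adaptive-threshold parameters are illustrative choices, and OpenCV version 4 is assumed); it is not the implementation of any of the trackers cited above, and it omits the error-checking and corner-suppression steps a practical tracker would need.
\begin{verbatim}
import cv2
import numpy as np

def classical_endpoint_proposals(gray, angle_thresh_deg=60.0, step=5):
    # Adaptive thresholding, then keep the largest connected contour
    # (illustrative parameter values; they typically need manual re-tuning
    # for each imaging condition, which is the drawback discussed above).
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 51, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).squeeze(1)  # (N, 2) points

    # Turning angle at every contour point, measured over a fixed step.
    prev_pts = np.roll(contour, step, axis=0).astype(float)
    next_pts = np.roll(contour, -step, axis=0).astype(float)
    v1, v2 = prev_pts - contour, next_pts - contour
    cosang = (v1 * v2).sum(axis=1) / (np.linalg.norm(v1, axis=1) *
                                      np.linalg.norm(v2, axis=1) + 1e-8)
    angles = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # Small interior angles mean sharp corners; the two sharpest are returned
    # as the tail and head proposals (no suppression of neighbouring points).
    candidates = [i for i in np.argsort(angles) if angles[i] < angle_thresh_deg]
    return [tuple(contour[i]) for i in candidates[:2]]
\end{verbatim}
A sketch of this kind makes the dependence on hand-set thresholds explicit, which motivates the learned, threshold-free approach described in Section \ref{sec:Method}.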
\citet{zhan2015automated} takes a different approach to identifying the head: after an initial image preprocessing stage, which includes thresholding and size filtering steps, structures that are present near the head but far from the tail are detected. Figure \ref{fig:worm_img} shows an image of a single worm. The top-right end of the worm is its head and the bottom-left end of the worm is the tail. Under common imaging conditions, the head appears with a less sharp angle than the tail and exhibits a brighter intensity than the tail. Figure \ref{fig:worm_img} shows the proposed head and tail locations based on the method in \cite{yemini2013database}. The proposed locations do not identify the worm's actual head or tail in the image, demonstrating a common drawback of existing software packages. Accurate detection requires manually tuning parameters, like angular thresholds or Gaussian blur spread, for each set of imaging conditions. Relying on a threshold for the angular bend parameter can lead to false identification of a worm body bend as either head or tail (see Figure \ref{fig:worm_img}). \begin{figure}[htb] \centering {\includegraphics[width=\linewidth]{Stack-0010_mod.png}} \caption{Head and tail proposals using the approach from \citet{yemini2013database} on an image from our dataset. The red line is the detected contour and the blue points indicate the head/tail proposals.} \label{fig:worm_img} \end{figure} We use a neural-network-based approach that generates head and tail predictions directly and eliminates the need for feature engineering. Our approach is also robust to different lighting conditions and is scalable to different image sizes. The article is organized as follows: In section \ref{sec:Method}, we describe our approach. In section \ref{sec:Data}, we describe dataset collection and preprocessing, and in section \ref{sec:Experiments} we present experimental results. \section{Methodology} \label{sec:Method} Given an image containing a worm, our goal is to output the coordinates of the head and tail (termed `coordinate regression'). We use the method proposed in \citet{dsnt} to perform coordinate regression. Since the worm's head and tail can be anywhere in the image, a successful method should generalize spatially and be trainable end-to-end from labelled numerical coordinates. First, we use a fully convolutional network (based on VGG16 \citet{simonyan2014very}) to generate one heatmap for the tail ($Z_t$) and one heatmap for the head ($Z_h$). The heatmaps are of size $5 \times 5$ pixels for the model that we use in this paper. The heatmaps have high values near the head or tail, respectively, and low values everywhere else. All convolutional layers are shared between head and tail detection except the final convolutional layer. This setup enables the model to share common features and also learn features which are specific to the head and tail. Each heatmap is then normalized, i.e., its values sum to one and are all greater than zero. This is achieved by applying a softmax function over the heatmaps. Each value in a normalized heatmap gives the probability that the corresponding pixel is the location of the head or tail. The normalized heatmaps are given by: \begin{eqnarray} Z'_t = softmax(Z_t)\\ Z'_h = softmax(Z_h) \end{eqnarray} We then use the Differentiable Spatial to Numerical Transform (DSNT) \citet{dsnt} to obtain numerical coordinates from each heatmap.
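As a rough illustration of the normalization and coordinate read-out just described (the properties of the DSNT layer are discussed next), the following PyTorch-style sketch computes coordinates from an unnormalized heatmap. The function name, tensor shapes and toy inputs are our own illustrative choices; this is not the training code of our model, which is the VGG16-based network shown in Figure \ref{fig:Network_used}.
\begin{verbatim}
import torch
import torch.nn.functional as F

def dsnt_coordinates(heatmap):
    # heatmap: (B, H, W) unnormalized scores, e.g. a 5x5 head or tail map.
    b, h, w = heatmap.shape
    # Softmax over all pixels, so each map becomes a probability distribution.
    z = F.softmax(heatmap.view(b, -1), dim=1).view(b, h, w)
    # Coordinate matrices X and Y with entries scaled to (-1, 1).
    xs = (2.0 * torch.arange(w, dtype=z.dtype) + 1.0) / w - 1.0
    ys = (2.0 * torch.arange(h, dtype=z.dtype) + 1.0) / h - 1.0
    X = xs.view(1, 1, w).expand(b, h, w)
    Y = ys.view(1, h, 1).expand(b, h, w)
    # Frobenius inner products <Z', X> and <Z', Y>: the expected pixel location.
    x = (z * X).sum(dim=(1, 2))
    y = (z * Y).sum(dim=(1, 2))
    return torch.stack([x, y], dim=1)   # (B, 2) coordinates in (-1, 1)

# Toy usage with random stand-ins for the head and tail heatmaps of one image.
head_map, tail_map = torch.randn(1, 5, 5), torch.randn(1, 5, 5)
mu_head, mu_tail = dsnt_coordinates(head_map), dsnt_coordinates(tail_map)
# Training would minimise the MSE to the normalized ground-truth coordinates,
# optionally with a regularizer on the heatmap spread (see below).
loss = F.mse_loss(mu_head, torch.zeros(1, 2)) + F.mse_loss(mu_tail, torch.zeros(1, 2))
\end{verbatim}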
The DSNT layer is differentiable, unlike heatmap matching techniques, and preserves spatial content better than fully connected coordinate regression methods. The inputs to the DSNT layer are the normalized heatmaps and coordinate matrices $X$ and $Y$. Each entry of a coordinate matrix represents the coordinate value of the corresponding pixel, scaled between $(-1,1)$ as shown in \cite{dsnt}. The coordinate predictions are calculated as the Frobenius inner product, i.e., element-wise multiplication of $Z'_t$ and $Z'_h$ with the normalized coordinate matrices followed by summation of the entries of the resultant matrix. Tail coordinate predictions are given as: \begin{eqnarray} (x_t, y_t) = \mathbf{\mu_t} = [{\langle Z_t', X \rangle}_F , {\langle Z_t', Y \rangle}_F] \end{eqnarray} The same methodology is used for head coordinate predictions. Direct coordinate prediction from the DSNT layer makes our network trainable end-to-end. We show the network used in Figure \ref{fig:Network_used}. Since the outputs of the DSNT layers are normalized coordinates, the mean squared error (MSE) between predicted and ground-truth coordinates is used as the loss. To control the spread of the predicted heatmaps, a Jensen-Shannon divergence term is used as a regularizer alongside the MSE, as described in \citet{dsnt}. \begin{figure}[htb!] \centering {\includegraphics[width=0.85\linewidth]{Network_1.PNG}} \caption{Neural network used to train the model. For a ``Convolutional'' layer $p \times p \times r$, $p$ is the kernel size and $r$ is the number of output channels.} \label{fig:Network_used} \end{figure} \section{Data Collection and Pre-processing} \label{sec:Data} 600 images ($480 \times 640$ pixels) were selected and downloaded pseudo-randomly from the database described in \citet{javer2018open} across a variety of imaging conditions to minimize overfitting to a specific context. We manually labelled the head and tail of each worm and then applied adaptive thresholding to each image using the OpenCV toolbox. Note that we use thresholding only to detect bounding boxes and not for head and tail localization. A bounding box was obtained around the largest connected component of the thresholded image. We then resized all bounding boxes to $150 \times 150$ pixels. We removed samples whose labels fell outside the bounding box. \section{Experiments} \label{sec:Experiments} After pre-processing, we had a total of 596 images, out of which we selected 70\% (417) as the training images and the remaining 30\% (179) as the validation images. We also applied image augmentation techniques during training to increase the size of the dataset synthetically. Specifically, we added random brightness changes of up to 12.5\% to the images and randomly rotated images by 90/180/270 degrees. We set a learning rate of $5\times10^{-4}$ with the Adam optimizer and trained the model for 600 epochs with a batch size of 64 on an NVIDIA GeForce RTX 2080 Ti GPU. We used the MSE loss and the probability of correct keypoint ($PCK$) accuracy to measure the localization performance of our model. We define the $PCK@p$ metric as the percentage of predicted coordinates that lie within $p$ pixels of the ground-truth label. We report $PCK$ for $p = 7, 15$ and $30$ pixels (note that the bounding boxes have size $150 \times 150$). We ran all experiments 10 times and show the average loss for every epoch in Figure \ref{fig:loss} and the average $PCK@15$ for every epoch in Figure \ref{fig:accu}.
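For concreteness, a minimal sketch of how the $PCK@p$ metric defined above might be computed from predicted and ground-truth pixel coordinates is given below; the array contents are made up purely for illustration, and this is not the evaluation code of the accompanying repository.
\begin{verbatim}
import numpy as np

def pck_at_p(pred, gt, p):
    # pred, gt: (N, 2) arrays of (x, y) coordinates in pixel units, i.e.
    # already mapped back from the normalized (-1, 1) range to the
    # 150 x 150 bounding box.  p: tolerance radius in pixels.
    dist = np.linalg.norm(pred - gt, axis=1)
    return 100.0 * np.mean(dist <= p)

# Illustrative usage with made-up coordinates for four worms.
gt = np.array([[75.0, 80.0], [40.0, 120.0], [10.0, 15.0], [100.0, 60.0]])
pred = gt + np.array([[3.0, -4.0], [10.0, 9.0], [0.5, 1.0], [40.0, 0.0]])
for p in (7, 15, 30):
    print("PCK@%d: %.1f%%" % (p, pck_at_p(pred, gt, p)))
\end{verbatim}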
Code to reproduce our results is available online.\footnote{https://github.com/mansimane/WormML} \begin{figure}[htb] \centering \begin{subfigure}[b]{0.5\textwidth} {\includegraphics[width=\linewidth]{head_tail_accuracy.eps}} \caption{Accuracy} \label{fig:accu} \end{subfigure} \centering \begin{subfigure}[b]{0.5\textwidth} {\includegraphics[width=\linewidth]{train_eval_loss.eps}} \caption{Loss} \label{fig:loss} \end{subfigure} \caption{Training vs evaluation metrics} \end{figure} \begin{table}[htb] \caption{Accuracies for head and tail localization on evaluation images} \label{tab:table} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lc} \toprule & Percentage Accuracy \\ \midrule Head ($PCK @ 7$) & 94.24 $\pm$ 2.09 \\ Head ($PCK @ 15$) & 96.65 $\pm$ 1.60 \\ Head ($PCK @ 30$) & 97.81 $\pm$ 1.02 \\ Tail ($PCK @ 7$) & 85.82 $\pm$ 3.28 \\ Tail ($PCK @ 15$) & 96.98 $\pm$ 1.47 \\ Tail ($PCK @ 30$) & 98.19 $\pm$ 1.03 \\ Average ($PCK @ 7$) & 90.03 $\pm$ 2.38 \\ Average ($PCK @ 15$) & 96.82 $\pm$ 1.48 \\ Average ($PCK @ 30$) & 97.99 $\pm$ 0.89 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} In Table \ref{tab:table}, we show head and tail localization accuracy for different $PCK$ levels. In Figure \ref{fig:train} we show ground truth and predicted coordinates for example training images, and in Figure \ref{fig:eval} we show ground truth and predicted coordinates for example evaluation images. In the evaluation case, there are some examples where our method predicts the head or tail at an incorrect location. On average, however, our model predicts head and tail coordinates within 15 pixels of the ground-truth coordinates $96.82\%$ of the time. For context, the approximate width of the worm body in our images is 15 pixels. \begin{figure}[H] \centering \begin{subfigure}{0.2\textwidth} \includegraphics[width=0.7\textwidth, height=0.7\textwidth]{0_train.png} \label{fig:subim1} \end{subfigure} \begin{subfigure}{0.25\textwidth} \includegraphics[width=0.7\textwidth, height=0.5\textwidth]{0h_train.png} \label{fig:subim2} \end{subfigure} \begin{subfigure}{0.2\textwidth} \includegraphics[width=0.7\textwidth, height=0.7\textwidth]{8_train.png} \label{fig:subim3} \end{subfigure} \begin{subfigure}{0.25\textwidth} \includegraphics[width=0.7\textwidth, height=0.5\textwidth]{8h_train.png} \label{fig:subim4} \end{subfigure} \begin{subfigure}{0.2\textwidth} \includegraphics[width=0.7\textwidth, height=0.7\textwidth]{62_train.png} \label{fig:subim5} \end{subfigure} \begin{subfigure}{0.25\textwidth} \includegraphics[width=0.7\textwidth, height=0.5\textwidth]{62h_train.png} \label{fig:subim6} \end{subfigure} \begin{subfigure}{0.2\textwidth} \includegraphics[width=0.7\textwidth, height=0.7\textwidth]{19_train.png} \label{fig:subim7} \end{subfigure} \begin{subfigure}{0.25\textwidth} \includegraphics[width=0.7\textwidth, height=0.5\textwidth]{19h_train.png} \label{fig:subim8} \end{subfigure} \caption{Head and tail localization on training images and corresponding heatmaps for head predictions. a) Green: ground truth head coordinates, b) Blue: predicted head coordinates, c) Red: ground truth tail coordinates, d) Magenta: predicted tail coordinates} \label{fig:train} \end{figure} \begin{figure}[htb] \centering \begin{subfigure}{0.2\textwidth} \includegraphics[width=0.7\textwidth, height=0.7\textwidth]{47_val.png} \label{fig:subim9} \end{subfigure} \begin{subfigure}{0.25\textwidth} \includegraphics[width=0.7\textwidth, height=0.5\textwidth]{47h_val.png} \label{fig:subim10} \end{subfigure} \begin{subfigure}{0.2\textwidth} \includegraphics[width=0.7\textwidth, height=0.7\textwidth]{17_val.png} \label{fig:subim11} \end{subfigure} \begin{subfigure}{0.25\textwidth} \includegraphics[width=0.7\textwidth, height=0.5\textwidth]{17h_val.png} \label{fig:subim12} \end{subfigure} \begin{subfigure}{0.2\textwidth} \includegraphics[width=0.7\textwidth, height=0.7\textwidth]{1_val.png} \label{fig:subim13} \end{subfigure} \begin{subfigure}{0.25\textwidth} \includegraphics[width=0.7\textwidth, height=0.5\textwidth]{1h_val.png} \label{fig:subim14} \end{subfigure} \begin{subfigure}{0.2\textwidth} \includegraphics[width=0.7\textwidth, height=0.7\textwidth]{0_val.png} \label{fig:subim15} \end{subfigure} \begin{subfigure}{0.25\textwidth} \includegraphics[width=0.7\textwidth, height=0.5\textwidth]{0h_val.png} \label{fig:subim16} \end{subfigure} \caption{Head and tail localization on evaluation images and corresponding heatmaps for head predictions} \label{fig:eval} \end{figure} \section{Conclusion and Future Work} \label{sec:Conclusion} Earlier approaches to head and tail localization were sensitive to lighting conditions and required extensive parameter tuning whenever imaging conditions changed. Here, we proposed an approach which does not require manual tuning of parameters and is robust to the (albeit limited) range of imaging conditions present in our dataset. Although we used a VGG16 network here, other networks such as ResNet \citet{he2016deep} and stacked hourglass networks \citet{newell2016stacked} may improve the performance even further. It is worth noting that the training and evaluation sets used here contain several images per worm, which may not be ideal in practice. We plan to collect more data from a variety of worm genotypes for future training and evaluations. The methodology used in this paper works when there is a single worm in the image. However, we are currently expanding on this work so that the head and tail of multiple worms in a single image can be detected and localized simultaneously.
\section{Introduction} Let $(X,\rho )$ be a compact metric space, and $I=[0,1]$. Denote by $\mathcal C(X)$ the class of continuous maps $X\to X$, and let $\mathcal C$ stand for $\mathcal C(I)$. By $\mathbb N$ and $\mathbb N_0$ we denote the set of positive, or nonnegative integers, respectively. $(X,f)$, with $f\in \mathcal C(X)$, is a {\it topological dynamical system\/}. A {\it nonautonomous (discrete dynamical) system} is a pair $(X,\{ f_n\}_{n\ge 1})$, where $f_n\in\mathcal C (X)$, $n\in\mathbb N$; following \cite{KS} we denote this system by $(X, f_{1,\infty})$. The {\it trajectory} of an $x\in X$ in this system is the sequence $\{x_n\}_{n\ge 0}$, where $x_0=x$ and $x_n=(f_n\circ f_{n-1}\circ\cdots\circ f_1)(x)$. The set of limit points of the trajectory of a point $x$ is its {\it $\omega$-limit set}; we denote it by $\omega_{f_{1,\infty}}(x)$. If $f_n=f$ for every $f_n\in f_{1,\infty}$ then $(X,f_{1,\infty})=(X,f)$. Nonautonomous systems are closely related to {\it skew-product maps\/} $F: X\times Y\to X\times Y$, with $X,Y$ compact metric spaces; for details see, e.g., \cite{KS}, a pioneering work dealing with nonautonomous systems, motivated just by open problems concerning skew-product maps. In particular, \cite{KS} deals with topological entropy, which can be defined for nonautonomous systems similarly as for autonomous ones. We denote by $h(f)$ or $h(f_{1,\infty})$ the topological entropy of a map $f$, or $f_{1,\infty}$, respectively. Let $\{x_n\}_{n\ge 0}$, $\{y_n\}_{n\ge 0}$ be trajectories of points $x,y\in X$, and $\varepsilon >0$. Then $(x,y)$ is an $\varepsilon$-{\it Li-Yorke pair} if $\limsup _{n\to\infty} \rho (x_n,y_n)\ge\varepsilon$ and $ \liminf _{n\to\infty}\rho (x_n, y_n)=0$. For $x,y\in X$ define $\Phi_{xy}, \Phi^*_{xy}:(0,\infty) \to I$ by \begin{equation} \label{equ12} \Phi _{xy}(t):=\liminf _{n\to\infty}\frac 1n\#\{ 0\le j<n; \rho (x_j,y_j)<t\}, \ \Phi ^*_{xy}(t):=\limsup _{n\to\infty}\frac 1n\#\{ 0\le j<n; \rho (x_j,y_j)<t\}. \end{equation} The system $(X,f)$ or $(X, f_{1,\infty})$ is {\it Li-Yorke chaotic}, briefly LYC, if there is an $\varepsilon >0$, and an uncountable {\it scrambled set} $S$ such that any two distinct points $x,y\in S$ form an $\varepsilon$-Li-Yorke pair; it is {\it distributionally chaotic}, briefly DC1, if there is an $\varepsilon>0$, and an uncountable set $S$ such that for any two distinct points $x,y\in S$, $\Phi _{xy}(\varepsilon)=0$ and $\Phi^*_{xy}\equiv 1$. Notice that $(I,f)$ is DC1 if and only if $h(f)>0$, see \cite{ScSm}. \medskip Our paper is inspired by \cite{KS} (see also \cite{C}) where relations between systems $(I,f_{1,\infty})$ and $(I,f)$ such that $f_{1,\infty}$ uniformly converges to $f$ are considered. Since a single constant function in $f_{1,\infty}$ can destroy more complex behavior, even in the case when the limit system $(I,f)$ has complicated dynamics, {\it in this paper we assume that $f$ and all maps in $f_{1,\infty}$ are surjective}. With this condition, for example, $h(f)>0$ implies $h(f_{1,\infty})>0$ \cite{C}; without it we have only $h(f)\ge h(f_{1,\infty})$ \cite{KS}. Consequently, if $h(f)>0$ then it is possible to show directly that the nonautonomous system is DC1 (we obtain this result indirectly from Theorem 3.2). Therefore our paper is devoted to systems with zero topological entropy. The proofs are based on ``classical'' papers concerning chaos and structure of $\omega$-limit sets of maps $f\in\mathcal C$ with $h(f)=0$, \cite{Sh}, \cite{S}, \cite{FShSm}, \cite{BS}.
Our main result is Theorem C; we show that $(I,f_{1,\infty})$ is LYC if $(I,f)$ is LYC. In some cases, the nonautonomous system inherits stronger forms of chaos (Theorem B) and infinite $\omega$-limit sets (Theorem A). Note that $(I,f)$ need not be LYC or DC1 if $(I,f_{1,\infty})$ is, see \cite{FPS}. Theorem A is interesting in itself and makes it possible to prove other results more transparently.\\ {\bf Theorem A.} {\it Let $(I, f_{1,\infty})$ be a surjective nonautonomous system, and let $f_{1,\infty}$ converge uniformly to a map $f$. If $h(f)=0$ then every infinite $\omega$-limit set of $f$ is an $\omega$-limit set for $f_{1,\infty}$.}\\ {\bf Theorem B.} {\it Let $(I, f_{1,\infty})$ be a surjective nonautonomous system, and $f_{1,\infty}$ converge uniformly to a map $f$. Then $(I,f_{1,\infty})$ is DC1 if one of the following conditions is satisfied: (i) $h(f)>0$ (or equivalently, $f$ is DC1); (ii) $f$ has a minimal set $\widetilde\omega$ such that $f|_{\widetilde\omega}$ is not Lyapunov stable.} \smallskip \noindent Recall that $f$ is {\it Lyapunov stable} if for every $\varepsilon >0$ there is a $\delta >0$ such that $|x-y|<\delta$ implies $|f^n(x)-f^n(y)|<\varepsilon$, for every $n$.\\ {\bf Theorem C.} {\it Let $(I, f_{1,\infty})$ be a surjective nonautonomous system, and $f_{1,\infty}$ converge uniformly to a map $f$. If $f$ is LYC then also $(I,f_{1,\infty})$ is LYC.}\\ {\bf Remarks.} Obviously, Theorem A is not valid for finite $\omega$-limit sets. Theorems B and C cannot be strengthened in the sense that the nonautonomous system would inherit chaos with extremal properties like big scrambled sets. For example, a map in $\mathcal C$ can have a DC1 scrambled set whose complement has zero Hausdorff dimension \cite{OS}, but this need not be inherited by a nonautonomous system, see \cite{Dv}. Theorem B is interesting since there are functions $f\in\mathcal C$ with $h(f)=0$ satisfying condition (ii), see \cite{FShSm} or \cite{BS}. \medskip \section{Proof of Theorem A} A compact set $A\subseteq X$ is {\it $f$-periodic of period} $m$, where $f\in\mathcal C(X)$, if $f^j(A)$ are pairwise disjoint, for $0\le j<m$, and $f^m(A)=A$. \medskip {\bf Theorem 2.1.} (See \cite{S}.) {\it Let $f\in\mathcal C$ with $h(f)=0$, and let $\widetilde\omega$ be an infinite $\omega$-limit set of $f$. Then there is a system $\{J(k,n); 0\le k<2^n\}_{n\ge 0}$ of $f$-periodic intervals in $I$ such that, for any $k, n\in\mathbb N_0$, \smallskip (i) $f(J(k,n))=J(k+1,n)$ where $k+1$ is taken ${\rm mod} \ 2^n$; \smallskip (ii) $J(k,n)$ has period $2^n$; \smallskip (iii) $J(k,n+1)\cup J(2^n+k, n+1)\subset J(k,n)$; \smallskip (iv) $\widetilde\omega\subset\bigcup _{0\le k<2^n} J(k, n) =: O_n$.}\\ \noindent Obviously we may assume that the intervals $J(k,n)$ are the minimal ones in the sense of inclusion. In this case, the collection of all $J(k,n)$ is the {\it system associated to} $\widetilde\omega$; we denote it by $\mathcal J_f(\widetilde\omega)$, or simply by $\mathcal J$. The system \begin{equation} \label{sup01} \{ \widetilde\omega (k,n):=J(k,n)\cap\widetilde\omega\}_{0\le k<2^n}, \ k,n\in\mathbb N, \end{equation} is {\it the cyclic decomposition of} $\widetilde\omega$ {\it of degree} $n$. Since $f(\widetilde\omega)=\widetilde\omega$, by Theorem 2.1 \begin{equation} \label{sup02} \{\widetilde\omega(k,n)\}_{0\le k<2^n} \ \text{forms an} \ f \text{-periodic orbit} \ \text{of period} \ 2^n, \ \text{and} \ \bigcup _{0\le k<2^n} \widetilde\omega(k,n)=\widetilde\omega.
\end{equation} For the cyclic decomposition (\ref{sup02}), and $k,n\in\mathbb N$, $0\le k<2^n$, let $K(k,n)\subset J(k,n)$ be the compact interval between the sets $\widetilde\omega(k,n+1)$ and $\widetilde\omega(2^n+k,n+1)$ (which are neighbor sets in the cyclic decomposition of $\widetilde\omega$ of degree $n+1$). This $K(k,n)$ is {\it a complementary interval to} $\widetilde\omega$ of degree $n$.\\ {\bf Lemma 2.2.} {\it Assume that $f\in\mathcal C$, $h(f)=0$, and $\widetilde\omega$ is an infinite $\omega$-limit set of $f$. Then \begin{equation} \label{sup03} f^{2^{n+2}}(K(k,n))\supset\widetilde\omega (k,n), \ k, n\in\mathbb N, \ 0\le k<2^n. \end{equation}} \smallskip {\bf Proof.} We may assume that $\widetilde\omega(k,n+1)<\widetilde\omega(2^n+k,n+1)$ where $<$ indicates the natural ordering of (disjoint) sets. Let $\widetilde\omega(k_0,n+2)< \widetilde\omega(k_1,n+2)<\widetilde\omega(k_2,n+2)<\widetilde\omega(k_3,n+2)$ be the sets from the cyclic decomposition of $\widetilde\omega$ of degree $n+2$ contained in $\widetilde\omega(k,n)$ so that $\widetilde\omega(k_0,n+2)\cup\widetilde\omega(k_1,n+2)\subset \widetilde\omega(k,n+1)$ and $\widetilde\omega(k_2,n+2)\cup\widetilde\omega(k_3,n+2)\subset \widetilde\omega(2^n+k,n+1)$. Since $\widetilde\omega(k,n)$ is an $\omega$-limit set of $f^{2^n}$ and no point in $\widetilde\omega$ is periodic, the interval between $f^{2^n}(u)$ and $f^{2^n}(v)$, where $u,v$ are the endpoints of $K(k,n)$, must contain one of the sets $\widetilde\omega(k_j,n+2)$, $0\le j<4$. Consequently, the interval between $f^{2^{n+1}}(u)$ and $f^{2^{n+1}}(v)$ contains two of the sets, the interval between $f^{3\cdot 2^{n}}(u)$ and $f^{3\cdot2^{n}}(v)$ three of the sets, and (\ref{sup03}) follows. $\hfill\Box$\\ {\bf Lemma 2.3. } (Itinerary lemma.) {\it Let $f_{1,\infty}$ be a sequence of maps in $\mathcal C(X)$, and $F_{1,\infty}$ a sequence of nonempty compact subsets of $X$ such that, for every $n\in\mathbb N$, $f_n(F_n)\supseteq F_{n+1}$. Then there is an $x$ such that $x_n\in F_n$, $n\in\mathbb N$, where $x_1,x_2,\cdots$ is the trajectory of $x$ in the nonautonomous system. \smallskip} {\bf Proof} is easy. $\hfill\Box$\\ {\bf Lemma 2.4.} {\it Assume $f\in\mathcal C$ with $h(f)=0$, $\omega _f(z)=:\widetilde\omega$ is infinite, and $p$ is an isolated point of $\widetilde\omega$. Then there is a cluster point $a_p$ of $\widetilde\omega$ such that the interval $J_p$ with endpoints $p$ and $a_p$ is a wandering interval (i.e., $f^i(J_p)\cap f^j(J_p)=\emptyset$ if $i\ne j$), and for every neighborhood $U$ of $p$ and every $m\in\mathbb N$ there is a $q\in\mathbb N$ divisible by $2^m$ such that $f^q(U)$ is a neighborhood of $J_p$.}\\ {\bf Proof.} This result, in a different setting, is a part of Lemma 2.9 in \cite{S}. For convenience, we provide an outline of the argument. Let $J(k_0,0)\supset J(k_1,1)\supset \cdots\supset J(k_n,n)\supset\cdots$ be the intervals in $\mathcal J_f(\widetilde\omega)$ containing $p$. Then $\bigcap _{n\ge 0} J(k_n,n)=:J_p$ is a wandering interval with endpoints $p$ and $a_p\in\widetilde\omega$; moreover, $a_p$ is a cluster point of $\widetilde\omega$, see \cite{Sh} (cf. also \cite{BS}) so that $p$ is an endpoint of every $\widetilde\omega(k_n, n)$ with $n\ge n_0$. Let $z_{s(1)}, z_{s(2)}, z_{s(3)}, \cdots$ be a monotone subsequence of points in the trajectory of $z$ with $\lim _{i\to\infty}z_{s(i)}=p$; obviously, $z_{s(i)}\notin J_p$, $i\ge 1$. Let $z_{s(k)}\in U$.
If $p$ is an endpoint of $\widetilde\omega (k_0,0)$ then, since $J_p$ is a wandering interval, $J_p$ is contained in the open interval $U^\prime$ with endpoints $z_{s(k+1)}$ and $f^{s(k+1)-s(k)}(p)$. To finish we may assume $m\ge n_0$. Then $p$ is an endpoint of $\widetilde\omega (k_m,m)$, and application of the above process to $g:=f^{2^m}$ completes the argument. $\hfill\Box$\\ {\bf Proof of Theorem A.} Denote by $P$ the set of isolated points of $\widetilde\omega$ and consider two possible cases. \noindent {\it Case 1.} $P=\emptyset$ so that $\widetilde\omega$ is a minimal set of $f$. For every $m,j\in\mathbb N$, $m\ge 1$, denote $f_{m}^j:=f_{m+j-1}\circ f_{m+j-2}\circ \cdots\circ f_{m+1}\circ f_m$. Since $f_{1,\infty}$ converges uniformly to $f$, by (\ref{sup03}) there is an $m(n)$ such that \begin{equation} \label{sup04} f_m^{2^{n+2}}(K(k,n))\supset K(k,n+1)\cup K(2^n+k,n+1), \ 0\le k<2^n, \ m\ge m(n), \ k, m,n\in\mathbb N; \end{equation} notice that $f_m^{2^{n+2}}(K(k,n))$ is a neighborhood of $K(k,n)$. Choose $c_n$ such that \begin{equation} \label{301} m(n+1)-m(n)\le 2^nc_n, \ n, c_n\in\mathbb N, \end{equation} where $m(n)$ is as in (\ref{sup04}). To simplify the notation let $K_n$ be the finite sequence $K(0,n),K(1,n),\cdots , K(2^n-1,n)$ of all $2^n$ intervals $K(k,n)$ of degree $n$. We wish to apply Itinerary lemma to the sequence \begin{equation} \label{sup05} F_{m(0),\infty}=\underbrace{K_0,K_0, \cdots , K_0}_\text{$c_0$-times},\underbrace{K_1,K_1, \cdots , K_1}_\text{$c_1$-times},\cdots ,\underbrace{K_n,K_n, \cdots , K_n}_\text{$c_n$-times},\cdots. \end{equation} Obviously, $f_j(F_j)\supseteq F_{j+1}$ if $f=f_j$ and, by (\ref{301}), if $F_j=K(k,n)$, for some $k, n$. However, if the numbers $c_n$ are rapidly increasing, the inclusions will be satisfied \lq\lq approximately\rq\rq so that, for every $j$, $F_{j+1}$ is contained in the $\delta_j$ neighborhood of $f_j(F_j)$, where $\delta _j\to 0$. Apply Itinerary lemma to (\ref{sup05}), and $(I,f)$ or $(I,f_{m(0),\infty})$, respectively, to get points $x$ and $x^\prime$ in $K(0,0)$. The trajectory of $x$ passes the sets in (\ref{sup05}) exactly, while the trajectory of $x^\prime$ hits exactly the sets $K(0,n)$. The trajectories $\{x_j\}_{j\ge m(0)}$ and $\{x_j^\prime\}_{j\ge m(0)}$ of these points are proximal since $\delta _j\to 0$ so that both must have the same $\omega$-limit set $\widetilde\omega^\prime$. But $\omega _f(x)=\widetilde\omega$ since by (\ref{sup05}) the trajectory of $x$ can have only finitely many members in the set $\bigcup _{0\le k<2^n}K(k,n)$ so that, by Lemma 2.4, $\omega_f(x)$ contains no isolated points. Since every $f_n$ is surjective, $\omega_{f_{1,\infty}}(z)=\widetilde\omega$ for some $z\in I$. \smallskip {\it Case 2.} $P\ne\emptyset$. In the proof we need facts which are contained implicitly in the literature, see \cite{Sh}, \cite{S}, \cite{FShSm}, \cite{BS}; to make the proof self-contained, we recall some of them with brief arguments. Let $\widetilde\omega=\omega _f(z)$, and let $\{z_j\}_{j\ge 0}$ be the trajectory of $z$. Since a point in $P$ cannot be periodic it has a preimage in $P$ so that $P$ is countably infinite. 
Since the intervals in $\mathcal J$ are periodic, there are $j_n\in\mathbb N$ such that \begin{equation} \label{sup06} z_j\in O_n \ \ \text{if} \ \ j\ge j_n, \ \text{and} \ z_j\notin O_n\setminus O_{n+1} \ \text {if} \ j\ge j_{n+1}, \ \text{where} \ j_{n+1}>j_n, \ j,n\in\mathbb N, \end{equation} where $O_n$ is the corresponding orbit of the intervals $J(k,n)$, $k\le 2^n-1$, as in Theorem 2.1. To see this note that, by Theorem 2.1 (iv) and Lemma 2.4, $\bigcap _{n \ge 1}O_n\setminus \widetilde\omega$ is the union of wandering intervals. It follows that for every $j$ there is a point $p_j$ such that the interval with endpoints $p_j$ and $z_j$ intersects $\widetilde\omega$ exactly at $p_j$; denote this interval by $L_j$ and notice that $p_j$ need not be in $P$, since the image of an isolated point need not be isolated, see also \cite{BS}. Obviously, $L_{j+1}$ has endpoints $z_{j+1}$ and $p_{j+1}:=f(p_j)$ so that $f(L_j)\supseteq L_{j+1}$. Since $L_j$ has just one point, $p_j$, in common with the wandering interval $J_{p_j}$, and $L_j\cup J_{p_j}$ is a neighborhood of $p_j$, Lemma 2.4 applies to $U:=L_j$. Therefore \begin{equation} \label{sup07} f^{j_{n+1}-j_n}(L_{j_{n}})\supset K(k_n, n+1)\supset L_{j_{n+1}}. \end{equation} For simplicity, denote by $\widetilde K_n$ the finite sequence $K(k_n,n+1), K(k_n+1,n+1), K(k_n+2,n+1),\cdots , K(2^{n+1}+k_n-1,n+1)$ which consists of the first $2^{n+1}$ sets in the $f$-trajectory of $K(k_n,n+1)$, and by $\widetilde L_n$ the finite sequence $L_{j_n}, L_{j_n+1},\cdots ,L_{j_{n+1}-1}$ of $j_{n+1}-j_n$ members of the $f$-trajectory of $L_{j_n}$. By Lemma 2.2 and (\ref{sup07}), Itinerary lemma applied to $f$ and \begin{equation} \label{sup08} \widetilde L_0, \underbrace {\widetilde K_0,\widetilde K_0,\cdots , \widetilde K_0}_\text{$c_0$-times}, \widetilde L_{1},\underbrace {\widetilde K_1,\widetilde K_1,\cdots , \widetilde K_1}_\text{$c_1$-times},\widetilde L_2,\cdots , \widetilde L_{n},\underbrace {\widetilde K_n,\widetilde K_n,\cdots , \widetilde K_n}_\text{$c_n$-times},\widetilde L_{n+1}, \cdots \end{equation} yields a point $x$ such that $\omega _f(x)=\widetilde\omega$ since its trajectory passes through $\widetilde L_0,\widetilde L_1, \cdots , \widetilde L_n, \widetilde L_{n+1},\cdots$. The inserted blocks $\widetilde K_n$ in (\ref{sup08}) contain only finitely many sets of type $K(i,n)$ which by Lemma 2.4 cannot generate new isolated points. Similarly as in Case 1, replace the sequence $c_0,c_1,\cdots$ in (\ref{sup08}) by a more rapidly increasing sequence $\widetilde c_0,\widetilde c_1,\cdots$ if necessary, and apply Itinerary lemma to $f_{m,\infty}$ where $m$ is sufficiently large. This gives a point $x^\prime$ such that $\omega_{f_{m,\infty}} (x^\prime)=\widetilde\omega$. $\hfill\Box$ \section{Proofs of Theorems B and C.} Recall that a map $f\in\mathcal C(X)$ has a {\it horseshoe} if there are disjoint nonempty compact sets $U,V$, and $m\in\mathbb N$ such that $f^m(U)\cap f^m(V)\supseteq U\cup V$. The following is a strictly weaker notion.
\medskip {\bf Definition 3.1.} A map $f\in\mathcal C(X)$ has a {\it quasi horseshoe} if there are $\varepsilon >0$, compact sets $U_k, V_k$, and positive integers $m_k$, for $k\in\mathbb N_0$, with the following properties: \smallskip (i) dist $(U_k, V_k)\ge \varepsilon$; \smallskip (ii) $\lim_{k\to\infty} {\rm diam}(U_k)=\lim_{k\to\infty} {\rm diam}(V_k)=0$; \smallskip (iii) $f^{m_k}(U_k)$ is a neighborhood of $U_k\cup U_{k+1}\cup V_{k+1}$, and $f^{m_k}(V_k)$ a neighborhood of $V_k\cup V_{k+1}\cup U_{k+1}$.\\ {\bf Theorem 3.2.} {\it Let $f, f_k\in\mathcal C(X)$ be surjective maps, for $k\in\mathbb N$, and let $f_{1,\infty}$ converge uniformly to $ f$. If $f$ has a quasi horseshoe then $(X,f_{1,\infty})$ is distributionally (DC1) chaotic.}\\ {\bf Proof.} Keep the notation from Definition 3.1 and denote by $\widetilde U_k$ the finite sequence $U_k, f(U_k), f^2(U_k),$ $\cdots , f^{m_k-1}(U_k)$ of $m_k$ compact sets, and similarly with $\widetilde V_k$. Let $\Sigma _2=\{ 0,1\}^\mathbb N$. For $\alpha=\{a_k\}_{k\ge 0}\in\Sigma_2$ consider the itinerary \begin{equation} \label{401} I_\alpha: =\underbrace {B_0, B_0,\cdots, B_0}_{c_0{\text -times}},\underbrace { B_1, B_1\cdots, B_1}_{c_1{\text -times}}, \cdots, \underbrace { B_k, B_k\cdots, B_k}_{c_k{\text -times}}, \cdots , \end{equation} where \begin{equation} \label{402} B_k=\widetilde U_k \ \text {if} \ a_k=0, \ \text{and} \ B_k=\widetilde V_k \ \text {if} \ a_k=1, \ k\in\mathbb N_0. \end{equation} If the numbers $c_k$ are sufficiently large then by Itinerary lemma, similarly as in the proof of Theorem A, there is an $x_\alpha\in U_0\cup V_0$ with itinerary $I_\alpha$ in $f_{1,\infty}$. Let $\Sigma^\prime_2\subset\Sigma _2$ be an uncountable set such that, for every distinct $\{a_k\}_{k\ge 0}$ and $\{b_k\}_{k\ge 0}$ in $\Sigma^\prime _2$, we have $a_k=b_k$ for infinitely many $k$, and $a_k\ne b_k$ for infinitely many $k$; such a set exists, see, e.g., \cite{S}. Let $S=\{x_\alpha; \alpha\in \Sigma _2^\prime\}$ and assume that the numbers $c_k$ are increasing so rapidly that $\lim _{k\to\infty} c_k/c_{k+1}=0$. Then it is easy to verify that $S$ is a DC1 scrambled set for $f_{1,\infty}$ such that, for every $x\ne y$ in $S$, $\Phi_{xy}(\varepsilon )=0$ and $\Phi^*_{xy}\equiv 1$. $\hfill\Box$ \medskip The next theorem improves a result from \cite{S} that a LYC map $f\in\mathcal C$ has a system of intervals similar to that in Definition 3.1, except that condition (iii) is replaced by $f^{m_k}(U_k)\cap f^{m_k}(V_k)\supset U_{k+1}\cup V_{k+1}$. The stronger property is necessary in our proof of Theorem 3.2.\\ {\bf Theorem 3.3.} {\it Let $f\in\mathcal C$ have a minimal set $\widetilde\omega$ such that $f|_{\widetilde\omega}$ is not Lyapunov stable. Then $f$ has a quasi horseshoe.}\\ {\bf Proof.} We may assume $h(f)=0$ since otherwise $f$ has a horseshoe. By Theorem 2.1 there are $J(k_n,n)\in\mathcal J(\widetilde\omega)$ such that $J(k_{n+1},n+1)\subset J(k_n,n)$ and $\bigcap _{n\ge 0}J(k_n,n)=J$ is a non degenerate wandering interval; otherwise $f|_{\widetilde\omega}$ would be Lyapunov stable. Let $U_n:=K(k_{2n},2n)$ and $V_n:=K(k_{2n+1},2n+1)$, $n\in\mathbb N_0$. By Lemma 2.2 there are numbers $m_k$ such that $U_k, V_k, m_k$, $k\in\mathbb N_0$, form a quasi horseshoe for $f$ with $\varepsilon = |J|$, the length of $J$. $\hfill\Box$\\ {\bf Theorem 3.4.} {\it Let $f, f_k\in\mathcal C$, $k\in\mathbb N$, be surjective maps such that $f_{1,\infty}$ converges uniformly to $f$.
If $f$ has an infinite $\omega$-limit set with isolated points then $(I, f_{1,\infty})$ is LYC.}\\ {\bf Proof.} Let $p\in\widetilde\omega:=\omega _f(z)$ be an isolated point, and let $J_p$ with endpoints $p$ and $a_p$ be as in Lemma 2.4. We show that there are sequences of compact intervals $K_j, P_j$, and positive integers $r_j, q_j$ such that \begin{equation} \label{nova420} p\in P_j, \ f^{r_j}(K_j) \ \text{is a neighborhood of} \ K_j\cup P_{j}, \ f^{q_j}(P_j) \ \text{is a neighborhood of} \ K_{j+1}, \ \text{and} \ r_j|q_j, \ j\in\mathbb N. \end{equation} To see this put $K_1=K(k_1,1)$. Let $J(k_n,n)$ be the intervals containing $p$ so that $\bigcap _{n\ge 0} J(k_n,n)=:J_p$. Since $\widetilde\omega\subset J(k_0,0)$, Lemma 2.2 implies $f^{r_1}(K_1)\supset \widetilde\omega$ (where $r_1=4$), and since $J_p$ is a subset of the convex hull of $\widetilde\omega$, $f^{r_1}(K_1)$ must contain infinitely many points from the trajectory of $z$ hence a small neighborhood of $p$; denote it $P_1$. By Lemma 2.4 we get $q_1$ divisible by $r_1$ such that $f^{q_1}(P_1)\supset J(k_0,n_2)\supset J_p$, and put $K_2:=K(0,n_2)$. By induction we get (\ref{nova420}) such that $r_j$ are powers of $2$. Denote \begin{equation} \label{nova421} B_j:=K_j, f(K_j), f^2(K_j),\cdots ,f^{r_j-1}(K_j),\ \text {and} \ D_j:=P_j, f(P_j),f^2(P_j), \cdots ,f^{q_j-1}(P_j), \ j\in\mathbb N. \end{equation} and consider the itinerary \begin{equation} \label{405} \underbrace {B_1,B_1,\cdots , B_1}_\text{$c_1${ -times}},X_1, \underbrace {B_2,B_2,\cdots , B_2}_\text{$c_2${ -times}},X_2, \cdots , \underbrace {B_k,B_k,\cdots , B_k}_\text{$c_k${ -times}},X_k, \cdots, \end{equation} where $c_k\in\mathbb N$, and \begin{equation} \label{406} X_k=D_k=:X^0_k \ \ \text{or} \ \ X_k=\underbrace{B_k,B_k, \cdots ,B_k}_\text{$q_k/r_k$ -times}=:X^1_k, \end{equation} so that the blocks $X^0_k$ and $X^1_k$ have the same length $q_k$. By (\ref{nova420}), $q_k/r_k$ is an integer, so this definition is correct. Let $\Sigma^\prime _2\subset\{0,1\}^\mathbb N$ be an uncountable set such that any two distinct sequences from $\Sigma _2^\prime$ have different coordinates at infinitely many places. For $\beta =\{b_k\}_{k\ge 1}$ in $\Sigma _2^\prime$ let $x_\beta$ be a point in $I$ with trajectory (\ref{405}) such that $X_k=X_k^{b_k}$, $k\in\mathbb N$. If the numbers $c_k$ increase sufficiently rapidly then (\ref{405}) is the itinerary of a point $x^\prime_\beta$ for the nonautonomous system $f_{1,\infty}$, similarly as in the proof of Theorem A. Then $S=\{x^\prime_\beta; \beta\in \Sigma _2^\prime\}$ is an uncountable scrambled set, hence $(I,f_{1,\infty})$ is LYC, with $\varepsilon=|J_p|$. $\hfill\Box$ \medskip {\bf Proof of Theorem B.} The result follows by Theorems 3.3 and 3.2 since $f$ has a horseshoe if $h(f)>0$. $\hfill\Box$ \medskip {\bf Proof of Theorem C.} By Theorem B we may assume that $h(f)=0$. Since $f$ is LYC, it has an infinite $\omega$-limit set $\widetilde\omega$ such that $f$ is not Lyapunov stable on it, see \cite{FShSm}. If $\widetilde\omega$ has isolated points then the result follows by Theorem 3.4. Otherwise $\widetilde\omega$ is a minimal set; apply Theorem B. $\hfill\Box$ \section{Concluding remarks} There are open problems related to our results. We point out two of them. We assume $(I,f_{1,\infty})$ is a surjective system converging uniformly to $(I,f)$. Then $f$ can be the identity map even if $(I,f_{1,\infty})$ is chaotic, see, e.g., \cite{FPS}. In \cite{C} it is proved that if $(I,f_{1,\infty})$ is LYC then $f$ is LYC provided it has the shadowing property.
But this condition eliminates maps $f$ with $h(f)=0$, see \cite{GK}. On the other hand, by Theorem B, if $h(f)>0$, then $f_{1,\infty}$ must even be DC1. \smallskip {\it Problem 1.} Assume $(I,f_{1,\infty})$ is LYC and $h(f_{1,\infty})=0$. Find a condition for $f_{1,\infty}$ that is necessary and sufficient for $f$ to be LYC. \smallskip Uniform convergence of $f_{1,\infty}$ to a map in $\mathcal C$ is essential to ensure that $h(f_{1,\infty})>0$ implies $(I,f_{1,\infty})$ is DC1: in \cite{SmSt} there is an example of a skew-product map $F:I^2\to I^2$ with $h(F)>0$ which is DC2, but not DC1. Recently T. Downarowicz \cite{D} proved that $h(f)>0$ implies DC2, for every $f\in\mathcal C(X)$. Recall that $(X,f)$ is DC2 if there is an uncountable set $S$ such that, for every distinct $x, y\in S$, $\Phi_{xy}<\Phi_{xy}^*\equiv 1$, cf. (\ref{equ12}). \smallskip {\it Problem 2.} Assume $(I,f_{1,\infty})$ has positive topological entropy and $f_{1,\infty}$ converges {\it pointwise} to a map in $\mathcal C$. Is it DC2? We conjecture that $(I,f_{1,\infty})$ must have a DC2-pair. \bigskip {\bf Acknowledgement.} The author would like to thank Prof. J. Sm\'{\i}tal for fruitful discussions and valuable comments.
\section{Analysis of Offline Local Search Algorithms for Facility Location} \label{appendix:local-search} In this section, we prove theorems related to the local search algorithms for facility location. \subsection{Local Search for facility location} \uflapprox* \begin{proof} This proof is almost identical to the analysis of the $\alpha_{\mathsf{FL}}$-approximation local search algorithm for facility location, except that we take $\phi$ into consideration in all the inequalities. Eventually we shall have an $\alpha_{\mathsf{FL}}|C|\phi$ term on the right side of the inequality. Formally, we let $(S^*, \sigma^*)$ be the optimum solution to the facility location instance. Focus on an $i^* \in S^*$. Since there is no $\phi$-efficient operation that opens $i^*$ (recall that we can open $i^*$ even if $i^*\in S$), we have \begin{align*} \sum_{j \in \sigma^{*-1}(i^*) }d(j,\sigma_j) \le \lambda f_{i^* } \cdot 1_{i^*\notin S} + \sum_{j \in \sigma^{*-1}(i^*)} (d(j,i^*) + \phi) .\end{align*} This implies \begin{align} \sum_{j \in \sigma^{*-1}(i^*) } d(j,\sigma_j) \le \lambda f_{i^*} + \sum_{j \in \sigma^{*-1}(i^*)} d(j,i^*) + |\sigma^{*-1}(i^*)|\phi. \label{inequ:ufl-open} \end{align} Summing the inequalities over all $i^* \in S^*$ gives us \begin{align} \mathsf{cc}(\sigma) \leq \lambda f(S^*) + \mathsf{cc}(\sigma^*) + |C|\phi. \label{inequ:ufl-C} \end{align} For every $i \in S$, let $\psi(i)$ be the nearest facility in $S^*$ to $i$. For every $i^* \in S^*$ with $\psi^{-1}(i^*) \neq \emptyset$, let $\psi^*(i^*)$ be the nearest facility in $\psi^{-1}(i^*)$ to $i^*$. Focus on some $i \in S$ and $i^* = \psi(i)$ such that $\psi^*(i^*) = i$. The operation that swaps in $i^*$, swaps out $i$ and connects $\sigma^{*-1}(i^*) \cup \sigma^{-1}(i)$ to $i^*$ is not $\phi$-efficient. This implies \begin{align*} &\quad \lambda f_i + \sum_{j \in \sigma^{*-1}(i^*) \cup \sigma^{-1}(i)} d(j,\sigma_j) \\ & \le \lambda f_{i^*} + \sum_{j \in \sigma^{*-1}(i^*)}d(j, i^*) + \sum_{j \in \sigma^{-1}(i) \setminus \sigma^{*-1}(i^*)}d(j, i^*) + \big|\sigma^{*-1}(i^*) \cup \sigma^{-1}(i)\big|\phi \\ & \le \lambda f_{i^*} + \sum_{j \in \sigma^{*-1}(i^*)} d(j,i^*) + \sum_{j \in \sigma^{-1}(i) \setminus \sigma^{*-1}(i^*)} [d(j,\sigma^*(j)) + 2d(j,i)] + \big|\sigma^{*-1}(i^*) \cup \sigma^{-1}(i)\big|\phi. \end{align*} To see the second inequality, notice that $d(j, i^*) \leq d(j, i) + d(i, i^*) \leq d(j, i) + d(i, \sigma^*(j)) \leq 2d(j, i) + d(j, \sigma^*(j))$. Canceling $\sum_{j \in \sigma^{-1}(i) \setminus \sigma^{*-1}(i^*)} d(j, i)$ on both sides and relaxing the right side a bit gives us \begin{align} \quad \lambda f_i + \sum_{j \in \sigma^{*-1}(i^*)} d(j,\sigma_j)&\leq \lambda f_{i^*} + \sum_{j \in \sigma^{*-1}(i^*)}d(j, i^*) + \big|\sigma^{*-1}(i^*) \cup \sigma^{-1}(i)\big|\phi \nonumber\\ &+ \sum_{j \in \sigma^{-1}(i)} \left(d(j,i) + d(j, \sigma^*(j))\right). \label{inequ:ufl-swap} \end{align} Notice that it could happen that $i = i^*$ in the above setting; in this case the inequality is implied by the operation that opens $i = i^*$ and connects $\sigma^{*-1}(i^*)$ to $i$. Now, focus on an $i \in S$ with $\psi^*(\psi(i)) \neq i$. Then closing $i$ and connecting each client $j \in \sigma^{-1}(i)$ to $\psi^*(\sigma^*(j)) \neq i$ is not $\phi$-efficient. So, we have \begin{align*} \lambda f_i + \sum_{j \in \sigma^{-1}(i)} d(j,i) &\le \sum_{j \in \sigma^{-1}(i)}d(j, \psi^*(\sigma^*(j)) ) + \big|\sigma^{-1}(i)\big|\phi \\ & \le \sum_{j \in \sigma^{-1}(i)} [2d(j,\sigma^*(j)) + d(j,i)] + \big|\sigma^{-1}(i)\big|\phi.
\end{align*} To see the second inequality, we have $d(j, \psi^*(\sigma^*(j))) \leq d(j, \sigma^*(j)) + d(\sigma^*(j), \psi(\sigma^*(j))) \leq d(j, \sigma^*(j)) + d( \sigma^*(j), i) \leq 2d(j, \sigma^*(j)) + d(j, i)$. This implies \begin{align} \lambda f_i \leq 2\sum_{j \in \sigma^{-1}(i)} d(j,\sigma^*(j)) + \big|\sigma^{-1}(i)\big|\phi. \label{inequ:ufl-close} \end{align} Now, consider the inequality obtained by summing up \eqref{inequ:ufl-swap} for all pairs $(i, i^*)$ with $i^* = \psi(i)$ and $\psi^*(i^*) = i$, \eqref{inequ:ufl-close} for all $i$ with $\psi^*(\psi(i)) \neq i$, and \eqref{inequ:ufl-open} for all $i^*$ with $\psi^{-1}(i^*) = \emptyset$. This inequality will be $ \lambda f(S) + \mathsf{cc}(\sigma) \leq \lambda f(S^*) + 2\mathsf{cc}(\sigma^*) + \mathsf{cc}(\sigma) + 2|C|\phi$, which is \begin{align} \lambda f(S) \leq \lambda f(S^*) + 2\mathsf{cc}(\sigma^*) + 2|C|\phi. \label{inequ:ufl-F} \end{align} Summing up Inequalities~\eqref{inequ:ufl-C} and $1/\lambda$ times \eqref{inequ:ufl-F} gives $f(S) + \mathsf{cc}(\sigma) \leq (1+\lambda) f(S^*) + (1+2/\lambda)\left(\mathsf{cc}(\sigma^*) + |C|\phi\right) = \alpha_{\mathsf{FL}}\left(\mathsf{opt} + |C|\phi\right)$, since $1 + \lambda = 1+2/\lambda = 1+\sqrt{2}=\alpha_{\mathsf{FL}}$. This finishes the proof of Theorem~\ref{thm:FL-offline-apx-ratio}. \end{proof} \ufloperations* The theorem follows from the proof of Theorem~\ref{thm:FL-offline-apx-ratio}. Let $\phi = 0$ in the theorem statement and the proof. \eqref{inequ:ufl-C} and \eqref{inequ:ufl-F} were obtained by adding many of the inequalities of the form \eqref{inequ:ufl-open}, \eqref{inequ:ufl-swap} and \eqref{inequ:ufl-close}. Notice that each inequality corresponds to a local operation. In the setting of Theorem~\ref{thm:FL-offline-operations}, the inequalities do not hold anymore, since we no longer have the condition that $0$-efficient operations do not exist. However, for an inequality corresponding to an operation $\textrm{op}$, we can add $\nabla_\text{op}$ to the right side so that the inequality becomes satisfied. Then adding all the inequalities that were used to obtain \eqref{inequ:ufl-C}, we obtain \begin{align*} \mathsf{cc}(\sigma) \leq \lambda f(S^*) + \mathsf{cc}(\sigma^*) + \sum_{\textrm{op} \in {\mathcal{P}}_{\mathrm{C}}} \nabla_{\textrm{op}} \end{align*} where ${\mathcal{P}}_{\mathrm{C}}$ is the set of operations corresponding to the inequalities. Similarly we can obtain a set ${\mathcal{P}}_{\mathrm{F}}$ of operations such that \begin{align*} \lambda f(S) \leq \lambda f(S^*) + 2\mathsf{cc}(\sigma^*) + \sum_{\textrm{op} \in {\mathcal{P}}_{\mathrm{F}}} \nabla_{\textrm{op}}. \end{align*} It is easy to check that each of ${\mathcal{P}}_{\mathrm{C}}$ and ${\mathcal{P}}_{\mathrm{F}}$ contains at most one operation that opens or swaps in $i^*$ for every $i^* \in S^* \subseteq F$, and does not contain operations that open or swap in facilities outside $S^*$. ${\mathcal{P}}_{\mathrm{C}} \uplus {\mathcal{P}}_{\mathrm{F}}$ contains at most $|S| \leq |F|$ close operations. Rewriting the two inequalities almost gives us Theorem~\ref{thm:FL-offline-operations}, except for the requirement that each $\textrm{op} \in {\mathcal{P}}_{\mathrm{C}} \cup {\mathcal{P}}_{\mathrm{F}}$ has $\nabla_{\mathrm{op}} > 0$; this can be ensured by removing $\textrm{op}$'s with $\nabla_{\textrm{op}} \leq 0$ from ${\mathcal{P}}_{\mathrm{C}}$ and ${\mathcal{P}}_{\mathrm{F}}$. \section{Proofs of Useful Lemmas} \label{appendix:helper-proofs} \helpersumba* \begin{proof} Define $a_{T+1} = +\infty$.
\begin{align*} \sum_{t = 1}^T \frac{b_t}{a_t} &= \sum_{t = 1}^T \frac{B_t - B_{t-1}}{ a_{t}}=\sum_{t = 1}^{T} B_t \left(\frac{1}{a_t} - \frac{1}{a_{t+1}}\right) = \sum_{t = 1}^{T}\frac{B_t}{a_t} \left(1 - \frac{a_t}{a_{t+1}}\right) \leq \alpha \sum_{t = 1}^{T}\left(1 - \frac{a_{t}}{a_{t+1}}\right)\\ &=\alpha T- \alpha\sum_{t = 1}^{T-1}\frac{a_t}{a_{t+1}} \leq \alpha T- \alpha(T-1)\Big(\frac{a_1}{a_T}\Big)^{1/(T-1)} \\ &= \alpha(T-1)\left(1-e^{-\ln\frac{a_T}{a_1}/(T-1)}\right) + \alpha \leq \alpha(T-1)\ln\frac{a_T}{a_1}/(T-1) + \alpha = \alpha\left(\ln \frac{a_T}{a_1}+1\right). \end{align*} The inequality in the second line used the following fact: if the product of $T-1$ positive numbers is $\frac{a_1}{a_T}$, then their sum is minimized when they are equal. The inequality in the third line used that $1-e^{-x} \leq x$ for every $x$. \end{proof} \helperstar* \begin{proof} By the conditions in the lemma, opening facility $i$ and reconnecting $\tilde C$ to $i $ is not $\phi$-efficient. This gives that at the moment, we have \[ \sum_{\tilde j \in \tilde C}d(\tilde j, S) \leq \sum_{\tilde j \in \tilde C}d(\tilde j, \sigma_{\tilde j}) \leq f_i + \sum_{\tilde j \in \tilde C} d(i , \tilde j) + |\tilde C|\cdot \phi \] By triangle inequalities we have $ d(\tilde j, S) \ge d(i, S) - d(i, \tilde j )$ for every $\tilde j \in \tilde C$. Combining with the previous inequality yields: \begin{align*} d(i, S) \le \frac{1}{|\tilde C|}\sum_{\tilde j \in \tilde C}\left(d(\tilde j, S) + d(i, \tilde j)\right) \leq \frac{ f_i + 2\sum_{\tilde j \in \tilde C} d(i, \tilde j) }{|\tilde C|}+ \phi. \hspace*{80pt} \qedhere \end{align*} \end{proof} \section{Moving Clients to Facility Locations} \label{appendix:moving-clients} In this section we show that by moving clients to their nearest facilities, we lose a multiplicative factor of $2$ and an additive factor of $1$ in the approximation. That is, an $\alpha$ approximate solution for the new instance, is $2\alpha+1$ approximate for the original instance. Throughout this section, we simply use the set of open facilities to define a solution and all clients are connected to their respective nearest open facilities. Let a facility location instance be given by $F, (f_j)_{j \in C}, C$ and $d$. Let $\psi_j$ be the nearest facility in $F$ to $j$ for every $j \in C$. By moving all clients $j$ to $\psi_j$, we obtain a new instance. Let $S^*$ be the optimum solution to the original instance. Suppose we have an solution $S$ for the new instance that is $\alpha$-approximate solution. Thus $f(S) + \sum_{j \in C}d(\psi_j, S) \leq \alpha\left(f(S^*) + \sum_{j \in C}d(\psi_j, S^*)\right)$. We show that $S$ is $2\alpha+1$ approximate for the original instance. Notice that for every $j \in C$, we have $d(j, S) - d(j, \psi_j ) \leq d(\psi_j, S) \leq d(j, S) + d(j, \psi_j)$ by triangle inequalities. \begin{align*} f(S) + \sum_{ j \in C} d(j, S) &\leq f(S) + \sum_{ j \in C} \left(d(\psi_j, S) + d(j, \psi_j)\right) \\ &\leq \alpha\left(f(S^*) + \sum_{j \in C}d(\psi_j, S^*)\right) + \sum_{j \in C} d(j, \psi_j) \end{align*} For every $j \in C$, since $\psi_j$ is the nearest facility in $F$ to $j$, we have $d(\psi_j, S^*) \leq d(j, \psi_j) + d(j, S^*) \leq 2d(j, S^*)$. Thus, we have \begin{align*} f(S) + \sum_{ j \in C} d(j, S) &\leq \alpha f(S^*) + 2\alpha\sum_{j \in C}d(j, S^*) + \sum_{j \in C} d(j, \psi_j)\\ &\leq \alpha f(S^*) + (2\alpha + 1)\sum_{j \in C}d(j, S^*). \end{align*} Thus, we have that $S$ is a $(2\alpha+1)$-approximate solution for the original instance. 
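To make the reduction concrete, the following self-contained Python sketch (our own illustration; the helper names are not from the paper) collocates every client with its nearest facility and compares the cost of a fixed solution on the original and on the modified instance.

\begin{verbatim}
# Illustration only (ours, not part of the paper): move every client to its
# nearest facility location and compare a fixed solution's cost on the
# original instance and on the modified one.

def nearest(j, facilities, d):
    """Facility in `facilities` closest to point j under the metric d."""
    return min(facilities, key=lambda i: d(j, i))

def cost(S, clients, f, d):
    """Facility-location cost: opening cost of S plus connection cost."""
    return sum(f[i] for i in S) + sum(min(d(j, i) for i in S) for j in clients)

def move_clients(clients, facilities, d):
    """Collocate every client with its nearest facility (the new instance)."""
    return [nearest(j, facilities, d) for j in clients]

if __name__ == "__main__":
    F = [0, 10]                      # facility locations on a line
    f = {0: 3.0, 10: 3.0}            # opening costs
    C = [1, 2, 9]                    # clients
    d = lambda x, y: abs(x - y)      # line metric
    C_moved = move_clients(C, F, d)  # -> [0, 0, 10]
    S = [0]                          # a solution for the new instance
    print(cost(S, C_moved, f, d), cost(S, C, f, d))   # 13.0 15.0
\end{verbatim}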
\section{Missing Proofs from Section~\ref{sec:fast-UFL}} \samplelocalsearch* \begin{proof} We are going to lower bound the expected value of $\mathsf{cost}_\lambda(S^0, \sigma^0) - \mathsf{cost}_\lambda(S^1, \sigma^1)$. By Theorem~\ref{thm:FL-offline-operations}, there are two sets ${\mathcal{P}}_{\mathrm{C}}$ and ${\mathcal{P}}_{\mathrm{F}}$ of local operations satisfying the properties. Below, we let ${\mathcal{Q}}$ be one of the following three sets: ${\mathcal{P}}_{\mathrm{C}}$, or ${\mathcal{P}}_{\mathrm{F}}$, or ${\mathcal{P}}_{\mathrm{C}} \biguplus {\mathcal{P}}_{\mathrm{F}}$. For every $i \in F$, let ${\mathcal{Q}}_{i}$ be the set of operations in ${\mathcal{Q}}$ that open or swap in $i$. Let ${\mathcal{Q}}_{\emptyset}$ be the set of $\mathsf{close}$ operations in ${\mathcal{Q}}$. Let $\Phi_i$ be maximum of $\nabla_{\mathsf{op}}$ over all $\mathsf{op} \in {\mathcal{Q}}_{i}$ (define $\Phi_i = 0$ if ${\mathcal{Q}}_{i} = \emptyset$); define $\Phi_\emptyset$ similarly. Notice that if $i \in S$ then open $i$ will not decrease the cost since we maintain that all the clients are connected to their nearest open facilities. Thus, ${\mathcal{Q}}_{i} = \emptyset$ for $i \in S$. Then, conditioned on that we consider $\mathsf{close}$ operations in $\mathsf{sampled\mhyphen local\mhyphen search}$, the cost decrement of the iteration is at least $\Phi_\emptyset$. Conditioned on that we consider opening or swapping in $i$ in the iteration, the decrement is at least $\Phi_i$. Thus, $\mathsf{cost}_\lambda(S^0, \sigma^0) - \E[\mathsf{cost}_\lambda(S^1, \sigma^1)] \geq \frac{\Phi_\emptyset}{3} + \sum_{i \in F\setminus S}\frac{2\Phi_i}{3|F \setminus S|}$. Therefore, \begin{align*} \sum_{\mathsf{op} \in {\mathcal{Q}}}\nabla_{\mathsf{op}} &\leq |{\mathcal{Q}}_\emptyset|\Phi_\emptyset + \sum_{i \in F \setminus S}|{\mathcal{Q}}_i|\Phi_i \leq |F|\Phi_\emptyset + 2\sum_{i \in F \setminus S}\Phi_i \\ &\leq 3 |F|(\mathsf{cost}_\lambda(S^0, \sigma^0) - \E[\mathsf{cost}_\lambda(S^1, \sigma^1)]), \end{align*} since the third and fourth properties in the theorem imply $|{\mathcal{Q}}_\emptyset| \leq |F|$ and $|{\mathcal{Q}}_i| \leq 2$ for every $i \in F \setminus S$. Replacing ${\mathcal{Q}}$ with each of ${\mathcal{P}}_{\mathrm{C}}$, ${\mathcal{P}}_{\mathrm{F}}$ and ${\mathcal{P}}_{\mathrm{C}} \biguplus {\mathcal{P}}_{\mathrm{F}}$, we obtain \begin{align*} \mathsf{cost}_\lambda(S^0, \sigma^0) - \E[\mathsf{cost}_\lambda(S^1, \sigma^1)] \geq \frac1{3 |F|}\max\left\{ \begin{array}{c} \mathsf{cc}(\sigma^0) - (\lambda f(S^*) + \mathsf{cc}(\sigma^*))\\ \lambda f(S) - (\lambda f(S^*) + 2\mathsf{cc}(\sigma^*))\\ \mathsf{cost}_\lambda(S^0, \sigma^0) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*)) \end{array} \right\}. \end{align*} This finishes the proof of the lemma. \end{proof} \fliterate* \begin{proof} We break the procedure in two stages. The first stage contains $M_1 = O\left( |F|\log\frac{\Gamma}{\epsilon'}\right)$ iterations of the for-loop in $\mathsf{FL\mhyphen iterate}(M)$, where $M_1$ is sufficiently large. 
Applying Lemma~\ref{lemma:sample-local-search} and using the third term in the $\max$ operator, for any execution of $\mathsf{sampled\mhyphen local\mhyphen search}$, we have \begin{align*} &\quad \E\big[\big(\mathsf{cost}_\lambda(S^1, \sigma^1) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*))\big)_+\big] \\ &\leq \left(1- \frac{1}{3 |F|}\right)\big(\mathsf{cost}_\lambda(S^0, \sigma^0) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*))\big)_+, \end{align*} where $(S^0, \sigma^0)$ and $(S^1, \sigma^1)$ are as defined w.r.t the execution, and $x_+$ is defined as $\max\{x ,0\}$ for every real number $x$. Notice that when $\mathsf{cost}_\lambda(S^0, \sigma^0) \leq 2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*)$, the inequality holds trivially. Truncating at $0$ is needed later when we apply the Markov inequality. So, after $M_1$ iterations, we have \begin{align*} &\quad \E\big[\big(\mathsf{cost}_\lambda(S, \sigma) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*))\big)_+\big] \\ &\leq \left(1- \frac{1}{3 |F|}\right)^{M_1}\big(\mathsf{cost}_\lambda(S^\circ, \sigma^\circ) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*))\big)_+\leq \frac{\epsilon'}{2\Gamma}\mathsf{opt}. \end{align*} The second inequality holds since $\mathsf{cost}_\lambda(S^\circ, \sigma^\circ) \leq \lambda\mathsf{cost}(S^\circ, \sigma^\circ) \leq O(1)\mathsf{opt}$ and $M = O\left(\frac{|F|}{\epsilon'}\log \Gamma\right)$ is sufficiently large. Using Markov's inequality, with probability at least $1-\frac{1}{2\Gamma}$, we have at the end of the first stage, $$(\mathsf{cost}_\lambda(S, \sigma) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*)))_+ \leq \epsilon'\cdot\mathsf{opt}.$$ If the event happens, we say the first stage is successful. We assume the first stage is successful and analyze the second stage. The second stage contains $\log_2(2\Gamma)$ phases, and each phase contains $\frac{48 |F|}{\epsilon'}$ iterations. We focus on one phase in the stage. Assume that at the beginning of an iteration in the phase, we have \begin{align*} \mathsf{cc}(\sigma) \leq \big(\lambda + \frac{\epsilon'}2\big) f(S^*) + \big(1+\frac{\epsilon'}2\big)\mathsf{cc}(\sigma^*) \text{ and } \lambda f(S) \leq \big(\lambda + \frac{\lambda\epsilon'}2\big) f(S^*) + \big(2+\frac{\lambda\epsilon'}2\big)\mathsf{cc}(\sigma^*). \end{align*} Then at the moment, we have $\mathsf{cost}(S, \sigma) \leq (1 + \lambda + \epsilon')f(S^*) + (1+2/\lambda + \epsilon')\mathsf{cc}(\sigma^*) = (\alpha_{\mathsf{FL}} + \epsilon')\mathsf{opt}$ (obtained by adding the first inequality and $1/\lambda$ times the second inequality). Then we must have $\mathsf{cost}(S^{\mathsf{best}}, \sigma^{\mathsf{best}}) \leq (\alpha_{\mathsf{FL}} + \epsilon')\mathsf{opt}$ in the end of this execution of $\mathsf{FL\mhyphen iterate}$ since $(S^{\mathsf{best}}, \sigma^{\mathsf{best}})$ is the best solution according to the original (i.e, non-scaled) cost. Thus, we say a phase in the second stage is successful if both inequalities hold at the end of some iteration in the phase; then we can pretend that the phase ends at the moment it is successful. If one of the two inequalities does not hold at the end of an iteration, then by Lemma~\ref{lemma:sample-local-search}, for the execution of $\mathsf{sampled\mhyphen local\mhyphen search}$ in the next iteration, we have $\mathsf{cost}_\lambda(S^0, \sigma^0) - \E[\mathsf{cost}_{\lambda}(S^1, \sigma^1)] \geq \frac{\epsilon'}{6 |F|}(f(S^*) + \mathsf{cc}(\sigma^*)) = \frac{\epsilon'}{6 |F|}\mathsf{opt}$. 
Then, by a standard martingale stopping-time argument, in expectation, the phase stops in at most $\frac{24 |F|}{\epsilon'}$ iterations since at the beginning of the phase we have $\mathsf{cost}_\lambda(S, \sigma) \leq \max\{3+\epsilon', 2\lambda+\epsilon'\}(f(S^*) + \mathsf{cc}(\sigma^*)) \leq 4\cdot\mathsf{opt}$ and $\mathsf{cost}_\lambda(S, \sigma)$ is always positive. By Markov's inequality, the probability that the phase does not stop early (i.e., is not successful) is at most $1/2$. The probability that the second stage succeeds, i.e., that at least one of its phases succeeds, is at least $1-1/(2\Gamma)$. Thus with probability at least $1-1/\Gamma$, both stages succeed and we have $\mathsf{cost}(S^{\mathrm{best}}, \sigma^{\mathrm{best}}) \leq (\alpha_{\mathsf{FL}} + \epsilon')\mathsf{opt}$. The number of iterations we need in the two stages is $O\left(\frac{ |F|}{\epsilon'}\log \Gamma\right)$. \end{proof} \section{Open Problems and Discussions} \label{sec:discussions} We initiated the study of the facility location problem in general metric spaces in the recourse and dynamic models. Several interesting problems remain open. The most obvious one is whether we can get $O(1)$-competitive online/dynamic algorithms with polylogarithmic amortized recourse or fast update times in the fully dynamic setting. Another interesting direction is whether we can extend our results to capacitated facility location and capacitated $k$-median, where there is an upper bound on the number of clients that can be assigned to a single open facility. From a technical point of view, it would be interesting to find more applications of local search and probabilistic tree embedding techniques in the dynamic algorithms model. Finally, as alluded to in the introduction, an exciting research direction is to understand the power of recourse in the online model. \section{Introduction} \label{sec:intro} In the {\em (uncapacitated) facility location problem}, we are given a metric space $(F \cup C,d)$, where $F$ is the set of facility locations, $C$ is the set of clients, and $d: (F \cup C) \times (F \cup C) \rightarrow {\mathbb{R}}_{\geq 0}$ is a distance function, which is non-negative, symmetric and satisfies triangle inequalities. For each location $i \in F$, there is a facility opening cost $f_i \geq 0$. The goal is to open a subset $S \subseteq F$ of facilities so as to minimize the cost of opening the facilities plus the connection cost. The cost of connecting a client $j$ to an open facility $i$ is equal to $d(j,i)$. Hence, the objective function can be expressed concisely as $\min_{S\subseteq F} \left(f(S) + \sum_{j \in C}d(j, S)\right)$, where for a set $S \subseteq F$, $f(S) := \sum_{i \in S}f_i$ is the total facility cost of $S$ and $d(j, S):=\min_{i \in S}d(j, i)$ denotes the distance of $j$ to the nearest location in $S$. The facility location problem arises in countless applications: in the placement of servers in data centers, network design, wireless networking, data clustering, location analysis for placement of fire stations, medical centers, and so on. Hence, the problem has been studied extensively in many different communities: approximation algorithms, operations research, and computational geometry. In the approximation algorithms literature in particular, the problem occupies a prominent position, as the development of every major technique in the field is tied to its application to the facility location problem. See the textbook by Williamson and Shmoys \cite{Williamson} for more details.
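To make the objective above concrete, the following small Python sketch (ours, purely illustrative) evaluates $f(S) + \sum_{j \in C} d(j, S)$ for every nonempty $S \subseteq F$ on a toy line instance and reports a minimizer; of course, none of the algorithms in this paper enumerate all subsets.

\begin{verbatim}
# Toy illustration (ours): evaluate f(S) + sum_j d(j, S) for every nonempty
# subset S of F on a small line instance and report the best subset.
# Exhaustive enumeration is exponential; it is used here only to make the
# objective function concrete.
from itertools import combinations

def cost(S, clients, f, d):
    return sum(f[i] for i in S) + sum(min(d(j, i) for i in S) for j in clients)

F = [0, 5, 10]                       # facility locations
f = {0: 2.0, 5: 6.0, 10: 2.0}        # opening costs
C = [1, 4, 6, 9]                     # clients
d = lambda x, y: abs(x - y)          # line metric

best = min(((cost(set(S), C, f, d), sorted(S))
            for r in range(1, len(F) + 1)
            for S in combinations(F, r)), key=lambda t: t[0])
print(best)    # (14.0, [0, 10])
\end{verbatim}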
The problem is hard to approximate to a factor better than 1.463 \cite{Guha1998}. The current best-known polynomial-time algorithm is given by the third author, and achieves a 1.488-approximation \cite{Li13}. In many real-world applications the set of clients arrives online, the metric space can change over time, and there can be memory constraints. This has motivated the problem to be studied in various models: online \cite{Meyerson2001,Fotakis08algorithmica,Anagnostopoulos2004,Fotakis2007}, dynamic \cite{Cohen-Addad19, Goranci19,CyganCMS18,Wesolowsky:1973,Farahani2009, Eisenstat,AnNS15}, incremental \cite{Fotakis06,Charikar1997,Fotakis2011}, streaming \cite{Indyk2004, Fotakis11, Lammersen, Czumaj2013, Charikar1997}, game theoretic \cite{Vetta2002,FotakisT13,FotakisT13a}, to name a few. This paper is concerned with {\em online and dynamic models}. Thus, to keep the flow of presentation linear, we restrict ourselves to the results in these two models here. \medskip Motivated by its applications in network design and data clustering, Meyerson \cite{Meyerson2001} initiated the study of the facility location problem in the online setting. Here, clients arrive online one by one, and the algorithm has to assign the newly arriving client to an already opened facility or needs to open a new facility to serve the request. The decisions made by the algorithm are {\em irrevocable}, in the sense that a facility that is opened cannot be closed and the clients cannot be reassigned. In the online setting, Meyerson \cite{Meyerson2001} designed a very elegant randomized algorithm that achieves an $O(\log n)$ competitive ratio, and also showed that no online algorithm can obtain an $O(1)$ competitive ratio. This result was later extended by Fotakis \cite{Fotakis08algorithmica} to obtain an {\em asymptotically optimal} $O(\log n/\log \log n)$-competitive algorithm. Both the algorithms and analysis techniques in \cite{Fotakis08algorithmica, Meyerson2001} were influential, and found many applications in other models such as streaming \cite{Fotakis2011}. The lower bound in Fotakis \cite{Fotakis08algorithmica} holds even in very special metric spaces such as HSTs or the real line. Since then, several online algorithms have been designed achieving the same competitive ratio with more desirable properties, such as being deterministic \cite{Anagnostopoulos2004}, primal-dual \cite{Fotakis2007}, or having a small memory footprint \cite{Fotakis11}. We refer to a beautifully written survey by Fotakis \cite{Fotakis2011} for more details. The main reason to assume that decisions made by an algorithm are irrevocable is that changing the solution is expensive in some applications. However, if one examines the above applications closely, say for example connecting clients to servers in data centers, it is more natural to assume that decisions need not be irrevocable but the algorithm {\em should not change the solution too much}. This is even more true in modern data centers where topologies can be reconfigured; see \cite{GhobadiMPDKRBRG16} for more details. A standard way of quantifying the restriction that an online algorithm does not make too many changes is using the notion of {\em recourse}. The recourse per step of an online algorithm is the {\em number of changes} it makes to the solution.
Recourse captures the {\em minimal} amount of changes an online algorithm {\em has to make} to maintain a desired competitive ratio due to the {\em information theoretic} limits. For the facility location problem, depending on the application, the recourse can correspond to: 1) the number of changes made to the opened facilities (called \emph{facility recourse}), or 2) the number of reconnections made to the clients (called \emph{client recourse}). Notice that we can assume that for every facility we open or close, we have to connect or disconnect at least one client; thus the client recourse is at least the facility recourse. In the clustering applications arising in massive data sets, the opened facilities represent cluster centers, which serve as summaries of the data. Here one is interested in making sure that summaries do not change too frequently as more documents are added online. Therefore, facility recourse is a good approximation to the actual cost of changing the solution \cite{Charikar1997,Fotakis06}. On the other hand, in network design problems, client recourse is the true indicator of the cost to implement the changes in the solution. As a concrete example, consider the problem of connecting clients to servers in data centers, which was one of the main motivations for Meyerson \cite{Meyerson2001} to initiate the study of the online facility location problem. Here, it is important that one does not reconnect clients to servers too many times, as such changes can incur significant costs both in terms of disruption of service and the labor cost. Consider another scenario where a retailing company tries to maintain stores to serve a dynamically changing set of clients. As the clients change so frequently, it would be infeasible to build or shut down even one store for every new client. In this application, a small client recourse per step is desirable, as it automatically forbids frequent changes of store locations. In this light, a natural question that arises is: \vspace{2mm} {\em Is it possible to maintain a constant approximation for the facility location problem if we require that the facility and client recourse is small?} \vspace{2mm} Our first main result shows that indeed this is possible. In the following theorems, we use $n$ to denote the total number of facility locations and all clients that ever arrive, and $D$ to denote the diameter of the metric $d$ (assuming all distances are integers). \begin{theorem} \label{UFL-recourse} There is a deterministic online algorithm for the facility location problem that achieves a competitive ratio of $(1+\sqrt{2} + \epsilon)$ with $O\left(\frac{\log n}{\epsilon}\log\frac1\epsilon\right)$ amortized facility and client recourse against an adaptive adversary. \end{theorem} Our algorithm behind the above theorem differs from the previous approaches used in the context of online variants of the facility location problem, and is based on {\em local search}. The local search algorithm is one of the most widely used algorithms for the facility location problem in practice and is known to achieve an approximation factor of $(1+\sqrt 2)$ in the offline setting.
See the influential paper by Arya {\em et al} \cite{AryaGKMP01} and a survey by Munagala \cite{Munagala16}. Thus our result matches the best known approximation ratio for offline facility location using local search. Further, our result shows that the local search algorithm, augmented with some small modifications, is inherently {\em stable}, as it does not make too many changes to the solution even if clients are added in an online fashion. This gives further justification for its popularity among practitioners. Prior to Theorem \ref{UFL-recourse}, the known results \cite{Fotakis06, Diveki2011,Fotakis11} needed one or more of these assumptions: 1) the facility costs are {\em the same}, 2) we are interested in knowing only the cost of the solution, 3) we are interested only in bounding the {\em facility recourse}. In particular, there was no known algorithm that bounds the client recourse, which is an important consideration in many applications mentioned above. Moreover, our algorithm also achieves a better approximation factor; the previously best known algorithm for the facility location problem achieved a competitive ratio of 48 \cite{Fotakis2011}. Our result in the recourse setting for the facility location problem should be contrasted with the similar results shown recently for online Steiner tree \cite{Gupta015}, set cover \cite{GuptaK0P17}, scheduling \cite{GuptaKS14}, and matchings and flows \cite{BernsteinHR19,GuptaKS14}. Moreover, these results also raise intriguing questions: {\em is a polylogarithmic amount of recourse enough to beat information theoretic lower bounds in online algorithms? Is recourse as or more powerful than randomization?} \medskip While having a small client recourse is enough in data center applications, it is not enough in some others. Take wireless networks as a concrete example. Here, the set of clients (mobile devices) keeps changing over time, and it is necessary to {\em update} the assignment of clients to facilities as {\em quickly} as possible so as to minimize the service disruption. These applications motivated Cygan {\em et~al}~\cite{CyganCMS18}, Goranci {\em et~al}~\cite{Goranci19} and Cohen-Addad {\em et~al}~\cite{Cohen-Addad19} to study the facility location problem in the framework of {\em dynamic algorithms}. The dynamic model of \cite{CyganCMS18} and \cite{Cohen-Addad19} is different from what we study here, so we discuss it at the end of this section. The dynamic facility location problem is similar to the one in the online setting, except that at each time step either {\em a new client arrives or an existing client departs}. The goal is to always maintain a solution that is a constant factor approximation to the optimal solution, while minimizing {\em the total time spent in updating the solution.} We emphasize that we require our dynamic algorithms to maintain {\em an actual assignment of clients to facilities}, not just the set of open facilities and an estimate of the connection cost. This is important for the applications mentioned above. This setting was considered in \cite{Goranci19}, who showed that for metric spaces with {\em doubling dimension $\kappa$}, there is a deterministic fully dynamic algorithm with $\tilde O(2^{\kappa^2})$ update time, which maintains a constant approximation. However, for more general metric spaces no results were known in the dynamic setting, and we give the first results. First we consider the incremental setting, where clients only arrive and never depart.
\begin{theorem} \label{UFL-dynamicIncremental} In the incremental setting against an adaptive adversary, there is a randomized dynamic algorithm for the facility location problem that, with probability at least $1-1/n^2$, maintains an approximation factor of $(1+\sqrt{2} + \epsilon)$ and has \emph{total} update time of $O(\frac{n^2}{\epsilon^2}\log^3n\log\frac1\epsilon)$. \end{theorem} Note that it takes $\Theta(n|F|)$ space to specify the input in our model (see Section~\ref{subsec:specify-input}). Hence the running time of our algorithms is almost optimal up to polylog factors when $|F| = \Omega(n)$. The proof of the above theorem uses randomized local search and builds on our result in the recourse setting. We use randomization to convert the recourse bound into an update time bound. Further, our analysis of the above theorem also implies that one can obtain $O(\frac{n|F|}{\epsilon^2}\log^3n\log\frac1{\epsilon})$ running time by losing $O(1)$ factors in the approximation ratio; see the remark at the end of Section \ref{sec:dfl}. \medskip Next we study the fully dynamic setting. Here, we first consider an important class of metric spaces called hierarchically well separated tree (HST) metrics \cite{Bartal96}; see Definition~\ref{def:HST} for the formal definition, and Section~\ref{subsec:specify-input} for more details about how the input sequence is given. For HST metric spaces, we show the following result. \begin{theorem} \label{UFL-HST} In the fully dynamic setting against adaptive adversaries, there is a deterministic algorithm for the facility location problem that achieves an $O(1)$ approximation factor with $O(|F|)$ preprocessing time and $O(n\log^3 D)$ total update time for HST metric spaces. \end{theorem} A seminal result by Bartal \cite{Bartal96}, which was later tightened by Fakcharoenphol, Rao and Talwar \cite{Fakcharoenphol2003}, shows that any $N$-point metric space can be embedded into a distribution over HSTs such that the expected distortion is at most $O(\log N)$, which is also tight. Moreover, such a probabilistic embedding can also be computed in $O(N^2\log N)$ time; see recent results by Blelloch, Gu and Sun for details \cite{Blelloch0S17}. These results immediately imply the following theorem, provided the input is specified as in Section~\ref{subsec:specify-input}. \begin{theorem} \label{UFL-fullydynamic} In the fully dynamic setting against an oblivious adversary, there is a randomized algorithm for the facility location problem that maintains an approximation factor of $O(\log |F|)$ with preprocessing time $O(|F|^2\log |F|)$ and $O(n\log^3 D)$ total update time. The approximation guarantee holds only in expectation for every time step of the algorithm.
\end{theorem} Observe that, unlike in the incremental setting, the above theorem holds only in the oblivious adversary model, as probabilistic embedding techniques preserve distances only in expectation, as can be seen by taking a cycle on $n$ points. Our result also shows that probabilistic tree embeddings using HSTs can be a very useful technique in the design of dynamic algorithms, similar to their role in online algorithms \cite{Bartal96, BartalBBT97, Umboh15, BubeckCLLM18}. \medskip Our algorithms in Theorems \ref{UFL-HST} and \ref{UFL-fullydynamic} in the fully dynamic setting also have the nice property that the amortized client and facility {\em recourse} is $O(\log^3D)$ (in fact, we can achieve a slightly better bound of $O(\log^2 D)$ as can be seen from the analysis). This holds as our dynamic algorithms maintain the entire assignment of clients to facilities {\em explicitly} in memory at every time step. Thus, the amortized number of client reconnections is at most the amortized update time. This is useful when one considers an online setting where clients arrive and depart, and is interested in small client recourse. A fully dynamic online model of the facility location problem, where clients arrive and \emph{depart}, was recently studied by Cygan {\em et~al}~\cite{CyganCMS18} and Cohen-Addad {\em et~al}~\cite{Cohen-Addad19}, but with a different assumption on recourse. In this model, when a client arrives, the algorithm has to assign it to an open facility immediately; upon the departure of a client, if a facility was opened at the same location, then the clients that were assigned to that location should be reassigned immediately and irrevocably. Cygan {\em et~al}~\cite{CyganCMS18} studied the case when recourse is \emph{not} allowed: they showed that a delicate extension of Meyerson's \cite{Meyerson2001} algorithm obtains an asymptotically tight competitive ratio of $O(\log n /\log \log n)$. Cohen-Addad {\em et~al}~\cite{Cohen-Addad19} later showed that this can be improved to $O(1)$ if recourse is allowed. However, both results hold only for {\em uniform facility costs}, and Cygan {\em et~al}~\cite{CyganCMS18} even showed an {\em unbounded} lower bound for the non-uniform facility cost case in their model. Moreover, in their model reconnections of clients are assumed to be ``automatic'' and do not count towards the client recourse; it is not clear how many client reconnections their algorithm will make. \subsection{Our Techniques} Our main algorithmic technique for proving Theorems~\ref{UFL-recourse} and \ref{UFL-dynamicIncremental} is local search, one of the most powerful algorithm design paradigms. Indeed, for both results, the competitive (approximation) ratio we achieve is $1+\sqrt{2}+\epsilon$, which matches the best approximation ratio for offline facility location obtained using local search \cite{AryaGKMP01}. Both of our results are based on the following key lemma. Suppose we maintain locally optimal solutions at every time step in our algorithm. When a new client $j_t$ arrives at time $t$, we add it to our solution using a simple operation, and let $\Delta_t$ be the increase of our cost due to the arrival of $j_t$. The key lemma states that the sum of the $\Delta_t$ values over the first $T'$ time steps can be bounded in terms of the optimum cost at time $T'$. With a simple modification to the local search algorithm, in which we require that each local operation decreases enough cost for every client it reconnects, one can bound the total client recourse.
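The following Python sketch (our own, simplified illustration of the description above; the helper names are not from the paper) shows the shape of this online insertion step: an arriving client is first connected to its nearest open facility, after which $\phi$-efficient moves are applied. For brevity only $\mathsf{open}$ moves are searched here, whereas the actual algorithm also considers $\mathsf{close}$ and $\mathsf{swap}$ moves.

\begin{verbatim}
# Schematic sketch (ours): online insertion with phi-efficient local moves.
# Only "open" moves are searched; the real algorithm also tries close/swap.
import math

LAMBDA = math.sqrt(2)   # scaling of facility costs used by the local search

def scaled_cost(S, sigma, f, d):
    return LAMBDA * sum(f[i] for i in S) + sum(d(j, i) for j, i in sigma.items())

def open_move(i, S, sigma, f, d):
    """Open i and reconnect every client that gets strictly closer."""
    new_sigma = {j: (i if d(j, i) < d(j, sigma[j]) else sigma[j]) for j in sigma}
    reconnections = sum(1 for j in sigma if new_sigma[j] != sigma[j])
    gain = scaled_cost(S, sigma, f, d) - scaled_cost(S | {i}, new_sigma, f, d)
    return gain, reconnections, new_sigma

def insert_client(j, S, sigma, F, f, d, phi):
    if not S:                                  # first client: open its nearest facility
        S.add(min(F, key=lambda i: d(j, i)))
    sigma[j] = min(S, key=lambda i: d(j, i))   # simple arrival step
    improved = True
    while improved:                            # apply phi-efficient open moves
        improved = False
        for i in F:
            gain, rec, new_sigma = open_move(i, S, sigma, f, d)
            if rec > 0 and gain > phi * rec:
                S.add(i)
                sigma.update(new_sigma)
                improved = True
    return S, sigma

if __name__ == "__main__":
    F = [0, 5, 10]
    f = {0: 4.0, 5: 4.0, 10: 4.0}
    d = lambda x, y: abs(x - y)
    S, sigma = set(), {}
    for j in [1, 2, 9, 11]:
        S, sigma = insert_client(j, S, sigma, F, f, d, phi=0.5)
    print(sorted(S), sigma)   # e.g. [0, 10] {1: 0, 2: 0, 9: 10, 11: 10}
\end{verbatim}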
The straightforward way to implement the local search algorithm takes time $\Omega(n^3)$. To derive a better running time, we leverage the randomized local search idea of Charikar and Guha \cite{CharikarGhua2005}. At every iteration, we randomly choose a facility $i$ or a closing operation, and then perform the best operation that opens or swaps in $i$, or the best operation that closes a facility, if that is what we chose. By restricting the facility $i$ and with the help of the heap data structure, an iteration of the algorithm can be implemented in time $O(|C|\log |F|)$. As in \cite{CharikarGhua2005}, we can also show that each iteration makes reasonable progress in expectation, leading to a bound of $\tilde O(|F|)$ on the number of iterations needed for the algorithm to succeed with high probability. We remark that the algorithm in \cite{CharikarGhua2005} used a different local search framework. Therefore, our result shows that the classic algorithm of \cite{AryaGKMP01} can also be made fast. However, directly replacing the deterministic local search procedure with the randomized one does not work: the solution at the end of each time step might not be a local optimum, as we did not enumerate all possible local operations. Thus the key lemma does not hold anymore. Nevertheless, we show that applying a few local operations around $j_t$ upon its arrival can address the issue. With the key lemma, one can bound the number of times we perform the iterative randomized local search procedure, and thus the overall running time. Our proof of Theorem~\ref{UFL-HST} is based on a generalization of the greedy algorithm for facility location on HST metrics, which was developed in \cite{EsencayiGLW19} in the context of {\em differential privacy}, but only for the case of {\em uniform} facility cost. The intuition of the algorithm is as follows: if for some vertex $v$ of the HST $T$, the number of clients in the tree $T_v$ (the sub-tree of $T$ rooted at $v$) times the length of the parent edge of $v$ is large compared to the cost of the cheapest facility in $T_v$, then we should open that facility. Otherwise, we should not open it and should let the clients in $T_v$ be connected to the outside of $T_v$ through the parent edge. This intuition can be made formal: we mark $v$ in the former case; then simply opening the cheapest facility in $T_v$ for all \emph{lowest marked} vertices $v$ leads to a constant approximation for facility location. The above offline algorithm leads to a \emph{dynamic data structure} that maintains $O(1)$-approximate solutions, supports insertion and deletion of clients, and reports the connecting facility of a client in $O(\log D)$ time. This is the case since each time a client arrives or departs, only its ancestors will be affected. However, in a dynamic algorithm setting, we need to maintain the assignment vector in memory, so that when the connecting facility of a client changes, the client can be notified. This requires the number of reconnections made by our algorithm to be small. To achieve this goal, we introduce two constants for each $v$ that are used when deciding whether $v$ should be marked and whether the cheapest facility in $T_v$ should be opened. When a vertex $v$ changes its marking/opening status, we update the constants in such a way that it becomes hard for the status to be changed back. \section{Preliminaries} \label{sec:prelim} Throughout the paper, we use $F$ to denote the set of potential facilities for all the problems and models; we assume $F$ is given upfront.
$C$ is the dynamic set of clients that our algorithm needs to connect. This is not necessarily the set of clients that are present: in the algorithms for online facility location with recourse and for dynamic facility location in the incremental setting, we fix the connections of some clients as the algorithms proceed. These clients are said to be ``frozen'' and are excluded from $C$. We shall always use $d$ to denote the hosting metric containing $F$ and all potential clients. For any point $j$ and subset $V$ of points in the metric, we define $d(j, V) = \min_{v \in V}d(j, v)$ to be the minimum distance from $j$ to a point in $V$. We assume all distances are integers and that the minimum non-zero distance between two points is 1. We define $D$, the diameter or the aspect ratio of a metric space, as the largest distance between two points in it. Let $n$ be $|F|$ plus the total number of clients that arrive during the whole process. The algorithms do not need to know the exact value of $n$ in advance, except that in the dynamic algorithm for facility location in the incremental setting (the problem in Theorem~\ref{UFL-dynamicIncremental}), to achieve the $1- 1/n^2$ success probability, a sufficiently large $\Gamma = \mathrm{poly}(n, \log D, \frac1\epsilon)$ needs to be given.\footnote{For an algorithm that might fail, we need to have some information about $n$ to obtain a failure probability that depends on $n$.} In all the algorithms, we maintain a set $S$ of open facilities, and a connection $\sigma \in S^C$ of clients in $C$ to facilities in $S$. We do not require that $\sigma$ connects clients to their respective nearest open facilities. For any solution $(S' \subseteq F, \sigma' \in S'^C)$, we use $\mathsf{cc}(\sigma') = \sum_{j \in C}d(j, \sigma'_j)$ to denote the connection cost of the solution. For facility location, we use $\mathsf{cost}(S', \sigma') = f(S') + \mathsf{cc}(\sigma')$ to denote the total cost of the solution $(S', \sigma')$, where $f(S') := \sum_{i \in S'} f_i$. Notice that $\sigma$ and the definitions of the $\mathsf{cc}$ and $\mathsf{cost}$ functions depend on the dynamic set $C$. Throughout the paper, we distinguish between a ``moment'', a ``time'' and a ``step''. A moment refers to a specific time point during the execution of our algorithm. A time corresponds to an arrival or a departure event: at each time, exactly one client arrives or departs, and time $t$ refers to the period from the moment the $t$-th event happens until the moment the $(t+1)$-th event happens (or the end of the algorithm). One step refers to one numbered statement in our pseudo-codes. \subsection{Hierarchically Well Separated Trees} \begin{definition} \label{def:HST} A hierarchically-well-separated tree (or HST for short) is an edge-weighted rooted tree with the following properties: \begin{itemize}[topsep=3pt,itemsep=0pt] \item all the root-to-leaf paths have the same number of edges, \item if we define the level of a vertex $v$, ${\mathsf{level}}(v)$, to be the number of edges in a path from $v$ to any of its leaf descendants, then for a non-root vertex $v$, the weight of the edge between $v$ and its parent is exactly $2^{{\mathsf{level}}(v)}$. \end{itemize} Given a HST $T$ with the set of leaves being $X$, we use $d_T$ to denote the shortest path metric of the tree $T$ (with respect to the edge weights) restricted to $X$.
\end{definition} The classic results by Bartal \cite{Bartal96} and Fakcharoenphol, Rao and Talwar \cite{Fakcharoenphol2003} state that we can embed any $N$-point metric $(X, d)$ (with minimum non-zero distance being $1$) to a distribution $\pi$ of \emph{expanding}\footnote{A metric $(X, d_T)$ is expanding w.r.t $(X, d)$ if for every $u, v \in X$, we have $d_T(u, v) \geq d(u, v)$.} HST metrics $(X, d_T)$ with distortion $O(\log N)$: For every $u, v \in X$, we have $d_T(u, v) \geq d(u, v)$ and $\E_{u, v}[d_T(u, v)] \leq O(\log N) d(u, v)$. Moreover, there is an efficient randomized algorithm \cite{Blelloch0S17} that outputs a sample of the tree $T$ from $\pi$. Thus applying standard arguments, Theorem~\ref{UFL-HST} implies Theorem~\ref{UFL-fullydynamic}. \subsection{Specifying Input Sequence} \label{subsec:specify-input} In this section we specify how the input sequence is given. For the online and dynamic facility location problem, we assume the facility locations $F$, their costs $(f_i)_{i \in F}$, and the metric $d$ restricted to $F$ are given upfront, and they take $O(|F|^2)$ space. Whenever a client $j \in C$ arrives, it specifies its distance to every facility $i \in F$ (notice that the connection cost of an assignment $\sigma \in S^C$ does not depend on distances between two clients and thus they do not need to be given). Thus the whole input contains $O(n|F|)$ words. For Theorems~\ref{UFL-HST} and \ref{UFL-fullydynamic}, as we do not try to optimize the constants, we {\em do not} need that a client specifies its distance to every facility. By losing a multiplicative factor of $2$ and an additive factor of $1$ in the approximation ratio, we can assume that every client $j$ is collocated with its nearest facility in $F$ (See Appendix~\ref{appendix:moving-clients}). Thus, we only require that when a client $j$ comes, it reports the position of its nearest facility. For Theorem~\ref{UFL-HST}, the HST $T$ over $F$ is given at the beginning using $O(|F|)$ words. For Theorem~\ref{UFL-fullydynamic}, the metric $d$ over $F$ is given at the beginning using $O(|F|^2)$ words. Then, we use an efficient algorithm \cite{Blelloch0S17} to sample a HST $T$. \subsection{Local Search for facility location} The local-search technique has been used to obtain the classic $(1+\sqrt 2)$-approximation offline algorithm for facility location \cite{AryaGKMP01}. We now give an overview of the algorithm, which will be the baseline of our online and dynamic algorithms for facility location. One can obtain a (tight) $3$-approximation for facility location without scaling facility costs. Scaling the facility costs by a factor of $\lambda := \sqrt{2}$ when deciding whether an operation can decrease the cost, we can achieve a better approximation ratio of $\alpha_{\mathsf{FL}}:= 1+\sqrt{2}$. Throughout, we fix the constants $\lambda = \sqrt{2}$ and $\alpha_{\mathsf{FL}} = 1+\sqrt{2}$. For a solution $(S', \sigma')$ to a facility location instance, we use $\mathsf{cost}_\lambda(S', \sigma') = \lambda f(S') + \mathsf{cc}(\sigma')$ to denote the cost of the solution $(S', \sigma')$ with facility costs scaled by $\lambda = \sqrt{2}$. We call $\mathsf{cost}_\lambda(S', \sigma')$ the \emph{scaled cost} of $(S', \sigma')$. Given the current solution $(S, \sigma)$ for a facility location instance defined by $F, C, d$ and $(f_i)_{i \in F}$, we can apply a \emph{local operation} that changes the solution $(S, \sigma)$. A valid local operation is one of the following. 
\begin{itemize} \item An $\mathsf{open}$ operation, in which we open some facility $i \in F$ and reconnect a subset $C' \subseteq C$ of clients to $i$. We allow $i$ to be already in $S$, in which case we simply reconnect $C'$ to $i$. This needs to be allowed since our $\sigma$ does not connect clients to their nearest open facilities. \item A $\mathsf{close}$ operation, we close some facility $i' \in S$ and reconnect the clients in $\sigma^{-1}(i')$ to facilities in $S \setminus \{i'\}$. \item In a $\mathsf{swap}$ operation, we open some facility $i \notin S$ and close some facility $i' \in S$, reconnect the clients in $\sigma^{-1}(i')$ to facilities in $S \setminus \{i'\} \cup \{i\}$, and possibly some other clients to $i$. We say $i$ is \emph{swapped in} and $i'$ is \emph{swapped out} by the operation. \end{itemize} Thus, in any valid operation, we can open and/or close at most one facility. A client can be reconnected if it is currently connected to the facility that will be closed, or it will be connected to the new open facility. After we apply a local operation, $S$ and $\sigma$ will be updated accordingly so that $(S, \sigma)$ is always the current solution. For the online algorithm with recourse model, since we need to bound the number of reconnections, we apply a local operation only if the \emph{scaled} cost it decreases is large compared to the number of reconnections it makes. This motivates the following definition: \begin{definition}[Efficient operations for facility location] \label{def:phieff} Given a $\phi \geq 0$, we say a local operation on a solution $(S, \sigma)$ for a facility location instance is $\phi$-efficient, if it decreases $\mathsf{cost}_\lambda(S, \sigma)$ by more than $\phi$ times the number of clients it reconnects. \end{definition} The following two theorems can be derived from the analysis for the local search algorithms for facility location. We include their proofs in Appendix~\ref{appendix:local-search} for completeness. \begin{restatable}{theorem}{uflapprox}\label{thm:FL-offline-apx-ratio} Consider a facility location instance with cost of the optimum solution being $\mathsf{opt}$ (using the original cost function). Let $(S, \sigma)$ be the current solution in our algorithm and $\phi \geq 0$ be a real number. If there are no $\phi$-efficient local operations on $(S, \sigma)$, then we have \begin{align*} \mathsf{cost}(S, \sigma) \leq \alpha_{\mathsf{FL}}\big(\mathsf{opt} + |C|\phi\big). \end{align*} \end{restatable} In particular, if we apply the theorem with $\phi = 0$, then we obtain that $(S, \sigma)$ is a $(\alpha_{\mathsf{FL}} = 1+\sqrt{2})$-approximation for the instance. \sh{The following theorem will be used to analyze our randomized local search procedure.} \begin{restatable}{theorem}{ufloperations}\label{thm:FL-offline-operations} Let $(S, \sigma)$ be a solution to a facility location instance and $\mathsf{opt}$ be the optimum cost. Then there are two sets ${\mathcal{P}}_{\mathrm{C}}$ and ${\mathcal{P}}_{\mathrm{F}}$ of valid local operations on $(S, \sigma)$, where each operation $\mathrm{op}$ decreases the scaled cost $\mathsf{cost}_\lambda(S, \sigma)$ by $\nabla_{\mathrm{op}} > 0$, such that the following holds: \begin{itemize} \item $\sum_{\mathrm{op} \in {\mathcal{P}}_{\mathrm{C}}} \nabla_{\mathrm{op}} \geq \mathsf{cc}(\sigma)- (\lambda f(S^*) + \mathsf{cc}(\sigma^*)) $. \item $\sum_{\mathrm{op} \in {\mathcal{P}}_{\mathrm{F}}} \nabla_{\mathrm{op}} \geq \lambda f(S) - (\lambda f(S^*) + 2\mathsf{cc}(\sigma^*)) $. 
\item There are at most $|F|$ $\mathsf{close}$ operations in ${\mathcal{P}}_{\mathrm{C}} \biguplus {\mathcal{P}}_{\mathrm{F}}$. \item For every $i \in F$, there is at most one operation in each of ${\mathcal{P}}_{\mathrm{C}}$ and ${\mathcal{P}}_{\mathrm{F}}$ that opens or swaps in $i$. \end{itemize} \end{restatable} \subsection{Useful Lemmas} The following lemmas will be used repeatedly in our analysis and thus we prove them separately in Appendix~\ref{appendix:helper-proofs}. \begin{restatable}{lemma}{helpersumba} \label{lemma:helper-sum-b/a} Let $b \in {\mathbb{R}}_{\geq 0}^T$ for some integer $T \geq 1$. Let $B_{T'} = \sum_{t=1}^{T'} b_t$ for every $T' = 0, 1, \cdots, T$. Let $0 < a_1 \leq a_2 \leq \cdots \leq a_T$ be a sequence of real numbers and $\alpha > 0$ such that $B_t \leq \alpha a_t$ for every $t \in [T]$. Then we have \begin{align*} \sum_{t = 1}^T \frac{b_t}{a_t} \leq \alpha \left(\ln \frac{a_T}{a_1} + 1\right). \end{align*} \end{restatable} \begin{restatable}{lemma}{helperstar} \label{lemma:helper-star} Assume that at some moment of an algorithm for facility location, $C$ is the set of clients and $(S, \sigma)$ is the solution for $C$. Let $i \in F$ and $\tilde C \subseteq C$ be any non-empty set of clients. Assume also that at the moment there is no $\phi$-efficient operation that opens $i$, for some $\phi \geq 0$. Then we have \begin{align*} d(i, S) \leq \frac{f_i + 2\sum_{\tilde j \in \tilde C} d(i, \tilde j)}{|\tilde C|} + \phi. \end{align*} \end{restatable} \paragraph{Organization} The rest of the paper is organized as follows. In Section~\ref{sec:ofl}, we prove Theorem~\ref{UFL-recourse} by giving our online algorithm for facility location with recourse. Section~\ref{sec:fast-UFL} gives the randomized local search procedure that will be used in the proof of Theorem~\ref{UFL-dynamicIncremental} in Section~\ref{sec:dfl}. Section~\ref{sec:dfl-fully} is dedicated to the proof of Theorem~\ref{UFL-fullydynamic}, by giving the fully dynamic algorithm for facility location in HST metrics. We give some open problems and future directions in Section~\ref{sec:discussions}. Some proofs are deferred to the appendix for a better flow of the paper. \subsubsection{Draft for key lemma in $k$-median} \begin{lemma} We have \begin{align*} \sum_{t = 1}^{T'}\Delta_t \leq O\left(\frac{\log T'}{\eta}\right) \mathsf{opt}_{T'}.
\end{align*} \end{lemma}
\jiayi{For this proof, we can combine k-mean and ufl case}
\begin{proof} Recall the inequalities for open: \begin{align*} \sum_{j \in C, \sigma^*(j)=i^* } d(j,S) \le f + \sum_{j \in C,\sigma^*(j)=i^*}d(j,i^*) + |J|\cdot \phi .\end{align*} Focus on one star $ i^* $: for $ j_{t_1},\ldots, j_{t_{q-1}}\in \{j: \sigma^*(j)=i^*\} $, \begin{align*} D \le d(j_{\tau} ,S) + d(j,i^*) \qquad \text{for $\tau=t_1,\ldots,t_{q-1}$} .\end{align*} When $ j_{t_q} $ arrives, \begin{align*} \Delta \le d(j_{t_{q-1} } ,\psi^*(i^*))\le D+d(j_{t_{q-1} } \ ,i^*) .\end{align*} \begin{align*} \Delta \le \frac{f}{q-1} + \frac{2\cdot \sum_{\tau=t_1}^{t_{q-1} } d(j_{\tau} ,i^*) }{q-1} + d(j_{t_{q-1} }, i^*) + |J|\cdot \phi .\end{align*} Summing up $ \Delta $ for each client and then for all stars, we have: \begin{align*} \sum_{i} \Delta_i & \le \sum_{i} f\cdot \log|C_i| + 2 C^*_i \log|C_i| + C^*_i + \frac{|C_i|(|C_i|+1)}{2} \phi \\ & \le |S|f\cdot \log T' + 2C^* \log T' + T'^2\phi .\end{align*} \jiayi{I need to know which definition of total cost I should use. What about $ \le O(\log T')\mathsf{opt}_{T'} $ when $ \eta <1 $ and $O(\frac{\log T'}{\eta}) $ when $ \eta\ge 1 $ } $ |S|f\le (1+ \eta)kf = F^* + \eta kf \le F^* + 12 \mathsf{opt}$. \end{proof}
\medskip
\section{Fully Dynamic Algorithm for Facility Location}
In this section, we give our algorithms for facility location in the fully dynamic setting. We give two results in this setting. The first and simpler result is concerned with the case where we have uniform facility cost and are interested in the \emph{facility recourse}: every time we close a facility, we incur a recourse of 1, regardless of the number of reconnected clients. In this case we show a simple $(3+\epsilon)$-approximation with $O(1)$-amortized facility recourse. In the second result we consider non-uniform facility costs and client-reconnection recourse. We obtain $O(\log^2D)$ amortized recourse, but with an $O(\log |F|)$ competitive ratio. This is done by embedding the metric on $F$ into a distribution of metrics induced by hierarchically-well-separated trees using the classic FRT technique.
\subsection{Problem with Uniform Facility Cost and Facility Recourse Considered}
We first consider the case where all the facility costs are the same, i.e., $f_i = f$ for every $i \in F$. The total recourse is defined as the number of facilities we close during the course of the algorithm. Our algorithm is simple: when a client $j$ arrives, we open the facility nearest to $j$ (if it is not already open) and connect $j$ to it; when a client $j$ departs, we simply remove it from the solution. Then we do the following: while there exists a local operation that can decrease $\mathsf{cost}(S, \sigma)$ by at least $\epsilon' f$ (where $\epsilon' = \Theta(\epsilon)$, and $\epsilon$ is the additive loss in our competitive ratio), we perform the operation. The following theorem can be derived from the analysis of the local search algorithm for facility location and the proof can be found in Appendix xx for completeness.
\begin{restatable}{theorem}{uflapproxfacilityrecourse} Let $(S, \sigma)$ be a solution to a facility location instance with optimum set of open facilities being $S^*$ and optimum cost being $\mathsf{opt}$. Let $\phi \geq 0$. If no local operations on $(S, \sigma)$ can decrease $\mathsf{cost}(S, \sigma)$ by more than $\phi$, then we have \begin{align*} \mathsf{cost}(S, \sigma) \leq 3 \mathsf{opt} + (|S| + 2|S^*|)\phi.
\end{align*} \end{restatable}
Thus at the end of every time step, we have that $\mathsf{cost}(S, \sigma) \leq 3\mathsf{opt} + (|S| + 2|S^*|)\epsilon'f$, where $\mathsf{opt}$ is the cost of the optimum solution at the time step and $S^*$ is the set of facilities it opens. This implies $|S|(1-\epsilon')f + \mathsf{cc}(\sigma) \leq (3 + 2\epsilon')\mathsf{opt}$, implying that $\mathsf{cost}(S, \sigma) \leq \frac{3+2\epsilon'}{1-\epsilon'}\mathsf{opt}$. This is a $(3+O(\epsilon'))$-approximation.
It suffices to bound the number of facility closing operations. We are interested in the following adjusted cost of any solution $(S', \sigma')$: $\mathsf{cost}(S', \sigma') - \sum_{j \in C}d(j, F)$; notice that $\sum_{j \in C}d(j, F)$ only depends on the instance. We shall show that the adjusted cost changes smoothly as clients arrive and depart. The following observation can be made:
\begin{obs} Let $C'$ and $C''$ be two sets of clients such that $C' \subseteq C''$ and $|C'' \setminus C'| = 1$. Then the adjusted costs of the optimum solutions for $C''$ and for $C'$ differ by at most $f$. \end{obs}
\begin{proof} Let $j$ be the unique client in $C'' \setminus C'$. Let $\mathsf{opt}'_{C'}$ and $\mathsf{opt}'_{C''}$ be the adjusted costs of the optimum solutions for $C'$ and $C''$ respectively. Then we have $\mathsf{opt}'_{C'} \leq \mathsf{opt}'_{C''}$: Given the optimum solution for $C''$, removing $j$ decreases the cost by at least $d(j, F)$ and thus does not increase the adjusted cost. We now prove $\mathsf{opt}'_{C''} \leq \mathsf{opt}'_{C'} + f$: Start from the optimum solution for $C'$. We then add $j$, and open the facility nearest to $j$ (if it has not been opened) and connect $j$ to the facility. Notice that this increases the adjusted cost by at most $f$. \end{proof}
Then it is easy to see that our algorithm has an amortized facility recourse of $1/\epsilon'$. When a client $j$ arrives, opening its nearest facility in $F$ and connecting $j$ to it increases the adjusted cost by at most $f$. When a client $j$ departs, removing it can only decrease the adjusted cost. Since each local operation decreases the adjusted cost by at least $\epsilon' f$, the total number of local operations we apply in the algorithm is at most $1/\epsilon'$ times the number of arrived clients. Since every local operation closes at most 1 facility, the algorithm has an amortized facility recourse of $1/\epsilon' = O(1/\epsilon)$.
\section{Offline Algorithms for $k$-Median}
\begin{algorithm}
\caption{$\mathsf{offline\mhyphen k\mhyphen median\mhyphen iterate}(M)$} \label{alg:offline-k-median}
\begin{algorithmic}[1]
\State repeat the following $M$ times
\State\hspace*{\algorithmicindent} $i \gets $ random facility in $F \setminus S$
\State\hspace*{\algorithmicindent} $(\Delta, i') \gets \mathsf{\Delta\mhyphen swap\mhyphen in}(i)$
\State\hspace*{\algorithmicindent} \textbf{if} $\Delta < 0$ \textbf{then} open $i$ and close $i'$ by updating $S, \sigma$ and heaps accordingly
\end{algorithmic}
\end{algorithm}
The fast offline algorithm for $k$-median can be analyzed similarly. We can use the $2$-approximation for $k$-center to produce an initial solution for $k$-median. It is easy to see that this gives a $2|C|$-approximation. Then we run $\mathsf{offline\mhyphen k\mhyphen median\mhyphen iterate}(M)$ for some $M$ to be decided later. The procedure $\mathsf{offline\mhyphen k\mhyphen median\mhyphen iterate}(M)$ is similar to $\mathsf{offline\mhyphen UFL\mhyphen iterate}$ but is simpler: we only try to perform $\mathsf{swap}$ operations.
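
For concreteness, the following is a small Python sketch (ours, not part of the paper) of one run of $\mathsf{offline\mhyphen k\mhyphen median\mhyphen iterate}(M)$. The best swap for the sampled facility is recomputed from scratch rather than via the heap-based $\mathsf{\Delta\mhyphen swap\mhyphen in}$, so it only illustrates the sampling scheme and not the claimed running time; all names are ours.
\begin{verbatim}
import random

def connection_cost(S, d):
    # d[j][i]: distance from client j to facility i
    return sum(min(d[j][i] for i in S) for j in d)

def offline_k_median_iterate(S, F, d, M, rng=random):
    S = set(S)                      # current set of k open facilities
    for _ in range(M):
        outside = [i for i in F if i not in S]
        if not outside:
            break
        i = rng.choice(outside)     # sample the facility to swap in
        base = connection_cost(S, d)
        # best facility i' to swap out, and the cost change of that swap
        delta, best = min(
            (connection_cost((S - {ip}) | {i}, d) - base, ip) for ip in S)
        if delta < 0:               # apply the swap only if it improves the cost
            S = (S - {best}) | {i}
    return S
\end{verbatim}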
Applying theorem \sh{xx} for a solution $(S, \sigma)$ with $|S| = k$ and $f$ being sufficiently large, we can find a set ${\mathcal{P}}$ of swap operations (if $f$ is large enough, an $\mathsf{open}$ operation cannot decrease the cost) satisfying the properties stated in the theorem. By a similar argument, we can show the following lemma:
\begin{lemma} Let $(S^\circ, \sigma^\circ)$ and $(S^\bullet, \sigma^\bullet)$ be respectively the solutions before and after we run $\mathsf{offline\mhyphen k\mhyphen median\mhyphen iterate}(M)$. Then we have \begin{align*} \E\big[(\mathsf{cost} (S^\bullet, \sigma^\bullet) - 5\mathsf{opt})_+ | (S^\circ, \sigma^\circ)\big] \leq \left(1-\frac1{|F|}\right)^M \left(\mathsf{cost} (S^\circ, \sigma^\circ) - 5\mathsf{opt}\right)_+. \end{align*} \end{lemma}
Again, by choosing $M$ to be of order $O(n^2\log^2n)$, we can guarantee that with probability at least $1-1/n^2$, the output $k$-median solution has cost at most $(5+1/n^2)\mathsf{opt}$.
\remove{\subsection{Our Techniques: Old Version}
All our algorithms in the dynamic setting have the following two-step framework: In the first step, we design an algorithm that maintains the desired approximation factor with small recourse. Here we build upon our ideas in Theorem \ref{UFL-recourse}. Note that the notion of recourse, which does not take into account update time, quantifies how many changes an algorithm {\em has to make} to maintain the desired approximation factor due to {\em information theoretic limits} imposed by online and dynamic models. Next, in the second step, we design new algorithms and data structures to ensure that changes made by the recourse algorithm can be {\em implemented with small update times}. This second step for the incremental algorithm for facility location makes use of randomization. On the other hand, our algorithm for the fully dynamic setting on HST metrics shows one can convert a recourse bound into an update time bound easily on tree metrics. Despite this, our guarantee for general metric spaces holds only against oblivious adversaries. This is due to the fact that the probabilistic embedding of general metric spaces using HSTs only preserves distances in expectation.
\medskip
We believe that this two-step framework of separating the information theoretic aspects of dynamic algorithms from the data structure or implementation aspects, using recourse as a conceptual tool, should find more applications. While our theorems make this two-step process explicit, similar ideas have appeared before in the literature, although not explicitly stated in this language; see \cite{GuptaK0P17, BernsteinC18, DuanHZ19,BK20} for some examples. The result of Gupta et al \cite{GuptaK0P17} for the dynamic set cover also builds on their results in the recourse model. However, they analyze the same algorithms in both settings. Other results that fit in this two-step framework are the elegant result of Duan, He and Zhang \cite{DuanHZ19} on dynamic edge coloring, and the result of Bhattacharya and Kulkarni on dynamic topological sorting \cite{BK20}. In these results, it is possible to show that the algorithms can be made deterministic if one is interested only in recourse bounds. The randomization is only required in converting this recourse bound into an update time bound. Finally, as also observed in \cite{GuptaK0P17}, the notion of recourse also helps in bridging the online and dynamic algorithms communities, and thus exposing each community to the rich algorithmic tool kit developed in the other.
\medskip
\subsection{Our Techniques}
Besides our ideas for analysis, we should cover the following
\begin{itemize}
\item local search: not used before in online facility location.
\item also not much analysed in the context of dynamic algorithms.
\item but a powerful algorithmic paradigm with a lot of applications.
\item we believe that our framework should be useful for other applications.
\item I have also not seen many applications of HSTs in dynamic algorithms.
\end{itemize}
Finally, as a broader appeal of our techniques, we show as an example that our results almost directly extend to the $k$-median problem.
\medskip
}
\section{Fully Dynamic Algorithm for Facility Location on Hierarchically Well Separated Tree Metrics} \label{sec:dfl-fully}
In this section, we give our fully dynamic algorithm for facility location on hierarchically-well-separated-tree (HST) metrics. Our algorithm achieves an $O(1)$-approximation with $O(\log^2D)$ amortized recourse and $O(\log^3 D)$ amortized update time. As we mentioned earlier, we assume each client is collocated with a facility. From now on, we fix the HST $T$ and assume the set of leaves of $T$ is $X = F$; let $V$ be the set of all nodes in $T$. Let $d_T$ be the metric induced by $T$ over the set $V$ of vertices.
\noindent{\bf Notations.} Recall that ${\mathsf{level}}(v)$ is the level of $v$ in $T$. For every vertex $v \in V$, define $\Lambda_v$ to be the set of children of $v$, $X_v$ to be the set of leaf descendants of $v$, and $T_v$ to be the maximal sub-tree of $T$ rooted at $v$. We extend the facility cost from $X$ to all vertices in $V$: for every $v \in V \setminus X$, we define $f_v = \min_{i \in X_v}f_i$. We can assume that each internal vertex $v$ is a facility; by opening $v$ we mean opening a copy of the $i \in X_v$ with $f_i = f_v$. This assumption only loses a factor of $2$ in the competitive ratio: On one hand, having more facilities can only make our problem easier; on the other hand, the cost of connecting a client to any $i \in X_v$ is at most twice that of connecting it to $v$. By the definition, the facility costs along a root-to-leaf path are non-decreasing.
\subsection{Offline Algorithm for Facility Location on HST Metrics}
In this section, we first give an offline $O(1)$-approximation algorithm for facility location on the HST metric $d_T$ as a baseline. Notice that facility location on trees can be solved exactly using dynamic programming. However, that algorithm is hard to analyze in the dynamic algorithm model since its solution is sensitive to client arrivals and departures. Our algorithm generalizes the algorithm in \cite{EsencayiGLW19} for facility location with uniform facility cost, which was used to achieve the differential privacy requirement.
For every vertex $v \in V$, we let $N_v$ be the number of clients at locations in $X_v$. Although according to the definition the $N_v$'s are integers, in most of the analysis we allow them to be non-negative \emph{real numbers}. This will be useful when we design the dynamic algorithm. Let $\alpha \in \{1, 2\}^V$ and $\beta \in \{1, 2\}^{V \setminus X}$ be vectors given to our algorithm. They are introduced solely for the purpose of extending the algorithm to the dynamic setting; for the offline algorithm we can set $\alpha$ and $\beta$ to be all-1 vectors.
\paragraph{Marked and Open Facilities} For every vertex $v \in V$, we say $v$ is \emph{marked} w.r.t the vectors $N$ and $\alpha$ if $$N_v \cdot 2^{{\mathsf{level}}(v)} > f_v/\alpha_v $$ and \emph{unmarked} otherwise.
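
As a concrete toy example of the marking rule (the numbers are made up): suppose ${\mathsf{level}}(v) = 3$, $f_v = 64$ and $\alpha_v = 1$; then $v$ is marked as soon as $N_v \cdot 2^3 > 64$, i.e., once more than $8$ units of client mass reside in $X_v$. If $\alpha_v$ is later set to $2$ (as the dynamic algorithm below will do upon marking $v$), the condition for $v$ to stay marked relaxes to $N_v > 4$, and this gap between the two thresholds is exactly the slack that the amortized analysis will charge against.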
The following observation can be made:
\begin{obs} Let $u$ be the parent of $v$. If $v$ is marked w.r.t $N$ and $\alpha$, so is $u$. \end{obs}
\begin{proof} That $v$ is marked w.r.t $N$ and $\alpha$ implies $N_v 2^{{\mathsf{level}}(v)} > f_v/\alpha_v $. Notice that $N_u \geq N_v, {\mathsf{level}}(u) = {\mathsf{level}}(v) + 1, \alpha_v \leq 2\alpha_u$ and $f_u \leq f_v$. So, $N_u 2^{{\mathsf{level}}(u)} \geq 2N_v2^{{\mathsf{level}}(v)} > 2 f_v/\alpha_v \geq f_u/\alpha_u $. \end{proof}
Thus there is a monotonicity property on the marking status of vertices in $T$. We say a vertex $v$ is highest unmarked (w.r.t $N$ and $\alpha$) if it is unmarked and its parent is marked; we say a vertex $v$ is lowest marked if it is marked but all its children are unmarked. However, sometimes we say a vertex $u$ is the lowest marked ancestor of a leaf $v \in X$ if either $u=v$ is marked, or $u\neq v$ is marked and the child of $u$ in the $u$-$v$ path is unmarked; notice that in this case, $u$ might not be a lowest marked vertex since it may have some other marked children. If we need to distinguish between the two cases, we shall say that $u$ is lowest marked \emph{globally} to mean $u$ is a lowest marked vertex.
If a leaf vertex $v \in X$ is marked, then we open $v$. For every marked vertex $v \in V\setminus X$, we open $v$ if and only if $$\left(\sum_{u \in \Lambda_v: u \text{ unmarked}} N_u \right)2^{{\mathsf{level}}(v)}> f_v/(\alpha_v\beta_v).$$ Notice that all unmarked vertices are closed.
\begin{obs} \label{obs:departure-lowest-open} If $v$ is lowest marked, then $v$ is open. \end{obs}
\begin{proof} We can assume $v \notin X$ since otherwise $v$ is open. So, $N_v 2^{{\mathsf{level}}(v)} > f_v/\alpha_v$ and all children of $v$ are unmarked. Thus, $\sum_{u \in \Lambda_v: {u\text{ unmarked}}}N_u = \sum_{u \in \Lambda_v}N_u = N_v$. Therefore, $\left(\sum_{u \in \Lambda_v: {u\text{ unmarked}}}N_u\right) 2^{{\mathsf{level}}(v)} = N_v 2^{{\mathsf{level}}(v)} > f_v/\alpha_v \geq f_v/(\alpha_v\beta_v)$. Thus $v$ will be open. \end{proof}
With the set of open facilities defined, every client is connected to its nearest open facility according to $d_T$, using a consistent tie-breaking rule (e.g., the nearest open facility with the smallest index). We assume the root $r$ of $T$ has $\frac{f_r}{2^{{\mathsf{level}}(r)}} \leq 1$ by increasing the number of levels. So $r$ will be marked whenever $N_r \geq 1$. This finishes the description of the offline algorithm.
\paragraph{Analysis of $O(1)$-Approximation Ratio.} We show the algorithm achieves an $O(1)$-approximation. First we give a lower bound on the optimum cost. For every $v \in V$, let $${\mathsf{LB}}(v) = \min\set{N_v2^{{\mathsf{level}}(v)}, f_v}.$$ Then we have
\begin{lemma} \label{lemma:departure-LB} Let $U$ be a set of vertices in $T$ without an ancestor-descendant pair; i.e., for every two distinct vertices $u$ and $v$ in $U$, $u$ is not an ancestor of $v$. Then the cost of the optimum solution is at least $\sum_{v \in U}{\mathsf{LB}}(v)$. \end{lemma}
\begin{proof} Fix an optimum solution. Consider any $v \in U$. We consider the cost inside $T_v$ in the optimum solution: the connection cost of clients, plus the cost of open facilities in $T_v$. Then this cost is at least ${\mathsf{LB}}(v)= \min\set{N_v2^{{\mathsf{level}}(v)}, f_v}$: If we open a facility in $T_v$ then the facility cost is at least $f_v$; otherwise, all the $N_v$ clients in $T_v$ have to be connected to outside $T_v$, incurring a cost of at least $N_v2^{{\mathsf{level}}(v)}$.
The lemma follows from the fact that the trees $T_v$ over all $v \in U$ are disjoint and thus we are not over-counting the costs in the optimum solution. \end{proof}
Then let $U$ be the set of highest unmarked vertices and marked leaves; clearly $U$ does not have an ancestor-descendant pair. By Lemma~\ref{lemma:departure-LB}, the optimum cost is at least $\sum_{v \in U}{\mathsf{LB}}(v)$. We prove the following lemma.
\begin{lemma} \label{lemma:departure-UB} The solution produced by our algorithm has cost at most $O(1)\sum_{u \in U}{\mathsf{LB}}(u)$. \end{lemma}
\begin{proof} First consider the facility cost of our solution. If a leaf $v$ is marked and open, we have $N_v > f_v/\alpha_v$ (as ${\mathsf{level}}(v) = 0$) and thus ${\mathsf{LB}}(v) = \min\set{N_v,f_v} \geq f_v/\alpha_v$. Then $f_v$ can be bounded by $\alpha_v{\mathsf{LB}}(v) \leq 2{\mathsf{LB}}(v)$. If $v \in V \setminus X$ is marked and open, then by our algorithm we have $\sum_{u \in \Lambda_v: u \text{ unmarked}}N_u 2^{{\mathsf{level}}(v)} \geq f_v/(\alpha_v\beta_v )$. Since each $u$ in the summation is unmarked, we have ${\mathsf{LB}}(u) = N_u 2^{{\mathsf{level}}(u)}$. Thus, we have $\sum_{u \in \Lambda_v: u\text{ unmarked}}{\mathsf{LB}}(u) = \frac12\sum_{u}N_u 2^{{\mathsf{level}}(v)} \geq \frac12 f_v/(\alpha_v\beta_v) \geq \frac18 f_v$. That is, $f_v$ can be bounded by $8\sum_{u \in \Lambda_v:u \text{ unmarked}}{\mathsf{LB}}(u)$. Notice that each $u$ in the summation has $u \in U$ since it is highest unmarked. So, summing the bounds over all open facilities $v$ gives us that the facility cost of our solution is at most $8\sum_{u \in U}{\mathsf{LB}}(u)$.
Now consider the connection cost. For every $v \in X$, let $u$ be the highest unmarked ancestor of $v$ (if $v$ itself is open, then its connection cost is $0$ and we do not need to consider this case). Let $w$ be the parent of $u$; so $w$ is marked. Then there must be an open facility in the maximal tree rooted at $w$: consider any lowest marked vertex in the sub-tree rooted at $w$; it must be open by Observation~\ref{obs:departure-lowest-open}. Thus, any client at $v$ has connection cost at most $2 \times 2^{{\mathsf{level}}(w)} = 4 \times 2^{{\mathsf{level}}(u)}$. Thus, the total connection cost in our solution is at most $4\sum_{u \in U \setminus X}N_u2^{{\mathsf{level}}(u)} = 4\sum_{u \in U \setminus X}{\mathsf{LB}}(u)$. This finishes the proof of the lemma. \end{proof}
Combining Lemmas~\ref{lemma:departure-LB} and \ref{lemma:departure-UB} gives that our algorithm is an $O(1)$-approximation. One lemma that will be useful in the analysis of the dynamic algorithm is the following:
\begin{lemma} \label{lemma:departure-ub-outside} For any open facility $v$ in our solution, the number of clients connected to $v$ that are outside $T_v$ is at most $O(\log D)\frac{f_v}{2^{{\mathsf{level}}(v)}}$. \end{lemma}
\begin{proof} We consider each ancestor $u$ of $v$ and count the number of clients connected to $v$ whose lowest common ancestor with $v$ is $u$. Focus on a child $w$ of $u$ that is not $v$ or an ancestor of $v$. If $w$ is marked, then no clients in $T_w$ will be connected to $v$ since some facility in $T_w$ will be open. Thus, let $U'$ be the set of unmarked children of $u$ that are not $v$ or an ancestor of $v$. Then if we have $\sum_{w \in U'}N_w2^{{\mathsf{level}}(u)} \geq f_u/(\alpha_u\beta_u)$, then $u$ will be marked and open and clients in $T_w, w \in U'$ will not be connected to $v$.
Otherwise we have $\sum_{w \in U'}N_w< f_u/(\alpha_u\beta_u\cdot 2^{{\mathsf{level}}(u)} ) \leq f_u/2^{{\mathsf{level}}(u)} \leq f_v/2^{{\mathsf{level}}(v)}$ as $f_u \leq f_v$ and ${\mathsf{level}}(u) \geq {\mathsf{level}}(v)$. The lemma follows since we have at most $O(\log D)$ ancestors of $v$. \end{proof}
\paragraph{Remark} The algorithm so far gives a \emph{data structure} that supports the following operations in $O(\log D)$ time: i) updating $N_v$ for some $v \in X$ and ii) returning the nearest open facility of a leaf $v \in X$. Indeed the algorithm can be made simpler: We set $\alpha$ to be the all-1 vector, and we open the set of lowest marked facilities (so neither $\alpha$ nor $\beta$ is needed). For every vertex $u \in V$, we maintain the nearest open facility $\psi_u$ to $u$ in $T_u$. Whenever a client at $v$ arrives or departs, we only need to change $N_u$, $\psi_u$, and the marking and opening status of $u$ for ancestors $u$ of $v$. To return the closest open facility to a leaf $v \in X$, we travel up the tree from $v$ until we find an ancestor $u$ with $\psi_u$ defined, and return $\psi_u$. Both operations take $O(\log D)$ time. However, our goal is to maintain the solution $(S, \sigma)$ \emph{explicitly} in memory. Thus we also have to bound the number of reconnections during the algorithm, since that will be a lower bound on the total running time.
\subsection{Dynamic Algorithm for Facility Location on HST Metrics}
In this section, we extend the offline algorithm to a dynamic algorithm with $O(\log^3 D)$-amortized update time; recall that $D$ is the aspect ratio of the metric. We maintain the $\alpha, \beta$ and $N$-vectors, and at any moment of the algorithm, the marking and opening status of vertices are exactly the same as those obtained from the offline algorithm for $\alpha, \beta$ and $N$. Initially, let $\alpha$ and $\beta$ be all-$1$ vectors, and $N$ be the all-0 vector. So all the vertices are unmarked. Whenever a client at some $v \in X$ arrives or departs, the $\alpha, \beta$ values and the marking and opening status of ancestors of $v$ may change, and we show how to handle the changes. The vertices that are not ancestors of $v$ are not affected during the process.
When a client at $v$ arrives or departs, we increase or decrease the $N_u$ values for all ancestors $u$ of $v$ by 1 \emph{continuously} at the same rate (we can think of the number of clients at $v$ as increasing or decreasing by 1 continuously). During the process, the marking and opening status of these vertices may change. If such an event happens, we change the $\alpha$ and/or $\beta$ values of the vertex so that it becomes harder for the status to change back in the future. Specifically, we use the following rules:
\begin{itemize}
\item If a vertex $u$ changes to marked (from being unmarked), then we change $\alpha_u$ to $2$ (notice that $u$ remains marked w.r.t the new $\alpha$), and $\beta_u$ to $1$. In this case, we do not consider the opening status change of $u$ as an event.
\item If a vertex $u$ changes to unmarked (from being marked), we change $\alpha_u$ to $1$ (notice that $u$ remains unmarked w.r.t the new $\alpha$). The $\beta_u$ value becomes useless. In this case, we also do not consider the opening status change of $u$ as an event.
\item If a marked vertex $u$ becomes open (from being closed), then we change $\beta_u$ to $2$ (notice that $u$ remains open w.r.t the new $\beta$).
\item If a marked vertex $u$ becomes closed (from being open), then we change $\beta_u$ to $1$ (notice that $u$ remains closed w.r.t the new $\beta$).
\end{itemize}
We call the 4 types of events above marking, unmarking, opening and closing events. Now we discuss the order in which the events happen. When we increase the $N_u$ values of ancestors of $v$ continuously, one of the following two events may happen:
\begin{itemize}
\item The highest unmarked ancestor $u$ of $v$ may become globally lowest marked, and this may \emph{induce} a closing event for the parent $w$ of $u$.
\item The lowest marked ancestor $u$ of $v$ may become open.
\end{itemize}
Similarly, when we decrease the $N_u$ values of ancestors of $v$ continuously, one of the following two events may happen:
\begin{itemize}
\item The lowest marked ancestor $u$ of $v$ may become unmarked (note that $u$ must have been lowest marked globally), and this may \emph{induce} an opening event for the parent $w$ of $u$.
\item The lowest marked ancestor $u$ of $v$ may become closed.
\end{itemize}
Above, if two events happen at the same time, we handle an arbitrary one of them. Notice that after we handle the event, the conditions for the other event might not hold any more, in which case we do not handle it. Once we have finished the process of increasing or decreasing the $N_u$ values by 1, the clients will be connected to their respective nearest open facilities, breaking ties using the consistent rule. A reconnection happens if a client is connected to a different facility.
\paragraph{Bounding Number of Reconnections} Now we analyze the reconnections made in the algorithm. When a client at $v \in X$ arrives or departs, at most $O(\log D)$ vertices $u$ will have their $N_u$ values changed by $1$. We distribute 4 tokens to each ancestor $u$ of $v$, which are of type-A, type-B, type-C and type-D respectively.\footnote{The types are only defined for convenience.} We are going to use these tokens to charge the events that happen.
First focus on the sequence of marking/unmarking events that happen at a vertex $u$. Right before $u$ becomes unmarked we have $N_u \leq f_u/(2 \times 2^{{\mathsf{level}}(u)})$ since at the moment we have $\alpha_u = 2$. Immediately after that $\alpha_u$ is changed to $1$. For $u$ to become marked again, we need $N_u > f_u/2^{{\mathsf{level}}(u)}$. So during the period $N_u$ must have been increased by at least $f_u/(2 \times 2^{{\mathsf{level}}(u)})$. Similarly, right before $u$ becomes marked we have $N_u \geq f_u/2^{{\mathsf{level}}(u)}$ since at the moment we have $\alpha_u = 1$. Then we change $\alpha_u$ to $2$ immediately. For $u$ to become unmarked again, $N_u$ should be decreased by at least $ f_u/(2\times2^{{\mathsf{level}}(u)})$. So, when a marking/unmarking event happens at $u$, we can spend $\Omega(f_u/2^{{\mathsf{level}}(u)})$ type-A tokens owned by $u$.
Then we focus on the sequence $\mathcal{S}$ of opening/closing events at $u$ between two adjacent marking/unmarking events at $u$. At these moments, $u$ is marked and $\alpha_u = 2$. For the first event in $\mathcal{S}$, we can spend $\Omega(f_u/2^{{\mathsf{level}}(u)})$ type-B tokens owned by $u$. If some opening/closing event $e$ in $\mathcal{S}$ is induced by an unmarking/marking event of some child $u'$ of $u$, then we can spend $\Omega(f_{u'}/2^{{\mathsf{level}}(u')}) \geq \Omega(f_u/2^{{\mathsf{level}}(u)})$ type-C tokens owned by $u'$ for $e$, and for the event $e'$ after $e$ in $\mathcal{S}$ if it exists.
Notice that we already argued that $u'$ has collected a sufficient number of type-C tokens. Then we focus on an event $e'$ in $\mathcal{S}$ such that both $e'$ and the event $e$ before $e'$ in $\mathcal{S}$ are not induced. First, assume $e$ is an opening event and $e'$ is a closing event. Then, after $e$ we have $\sum_{u' \in \Lambda_u: u' \text{ unmarked}} N_{u'} = f_u/(2 \times 2^{{\mathsf{level}}(u)})$ and before $e'$ we have $\sum_{u' \in \Lambda_u: u' \text{ unmarked}} N_{u'} = f_u/(4 \times 2^{{\mathsf{level}}(u)})$. Notice that the set of unmarked children of $u$ may change, and let $U'$ and $U''$ be the sets of unmarked children of $u$ at the moments after $e$ and before $e'$ respectively. Again if there is some $u' \in (U' \setminus U'') \cup (U'' \setminus U')$, we spend $\Omega(\frac{f_{u'}}{2^{{\mathsf{level}}(u')}}) \geq \Omega(\frac{f_u}{2^{{\mathsf{level}}(u)}})$ type-C tokens owned by $u'$. Otherwise, $U' = U''$ and $f_u/(4\times 2^{{\mathsf{level}}(u)})$ clients in $T_u$ must have departed between $e$ and $e'$, and we can then spend $\Omega(f_u/2^{{\mathsf{level}}(u)})$ type-D tokens for $e'$. The case when $e$ is a closing event and $e'$ is an opening event can be argued in the same way.
Thus, whenever an event happens at $u$, we can spend $\Omega(f_u/2^{{\mathsf{level}}(u)})$ tokens; moreover, if an opening/closing event at $u$ was induced by an unmarking/marking event at some child $u'$ of $u$, then we can spend $\Omega(f_{u'}/2^{{\mathsf{level}}(u')})$ tokens for the event at $u$. A facility $u$ changes its opening status when an event happens at $u$. Notice that we reconnect a client only if it was connected to a facility that is about to be closed, or if it needs to be connected to a newly opened facility. By Lemma~\ref{lemma:departure-ub-outside}, at any moment the number of clients connected to $u$ from outside $T_u$ is at most $O(\log D)\cdot \frac{f_u}{2^{{\mathsf{level}}(u)}}$. If $u$ changes its opening status because of a non-induced event, then before and after the event the number of clients connected to $u$ from $T_u$ is of order $O\left(\frac{f_u}{2^{{\mathsf{level}}(u)}}\right)$. If $u$ changes its opening status due to a marking/unmarking event that happened at some child $u'$ of $u$, then before and after the event the number of clients connected to $u$ from $T_u$ is of order $\Theta\left(\frac{f_{u'}}{2^{{\mathsf{level}}(u')}}\right)$. Thus, on average, for each token we spend, we reconnect at most $O(\log D)$ clients. Since each client arrival or departure distributes at most $O(\log D)$ tokens, we have that the amortized number of reconnections (per client arrival/departure) is at most $O(\log^2D)$.
\paragraph{Analyzing Update Time} Then with the bound on the number of reconnections (recourse), we can bound the update time easily. Indeed, we can maintain a $\psi_u$ for every $u \in V$, which indicates the nearest open facility to $u$ in $T_u \setminus \{u\}$ ($\psi_u$ could be undefined). We also maintain a value $N'_u$ for marked vertices $u$, where $N'_u = \sum_{v \in \Lambda_u, v\text{ unmarked}} N_v$. Whenever a client at $v$ arrives or departs, we need to change $\alpha_u, \beta_u, N_u, N'_u, \psi_u$, and the marking and opening status of $u$ only for ancestors $u$ of $v$. The update can be made in $O(\log D)$ time for every client arrival or departure using the information on the vertices. The bottleneck of the algorithm comes from reconnecting clients.
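
To make the bookkeeping part of the update concrete, here is a minimal Python sketch (ours; all names are hypothetical) of what happens along the leaf-to-root path when one client arrives or departs. It treats the $\pm 1$ change of the $N_u$ values discretely rather than continuously, and it omits the maintenance of $\psi_u$ and $N'_u$ as well as the client reconnections discussed next; it is only meant to illustrate that the $\alpha,\beta$ values and the marking/opening status of the $O(\log D)$ ancestors can be refreshed in one pass.
\begin{verbatim}
class Node:
    def __init__(self, f, level, parent=None):
        self.f = f                # facility cost f_u
        self.level = level        # level(u) in the HST
        self.parent = parent
        self.children = []        # empty for leaves
        self.N = 0                # number of clients in X_u
        self.alpha = 1            # in {1, 2}
        self.beta = 1             # in {1, 2}
        self.marked = False
        self.open = False

def unmarked_mass(u):
    # sum of N_w over unmarked children w (the quantity N'_u in the text);
    # a real implementation maintains this value incrementally
    return sum(w.N for w in u.children if not w.marked)

def handle_client(leaf, delta):
    """delta = +1 for an arrival at `leaf`, -1 for a departure."""
    u = leaf
    while u is not None:          # only ancestors of the leaf are touched
        u.N += delta
        scale = 2 ** u.level
        # marking rule: u is marked iff N_u * 2^level(u) > f_u / alpha_u
        if not u.marked and u.N * scale > u.f / u.alpha:
            u.marked, u.alpha, u.beta = True, 2, 1
        elif u.marked and u.N * scale <= u.f / u.alpha:
            u.marked, u.alpha, u.open = False, 1, False
        if u.marked:
            if not u.children:    # a marked leaf is always open
                u.open = True
            else:
                # opening rule, with the beta change that damps oscillations
                mass = unmarked_mass(u)
                if not u.open and mass * scale > u.f / (u.alpha * u.beta):
                    u.open, u.beta = True, 2
                elif u.open and mass * scale <= u.f / (u.alpha * u.beta):
                    u.open, u.beta = False, 1
        u = u.parent
\end{verbatim}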
We already argued that the amortized number of reconnections per client arrival/departure is $O(\log^2D)$, and thus it suffices to give an algorithm that can find the clients to be reconnected efficiently. For every vertex $u$, we maintain a doubly linked list of the unmarked children $u'$ of $u$ with $N_{u'} \geq 1$. With this structure it is easy to see that for every client that needs to be reconnected, we need $O(\log D)$ time to locate it. If $u$ becomes open, we need to consider each unmarked child $u'$ of $u$ and reconnect the clients in $T_{u'}$ to $u$. The time needed to locate these clients can be made $O(\log D)$ times the number of clients. For every strict ancestor $w$ of $u$ for which there are no open facilities in between, we can use the $\psi_w$ information to see if we need to reconnect clients in $T_w$. If yes, then for every unmarked child $w'$ of $w$ with $N_{w'} \geq 1$ that is not an ancestor of $u$, we need to connect the clients in $T_{w'}$ to $u$. Again, enumerating these clients takes time $O(\log D)$ times the number of clients. Similarly, if $u$ becomes closed, we then need to connect all clients connected to $u$ to the nearest open facility to $u$, which can be computed using the $\psi$ values of $u$ and its ancestors. Enumerating the clients takes time $O(\log D)$ times the number of clients. Overall, the amortized running time per client arrival/departure is $O(\log ^3D)$.
\section{$(1+\sqrt{2}+\epsilon)$-Approximate Dynamic Algorithm for Facility Location in the Incremental Setting} \label{sec:dfl}
In this section, we prove Theorem~\ref{UFL-dynamicIncremental} by combining the ideas from Sections \ref{sec:ofl} and \ref{sec:fast-UFL} to derive a dynamic algorithm for facility location in the incremental setting. As for the online algorithm in Section~\ref{sec:ofl}, we divide our algorithm into stages. Whenever a client comes, we use a simple rule to accommodate it. Now we cannot afford to consider all possible local operations as in Section~\ref{sec:ofl}. Instead we use the randomized local search idea from the algorithm in Section~\ref{sec:fast-UFL} by calling the procedure $\mathsf{FL\mhyphen iterate}$. We call the procedure only if the cost of our solution has increased by a factor of $1+\epsilon'$ (where $\epsilon' = \Theta(\epsilon)$ is small enough). In our analysis, we show a lemma similar to Lemma \ref{lmm:ofl-delta-cost-bound}: The total increase of costs due to arrivals of clients is small, compared to the optimum cost for these clients. Then, we can bound the number of times we call $\mathsf{FL\mhyphen iterate}$.
Recall that we are given an integer $\Gamma = \mathrm{poly}\big(n, \log D, \frac1\epsilon\big)$ that is big enough: We are aiming at a success probability of $1-1/\Gamma$ for each call of $\mathsf{FL\mhyphen iterate}$. Our final running time will depend on $\Gamma$ only through an $O(\log \Gamma)$ factor. The main algorithm will be the same as Algorithm~\ref{alg:ofl}, except that we use Algorithm~\ref{alg:fast-UFL-one-stage} as the algorithm for one stage. As before, we only need to design one stage of the algorithm. Recall that in a stage we are given an initial set $C$ of clients and an $O(1)$-approximate solution $(S, \sigma)$ for $C$. Clients come one by one and our goal is to maintain an $(\alpha_{\mathsf{FL}} + O(\epsilon'))$-approximate solution at any time. The stage terminates if no more clients come or our solution has cost more than $1/\epsilon'$ times the cost of the initial solution.
\begin{algorithm}
\caption{One Stage of Dynamic Algorithm for Facility Location} \label{alg:fast-UFL-one-stage}
\begin{algorithmic}[1]
\Require{ \begin{itemize} \item $C$: the initial set of clients \item $(S, \sigma)$: initial solution for $C$, which is $O(1)$-approximate \end{itemize} }
\State let $M = O\left(\frac{ |F|}{\epsilon'}\log\Gamma\right)$ be large enough \label{step:fast-UFL-M}
\State $(S, \sigma) \gets \mathsf{FL\mhyphen iterate}\left(M\right)$, $\textsf{init}\gets \mathsf{cost}(S, \sigma), {\mathsf{last}} \gets \textsf{init}$ \label{step:fast-UFL-init}
\For{$t \gets 1, 2, 3, \cdots$, terminating if no more clients arrive}
\For{$q = \ceil{\log\frac{{\mathsf{last}}}{|F|}}$ to $\ceil{\log\frac{\mathsf{last}}{\epsilon'}}$} \label{step:fast-UFL-enumerate-q}
\State \textbf{if} $i \gets \arg\min_{i \in F\setminus S, f_i \leq 2^q}d(j_t, i)$ exists, \textbf{then} call $\mathsf{try\mhyphen open'}(i)$ \label{step:fast-UFL-try-open} \Comment{$\mathsf{try\mhyphen open'}$ is the same as $\mathsf{try\mhyphen open}$ except we consider the cost instead of scaled cost.}
\EndFor
\State $C \gets C \cup \{j_t\}$ and call $\mathsf{try\mhyphen open'}\big(\arg\min_{i \in F \setminus S}(d(j_t, i) + f_i)\big)$ \label{step:fast-UFL-handle-j}
\If{$\mathsf{cost}(S, \sigma) > (1+\epsilon')\cdot {\mathsf{last}}$}
\State $(S, \sigma) \gets \mathsf{FL\mhyphen iterate}\left(M\right)$ \label{step:fast-UFL-call-iterate}
\If {$\mathsf{cost}(S, \sigma) > {\mathsf{last}}$} ${\mathsf{last}} \gets \mathsf{cost}(S, \sigma)$ \EndIf \label{step:fast-UFL-update-last}
\If{${\mathsf{last}} > \textsf{init}/\epsilon'$} terminate the stage \EndIf \label{step:fast-UFL-terminate}
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
Notice that in a stage, we are considering the original costs of solutions (instead of scaled costs as inside $\mathsf{FL\mhyphen iterate}$). During a stage we maintain a value ${\mathsf{last}}$ which gives an estimate of the cost of the current solution $(S, \sigma)$. Whenever a client $j_t$ comes, we apply some rules to open some facilities and connect $j_t$ (Steps~\ref{step:fast-UFL-enumerate-q} to \ref{step:fast-UFL-handle-j}). These operations are needed to make the cost increase due to the arrival of $j_t$ (defined as $\Delta_t$ later) small. In the algorithm, $\mathsf{try\mhyphen open'}$ is the same as $\mathsf{try\mhyphen open}$, except that we use the original cost instead of the scaled cost (this is not important, but only for the sake of convenience). If $\mathsf{cost}(S, \sigma)$ becomes too large, i.e., $\mathsf{cost}(S, \sigma) > (1+\epsilon'){\mathsf{last}}$, then we call $(S, \sigma) \gets \mathsf{FL\mhyphen iterate}(M)$ for the $M$ defined in Step~\ref{step:fast-UFL-M} (Step~\ref{step:fast-UFL-call-iterate}), and update ${\mathsf{last}}$ to $\mathsf{cost}(S, \sigma)$ if we have $\mathsf{cost}(S, \sigma) > {\mathsf{last}}$ (Step~\ref{step:fast-UFL-update-last}). We terminate the stage when ${\mathsf{last}} > \textsf{init}/\epsilon'$, where $\textsf{init}$ is $\mathsf{cost}(S, \sigma)$ at the beginning of the stage (Step~\ref{step:fast-UFL-terminate}).
We say an execution of $\mathsf{FL\mhyphen iterate}(M)$ is successful if the event in Lemma~\ref{lemma:ufl-iterate} happens. Then we have
\begin{lemma} \label{lemma:ufl-dynamic-ratio} If all executions of $\mathsf{FL\mhyphen iterate}$ are successful, the solution $(S, \sigma)$ at the end of each time step is $(1+\epsilon')(\alpha_{\mathsf{FL}}+\epsilon')$-approximate.
\end{lemma}
\begin{proof} This holds since we always have $\mathsf{cost}(S, \sigma) \leq (1+\epsilon'){\mathsf{last}}$ at the end of each time step, where ${\mathsf{last}}$ is the cost of some $(\alpha_{\mathsf{FL}} + \epsilon')$-approximate solution at some moment before. As we only add clients to $C$, the cost of the optimum solution can only increase and thus the claim holds. \end{proof}
Now we argue that each execution of $\mathsf{FL\mhyphen iterate}(M)$ is successful with probability at least $1-1/\Gamma$. By Lemma~\ref{lemma:ufl-iterate}, this will happen if the solution $(S, \sigma)$ before the execution is $O(1)$-approximate. This is easy to see: Before Step~\ref{step:fast-UFL-handle-j} in time $t$, we have $\mathsf{cost}(S, \sigma) \leq O(1)\mathsf{opt}$; the increase of $\mathsf{cost}(S, \sigma)$ in the step is at most the value of $\mathsf{opt}$ after the step (i.e., we consider the client $j_t$ when defining $\mathsf{opt}$). Thus, we have $\mathsf{cost}(S, \sigma) \leq O(1)\mathsf{opt}$ after the step.
\subsection{Bounding the Number of Times We Call $\mathsf{FL\mhyphen iterate}$}
It remains to bound the number of times we call $\mathsf{FL\mhyphen iterate}$. Again, we use $T$ to denote the last time step of Algorithm~\ref{alg:fast-UFL-one-stage} (i.e., one stage of the dynamic algorithm) and $\Delta_t$ to denote the cost increase due to the arrival of $j_t$: it is the value of $\mathsf{cost}(S, \sigma)$ after Step~\ref{step:fast-UFL-handle-j} minus that before Step~\ref{step:fast-UFL-handle-j} in time $t$. For every time $t \in [T]$, let $C_t$ be the set $C$ at the end of time $t$, and let $\mathsf{opt}_t$ be the cost of the optimum solution for $C_t$. Let ${\mathsf{last}}_t$ be the value of ${\mathsf{last}}$ at the \emph{beginning} of time $t$. Due to Step~\ref{step:fast-UFL-handle-j}, we have the following observation:
\begin{obs} \label{obs:dfl-delta-t} For every $t \in [T]$, we have $\Delta_t \leq \min_{i \in F}(f_i + d(i, j_t))$. \end{obs}
\begin{proof} Let $i = \arg\min_{i \in F}(f_i + d(i, j_t))$ and consider Step~\ref{step:fast-UFL-handle-j} at time $t$. If $d(j_t, S) \leq f_i + d(i, j_t)$ before the step, then we have $\Delta_t \leq d(j_t, S) \leq f_i + d(i, j_t)$. Otherwise, $i \notin S$ and $d(j_t, S) > f_i + d(i, j_t)$. Then $\mathsf{try\mhyphen open'}(i)$ in the step will open $i$ and we have $\Delta_t \leq f_i + d(i, j_t)$. \end{proof}
We can also prove the following lemma that bounds $\Delta_t$:
\begin{lemma} \label{lemma:dfl-delta-t} Let $t \in [T]$ and $i^* \in F$ be such that $f_{i^*} \leq {\mathsf{last}}_t/\epsilon'$, and let $C' \subseteq C_{t-1}$ be any non-empty subset. Then we have \begin{align*} \Delta_t \leq \frac{2}{|C'|}\left(\max\set{f_{i^*}, {\mathsf{last}}_t/|F|} + \sum_{j \in C'}d(i^*, j)\right) + 5d(i^*, j_t). \end{align*} \end{lemma}
\begin{proof} In this proof, we focus on the time $t$ of the algorithm. If $i^* \in S$ before Step~\ref{step:fast-UFL-handle-j}, then we have $\Delta_t \leq d(i^*, j_t)$ and thus we can assume $i^* \notin S$ before Step~\ref{step:fast-UFL-handle-j}. Since Loop~\ref{step:fast-UFL-enumerate-q} only adds facilities to $S$, we have that $i^* \notin S$ at any moment in Loop~\ref{step:fast-UFL-enumerate-q}. Let $q = \ceil{\log \max\set{f_{i^*}, {\mathsf{last}}_t/|F|}}$; notice this $q$ is considered in Loop~\ref{step:fast-UFL-enumerate-q}.
Let $i \in F\setminus S$ be the facility with $f_i \leq 2^q$ nearest to $j_t$ at the beginning of the iteration for $q$; this is the facility we try to open in Step~\ref{step:fast-UFL-try-open} in the iteration for $q$. Notice that $d(j_t, i) \leq d(j_t, i^*)$ since $i^*$ is a candidate facility. Since we called $\mathsf{try\mhyphen open'}(i)$ in Step~\ref{step:fast-UFL-try-open}, there is no $0$-efficient opening operation that opens $i$ after the step. Then, we can apply Lemma~\ref{lemma:helper-star} on this facility $i$, the set $C'$ and $\phi = 0$. So, after Step~\ref{step:fast-UFL-try-open} of the iteration for $q$, we have \begin{align*} d(j_t, S) \leq \frac{1}{|C'|}\left(f_i + 2\sum_{j \in C'}d(i, j)\right) + d(i, j_t). \end{align*}
Notice that $d(i, i^*) \leq d(i, j_t) + d(j_t, i^*) \leq 2d(j_t, i^*)$, $f_i \leq 2\max\set{f_{i^*}, {\mathsf{last}}_t/|F|} $ and $S$ can only grow before the end of Step~\ref{step:fast-UFL-handle-j}. We have \begin{align*} \Delta_t &\leq \frac{1}{|C'|}\left(2\max\set{f_{i^*},{\mathsf{last}}_t/|F|} + 2\sum_{j \in C'}(d(i^*, j) + d(i^*, i))\right) + d(i^*, j_t) \\ &\leq \frac{2}{|C'|}\left(\max\set{f_{i^*},{\mathsf{last}}_t/|F|} + \sum_{j \in C'}d(i^*, j)\right) + 5d(i^*, j_t). \qedhere \end{align*} \end{proof}
With the lemma, we can then prove the following:
\begin{lemma} \label{lemma:dfl-Delta} For every $T' \in [T-1]$, we have \begin{align*} \sum_{t = 1}^{T'} \Delta_t \leq O(\log T') \cdot \mathsf{opt}_{T'}. \end{align*} \end{lemma}
\begin{proof} The proof is similar to that of Lemma~\ref{lmm:ofl-delta-cost-bound}. Let $(S^*, \sigma^*)$ be the optimum solution for clients $C_{T'}$. Focus on some $i^* \in S^*$ and assume $(C_{T'} \setminus C_0) \cap \sigma^{*-1}(i^*) = \{j_{t_1}, j_{t_2}, \cdots, j_{t_s}\}$ with $1 \leq t_1 < t_2 < \cdots < t_s \leq T'$. We have $\Delta_{t_1} \leq f_{i^*} + d(i^*, j_{t_1})$ by Observation~\ref{obs:dfl-delta-t}. Then focus on any $k \in [2, s]$. If $f_{i^*} > {\mathsf{last}}_{t_k}/\epsilon'$, then we must have $\mathsf{opt}_{t_k} \geq {\mathsf{last}}_{t_k}/\epsilon'$ and the stage will terminate at time ${t_k}$. Thus ${t_k} = T$, contradicting the assumption that ${t_k} \leq T' \leq T-1$. So we assume $f_{i^*} \leq {\mathsf{last}}_{t_k}/\epsilon'$. We can apply Lemma~\ref{lemma:dfl-delta-t} with $i^*$ and $C' = \{j_{t_1}, j_{t_2}, \cdots, j_{t_{k-1}} \}$ to obtain that $\Delta_{t_k} \leq \frac{2}{k-1}\left(\max\set{f_{i^*},{\mathsf{last}}_{t_k}/|F|} + \sum_{k'=1}^{k-1}d(i^*, j_{t_{k'}})\right) + 5d(i^*, j_{t_k})$. We can replace ${\mathsf{last}}_{t_k}$ with ${\mathsf{last}}_{T'}$ since ${\mathsf{last}}_{t_k} \leq {\mathsf{last}}_{T'}$.
The sum of the upper bounds over all $k \in [s]$ is a linear combination of $\max\set{f_{i^*},{\mathsf{last}}_{T'}/|F|}$ and the $d(i^*, j_{t_{k'}})$'s. In the linear combination, the coefficient for $\max\set{f_{i^*},{\mathsf{last}}_{T'}/|F|}$ is at most $1 + \frac21 + \frac22 + \frac23 + \cdots + \frac2{s-1} = O(\log s) = O(\log T')$. The coefficient for $d(i^*, j_{t_{k'}})$ is at most $5 + \frac2{k'} + \frac2{k'+1} + \cdots + \frac2{s-1} = O(\log s) = O(\log T')$. Thus, overall, we have $\sum_{k = 1}^{s}\Delta_{t_k} \leq O(\log T') \big(\max\set{f_{i^*},{\mathsf{last}}_{T'}/|F|} + \sum_{k'=1}^s d(i^*, j_{t_{k'}})\big)$. Therefore $\sum_{t = 1}^{T'} \Delta_t \leq O(\log T') \left( \mathsf{cost}(S^*, \sigma^*) + |S^*|{\mathsf{last}}_{T'}/|F|\right)$, by taking the sum of the above inequality over all $i^* \in S^*$.
The bound is at most $O(\log T')(\mathsf{opt}_{T'} + {\mathsf{last}}_{T'}) = O(\log T') \cdot \mathsf{opt}_{T'}$, since $|S^*| \leq |F|$ and ${\mathsf{last}}_{T'} \leq O(1)\mathsf{opt}_{T'-1} \leq O(1) \mathsf{opt}_{T'}$. \end{proof}
Between two consecutive calls of $\mathsf{FL\mhyphen iterate}$ in Step~\ref{step:fast-UFL-call-iterate} at times $t_1$ and $t_2 > t_1$, $\mathsf{cost}(S, \sigma)$ should have increased by at least $\epsilon'{\mathsf{last}}_{t_2}$: At the end of time $t_1$, we have $\mathsf{cost}(S, \sigma) \leq {\mathsf{last}}_{t_1+1} = {\mathsf{last}}_{t_2}$ since otherwise ${\mathsf{last}}$ should have been updated in time $t_1$. We need to have $\mathsf{cost}(S, \sigma) > (1+\epsilon'){\mathsf{last}}_{t_2}$ after Step~\ref{step:fast-UFL-handle-j} at time $t_2$ in order to call $\mathsf{FL\mhyphen iterate}$. Thus, the increase of the cost during this period is at least $\epsilon' {\mathsf{last}}_{t_2}$. Thus, we have $\sum_{t=t_1+1}^{t_2}\frac{\Delta_t}{\epsilon'\cdot{\mathsf{last}}_t} \geq 1$ since ${\mathsf{last}}_t = {\mathsf{last}}_{t_2}$ for every $t \in (t_1, t_2]$. The argument also holds when $t_1 = 0$ and $t_2 > t_1$ is the first time at which we call $\mathsf{FL\mhyphen iterate}$. Counting the call of $\mathsf{FL\mhyphen iterate}$ in Step~\ref{step:fast-UFL-init}, we can bound the total number of times we call the procedure by $1 + \frac{1}{\epsilon'}\sum_{t=1}^T\frac{\Delta_t}{{\mathsf{last}}_t}$.
Again let $\Phi_{T'}= \sum_{t = 1}^{T'} \Delta_t$ for every $T' \in [0, T]$. Lemma~\ref{lemma:dfl-Delta} says $\Phi_{t} \leq O(\log t) \mathsf{opt}_{t}$ for every $t \in [0, T-1]$. For every $t \in [T]$, since $\Delta_t \leq \mathsf{opt}_t$, we have $\Phi_t = \Phi_{t-1} + \Delta_t \leq O(\log t) \mathsf{opt}_{t-1} \leq O(\log T) {\mathsf{last}}_t$ since ${\mathsf{last}}_t$ will be at least the cost of some solution for $C_{t-1}$. Applying Lemma~\ref{lemma:helper-sum-b/a} with $a_t = {\mathsf{last}}_t, b_t = \Delta_t$ and $B_t = \Phi_t$ for every $t$, the number of times we call $\mathsf{FL\mhyphen iterate}$ can be bounded by \begin{align*} 1+\frac{1}{\epsilon'}\sum_{t=1}^T\frac{\Delta_t}{{\mathsf{last}}_t} \leq \frac{1}{\epsilon'} O(\log T) \left(\ln\frac{{\mathsf{last}}_T}{{\mathsf{last}}_1} + 1\right) = O\left(\frac{\log T}{\epsilon}\log\frac{1}{\epsilon}\right). \end{align*}
We can then analyze the running time and the success probability of our algorithm. Focus on each stage of the algorithm. By Observation~\ref{obs:time-iterate}, each call to $\mathsf{FL\mhyphen iterate}(M)$ takes time $O(M|C|\log |F|) = O\left(\frac{ |F|}{\epsilon'}(\log \Gamma) |C|\log n \right) = O\left(\frac{ n\cdot|C_T|}{\epsilon}\log^2 n\right)$, where $C$ is the set of clients in the algorithm at the time we call the procedure, $C_T \supseteq C$ is the set of clients at the end of time $T$, and $M = O\left(\frac{|F|}{\epsilon'}\log \Gamma\right)$ is as defined in Step~\ref{step:fast-UFL-M}. The total number of times we call the procedure is at most $O\left(\frac{\log T}{\epsilon}\log\frac1\epsilon\right) \leq O\left(\frac{\log n}{\epsilon}\log\frac1\epsilon\right)$. Thus, the running time we spend on $\mathsf{FL\mhyphen iterate}$ is $O\left(\frac{ n\cdot|C_T|}{\epsilon^2}\log^3 n\log\frac{1}{\epsilon}\right)$. The running time for Steps~\ref{step:fast-UFL-enumerate-q} to \ref{step:fast-UFL-handle-j} is at most $T \cdot O\big(\log \frac{|F|}{\epsilon'}\big) \cdot O\big(|C_T|\log |F|\big) = O(|C_T|T\log^2 \frac{|F|}{\epsilon}) \leq O(n|C_T|\log^2\frac{n}{\epsilon})$.
Thus, the total running time of a stage is at most $O\left(\frac{ n\cdot|C_T|}{\epsilon^2}\log^3 n\log\frac{1}{\epsilon}\right)$. Now consider all the stages together. The sum of the $|C_T|$ values over all stages is at most $2n$ since every client appears in at most 2 stages. So, the total running time of our algorithm is $O\left(\frac{n^2}{\epsilon^2}\log^3 n\log\frac1\epsilon\right)$. For the success probability, the total number of times we call $\mathsf{FL\mhyphen iterate}(M)$ is at most $O\left(\log_{1/\epsilon} (nD)\frac{\log n}{\epsilon}\log \frac1\epsilon\right) = \mathrm{poly}(\log n, \log D, \frac1\epsilon)$. If $\Gamma$ is at least $n^2$ times this number, which is still $\mathrm{poly}(n, \log D, \frac{1}{\epsilon})$, then the success probability of our algorithm is at least $1-1/n^2$. Finally, we remark that the success of the algorithm only depends on the success of all executions of $\mathsf{FL\mhyphen iterate}$. Each execution has success probability $1-1/\Gamma$ even if the adversary is adaptive. This finishes the proof of Theorem~\ref{UFL-dynamicIncremental}.
\paragraph{Remark} We can indeed obtain an algorithm that has both $O(\log T)$ amortized client recourse and $\tilde O(n^2)$ total running time, by defining $\phi = \frac{\mathsf{cost}(S, \sigma)}{\alpha_{\mathsf{FL}}\epsilon'}$ and only performing $\phi$-efficient local operations. However, this will require us to put $\phi$ everywhere in our analysis and deteriorate the cleanness of the analysis. Thus, we choose to separate the two features in two algorithms: small recourse and $\tilde O(n^2)$ total running time. We also remark that the total running time for all calls of $\mathsf{FL\mhyphen iterate}$ is only $\tilde O(n|F|)$, and the $\tilde O(n^2)$ time comes from Steps~\ref{step:fast-UFL-enumerate-q} to \ref{step:fast-UFL-handle-j}. By losing a multiplicative factor of $2$ and an additive factor of $1$ in the approximation ratio, we can assume every client is collocated with its nearest facility (see Appendix~\ref{appendix:moving-clients}). Then at any time we only have $O(|F|)$ different positions for clients, and the running time of the algorithm can be improved to $O(\frac{n|F|}{\epsilon^2}\log^3n\log\frac1{\epsilon})$.
\section{Fast Local Search via Randomized Sampling} \label{sec:fast-UFL}
From now on, we will be concerned with dynamic algorithms. Towards proving Theorem \ref{UFL-dynamicIncremental} for the incremental setting, we first develop a randomized procedure that allows us to perform local search operations fast. In the next section, we use this procedure and ideas from the previous section to develop the dynamic algorithm with fast update time. The high level idea is as follows: We partition the set of local operations into many ``categories'' depending on which facility the operation tries to open or swap in. In each iteration of the procedure, we sample a category according to some distribution and find the best local operation in this category. By only focusing on one category, one iteration of the procedure can run in time $O(|C|\log |F|)$. On the other hand, the categories and the distribution over them are designed in such a way that in each iteration, the cost of our solution will be decreased by a multiplicative factor of $1 - \Omega\big(\frac1{ |F|}\big)$.
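
As a rough sanity check for the parameters used later (this is only the standard geometric-decay calculation, not a proof): if each iteration shrinks the expected excess of the scaled cost over a fixed target by a factor of $1 - \frac{1}{3|F|}$, then after $M$ iterations the excess has shrunk by a factor of
\begin{align*}
\left(1 - \frac{1}{3|F|}\right)^{M} \leq e^{-M/(3|F|)} \leq \Gamma^{-c} \qquad \text{whenever } M \geq 3c\,|F|\ln\Gamma,
\end{align*}
which is the reason a budget of $M = O\big(\frac{|F|}{\epsilon'}\log\Gamma\big)$ iterations will suffice for $\mathsf{FL\mhyphen iterate}$; Lemma~\ref{lemma:ufl-iterate} below makes this precise, including the conversion from an expectation bound to a high-probability guarantee.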
This idea has been used in \cite{CharikarGhua2005} to obtain their $\tilde O(n^2)$-time algorithm for approximating facility location. However, their algorithm was based on a different local search algorithm and analysis; for consistency and convenience of description, we stick to the original local search algorithm of \cite{AryaGKMP01} that leads to a $(1+\sqrt{2})$-approximation for the problem. Our algorithm needs to use the heap data structure.
\subsection{Maintaining Heaps for Clients}
Unlike the online algorithm for facility location in Section~\ref{sec:ofl}, in the dynamic algorithm, we guarantee that the clients are connected to their nearest open facilities. That is, we always have $\sigma_j = \arg\min_{i\in S} d(j, i)$; we still keep $\sigma$ for convenience of description. We maintain $|C|$ min-heaps, one for each client $j \in C$: The min-heap for $j$ will contain the facilities in $S \setminus \{\sigma_j\}$, with the priority value of $i$ being $d(j, i)$. This allows us to efficiently retrieve the second nearest open facility to each $j$: This is the facility at the top of the heap for $j$ and we use the procedure $\mathsf{heap\mhyphen top}(j)$ to return it.
\begin{figure*}
\begin{algorithm}[H]
\caption{$\mathsf{\Delta\mhyphen open}(i)$: \Return $\lambda f_i - \sum_{j \in C} \max\{0, d(j, \sigma_{j}) - d(j, i)\}$} \label{alg:Delta-open}
\end{algorithm}\vspace*{-25pt}
\begin{algorithm}[H]
\caption{$\mathsf{try\mhyphen open}(i)$} \label{alg:try-open}
\begin{algorithmic}[1]
\State \textbf{if} $\mathsf{\Delta\mhyphen open}(i) < 0$ \textbf{then} open $i$ by updating $S, \sigma$ and heaps accordingly
\end{algorithmic}
\end{algorithm}\vspace*{-25pt}
\begin{algorithm}[H]
\caption{$\mathsf{\Delta\mhyphen swap\mhyphen in}(i)$} \label{alg:Delta-swap-in}
\begin{algorithmic}[1]
\State $C' \gets \{j \in C: d(j, i) < d(j, \sigma_j)\}$ and $\Psi \gets \lambda f_i - \sum_{j \in C'} \big(d(j, \sigma_j) - d(j, i)\big)$ \label{step:Delta-swap-in-C'-Psi}
\State $\Delta \gets \min_{i' \in S}\left\{\sum_{j \in \sigma^{-1}(i') \setminus C'}\big[\min\{d(j, i), d(j, \mathsf{heap\mhyphen top}(j))\} - d(j, i')\big] - \lambda f_{i'}\right\} + \Psi$ \label{step:Delta-swap-in-Delta}
\State \Return $(\Delta, \text{the $i'$ above achieving the value of $\Delta$})$ \label{step:Delta-swap-in-i'}
\end{algorithmic}
\end{algorithm}\vspace*{-25pt}
\begin{algorithm}[H]
\caption{$\mathsf{\Delta\mhyphen close}$} \label{alg:Delta-close}
\begin{algorithmic}[1]
\State $\Delta \gets \min_{i' \in S}\left\{\sum_{j \in \sigma^{-1}(i')}\big[d(j, \mathsf{heap\mhyphen top}(j)) - d(j, i')\big] - \lambda f_{i'}\right\}$
\State \Return $(\Delta, \text{the $i'$ above achieving the value of $\Delta$})$
\end{algorithmic}
\end{algorithm}
\end{figure*}
We define four simple procedures $\mathsf{\Delta\mhyphen open}, \mathsf{try\mhyphen open}, \mathsf{\Delta\mhyphen swap\mhyphen in}$ and $\mathsf{\Delta\mhyphen close}$ that are described in Algorithms \ref{alg:Delta-open}, \ref{alg:try-open}, \ref{alg:Delta-swap-in} and \ref{alg:Delta-close} respectively. Recall that we use the \emph{scaled cost} for the local search algorithm; so we work with the scaled cost function in all these procedures. $\mathsf{\Delta\mhyphen open}(i)$ for any $i \notin S$ returns $\Delta$, the increment of the scaled cost that will be incurred by opening $i$. (For it to be useful, $\Delta$ should be negative, in which case $|\Delta|$ indicates the cost decrement of opening $i$.)
This is just a one-line procedure, given in Algorithm~\ref{alg:Delta-open}. $\mathsf{try\mhyphen open}$ will open $i$ if it can reduce the scaled cost. $\mathsf{\Delta\mhyphen swap\mhyphen in}(i)$ for some $i \notin S$ returns a pair $(\Delta, i')$, where $\Delta$ is the smallest scaled cost increment we can achieve by opening $i$ and closing some facility $i' \in S$, and $i'$ gives the facility achieving the smallest value. (Again, for $\Delta$ to be useful, it should be negative, in which case $i'$ is the facility that gives the maximum scaled cost decrement $|\Delta|$.) Similarly, $\mathsf{\Delta\mhyphen close}$ returns a pair $(\Delta, i')$, which tells us the maximum scaled cost decrement we can achieve by closing one facility and which facility can achieve the decrement. Notice that in all the procedures, the facility we shall open or swap in is given as a parameter, while the facility we shall close is chosen and returned by the procedures.
With the heaps, the procedures $\mathsf{\Delta\mhyphen open}, \mathsf{\Delta\mhyphen swap\mhyphen in}$ and $\mathsf{\Delta\mhyphen close}$ can run in $O(|C|)$ time. We only analyze $\mathsf{\Delta\mhyphen swap\mhyphen in}(i)$ as the other two are easier. First, we define $C'$ to be the set of clients $j$ with $d(j, i) < d(j, \sigma_j)$; these are the clients that will surely be reconnected to $i$ once $i$ is swapped in. Let $\Psi = \lambda f_i - \sum_{j \in C'} (d(j, \sigma_j) - d(j, i))$ be the net scaled cost increase by opening $i$ and connecting $C'$ to $i$. The computation of $C'$ and $\Psi$ in Step~\ref{step:Delta-swap-in-C'-Psi} takes $O(|C|)$ time. If additionally we close some $i' \in S$, we need to reconnect each client $j \in \sigma^{-1}(i') \setminus C'$ to either $i$ or the top element in the heap for $j$, whichever is closer to $j$. Steps \ref{step:Delta-swap-in-Delta} and \ref{step:Delta-swap-in-i'} compute and return the best scaled cost increment and the best $i'$. Since $\sum_{i' \in S}|\sigma^{-1}(i')| = |C|$, the running time of these steps can be bounded by $O(|C|)$.
The running time for $\mathsf{try\mhyphen open}$, swapping two facilities and closing a facility (which are not defined explicitly as procedures, but used in Algorithm~\ref{alg:sample}) can be bounded by $O(|C|\log |F|)$. The running times come from updating the heap structures: For each of the $|C|$ heaps, we need to delete and/or add at most $2$ elements; each operation takes time $O(\log |F|)$.
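
The following Python sketch (ours, not the paper's implementation) shows one possible realization of the per-client heaps and of $\mathsf{\Delta\mhyphen open}$ and $\mathsf{\Delta\mhyphen swap\mhyphen in}$. It uses lazy invalidation instead of true heap deletions, so the constants differ from the bounds above, but the returned values match the definitions in Algorithms~\ref{alg:Delta-open} and \ref{alg:Delta-swap-in}; updating the heaps when an operation is actually applied is omitted, and all identifiers are ours.
\begin{verbatim}
import heapq

class FLState:
    def __init__(self, d, f, S, lam=1.0):
        self.d, self.f, self.lam = d, f, lam   # d[j][i], facility costs f[i]
        self.S = set(S)                        # currently open facilities
        self.sigma = {j: min(self.S, key=lambda i: d[j][i]) for j in d}
        # the heap of client j holds candidates for its second nearest open facility
        self.heap = {j: [(d[j][i], i) for i in self.S if i != self.sigma[j]]
                     for j in d}
        for h in self.heap.values():
            heapq.heapify(h)

    def heap_top(self, j):
        h = self.heap[j]
        # lazily discard stale entries (closed facilities, or sigma_j itself)
        while h and (h[0][1] not in self.S or h[0][1] == self.sigma[j]):
            heapq.heappop(h)
        return h[0][1] if h else None

    def delta_open(self, i):
        # scaled-cost change of opening i
        return self.lam * self.f[i] - sum(
            max(0.0, self.d[j][self.sigma[j]] - self.d[j][i]) for j in self.d)

    def delta_swap_in(self, i):
        # best scaled-cost change of opening i and closing one facility i' in S
        d, lam = self.d, self.lam
        Cp = {j for j in d if d[j][i] < d[j][self.sigma[j]]}
        psi = lam * self.f[i] - sum(d[j][self.sigma[j]] - d[j][i] for j in Cp)
        gain = {ip: -lam * self.f[ip] for ip in self.S}  # one bucket per candidate i'
        for j in d:
            if j in Cp:
                continue
            ip = self.sigma[j]
            t = self.heap_top(j)
            alt = d[j][i] if t is None else min(d[j][i], d[j][t])
            gain[ip] += alt - d[j][ip]
        best = min(gain, key=gain.get)
        return gain[best] + psi, best
\end{verbatim}
Note that \texttt{delta\_swap\_in} makes a single pass over the clients, grouping them by their current facility, which is how the $O(|C|)$ running time claimed above is obtained.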
\subsection{Random Sampling of Local Operations} \begin{figure*} \begin{algorithm}[H] \caption{$\mathsf{sampled\mhyphen local\mhyphen search}$} \label{alg:sample} \begin{algorithmic}[1] \If{$\mathsf{rand}(0, 1) < 1/3$} \Comment{$\mathsf{rand}(0, 1)$ returns a uniformly random number in $[0, 1]$} \State$(\Delta, i') \gets \mathsf{\Delta\mhyphen close}$ \State\textbf{if} $\Delta < 0$ \textbf{then} close $i'$ by updating $S, \sigma$ and heaps accordingly \Else \State $i \gets $ random facility in $F \setminus S$ \State $\Delta \gets \mathsf{\Delta\mhyphen open}(i), (\Delta', i') \gets \mathsf{\Delta\mhyphen swap\mhyphen in}(i)$ \State \textbf{if} $\Delta \leq \Delta'$ and $\Delta < 0$ \textbf{then} open $i$ by updating $S, \sigma$ and heaps accordingly \State \textbf{else if} $\Delta' < 0$ \textbf{then} open $i$ and close $i'$ by updating $S, \sigma$ and heaps accordingly \EndIf \end{algorithmic} \end{algorithm} \vspace*{-15pt} \begin{algorithm}[H] \caption{$\mathsf{FL\mhyphen iterate}(M)$} \label{alg:FL-iterate} \begin{algorithmic}[1] \State $(S^{\mathrm{best}}, \sigma^{\mathrm{best}}) \gets (S, \sigma)$ \For{$\ell \gets 1$ to $M$} \State call $\mathsf{sampled\mhyphen local\mhyphen search}$ \If{$\mathsf{cost}(S, \sigma) < \mathsf{cost}(S^{\mathrm{best}}, \sigma^{\mathrm{best}})$} $(S^{\mathrm{best}}, \sigma^{\mathrm{best}}) \gets (S, \sigma)$ \EndIf \EndFor \State \Return $(S^{\mathrm{best}}, \sigma^{\mathrm{best}})$ \end{algorithmic} \end{algorithm} \end{figure*} With the support of the heaps, we can design a fast algorithm that implements a randomized local search. $\mathsf{sampled\mhyphen local\mhyphen search}$ in Algorithm~\ref{alg:sample} gives one iteration of the local search. We first randomly decide which operation we shall perform. With probability $1/3$, we perform the $\mathsf{close}$ operation that reduces the scaled cost the most (if it exists). With the remaining probability $2/3$, we perform either an $\mathsf{open}$ or a $\mathsf{swap}$ operation. To reduce the running time, we randomly choose a facility $i \in F \setminus S$, find the best operation that opens or swaps in $i$, and perform that operation if it reduces the cost. One iteration of $\mathsf{sampled\mhyphen local\mhyphen search}$ calls the procedures in Algorithms~\ref{alg:Delta-open} to \ref{alg:Delta-close} at most once and performs at most one operation, and thus has running time $O(|C|\log |F|)$. In the procedure $\mathsf{FL\mhyphen iterate}(M)$ described in Algorithm~\ref{alg:FL-iterate}, we run $\mathsf{sampled\mhyphen local\mhyphen search}$ $M$ times. It returns the best solution obtained in these iterations, according to the \emph{original (non-scaled) cost}, which is not necessarily the solution given in the last iteration. So we have \begin{obs} \label{obs:time-iterate} The running time of $\mathsf{FL\mhyphen iterate}(M)$ is $O(M|C|\log |F|)$, where $C$ is the set of clients when we run the procedure. \end{obs} Throughout this section, we fix a facility location instance. Let $(S^*, \sigma^*)$ be the optimum solution (w.r.t.\ the original cost) and $\mathsf{opt} = \mathsf{cost}(S^*, \sigma^*)$ be the optimum cost. Fixing one execution of $\mathsf{sampled\mhyphen local\mhyphen search}$, we use $(S^0, \sigma^0)$ and $(S^1, \sigma^1)$ to denote the solutions before and after the execution respectively. Then, we have \begin{restatable}{lemma}{samplelocalsearch} \label{lemma:sample-local-search} Consider an execution of $\mathsf{sampled\mhyphen local\mhyphen search}$ and fix $(S^0, \sigma^0)$.
We have \begin{align*} \mathsf{cost}_\lambda(S^0, \sigma^0) - \E[\mathsf{cost}_\lambda(S^1, \sigma^1)] \geq \frac1{3 |F|}\max\left\{ \begin{array}{c} \mathsf{cc}(\sigma^0) - (\lambda f(S^*) + \mathsf{cc}(\sigma^*))\\ \lambda f(S^0) - (\lambda f(S^*) + 2\mathsf{cc}(\sigma^*))\\ \mathsf{cost}_\lambda(S^0, \sigma^0) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*)) \end{array} \right\}. \end{align*} \end{restatable} \begin{restatable}{lemma}{fliterate} \label{lemma:ufl-iterate} Let $(S^\circ, \sigma^\circ)$ be the $(S, \sigma)$ at the beginning of an execution of $\mathsf{FL\mhyphen iterate}(M)$, and assume it is an $O(1)$-approximation to the instance. Let $\Gamma \geq 2$ and let $M = O\left(\frac{ |F|}{\epsilon'}\log\Gamma\right)$ be big enough. Then with probability at least $1-\frac1\Gamma$, the solution returned by the procedure is $(\alpha_{\mathsf{FL}} + \epsilon')$-approximate. \end{restatable} \section{$(1+\sqrt{2}+\epsilon)$-Competitive Online Algorithm with Recourse} \label{sec:ofl} In this section, we prove Theorem~\ref{UFL-recourse} by giving the algorithm for online facility location with recourse. \subsection{The Algorithm} For any $\epsilon >0$, let $\epsilon' = \Theta(\epsilon)$ be a parameter that is sufficiently small so that the approximation ratio $\alpha_{\mathsf{FL}} + O(\epsilon')= 1+\sqrt{2} + O(\epsilon')$ achieved by our algorithm is at most $\alpha_{\mathsf{FL}} + \epsilon$. Our algorithm for online facility location is easy to describe. Whenever a client $j_t$ arrives at time $t$, we use a simple rule to connect $j_t$, as defined in the procedure $\mathsf{initial\mhyphen connect}$ in Algorithm~\ref{alg:initial-connect}: we either connect $j_t$ to the nearest facility in $S$, or open and connect $j_t$ to its nearest facility in $F \setminus S$, whichever incurs the smaller cost. Then we repeatedly perform $\phi$-efficient operations (Definition \ref{def:phieff}), until no such operation can be found, for $\phi=\frac{\epsilon'\cdot \mathsf{cost}(S, \sigma)}{\alpha_{\mathsf{FL}}|C|}$.\footnote{There is an exponential number of possible operations, but we can check efficiently whether a $\phi$-efficient one exists. $\mathsf{close}$ operations can be handled easily. To check if we can open a facility $i$, it suffices to check if $\sum_{j \in C: d(j, i) + \phi < d(j,\sigma_j)} (d(j, \sigma_j) - d(j, i)- \phi ) > \lambda f_i \cdot 1_{i \notin S}$. $\mathsf{swap}$ operations are more complicated but can be handled similarly.} \begin{algorithm}[htb] \caption{$\mathsf{initial\mhyphen connect}(j)$} \label{alg:initial-connect} \begin{algorithmic}[1] \If{$\min_{i \in F\setminus S}(f_i + d(i, j)) < d(j, S)$} \State let $i^* = \arg\min_{i \in F\setminus S}(f_i + d(i, j))$, $S \gets S \cup \{i^*\}, \sigma_j \gets i^*$ \Else \ $\sigma_j \gets \arg\min_{i \in S} d(j, i)$ \EndIf \end{algorithmic} \end{algorithm} We can show that the algorithm gives an $(\alpha_{\mathsf{FL}} + \epsilon)$-approximation with amortized recourse $O(\log D\log n)$; recall that $D$ is the aspect ratio of the metric. To remove the dependence on $D$, we divide the algorithm into stages, and \emph{freeze} the connections of clients that arrived in early stages. The final algorithm is described in Algorithm~\ref{alg:ofl}, and Algorithm~\ref{alg:ofl-one-stage} gives one stage of the algorithm.
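For concreteness, the test described in the footnote above (deciding whether some facility admits a $\phi$-efficient $\mathsf{open}$ operation) can be written in a few lines of Python. This is only an illustrative sketch, not part of the algorithm as stated: the distance oracle \texttt{d}, the facility costs \texttt{f}, the scaling factor \texttt{lam} and the current solution \texttt{S}, \texttt{sigma} are assumed to be supplied by the caller, and the function name is ours.
\begin{verbatim}
def phi_efficient_open(facilities, clients, S, sigma, d, f, lam, phi):
    """Return a facility whose opening is phi-efficient, or None if none exists."""
    for i in facilities:
        opening_cost = lam * f[i] if i not in S else 0.0   # lambda * f_i * 1_{i not in S}
        gain = sum(d(j, sigma[j]) - d(j, i) - phi
                   for j in clients if d(j, i) + phi < d(j, sigma[j]))
        if gain > opening_cost:
            return i
    return None
\end{verbatim}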
\begin{algorithm} \caption{One Stage of Online Algorithm for Facility Location} \label{alg:ofl-one-stage} \begin{algorithmic}[1] \Require{ \begin{itemize} \item $C$: initial set of clients \item $(S, \sigma)$: a solution for $C$ which is $O(1)$-approximate \item Clients $j_1, j_2, \cdots $ arrive from time to time \end{itemize} } \Ensure{ Guaranteeing that $(S, \sigma)$ at the end of each time $t$ is $\frac{\alpha_{\mathsf{FL}}}{1 - \epsilon'}$-approximate } \State $\mathsf{init} \gets \mathsf{cost}(S, \sigma)$ \For{$t \gets 1, 2, \cdots$, terminating if no more clients will arrive} \State $C\gets C\cup\{j_t\}$, and call $\mathsf{initial\mhyphen connect}(j_t)$ \label{step:ofl-settle-down} \While{there exists an $\frac{\epsilon'\cdot \mathsf{cost}(S, \sigma)}{\alpha_{\mathsf{FL}}|C|}$-efficient local operation} \label{step:ofl-while} perform the operation \EndWhile \If {$\mathsf{cost}(S, \sigma) > \mathsf{init}/\epsilon'$} terminate the stage \EndIf \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm}[htb] \caption{Online Algorithm for Facility Location} \label{alg:ofl} \begin{algorithmic}[1] \State $C \gets \emptyset, S \gets \emptyset, \sigma = ()$ \Repeat \State $C^\circ \gets C, (S^\circ, \sigma^\circ) \gets (S, \sigma)$ \State redefine the next time to be time 1 and run one stage as defined in Algorithm \ref{alg:ofl-one-stage} \State permanently open one copy of each facility in $S^\circ$, and permanently connect clients in $C^\circ$ according to $\sigma^\circ$ (we call the operation \emph{freezing} $S^\circ$ and $C^\circ$) \State $C \gets C \setminus C^\circ$, restrict the domain of $\sigma$ to be the new $C$ \Until no clients come \end{algorithmic} \end{algorithm} In Algorithm~\ref{alg:ofl-one-stage}, we proceed as described above, with two modifications. First, we are given an initial set $C$ of clients and a solution $(S, \sigma)$ for $C$ which is $O(1)$-approximate. Second, the stage will terminate if the cost of our solution increases by a factor of more than $1/\epsilon'$. The main algorithm (Algorithm~\ref{alg:ofl}) is broken into many stages. Since we shall focus on one stage of the algorithm for most of our analysis, we simply redefine the time so that every stage starts with time 1. The improved recourse comes from the \emph{freezing} operation: at the end of each stage, we permanently open one copy of each facility in $S^\circ$, and permanently connect clients in $C^\circ$ to copies of $S^\circ$ according to $\sigma^\circ$, where $C^\circ$ and $(S^\circ, \sigma^\circ)$ are the client set and solution at the beginning of the stage. Notice that we assume the original facilities in $S^\circ$ will still participate in the algorithm in the future; that is, they are subject to opening and closing. Thus each facility may be opened multiple times during the algorithm and we take the facility costs of all copies into consideration. This assumption is only for the sake of analysis; the actual algorithm only needs to open one copy, and its cost can only be smaller than that of the described algorithm. From now on, we focus on one stage of the algorithm and assume that the solution given at the beginning of each stage is $O(1)$-approximate. In the end we shall account for the loss due to the freezing of clients and facilities.
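Before the analysis, the control flow of one stage can be summarized in a short schematic Python sketch. The helpers \texttt{cost}, \texttt{initial\_connect}, \texttt{find\_phi\_efficient\_op} and \texttt{apply\_op} are placeholders for the procedures described above and are passed in by the caller; the sketch only mirrors the structure of Algorithm~\ref{alg:ofl-one-stage} and is not meant as an implementation.
\begin{verbatim}
def run_one_stage(C, S, sigma, arrivals, eps_prime, alpha_fl,
                  cost, initial_connect, find_phi_efficient_op, apply_op):
    init = cost(S, sigma)
    for j in arrivals:                         # clients j_1, j_2, ... of this stage
        C.add(j)
        initial_connect(j, S, sigma)           # settle the newly arrived client
        while True:
            phi = eps_prime * cost(S, sigma) / (alpha_fl * len(C))
            op = find_phi_efficient_op(S, sigma, phi)
            if op is None:                     # no phi-efficient operation remains
                break
            apply_op(op, S, sigma)
        if cost(S, sigma) > init / eps_prime:  # cost grew by more than a 1/eps' factor
            break                              # terminate the stage
    return C, S, sigma
\end{verbatim}
The outer algorithm then freezes $(S^\circ, \sigma^\circ)$ and $C^\circ$ and starts the next stage with the remaining clients, exactly as in Algorithm~\ref{alg:ofl}.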
Within a stage, the approximation ratio follows directly from Theorem~\ref{thm:FL-offline-apx-ratio}: Focus on the moment after the while loop at time step $t$ in Algorithm~\ref{alg:ofl-one-stage}. Since there are no $\frac{\epsilon'\cdot \mathsf{cost}(S,\sigma)}{\alpha_{\mathsf{FL}}|C|}$-efficient local operations on $(S, \sigma)$, we have by the theorem that $\mathsf{cost}(S, \sigma) \leq \alpha_{\mathsf{FL}}\left(\mathsf{opt} + |C|\cdot \frac{\epsilon'\cdot \mathsf{cost}(S, \sigma)}{\alpha_{\mathsf{FL}}|C|}\right) = \alpha_{\mathsf{FL}}\mathsf{opt} + \epsilon'\cdot\mathsf{cost}(S, \sigma)$, where $\mathsf{opt}$ is the cost of the optimum solution for $C$. Thus, at the end of each time, we have $\mathsf{cost}(S, \sigma) \leq \frac{\alpha_{\mathsf{FL}}}{1-\epsilon'}\cdot\mathsf{opt}$. \subsection{Bounding Amortized Recourse in One Stage} We then bound the amortized recourse in a stage; we assume that $\mathsf{cost}(S, \sigma) > 0$ at the beginning of the stage since otherwise there will be no recourse involved in the stage (since we terminate the stage when the cost becomes non-zero). We use $T$ to denote the last time of the stage. For every time $t$, let $C_t$ be the set $C$ at the end of time $t$, and $\mathsf{opt}_t$ to be the cost of the optimum solution for the set $C_t$. For every $t \in [T]$, we define $\Delta_t$ to be the value of $\mathsf{cost}(S, \sigma)$ after Step~\ref{step:ofl-settle-down} at time step $t$ in Algorithm~\ref{alg:ofl-one-stage}, minus that before Step~\ref{step:ofl-settle-down}. We can think of this as the cost increase due to the arrival of $j_t$. The key lemma we can prove is the following: \begin{lemma}\label{lmm:ofl-delta-cost-bound} For every $T' \in [T]$, we have $$\sum_{t = 1}^{T'} \Delta_t \leq O(\log T')\mathsf{opt}_{T'}.$$ \end{lemma} \begin{proof} Consider the optimum solution for $C_{T'}$ and focus on any star $(i, C')$ in the solution; that is, $i$ is an open facility and $C'$ is the set of clients connected to $i$. Assume $C' \setminus C_0 = \{j_{t_1}, j_{t_2}, \cdots, j_{t_s}\}$, where $1 \leq t_1 < t_2 < \cdots < t_s \leq T'$; recall that $C_0$ is the initial set of clients given at the beginning of the stage. We shall bound $\sum_{s' = 1}^s\Delta_{t_{s'}}$ in terms of the cost of the star $(i, C' \setminus C_0)$. By the rule specified in $\mathsf{initial\mhyphen connect}$, we have $ \Delta_{t_1 }\le f_i + d(i, j_{t_1})$. Now focus on any integer $k \in [2, s]$. Before Step~\ref{step:ofl-settle-down} at time $t_k$, no $\Big(\phi:= \frac{ \epsilon'\cdot \mathsf{cost}(S, \sigma) }{\alpha_{\mathsf{FL}}|C_{t_k-1}|} \leq \frac{O(\epsilon')\cdot\mathsf{opt}_{t_k-1}}{t_k-1} \leq \frac{O(\epsilon')\cdot\mathsf{opt}_{T'}}{t_k-1} \Big)$-efficient operation that opens $i$ is available. Thus, we can apply Lemma~\ref{lemma:helper-star} on $i$, $\tilde C = \{j_{t_1}, j_{t_2}, \cdots, j_{t_{k-1}}\}$ and $\phi$ to conclude that before Step~\ref{step:ofl-settle-down}, we have \begin{align*} d(i,S)\leq \frac{f_i + 2\cdot \sum_{k'=1}^{k-1 } d(i, j_{t_{k'} }) }{k-1}+ \frac{ O(\epsilon') \cdot \mathsf{opt}_{T'}}{t_k-1}. \end{align*} \iffalse in particular, adding facility $ i $ and reconnecting $ \{j_{t_1}, j_{t_2}, \cdots, j_{t_{k-1}}\}$ to $i $ is not $ \frac{ \epsilon '\cdot \mathsf{cost}(S, \sigma) }{\mid C \mid } $-efficient. This gives that at the moment, we have \[ \sum_{k'=1}^{k-1}d(j_{t_{k'} }, S) \leq f_i + \sum_{k'=1}^{k-1} d(i , j_{t_{k'} }) + ( k-1)\cdot \frac{ \epsilon' \cdot \mathsf{cost}(S, \sigma)}{|C|}. 
\] Let $ D = d(i, S)$ be the distance between $ i $ and its nearest open facility at the moment. By triangle inequalities we have $ d(j_{t_{k'} }, S) \ge D - d(i, j_{t_{k'} } )$. Combining with the previous inequality yields: \[ D \le \frac{1}{k-1}\sum_{k'=1}^{k-1}\left(d(j_{t_{k'}}, S) + d(i, j_{t_{k'}})\right) \leq \frac{f_i + 2\cdot \sum_{k'=1}^{k-1 } d(i, j_{t_{k'} }) }{k-1}+ \frac{ \epsilon' \cdot \mathsf{cost}(S, \sigma)}{|C| }. \] \fi In $\mathsf{initial\mhyphen connect}(j_{t_k})$, we have the option of connecting $j_{t_k}$ to its nearest open facility. Thus, we have \begin{align*} \Delta_{t_{k} } \le d(i, S) + d(i, j_{t_{k}} ) &\le \frac{f_i + 2\cdot \sum_{k'=1}^{k-1 } d(i, j_{t_{k'} }) }{k-1}+ \frac{ O(\epsilon') \cdot \mathsf{opt}_{T'}}{t_k-1 } + d(i, j_{t_{k}} ). \end{align*} We now sum up the above inequality for all $k \in [2, s]$, together with the inequality $\Delta_{t_1}\leq f_i + d(i, j_{t_1})$. We get \begin{align} \sum_{k=1}^s \Delta_{t_{k}} \leq O(\log s)\left(f_i + \sum_{k'=1}^sd(i, j_{t_{k'}})\right) + O(\epsilon')\sum_{k=2}^s\frac{\mathsf{opt}_{T'}}{t_k-1}. \label{inequ:each-star} \end{align} To see the above inequality, it suffices to consider the coefficients for $f_i$ and the $d(i, j_{t_{k'}})$'s on the right-hand side. The coefficient for $f_i$ is at most $1 + \frac11 + \frac12 + \cdots + \frac1{s-1} = O(\log s)$; the coefficient for each $d(i, j_{t_{k'}})$ is $ 1 + \frac{2}{k'} + \frac{2}{k'+1} + \cdots +\frac{2}{s-1} = O(\log s)$. We now take the sum of \eqref{inequ:each-star} over all stars $(i, C')$ in the optimum solution for $C_{T'}$. The sum for the first term on the right side of \eqref{inequ:each-star} will be $O(\log T')\mathsf{opt}_{T'}$, since $f_i + \sum_{k'=1}^sd(i, j_{t_{k'}})$ is exactly the cost of the star $(i, C' \setminus C_0)$, which is at most the cost of the star $(i, C')$. The sum for the second term will be $O(\epsilon'\log T')\cdot \mathsf{opt}_{T'}$, since the integers $t_k-1$ over all stars $(i, C')$ and all $k \geq 2$ are all positive and distinct. Thus overall, we have $\sum_{t = 1}^{T'} \Delta_t \leq O(\log T')\mathsf{opt}_{T'}$. \end{proof} With Lemma~\ref{lmm:ofl-delta-cost-bound}, we can now bound the amortized recourse of one stage. At time $t$, $\mathsf{cost}(S, \sigma)$ first increases by $\Delta_t$ in Step~\ref{step:ofl-settle-down}. After that, it decreases by at least $\frac{\epsilon'\mathsf{cost}(S, \sigma)}{\alpha_{\mathsf{FL}}|C|} \geq \frac{\epsilon'\mathsf{opt}_t}{\alpha_{\mathsf{FL}}|C|} \geq \frac{\epsilon'\mathsf{opt}_t}{\alpha_{\mathsf{FL}}|C_T|}$ for every reconnection we make. Let $\Phi_{T'} = \sum_{t = 1}^{T'}\Delta_t$; Lemma~\ref{lmm:ofl-delta-cost-bound} says $\Phi_t \leq \alpha \mathsf{opt}_{t}$ for some $\alpha = O(\log T)$ and every $t \in [T]$. Noticing that $(\mathsf{opt}_t)_{t \in [T]}$ is a non-decreasing sequence, the total number of reconnections is at most \begin{align*} &\frac{\textsf{init}}{\epsilon'\cdot\mathsf{opt}_1/(\alpha_{\mathsf{FL}}|C_T|)} + \sum_{t=1}^T\frac{\Delta_t}{\epsilon' \cdot \mathsf{opt}_{t}/(\alpha_{\mathsf{FL}}|C_T|)} = \frac{\alpha_{\mathsf{FL}}|C_T|}{\epsilon'} \left( \frac{\textsf{init}}{\mathsf{opt}_1} + \sum_{t = 1}^{T-1} \frac{\Delta_t}{\mathsf{opt}_{t}} + \frac{\Delta_T}{\mathsf{opt}_T}\right). \end{align*} Notice that $\mathsf{init} \leq O(1)\mathsf{opt}_0 \leq O(1)\mathsf{opt}_1$.
Applying Lemma~\ref{lemma:helper-sum-b/a} with $T$ replaced by $T-1$, $b_t = \Delta_t, B_t = \Phi_t$ and $a_t = \mathsf{opt}_{t}$ for every $t$, we have that $\sum_{t=1}^{T-1}\frac{\Delta_t}{\mathsf{opt}_{t}} \leq \alpha \left(\ln\frac{\mathsf{opt}_{T-1}}{\mathsf{opt}_1} + 1\right) = O\left(\log T\log\frac1{\epsilon'}\right)$, since we have $\mathsf{opt}_{T-1} \leq O(1/\epsilon')\cdot\mathsf{opt}_1$. Notice that $\Delta_T \leq \mathsf{opt}_T$ since $\mathsf{opt}_T \geq \min_{i \in F}(f_i + d(i, j_T)) \geq \Delta_T$. So, the total number of reconnections is at most $O\left(\frac{\log T}{\epsilon'}\log\frac1{\epsilon'}\right)\cdot|C_T|$. The amortized recourse per client is $O\left(\frac{\log T}{\epsilon'}\log\frac1{\epsilon'}\right) \leq O\left(\frac{\log n}{\epsilon'}\log\frac1{\epsilon'}\right)$, where in the amortization we only consider the clients involved in the stage. Recall that $n$ is the total number of arrived clients. As each client appears in at most 2 stages, the overall amortized recourse is $O\left(\frac{\log n}{\epsilon'}\log\frac1{\epsilon'}\right)$. Finally, we consider the loss in the approximation ratio due to the freezing of clients and facilities. Suppose we are in the $p$-th stage. Then the clients that arrived at and before the $(p-2)$-th stage have been frozen and removed. Let $\overline{\mathsf{opt}}$ be the cost of the optimum solution for all clients that arrived at or before the $(p-1)$-th stage. Then the frozen facilities and clients have cost at most $\overline\mathsf{opt} \cdot O\left(\epsilon' + \epsilon'^2 + \epsilon'^3 + \cdots \right) = O(\epsilon')\overline{\mathsf{opt}}$. At any time in the $p$-th stage, the optimum solution taking all arrived clients into consideration has cost $\overline\mathsf{opt}' \geq \overline\mathsf{opt}$, and our solution has cost at most $(\alpha_{\mathsf{FL}} + O(\epsilon'))\overline\mathsf{opt}'$ without considering the frozen clients and facilities. Thus, our solution still has approximation ratio $\frac{(\alpha_{\mathsf{FL}} + O(\epsilon'))\overline\mathsf{opt}' + O(\epsilon')\overline\mathsf{opt}}{\overline\mathsf{opt}'} = \alpha_{\mathsf{FL}} + O(\epsilon')$ when taking the frozen clients into consideration.
2,869,038,156,191
arxiv
\section{Introduction} \label{sec:intro} The study of the fundamental ground-state properties of exotic nuclei is one of the challenges in contemporary nuclear physics research \cite{nupecc2010}. Laser spectroscopic methods contribute to this ongoing research endeavor by providing information on the nuclear electromagnetic moments, spins, and changes in mean-squared charge radii. These observables provide key input towards a theoretical description of the nucleus, as illustrated e.g. in \cite{ruiz2016}. Experimental measurements on very exotic nuclei are challenging, reflecting a combination of short half-lives and production in only minute quantities, accompanied by a large amount of unwanted contamination. Furthermore, facilities that produce these exotic nuclei only allot a limited time to a given experiment, which means that the study of the most exotic cases requires techniques that are both very selective and efficient. In addition, such measurements are often performed only once, so systematic uncertainties must be understood, removed, or at least minimized. Many laser spectroscopic techniques have been applied in nuclear physics research, each with their strengths and weaknesses \cite{Cheal2010,Blaum2013,Campbell2016}. Resonance Ionization Spectroscopy (RIS) methods, which rely on multi-step laser ionization and subsequent ion detection, are typically very sensitive, motivating the development of numerous RIS experiments at online isotope separators \cite{Fedosseev2012}. To achieve efficient laser ionization, pulsed laser systems are typically used, often operating at high powers. This leads to drawbacks for spectroscopy, including unwanted lineshape distortion and/or line broadening. These effects can become apparent when performing high-resolution RIS, as illustrated recently in \cite{de_Groote_2015}. Given the current developments towards high-resolution RIS of exotic nuclei in e.g. collinear RIS \cite{de_Groote_2015} and in-gas-jet laser spectroscopy \cite{Kudryavtsev2013,Raeder2016}, a detailed understanding of the interaction of atoms with pulsed lasers is vital. This article will present a model that describes the RIS process using CW or pulsed lasers for the resonant excitation and non-resonant ionization step (section \ref{sec:model}). Spontaneous decay of the intermediate level is taken into account and time delays between the two excitation steps are investigated. Through both model simulations and experimental verification, this article will address how some of the aforementioned detrimental line distortion effects can be understood and avoided by delaying the ionization laser pulse. This will be presented in section \ref{sec:dist}. Furthermore, through the same delayed-ionization approach, it is possible to remove virtually all power broadening due to both lasers in a two-step RIS scheme, which could be important for future high-precision studies on radioactive isotopes. This will be illustrated experimentally and through simulations in section \ref{sec:pb}. The delayed ionization method is greatly enhanced by using a weak transition to a long-lived excited state. Firstly, with short-lived excited states, a significant fraction of the excited state population would decay before the ionization can take place, reducing the efficiency of the method. Secondly, long-lived states result in an intrinsically narrower linewidth, since their natural width is smaller.
The feasibility of using weak transitions for efficient resonance laser ionization spectroscopy using both continuous wave and pulsed lasers will be addressed in section \ref{sec:eff}. \section{A Model for RIS}\label{sec:model} The evolution of the population of a two level system irradiated by a laser tuned close to resonance can be calculated by solving the Schr\"odinger equations with the following Hamiltonian: \begin{align} H &= \frac{\hbar}{2} \begin{pmatrix} 0 & \Omega(t)\\ \Omega(t)& 2\Delta \end{pmatrix}, \end{align} where $\Omega(t)$ is the coupling parameter of the two states, also called the Rabi frequency, and $\Delta$ is the laser-atom detuning. Defining the laser frequency as $\omega_e$, the ground state level energy $\hbar\omega_0=0$ and the excited state energy $\hbar\omega_1$, $\Delta = \omega_1 - \omega_e$. When using linearly polarized light, the coupling parameter $\Omega(t)$ can be calculated using \begin{align} \Omega =& \sqrt{A P_e} \left(\frac{c}{\omega_1 - \omega_0}\right)^{3/2} (2F_1+1) \notag\\ & \times \sum_{m_{F_0},m_{F_1}}\begin{pmatrix} F_1 & 1 & F_0 \\ -m_{F_1} &0&m_{F_0}\end{pmatrix} \begin{Bmatrix} J_1 & F_1 & I \\ F_0 & J_0 & 1\end{Bmatrix},\label{rabi_frequency} \end{align} with $F_i$ the total angular momentum of state $i$, $()$ and $\{\}$ respectively Wigner 3J and 6J symbols, $A$ the Einstein $A$ coefficient of the transition and $P_e(t)$ the power of the laser. A second laser with laser power $P_i(t)$ can ionize excited atoms at a rate of $\Gamma = P_i(t)\sigma$, with $\sigma$ the non-resonant photo-ionization cross section of the excited state at the wavelength of the ionization laser. Photo-ionization requires modeling population loss, since population has to flow out of the two-level system, into the continuum. This requires a non-Hermitian Hamiltonian, given by \begin{align} H &= \frac{\hbar}{2} \begin{pmatrix} 0 & \Omega(t)\\ \Omega(t)& 2\Delta + 2 S(t) - i\Gamma(t) \end{pmatrix},\label{eq:ionization_ham} \end{align} In this Hamiltonian $S$ is the net dynamic Stark shift induced by both the ionization laser and excitation laser, with each laser contributing a shift proportional to the laser power \cite{Delone_1999,kumekov1981dynamic}. If there are no relaxation processes, the evolution of the system can be calculated using the time-dependent Schr\"odinger equation \begin{align} \dot{\rho} = \frac{1}{i\hbar}\left(H\rho - \rho H^\dagger\right).\label{eq:density_equation} \end{align} The right-hand side of this equation reduces to the more familiar commutator $[H, \rho]$ for Hermitian $H$. The populations of the hyperfine levels are the diagonal elements of the density matrix, $\rho_{ii}(t)$. Usually the spontaneous decay of the excited state to the ground state cannot be neglected. Including incoherent relaxation processes into equation \eqref{eq:density_equation} can be done as follows: \begin{align} \dot{\rho} = \frac{1}{i\hbar}\left(H\rho - \rho H^\dagger\right) + L(\rho). \label{eq:full_eom} \end{align} For the simplified two-level system, $L$ is given by \begin{align} L &= \begin{pmatrix} A\rho_{11} & -\frac12 A\rho_{01}\\ -\frac12 A\rho_{10} & -A\rho_{11} \end{pmatrix}, \end{align} This additional term in the equations of motion causes decay of the excited state to the lower-lying state, and exponentially dampens the coherence terms. \\ \begin{figure}[ht!] \begin{center} \includegraphics[width=\columnwidth]{general_picture.pdf} \caption{An atom with several hyperfine levels in the ground-state and in the excited state multiplet. 
Note the figure is not to scale, since the hyperfine splitting is typically $10^6$ times smaller than the transition energy.} \label{fig:general_picture} \end{center} \end{figure} These equations of motion can be generalized to systems with multiple hyperfine levels in a ground- or excited-state multiplet (see Fig. \ref{fig:general_picture}) in a relatively straightforward manner. Using Greek indices for levels in the excited state multiplet and roman indices for the ground state multiplet, the Hamiltonian for such a system can be written down as \begin{align} &H_{ii}(t) = \hbar \omega_i \\ &H_{ij}(t) = 0 \\ &H_{\alpha\alpha}(t) = \hbar (S_{\alpha}(t) + \Delta_{\alpha} - \frac{i}{2} \Gamma(t) ) \\ &H_{i\alpha}(t) = H_{\alpha i}(t) = \frac{\hbar}{2} \Omega_{i\alpha}(t)\\ &H_{\alpha\beta}(t) = - \frac{\hbar}{2} \Gamma(t)(q+i), \label{eq:offdiag} \end{align} with $\hbar\omega_i$ and $\hbar\omega_\alpha$ the energies of the atomic states of the ground- and excited-state hyperfine multiplets, and $\Delta_\alpha = \omega_\alpha - \omega_e$. The off-diagonal terms in \eqref{eq:offdiag} are due to the embedding of structure into the continuum by the high-power ionization laser, and are characterized by a Fano parameter $q$ \cite{Knight1990}. This $q$ parameter plays a role in laser-induced continuum phenomena and can be calculated from first principles (see e.g. \cite{Dai1987,Nakajima1994,Yatsenko1997,Yatsenko1999}). It induces a coupling between the excited state multiplet levels by the ionization laser via the interaction with the continuum. This coupling can influence the RIS line profiles, but only for either very large values of $q$ or for unrealistically high laser powers. For the purpose of this article the Fano parameter will be taken to be zero. The generalized form of the matrix $L$ can be written down using the partial decay rates defined as \begin{align} \gamma_{\alpha i} & = \frac{4 \alpha}{3} \frac{| \omega_\alpha - \omega_i |^3}{c^2} (2F_i+1)(2J_\alpha+1)(2J_i+1)\\ & \times |\left\langle \alpha,L_\alpha || r || i,L_i\right\rangle|^2 \begin{Bmatrix} J_\alpha & 1 & J_i \\ F_i & I & F_\alpha \end{Bmatrix}^2 \begin{Bmatrix} L_\alpha & 1 & L_i \\ J_i & S & J_\alpha \end{Bmatrix}^2. \notag \end{align} These rates can be calculated using the observation that the partial decay rates of an excited hyperfine level should sum up to the Einstein $A$ coefficient. Using this definition of the partial rates, $L$ can be written down as \begin{align} &L(\rho)_{ii} = \sum_\alpha \rho_{\alpha\alpha} \gamma_{\alpha i} \\ &L(\rho)_{\alpha\alpha} = - \sum_i \rho_{\alpha\alpha} \gamma_{\alpha i} \\ &L(\rho)_{\alpha i} = - \frac{\rho_{\alpha i}}{2} \sum_j \gamma_{\alpha j} \\ &L(\rho)_{i\alpha} = - \frac{\rho_{i \alpha}}{2} \sum_j \gamma_{\alpha j} \\ &L(\rho)_{\alpha \beta} = - \frac{\rho_{\alpha\beta}}{2} \sum_j \left(\gamma_{\alpha j} + \gamma_{\beta j}\right). \end{align} These are the generalized equations that will be used for the simulations presented throughout this article. The computer code used to run the simulations, developed in Python, is available upon request from the authors. \section{Power broadening and delayed ionization}\label{sec:pb} \subsection{Model Predictions} In a two-step RIS scheme, both lasers can broaden the resonance line profiles. Power broadening due to the excitation laser in a closed two-level system using continuous-wave (CW) laser light is well understood in the steady-state limit.
In this case, the linewidth of the optical resonance increases with the laser power \cite{Citron_1977}: \begin{equation} \text{FWHM} = A\sqrt{ 1+2(\Omega/A)^2 }. \end{equation} In other words, population is only excited efficiently to the excited state when \begin{equation} \left|\Delta\right| \lesssim A\sqrt{ 1+2(\Omega/A)^2 }.\label{eq:power_broadening} \end{equation} However, for pulsed laser excitation, this relationship is not always valid \cite{Vitanov2001}. It can be derived that for a Gaussian shaped excitation laser pulse (in absence of ionization and spontaneous decay), the detuning range that results in excited state population after the action of the excitation pulse is given by \cite{Boradjiev2013} \begin{equation} \left|\Delta\right| \lesssim \frac{\sqrt{\log{(\Omega/\Delta)}}}{T},\label{eq:Gaussbroadening} \end{equation} with T the length of the laser pulse in time. The different power dependence of the line widths for CW and pulsed laser excitation is illustrated in Fig. \ref{fig:pb_eqs}, using $A=1$\,MHz in \eqref{eq:power_broadening} and $T=50$\,ns in \eqref{eq:Gaussbroadening}. The linewidth predicted by equation \eqref{eq:Gaussbroadening} for pulsed laser excitation scales much more favorably with the laser power. Since this reduced linewidth is only obtained after the excitation pulse has passed (as illustrated further in Fig. \ref{fig:2ds}), the significantly reduced power broadening presented in Fig. \ref{fig:pb_eqs} can only be obtained by using a subsequently delayed ionization pulse. The considerable reduction of the resonance linewidth provides a strong argument in favor of using an ionization step that is delayed with respect to the pulsed excitation laser step, such that the narrower line shape is probed. \begin{figure}[ht!] \begin{center} \includegraphics[width=\columnwidth]{pb_eq.pdf} \caption{Comparison of equations \eqref{eq:power_broadening} and \eqref{eq:Gaussbroadening}, representing the line broadening $\Delta$ due to the excitation coupling $\Omega$, for an Einstein A-coefficient of 1\,MHz for continuous-wave lasers and T=50\,ns for pulsed lasers. } \label{fig:pb_eqs} \end{center} \end{figure} \begin{figure*}[ht!] \centering \begin{subfigure}[b]{0.485\textwidth} \centering \includegraphics[width=\textwidth]{overlapping.pdf} \caption[]% {{Excited state population when using simultaneous laser pulses (see diagram on the left). The resonance ionization spectrum (red curve in the top panel) is power broadened by both laser pulses.}} \label{fig:2d1} \end{subfigure} \hfill \begin{subfigure}[b]{0.485\textwidth} \centering \includegraphics[width=\textwidth]{delayed.pdf} \caption[]% {{As Fig. \ref{fig:2d1}, but with a delayed ionization pulse. For comparison, the red ionization spectrum in the uppermost plot is taken from the simulation in Fig. \ref{fig:2d1}}. Delaying the ionization step removes the power broadening.} \label{fig:2d2} \end{subfigure} \caption[] {Two simulations for a two-level atom, using the parameters in table \ref{tab:parameters}. For each of the figures, the central surface plot shows the excited state population as a function of laser detuning and time. The diagram to the left of these central plot schematically displays the laser pulse sequence. On the top of each figure the resonance ionization spectrum is shown.} \label{fig:2ds} \end{figure*} Besides power broadening due to the excitation laser, the interaction of the system with the ionization laser, if applied during the excitation laser pulse, can also further broaden the level. 
This can be understood in an intuitive way. Since the ionization laser couples the excited state to the continuum, the lifetime of this excited state is reduced. By virtue of the Heisenberg energy-time uncertainty principle, this implies an increase in the energy uncertainty of the excited state. Indeed, the resonant excitation is probed by looking at the ions that are created by subsequent excitation of the atoms from the excited level towards the ionization continuum. The energy uncertainty induced in the intermediate level translates into a broadening of the resonance observed in the spectrum. If the ionization laser is delayed with respect to the excitation laser step, this broadening does not occur, since the perturbing ionizing laser field is not present when the resonant excitation happens. Fig. \ref{fig:2ds} illustrates these observations by presenting numerical solutions to the equations of motion for a two-level system using the parameters presented in table \ref{tab:parameters}. The population of the excited state as a function of the laser detuning (x-axis) and time (y-axis) is shown. To the left of these plots a schematic picture of the time sequence of the excitation laser pulse and the second laser pulse is shown. The spectrum above the two-dimensional plots shows the frequency dependence of the ionization efficiency obtained at the end of the pulsed resonance ionization process. \begin{table}[ht!] \begin{tabular}{lr} Parameter & Value\\ \hline Excitation laser pulse energy & 10\,nJ \\ Excitation laser pulse length & 50\,ns \\ Ionization laser pulse energy & 1\,mJ \\ Ionization laser pulse length & 8\,ns \\ Einstein $A$ coefficient & 10\,MHz \\ Photo-ionization cross section $\sigma$ & 1\,Mb\\ Ionization laser delay for Fig. \ref{fig:2d1}, \ref{fig:2d3} & 0\,ns\\ Ionization laser delay for Fig. \ref{fig:2d2}, \ref{fig:2d4} & 80\,ns \\ Stark effect Fig. \ref{fig:2ds} & $S=0$\\ Stark effect Fig. \ref{fig:2ds2} & $S=0.9\cdot\Gamma(t)$ \\ \end{tabular} \caption{Parameters used for the simulations in figures \ref{fig:2ds} and \ref{fig:2ds2}.}\label{tab:parameters} \end{table} In Fig. \ref{fig:2d1} it is shown that population is transferred to the excited state as the excitation laser builds up. When the ionization laser fires, the accumulated population is removed from the excited state. Upon comparing the ionization spectra shown at the top of figures \ref{fig:2d1} and \ref{fig:2d2}, it becomes apparent that delaying the ionization laser until the excitation laser has ended considerably reduces the linewidth of the final optical resonance obtained through the resonance ionization. This is due to the transient nature of the population transfer outside of the narrow region governed by equation \eqref{eq:Gaussbroadening}: only in that region will population remain in the excited state after the action of the excitation laser. The width of the resonance ionization signal will therefore be narrower when using a delayed ionization laser. Note also in Fig. \ref{fig:2d1} how the presence of the ionization laser leads to additional broadening of the excitation spectrum, a source of line broadening that is likewise avoided by using a delayed ionization stage. The goal of the experiments that are described below is to illustrate how power broadening can be mitigated by using a delayed ionization step. In this demonstration, the use of a weak transition to a long-lived excited state is crucial, since spontaneous decay from the excited state is minimal even with delayed ionization.
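The two-level simulations shown in Fig.~\ref{fig:2ds} can be reproduced in essence with a short numerical integration of equation \eqref{eq:full_eom}. The sketch below is not the authors' simulation code; it is a minimal Python illustration in which the pulse shapes $\Omega(t)$, $\Gamma(t)$ and the Stark shift $S(t)$ are supplied as user-defined functions and $\hbar$ is set to 1 (all frequencies in angular units). Since only the $-i\Gamma$ term removes population from the two-level system, the ionized fraction is $1-\mathrm{Tr}\,\rho$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def two_level_ris(delta, A, Omega, Gamma, Stark, t_span):
    """Integrate the two-level equations of motion (hbar = 1).

    delta : laser detuning;  A : Einstein A coefficient
    Omega, Gamma, Stark : callables t -> Rabi frequency, ionization rate, Stark shift
    Returns times, excited-state population rho_11(t) and ionized fraction 1 - Tr(rho).
    """
    def rhs(t, y):
        rho = y.reshape(2, 2)
        H = 0.5 * np.array([[0.0, Omega(t)],
                            [Omega(t), 2 * delta + 2 * Stark(t) - 1j * Gamma(t)]])
        drho = -1j * (H @ rho - rho @ H.conj().T)             # non-Hermitian evolution
        L = np.array([[A * rho[1, 1], -0.5 * A * rho[0, 1]],  # spontaneous decay
                      [-0.5 * A * rho[1, 0], -A * rho[1, 1]]])
        return (drho + L).ravel()

    y0 = np.array([1, 0, 0, 0], dtype=complex)  # all population starts in the ground state
    sol = solve_ivp(rhs, t_span, y0, max_step=(t_span[1] - t_span[0]) / 2000)
    rho = sol.y.reshape(2, 2, -1)
    return sol.t, rho[1, 1].real, 1.0 - (rho[0, 0] + rho[1, 1]).real
\end{verbatim}
Scanning \texttt{delta} over the detuning range and recording the ionized fraction after both pulses then yields line profiles analogous to the spectra shown in the top panels of Fig.~\ref{fig:2ds}.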
Because these decay losses are small, the resonance ionization process still occurs efficiently, which is critical for applications on exotic radioactive beams. \subsection{Experimental verification} \subsubsection{Description of the experiment} $^{63,65}$Cu atoms have been laser-ionized using a two-step resonance ionization process depicted in Fig. \ref{fig:cu_scheme}. The experiment was performed at the JYFL laboratory in Jyv\"askyl\"a, Finland. The resonant 244.237\,nm line from the $3d^{10}4s \ ^2S_{1/2}$ ground-state to the $3d^{9}4s4p \ ^4P^o_{1/2}$ state at 40943.78\,cm$^{-1}$ was followed by a 441.679\,nm transition to the auto-ionizing $3d^{9}4s \ 5s \ ^4D_{3/2}$ state at 63584.65\,cm$^{-1}$. Given the long lifetime of the excited bound state (479(28)\,ns \cite{Kono1982}), this system is well suited to study the behavior of power broadening for pulsed lasers and the role of the delay of the ionization laser on the line shape and ionization efficiency. Furthermore, the laser system used to excite the transition had a narrow bandwidth ($\approx 20$\,MHz) and could deliver an order of magnitude more laser power than the required saturation power density, resulting in clear power broadening effects. A description of the atomic beam unit used for this work is presented in \cite{Kessler2008}, but the essential features of the device are repeated here. The copper atoms were produced by resistively heating a tantalum tube containing a sample of copper. The resulting atomic beam passed through a collimation slit and then orthogonally crossed the laser beams. Electrostatic ion optics extracted the laser ionized copper atoms from the interaction region and guided them to an electron multiplier tube which served as the particle detector. \begin{figure}[ht!] \begin{center} \includegraphics[width=\columnwidth]{scheme_copper.pdf} \caption{Ionization scheme used for $^{63}$Cu and $^{65}$Cu.% } \label{fig:cu_scheme} \end{center} \end{figure} The laser system used for this work is described in detail in \cite{Sonnenschein2015}. For the 244.237\,nm line, an injection-locked Ti:sapphire laser system produced narrowband laser light (bandwidth $\approx$ 20\,MHz), which was then frequency tripled. The master laser for this seeding cavity was a CW Matisse Ti:sapphire laser, which can be scanned continuously. The fundamental output of the seeded laser was 2.8\,W at 10\,kHz repetition rate, which after beam transport losses resulted in 300\,mW/cm$^2$ of tripled UV light entering the atomic beam unit. A maximum of 1.6\,W/cm$^2$ of 441.679\,nm light for the ionization step was produced using an intra-cavity frequency doubled pulsed Ti:sapphire laser. The two lasers were pumped using different Nd:YAG lasers, which introduces a jitter in the timing synchronization of both Ti:sapphire lasers of about 10\,ns. This time jitter was of no consequence for the experiment. The pulse length of both lasers is typically 50\,ns. The wavelength of the injection seeded Ti:sapphire was recorded using a High Finesse WS6 wavemeter and further monitored with a Toptica scanning Fabry Perot Interferometer FPI-100-0750-y with a free spectral range of 1\,GHz. This interferometer was used to more precisely determine the wavelength of the laser as it was scanned. An example of a resonance ionization spectrum of $^{63,65}$Cu is shown in Fig. \ref{fig:full_scan}. \begin{figure}[ht!] \begin{center} \includegraphics[width=\columnwidth]{full_scan.pdf} \caption{Ionization spectrum of $^{63,65}$Cu.
The black box indicates the peaks presented in the zoom-in in Fig. \ref{fig:comparison}.} \label{fig:full_scan} \end{center} \end{figure} \subsubsection{Discussion of results} Ionization spectra of $^{63,65}$Cu were obtained at different UV laser powers and for several time delays of the ionization step with respect to the excitation step. The linewidth of the Gaussian component of the fitted Voigt profiles was found to vary between 40 and 60\,MHz for all experimental conditions. The contributions to this Gaussian component from the remaining Doppler broadening and the laser linewidth could not be separated, but are likely of a similar magnitude. The measurement performed at the lowest UV laser power (3\,mW/cm$^2$) and using a temporally overlapped excitation and ionization laser resulted in a Lorentzian component with a width of 53.8(4)\,MHz. Increasing the power of the laser to 150\,mW/cm$^2$ increased the linewidth of the Lorentzian component to 124.3(3)\,MHz. The two right-most hyperfine transitions in the spectrum of $^{63,65}$Cu measured with this larger laser power are shown in Fig. \ref{fig:comparison}. The spectrum in red is measured with the excitation and ionization lasers firing simultaneously, while the green spectrum was obtained by delaying the ionization laser 40(10)\,ns. Symbols are the experimental data points, the full line is the fit with a Voigt line shape. As can be clearly seen by comparing the two spectra, keeping the laser power fixed and delaying the ionization laser drastically reduces the width of the resonance lines. \begin{figure}[ht!] \begin{center} \includegraphics[width=\columnwidth]{comparison.pdf} \caption{Two lines in the spectrum of $^{63,65}$Cu (see Fig. \ref{fig:full_scan} for the full spectrum). Plotted in red is a spectrum obtained with temporally overlapping lasers (broad lines), while green is used for a spectrum obtained with a delayed ionization step (40(10)\,ns delay). Symbols are the experimental data points, the full lines are the fit. The laser power was 150 mW / cm$^2$ for both measurements. A sharp reduction in the linewidth can be clearly seen, without loss in efficiency. } \label{fig:comparison} \end{center} \end{figure} This sharp reduction in the experimental linewidth as the laser is delayed is further illustrated in Fig. \ref{fig:graph}. This figure shows the linewidth of the Lorentzian component as a function of the delay of the second laser. The red star and green triangle in this graph correspond to the data plotted in red and green of the spectra in Fig. \ref{fig:comparison}. Also shown on Fig. \ref{fig:graph} is a theoretical simulation, using the experimental parameters given earlier in this text. Even though there is an offset between theory and experiment, the general trend is well reproduced, indicating that the cause of the reduction in linewidth is understood. The experimental Lorentzian linewidth saturates at about 19\,MHz for delay times of more than 100\,ns. Note that this reduced Lorentzian linewidth is much less than the 53.8(4)\,MHz linewidth obtained at the lowest laser power of 3\,mW/cm$^2$ (with temporally overlapping laser pulses). This provides direct evidence for the absence of power broadening not only from the excitation laser but also from the ionization laser. \begin{figure}[ht!] \begin{center} \includegraphics[width=\columnwidth]{graph.pdf} \caption{Linewidth of the Lorentzian component of the Voigt profile as a function of the delay time of the second laser pulse. 
The red star corresponds to the red spectrum in Fig. \ref{fig:comparison}, and the green triangle corresponds to the green spectrum in Fig. \ref{fig:comparison}. Errors are smaller than the symbols. Also shown is a theoretical calculation of the linewidth as a function of the laser delay, using the experimental parameters described in the text. } \label{fig:graph} \end{center} \end{figure} Note finally that the efficiency loss due to spontaneous decay is negligible for delays below 50\,ns due to the long lifetime of the excited state of 479(28)\,ns. This is the key advantage offered by using the weak transition to a long-lived state rather than a stronger transition. \section{Experiments on lineshape distortions and delayed ionization}\label{sec:dist} \subsection{Model Predictions} In addition to the power broadening effects discussed in the previous section, there is another effect to consider: the possibility of lineshape distortions induced by a high-power ionization laser. This effect is illustrated in Fig. \ref{fig:2ds2}. This figure repeats the simulations presented earlier in Fig. \ref{fig:2ds}, but this time includes a Stark shift induced by the ionization laser ($S$ in equation \eqref{eq:ionization_ham}). In Fig. \ref{fig:2d3} a clear asymmetry can be seen in both the population of the excited state and the final ionization spectrum. By contrast, when using a delayed ionization laser, this asymmetry is naturally absent. \begin{figure*}[ht!] \centering \begin{subfigure}[b]{0.485\textwidth} \centering \includegraphics[width=\textwidth]{overlapping_stark.pdf} \caption[]% {{Excited state population and ionization spectrum for simultaneous laser pulses. For comparison, the red curve in the top plot is taken from figure \ref{fig:2d1}. This comparison shows a clear asymmetry in the ionization spectrum (shown as the blue curve in the top graph), which can also be seen in the population of the excited state just after the second laser pulse.}} \label{fig:2d3} \end{subfigure} \quad \begin{subfigure}[b]{0.485\textwidth} \centering \includegraphics[width=\textwidth]{delayed_stark.pdf} \caption[]% {{Excited state population and ionization spectrum for a delayed ionization pulse. For comparison, the red and green curves in the top plot are taken from figures \ref{fig:2d1} and \ref{fig:2d2} respectively. By delaying the ionization laser, both the power broadening and asymmetry in the ionization spectrum disappear.}} \label{fig:2d4} \end{subfigure} \caption[] {As figure \ref{fig:2ds}, but including a Stark shift due to the ionization laser. } \label{fig:2ds2} \end{figure*} The next section will discuss some experimental data which demonstrate this kind of lineshape distortion. We will also show that the model for laser ionization introduced in section \ref{sec:model} can be used to qualitatively explain these distortions. Delaying the ionization laser in time with respect to the excitation laser removes the unwanted effects, since the distortion is induced after the atomic structure is already probed by the excitation laser. \subsection{Experimental verification} \subsubsection{Description of the experiment} The possibility of ionization-related lineshape distortions, and how they can be removed by using a delayed ionization step, has been illustrated using the Collinear Resonance Ionization Spectroscopy (CRIS) experiment at ISOLDE-CERN \cite{de_Groote_2015,Flanagan2013}, using a radioactive beam of $^{221}$Fr.
Details on the experimental set-up and measurement procedure can be found in \cite{de_Groote_2015}. The ionization scheme that was used is presented in Fig. \ref{fig:fr_scheme}, and consists of an excitation step from the $7s\ {}^2S_{1/2}$ ground state to the $8p\ {}^2 P_{3/2}$ state at 23658.306\,cm$^{-1}$ (422.685\,nm), and an ionization step that non-resonantly ionizes from the $8p\ {}^2 P_{3/2}$ state using pulsed 1064\,nm light. The lifetime of the excited atomic state is 83.5(1.5)\,ns, sufficiently long to justify the use of a delayed ionization pulse. \begin{figure}[ht!] \begin{center} \includegraphics[width=\columnwidth]{scheme_francium.pdf} \caption{Ionization scheme used to excite and ionize $^{221}$Fr. The size of the hyperfine splittings was calculated using the values in \cite{Duong1987}. % } \label{fig:fr_scheme} \end{center} \end{figure} A $^{221}$Fr ion beam was produced by the ISOLDE facility at CERN by impinging 1.4\,GeV protons onto a uranium carbide target. Francium atoms diffuse out of this target and are then surface ionized in a hot capillary tube. After mass separation from the other francium isotopes, the $^{221}$Fr ion beam is then guided to a gas-filled linear Paul trap, where it is cooled and bunched. This beam is then accelerated to 30\,kV and transported towards the CRIS experiment. The first stage of the CRIS experiment consists of neutralizing the ion beam through collisionless charge exchange with a hot potassium vapor \cite{Procter2012}. This is often required, since suitable transitions are usually easier to find for neutral atoms rather than ions. The non-neutralized fraction of the beam is electrostatically deflected into a beam dump while the neutralized fraction is temporally and spatially overlapped with the laser beams in an ultra-high vacuum (UHV) interaction region. The laser frequency of the UV excitation laser is scanned across the hyperfine resonance transitions, and the resonantly ionized $^{221}$Fr ions are then deflected onto a copper dynode. The secondary electrons emitted from the dynode are detected using a microchannel plate (MCP) electron detector. The UHV is required to minimize collisional ionization that would otherwise result in a constant background in the hyperfine spectra. Because of the combination of an accelerated beam and the collinear overlap of the atom and laser beams, Doppler broadening is reduced to the point where it only contributes a few MHz to the total linewidth of the hyperfine structure spectra. The laser light for the first step was produced by frequency doubling the output of a Matisse TS cw Ti-sapphire laser with a Wavetrain external cavity frequency doubler. This continuous light was chopped into pulses of variable length through the use of a pockels cell and subsequent polarization sensitive beam optics, described in detail in \cite{de_Groote_2015}. This experimental configuration was used to create light pulses with a pulse length of $100$\,ns, at a repetition rate of 100\,Hz. The 1064\,nm light for non-resonant ionization was produced using a dual-cavity Litron LPY 601 50-100 PIV laser system, operated at 100\,Hz and with a pulse length of 13\,ns. After beam transport losses, 250\,mW/cm$^2$ of continuous wave laser light and 32\,mJ/pulse of 1064\,nm laser light reached the entry of the CRIS beamline. \subsubsection{Discussion of results} Fig. \ref{fig:fr_data} shows two measurements of the low-frequency component of the hyperfine structure of $^{221}$Fr. 
The red (broad) spectrum is obtained with the ionization laser temporally overlapped with the 100\,ns wide excitation laser pulse, as illustrated in the inset of Fig. \ref{fig:fr_data}. The green spectrum was obtained with an ionization pulse delayed by 100\,ns from the start of the excitation pulse. Using simultaneous laser pulses distorts the high-frequency side of the peaks, which displays a clear asymmetry. This asymmetry disappears when the ionization laser is delayed, which indicates that the tailing is induced by the ionization laser. The figure also shows simulations using the model introduced in section \ref{sec:model}. The ionization cross section $\sigma$ was taken to be 1\,Mb, which should at least be of the correct order of magnitude (see e.g. \cite{Ambartzumian1976,Gilbert1984} for cross sections in Rb and Cs). The Fano $q$ parameter was taken to be zero. The effective Stark shift $S$ was tuned to give the best match with the experimental data; a final value of $S(t) = 4 \Gamma(t)$ provided good agreement. The simulations were also rescaled to match the intensity of the highest peak in the experimental data. Using these parameters, the asymmetric tail of the peaks is well reproduced, supporting the idea that the observed asymmetry is due to a Stark shift caused by the strong electric field of the high-power ionization laser. The intensity of the smallest resonance in the spectrum is not well reproduced by the model. The reason for this discrepancy is unclear. \begin{figure}[ht!] \begin{center} \includegraphics[width=\columnwidth]{with_simulations.pdf} \caption{Resonance ionization spectra of the leftmost components of the hyperfine structure of $^{221}$Fr, obtained with simultaneous laser pulses (red) and with a delayed ionization step (green). The solid lines are fits using the model for RIS presented in section \ref{sec:model}. The asymmetry of the line disappears when the ionization laser is delayed, while the total ionization efficiency does not decrease significantly.} \label{fig:fr_data} \end{center} \end{figure} As with the data on the copper isotopes, delaying the ionization step does not result in significant loss in efficiency, since the excited state is long-lived. The linewidth of the resonance is 20(1)\,MHz. This linewidth could only be reached due to the removal of power broadening and the lineshape-distorting AC Stark shift by delaying the ionization laser. In a two-step resonance ionization scheme, this can only be done efficiently with a weak transition to a sufficiently long-lived excited state. \section{Efficient laser excitation and ionization with weak transitions}\label{sec:eff} Weak transitions have shown desirable features for laser spectroscopy purposes. In addition to the inherently small linewidth, weak transitions show no sign of efficiency loss when delaying the ionization pulse. This section will further argue that weak transitions to long-lived states can be excited with very high absolute efficiencies, comparable to efficiencies obtained with stronger lines. Applying the model of section \ref{sec:model}, one obtains the steady state population of an excited level in a two-level approximation as: \begin{align} P_{exc}(\Delta=0) &= \dfrac{\Omega^2}{A^2+2\Omega^2}\\ & \propto \dfrac{I/A}{1+2I/A}, \end{align}\label{eq:steadystate} since $\Omega \propto \sqrt{IA}$. 
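The resonance value quoted above is the standard stationary solution of the two-level system of section \ref{sec:model} with decay rate $A$; as a brief sketch (not spelled out in the derivation above), the steady-state excited-state population is \begin{align*} \rho_{11}^{\mathrm{ss}} = \frac{\Omega^2/4}{\Delta^2 + \Omega^2/2 + A^2/4}, \end{align*} which at $\Delta = 0$ reduces to $\Omega^2/(A^2 + 2\Omega^2)$; the proportionality to $(I/A)/(1+2I/A)$ then follows directly from $\Omega^2 \propto I A$.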
Since, for a fixed laser intensity, the equilibrium population is a monotonically decreasing function of $A$, weak transitions can achieve higher steady state population in the excited state. However, the irradiation time required to reach this steady state is longer than for strong transitions, though it also decreases with laser power. Therefore, there are two strategies to consider when maximizing the efficiency of excitations using weak transitions. First of all, one can use high power pulsed laser systems which increase the rate at which the equilibrium population is reached, resulting in higher efficiency for short pulse lengths. This is the approach used for the first dataset in this article (see section \ref{sec:pb}): a high-power pulsed laser can saturate the excitation step and therefore efficiently excite the system. On the other hand, employing low power chopped CW laser light with long interaction times, as in the experiment of \ref{sec:dist}, also allows for high efficiency ($>$1-10 \%). Indeed, the efficiency in the experiment described above using chopped CW laser light \cite{de_Groote_2015} is similar to that obtained in an earlier experiment using a pulsed high-power laser for the excitation step \cite{Flanagan2013}. In both experiments, the total efficiency was 1\%, where the detection efficiency was 80\%, beam transport efficiency $<$30\%, neutralization efficiency $<$50\%, and the laser ionization efficiency therefore $>8\%$. The ability to reach high laser ionization efficiencies with chopped cw laser pulses is also illustrated in Fig. \ref{fig:pulselength}. This figure shows simulated ionization efficiencies for weak and strong transitions for a system with a ground state doublet and a single excited state, as a function of the pulse length of the excitation step, and using overlapping excitation and ionization lasers. The laser power was set to 10\,mW, which is usually easily achieved with modern CW lasers. With sufficiently long interaction times, weak transitions can be excited very efficiently, with simulated ionization efficiencies better than or comparable to the efficiency obtained for short pulses and strong transitions. For time-separated laser beams, the ionization efficiency for a strong transition never reaches that of the weak transition, due to decay losses. \begin{figure}[ht!] \begin{center} \includegraphics[width=\columnwidth]{pulsewidth.pdf} \caption{Ionization efficiency for a chopped cw-laser with a cw power of 10\,mW, overlapped with a 1\,mJ/pulse ionization laser and with a variable excitation laser pulse length. The system that was simulated here consists of a ground state doublet and a single excited state. For a weak transition ($A=1$\,MHz) the highest efficiency is reached if the pulse length is $\approx 1\,\mu s$. For extremely long laser pulses, the efficiency decreases in all cases due to optical pumping towards the other hyperfine level of the ground state.} \label{fig:pulselength} \end{center} \end{figure} \section{Conclusions} This article has shown that efficient resonance ionization spectroscopy with high resolving powers can be achieved with pulsed (or chopped) laser beams using weak transitions to long-lived states. This was demonstrated using simulations, supported by experimental observations and illustrates the twofold advantages that weak transition to long-lived states offer. 
Firstly, it is possible to remove virtually all power broadening due to both lasers in a two-step RIS scheme by choosing a suitable delay between the excitation and the ionization laser pulses. Secondly, the presence of a strong ionizing laser field when using non-resonant ionization can cause significant lineshape distortions, which can also be removed by delaying the ionization step. Similar arguments can be made in the case of RIS schemes that use more than two lasers, but this lies outside of the scope of the work presented here. The long lifetime of the excited state ensures that no significant efficiency losses occur due to the delay of the ionization laser. Experimental evidence for both advantages was presented and compared to simulations with a theoretical model for resonance ionization spectroscopy. The experimental data also illustrate that high efficiencies can be obtained using weak transitions. \begin{acknowledgments} We acknowledge the support of the ISOLDE collaboration and technical teams. We are grateful to the COLLAPS collaboration for the use of their CW Ti:sapphire laser system and wavetrain doubling unit. We thank Wouter Gins for fruitful discussions and for comparisons to simulations with rate equation codes. This work was supported by the BriX Research Program No.~P7/12 and FWO-Vlaanderen (Belgium) and GOA 15/010 from KU Leuven, ERC Consolidator Grant no.~648381, the Science and Technology Facilities Council consolidated grant ST/F012071/1 and continuation grant ST/J000159/1, and the EU Seventh Framework through ENSAR (506065). K.~T.~F. was supported by STFC Advanced Fellowship Scheme Grant No.~ST/G006415/1. This work was also supported by the Academy of Finland under the Center of Excellence Programme 2012--2017 (Nuclear and Accelerator Based Physics Research at JYFL). \end{acknowledgments}
\section{Introduction} Fractional calculus is the calculus of differentiation and integration of non-integer orders. During the last three decades or so, fractional calculus has gained much attention due to its demonstrated applications in various fields of science and engineering \cite{oldham,samko,miller,podlubny,kilbs,carpinteri,hilfer,west,zaslav,metzler,hermann,agrawal,entropy,varCal,physScr,jmp,iomin,FundaTheorFC}. There are many good textbooks on fractional calculus and fractional differential equations, such as \cite{oldham,samko,miller,podlubny,kilbs}. For various applications of fractional calculus in physics, see \cite{carpinteri,hilfer,west,zaslav,metzler,hermann,agrawal,entropy,varCal,physScr,jmp,iomin,FundaTheorFC} and references therein. In calculus, the Fundamental Theorem of Calculus (Newton-Leibniz Theorem) is of fundamental importance, matching its name. For fractional calculus, an analogous theorem has been highlighted recently in a paper \cite{FundaTheorFC}. When trying to construct a consistent fractional vector calculus, V. E. Tarasov observed that many fundamental problems can be solved by using a generalization of the Fundamental Theorem of Calculus. Series expansion is an important tool in calculus. Particularly, series expansion plays an important role in solving some differential equations, such as the hypergeometric differential equations \cite{wzx,wiki_HyperF,wiki_ConfHyperF}. However, fractional series expansion has not yet been introduced to fractional calculus. In this paper, based on the Fundamental Theorem of Fractional Calculus, we will introduce a fractional series expansion method to fractional calculus. We will define a kind of fractional Taylor series of infinitely fractionally-differentiable functions. By using this kind of fractional Taylor series, we will give a fractional generalization of hypergeometric functions and derive the corresponding differential equations. For finitely fractionally-differentiable functions, we observe that the lack of infinite fractional differentiability is due to the presence of more than one fractional index. We will expand functions with two fractional indices and illustrate how this kind of series expansion can help to solve fractional differential equations. The structure of this paper is as follows. In Section \ref{FC}, we briefly review the fractional derivative, the fractional integral and the Fundamental Theorem of Fractional Calculus. In Section \ref{FTS}, we introduce the fractional Taylor series of an infinitely fractionally-differentiable function and give some examples. In Section \ref{FHyperF}, we make a generalization of the hypergeometric functions. In Section \ref{FtwoIn}, we discuss finitely fractionally-differentiable functions. In Section \ref{Summary}, we give our summary. \section{Fractional Calculus}\label{FC} In this section, we briefly review the definitions of the fractional integral and the fractional derivative, and the Fundamental Theorem of Fractional Calculus. For more details, see \cite{oldham,samko,miller,podlubny,kilbs}. \subsection{Fractional integral and fractional derivative} There are many ways to define the fractional derivative and the fractional integral. Most of them are based on the idea that we can generalize the equation \begin{equation}\label{dpower} \frac{d^n x^m}{d x^n}=\frac{m!}{(m-n)!}x^{m-n},~~~~n\in N, \end{equation} by replacing each factorial by a gamma function, to \begin{equation}\label{} \frac{d^{\alpha} x^{\beta}}{d x^{\alpha}}=\frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}x^{\beta-\alpha},~~~~\alpha>0.
\end{equation} In terms of integral operation $I$, the idea is to generalize \begin{equation}\label{ipower} I^n x^m=\frac{m!}{(m+n)!}x^{m+n} \end{equation} to \begin{equation}\label{} I^{\alpha} x^{\beta}=\frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}x^{\beta+\alpha}. \end{equation} The most commonly-used fractional integral is the Riemann-Liouville fractional integral (RLFI). Let $f(x)$ be a function defined on the interval $[a,b]$. Let $\alpha$ be a positive real. The right Riemann-Liouville fractional integral (right-RLFI) is defined by \begin{equation}\label{} I^{\alpha}_{a|x}f(x)=\frac{1}{\Gamma(\alpha)}\int_a^x (x-\xi)^{\alpha-1}f(\xi)d\xi, \end{equation} and the left Riemann-Liouville fractional integral (left-RLFI) is defined by \begin{equation}\label{} I^{\alpha}_{x|b}f(x)=\frac{1}{\Gamma(\alpha)}\int_x^b (\xi-x)^{\alpha-1}f(\xi)d\xi. \end{equation} We note that in this paper our use of notations ``right" and ``left" is different from the common use, for reasons that will be clear later. For fractional derivatives, the Caputo fractional derivative (CFD) is a commonly-used one. Let $n\equiv [\alpha]+1$. The right-CFD and left-CFD are defined, respectively by \begin{equation}\label{} D^{C,\alpha}_{a|x}f(x)=I^{n-\alpha}_{a|x}\frac{d^n}{dx^n}f(x)=\frac{1}{\Gamma(n-\alpha)}\int_a^x (x-\xi)^{n-\alpha-1}\frac{d^n}{d\xi^n}f(\xi)d\xi, \end{equation} \begin{equation}\label{} D^{C,\alpha}_{x|b}f(x)=I^{n-\alpha}_{x|b}(-\frac{d}{dx})^nf(x)=\frac{1}{\Gamma(n-\alpha)}\int_x^b (\xi-x)^{n-\alpha-1}(-\frac{d}{d\xi})^n f(\xi)d\xi. \end{equation} One can check that the above definitions really generalize (\ref{dpower}) and (\ref{ipower}), and give \begin{eqnarray}\label{relation1} D^{C,\alpha}_{a|x}(x-a)^{\beta} &=& \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}(x-a)^{\beta-\alpha},~~~\beta\neq 0,1,...,[\alpha]; \\ D^{C,\alpha}_{x|b}(b-x)^{\beta} &=& \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}(b-x)^{\beta-\alpha},~~~\beta\neq 0,1,...,[\alpha]; \\ I^{\alpha}_{a|x}(x-a)^{\beta} &=& \frac{\Gamma(\beta+1)}{\Gamma(\beta+\alpha+1)}(x-a)^{\beta+\alpha}; \\ I^{\alpha}_{x|b}(b-x)^{\beta} &=& \frac{\Gamma(\beta+1)}{\Gamma(\beta+\alpha+1)}(b-x)^{\beta+\alpha}. \end{eqnarray} Especially, the Caputo fractional derivative on a constant ($\beta=0$) yields zero, \begin{equation}\label{} D^{C,\alpha}_{a|x}\cdot 1=0, \end{equation} \begin{equation}\label{relation6} D^{C,\alpha}_{x|b}\cdot 1=0. \end{equation} This simple property is decisive in the fractional series expansion and in our preference of the Caputo fractional derivative to another commonly-used fractional derivative, the Riemann-Liouville fractional derivative, whose operation on a constant gives not zero, \begin{equation}\label{} D^{RL,\alpha}_{a|x}\cdot 1=\frac{(x-a)^{-\alpha}}{\Gamma(1-\alpha)}, ~~~0<\alpha<1, \end{equation} \begin{equation}\label{} D^{RL,\alpha}_{x|b}\cdot 1=\frac{(b-x)^{-\alpha}}{\Gamma(1-\alpha)}, ~~~0<\alpha<1. \end{equation} \subsection{Fundamental Theorem of Fractional Calculus} The Fundamental Theorem of Calculus (Newton-Leibniz Theorem) is \begin{equation}\label{FT1} \int_a^b dx \frac{d}{dx}f(x)=f(b)-f(a), \end{equation} \begin{equation}\label{FT2} \frac{d}{dx} \int_a^x f(\xi) d\xi=f(x). \end{equation} This theorem means that the derivative operation is inverse to the integral operation, and vice versa. In fractional calculus, an analogous theorem exists \cite{kilbs,FundaTheorFC}. \noindent\emph{Fundamental Theorem of Fractional Calculus}. \noindent(1) Let $\alpha>0$ and let $f(x)\in L_{\infty}(a,b)$ or $f(x)\in C[a,b]$. 
Then \begin{eqnarray} D^{C,\alpha}_{a|x}I^{\alpha}_{a|x}f(x) &=& f(x), \\ D^{C,\alpha}_{x|b}I^{\alpha}_{x|b}f(x) &=& f(x). \end{eqnarray} \noindent(2) Let $0<\alpha<1$. If $f(x)\in AC[a,b]$ or $f(x)\in C[a,b]$. Then \begin{eqnarray} I^{\alpha}_{a|x}D^{C,\alpha}_{a|x}f(x) &=& f(x)-f(a), \\ I^{\alpha}_{x|b}D^{C,\alpha}_{x|b}f(x) &=& f(x)-f(b). \end{eqnarray} Here $L_{\infty}(a,b)$ is the set of those Lebesgue measurable functions $f$ on $(a,b)$ for which $\|f\|_{\infty}<{\infty}$, where $\|f\|_{\infty}=$ ess sup$_{a\leq x\leq b}|f(x)|$. Here ess sup$|f(x)|$ is the essential maximum of the function $|f(x)|$. $AC[a,b]$ is the space of functions $f$ which are absolutely continuous on $[a,b]$. $C[a,b]$ is the space of continuous functions $f$ on $[a,b]$ with the norm $\|f\|_C=\max_{x\in[a,b]}|f(x)|$. The proof of this theorem can be obtained from \cite{kilbs}, in which these results are included in Lemma 2.21 and Lemma 2.22 there. So, by this theorem, one can say that the right (left) Caputo fractional derivative operation and the right (left) Riemann-Liouville fractional integral operation are inverse to each other. \section{Fractional Taylor series of infinitely fractionally-differentiable functions}\label{FTS} In this section, we will introduce fractional series expansion method to fractional calculus and define a kind of fractional Taylor series. We observe that \begin{equation}\label{} f(x)=f(a)+D^{\alpha}_{a|x}f(x)\big|_{x=\xi}\cdot (I^{\alpha}_{a|x}\cdot 1), \end{equation} where $a<\xi<x$, and $\xi$ varies with the integral upper bound. Here and after, $D^{\alpha}_{a|x}$ denotes the right Caputo fractional derivative $D^{C,\alpha}_{a|x}$. The corresponding formula in integer-order calculus is \begin{equation}\label{} f(x)=f(a)+\frac{df}{dx}\bigg|_{x=\xi}\cdot(x-a), \end{equation} which is the Lagrange Mean Value Theorem. Make a step further, \begin{equation}\label{} f(x)=f(a)+D^{\alpha}_{a|x}f(x)\big|_{x=a}\cdot (I^{\alpha}_{a|x}\cdot 1)+D^{\alpha}_{a|x}D^{\alpha}_{a|x}f(x)\big|_{x=\xi}\cdot (I^{\alpha}_{a|x}I^{\alpha}_{a|x}\cdot 1). \end{equation} The corresponding formula in integer-order calculus is \begin{equation}\label{} f(x)=f(a)+\frac{df}{dx}\bigg|_{x=a}\cdot(x-a)+(\frac{d}{dx})^2 f\bigg|_{x=\xi}\cdot\frac{1}{2}(x-a)^2. \end{equation} And so on. One can extend this expansion to infinite order if the function is sufficiently smooth. Based on this observation a definition of a formal fractional Taylor series expansion can be made. \noindent{\bf Definition 1.a.} Let $f(x)$ be a function defined on the right neighborhood of $a$, and be an infinitely fractionally-differentiable function at $a$, that is to say, all $(D^{\alpha}_{a|x})^m f(x)$($m=0,1,2,3..$) exist, and are not singular at $a$. The formal fractional right-RL Taylor series of a function is \begin{equation}\label{} f(x)=\sum_{m=0}^{\infty} (D^{\alpha}_{a|x})^m f(x)\big|_{x=a}\cdot [(I^{\alpha}_{a|x})^m\cdot 1]. \end{equation} Explicitly, \begin{equation}\label{} (I^{\alpha}_{a|x})^m\cdot 1=\frac{1}{\Gamma(m\alpha+1)}(x-a)^{m\alpha}. \end{equation} \noindent{\bf Definition 1.b.} Let $f(x)$ be a function defined on the left neighborhood of $b$, and be an infinitely fractionally-differentiable function at $b$, that is to say, all $(D^{\alpha}_{x|b})^m f(x)$($m=0,1,2,3..$) exist, and are not singular at $b$. The formal fractional left-RL Taylor series of a function is \begin{equation}\label{} f(x)=\sum_{m=0}^{\infty} (D^{\alpha}_{x|b})^m f(x)\big|_{x=b}\cdot [(I^{\alpha}_{x|b})^m\cdot 1]. 
\end{equation} Explicitly, \begin{equation}\label{} (I^{\alpha}_{x|b})^m\cdot 1=\frac{1}{\Gamma(m\alpha+1)}(b-x)^{m\alpha}. \end{equation} In the above definitions, $D^{\alpha}_{a|x}$ is the right Caputo fractional derivative $D^{C,\alpha}_{a|x}$; $D^{\alpha}_{x|b}$ is the left Caputo fractional derivative $D^{C,\alpha}_{x|b}$. $I^{\alpha}_{a|x}$ and $I^{\alpha}_{x|b}$ are right- and left- Riemann-Liouvelle fractional integral, respectively. \noindent{\bf Remark 1.} One can easily check the formal correctness of the expansions by using of the Fundamental Theorem of Fractional Calculus, or the relations (\ref{relation1})-(\ref{relation6}). For rigorous validity, convergence is required. \noindent{\bf Remark 2.} Series expansion has played an important role in Calculus, particularly in solving differential equations. However, fractional series expansion has not yet been introduced to fractional calculus. This is because a pre-requisite that makes fractional series expansion possible is the Fundamental Theorem of Fractional Calculus, which is only recently proved and highlighted \cite{kilbs,FundaTheorFC}. \noindent{\bf Remark 3.} We may expect that fractional series expansion will shed new light on fractional calculus, especially the field of fractional differential equations. In the next section, we will use this expansion to define a fractional generalization of hypergeometric functions and discuss their differential equations. \noindent{\bf Remark 4.} The fractional Taylor series is defined for infinitely fractionally-differentiable functions. Finitely fractionally-differentiable functions will be discussed in Section \ref{FtwoIn}. In the following, we give some simple examples of fractional Taylor series. \noindent{\bf Example 1.} $(x-a)^{\beta}$, with no $l\in N$ satisfying $\beta=l\alpha$. \begin{equation}\label{} (D^{\alpha}_{a|x})^m(x-a)^{\beta}=\frac{\Gamma(\beta+1)}{\Gamma(\beta-m\alpha+1)}(x-a)^{\beta-m\alpha}, \end{equation} For large $m$, the derivative will be singular at $a$. So we cannot make the expansion. \noindent{\bf Example 2.} \begin{equation}\label{} e^{(x-a)^{\alpha}}=\sum_{m=0}^{\infty}\frac{1}{m!}(x-a)^{m\alpha}=\sum_{m=0}^{\infty}\frac{\Gamma(m\alpha+1)}{m!}\frac{1}{\Gamma(m\alpha+1)}(x-a)^{m\alpha}. \end{equation} \noindent{\bf Example 3.} The Mittag-Leffler function $E_{\alpha}((x-a)^{\alpha})$, which satisfies \begin{equation}\label{} D^{\alpha}_{a|x}E_{\alpha}((x-a)^{\alpha})=E_{\alpha}((x-a)^{\alpha}),~~~~~~~~~~~E_{\alpha}(0)=1, \end{equation} It is the fractional analogue of $exp(x)$. For arbitrary $m$, $(D^{\alpha}_{a|x})^m E_{\alpha}((x-a)^{\alpha})|_{x=a}=1$. So, \begin{equation}\label{} E_{\alpha}((x-a)^{\alpha})=\sum_{m=0}^{\infty}[(I^{\alpha}_{a|x})^m\cdot 1]=\sum_{m=0}^{\infty}\frac{1}{\Gamma(m\alpha+1)}(x-a)^{m\alpha}. \end{equation} Notice the difference between $e^{(x-a)^{\alpha}}$ and $E_{\alpha}((x-a)^{\alpha})$. \noindent{\bf Example 4.} $cos_{\alpha}((x-a)^{\alpha})$ and $sin_{\alpha}((x-a)^{\alpha})$. \begin{equation}\label{} cos_{\alpha}((x-a)^{\alpha})=1-\frac{(x-a)^{\alpha\cdot 2}}{\Gamma(\alpha\cdot 2+1)}+\frac{(x-a)^{\alpha\cdot 4}}{\Gamma(\alpha\cdot 4+1)}-\frac{(x-a)^{\alpha\cdot 6}}{\Gamma(\alpha\cdot 6+1)}+... \end{equation} \begin{equation}\label{} sin_{\alpha}((x-a)^{\alpha})=\frac{(x-a)^{\alpha}}{\Gamma(\alpha+1)}-\frac{(x-a)^{\alpha\cdot 3}}{\Gamma(\alpha\cdot 3+1)}+\frac{(x-a)^{\alpha\cdot 5}}{\Gamma(\alpha\cdot 5+1)}-... 
\end{equation} They satisfy \begin{eqnarray} D^{\alpha}_{a|x}sin_{\alpha}((x-a)^{\alpha}) &=& cos_{\alpha}((x-a)^{\alpha}), \\ D^{\alpha}_{a|x}cos_{\alpha}((x-a)^{\alpha}) &=& -sin_{\alpha}((x-a)^{\alpha}). \end{eqnarray} \section{Fractional hypergeometric function}\label{FHyperF} In this section, we will define a fractional generalization of the hypergeometric functions. Let us first consider a fractional generalization of the confluent hypergeometric differential equation: \begin{equation}\label{} z^{\alpha}(D^{\alpha}_{0|z})^2 y+(c-z^{\alpha})D^{\alpha}_{0|z}y-ay=0. \end{equation} Here $a$ and $c$ are complex parameters. When $\alpha=1$, this is the ordinary confluent hypergeometric equation. Introducing the fractional Taylor series \begin{equation}\label{yFTS} y(z)=\sum_{k=0}^{\infty}c_k z^{\alpha\cdot k}, \end{equation} and substituting, we get the ratio of successive coefficients \begin{eqnarray} \frac{c_{k+1}\cdot\Gamma(k\alpha+\alpha+1)}{c_{k}\cdot\Gamma(k\alpha+1)} &=& \frac{a+\frac{\Gamma(k\alpha+1)}{\Gamma(k\alpha-\alpha+1)}}{c+\frac{\Gamma(k\alpha+1)}{\Gamma(k\alpha-\alpha+1)}},\\ \frac{c_1\cdot\Gamma(\alpha+1)}{c_0} &=& \frac{a}{c}. \end{eqnarray} Thus we get a solution of the above differential equation, \begin{equation}\label{fconfhyperf} y(z)=\sum_{k=0}^{\infty}\frac{(a)^{\alpha}_k}{(c)^{\alpha}_k}\frac{1}{\Gamma(k\alpha+1)}z^{\alpha\cdot k}. \end{equation} Here $(a)^{\alpha}_k$ is defined as \begin{eqnarray} (a)^{\alpha}_0 &=& 1, ~~~~~(a)^{\alpha}_1=a, \nonumber\\ (a)^{\alpha}_k &=& \bigg(a+\frac{\Gamma(k\alpha-\alpha+1)}{\Gamma(k\alpha-2\alpha+1)}\bigg)\bigg(a+\frac{\Gamma(k\alpha-2\alpha+1)}{\Gamma(k\alpha-3\alpha+1)}\bigg)...(a)^{\alpha}_1, ~~k\geq2. \end{eqnarray} This can be seen as a fractional generalization of the rising factorial \begin{equation}\label{risingfactorial} (a)_k=(a+k-1)(a+k-2)...a. \end{equation} And the series (\ref{fconfhyperf}) can be seen as a generalization of the confluent hypergeometric function. If $\alpha=1$, it is exactly the confluent hypergeometric function. For the fractional Gauss hypergeometric function, consider the following series \begin{equation}\label{fhyperf} y(z)=\sum_{k=0}^{\infty}\frac{(a)^{\alpha}_k(b)^{\alpha}_k}{(c)^{\alpha}_k}\frac{1}{\Gamma(k\alpha+1)}z^{\alpha\cdot k}, \end{equation} which reduces to the Gauss hypergeometric function when $\alpha=1$. The ratio of successive coefficients is \begin{equation}\label{} \frac{c_{k+1}\cdot\Gamma(k\alpha+\alpha+1)}{c_{k}\cdot\Gamma(k\alpha+1)} = \frac{\bigg(a+\frac{\Gamma(k\alpha+1)}{\Gamma(k\alpha-\alpha+1)}\bigg)\bigg(b+\frac{\Gamma(k\alpha+1)}{\Gamma(k\alpha-\alpha+1)}\bigg)}{\bigg(c+\frac{\Gamma(k\alpha+1)}{\Gamma(k\alpha-\alpha+1)}\bigg)}. \end{equation} After some manipulation, one gets \begin{eqnarray} &&c_{k+1}\cdot c\frac{\Gamma(k\alpha+1)}{\Gamma(k\alpha-\alpha+1)}+c_{k+1}\cdot \frac{\Gamma(k\alpha+\alpha+1)}{\Gamma(k\alpha-\alpha+1)} \nonumber \\ &&=c_k\cdot ab+c_k\cdot(a+b)\frac{\Gamma(k\alpha+1)}{\Gamma(k\alpha-\alpha+1)}+c_k\cdot\frac{\Gamma(k\alpha+1)}{\Gamma(k\alpha-\alpha+1)}\frac{\Gamma(k\alpha+1)}{\Gamma(k\alpha-\alpha+1)}. \end{eqnarray} This equation can be translated into a fractional differential equation \begin{eqnarray} && ab\cdot f(z)+(a+b)(z-z_0)^{\alpha}D^{\alpha}_{z_0|z}f(z)+(z-z_0)^{\alpha}D^{\alpha}_{z_0|z}\big[(z-z_0)^{\alpha}D^{\alpha}_{z_0|z}f(z)\big] \nonumber \\ && =c\cdot D^{\alpha}_{z_0|z}f(z)+(z-z_0)^{\alpha}(D^{\alpha}_{z_0|z})^2 f(z). \end{eqnarray} When $\alpha=1$, this equation reduces to the ordinary Gauss hypergeometric equation.
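The recursion defining $(a)^{\alpha}_k$ and the resulting series coefficients are straightforward to evaluate numerically. The following short Python sketch (the parameter values are chosen only for illustration) computes the generalized rising factorial, checks that it reduces to the classical rising factorial when $\alpha=1$, and assembles the first coefficients of the series (\ref{fhyperf}): \begin{verbatim}
import numpy as np
from scipy.special import gamma, poch   # poch(a, k) is the classical rising factorial (a)_k

def frac_rising_factorial(a, k, alpha):
    """Generalized rising factorial (a)^alpha_k from the recursion above."""
    if k == 0:
        return 1.0
    result = a                            # (a)^alpha_1 = a
    for j in range(1, k):                 # factors a + Gamma(j*alpha+1)/Gamma((j-1)*alpha+1)
        result *= a + gamma(j * alpha + 1.0) / gamma((j - 1) * alpha + 1.0)
    return result

def frac_gauss_coeff(a, b, c, k, alpha):
    """Coefficient of z^(alpha k) in the fractional Gauss hypergeometric series."""
    return (frac_rising_factorial(a, k, alpha) * frac_rising_factorial(b, k, alpha)
            / (frac_rising_factorial(c, k, alpha) * gamma(k * alpha + 1.0)))

# consistency check: for alpha = 1 the generalized factorial is the classical one
for k in range(6):
    assert np.isclose(frac_rising_factorial(0.5, k, 1.0), poch(0.5, k))

# first few coefficients for alpha = 0.5 (illustrative parameter values)
print([frac_gauss_coeff(0.5, 1.2, 2.0, k, 0.5) for k in range(4)])
\end{verbatim} Such a check on the $\alpha=1$ limit is a convenient safeguard when handling the rather involved ratios of gamma functions appearing above.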
One can check that $y(z-z_0)$ defined in (\ref{fhyperf}) satisfies this equation. For the generalized fractional hypergeometric series \begin{equation}\label{gfhyperf} y(z)=\sum_{k=0}^{\infty}\frac{(a_1)^{\alpha}_k...(a_p)^{\alpha}_k}{(b_1)^{\alpha}_k...(b_q)^{\alpha}_k}\frac{1}{\Gamma(k\alpha+1)}z^{\alpha\cdot k}, \end{equation} making repeated use of $(z-z_0)^{\alpha}D^{\alpha}_{z_0|z}$, one can also get its differential equation. \section{Functions with two fractional indices}\label{FtwoIn} Not all functions are infinitely fractionally-differentiable, so it is meaningful to investigate finitely fractionally-differentiable functions. In Example 1 in Section \ref{FTS}, we have given an example function that is not infinitely fractionally-differentiable. We observe that the lack of infinite fractional differentiability is due to another fractional index $\beta$ (no $l$ satisfying $\beta=l\alpha$). A function $f(x)$ is said to have the behavior of fractional index $\alpha$ in the right neighborhood of $a$, if it can be expanded as a series with the basis \{$(x-a)^{m\alpha}|m= 0,1,2,3...$\}. We also observe that some functions can be expanded into a form with two fractional indices, i.e. \begin{equation}\label{twoInSer} f(x)=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}c_{mn}(x-a)^{m\alpha+n\beta}. \end{equation} These functions can reduce to a function that is infinitely $\alpha$-differentiable or infinitely $\beta$-differentiable, but generally they are finitely fractionally-differentiable functions. When we are solving fractional differential equations, we should take care if $[\alpha]>0$ and $\beta\in N$, because of the relation (\ref{relation1}). However, for $[\alpha]=0$ or $\beta$ not an integer, the above series is a fairly good ansatz. A function $f(x)$ is said to be $N\cdot\alpha$ fractionally-differentiable at $a$, if $(D^{\alpha}_{a|x})^m f(x)|_{x=a}$ is finite for $m\leq N$, but infinite for $m=N+1$. Then it could be expanded as \begin{eqnarray} f(x) &=& c_{00}+c_{10}(x-a)^{\alpha}+c_{20}(x-a)^{2\alpha}+...+c_{N0}(x-a)^{N\alpha} \nonumber \\ &+& \sum_{N\alpha<m\alpha+n\beta<(N+1)\alpha}c_{mn}(x-a)^{m\alpha+n\beta}+\sum_{m\alpha+n\beta\geq(N+1)\alpha}c_{mn}(x-a)^{m\alpha+n\beta}, \end{eqnarray} in which the first term in the second line should not be zero. Now let us see how this kind of series expansion could help to solve fractional differential equations. Consider the following equation: \begin{equation}\label{ExamEQU} (x-a)^{\alpha}(D^{\alpha}_{a|x})^2f(x)-D^{\alpha}_{a|x}f(x)=\frac{x-a+(x-a)^{\alpha}}{1-x+a}. \end{equation} The right hand side can be expanded as: \begin{equation}\label{RightH} \frac{x-a+(x-a)^{\alpha}}{1-x+a}=\sum_{k=1}^{\infty}(x-a)^k+\sum_{k=0}^{\infty}(x-a)^k(x-a)^{\alpha}. \end{equation} It is of index $\alpha$ and index $\beta=1$. It is $1\cdot\alpha$ fractionally-differentiable. So $f(x)$ is $2\cdot\alpha$ fractionally-differentiable. We can write \begin{eqnarray} f(x) &=& c_{00}+c_{10}(x-a)^{\alpha}+c_{20}(x-a)^{2\alpha} \nonumber \\ &+& 0+c_{11}(x-a)^{\alpha+1} \nonumber \\ &+& c_{02}(x-a)^2 \nonumber \\ &+& \sum_{m\alpha+n>3\alpha,(m,n)\neq(0,2)}c_{mn}(x-a)^{m\alpha+n}. \end{eqnarray} Here we assume $0<\alpha<1$. Or more manageably, \begin{eqnarray}\label{fExpan} f(x) &=& c_{00}+c_{10}(x-a)^{\alpha} \nonumber\\ &+& \sum_{n=2}^{\infty}c_{0n}(x-a)^n + \sum_{n=1}^{\infty}c_{1n}(x-a)^{\alpha+n} \nonumber\\ &+& \sum_{n=0}^{\infty}c_{2n}(x-a)^{2\alpha+n} + \sum_{m\geq3}\sum_{n=0}^{\infty}c_{mn}(x-a)^{m\alpha+n}.
\end{eqnarray} Substituting (\ref{RightH}) and (\ref{fExpan}) into (\ref{ExamEQU}), one gets \begin{eqnarray} c_{mn} &=& 0, ~~~~~m\geq3; \nonumber \\ 1/c_{2n} &=& \frac{\Gamma(n+2\alpha+1)}{\Gamma(n+1)}-\frac{\Gamma(n+2\alpha+1)}{\Gamma(n+\alpha+1)}; \nonumber \\ 1/c_{1n} &=& \frac{\Gamma(n+\alpha+1)}{\Gamma(n-\alpha+1)}-\frac{\Gamma(n+\alpha+1)}{\Gamma(n+1)}, ~~~n\geq1; \nonumber \\ c_{10} &=& 0; \nonumber \\ c_{0n} &=& 0,~~~n\geq1; ~~~c_{00}=f(a). \end{eqnarray} Thus we solved the fractional differential equation (\ref{ExamEQU}). The above procedure can apply generally to fractional differential equations. For a generic fractional differential equation $F[(x-a),y(x-a),D^{\alpha}_{a|x}]=0$, we can solve it by the above procedure summarized as follows. \begin{enumerate} \item Expand the $(x-a)$ part of the equation and find out the indices of $y(x-a)$, \item Find out the number $N$ such that $y(x-a)$ is $N\cdot\alpha$ fractionally-differentiable, \item Expand $y(x-a)$, \item Substitute the expansion series into the equation, \item Find out the coefficients of the expansion series of $y(x-a)$. \end{enumerate} We give another example in the following. Consider the fractional differential equation: \begin{equation}\label{ExamEQUfg} D^{\alpha}_{a|x}y(x)+f(x)y(x)=g(x), \end{equation} where $f(x)$ and $g(x)$ are given functions of index $\beta$, \begin{equation}\label{fcExpan} f(x)=f_0+f_1\cdot(x-a)^{\beta}+f_2\cdot(x-a)^{2\beta}+f_3\cdot(x-a)^{3\beta}+..., \end{equation} \begin{equation}\label{gcExpan} g(x)=g_0+g_1\cdot(x-a)^{\beta}+g_2\cdot(x-a)^{2\beta}+g_3\cdot(x-a)^{3\beta}+..., \end{equation} where $f_i$ and $g_i$ ($i=0,1,2,3,...$) are constants. So $y(x)$ have two indices $\alpha$ and $\beta$. If $g_1\neq0$ and $\alpha<\beta<2\alpha$, $g(x)$ is $1\cdot\alpha$ fractionally-differentiable, then $y(x)$ is $2\cdot\alpha$ fractionally-differentiable. We can write \begin{eqnarray} y(x) &=& c_{00}+c_{10}(x-a)^{\alpha}+c_{20}(x-a)^{2\alpha} \nonumber \\ &+& 0+c_{11}(x-a)^{\alpha+\beta} \nonumber \\ &+& c_{02}(x-a)^{2\beta} \nonumber \\ &+& \sum_{m\alpha+n\beta>3\alpha,(m,n)\neq(0,2)}c_{mn}(x-a)^{m\alpha+n\beta}. \end{eqnarray} Or more manageably, \begin{eqnarray}\label{yExpan} y(x) &=& c_{00}+c_{10}(x-a)^{\alpha} \nonumber\\ &+& \sum_{n=2}^{\infty}c_{0n}(x-a)^{n\beta} + \sum_{n=1}^{\infty}c_{1n}(x-a)^{\alpha+n\beta} \nonumber\\ &+& \sum_{n=0}^{\infty}c_{2n}(x-a)^{2\alpha+n\beta} + \sum_{m\geq3}\sum_{n=0}^{\infty}c_{mn}(x-a)^{m\alpha+n\beta}. \end{eqnarray} Substituting (\ref{fcExpan}), (\ref{gcExpan}) and (\ref{yExpan}) into (\ref{ExamEQUfg}), one gets \begin{eqnarray} c_{00} &=& y(a),~~~~~~~~~~~~~~~~ c_{0n}=0, ~~~~n\geq1; \nonumber \\ c_{1n} &=& \frac{\Gamma(n\beta+1)}{\Gamma(\alpha+n\beta+1)}(g_n-f_n c_{00}); \nonumber \\ c_{2n} &=& -\frac{\Gamma(\alpha+n\beta+1)}{\Gamma(2\alpha+n\beta+1)}(f_n c_{10}+c_{1n}); \nonumber \\ c_{mn} &=& -\frac{\Gamma((m-1)\alpha+n\beta+1)}{\Gamma(m\alpha+n\beta+1)}c_{(m-1)n} \nonumber \\ &=& (-1)^{m-2}\frac{\Gamma(2\alpha+n\beta+1)}{\Gamma(m\alpha+n\beta+1)}c_{2n},~~~~~~~~ m\geq3. \end{eqnarray} Thus we solved the fractional differential equation (\ref{ExamEQUfg}). In this section we have discussed functions with two fractional indices, but the extension to functions with more fractional indices will not be difficult. \section{Summary}\label{Summary} In summary, in this paper we introduced fractional series expansion method to fractional calculus. We defined a kind of fractional Taylor series of infinitely fractionally-differentiable functions. 
Based on our definition, we generalized the hypergeometric functions and derived their differential equations. For finitely fractionally-differentiable functions, we observed that the lack of infinite fractional differentiability is due to the presence of more than one fractional index. We expanded functions with two fractional indices and illustrated how this kind of series expansion can help to solve fractional differential equations.
\section{Introduction} There has been growing interest over the past years, within the metamaterial community, in the analysis of elastic waves in thin plates, following the theoretical proposal \cite{farhat09,farhat09PRL} and its subsequent experimental validation \cite{stenger12,misseroni16} of a broadband cloak for flexural waves. Square \cite{colquitt14} and diamond \cite{pomot2019form} cloaks are based on an improved transformed plate model, while form-invariance of the transformed equations in the framework of pre-stressed anisotropic plates is analysed in \cite{Brun14b,morvaridi18,Golgoon21}. There is currently keen activity in transformation optics, whereby transformation based solutions to the Maxwell equations expressed in curvilinear coordinate systems travel along geodesics rather than in straight lines \cite{rayoptics}. The fact that light follows the shortest trajectories, the physical principle behind transformation optics, was formulated by de Fermat back in 1662. This minimization principle is applicable to ray optics, when the wavelength is much smaller than the size of the diffracting object. Leonhardt showed in 2006 \cite{leonhardt06} that this allows, for instance, the design of invisibility cloaks using conformal mappings. Pendry, Schurig and Smith simultaneously reported that the same principle applies to electromagnetic waves, i.e. when the wavelength is in resonance with the scattering object, by creating a hole in the curved space \cite{pendry2006controlling}. Interestingly, the mathematicians Greenleaf, Lassas and Uhlmann proposed an earlier route to invisibility using an inverse problem approach in 2003 \cite{greenleaf03}, and together with Kurylev have been able since then to bridge cloaking theory with Einstein's theory of relativity, thereby suggesting possible avenues towards electromagnetic wormholes \cite{greenleafprl07,kadic2014invisible}. Leonhardt and Philbin have further proposed an optical fibre experiment \cite{philbin08} for an analogue of Hawking's famous event horizon in his theory of black holes \cite{hawking}. It seems therefore fair to say that transformation optics offers a unique laboratory for thought experiments, leading to a plethora of electromagnetic paradigms. However, this would remain an academic curiosity without its practical counterpart: so-called metamaterials, first introduced by Pendry in 1999 to obtain artificial magnetism in locally resonant periodic structures \cite{pendry1999magnetism}. The first realization of an electromagnetic invisibility cloak \cite{schurig06} is a metamaterial consisting of concentric arrays of split-ring resonators. This structured material effectively maps a concealment region into a surrounding shell thanks to its strongly anisotropic effective permittivity and permeability, which further fulfil impedance matching with the surrounding vacuum. The cloak thus neither scatters waves nor induces a shadow in the transmitted field. Split-ring resonators make it possible to meet, among other requirements, the prerequisite of artificial magnetism, otherwise unobtainable with materials at hand \cite{pendry1999magnetism}. This locally resonant micro-structured cloak was shown to conceal a copper cylinder around $8.5$ GHz, as predicted by numerical simulations \cite{schurig06}. The effectiveness of the transformation based invisibility cloak was demonstrated theoretically by Leonhardt \cite{leonhardt06} solving the Schr\"odinger equation.
Note that this equation is not only valid to compute ray trajectories (geodesics) in the geometrical optic limit, but also for matter waves in the quantum theory framework. Zhang et al. used this analogy to propose a quantum cloak based upon ultracold atoms within an optical lattice \cite{zhangprl08}. Greenleaf et al. subsequently discussed resonances (so-called trapped modes) occurring at a countable set of discrete frequencies inside the quantum cloak, using a spectral theory approach \cite{greenleafprl08}. Using analogies between the Helmholtz and the Schr\"odinger equations, Cummer and Schurig demonstrated that pressure acoustic waves propagating in a fluid also undergo the same geometric transform in 2D \cite{cummer06b}. Chen and Chan further extended this model to 3D acoustic cloaks \cite{chen07}, followed by an independent derivation of the acoustic cloak parameters in \cite{cummer08}. Such meta-fluids require an effective anisotropic mass density as in the model of Torrent and Sanchez-Dehesa \cite{sanchez}. However, an acoustic cloak for linear surface water waves studied experimentally and theoretically in \cite{farhat08}, only involves an effective anisotropic shear viscosity. Nevertheless, transformation based invisibility cloaks cannot be applied in general to elastodynamic waves in structural mechanics as there is a lack of one-to-one correspondence between the equations of elasticity and the Schr\"odinger equation \cite{milton06b}. Bigoni et al. actually studied such neutral inclusions in the elastostatic context using asymptotic and computational methods in the case of anti-plane shear and in-plane coupled pressure and shear polarizations \cite{bigoni98}, but when one moves to the area of elastodynamics, geometrical transforms become less tractable and neutrality breaks down: there are no conformal maps available in that case, and one has to solve inherently coupled tensor equations. More precisely, Milton, Briane and Willis have actually shown that there is no symmetric rank 4 elasticity tensor describing the heterogeneous anisotropic medium required for an elastodynamic cloak in the context of Cauchy elasticity \cite{milton06b}. However, so-called Willis's equations, discovered by the British applied mathematician John Willis in the early 80's \cite{willis1981variational,willis1985nonlocal}, offer a new paradigm for elastodynamic cloaking, as they allow for introduction of additional rank-3 and rank-2 tensors in the equations of motion that make cloaking possible. Nevertheless, Brun, Guenneau and Movchan have shown \cite{brunapl} that it is possible to design an elastic cloak without invoking Willis's equations for in-plane coupled shear and pressure waves with a metamaterial described by a rank 4 elasticity tensor preserving the main symmetries, as well as a scalar density. Importantly, both elasticity tensor and density are spatially varying, and the former one becomes singular at the inner boundary of the cloak \cite{brunapl}. Some design based on a homogenization approach for polar lattices has been proposed by Nassar, Chen and Huang \cite{nassar2018degenerate} and Garau et al. \cite{Garau2019}. Achaoui et al. have proposed an alternative design making use of elastic swiss-rolls \cite{achaoui2020cloaking}. Diatta and Guenneau \cite{diatta2014controlling} have shown that a spherical elastodynamic cloak can be designed using the same route as in \cite{brunapl}, but the corresponding metamaterial design remains an open problem. 
There is an alternative, pre-stress, route to elastic cloaking proposed by Norris and Parnell that greatly relaxes constraints on material properties compared to the previous routes \cite{parnell2012nonlinear,norris2012hyperelastic,parnell2012employing}. In the present article, we further investigate cylindrical cloaks for in-plane elastic waves using a radially symmetric linear geometric transform which depends upon a parameter. Depending upon the value of the parameter, the transform is applied to the design of neutral (invisibility) cloaks, elastic concentrators or cylindrical lenses. We discuss their underlying mechanism using a finite element approach which is adequate to solve the Navier equations in anisotropic heterogeneous media. \section{Governing equations and elastic properties of cloaks} \subsection{The equations of motion} The propagation of in-plane elastic waves is governed by the Navier equations. Assuming time harmonic $\exp(-i\omega t)$ dependence, with $\omega$ as the wave frequency, allows us to work directly in the spectral domain. Such dependence is assumed henceforth and suppressed, leading to \begin{equation} \nabla\cdot{\bf C}:\nabla{\bf u}+\rho\,\omega^2 {\bf u}+{\bf b}={\bf 0} \; , \label{navier} \end{equation} where, considering cylindrical coordinates $(r,\theta)$, ${\bf u}=(u_r,u_\theta)$ is the in-plane displacement, $\rho$ the density and $C_{ijkl}$ $(i,j,k,l=r,\theta)$ the fourth-order elasticity tensor of the (possibly heterogeneous anisotropic) elastic medium. In eqn. (\ref{navier}) ${\bf b}$ is the body force. \subsection{The transformed equations of motion} Let us now consider the radial linear geometric transform $(r,\theta)\rightarrow(r',\theta')$, with $\theta'=\theta$, shown in Fig. \ref{fig01} \begin{equation} r'\!=\!\left\{ \begin{array}{lcrl} [(1\!+\!\alpha)r_1\!-\!\alpha r_0] \frac{r}{r_0} & \mbox{for} & r'\leq r'_0 & \mbox{(domain A)},\\ (1\!+\!\alpha)r_1\!-\!\alpha r & \mbox{for} & r'_0\leq\! r'\!\leq r'_1 & \mbox{(domain B)},\\ r & \mbox{for} & r'\geq r'_1 & \mbox{(domain C)}, \end{array} \right. \label{PTransform} \end{equation} where $\alpha=-(r'_1-r'_0)/(r_1-r_0)$ is a real parameter and $r'_0=(1+\alpha)r_1-\alpha\, r_0$, $r'_1=r_1$. The transformation gradient is ${\bf F}=(dr'/dr){\bf I}_r+(r'/r){\bf I}_\bot$ where ${\bf I}_r={\bf e}_r\otimes{\bf e}_r$ is the second-order projection tensor along the radial direction identified by the unit vector ${\bf e}_r$, and ${\bf I}_\bot={\bf I}-{\bf I}_r$, with ${\bf I}$ second-order identity tensor. Furthermore, $J=\det {\bf F}$ is the Jacobian of the transformation. Design of in-plane transformation-based elastic cloaks has been first discussed in \cite{brunapl} when $\alpha=-1+r'_0/r'_1$ ($r_0=0$): in that case, (\ref{PTransform}) simplifies into the geometric transform for an invisibility cloak $ r'=r'_0+ \frac{r'_1-r'_0}{r'_1} r$ in the domain (B) \cite{pendry2006controlling,greenleaf03}, where $r'_0$ and $r'_1$, respectively, denote the inner and outer radii of the circular cloak. However, other values of the parameter $\alpha$ lead to equally interesting cloaks, such as neutral concentrators, first studied in the context of electromagnetism \cite{rahm}, and we would like to discuss these in the sequel. \begin{figure} \centering \includegraphics[width=10cm]{Transform1.pdf} \vspace{0.2cm} \caption{Geometric transform of eqn. (\ref{PTransform}). (a) Representation of the transform $r\rightarrow r'$ for different values of the parameter $\alpha$. 
The domains A $(r'\leq r'_0)$, B $(r'_0\leq r'\leq r'_1)$ and C $(r'\geq r'_1)$ are indicated. (b) Transformation of the geometry for $\alpha>0$, $\alpha<1$ and $\alpha=-1+r'_0/r'_1$ (perfect cloak).} \label{fig01} \end{figure} We now need to consider two cases for the transformed equations of motion. \subsubsection{Gauge transform ${\bf u}'(r',\theta')={\bf u}(r,\theta)$} By application of transformation (\ref{PTransform}) with the Gauge ${\bf u}'(r',\theta')={\bf u}(r,\theta)$ the Navier eqns.(\ref{navier}) are mapped into the equations \begin{equation} \nabla'\cdot{\mbox{${\bf{C}}$}'}:\nabla'{\bf u}'+\rho'\omega^2 {\bf u}'+{\bf b}'={\bf 0} \;,\label{snavier} \end{equation} where ${\bf u}'(r',\theta')$ and ${\bf b}'(r',\theta')$ are the transformed displacement and body force, respectively, and $\nabla'={\bf F}^t\nabla$ the gradient operator in the transformed coordinates. In particular, we stress that we assume an identity gauge transformation\cite{brunapl,NorrisShuvalov2011}, i.e. ${\bf u}'(r',\theta')={\bf u}(r,\theta)$. The stretched density is the scalar field \begin{equation} \rho'= \left\{ \begin{array}{ll} \displaystyle{\left[\frac{(1\!+\!\alpha)r'_1-r'_0}{\alpha r'_0}\right]^2 \rho} & \mbox{in A},\\[3.5 mm] \displaystyle{\frac{r'-(1\!+\!\alpha)r'_1}{\alpha^2 r'} \rho} & \mbox{in B},\\[4.5 mm] \rho & \mbox{in C}, \end{array} \right. \label{srho} \end{equation} homogeneous in A and C. The transformed linear elasticity tensor has components \begin{equation} \label{sc0} C'_{ijkl}=J^{-1}C_{mnop}F_{im}F_{ko}\delta_{jn}\delta_{lp}\;, \end{equation} where $(i,j,k,l=r',\theta')$, $(m,n,o,p=r,\theta)$, $\delta_{jn}$ is the Kronecker delta and the usual summation convention over repeated indices is used. In particular, if before transformation the material is isotropic, i.e. $C_{ijkl}=\lambda\delta_{ij}\delta_{kl}+\mu(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})$ $(i,j,k,l=r,\theta)$, with $\lambda$ and $\mu$ the Lam\'e moduli, the transformed elasticity tensor $\mbox{${\bf{C}}$}'$ has non zero cylindrical components \begin{equation} \begin{array}{ll} C'_{r'\!r'\!r'\!r'\!}\!=\!\frac{r'-(1+\alpha)r_1}{r'}(\lambda\!+\!2\mu), \!& C'_{\!\theta'\!\theta'\!\theta'\!\theta'}\!=\!\frac{r'}{r'-(1+\alpha)r_1}(\lambda\!+\!2\mu),\\[1.6mm] C'_{r'\!r'\!\theta'\!\theta'\!}=C'_{\theta'\!\theta'\! r'\!r'}=\lambda, & C'_{r'\!\theta'\!\theta' \!r'}=C'_{\theta'\! r'\!r'\!\theta'}=\mu, \\[1.6mm] C'_{r'\!\theta'\! r'\!\theta'}=\frac{r'-(1+\alpha)r_1}{r'} \mu, & C'_{\theta'\! r'\!\theta'\! r'}=\frac{r'}{r'-(1+\alpha)r_1} \mu \; , \end{array} \label{sc} \end{equation} in B and $\mbox{${\bf{C}}$}'=\mbox{${\bf{C}}$}$ in A and C. The transformation and the corresponding transformed density $\rho'$ and elasticity tensor $\mbox{${\bf{C}}$}'$ are broadband, they do not depend on the applied frequency $\omega$. \subsubsection{Gauge transform ${\bf u}'(r',\theta')={\bf F}^{-t}{\bf u}(r,\theta)$} As noted in \cite{milton06b}, by application of transformation (\ref{PTransform}) with the Gauge ${\bf u}'(r',\theta')={\bf F}^{-t}{\bf u}(r,\theta)$, where ${\bf F}$ is the transformation gradient, the Navier eqns.(\ref{navier}) are mapped into the equations \begin{equation} \begin{array}{ll} \label{eq:willis-model-change-of-variable-princ} \nabla' \cdot \Big(\mbox{${\bf{C}}$}'': \nabla' {\mathbf u}' + \mbox{${\bf{D}}$}' \cdot {\mathbf u}' \Big) + \mbox{${\bf{S}}$}' : \nabla' {\mathbf u}' + \omega^2 \rho' \mathbf{u}' + \mathbf{b}' = \mathbf{0} \; . 
\end{array} \end{equation} The transformed rank-4 elasticity tensor $\mbox{${\bf{C}}$}''$ has components \begin{equation} \label{scwill} C''_{ijkl}=J^{-1}F_{im}F_{jn}C_{mnop}F_{ko}F_{lp} \; , \end{equation} where $(i,j,k,l=r',\theta')$, $(m,n,o,p=r,\theta)$. We note that $\mbox{${\bf{C}}$}''$ in (\ref{scwill}) has all the symmetries, unlike $\mbox{${\bf{C}}$}'$ in (\ref{sc}), which has the major but not the minor symmetries. The rank-3 tensors $\mbox{${\bf{D}}$}'$ and $\mbox{${\bf{S}}$}'$ in (\ref{scwill}) have elements \begin{equation} \label{sDwill} D'_{ijk}=J^{-1}F_{im}F_{jn} C_{mnop} \frac{\partial^2 x'_k}{\partial x_o \partial x_p} = D'_{jik} \; , \end{equation} and \begin{equation} \label{sSwill} S'_{ijk}=J^{-1} \frac{\partial^2 x'_i}{\partial x_m \partial x_n} C_{mnop} F_{jo}F_{kp} =S'_{jik} \; . \end{equation} Finally, the transformed density $\rho'$ in (\ref{scwill}) is matrix valued \begin{equation} \label{srhowill} \rho'_{ij}=J^{-1}\rho F_{im}F_{jm}+J^{-1} \frac{\partial^2 x'_i}{\partial x_m \partial x_n} C_{mnop} \frac{\partial^2 x'_j}{\partial x_o \partial x_p} = \rho'_{ji} \; . \end{equation} These expressions were first derived in \cite{milton06b}. Now, if before transformation the material is isotropic, then the transformed elasticity tensor $\mbox{${\bf{C}}$}''$ has non zero cylindrical components \begin{equation} \begin{array}{l} C''_{r'\!r'\!r'\!r'\!}\!=\!\frac{r'-(1+\alpha)r_1}{r'(r_1-r_0)^2}(\lambda\!+\!2\mu), \\[1.6mm] C''_{\!\theta'\!\theta'\!\theta'\!\theta'}\!=\!\frac{r'^3}{(r_1-r_0)^2(r'-(1+\alpha)r_1)^3}(\lambda\!+\!2\mu), \\[1.6mm] C''_{r'\!r'\!\theta'\!\theta'\!}=C''_{\theta'\!\theta'\! r'\!r'}=\frac{r'}{(r_1-r_0)^2(r'-(1+\alpha)r_1)}\lambda, \\[1.6mm] C''_{r'\!\theta'\!\theta' \!r'}=C''_{\theta'\! r'\!r'\!\theta'} =C''_{r'\!\theta'\! r'\!\theta'}=C''_{\theta'\! r'\!\theta'\! r'}= \frac{r'}{(r_1-r_0)^2(r'-(1+\alpha)r_1)} \mu \; , \end{array} \label{scwillwill} \end{equation} in B and $\mbox{${\bf{C}}$}''=\mbox{${\bf{C}}$}$ in A and C. On the other hand, the rank-3 tensors $\mbox{${\bf{D}}$}'$ and $\mbox{${\bf{S}}$}'$ have non zero cylindrical components \begin{equation} \begin{array}{l} D'_{r'\!r'\!r'\!}\!=\frac{1}{(r_1-r_0)^2(r'-(1+\alpha)r_1)(r'+r_1r_0/(r_1-r_0))}\lambda =-S'_{r'\!r'\!r'}\! \\[1.6mm] D'_{r'\!\theta'\!\theta'}=D'_{\theta'\! r'\!\theta'} =2\frac{1}{(r_1-r_0)^2(r'-(1+\alpha)r_1)^2} \mu =-S'_{\theta'\!\theta'\! r'}=-S'_{\theta'\! r'\!\theta'} \\[1.6mm] D'_{\theta'\!\theta'\! r'}\!=\frac{r'+r_1r_0/(r_1-r_0)}{(r_1-r_0)^2(r'-(1+\alpha)r_1)^3}(2\mu+\lambda) =-S'_{r'\!\theta'\!\theta'}\! \; . \end{array} \label{sdwillwill} \end{equation} Similar expressions can be derived for the transformed density. Expressions in (\ref{scwillwill}) and (\ref{sdwillwill}) are more intricate than those in (\ref{sc}); thus, in the sequel, we focus on the transformed equations of motion (\ref{snavier}). \subsection{Interface conditions for Gauge ${\bf u}'(r',\theta')={\bf u}(r,\theta)$} \label{SectionIC} Perfect cloaking and a perfect concentrator require additional conditions on displacements and tractions at the interfaces between the domains with different material properties introduced by the transformation (\ref{PTransform}). In the transformed problem (\ref{snavier}) there are two interfaces, between domains A and B, at $r'=r'_0$ and $r=r_0$, and at the cloak's outer boundary, between domains B and C, at $r'=r=r_1$.
Transformed equations (\ref{snavier}) together with the assumption ${\bf u}'(r',\theta')={\bf u}(r,\theta)$ assure that displacements and tractions in the inhomogeneous transformed domain at a point $(r',\theta')$ coincide with displacements and tractions at the corresponding point $(r,\theta)$ in the original homogeneous problem (\ref{navier}), where no interfaces between different materials are present. In particular, if we introduce the Cauchy stress tensors ${\bf \sigma}'=\mbox{${\bf{C}}$}'\!:\!\nabla'{\bf u}'$ and ${\bf \sigma}=\mbox{${\bf{C}}$}\!:\!\nabla{\bf u}$ for the transformed and original problem, respectively, the following equality between tractions is verified \begin{equation} {\bf \sigma}'\cdot{\bf e}_r'={\bf \sigma}\cdot{\bf e}_r, \label{EqTrac} \end{equation} at $r'=r'_0$, $r=r_0$ and at $r'=r=r_1$. Equality (\ref{EqTrac}) can be easily demonstrated by using Nanson's formula \cite{Ogden1997} ${\bf e}_r'=J {\bf F}^{-t}{\bf e}_r$ for the radial unit vectors. Note that the matching is independent of the particular value assumed by $\alpha$. \subsection{Perfect cloak. Singularity at the inner interface} We note that for the perfect cloak \cite{brunapl}, i.e. $r_0=0$ and $\alpha=-1+r'_0/r'_1$, a point at $r=0$ is mapped into a disk of radius $r'_0$. This is a singular transformation and, at the cloak inner boundary, $r'-(1+\alpha)r_1\to 0$. Therefore, at $r'=r'_0$, from (\ref{srho}) and (\ref{sc}) one can see that $\rho'\to 0$, $C'_{r'\!r'\!r'\!r'\!}\,,\, C'_{r'\!\theta'\! r'\!\theta'}\to 0$, while $C'_{\theta'\!\theta'\! \theta'\!\theta'}\,, C'_{\theta'\! r'\!\theta'\!r'}\to \infty$. Similarly, at $r'=r'_0$, from (\ref{scwillwill}) and (\ref{sdwillwill}) one can see that $C''_{r'\!r'\!r'\!r'\!}\to 0$, while $C''_{\theta'\!\theta'\! \theta'\!\theta'}\,,C''_{r'\!r'\!\theta'\!\theta'\!}=C''_{\theta'\!\theta'\! r'\!r'} \,, C''_{r'\!\theta'\!\theta' \!r'}=C''_{\theta'\! r'\!r'\!\theta'} =C''_{r'\!\theta'\! r'\!\theta'}=C''_{\theta'\! r'\!\theta'\! r'}\to \infty$. Moreover, one notes that the rate of divergence is faster for ${\bf C''}$ than for ${\bf C}'$, and thus the anisotropy is even more extreme in the neighborhood of the inner boundary for ${\bf C''}$. All expressions in (\ref{sdwillwill}) diverge when $r'-(1+\alpha)r_1\to 0$. The required extreme anisotropy physically means that pressure and shear waves propagate with an infinite velocity in the azimuthal $\theta'$-direction and zero velocity in the radial $r'$-direction along the inner boundary, which results in a vanishing phase shift between a wave propagating in a homogeneous elastic space and another one propagating around the coated region. Clearly the presence of unbounded physical properties poses limitations on possible realizations and on the numerical implementation of the model; regularization techniques have been proposed introducing the concept of {\em near cloak} \cite{Kohn2008,Colquittetal2013,Jonesetal2015}, but the realization of such elastodynamic cloaks remains a challenge. \subsection{General transformation} We now wish, first, to extend the proposal by Rahm et al. \cite{rahm} of an omni-directional electromagnetic concentrator to the elastic setting and, second, to consider a more general transformation including folded transformed geometries, as proposed for the quasi-static equations of electromagnetism by Milton et al. \cite{Miltonetal2008}. We recall that the transformation (\ref{PTransform}) compresses/expands a disc with radius $r_0$ at the expense of an expansion/compression of the annulus between $r_0$ and $r_1$.
The inner disk is expanded for $-1<\alpha<-1+r_0'/r_1'$, with the limiting cases $\alpha=-1$ corresponding to the identity and $\alpha=-1+r_0'/r_1'$ to perfect cloaking. On the contrary, the disk is compressed for $\alpha<-1$ and $\alpha>0$, and, in the case of a positive value of the topological parameter $\alpha$, $r_0>r_1$ and a folding of the original geometry is obtained. The material remains homogeneous and isotropic in the inner disk A, where only the density is changed. In the annulus region B the material is heterogeneous and elastically anisotropic. Consistently with Brun et al. \cite{brunapl}, and differently from Milton et al. \cite{milton06b}, the density remains a scalar field. We stress that the heterogeneity is smoothly distributed and the material is functionally graded, without any jump in the material properties that could lead to scattering effects. As detailed above, interface conditions are automatically satisfied and do not introduce any scattering. It is important to note that, excluding the perfect cloaking case, for bounded values of $\alpha$ all the elastic rigidities and the density are bounded, allowing a possible physical and numerical implementation of the metamaterial structure. \subsection{The radial field concentrator. Unbounded $\alpha$} The limiting cases $\alpha\to \pm\infty$, where $r_0=r_1$, correspond asymptotically to the radial transformation \begin{equation} r'= \left\{ \begin{array}{ll} \frac{r}{r_1} & \mbox{in A},\\ r_1 & \mbox{in B},\\ r & \mbox{in C}. \end{array} \right. \label{eqn1001} \end{equation} In such a case, in the annulus region B, $C'_{\theta'\!\theta'\! \theta'\!\theta'}\,, C'_{\theta'\! r'\!\theta'\!r'}\to \infty$, $\rho'\to\infty$ and all the other elastic rigidity components are unchanged. In such a material, independently of the external field impinging on the metamaterial region $r'\le r'_1$, the elastic fields in B are radially independent and depend only on the azimuthal coordinate $\theta'$. However, the harmonic behavior cannot be reached in a finite time after the transient since the density $\rho'$ is unbounded. \section{Numerical results and discussion} In this section, we report the finite element computations performed in the COMSOL multiphysics package. Normalized material parameters are used. A cloak of density $\rho'$ (\ref{srho}) and elasticity tensor $\mbox{${\bf{C}}$}'$ (\ref{sc0},\ref{sc}) is embedded in an infinite isotropic elastic material with normalized Lam\'e moduli $\lambda=2.3$ and $\mu=1$, and mass density $\rho=1$. The cloak has inner and outer radii $r_0'=0.2$ and $r_1'=0.4$, respectively. The disc inside the cloak consists of the same elastic material as the outer medium but with a different density. We further consider a harmonic unit concentrated force applied either in the direction $x_1$ or $x_2$ and vibrating with an angular frequency $\omega=40$. This force is sometimes located outside the cloak (cf. Fig. \ref{fig02}-\ref{fig04}), sometimes inside the coating (cf. Fig. \ref{fig05}) or within the central disc (cf. Fig. \ref{fig06}), depending upon whether we are looking for some neutrality feature, lensing/mirage effect or some localization. Before we start looking at the cloak's features depending upon the ranges of values of the parameter $\alpha$, we briefly discuss the implementation of elastic perfectly matched layers (PMLs) in the framework of transformation elasticity.
\subsection{Implementation of elastic cylindrical PMLs} A perfectly matched layer has been implemented in order to model the infinite elastic medium surrounding the cloak (cf. outer ring in Figs. \ref{fig02}-\ref{fig06}); this has been obtained by application of the geometric transform \cite{zh02}, \begin{equation} x_i''=(1+a)\hat x_i-ax_i, \qquad i=1,2, \end{equation} for $|x_i|>|\hat{x}_i|$, where $a$ is now a complex number whose imaginary part accounts for the decay of the elastic waves and $\hat{x}_i=\pm 1$ in Fig. \ref{fig02}-\ref{fig06}. The corresponding (complex) density $\rho'''$ and elasticity tensor $\mbox{${\bf{C}}$}'''$ are still given by (\ref{srho}) and (\ref{sc}). The accuracy of the PMLs has been numerically validated when $a=i-1$, by comparison with the Green's function in homogeneous elastic space (cf. Eq. \ref{eqn100} and Fig. \ref{fig02}b, c). \begin{figure} \centering \includegraphics[width=6.2cm]{Cloak_Out_Hz.png} \vspace{0.2cm} \caption{Elastic field generated by a horizontal unit force applied in the external homogeneous region; $\alpha=-1+r'_0/r'_1=-0.5$, $\omega=40$, source ${\bf x}_0=(-0.42,0)$. Magnitude $u=\sqrt{u_1^2+u_2^2}$ of the displacement field in the system with inclusion and cloaking (a) and in a homogeneous system (b). (c) Comparison between the displacement magnitude $u$ computed in Comsol for a cloaked inclusion (black line) and the analytical Green's function in an infinite homogeneous linear elastic and isotropic material (gray line), see Eq. (\ref{eqn100}).} \label{fig02} \end{figure} \begin{figure} \centering \includegraphics[width=6.cm]{Cloak_Out_Vc.png} \vspace{0.2cm} \caption{Elastic field generated by a vertical unit force applied in the external homogeneous region. $\alpha=-1+r'_0/r'_1=-0.5$, $\omega=40$, source ${\bf x}_0=(-0.42,0)$. (a) Magnitude $u$ of the displacement field; (b) Comparison between the displacement magnitude $u$ computed in Comsol for a cloaked inclusion (black line) and the analytical Green's function in an infinite homogeneous linear elastic and isotropic material (gray line).} \label{fig03} \end{figure} We can therefore confidently carry out computations with these PMLs. We start with the study of an invisibility cloak for in-plane elastic waves, whereby the point source considered in \cite{brunapl} now lies in the close vicinity of the cloak (the intense near-field limit, where the acoustic ray picture breaks down). \subsection{Neutrality for a point source outside the cloak} We report in Fig. \ref{fig02} and Fig. \ref{fig03} the computations for a point force applied at a distance $r=0.42$ away from the center of the cloak, and thus close to the cloak itself, whose outer radius is $r'_1=0.4$. The force is applied in the horizontal direction in Fig. \ref{fig02} and in the vertical direction in Fig. \ref{fig03}. In both upper panels (a), we clearly see that the wave patterns of the magnitude of the displacement field are smoothly bent around the central region within the cloak (where the magnitude is uniform). The comparative analysis between panels (a) and (b) of Fig. \ref{fig02} shows that the wave pattern in the external homogeneous domain is not perturbed by the presence of the inclusion and the cloaking interface. This is verified quantitatively in Fig. \ref{fig02} panel (c) and in Fig.
\ref{fig03} panel (b) where the numerically computed wave pattern is compared with the Green's function in homogeneous elastic space \begin{equation} \label{eqn100} {\bf G}({\bf x})\!=\!\frac{i}{4\mu} \left\{ H_0^{(1)}(k_s r){\bf I}\!-\!\frac{Q}{\omega^2} \nabla\nabla\left[H_0^{(1)}(k_s r)\!-\!H_0^{(1)}(k_p r)\right] \right\}, \end{equation} with $H_0^{(1)}$ the Hankel function, $\bf I$ the second order identity tensor, $\nabla$ the gradient operator, $k_p=\omega/c_p$, $k_s=\omega/c_s$, $Q=(1/c_p^2+1/c_s^2)^{-1}(\lambda+\mu)/(\lambda+2\mu)$, $c_p=\sqrt{(\lambda+2\mu)/\rho}$, $c_s=\sqrt{\mu/\rho}$. The plots are reported along the horizontal line $x_2=0$ passing through the point of application of the force. The absence of forward or backward scattering is demonstrated by the excellent agreement between the two fields in the external homogeneous domain $r>0.4$. Clearly, the profile is much different when the coating is removed and the inner disc is clamped or freely vibrating. We also see that the field in the cloaking region has the same amplitude as the one in the homogeneous case but shifted following the transformation $r'(r)$; in the inner disk the field is homogeneous. Finally, the effectiveness of the PML domains can also be appreciated. In Fig. \ref{fig04} the deformation fields are also reported for both cases, where the force is applied in the horizontal (first column) and vertical (second column) directions. In the upper (a,b) and central (c,d) panels the skew-symmetric nature of the components $\varepsilon_{11}$ and $\varepsilon_{22}$ of the deformation tensor reveals the tensor nature of the problem. The component $\varepsilon_{12}$ leads to a non-intuitive pattern whereby fully-coupled shear and pressure components create the optical illusion of interferences. Again, the effect of the cloaking is shown: also for the deformation fields, waves are bent around the cloaking region without backward or forward scattering. \begin{figure} \centering \includegraphics[width=10cm]{Cloak_Out_Deff.png} \vspace{0.2cm} \caption{Elastic deformation fields generated by a unit force applied in the external homogeneous region. $\alpha=-1+r'_0/r'_1=-0.5$, $\omega=40$, source ${\bf x}_0=(-0.42,0)$. (a), (c), (e) Force applied in the horizontal direction $x_1$. (b), (d), (f) Force applied in the vertical direction $x_2$. (a), (b) Component $\varepsilon_{11}=\frac{\partial u_1}{\partial x_1}$; (c), (d) Component $\varepsilon_{22}=\frac{\partial u_2}{\partial x_2}$; (e), (f) Component $\varepsilon_{12}=\varepsilon_{21}=\frac{1}{2}(\frac{\partial u_1}{\partial x_2}+\frac{\partial u_2}{\partial x_1})$.} \label{fig04} \end{figure} \subsection{Mirage effect for a point source in the coating} In this section, we look at the case of a point force located inside the coating. In a way similar to what was observed for an electromagnetic circular cylindrical cloak \cite{zolla07}, we observe in Fig. \ref{fig05} a mirage effect: the point force seems to radiate from a location shifted towards the inner boundary (further away from an observer) as given by \begin{equation} r=\frac{(1+\alpha)r_1-r'}{\alpha} \; , \; \theta=\theta' \;, \label{invPTransform} \end{equation} as also shown in panel (b). Importantly, the profile of the shifted point source in homogeneous elastic space is superimposed on that of the point source located inside the coating, but only outside the cloak. In the invisibility region, i.e. the disc at the center of the cloak, the field is constant and this suggests that the central region behaves as a cavity.
We study this cavity phenomenon in the next section. The example in Fig. \ref{fig05} reveals that any object located inside the coating would appear to an observer as a different elastic material with a different shape. \begin{figure} \centering \includegraphics[width=7cm]{Cloak_Mid_Vc.png} \vspace{0.2cm} \caption{Elastic field generated by a vertical unit force applied in the cloaking region. $\alpha=-1+r'_0/r'_1=-0.5$, $\omega=40$, source ${\bf x}_0=(-0.3,0)$. (a) Magnitude $u$ of the displacement field; (b) Comparison between the displacement magnitude $u$ computed in Comsol for a cloaked inclusion (black line) and the analytical Green's function in an infinite homogeneous linear elastic and isotropic material (gray line), corresponding to a force applied at the shifted source point ${\bf x}_0=(-0.2,0)$.} \label{fig05} \end{figure} \subsection{Confinement for a point source in the central region} We now consider a point force inside the invisibility region. Interestingly, a point source located in the invisibility zone always radiates outside the cloak as if it were located at the origin, and this is quite natural, as the central disc is simply the image of the origin via the geometric transform (\ref{invPTransform}). The fact that the central disc behaves as a closed cavity is also intuitive, as the elasticity tensor $\mbox{${\bf{C}}$}'$ is singular on the boundary of the disc. \begin{figure} \centering \includegraphics[width=7cm]{Cloak_In_Vc.png} \vspace{0.2cm} \caption{Elastic field generated by a vertical unit force applied in the internal inclusion. $\alpha=-1+r'_0/r'_1=-0.5$, $\omega=40$, source ${\bf x}_0=(-0.17,0)$. (a) Magnitude $u$ of the displacement field; (b) Comparison between the displacement magnitude $u$ computed in Comsol for a cloaked inclusion (black line) and the analytical Green's function in an infinite homogeneous linear elastic and isotropic material (gray line), corresponding to a force applied at the origin ${\bf x}_0=(0,0)$.} \label{fig06} \end{figure} \subsection{Squeezing the wavelength with an elastic concentrator} \begin{figure} \centering \includegraphics[width=7cm]{PW_Pressure.png} \vspace{0.2cm} \caption{Elastic field generated by a pressure plane wave ${\bf u}=(A \exp(ik_px_1),0)$ with $\omega=60$. Left column: magnitude $u$ of the displacement field. Right column: comparison between the displacement magnitude $u$ computed in Comsol for a cloaked inclusion (black line) and the pressure plane wave in an infinite homogeneous linear elastic and isotropic material (gray line); results are plotted along a horizontal line passing through the center of the inclusion. (a) $\alpha=-0.5$, (b) $\alpha=-0.6$, (c) $\alpha=-1$, (d) $\alpha=-5$.} \label{fig07} \end{figure} We report the effects associated with an increase in the magnitude of the parameter $\alpha$ describing the linear transformation (\ref{PTransform}). In Fig. \ref{fig07} the effect of the cylindrical coating on the inclusion is given for a pressure plane wave ${\bf u}=(A \exp(ik_px_1),0)$ propagating in the horizontal direction $x_1$. A decrease of $\alpha$ from $\alpha=-1+r'_0/r'_1=-0.5$ introduces wave propagation within the inclusion with progressively shorter wavelengths, while the amplitude of the wave remains unchanged. From Fig. \ref{fig07}d it is evident that, when $\alpha<-1$, the interface acts as an energy concentrator within the inclusion, increasing the energy flux. The energy crossing the inclusion region $r'\le r'_0=0.2$ equals the energy crossing the larger region $r\le r_0$ in a homogeneous material.
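The radius $r_0$ of this equivalent homogeneous region follows from (\ref{invPTransform}) evaluated at the inner interface (again taking $r_1=r'_1=0.4$): \[ r_0=\frac{(1+\alpha)r'_1-r'_0}{\alpha}, \] so that, for instance, $r_0=\left((-4)(0.4)-0.2\right)/(-5)=0.36$ in the case $\alpha=-5$ of Fig. \ref{fig07}d.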
In the interval $-1>\alpha>-\infty$, $r'_0<r_0<r'_1$. We also note that, when $\alpha\neq -0.5$, the transformation is regular and the material parameters remain bounded, indicating additional advantages in technological and numerical implementations of the model. Last but not least, the field in the external domain remains unchanged. \begin{figure} \centering \includegraphics[width=7cm]{Pressure_PW_Folding.png} \vspace{0.2cm} \caption{Elastic field generated by a pressure plane wave ${\bf u}=(A \exp(ik_px_1),0)$ with $\omega=60$ and $\alpha=0.94$. (a) Magnitude $u$ of the displacement field. (b) Displacement magnitude $u$ computed in Comsol for a cloaked inclusion plotted along a horizontal line passing through the center of the inclusion.} \label{fig08} \end{figure} \subsection{Folding transformation. Superconcentration of an elastic wave with a cylindrical lens} We finally report in Fig. \ref{fig08} an enhanced energy concentration effect obtained from a folding transformation ($\alpha>0$). In such a case, all the energy crossing a circular region larger than the region delimited by the cloaking interface is concentrated into the inner inclusion. In Fig. \ref{fig08}, $\alpha=0.94$ and the radius of the circular region in the homogeneous space is $3.06$ times the radius of the inner inclusion. Again, such an effect is obtained by an increase in the energy flux, leaving unperturbed both the wave amplitude and the fields in the external region. \section{Conclusion} We have proposed to use stretched coordinates in order to design an elastic cloak bending the trajectory of in-plane coupled shear and pressure waves around an obstacle, concentrating them in its core, or focussing them. We investigated the transformed equations of motion for the Milton-Briane-Willis transformation gauge ${\bf u}'(r',\theta')={\bf F}^{-t}{\bf u}(r,\theta)$ and the Brun-Guenneau-Movchan gauge ${\bf u}'(r',\theta')={\bf u}(r,\theta)$ (see \cite{NorrisShuvalov2011}). The former leads to Willis's equations with more extreme anisotropic parameters in the cloak than for the latter. However, the latter requires a transformed elasticity tensor without the minor symmetries, which is another hurdle for a metamaterial design. We have studied various limiting cases for the value of a parameter in the considered radially symmetric linear geometric transforms. These transforms are applied to the design of neutral (invisibility) cloaks, elastic concentrators, or cylindrical lenses. We have numerically explored all the above for the gauge ${\bf u}'(r',\theta')={\bf u}(r,\theta)$ leading to a non-fully symmetric transformed elasticity tensor, and have notably shown that a source located inside the anisotropic heterogeneous elastic coating seems to radiate from a shifted location, and can also lead to anamorphism. We believe that our space folding based design of elastic cylindrical lenses can lead to an in-plane counterpart of the external cylindrical cloak for anti-plane shear waves introduced in \cite{guenneau2021time} and applied periodically in \cite{Meirbekova2020}. We hope our results might open new vistas in cloaking devices for elastodynamic waves. Whereas their governing equations do not generally retain their form under geometric transforms, unlike for electromagnetic and acoustic waves, one can choose specific gauges that can make the transformed equations of motion easier to handle.
\section{Introduction}\label{sec:Intro} The philosophy underlying the theory of trisections is that four-dimensional objects can be decomposed into three simple pieces whose intersections are well-enough controlled that all of the four-dimensional data can be encoded on the two-dimensional intersection of the three pieces, leading to new diagrammatic approaches to four-manifold topology. Trisections were first introduced for four-manifolds by Gay and Kirby in 2016~\cite{GayKir_16_Trisecting-4-manifolds}. A few years later, the theory was adapted to the setting of closed surfaces in four-manifolds by the author and Zupan~\cite{MeiZup_17_Bridge-trisections-of-knotted,MeiZup_18_Bridge-trisections-of-knotted}. The present article extends the theory to the general setting of neatly embedded surfaces in compact four-manifolds, yielding two diagrammatic approaches to the study of these objects: one that applies in general and one that applies when we restrict attention to surfaces in~$B^4$. To introduce bridge trisections of surfaces in $B^4$, we must establish some terminology. First, let $H$ be a three-ball $D^2\times I$, equipped with a critical-point-free Morse function $D^2\times I\to I$. Let $\mathcal T\subset H$ be a neatly embedded one-manifold such that the restriction of the Morse function to each component of $\mathcal T$ has either one critical point or none. If there are $b$ components with one critical point and $v$ with none, we call $(H,\mathcal T)$ a \emph{$(b,v)$--tangle}. Next, let $Z$ be a four-ball $B^3\times I$, equipped with a critical-point-free Morse function $B^3\times I\to I$. Let $\mathcal D\subset Z$ be a collection of neatly embedded disks such that the restriction of the Morse function to each component of $\mathcal D$ has either one critical point or none. If there are $c$ components with one critical point and $v$ with none, we call $(Z,\mathcal D)$ a \emph{$(c,v)$--disk-tangle}. Finally, let $\mathbb T_0$ denote the \emph{standard trisection} of $B^4$ -- i.e., the decomposition $B^4 = Z_1\cup Z_2\cup Z_3$ in which, for each $i\in\mathbb{Z}_3$, the $Z_i$ are four-balls, the pairwise intersections $H_i = Z_{i-1}\cap Z_i$ are three-balls, and the common intersection $\Sigma = Z_1\cap Z_2\cap Z_3$ is a disk. A neatly embedded surface $\mathcal F\subset B^4$ is in \emph{$(b,\bold c;v)$--bridge position} with respect to $\mathbb T_0$ if the following hold for each $i\in\mathbb{Z}_3$: \begin{enumerate} \item $\mathcal F\cap Z_i$ is a $(c_i,v)$--disk-tangle, where $\bold c = (c_1,c_2,c_3)$; and \item $\mathcal F\cap H_i$ is a $(b,v)$--tangle. \end{enumerate} A definition very similar to this one was introduced independently in~\cite{BlaCamTay_20_Kirby-Thompson-distance-for-trisections}. The trisection $\mathbb T_0$ induces the open-book decomposition of $S^3 =\partial B^4$ whose pages are the disks and whose binding is $\partial\Sigma$. Let $\mathcal L = \partial\mathcal F$, and let $\beta_i = S^3\cap\mathcal D_i$, where $\mathcal D_i = \mathcal F\cap Z_i$. Then $\mathcal L = \beta_1\cup\beta_2\cup\beta_3$ is braided about $\partial \Sigma$ with index $v$. Having outlined the requisite structures, we can state our existence result for bridge trisections of surfaces in the four-ball. \begin{reptheorem} {thm:four-ball} Let $\mathbb T_0$ be the standard trisection of $B^4$, and let $\mathcal F\subset B^4$ be a neatly embedded surface with $\mathcal L = \partial \mathcal F$. Fix an index $v$ braiding $\widehat\beta$ of $\mathcal L$.
Suppose $\mathcal F$ has a handle decomposition with $c_1$ cups, $n$ bands, and $c_3$ caps. Then, for some $b\in\mathbb{N}_0$, $\mathcal F$ can be isotoped to be in $(b,\bold c;v)$--bridge trisected position with respect to $\mathbb T_0$, such that $\partial\mathcal F = \widehat\beta$, where $c_2=b-n$. \end{reptheorem} Explicit in the above statement is a connection between the complexity parameters of a bridge trisected surface and the numbers of each type of handle in a Morse decomposition of the surface. An immediate consequence of this correspondence is the fact that a ribbon surface admits a bridge trisection where $c_3=0$. It turns out that this observation can be strengthened to give the following characterization of ribbon surfaces in $B^4$. Again, $\bold c = (c_1,c_2,c_3)$, and we set $c=c_1+c_2+c_3$. \begin{reptheorem}{thm:ribbon} Let $\mathbb T_0$ be the standard trisection of $B^4$, and let $\mathcal F\subset B^4$ be a neatly embedded surface with $\mathcal L = \partial \mathcal F$. Let $\widehat\beta$ be an index $v$ braiding of $\mathcal L$. Then, the following are equivalent. \begin{enumerate} \item $\mathcal F$ is ribbon. \item $\mathcal F$ admits a $(b,\bold c;v)$--bridge trisection filling $\widehat\beta$ with $c_i=0$ for some $i$. \item $\mathcal F$ admits a $(b,0;v+c)$--bridge trisection filling a Markov perturbation $\widehat\beta^+$ of $\widehat\beta$. \end{enumerate} \end{reptheorem} A bridge trisection turns out to be determined by its spine -- i.e., the union $(H_1,\mathcal T_1)\cup(H_2,\mathcal T_2)\cup(H_3,\mathcal T_3)$, and each tangle $(H_i,\mathcal T_i)$ can be faithfully encoded by a planar diagram. It follows that any surface in $B^4$ can be encoded by a triple of planar diagrams whose pairwise unions are planar diagrams for split unions of geometric braids and unlinks. We call such triples \emph{tri-plane diagrams}. \begin{repcorollary} {coro:tri-plane} Every neatly embedded surface in $B^4$ can be described by a tri-plane diagram. \end{repcorollary} In Section~\ref{sec:tri-plane}, we show how to read off the data of the braiding of $\mathcal L$ induced by a bridge trisection from a tri-plane diagram for the bridge trisection, and we describe a collection of moves that suffice to relate any two tri-plane diagrams corresponding to a given bridge trisection. The reader concerned mainly with surfaces in $B^4$ can focus their attention on Sections~\ref{sec:four-ball} and~\ref{sec:tri-plane}, referring to the more general development of the preliminary material given in Section~\ref{sec:general} when needed. Having summarized the results of the paper that pertain to the setting of $B^4$, we now describe the more general setting in which $X$ is a compact four-manifold with (possibly disconnected) boundary and $\mathcal F\subset X$ is a neatly embedded surface. To account for this added generality, we must expand the definitions given earlier for the basic building blocks of a bridge trisection. For ease of exposition, we will not record the complexity parameters, which are numerous in this setting; Section~\ref{sec:general} contains complete details. Let $H$ be a compressionbody $(\Sigma\times I)\cup(\text{3--dimensional 2--handles})$, where $\Sigma=\partial_+H$ is connected and may have nonempty boundary, while $P = \partial_-H$ is allowed to be disconnected but cannot contain two-sphere components. We work relative to the obvious Morse function on $H$.
Let $\mathcal T\subset H$ be a neatly embedded one-manifold such that the restriction of the Morse function to each component of $\mathcal T$ has either one critical point or none. We call $(H,\mathcal T)$ a \emph{trivial tangle}. Let $Z$ be a four-dimensional compressionbody $(P\times I\times I)\cup(\text{4--dimensional 1--handles})$, where $P$ is as above. We work relative to the obvious Morse function on $Z$. Let $\mathcal D\subset Z$ be a collection of neatly embedded disks such that the restriction of the Morse function to each component of $\mathcal D$ has either one critical point or none. We call $(Z,\mathcal D)$ a \emph{trivial disk-tangle}. Let $X$ be a compact four-manifold, and let $\mathcal F\subset X$ be a neatly embedded surface. A \emph{bridge trisection} of $(X,\mathcal F)$ is a decomposition $$(X,\mathcal F) = (Z_1,\mathcal D_1)\cup(Z_2,\mathcal D_2)\cup(Z_3,\mathcal D_3)$$ such that, for each $i\in\mathbb{Z}_3$, \begin{enumerate} \item $(Z_i,\mathcal D_i)$ is a trivial disk-tangle. \item $(H_i,\mathcal T_i) = (Z_{i-1},\mathcal D_{i-1})\cap(Z_i,\mathcal D_i)$ is a trivial tangle. \end{enumerate} We let $(\Sigma,\bold x) = \partial(H_i,\mathcal T_i)$. The underlying trisection $X = Z_1\cup Z_2\cup Z_3$ induces an open-book decomposition on each component of $Y=\partial X$, and we find that the bridge trisection of $\mathcal F$ induces a braiding of $\mathcal L = \partial \mathcal F$ with respect to these open-book decompositions. Given this set-up, our general existence result can now be stated. \begin{reptheorem}{thm:general} Let $\mathbb T$ be a trisection of a four-manifold $X$ with $\partial X = Y$, and let $(B,\pi)$ denote the open-book decomposition of $Y$ induced by $\mathbb T$. Let $\mathcal F$ be a neatly embedded surface in $X$; let $\mathcal L = \partial \mathcal F$; and fix a braiding $\widehat\beta$ of $\mathcal L$ about $(B,\pi)$. Then, $\mathcal F$ can be isotoped to be in bridge trisected position with respect to $\mathbb T$ such that $\partial \mathcal F = \widehat\beta$. If $\mathcal L$ already coincides with the braiding $\widehat\beta$, then this isotopy can be assumed to restrict to the identity on $Y$. \end{reptheorem} If $H$ is not a three-ball, then $(H,\mathcal T)$ cannot be encoded as a planar diagram, as before. However, $H$ is determined by a collection of curves $\alpha\subset \Sigma\setminus\nu(\bold x)$, and $\mathcal T$ is determined by a collection of arcs $\mathcal T^*$ in $\Sigma$, where the arcs of $\mathcal T^*$ connect pairs of points of $\bold x$. We call the data $(\Sigma,\alpha,\mathcal T^*)$, which determine the trivial tangle $(H,\mathcal T)$, a \emph{tangle shadow}. A triple of tangle shadows that satisfies certain pairwise-standardness conditions is called a \emph{shadow diagram}. Because bridge trisections are determined by their spines, we obtain the following corollary. \begin{repcorollary}{coro:shadow_describe} Let $X$ be a smooth, orientable, compact, connected four-manifold, and let $\mathcal F$ be a neatly embedded surface in $X$. Then, $(X,\mathcal F)$ can be described by a shadow diagram. \end{repcorollary} A detailed development of shadow diagrams is given in Section~\ref{sec:shadow}, where it is described how to read off the data of the braiding of $\mathcal L$ induced by a bridge trisection from a shadow diagram corresponding to the bridge trisection. Moves relating shadow diagrams corresponding to a fixed bridge trisection are given.
Section~\ref{sec:gluing} discusses how to glue two bridge trisected surfaces so that the result is bridge trisected, as well as how these gluings can be carried out with shadow diagrams. Section~\ref{sec:class} gives some basic classification results, as well as a handful of examples to add to the many examples included throughout Sections~\ref{sec:four-ball}--\ref{sec:gluing}. The proof of the main existence result, Theorem~\ref{thm:general}, is delayed until Section~\ref{sec:gen_proof}, though it requires only the content of Section~\ref{sec:general} to be accessible. In Section~\ref{sec:stabilize}, we discuss stabilization and perturbation operations that we conjecture are sufficient to relate any two bridge trisections of a fixed surface. A positive resolution of this conjecture would give complete diagrammatic calculi for studying surfaces via tri-plane diagrams and shadow diagrams. \subsection*{Acknowledgements} The author is deeply grateful to David Gay and Alexander Zupan for innumerable provocative and enlightening discussions about trisections over the last few years. The author would like to thank Juanita Pinz\'on-Caicedo and Maggie Miller for helpful suggestions and thoughts throughout this project. This work was supported in part by NSF grant DMS-1933019. \section{Preliminaries}\label{sec:general} In this section, we give a detailed development of the ingredients required throughout the paper, establishing notation conventions as we go. This section should probably be considered as prerequisite for all the following sections, save for Sections~\ref{sec:four-ball} and~\ref{sec:tri-plane}, which pertain to the consideration of surfaces in the four-ball. The reader interested only in this setting may be able to skip ahead, referring back to this section only as needed. \subsection{Some conventions} \label{subsec:conventions} \ Unless otherwise noted, all manifolds and maps between manifolds are assumed to be smooth, and manifolds are compact. The central objects of study here all have the form of a \emph{manifold pair} $(M,N)$, by which we mean that $N$ is \emph{neatly embedded} in $M$ in the sense that $\partial N\subset\partial M$ and $N\pitchfork \partial M$. Throughout, $N$ will usually have codimension two in $M$. In any event, we let $\nu(N)$ denote the interior of a tubular neighborhood of $N$ in $M$. If $M$ is oriented, we let $\overline{(M,N)}$ denote the pair $(M,N)$ with the opposite orientation and we call it the \emph{mirror} of $(M,N)$. We use the symbol $\sqcup$ to denote either the disjoint union or the split union, depending on the context. For example, writing $(M_1,N_1)\sqcup(M_2,N_2)$ indicates $M_1\cap M_2 = \emptyset$. On the other hand, $(M,N_1\sqcup N_2)$ indicates that $N_1$ and $N_2$ are \emph{split} in $M$, by which we usually mean there are disjoint, codimension zero balls $B_1$ and $B_2$ in $M$ (not necessarily neatly embedded) such that $N_i\subset\text{Int} B_i$ for each $i\in\{1,2\}$. \subsection{Lensed cobordisms} \label{subsec:Lensed} \ Given compact manifold pairs $(M_1,N_1)$ and $(M_2,N_2)$ with $\partial(M_1,N_1)\cong\partial(M_2,N_2)$ nonempty, we normally think of a cobordism from $(M_1,N_1)$ to $(M_2,N_2)$ as a manifold pair $(W,Z)$, where $$\partial(W,Z) = \left((M_1,N_1)\sqcup\overline{(M_2,N_2)}\right)\cup\left(\partial(M_1,N_1)\times I\right).$$ Thus, there is a cylindrical portion of the boundary. 
Consider the quotient space $(W',Z')$ of $(W,Z)$ obtained via the identification $(x,t)\sim(x,t')$ for all $x\in\partial M_1$ and $t,t'\in I$. The space $(W',Z')$ is diffeomorphic to $(W,Z)$, but we have $$\partial(W',Z') = (M_1,N_1)\cup_{\partial(M_1,N_1)}\overline{(M_2,N_2)}.$$ We refer to $(W',Z')$ as a \emph{lensed cobordism}. An example of a lensed cobordism is the submanifold $W'$ co-bounded by two Seifert surfaces for a knot $K$ in $S^3$ that are disjoint in their interior. If $W = M_1\times I$, then we call $W'$ a \emph{product lensed cobordism}. An example of a product lensed cobordism is the submanifold $W'$ co-bounded by two pages of an open-book decomposition on an ambient manifold $X$. See Figure~\ref{fig:lensed_tangle} below for examples of lensed cobordisms between surfaces that contain 1--dimensional cobordisms as neat submanifolds. We offer the following two important remarks regarding our use of lensed cobordisms. \begin{remark} \label{rmk:lensed} Throughout this article, we will be interested in cobordisms between manifolds with boundary. For this reason, lensed cobordisms are naturally well-suited for our purposes. However at times, we will be discussing cobordisms between closed manifolds (e.g. null-cobordisms). In this case, lensed cobordisms do not make sense. We request that the reader remember to drop the adjective `lensed', upon consideration of such cases. For example, if $(M,N)$ is any manifold pair with $N\subset\text{Int}(M)$ closed, then for the product lensed cobordism $(M,N)\times I$, we have that $M\times I$ is lensed, but $N\times I$ is not. \end{remark} \begin{remark} \label{rmk:no_Morse} Lensed cobordisms do not admit Morse functions where $(M_1,N_1)$ and $(M_2,N_2)$ represent distinct level sets, since $(M_1,N_1)\cap(M_2,N_2)\not=\emptyset$. However, the manifold pair $$(W'',Z'') = (W',Z')\setminus\nu(\partial(M_1,N_1))$$ does admit such a function and is trivially diffeomorphic to $(W',Z')$: We think of $(W'',Z'')$ as being formed by `indenting' $(W',Z')$ by removing $\nu(\partial (M_1,N_1))$. Note that there is a natural identification of $(W'',Z'')$ with the original (ordinary) cobordism $(W,Z)$. Since a generic Morse function on the cobordism $W''$ will not have critical points on its boundary, there is no loss of information here. We will have this modification in mind when we consider Morse functions on lensed cobordisms $(W',Z')$, which we will do throughout the paper. This subtlety illustrates that lensed cobordisms are unnatural in a Morse-theoretic approach to manifold theory, but we believe they are more natural in a trisection-theoretic approach. \end{remark} \subsection{Compressionbodies} \label{subsec:Compression} \ Given a surface $\Sigma$ and a collection $\alpha$ of simple closed curves on $\Sigma$, let $\Sigma^\alpha$ denote the surface obtained by surgering $\Sigma$ along $\alpha$. Let $H$ denote the three-manifold obtained by attaching a collection $\frak h_\alpha$ of three-dimensional 2--handles to $\Sigma\times[-1,1]$ along $\alpha\times\{1\}$, before filling in any resulting sphere components with balls. As discussed in Remark~\ref{rmk:lensed}, in the case that $\Sigma$ has nonempty boundary, we quotient out by the vertical portion of the boundary and view $H$ as a lensed cobordism from $\partial_+H = \Sigma$ to $\partial_-H=\Sigma^\alpha$. 
Considering $H$ as an oriented manifold yields the following decomposition: $$\partial H_\alpha = \partial_+H\cup_{\partial(\partial_+H)}\overline{\partial_-H}.$$ The manifold $H_\alpha$ is called a \emph{(lensed) compressionbody}. A collection $\mathcal D$ of disjoint, neatly embedded disks in a compressionbody $H$ is called a \emph{cut system} for $H$ if $H\setminus\nu(\mathcal D)\cong (\partial_-H)\times I$ or $H\setminus\nu(\mathcal D)\cong B^3$, according to whether $\partial(\partial_+H)=\partial(\partial_-H)$ is nonempty or empty. A collection of essential, simple closed curves on $\partial_+H$ is called a \emph{defining set of curves} for $H$ if it is the boundary of a cut system for $H$. In order to efficiently discuss compressionbodies $H$ for which $\partial_-H$ is disconnected, we will introduce the following terminology. \begin{definition} Given $m\in\mathbb{N}_0$, an \emph{ordered partition} of $m$ is a sequence $\bold m = (m_1,\ldots, m_n)$ such that $m_j\in\mathbb{N}_0$ and $\sum m_j = m$. We say that such an ordered partition is of type $(m,n)$. If $m_j>0$ for all $j$, then the ordered partition is called \emph{positive} and is said to be of type $(m,n)^+$. If $m_j = m'$ for all $j$ and some fixed $m'$, then the ordered partition is called \emph{balanced}. \end{definition} Let $\Sigma_g$ denote the closed surface of genus $g$, and let $\Sigma_{g,f}$ denote the result of removing $f$ disjoint, open disks from $\Sigma_g$. A surface $\Sigma$ with $n>1$ connected components is called \emph{ordered} if there is an ordered partition $\bold p = (p_1,\ldots, p_n)$ of $p\in\mathbb{N}_0$ and a positive ordered partition $\bold f = (f_1,\ldots, f_n)$ of $f\in\mathbb{N}$ such that $$\Sigma = \Sigma_{p_1,f_1}\sqcup\cdots\sqcup\Sigma_{p_n,f_n}.$$ We denote such an ordered surface by $\Sigma_{\bold p, \bold f}$, and we consider each $\Sigma_{p_j,f_j}\subset\Sigma_{\bold p,\bold f}$ to come equipped with an ordering of its $f_j$ boundary components, when necessary. Note that we are requiring each component of the \emph{disconnected} surface $\Sigma_{\bold p,\bold f}$ to have boundary. \begin{figure}[h!] \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=.8\linewidth]{lensed_tangle1} \caption{$(H_{2,0,1},\mathcal T_{2,3})$} \label{fig:lensed_tangle1} \end{subfigure}% \begin{subfigure}{.275\textwidth} \centering \includegraphics[width=.8\linewidth]{lensed_tangle2} \caption{$(H_{2,1,1},\mathcal T_{0,3})$} \label{fig:lensed_tangle2} \end{subfigure}% \begin{subfigure}{.275\textwidth} \centering \includegraphics[width=.8\linewidth]{lensed_tangle3} \caption{$(H_{0,(0,0),(1,1)},\mathcal T_{3,(1,3)})$} \label{fig:lensed_tangle3} \end{subfigure}% \caption{Three examples of trivial tangles inside lensed compressionbodies.} \label{fig:lensed_tangle} \end{figure} Let $H_{g,\bold p,\bold f}$ denote the lensed compressionbody satisfying \begin{enumerate} \item $\partial_+ H_{g,\bold p,\bold f} = \Sigma_{g,f}$, and \item $\partial_- H_{g,\bold p,\bold f} = \Sigma_{\bold p,\bold f}$. \end{enumerate} If $\alpha$ is a defining set for such a compressionbody, then $\alpha$ consists of $(n-1)$ separating curves and $(g-p)$ non-separating curves. See Figure~\ref{fig:lensed_tangle} for three examples of lensed compressionbodies, ignoring for now the submanifolds. Let $H_{p_j,f_j}$ denote the product lensed cobordism from $\Sigma_{p_j,f_j}$ to itself, and let $$H_{\bold p,\bold f} = \sqcup_{j=1}^n H_{p_j,f_j}.$$ We refer to $H_{\bold p,\bold f}$ as a \emph{spread}.
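To fix ideas, the curve count above can be checked against the three lensed compressionbodies of Figure~\ref{fig:lensed_tangle}: a defining set of curves for $H_{2,0,1}$ consists of two non-separating curves ($n=1$, $g-p=2$), a defining set for $H_{2,1,1}$ consists of a single non-separating curve ($n=1$, $g-p=1$), and a defining set for $H_{0,(0,0),(1,1)}$ consists of a single separating curve ($n=2$, $g-p=0$).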
A lensed compressionbody $H$ admits a Morse function $\Phi\colon H\to [-1,3]$, which, as discussed in Remark~\ref{rmk:no_Morse}, is defined on $H\setminus\nu(\partial(\partial_+H))$, such that $\Phi(\partial_+H)=-1$, $\Phi(\partial_-H) = 3$, and $\Phi$ has $(n-1)+(g-p)$ critical points, all of index two, and all lying in $\Phi^{-1}(2)$. We call such a $\Phi$ a \emph{standard} Morse function for $H$. For a positive natural number $I$, we let $\bold x_I\subset \Sigma_{g,f}$ denote a fixed collection of $I$ marked points. \subsection{Heegaard splittings and Heegaard-page splittings} \label{subsec:Heegaard} \ Let $M$ be an orientable three-manifold. A \emph{Heegaard splitting} of $M$ is a decomposition $$M = H_1\cup_\Sigma\overline H_2,$$ where $\Sigma\subset M$ is a neatly embedded surface $\Sigma_{g,f}$, and each $H_i$ is a lensed compressionbody $H_{g,\bold p, \bold f}$ with $\partial_+H_i = \Sigma$. It follows that $$\partial M = \overline{\partial_-H_1}\cup_{\partial\Sigma}\partial_-H_2.$$ We denote the Heegaard splitting by $(\Sigma; H_1, H_2)$, and we call it a $(g;\bold p, \bold f)$--splitting, in reference to the relevant parameters. Note that our notion of Heegaard splitting restricts to the usual notion when $M$ is closed, but is different from the usual notion when $M$ has boundary. Our Heegaard splittings are a special type of sutured manifold decomposition. Since each of the $H_i$ is determined by a collection $\alpha_i$ of curves on $\Sigma$, the Heegaard splitting, including $M$ itself, is determined by the triple $(\Sigma; \alpha_1, \alpha_2)$, which is called a \emph{Heegaard diagram} for $M$. \begin{remark} \label{rmk:ordering1} Note that we have defined Heegaard splittings so that the two compressionbodies are homeomorphic, since this is the only case we will be interested in. Implicit in the set-up are matching orderings of the components of the $\partial_-H_i$ in the case that $|\partial_-H_i|>1$. This will be important when we derive a Heegaard-page structure from a Heegaard splitting below. See also Remark~\ref{rmk:ordering2}. \end{remark} A Heegaard splitting $(\Sigma; H_1, H_2)$ with $H_i\cong H_{g, \bold p, \bold f}$ is called \emph{$(m,n)$--standard} if there are cut systems $\mathcal D_i = \{D_i^l\}_{l=1}^{n-1+g-p}$ for the $H_i$ such that \begin{enumerate} \item For $1\leq l\leq n-1$, we have $\partial D_1^l = \partial D_2^l$, and this curve is separating; \item For $n\leq l\leq m+n-1$, we have $\partial D_1^l = \partial D_2^l$, and this curve is non-separating; and \item For $m+n\leq l,l' \leq n-1+g-p$, we have $|\partial D_1^l \cap \partial D_2^{l'}|$ given by the Kronecker delta $\delta_{l,l'}$, and the curves $\partial D_1^l$ and $\partial D_2^l$ are non-separating. \end{enumerate} A Heegaard diagram $(\Sigma; \alpha_1, \alpha_2)$ is called \emph{$(m,n)$--standard} if $\alpha_i = \partial\mathcal D_i$ for cut systems $\mathcal D_i$ satisfying these three properties. See Figure~\ref{fig:bridge_double1} for an example. In a sense, a standard Heegaard splitting is a ``stabilized double''. The following lemma makes this precise. \begin{lemma}\label{lemma:std_Heeg} Let $(\Sigma; H_1, H_2)$ be an $(m,n)$--standard Heegaard splitting with $H_i\cong H_{g,\bold p,\bold f}$. Then, $$(\Sigma; H_1, H_2) = \left(\mathop\#\limits_{j=1}^n((\Sigma')^j; (H'_1)^j, (H'_2)^j)\right)\#(\Sigma''; H''_1, H''_2),$$ where $(H'_1)^j\cong(H'_2)^j\cong H_{p_j,f_j}$, for each $j = 1, \ldots, n$, and $(\Sigma'';H_1'',H_2'')$ is the standard genus $g-p$ Heegaard surface for $\#^m(S^1\times S^2)$.
\end{lemma} \begin{proof} Consider the $n$ regions of $\Sigma$ cut out by the $n-1$ separating curves that bound in each compressionbody. After a sequence of handleslides, we can assume that all of the non-separating curves of the $\alpha_i$ are contained in one of these regions. Once this is arranged, there is a separating curve $\delta$ in $\Sigma\setminus\nu(\alpha_1\cup\alpha_2)$ that cuts off a subsurface $\Sigma''$ such that $\Sigma''$ has only one boundary component (the curve $\delta$) and $g(\Sigma'') = g-p$. Since $\delta$ bounds in each of $H_1$ and $H_2$, we have that $(\Sigma;H_1,H_2) = (\Sigma';H_1',H_2')\#_\delta(\Sigma'';H_1'',H_2'')$, such that the latter summand is the standard splitting of $\#^m(S^1\times S^2)$, as claimed. The fact that the regions of $\Sigma'$ cut out by the separating curves that bound in both handlebodies contain no other curves of the $\alpha_i$ means that these curves give the connected sum decomposition $$(\Sigma'; H_1', H_2') = \left(\mathop\#\limits_{j=1}^n((\Sigma')^j; (H'_1)^j, (H'_2)^j)\right)$$ that is claimed. \end{proof} Let $H_1$ and $H_2$ be two copies of $H_{g, \bold p, \bold f}$, and let $h\colon\partial_+H_1\to \partial_+H_2$ be a diffeomorphism. Let $Y$ be the closed three-manifold obtained as the union of $H_1$ and $H_2$ along their boundaries such that $\partial_+H_1$ and $\partial_+H_2$ are identified via $h$ and $\partial_-H_1$ and $\partial_-H_2$ are identified via the identity on $\partial_-H_{g, \bold p, \bold f}$. The manifold $Y$ is called a \emph{Heegaard double} of $H_{g, \bold p, \bold f}$ along $h$. We say that a Heegaard double $Y$ is \emph{$(m,n)$--standard} if the Heegaard splitting $(\Sigma; H_1, H_2)$ is $(m,n)$--standard. Let $Y_{g,\bold p,\bold f}$ denote the Heegaard double of a standard Heegaard splitting whose compressionbodies are $H_{g,\bold p, \bold f}$. The uniqueness of $Y_{g,\bold p,\bold f}$ is justified by the following lemma, which is proved with slightly different terminology as Corollary~14 of~\cite{CasGayPin_18_Diagrams-for-relative-trisections}. \begin{lemma} \label{lem:HeegDouble} Let $M = H_1\cup_\Sigma\overline H_2$ be a standard Heegaard splitting with $H_i\cong H_{g,\bold p,\bold f}$. Then there is a unique (up to isotopy rel-$\partial$) diffeomorphism $\text{Id}_{(M,\Sigma)}\colon \partial_-H_1\to\partial_-H_2$ such that the identification space $M/_{x\sim\text{Id}_{(M,\Sigma)}(x)}$, where $x\in\partial_-H_1$, is diffeomorphic to the standard Heegaard double $Y_{g,\bold p,\bold f}$. \end{lemma} We now identify the total space of a standard Heegaard double. Let $\text{Id}_{p_j,f_j}\colon\Sigma_{p_j,f_j}\to\Sigma_{p_j,f_j}$ be the identity map, and let $M_{\text{Id}_{p_j,f_j}}$ be the total space of the abstract open-book $(\Sigma_{p_j,f_j},\text{Id}_{p_j,f_j})$. See Subsection~\ref{subsec:OBD}, especially Example~\ref{ex:Id_obd}, for definitions and details regarding open-book decompositions. \begin{lemma}\label{lemma:k-value} There is a decomposition $$ Y_{g,\bold p, \bold f} = \left(\mathop\#_{j=1}^nM_{\text{Id}_{p_j,f_j}}\right)\#(\#^m(S^1\times S^2)),$$ such that $\Sigma$ restricts to a page in each of the first $n$ summands and to a Heegaard surface in the last summand. Moreover, $$M_{\text{Id}_{p_j,f_j}}\cong\#^{2p_j+f_j-1}(S^1\times S^2),$$ so $Y_{g,\bold p, \bold f}\cong \#^k(S^1\times S^2)$, with $k = 2p+f-n+m$. 
\end{lemma} \begin{proof} Consider the abstract open-book $(\Sigma_{p_j,f_j}, \text{Id}_{p_j,f_j})$, and let $M_{\text{Id}_{p_j,f_j}}$ denote the total space of this abstract open-book. Pick two pages, $P_1$ and $P_2$, of the open-book decomposition of $M_{\text{Id}_{p_j,f_j}}$, and consider the two lensed cobordisms co-bounded thereby. Each of these pieces is a handlebody of genus $2p_j+f_j-1$, since it is diffeomorphic to $H_{p_j,f_j}$. A collection of arcs decomposing the page into a disk gives rise to a cut system for either handlebody, but these cut systems have the same boundary. The object described is a genus $2p_j+f_j-1$ (symmetric) Heegaard splitting for $\#^{2p_j+f_j-1}(S^1\times S^2)$. The rest of the proof follows from Lemma~\ref{lemma:std_Heeg}. \end{proof} \begin{figure}[h!] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{bridge_double1} \caption{} \label{fig:bridge_double1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{bridge_double2} \caption{} \label{fig:bridge_double2} \end{subfigure}% \caption{(A) A $(1,2)$--standard Heegaard diagram for the standard Heegaard double $Y_{4,(1,0),(2,1)}$. (B) A schematic showing the standard Heegaard double $Y_{2,1,1}$, containing a $(3,4)$--bridge splitting for an unlink. The unlink has no flat components and four vertical components.} \label{fig:bridge_double} \end{figure} Let $Y$ be a standard Heegaard double. We consider the lensed compressionbodies $H_1$ and $H_2$ as embedded submanifolds of $Y$ in the following way, which is a slight deviation from the way they naturally embed in the Heegaard double. For $i=1,2$, let $P_i^j$ denote the result of a slight isotopy of $\partial_-H_i^j$ into $H_i$ along the product structure induced locally by the lensed cobordism structure of $H_i$. Let $Y_1^j$ denote the lensed product cobordism co-bounded by $P_1^j$ and $P_2^j$. In this way, we think of the Heegaard double $Y$ as divided into three regions: $H_1$, $H_2$, and $\sqcup_jY_1^j$, each of whose connected components is a lensed compressionbody. The union of $H_1$ and $H_2$ along their common, southern boundary, which we denote by $\Sigma$, is a standard Heegaard splitting, and each $Y_1^j$ is the product lensed cobordism $H_{p_j,f_j}$. See Figure~\ref{fig:bridge_double2}, as well as Figure~\ref{fig:ortn_conv}, for a schematic illustration of this structure. We call this decomposition a \emph{(standard) Heegaard-page structure} and note that it is determined by the Heegaard splitting data $(\Sigma,H_1,H_2)$, by Lemma~\ref{lem:HeegDouble}. \subsection{Trivial tangles} \label{subsec:Tangles} \ A \emph{tangle} is a pair $(H,\mathcal T)$, where $H$ is a compressionbody and $\mathcal T$ is a collection of neatly embedded arcs in $H$, called \emph{strands}. Let $\Phi$ be a standard Morse function for $H$. After an ambient isotopy of $\mathcal T$ rel-$\partial$, we can assume that $\Phi$ restricts to $\mathcal T$ to give a Morse function $\Phi|_\mathcal T\colon\mathcal T\to[-1,3]$ such that each local maximum of $\mathcal T$ maps to $1\in[-1,3]$ and each local minimum maps to $0\in[-1,3]$. We have arranged that $\Phi$ be self-indexing on $H$ and when restricted to $\mathcal T$. A strand $\tau\subset \mathcal T$ is called \emph{vertical} if $\tau$ has no local minimum or maximum with respect to $\Phi|_\mathcal T$ and is called \emph{flat} if $\tau$ has a single local extremum, which is a maximum.
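For example, in the trivial tangles of Figure~\ref{fig:lensed_tangle}, the tangle $\mathcal T_{2,3}$ of Figure~\ref{fig:lensed_tangle1} consists of two flat strands and three vertical strands, while $\mathcal T_{0,3}$ of Figure~\ref{fig:lensed_tangle2} consists of three vertical strands and no flat strands; the subscripts record precisely these counts, as formalized below.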
Note that vertical strands have one boundary point in each of $\partial_+H$ and $\partial_-H$, while flat strands have both boundary points in $\partial_+H$. A tangle $\mathcal T$ is called \emph{trivial} if it is isotopic rel-$\partial$ to a tangle all of whose strands are vertical or flat. Such a tangle with $b$ flat strands and $v$ vertical strands is called a \emph{$(b,v)$--tangle}, with the condition that it be trivial implicit in the terminology. More precisely, if $H\cong H_{g,\bold p,\bold f}$, then we have an ordered partition of the vertical strands determined by which component $\Sigma_{p_j,f_j}$ of $\partial_-H\cong\Sigma_{\bold p,\bold f}$ contains the top-most endpoint of each vertical strand, and we can more meticulously describe $\mathcal T$ as a \emph{$(b,\bold v)$--tangle}. See Figure~\ref{fig:lensed_tangle} for three examples of trivial tangles in lensed compressionbodies. \begin{remark} \label{rmk:br_strand} In this paper, any tangle $(H,\mathcal T)$ with $\partial_+ H$ disconnected will not contain flat strands. Moreover, such an $H$ will always be a spread $H\cong H_{\bold p,\bold f}$. Therefore, we will never partition the flat strands of $\mathcal T$. \end{remark} There is an obvious model tangle $(H_{g,\bold p,\bold f},\mathcal T_{b,\bold v})$ that is a lensed cobordism from $(\Sigma_{g,\bold f},\bold x_{2b+v})$ to $(\Sigma_{\bold p,\bold f},\bold y_{\bold v})$ in which the first $2b$ points of $\bold x_{2b+v}$ are connected by slight push-ins of arcs in $\Sigma_{g,\bold f}$, and the final $v$ rise vertically to $\Sigma_{\bold p,\bold f}$, as prescribed by the standard height function on $H_{g,\bold p,\bold f}$ and the ordered partitions. The points $\bold x_{2b+v}$ are called \emph{bridge points}. A pair $(H,\mathcal T)$ is determined up to diffeomorphism by the parameters $g$, $b$, $\bold p$, $\bold f$, and $\bold v$, and we refer to any tangle with these parameters as a $(g,b;\bold p,\bold f,\bold v)$--tangle. Note that this diffeomorphism can be assumed to be supported near $\partial_+H$ and can be understood as a braiding of the bridge points $\bold x_{2b+v}$. For this reason, we consider trivial tangles up to isotopy rel-$\partial$, and we think of each such tangle as having a fixed identification of the subsurface $(\Sigma_{g,f},\bold x_{2b+v})$ of its boundary. Let $\tau$ be a strand of a trivial tangle $(H,\mathcal T)$. Suppose first that $\tau$ is flat. A \emph{bridge semi-disk for $\tau$} is an embedded disk $D_\tau\subset H$ satisfying $\partial D_\tau = \tau\cup\tau^*$, where $\tau^*$ is an arc in $\partial_+H$ with $\partial\tau^* = \partial \tau$, and $D_\tau\cap\mathcal T = \tau$. The arc $\tau^*$ is called a \emph{shadow} for $\tau$. Now suppose that $\tau$ is vertical. A \emph{bridge triangle for $\tau$} is an embedded disk $D_\tau\subset H$ satisfying $\partial D_\tau = \tau\cup\tau^*\cup\tau^-$, where $\tau^*$ (respectively, $\tau^-$) is an arc in $\partial_+H$ (respectively, $\partial_-H$) with one endpoint coinciding with an endpoint of $\tau$ and the other endpoint on $\partial(\partial_+H)$, coinciding with the other endpoint of $\tau^-$ (respectively, $\tau^*$), and $D_\tau\cap\mathcal T =\tau$. \begin{remark} Note that the existence of a bridge triangle for a vertical strand $\tau$ requires that $\partial_-H$ have boundary; there is no notion of a bridge disk for a vertical strand in a compressionbody co-bounded by closed surfaces.
In this paper, if $\partial_+H$ is ever closed, $H$ will be a handlebody and will not contain vertical strands, so bridge semi-disks and triangles will always exist for trivial tangles that we consider. \end{remark} Given a trivial tangle $(H,\mathcal T)$, a \emph{bridge disk system for $\mathcal T$} is a collection $\Delta$ of disjoint disks in $H$, each component of which is a bridge semi-disk or triangle for a strand of $\mathcal T$, such that $\Delta$ contains precisely one bridge semi-disk or triangle for each strand of $\mathcal T$. \begin{lemma} Let $(H,\mathcal T)$ be a trivial tangle such that either $\partial_+H$ has nonempty boundary or $\mathcal T$ contains no vertical strands. Then, there is a bridge disk system $\Delta$ for $\mathcal T$. \end{lemma} \begin{proof} There is a diffeomorphism from $(H,\mathcal T)$ to $(H_{g,\bold p,\bold f},\mathcal T_{b,\bold v})$, as discussed above. This latter tangle has an obvious bridge disk system: The `slight push-in' of each flat strand sweeps out a disjoint collection of bridge semi-disks for these strands, while the points $x\in\bold x_{2b+v}$ corresponding to vertical strands can be connected to $\partial \Sigma_{g,\bold f}$ via disjoint arcs, the vertical traces of which are disjoint bridge triangles for the vertical strands. Pulling back this bridge system to $(H,\mathcal T)$ using the inverse diffeomorphism completes the proof. \end{proof} We will refer to a $(0,\bold v)$--tangle as a \emph{vertical $\bold v$--tangle} and to a $(b,0)$--tangle as a \emph{flat $b$--tangle}. In the case that $\mathcal T$ is a vertical tangle in a spread $H\cong H_{\bold p,\bold f}$, we call $\mathcal T$ a \emph{$\bold v$--thread} and call the pair $(H,\mathcal T)$ a \emph{$(\bold p,\bold f,\bold v)$--spread}. Note that a $(p,f,v)$--spread is simply a lensed geometric (surface) braid; in particular, a $(0,1,v)$--spread is a lensed geometric braid $(D^2\times I,\beta)$. \subsection{Bridge splittings} \label{subsec:Bridge} \ Let $K$ be a neatly embedded one-manifold in a three-manifold $M$. A \emph{bridge splitting} of $K$ is a decomposition $$(M,K) = (H_1,\mathcal T_1)\cup_{(\Sigma,\bold x)}\overline{(H_2,\mathcal T_2)},$$ where $(\Sigma; H_1, H_2)$ is a Heegaard splitting for $M$ and $\mathcal T_i\subset H_i$ is a trivial tangle. If $\mathcal T_1$ is a trivial $(b,\bold v)$--tangle, then we require that $\mathcal T_2$ be a trivial $(b,\bold v)$--tangle, and we call the decomposition a \emph{$(g,\bold p,\bold f; b, \bold v)$--bridge splitting}. A one-manifold $K\subset M$ is in \emph{$(b,\bold v)$--bridge position} with respect to a Heegaard splitting of $M$ if $K$ intersects the compressionbodies $H_i$ as a $(b,\bold v)$--tangle. \begin{remark} \label{rmk:ordering2} As we have assumed a correspondence between the components of the $\partial_-H_i$ (see Remark~\ref{rmk:ordering1}), we can require that the partitions of the vertical strands of the $\mathcal T_i$ respect this correspondence. This is the sense in which both $\mathcal T_i$ are $(b,\bold v)$--tangles. This will be important when we turn a bridge splitting into a bridge-braid decomposition below. \end{remark} Consider the special case that $M$ is the trivial lensed cobordism between $\partial_-H_1$ and $\partial_-H_2$ and $K\subset M$ is a $v$--braid -- i.e., isotopic rel-$\partial$ so that it intersects each level surface of the trivial lensed cobordism transversely. (Note that the $\partial_-H_i$ are necessarily connected, since $\Sigma$ is.)
If each $\mathcal T_i=H_i\cap K$ is a trivial $(g,b;p,f,v)$--tangle, we call the union $(H_1,\mathcal T_1)\cup_{(\Sigma,\bold x)}\overline{(H_2,\mathcal T_2)}$ a \emph{$b$--perturbing of a $v$--braid}. More generally, we say that a bridge splitting is \emph{standard} if the underlying Heegaard splitting $M = H_1\cup_\Sigma\overline{H_2}$ is standard (as defined in Subsection~\ref{subsec:Heegaard} above) and there are collections of bridge semi-disks $\Delta_i$ for the flat strands of the tangles $\mathcal T_i$ whose corresponding shadows $\mathcal T_i^*$ have the property that $\mathcal T_1^*\cup_\bold x\mathcal T_2^*$ is an embedded collection of polygonal arcs and curves. As a consequence, if $(M,K)$ admits a standard bridge splitting, then $K$ is the split union of an unlink (with one component corresponding to each polygonal curve of shadow arcs) with a braid (with one strand corresponding to each polygonal arc of shadow arcs). As described in Lemma~\ref{lemma:std_Heeg}, the ambient manifold $M$ is a connected sum of copies of surfaces cross intervals and copies of $S^1\times S^2$. Let $(H_1,\mathcal T_1)$ and $(H_2,\mathcal T_2)$ be two copies of the model tangle $(H_{g,\bold p,\bold f},\mathcal T_{b,\bold v})$, and let $$h\colon\partial_+(H_1,\mathcal T_1)\to\partial_+(H_2,\mathcal T_2)$$ be a diffeomorphism. Let $(Y,L)$ be the pair obtained as the union of $(H_1,\mathcal T_1)$ and $(H_2,\mathcal T_2)$, where the boundaries $\partial_+(H_i,\mathcal T_i)$ are identified via $h$ and the boundaries $\partial_-(H_i,\mathcal T_i)$ are identified via the identity map of $\partial_-(H_{g,\bold p,\bold f},\mathcal T_{b,\bold v})$. We call the pair $(Y,L)$ a \emph{bridge double} of $(H_{g,\bold p,\bold f},\mathcal T_{b,\bold v})$ along $h$. Note that a component of $L$ can be referred to as \emph{flat} or \emph{vertical} depending on whether or not it is disjoint from $\partial_-H_i$. We say that the bridge double is \emph{standard} if \begin{enumerate} \item the bridge splitting $(H_1,\mathcal T_1)\cup_{(\Sigma,\bold x)}\overline{(H_2,\mathcal T_2)}$ is standard, and \item $L$ has exactly $v$ vertical components. In other words, each component of $L$ hits $\partial_-H_i$ exactly once or not at all. \end{enumerate} Let $(Y_{g,\bold p,\bold f},L_{b,\bold v})$ denote the bridge double of a standard bridge splitting with $(H_i,\mathcal T_i)\cong (H_{g,\bold p,\bold f},\mathcal T_{b,\bold v})$. The uniqueness of the \emph{standard bridge double} $(Y_{g,\bold p,\bold f},L_{b,\bold v})$ is given by the following lemma, which generalizes Lemma~\ref{lem:HeegDouble} above. \begin{lemma} \label{lem:BridgeDouble} Let $(M,K)=(H_1,\mathcal T_1)\cup_{(\Sigma,\bold x)}\overline{(H_2,\mathcal T_2)}$ be a standard bridge splitting with $(H_i,\mathcal T_i)\cong (H_{g,\bold p,\bold f},\mathcal T_{b,\bold v})$. Then there is a unique (up to isotopy rel-$\partial$) diffeomorphism $\text{Id}_{(M,K,\Sigma)}\colon \partial_-(H_1,\mathcal T_1)\to\partial_-(H_2,\mathcal T_2)$ such that the identification space $(M,K)/_{x\sim\text{Id}_{(M,K,\Sigma)}(x)}$, where $x\in\partial_-(H_1,\mathcal T_1)$, is diffeomorphic to the standard bridge double $(Y_{g,\bold p,\bold f},L_{b,\bold v})$. \end{lemma} \begin{proof} Let $(M,K)$ be a standard bridge splitting. Suppose $(Y,L)$ is the bridge double obtained via the gluing map $\text{Id}_{(M,\Sigma)}\colon\partial_-H_1\to\partial_-H_2$, which is determined uniquely up to isotopy rel-$\partial$ by Lemma~\ref{lem:HeegDouble}.
The claim that must be justified is that $\text{Id}_{(M,\Sigma)}$ is unique up to isotopy rel-$\partial$ when considered as a map of pairs $\partial_-(H_1,\bold y_1)\to\partial_-(H_2,\bold y_2)$. Criterion (2) of a standard bridge double above states that $K$ must close up to have $v$ vertical components, where $v$ is the number of vertical strands in the splitting $(M,K)$. It follows that $\text{Id}_{(M,\Sigma)}$ restricts to the identity permutation as a map $\bold y_1\to\bold y_2$ -- i.e., the end of a vertical strand in $\bold y_1$ must get matched with the end of the same strand in $\bold y_2$. Let $(M,K)^\circ$ denote the pair obtained by deperturbing the vertical arcs of $K$ so that they have no local extrema, then removing tubular neighborhoods of them. Note that $(M,K)^\circ$ is a standard bridge splitting (of the flat components of $K$) of type $(g,\bold p,\bold f';b',0)$. The restriction $\text{Id}_{(M,\Sigma)}^\circ$ to $(\partial_-H_1)^\circ$ is the identity on $\partial(\partial_-H_1)^\circ$, so we can apply Lemma~\ref{lem:HeegDouble} to conclude that $\text{Id}_{(M,\Sigma)}^\circ$ is unique up to isotopy rel-$\partial$. Since $\text{Id}_{(M,\Sigma)}^\circ$ extends uniquely to a map $\text{Id}_{(M,K,\Sigma)}$ of pairs, as desired, we are done. \end{proof} Finally, consider a standard bridge double $(Y_{g,\bold p,\bold f},L_{b,\bold v})$, and recall the Heegaard-page structure on $Y_{g,\bold p,\bold f}$. This induces a structure on $L$ that we call a \emph{bridge-braid structure}. In particular, we have \begin{enumerate} \item $\mathcal T_i = L\cap H_i$ is a $(b,\bold v)$--tangle, and \item $\beta_1^j = L\cap Y_1^j$ is a $v_j$--braid. \end{enumerate} \subsection{Disk-tangles} \label{subsec:DiskTangles} \ Let $Z_k$ denote the four-dimensional 1--handlebody $\natural^k(S^1\times B^3)$. Given nonnegative integers $p$, $f$, $m$, and $n$ such that $k=2p+f-n+m$ and ordered partitions $\bold p$ and $\bold f$ of $p$ and $f$ of length $n$, there is a natural way to think of $Z_k$ as a lensed cobordism from the spread $Y_1 = H_{\bold p,\bold f}$ to the $(m,n)$--standard Heegaard splitting $(\Sigma;H_1,H_2) = (\Sigma_{g,\bold f}; H_{g,\bold p,\bold f},H_{g,\bold p,\bold f})$. Starting with $Y_1\times[0,1]$, attach $m+n-1$ four-dimensional 1--handles to $Y_1\times\{1\}$ so that the resulting four-manifold is connected. The three-manifold resulting from this surgery on $Y_1\times\{1\}$ is $H_1\cup_\Sigma\overline{H_2}$, and the induced structure on $\partial Z_k$ is that of the standard Heegaard-page structure on $Y_{g,\bold p,\bold f}$. With this extra structure in mind, we denote this distinguished copy of $Z_k$ by $Z_{g,k;\bold p,\bold f}$. A \emph{disk-tangle} is a pair $(Z,\mathcal D)$ where $Z\cong Z_k$ and $\mathcal D$ is a collection of neatly embedded disks. A disk-tangle is called \emph{trivial} if $\mathcal D$ can be isotoped rel-$\partial$ to lie in $\partial Z$. \begin{proposition}\label{prop:triv_disks} Let $\mathcal D$ and $\mathcal D'$ be trivial disk-tangles in $Z$. If $\partial \mathcal D =\partial \mathcal D'$, then $\mathcal D$ and $\mathcal D'$ are isotopic rel-$\partial$ in $Z$. \end{proposition} \begin{proof} The case when $Z\cong B^4$ is a special case of a more general result of Livingston~\cite{Liv_82_Surfaces-bounding-the-unlink}, and is also proved in~\cite{Kam_02_Braid-and-knot}. See~\cite{MeiZup_18_Bridge-trisections-of-knotted} for the general case.
\end{proof} A trivial disk-tangle $(Z,\mathcal D)$ inherits extra structure along with $Z_{g,k;\bold p,\bold f}$, since we can identify $\partial\mathcal D$ with an unlink $L$ in standard $(b,\bold v)$--bridge position in $Y_{g,\bold p,\bold f}$. In this case, a disk $D\subset\mathcal D$ is called \emph{vertical} (resp., \emph{flat}) if it corresponds to a vertical (resp., flat) component of $L$. With this extra structure in mind, we call a trivial disk-tangle a \emph{$(c,\bold v)$--disk-tangle} and denote it by $\mathcal D_{c,\bold v}$. Note that $\mathcal D_{c,\bold v}$ is a tangle of $c+v$ disks. We call the pair $(Z_{g,k;\bold p,\bold f},\mathcal D_{c,\bold v})$ a \emph{$(g,k,c;\bold p,\bold f,\bold v)$--disk-tangle}. Note that Proposition~\ref{prop:triv_disks} respects this extra structure, since part of the hypothesis was that the two disk systems have the same boundary. See Figure~\ref{fig:disk_tangle} for a schematic illustration. \begin{figure}[h!] \centering \includegraphics[width=.4\textwidth]{disk-tangle} \caption{A schematic of the disk-tangle $\mathcal D_{1,2}$, which contains one flat component and two vertical components. Note that the 3--component unlink on the boundary is in $(3,2)$--bridge position with respect to the standard Heegaard double $Y_{0,0,1}$ for the 3--sphere.} \label{fig:disk_tangle} \end{figure} The special structure on $Z_{g,k;\bold p, \bold f}$ described above induces a special Morse function $\Phi\colon Z \to \mathbb{R}$ with $m+n-1$ critical points, all of which are index one. The next lemma characterizes trivial disk-tangles with respect to this standard Morse function. \begin{lemma}\label{lem:one_min} Let $Z = Z_{g,k;\bold p, \bold f}$, and let $\mathcal D\subset Z$ be a collection of neatly embedded disks with $\partial\mathcal D\cap Y_1$ a $\bold v$--thread. Suppose the restriction $\Phi_\mathcal D$ of $\Phi$ to $\mathcal D$ has $c$ critical points, each of which is index zero. Then $\mathcal D$ is a $(c,\bold v)$--disk-tangle for some ordered partition $\bold v$ of $v=|\mathcal D|-c$. \end{lemma} \begin{proof} We parameterize $\Phi\colon Z\to\mathbb{R}$ so that $\Phi(Z) = [0,1.5]$, $\Phi^{-1}(0) = Y_1\setminus\nu(P_1\cup P_2)$, $\Phi^{-1}(1.5) = (H_1\cup_\Sigma\overline{H_2})\setminus\nu(\overline{P_1}\cup P_2)$, and $\Phi(x) = 0.5$ for each critical point $x\in Z$ of $\Phi$. Let $\Gamma$ denote the cores of the 1--handles of $Z$. By a codimension argument, we can assume, after a small perturbation of $\Phi$ that doesn't introduce any new critical points, that $\mathcal D$ is disjoint from a neighborhood $\nu(\Gamma)\cup Y_1\times[0,1]$. Thus, we can assume that $\Phi_\mathcal D(x) = 1.0$ for any critical point $x\in\mathcal D$ of $\Phi_\mathcal D$. First, note that $0\leq c\leq |\mathcal D|$; each connected component of $\mathcal D$ can have at most one minimum, since $\Phi_\mathcal D$ has no higher-index critical points. Let $\{D_i\}_{i=1}^c\subset\mathcal D$ denote the sub-collection of disks in $\mathcal D$ that contain the index zero critical points of $\Phi_\mathcal D$. We claim that $D=\cup_{i=1}^cD_i$ is a $(c,0)$--disk-tangle. We will now proceed to construct the required boundary-parallelism. Consider the moving picture of the intersection $D_{\{t\}}$ of $D$ with the cross-section $Z_{\{t\}} = \Phi^{-1}(1+t)$ for $t\in[0,0.5]$. This movie shows the birth of a $c$--component unlink $L$ from $c$ points at time $t=0$, followed by an ambient isotopy of $L$ as $t$ increases.
Immediately after the birth, say $t=\epsilon$, we have that the sub-disks $D_{[1,1+\epsilon]} = D\cap\Phi^{-1}([1,1+\epsilon])$ of $D$ are clearly boundary-parallel to a spanning collection of disks $E_\epsilon$ for $L_\epsilon = D_{\{1+\epsilon\}}$. Now, we simply push this spanning collection of disks $E_\epsilon$ along through the isotopy taking $L_\epsilon$ to $\partial D$. Because this isotopy is ambient, the traces of the disks of $E_\epsilon$ are disjoint, and thus they provide a boundary-parallelism for $D$, as desired. It remains to see that the collection $D''$ of disks in $\mathcal D$ containing no critical points of $\Phi_\mathcal D$ is also boundary-parallel. Note, however, that they will not be boundary-parallel into $\Phi^{-1}(1.5)$, as before. Let $\beta = D''\cap Y_1$; by hypothesis, $(Y_1,\beta)$ is a $(\bold p, \bold f, \bold v)$--spread, i.e., $Y_1$ is a product lensed cobordism (a spread) $H_{\bold p,\bold f}$ and $\beta$ is a vertical $\bold v$--tangle (a $\bold v$--thread) therein. Similar to before, we can assume that $D''$ is disjoint from a small neighborhood of the cores of the 1--handles. Since $D''$ contains no critical points, it is vertical in the sense that we can think of it as the trace of an ambient isotopy of $\beta$ in $Y_1$ as $t$ increases from $t=0$ to $t=0.5$, followed by the trace of an ambient isotopy of $\beta$ in $H_1\cup_\Sigma\overline{H_2}$ between $t=0.5$ and $t=1.5$. The change in the ambient space is not a problem, since $D''$ is disjoint from the cores $\Gamma$ of the 1--handles, hence these isotopies are supported away from the four-dimensional critical points. If $\Delta$ is any choice of bridge triangles for $\beta$ in $Y_1$, then the trace of $\Delta$ under this isotopy gives a boundary-parallelism of $D''$, as was argued above. We omit the details in this case. \end{proof} Note that the assumption that $\beta$ be a thread was vital in the proof, as it gave the existence of~$\Delta$. If $\beta$ contained knotted arcs, the vertical disk sitting over such an arc would not be boundary parallel. Similarly, if $\beta$ contained closed components, the vertical trace would be an annulus, not a disk. The converse to the lemma is immediate, hence it provides a characterization of trivial disk-tangles. We next show how a standard bridge splitting can be uniquely extended to a disk-tangle. The following lemma builds on portions of~\cite[Section~4]{CasGayPin_18_Diagrams-for-relative-trisections}. \begin{lemma} \label{lem:LP} Let $(M,K) = (H_1,\mathcal T_1)\cup_{(\Sigma,\bold x)}\overline{(H_2,\mathcal T_2)}$ be a standard $(g,\bold p, \bold f;b, \bold v)$--bridge splitting. There is a unique (up to diffeomorphism rel-$\partial$) pair $(Z,\mathcal D)$, diffeomorphic to $(Z_{g,k;\bold p,\bold f},\mathcal D_{c,\bold v})$, such that the bridge double structure on $\partial(Z,\mathcal D)$ is the bridge double of $(M,K)$. \end{lemma} \begin{proof} By Lemma~\ref{lem:BridgeDouble}, there is a unique way to close $(M,K)$ up and obtain its bridge double $(Y,L)$. By Laudenbach-Poenaru~\cite{LauPoe_72_A-note-on-4-dimensional}, there is a unique way to cap off $Y\cong\#^k(S^1\times S^2)$ with a copy $Z$ of $Z_k$. By Proposition~\ref{prop:triv_disks}, there is a unique way to cap off $L$ with a collection $\mathcal D$ of trivial disks. Since these choices are unique (up to diffeomorphism rel-$\partial$ and isotopy rel-$\partial$, respectively), the pair $(Z,\mathcal D)$ inherits the correct bridge double structure on its boundary, as desired.
\end{proof} \subsection{Open-book decompositions and braidings of links} \label{subsec:OBD} \ We follow Etnyre's lecture notes~\cite{Etn_04_Lectures-on-open} to formulate the definitions of this subsection. Let $Y$ be a closed, orientable three-manifold. An \emph{open-book decomposition} of $Y$ is a pair $(B,\pi)$, where $B$ is a link in $Y$ (called the \emph{binding}) and $\pi\colon Y\setminus B \to S^1$ is a fibration such that $P_\theta = \pi^{-1}(\theta)$ is the interior of a compact surface (called the \emph{page}) whose boundary is $B$. Note that it is possible for a given link $B$ to be the binding of non-isotopic (even non-diffeomorphic) open-book decompositions of $Y$, so the projection data $\pi$ is essential in determining the decomposition. An \emph{abstract open-book} is a pair $(P,\phi)$, where $P$ is an oriented, compact surface with boundary, and $\phi\colon P \to P$ is a diffeomorphism (called the \emph{monodromy}) that is the identity on a collar neighborhood of $\partial P$. An abstract open-book $(P,\phi)$ gives rise to a closed three-manifold, called the \emph{model manifold}, with an open-book decomposition in a straight-forward way. Define $$Y_\phi = (P\times_\phi S^1) \cup \left(\bigsqcup_{|\partial P|} S^1\times D^2\right),$$ where $P\times_\phi S^1$ denotes the mapping torus of $\phi$, and $Y_\phi$ is formed from this mapping torus by capping off each torus boundary component with a solid torus such that each $p\times_\phi S^1$ with $p\in\partial P$ gets capped off with a meridional disk. (Note that $p\times_\phi S^1 = p\times S^1$ by the condition on $\phi$ near the boundary of $P$.) Our convention is that $P\times_\phi S^1 = P\times[0,1]/_{(x,1)\sim(\phi(x),0)}$ for all $x\in P$. If we let $B_\phi$ denote the cores of the solid tori used to form $Y_\phi$, then we see that $Y_\phi\setminus B_\phi$ fibers over $S^1$, so we get an open-book decomposition $(B_\phi,\pi_\phi)$ for $Y_\phi$. Conversely, an open-book decomposition $(B,\pi)$ of a three-manifold $M$ gives rise to an abstract open-book $(P_\pi,\phi_\pi)$ in the obvious way such that $(Y_{\phi_\pi},B_{\phi_\pi})$ is diffeomorphic to $(M,B)$. We now recall an important example which appeared in Lemma~\ref{lemma:k-value}. \begin{example} \label{ex:Id_obd} Consider the abstract open-book $(P,\phi)$, where $P = \Sigma_{p,f}$ is a compact surface of genus $p$ with $f$ boundary components and $\phi\colon P\to P$ is the identity map. The total space $Y_\phi$ of this abstract open-book is diffeomorphic to $\#^{2p+f-1}(S^1\times S^2)$. To see this, simply note that the union of half of the pages gives a handlebody of genus $2p+f-1$; since the monodromy is the identity, $Y_\phi$ is the symmetric double of this handlebody. \end{example} Harer described a set of moves that suffice to pass between open-book decompositions on a fixed three-manifold~\cite{Har_82_How-to-construct-all-fibered-knots}. These include Hopf stabilization and destabilization, as well as a certain double-twisting operation, which was known to be necessary in order to change the homotopy class of the associated plane field. (Harer's calculus was recently refined in~\cite{PieZud_18_Special-moves-for-open}.) In fact, Giroux and Goodman proved that two open-book decompositions on a fixed three-manifold have a common Hopf stabilization if and only if the associated plane fields are homotopic~\cite{GirGoo_06_On-the-stable-equivalence-of-open}. For a trisection-theoretic account of this story, see~\cite{CasIslMil_19_The-relative-L-invariant-of-a-compact}.
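Before turning to braided links, we record a quick verification of the genus count in Example~\ref{ex:Id_obd}; this is only a sanity check, using nothing beyond the Euler characteristic of the page. Since $P = \Sigma_{p,f}$, the half-page union $P\times[0,1/2]$ is a handlebody with
$$\chi\big(P\times[0,1/2]\big) = \chi(\Sigma_{p,f}) = 2-2p-f,$$
so its genus is $1-\chi = 2p+f-1$. Since doubling a genus-$h$ handlebody yields $\#^h(S^1\times S^2)$, we recover $Y_\phi\cong\#^{2p+f-1}(S^1\times S^2)$, as claimed.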
Having introduced open-book decompositions, we now turn our attention to braided links. Suppose that $\mathcal L\subset Y$ is a link and $(B,\pi)$ is an open-book decomposition on $Y$. We say that $\mathcal L$ is \emph{braided with respect to $(B,\pi)$} if $\mathcal L$ intersects each page of the open-book transversely. We say that $(Y,\mathcal L)$ is equipped with the structure of an \emph{open-book braiding}. The \emph{index} of the braiding is the number of times that $\mathcal L$ hits a given page. By the Alexander Theorem~\cite{Ale_20_Note-on-Riemann-spaces} and the generalization due to Rudolph~\cite{Rud_83_Constructions-of-quasipositive-knots}, any link can be braided with respect to any open-book in any three-manifold. An \emph{abstract open-book braiding} is a triple $(P,\bold y,\phi)$, where $P$ is an oriented, compact surface with boundary, $\bold y\subset P$ is a collection of points, and $\phi\colon (P,\bold y)\to (P,\bold y)$ is a diffeomorphism. As with abstract open-books, this data gives rise to a manifold pair $(Y_\phi,\mathcal L_\phi)$, called the \emph{model open-book braiding} of the abstract open-book braiding, where $Y_\phi$ has an open-book structure with binding $B_\phi$ and projection $\pi_\phi$, and $\mathcal L_\phi$ is braided with respect to $(B_\phi,\pi_\phi)$. More precisely, $$(Y_\phi,\mathcal L_\phi) = (P,\bold y)\times_\phi S^1 = (P,\bold y)\times[0,1]/_{(x,1)\sim(\phi(x),0)}$$ for all $x\in P$. Conversely, a braiding of $\mathcal L$ about $(B,\pi)$ gives rise in the obvious way to an abstract open-book braiding $(P_\pi,\bold y_\pi,\phi_\pi)$ such that $(Y_{\phi_\pi},\mathcal L_{\phi_\pi})$ is diffeomorphic to $(Y,\mathcal L)$. By the Markov Theorem~\cite{Mar_35_Uber-die-freie-Aquivalenz} or its generalization to closed 3--manifolds~\cite{Sko_92_Closed-braids-in-3-manifolds,Sun_93_The-Alexander-and-Markov-theorems}, any two braidings of $\mathcal L$ with respect to a fixed open-book decomposition of $Y$ can be related by an isotopy that preserves the braided structure, except at finitely many points in time at which the braiding is changed by a \emph{Markov stabilization or destabilization}. We think of a Markov stabilization in the following way. Let $J$ be a meridian for a component of the binding $B$ of the open-book decomposition on $Y$, and let $\frak b$ be a band connecting $\mathcal L$ to $J$ such that the core of $\frak b$ is contained in a page of the open-book decomposition and such that the link $\mathcal L' = \mathcal L_\frak b$ resulting from the resolution of the band is braided about $(B,\pi)$. We say that $\mathcal L'$ is obtained from $\mathcal L$ via a \emph{Markov stabilization}, and we call the inverse operation \emph{Markov destabilization}. (Markov destabilization can be thought of as attaching a vertical band to $\mathcal L'$ such that resolving the band has the effect of splitting off from $\mathcal L'$ a meridian for a binding component.) See Figure~\ref{fig:Markov}. \begin{figure}[h!] \centering \includegraphics[width=.8\textwidth]{Markov} \caption{Markov stabilization, depicted as the banding of a braid to a meridian of the binding.} \label{fig:Markov} \end{figure} Suppose that $Y = Y^1\sqcup\cdots\sqcup Y^n$ is the disjoint union of closed three-manifolds such that each $Y^j$ is equipped with an open-book decomposition $(B^j,\pi^j)$. Suppose that $\mathcal L = \mathcal L^1\sqcup\cdots\sqcup\mathcal L^n$ is a link such that $\mathcal L^j\subset Y^j$ is braided about $(B^j,\pi^j)$.
We say that $\mathcal L$ has \emph{multi-index} $\bold v = (v^1,\ldots,v^n)$ if $\mathcal L^j$ has index $v^j$. We allow the possibility that $\mathcal L^j = \emptyset$ for any given $j$. \begin{remark} \label{rmk:braid_ortn} If $Y$ is oriented, and we pick orientations on $\mathcal L$ and on a page $P$ of $(B,\pi)$, then we can associate a sign to each point of $\mathcal L\cap P$. Since $\mathcal L$ is braided, if $\mathcal L$ is a knot, then each such point has the same sign; more generally, the points lying on a single connected component of $\mathcal L$ all have the same sign. If the orientations of the points $\mathcal L\cap P$ all agree, then we say that the braiding is \emph{coherently oriented}. If the orientations of these points disagree across components of $\mathcal L$, then we say that the braiding is \emph{incoherently oriented}. Our reason for considering incoherently oriented braidings is that sometimes a bridge trisection of a surface will induce a braiding of the boundary link that is incoherently oriented once the surface is oriented. A simple example of this, the annulus bounded by the $(2,2n)$--torus link, will be explored in Examples~\ref{ex:2-braid} and~\ref{ex:T24coherent}. Even though some bridge trisections induce incoherently oriented braidings on the boundary link, it is always possible to find a bridge trisection of a surface so that the induced braiding is coherently oriented. \end{remark} \subsection{Formal definitions} \label{subsec:Formal} \ Finally, we draw on the conventions laid out above to give formal definitions. \begin{definition}\label{def:Trisection} Let $X$ be an orientable, connected four-manifold, and let $$Y = \partial X = Y^1\sqcup\cdots\sqcup Y^n,$$ where $Y^j$ is a connected component of $\partial X$ for each $j=1,\ldots, n$. Let $g$, $k^*$, $p$, and $f$ be non-negative integers, and let $\bold k$, $\bold p$, and $\bold f$ be ordered partitions of type $(k^*,3)$, $(p,n)$, and $(f,n)^+$, respectively. A \emph{$(g,\bold k;\bold p,\bold f)$--trisection} $\mathbb T$ of $X$ is a decomposition $X = Z_1\cup Z_2\cup Z_3$ such that, for all $j=1,\ldots, n$ and all $i\in\mathbb{Z}_3$, \begin{enumerate} \item $Z_i \cong Z_{g,k_i;\bold p,\bold f}$, \item $Z_i\cap Z_{i+1} \cong H_{g;\bold p,\bold f}$, \item $Z_1\cap Z_2\cap Z_3 \cong \Sigma_{g,f}$, and \item $Z_i\cap Y^j \cong H_{p_j,f_j}$. \end{enumerate} The four-dimensional pieces $Z_i$ are called \emph{sectors}, the three-dimensional pieces $H_i = Z_i\cap Z_{i-1}$ are called \emph{arms}, and the central surface $\Sigma = Z_1\cap Z_2\cap Z_3$ is called the \emph{core}. If $k_1 = k_2 = k_3 = k$, then $\mathbb T$ is described as a \emph{$(g,k;\bold p,\bold f)$--trisection} and is called \emph{balanced}. Otherwise, $\mathbb T$ is called \emph{unbalanced}. Similarly, if either of the ordered partitions $\bold p$ and $\bold f$ is balanced, we replace it with the integer $p/n$ or $f/n$, respectively. The parameter $g$ is called the \emph{genus} of $\mathbb T$. The surfaces $P_i^j = H_i\cap Y^j$ are called \emph{pages}, and their union is denoted $P_i$. The lensed product cobordisms $Y_i^j = Z_i\cap Y^j$ are called \emph{spreads}, and their union is denoted $Y_i$. The links $B^j = \Sigma\cap Y^j$ are called \emph{bindings}, and their union is $B=\partial \Sigma$.
If $X$ is oriented, we require that the orientation on $Z_i$ induces the oriented decompositions $$\partial Z_i = H_i\cup Y_i\cup \overline H_{i+1}, \hspace{.25in} \partial H_i=\Sigma\cup_B\overline{P_i}, \hspace{.25in} \text{\ and\ } \hspace{.25in} \partial Y_i=P_i\cup_B\overline{P_{i+1}}.$$ See Figure~\ref{fig:ortn_conv} (below) for a schematic illustrating these conventions. \end{definition} \begin{remarks}\ \begin{enumerate} \item If $X$ is closed, then $n=0$, $Y = \emptyset$, and $\mathbb T$ is a trisection as originally introduced by Gay and Kirby~\cite{GayKir_16_Trisecting-4-manifolds} and generalized slightly in~\cite{MeiSchZup_16_Classification-of-trisections-and-the-Generalized}. \item If $X$ has a single boundary component, then $n=1$, and $\mathbb T$ is a relative trisection as first described in~\cite{GayKir_16_Trisecting-4-manifolds} and later developed in~\cite{Cas_16_Relative-trisections-of-smooth}, where gluing of such objects was studied, and in~\cite{CasGayPin_18_Diagrams-for-relative-trisections}, where the diagrammatic aspect of the theory was introduced. The general case of multiple boundary components was recently developed in~\cite{Cas_17_Trisecting-smooth-4--dimensional}. \item Since $Y^j = Y^j_1\cup Y^j_2\cup Y^j_3$, with each $Y^j_i\cong H_{p_j,f_j}$, it follows that $Y^j$ admits an open-book decomposition where $P_i^j$ is a page for each $i\in\mathbb{Z}_3$ and $B^j$ is the binding. This open-book decomposition is determined by $\mathbb T$, and the monodromy can be explicitly calculated from a relative trisection diagram~\cite{CasGayPin_18_Diagrams-for-relative-trisections}. \item Note that the triple $(\Sigma, P_i, P_{i+1})$ defines the standard Heegaard double structure on $\partial Z_i\cong Y_{g;\bold p,\bold f}$. It follows from Lemma~\ref{lemma:k-value} that $k_i = 2p+f-n+m_i$, where $(\Sigma;H_i,H_{i+1})$ is an $(m_i,n)$--standard Heegaard splitting. We call $m_i$ the \emph{interior} complexity of $Z_i$. Notice that $g$ is bounded below by $m_i$ and $p$, but not by $f$ nor $k_i$. \end{enumerate} \end{remarks} \begin{definition}\label{def:Bridge} Let $\mathbb T$ be a trisection of a four-manifold $X$. Let $\mathcal F$ be a neatly embedded surface in $X$. Let $b$, $c^*$, and $v$ be non-negative integers, and let $\bold c$ and $\bold v$ be ordered partitions of type $(c^*,3)$ and $(v,n)$, respectively. The surface $\mathcal F$ is in \emph{$(b,\bold c;\bold v)$--bridge trisected position} with respect to $\mathbb T$ (or is \emph{$(b,\bold c; \bold v)$--bridge trisected} with respect to $\mathbb T$) if, for all $i\in\mathbb{Z}_3$, \begin{enumerate} \item $\mathcal D_i = Z_i\cap \mathcal F$ is a trivial $(c_i;\bold v)$--disk-tangle in $Z_i$, and \item $\mathcal T_i = H_i\cap\mathcal F$ is a trivial $(b;\bold v)$--tangle in $H_i$. \end{enumerate} The disk components of the $\mathcal D_i$ are called \emph{patches}, and the $\mathcal T_i$ are called \emph{seams}. Let $$\mathcal L = \partial \mathcal F = \mathcal L^1\sqcup\cdots\sqcup\mathcal L^n,$$ where $\mathcal L^j = \mathcal L\cap Y^j$ is the link representing the boundary components of $\mathcal F$ that lie in $Y^j$. The pieces $\beta^j_i = \mathcal L^j\cap Z_i$ comprising the $\mathcal L^j$ are called \emph{threads}, and their union over $j$ is denoted $\beta_i = \mathcal L\cap Z_i$. If $\mathcal F$ is oriented, we require that the induced orientation of $\mathcal D_i$ induces the oriented decomposition $$\partial \mathcal D_i = \mathcal T_i\cup\beta_i\cup\overline\mathcal T_{i+1}.$$ See Figure~\ref{fig:ortn_conv} (below) for a schematic illustrating these conventions.
The induced decomposition $\mathbb T_\mathcal F$ given by $$(X,\mathcal F) = (Z_1, \mathcal D_1)\cup(Z_2, \mathcal D_2)\cup(Z_3, \mathcal D_3),$$ is called a \emph{$(g,\bold k,b,\bold c;\bold p, \bold f, \bold v)$--bridge trisection} of $\mathcal F$ (or of the pair $(X,\mathcal F)$). If $\mathbb T$ is balanced and $c_i= c$ for each $i\in\mathbb{Z}_3$, then $\mathbb T_\mathcal F$ is described as a \emph{$(g,k,b,c;\bold p,\bold f,\bold v)$--bridge trisection} and is called \emph{balanced}. Otherwise, $\mathbb T_\mathcal F$ is called \emph{unbalanced}. Similarly, if the partition $\bold v$ is balanced, we replace this parameter with the integer $v/n$. The parameter $b$ is called the \emph{bridge number} of $\mathbb T_\mathcal F$. \end{definition} \begin{remarks} \ \begin{enumerate} \item If $X$ is a closed four-manifold, then $n=0$, $\mathcal L = \emptyset$, and $\mathcal F$ is a closed surface in $X$. If $g=0$, we recover the notion of bridge trisections originally introduced in~\cite{MeiZup_17_Bridge-trisections-of-knotted}, while the more general case of arbitrary $g$ is treated in~\cite{MeiZup_18_Bridge-trisections-of-knotted}. \item Note that if $\mathcal L\cap Y^j = \emptyset$ for some $j=1,\ldots, n$, then $\mathcal L^j = \emptyset$. Equivalently, $v_j = 0$. If $\mathcal L^j$ is not empty, then we have $$\mathcal L^j = \beta^j_1\cup\beta^j_2\cup\beta^j_3.$$ It follows that $\mathcal L^j$ is braided with index $v_j$ with respect to the open-book decomposition $(B^j,P^j_i)$ on $Y^j$ induced by $\mathbb T$. \item The link $L_i = \partial\mathcal D_i$ is in $(b,\bold v)$--bridge position with respect to the standard Heegaard double structure on $\partial Z_i$. \item The surface $\mathcal F$ has a cellular decomposition consisting of $(2b+4v)$ 0--cells, $3v$ of which lie in the pages of $\partial X$; $(3b+6v)$ 1--cells, $3v$ of which lie in the spreads of $\partial X$; and $(c_1+c_2+c_3+3v)$ 2--cells, $3v$ of which are vertical patches. It follows that the Euler characteristic of $\mathcal F$ is given as $$\chi(\mathcal F) = c_1 + c_2 + c_3 + v - b.$$ \item Note that $c_i\leq b$, but that $v$ is independent of $b$ and the $c_i$. \end{enumerate} \end{remarks} We conclude this section with a key fact about bridge trisections. We refer to the union $$(H_1,\mathcal T_1)\cup(H_2,\mathcal T_2)\cup(H_3,\mathcal T_3)$$ as the \emph{spine} of the bridge trisection $\mathbb T$. Two bridge trisections $\mathbb T$ and $\mathbb T'$ for pairs $(X,\mathcal F)$ and $(X',\mathcal F')$ are \emph{diffeomorphic} if there is a diffeomorphism $\Psi\colon (X,\mathcal F)\to (X',\mathcal F')$ such that $\Psi(Z_i,\mathcal D_i) = (Z_i',\mathcal D_i')$ for all $i\in\mathbb{Z}_3$. \begin{proposition} \label{prop:spine} Two bridge trisections are diffeomorphic if and only if their spines are diffeomorphic. \end{proposition} \begin{proof} If $\Psi$ is a diffeomorphism of bridge trisections $\mathbb T$ and $\mathbb T'$, then the restriction of $\Psi$ to the spine of $\mathbb T$ is a diffeomorphism onto the spine of $\mathbb T'$. Conversely, suppose $\Psi$ is a diffeomorphism from the spine of $\mathbb T$ to the spine of $\mathbb T'$ -- i.e., $\Psi(H_i,\mathcal T_i) = (H_i',\mathcal T_i')$ for all $i\in\mathbb{Z}_3$. By Lemma~\ref{lem:LP}, there is an extension of $\Psi$ across $(Z_i,\mathcal D_i)$ that is uniquely determined up to isotopy fixing $(H_i,\mathcal T_i)\cup_{(\Sigma,\bold x)}\overline{(H_{i+1},\mathcal T_{i+1})}$ for each $i\in\mathbb{Z}_3$.
It follows that $\Psi$ extends to a diffeomorphism of bridge trisections, as desired. \end{proof} In light of this, we find that the four-dimensional data of a bridge trisection is determined by the three-dimensional data of its spine, a fact that will allow for the diagrammatic development of the theory in Sections~\ref{sec:tri-plane} and~\ref{sec:shadow}. \begin{corollary} \label{coro:spine} A bridge trisection is determined uniquely by its spine. \end{corollary} \section{The four-ball setting}\label{sec:four-ball} In this section, we restrict our attention to the study of surfaces in the four-ball. Moreover, we work relative to the standard genus zero trisection. These restrictions allow for a cleaner exposition than the general framework of Section~\ref{sec:general} and give rise to a new diagrammatic theory for surfaces in this important setting. \subsection{Preliminaries and a precise definition}\label{subsec:Special}\ Here, we revisit the objects and notation introduced in Section~\ref{sec:general} with the setting of $B^4$ in mind, culminating in a precise definition of a bridge trisection of a surface in $B^4$. Let $H$ denote the three-ball, and let $B$ denote an equatorial curve on $\partial H$, which induces the decomposition $$\partial H = \partial_+ H \cup_B \partial_- H$$ of the boundary sphere into two hemispheres. We think of $H$ as being swept out by disks: smoothly isotope $\partial_+ H$ through $H$ to $\partial_- H$. (Compare this description of $H$ with the notion of a lensed cobordism from Subsection~\ref{subsec:Lensed} and the development for a general compression body in Subsection~\ref{subsec:Compression}.) A trivial tangle is a pair $(H,\mathcal T)$ such that $H$ is a three-ball and $\mathcal T\subset H$ is a neatly embedded 1--manifold with the property that $\mathcal T$ can be isotoped until the restriction $\Phi_\mathcal T$ to $\mathcal T$ of a Morse function $\Phi$ on $H$ inducing the sweep-out above has no minimum and at most one maximum on each component of $\mathcal T$. In other words, each component of $\mathcal T$ is a neatly embedded arc in $H$ that is either \emph{vertical} (with respect to the fibering of $H$ by disks) or parallel into $\partial_+ H$. The latter arcs are called \emph{flat}. We consider trivial tangles up to isotopy rel-$\partial$. If $\mathcal T$ has $v$ vertical strands and $b$ flat strands, we call the pair $(H,\mathcal T)$ a $(b,v)$--tangle. This is a special case of the trivial tangles discussed in Subsection~\ref{subsec:Tangles}. Let $H_1$ and $H_2$ be three-balls, and consider the union $H_1\cup_\Sigma \overline H_2$, where $\Sigma = \partial_+ H_1 = \partial_+ \overline H_2$. We consider this union as a subset of the three-sphere $Y$ so that $B = \partial\Sigma$ is an unknot and $\Sigma$, $\partial_- H_1$, and $\partial_- H_2$ are all disjoint disk fibers meeting at $B$. Let $Y_1$ denote $$Y\setminus\text{Int}(H_1\cup_\Sigma \overline H_2),$$ and notice that $Y_1$ is simply an interval's worth of disk fibers for $B$, just like the $H_i$. We let $Y$ denote the three-sphere with this extra structure, which we call the \emph{standard Heegaard double} (cf. Subsection~\ref{subsec:Heegaard}). \begin{figure}[h!]
\centering \includegraphics[width=.25\textwidth]{ortn_conv} \caption{A schematic illustration of a standard Heegaard double, with orientation conventions for the constituent pieces of $\partial Z_1$ indicated.} \label{fig:ortn_conv} \end{figure} An unlink $L\subset Y$ is in \emph{$(b,v)$--bridge position} with respect to the standard Heegaard double structure if $L\cap H_i$ is a $(b,v)$--tangle, $L$ is transverse to the disk fibers of $Y_1$, and each component of $L$ intersects $Y_1$ in at most one arc. The $v$ components of $L$ that intersect $Y_1$ are called \emph{vertical}, while the remaining components are called \emph{flat}. Let $Z$ denote the four-ball, with $\partial Z = Y$ regarded as the standard Heegaard double. A trivial disk-tangle is a pair $(Z,\mathcal D)$ such that $Z$ is a four-ball and $\mathcal D$ is a collection of neatly embedded disks, each of which is parallel into $\partial Z$. Note that the boundary $\partial \mathcal D$ is an unlink. If $\partial \mathcal D$ is in $(b,v)$--bridge position in $Y = \partial Z$, then the disk components of $\mathcal D$ are called \emph{vertical} and \emph{flat} in accordance with their boundaries. A \emph{$(c,v)$--disk-tangle} is a trivial disk-tangle with $c$ flat components and $v$ vertical components. \begin{definition}\label{def:Bridge2} Let $\mathcal F$ be a neatly embedded surface in $B^4$, and let $\mathbb T_0$ be the standard genus zero trisection of $B^4$. Let $b$ and $v$ be non-negative integers, and let $\bold c = (c_1, c_2, c_3)$ be an ordered triple of non-negative integers. The surface $\mathcal F$ is in \emph{$(b,\bold c; v)$--bridge trisected position} with respect to $\mathbb T_0$ (or is \emph{$(b,\bold c; v)$--bridge trisected} with respect to $\mathbb T_0$) if, for all $i\in\mathbb{Z}_3$, \begin{enumerate} \item $\mathcal D_i = Z_i\cap \mathcal F$ is a trivial $(c_i, v)$--disk-tangle in the four-ball $Z_i$, and \item $\mathcal T_i = H_i\cap\mathcal F$ is a trivial $(b, v)$--tangle in the three-ball $H_i$. \end{enumerate} The disk components of the $\mathcal D_i$ are called \emph{patches}, and the $\mathcal T_i$ are called \emph{seams}. Let $\mathcal L = \partial \mathcal F$. The braid pieces $\beta_i = \mathcal L\cap Z_i$ are called \emph{threads}. If $\mathcal F$ is oriented, we require that the induced orientation of $\mathcal D_i$ induces the oriented decomposition $$\partial \mathcal D_i = \mathcal T_i\cup\beta_i\cup\overline \mathcal T_{i+1}.$$ The induced decomposition $\mathbb T_\mathcal F$ given by $$(X,\mathcal F) = (Z_1, \mathcal D_1)\cup(Z_2, \mathcal D_2)\cup(Z_3, \mathcal D_3),$$ is called a \emph{$(b,\bold c, v)$--bridge trisection} of $\mathcal F$ (or of the pair $(X,\mathcal F)$). If $c_1 = c_2 = c_3 = c$, then $\mathbb T_\mathcal F$ is a \emph{$(b,c,v)$--bridge trisection} and is called \emph{balanced}. Otherwise, $\mathbb T_\mathcal F$ is called \emph{unbalanced}. \end{definition} \subsection{Band presentations}\label{subsec:band_pres}\ Let $M$ be a three-manifold, and let $J$ be a neatly embedded one-manifold in $M$. Let $\frak b$ be a copy of $I\times I$ embedded in $M$, and denote by $\partial_1\frak b$ and $\partial_2\frak b$ the portions of $\partial \frak b$ corresponding to $I\times\{-1,1\}$ and $\{-1,1\}\times I$, respectively. We call such a $\frak b$ a \emph{band} for $J$ if $\text{Int}(\frak b)\subset M\setminus J$ and $\partial \frak b\cap J = \partial_1\frak b$.
The arc of $\frak b$ corresponding to $\{0\}\times I$ is called the \emph{core} of $\frak b$. Let $J_\frak b$ denote the one-manifold obtained by \emph{resolving} the band $\frak b$: $$J_\frak b = (J\setminus\partial_1\frak b)\cup\partial_2\frak b.$$ The band $\frak b$ for $J$ gives rise to a \emph{dual band} $\frak b^*$ that is a band for $J_\frak b$, so $\partial_1 \frak b^* = \partial_2 \frak b$ and $\partial_2\frak b^* = \partial_1\frak b$. Note that, as embedded squares in $M$, we have $\frak b = \frak b^*$, though their cores are perpendicular. More generally, given a collection $\frak b$ of disjoint bands for $J$, we denote by $J_\frak b$ the \emph{resolution} of all the bands in $\frak b$. As above, the collection $\frak b^*$ of dual bands is a collection of bands for $J_\frak b$. \begin{definition}[\textbf{\emph{band presentation}}] \label{def:band_pres} A \emph{band presentation} is a 2--complex in $S^3$ defined by a triple $(\mathcal L,U,\frak b)$ as follows: \begin{enumerate} \item $\mathcal L\subset S^3$ is a link; \item $U$ is a split unlink in $S^3\setminus\nu(\mathcal L)$; and \item $\frak b$ is a collection of bands for $\mathcal L\sqcup U$ such that $U'=(\mathcal L\sqcup U)_\frak b$ is an unlink. \end{enumerate} If $U$ is the empty link, then we write $(\mathcal L,\frak b)$ and call the encoded 2--complex in $S^3$ a \emph{ribbon presentation}. We consider two band presentations to be \emph{equivalent} if they are ambient isotopic as 2--complexes in $S^3$. Given a fixed link $\mathcal L\subset S^3$, two band presentations $(\mathcal L,U_1,\frak b_1)$ and $(\mathcal L,U_2,\frak b_2)$ are \emph{equivalent rel-$\mathcal L$} if they are equivalent via an ambient isotopy that preserves $\mathcal L$ set-wise. (In other words, $\mathcal L$ is fixed, although the attaching regions of $\frak b$ are allowed to move along $\mathcal L$.) \end{definition} Band presentations encode smooth, compact, neatly embedded surfaces in $B^4$ in a standard way. Before explaining this, we first fix some conventions that will be useful later. (Here, we follow standard conventions, as in~\cite{KawShiSuz_82_Descriptions-on-surfaces-in-four-space.,Kaw_96_A-survey-of-knot-theory,MeiZup_17_Bridge-trisections-of-knotted,MeiZup_18_Bridge-trisections-of-knotted}.) Let $h\colon B^4\to [0,4]$ be a standard Morse function on $B^4$ -- i.e., $h$ has a single critical point, which is definite of index zero and given by $h^{-1}(0)$, while $h^{-1}(4) = \partial B^4 = S^3$. For any compact submanifold $X$ of $B^4$ and any $0\leq t < s\leq 4$, let $X_{[t,s]}$ denote $X\cap h^{-1}\left([t,s]\right)$ and let $X_{\{t\}} = X\cap h^{-1}(t)$. For example, $B^4_{[t,s]} = h^{-1}[t,s]$. Similarly, for any compact submanifold $Y$ of $B^4_{\{t\}}$ and any $0\leq r<s\leq 4$, let $Y[r,s]$ denote the vertical cylinder obtained by pushing $Y$ along the gradient flow across the height interval $[r,s]$, which we call a \emph{gradient product}. We extend these notions in the obvious way to open intervals and singletons in $[0,4]$. Now we will show how, given a band presentation $(\mathcal L,U,\frak b)$, we can construct the \emph{realizing surface} $\mathcal F_{(\mathcal L,U,\frak b)}$: a neatly embedded surface in $B^4$ with boundary $\mathcal L$. 
Start by considering $(\mathcal L,U,\frak b)$ as a 2--complex in $B^4_{\{2\}}\cong S^3$, and consider the surface $\mathcal F$ with the following properties: \begin{enumerate} \item $\mathcal F_{(3,4]} = \mathcal L(3,4]$; \item $\mathcal F_{\{3\}} = \mathcal L\{3\}\sqcup D$, where $D$ is a collection of spanning disks for the unlink $U\{3\}\subset B^4_{\{3\}}\cong S^3$; \item $\mathcal F_{(2,3)} = (\mathcal L\sqcup U)(2,3)$; \item $\mathcal F_{\{2\}} = (\mathcal L\sqcup U)\cup\frak b$; \item $\mathcal F_{(1,2)} = U'(1,2)$; \item $\mathcal F_{\{1\}} = D'$, where $D'$ is a collection of spanning disks for the unlink $U'\{1\}\subset B^4_{\{1\}}\cong S^3$; and \item $\mathcal F_{[0,1)} = \emptyset$. \end{enumerate} Note that the choices of spanning disks $D$ and $D'$ are unique up to perturbation into $B^4_{(3,3+\epsilon)}$ and $B^4_{(1-\epsilon,1)}$, respectively, by Proposition~\ref{prop:triv_disks}. Note also that $\partial \mathcal F = \mathcal F\cap B^4_{\{4\}} = \mathcal L\{4\}$. \begin{proposition}\label{prop:band_pres} Every neatly embedded surface $\mathcal F$ with $\partial \mathcal F = \mathcal L$ is isotopic rel-$\partial$ to a realizing surface $\mathcal F_{(\mathcal L,U,\frak b)}$ for some band presentation $(\mathcal L,U,\frak b)$. If $\mathcal F$ has a handle-decomposition with respect to the standard Morse function on $B^4$ consisting of $c_1$ cups, $n$ bands, and $c_3$ caps, then $(\mathcal L,U,\frak b)$ can be assumed to satisfy $|U|=c_3$, $|\frak b|=n$, and $|U'|=c_1$. \end{proposition} \begin{proof} Given $\mathcal F$, we can assume after a minor perturbation that the restriction $h_\mathcal F$ of a standard height function $h\colon B^4\to [0,4]$ is Morse. After re-parametrizing the codomain of $h$, we can assume that the critical points of $h_\mathcal F$ are contained in $h^{-1}\left((1.5,2.5)\right)$. For each index zero critical point $x$ of $h_\mathcal F$, we choose a vertical strand $\omega$ connecting $x$ to $B^4_{\{1\}}$. (Here, vertical means that $\omega_{\{t\}}$ is a point or empty for each $t\in[1,2.5]$.) By a codimension count, $\omega$ is disjoint from $\mathcal F$, except at $x$. We can use a small regular neighborhood of $\omega$ to pull $x$ down to $B^4_{\{1\}}$. Repeating, we can assume that the index zero critical points of $h_\mathcal F$ lie in $B^4_{\{1\}}$. By a similar argument, we achieve that the index two critical points of $h_\mathcal F$ lie in $B^4_{\{3\}}$ and that the index one critical points of $h_\mathcal F$ lie in $B^4_{\{2\}}$. Next, we perform the standard flattening of the critical points: For each critical point $x$ of index~$i$, find a small disk neighborhood $N$ of $x$ in $\mathcal F$, and isotope $\mathcal F$ so that $N$ lies flat in $B^4_{\{i+1\}}$. Near critical points of index zero or two, $\mathcal F$ now resembles a flat-bottomed or flat-topped cylinder, respectively; for index one critical points, $N$ is now a flat square. Let $\frak b'$ denote the union of the flat, square neighborhoods of the index one critical points in $B^4_{\{2\}}$. So far, we have achieved properties (2), (4), (6), and (7) of a realizing surface. Properties (1), (3), and (5) say that $\mathcal F$ should be a gradient product on the intervals $(3,4]$, $(2,3)$, and $(1,2)$, respectively. The products $\mathcal F_{(3,4]}$ and $\mathcal L(3,4]$ (for example) agree at $\mathcal F_{\{4\}} = \mathcal L\{4\}$, but may disagree in $B^4_{\{t\}}$ for $t\in(3,4)$. This issue can be addressed by a ``combing-out'' process.
For each $t\in[1,4]$, we can choose ambient isotopies $G_t\colon [0,1]\times B^4_{\{t\}}\to B^4_{\{t\}}$ such that \begin{enumerate} \item $G_4(s,x) = x$ for all $s\in[0,1]$ and $x\in B^4_{\{4\}}$; \item $G_t(0,x) = x$ for all $t\in[1,4]$ and $x\in B^4_{\{t\}}$; \item $G_t(1,\mathcal F_{\{t\}}) = \mathcal L\{t\}$ for all $t\in(3,4]$, where we now let $\mathcal L = \mathcal F_{\{4\}}$; \item $G_t(1,\mathcal F_{\{t\}}) = (\mathcal L\sqcup U)\{t\}$ for all $t\in(2,3)$, where we now let $\mathcal L\sqcup U = G_3(\mathcal F_{\{3\}}\setminus\text{Int} D)$; \item $G_t(1,\mathcal F_{\{t\}}) = U'\{t\}$ for all $t\in(1,2)$, where we now let $U' = G_2(\partial\mathcal F_{[0,2)})$; and \item $G_t$ is smoothly varying in $t$. \end{enumerate} After applying the family $G_t$ of ambient isotopies to $\mathcal F_{[1,4]}$, we have properties (1), (3), and (5), as desired. However, the ambient isotopies $G_t$ have now altered $\mathcal F_{\{t\}}$ for $t = 1, 2, 3$. For example, the disks $D$ and $D'$ have been isotoped around in their respective level sets; but, clearly, properties (2), (4), (6), and (7) are still satisfied. We remark that, if desired, we can choose $G_t$ so that (a) the disks of $D$ end up contained in small, disjoint 3--balls and either (b) the disks of $D'$ have the same property or (c) the bands $\frak b$ have the same property. However, we cannot always arrange (a), (b), \emph{and} (c) if we want $\mathcal F_{(1,2)}$ to be a gradient product. With a slight abuse of notation, we now let $\mathcal L = \mathcal L\{2\}$, $U = U\{2\}$, and $\frak b = G_2(\frak b')$. (The only abuse is which level set of the now-gradient-product portion $\mathcal L[2,4]$ of $\mathcal F$ should be denoted by $\mathcal L$.) In the end, we have that $\mathcal F$ is the realizing surface of the band presentation $(\mathcal L,U,\frak b)$. With regard to the second claim of the proposition, assume that $\mathcal F$ has $c_1$ cups, $n$ bands, and $c_3$ caps once it is in Morse position. Each cap gives rise to a component of $U$, while each cup gives rise to a component of $U'$. The numbers of bands, cups, and caps are constant throughout the proof. \end{proof} Examples of band presentations are shown below in Figures~\ref{fig:f81},~\ref{fig:steve1}, and~\ref{fig:square7}. However, each of these is a ribbon presentation. Throughout the rest of the paper, we will work almost exclusively with ribbon presentations. To emphasize the generality of Definition~\ref{def:band_pres}, we give in Figure~\ref{fig:band_pres} a non-ribbon band presentation, where the black unknot is $\mathcal L$ and the orange unknot is $U$. Note that a non-ribbon band presentation $(\mathcal L,U,\frak b)$ for a surface $\mathcal F$ can always be converted to a ribbon presentation $(\mathcal L',\frak b)$ for a surface $\mathcal F'$ by setting $\mathcal L'=\mathcal L\sqcup U$. The ribbon surface $\mathcal F'$ is obtained from the non-ribbon surface $\mathcal F$ by puncturing at each maximum and dragging the resulting unlink to the boundary. \begin{figure}[h!]
\centering \includegraphics[width=.4\textwidth]{band_pres} \caption{A band presentation for the punctured spun trefoil, considered as a neatly embedded disk in $B^4$ with unknotted boundary.} \label{fig:band_pres} \end{figure} \subsection{Bridge-braiding band presentations}\label{subsec:braiding_presentations}\ Recall the standard Heegaard-double decomposition $Y=Y_{0,0,1}$ of $S^3$ that was introduced in Subsection~\ref{subsec:Heegaard} and revisited in Subsection~\ref{subsec:Special}, which is a decomposition of $S^3$ into three trivial lensed cobordisms (three-balls), $H_1$, $H_3$, and $Y_3$, which meet along disk pages $H_1\cap \overline H_3 = \Sigma$ and $H_i\cap Y_3 = P_i$ whose boundary is the unknotted braid axis $B$ in $S^3$. The choice to use $H_3$ instead of $H_2$ will ensure that the labelings of our pieces agree with our conventions for the labeling of the pieces of a bridge trisection; cf. the proof of Proposition~\ref{prop:band_to_bridge} below. \begin{definition}[\textbf{\emph{bridge-braided}}] \label{def:bridge-braided} A band presentation $(\mathcal L,U,\frak b)$, considered with respect to the standard Heegaard-double decomposition $Y_{0,0,1}$ of $S^3$, is called \emph{$(b,\bold c; v)$--bridge-braided} if the following conditions hold. \begin{enumerate} \item $\beta_3 = \mathcal L\cap Y_3$ is a $v$--braid; \item $\mathcal L\cap(H_1\cup_\Sigma\overline H_3)$ is a $b'$--perturbing of a $v$--braid; \item $U$ is in $b''$--bridge position with respect to $\Sigma$; \item $\frak b\cap\Sigma$ is precisely the cores $y_*$ of $\frak b$, which are embedded in $\Sigma$; \item There is a bridge system $\Delta$ for the trivial tangle $\mathcal T_3 = H_3\cap(\mathcal L\cup U)$ whose shadows $\Delta_*$ have the property that $\Delta_*\cup y_*$ is a collection of embedded arcs in $\Sigma$; and \item $U' = (\mathcal L\cup U)_\frak b$ is a $(c_1+v)$--component unlink that is in standard $(b,v)$--bridge position with respect to $Y_{0,0,1}$. (Hence, $U'$ consists of $c_1$ flat components and $v$ vertical components.) \end{enumerate} Here, $b = b' + b''$, $c_3 = |U|$, $c_2 =b-|\frak b|$, and $c_1 = |U'|-v$. Let $\widehat\beta$ denote the index $v$ braiding of $\mathcal L$ given by $\beta_3\cup\mathcal T_1\cup\overline\mathcal T_3$. In reference to this added structure, we denote the bridge-braided band presentation by $(\widehat\beta,U,\frak b)$. If $U=\emptyset$, so $(\mathcal L,\frak b)$ is a ribbon presentation, we denote the corresponding bridge-braiding by $(\widehat\beta,\frak b)$. \end{definition} \begin{proposition}\label{prop:to_BBB_realizing} Let $\mathcal F\subset B^4$ be a surface with $\partial\mathcal F = \mathcal L$, and let $\widehat\beta$ be an index $v$ braiding of $\mathcal L$. There is a bridge-braided band presentation $(\widehat\beta,U,\frak b)$ such that $\mathcal F = \mathcal F_{(\widehat\beta,U,\frak b)}$. If $\mathcal F$ has a handle-decomposition with respect to the standard Morse function on $B^4$ consisting of $c_1$ cups, $n$ bands, and $c_3$ caps, then $(\widehat\beta,U,\frak b)$ can be assumed to be $(b,(c_1,b-(n+v),c_3);v)$--bridge-braided, for some $b\in\mathbb{N}$. \end{proposition} \begin{proof} Consider $\mathcal F\subset B^4$ with $\partial\mathcal F = \mathcal L$. By Proposition~\ref{prop:band_pres}, we can assume (after an isotopy rel-$\partial$) that $\mathcal F = \mathcal F_{(\mathcal L,U,\frak b')}$ for some band presentation $(\mathcal L,U,\frak b')$. We assume that $|U|=c_3$, $|\frak b'|=n$, and $|(\mathcal L\sqcup U)_{\frak b'}|=c_1$.
By Alexander's Theorem~\cite{Ale_20_Note-on-Riemann-spaces}, there is an ambient isotopy $G_4\colon I\times B^4_{\{4\}}\to B^4_{\{4\}}$ taking $\partial \mathcal F$ to $\widehat\beta$. As in the proof of Proposition~\ref{prop:band_pres}, there is a family $G_t$ of ambient isotopies extending $G_4$ across $B^4$. This results in the ``combing-out'' of Alexander's isotopy $G_4$, with the final effect that $\mathcal F$ is the realizing surface of the (not-yet-bridge-braided) band presentation $(\widehat\beta,U,\frak b')$. Henceforth, we consider the 2--complex corresponding to $(\widehat\beta,U,\frak b')$ to be living in $B^4_{\{2\}}$, as in Proposition~\ref{prop:band_pres}. We have already obtained properties (1) and (2) towards a bridge-braided band presentation, although presently $b'=0$. (This will change automatically once we begin perturbing the bridge surface $\Sigma$ relative to $\widehat\beta$ and $U$.) By an ambient isotopy of $B^4_{\{2\}}$ that is the identity in a neighborhood of $\widehat\beta$, we can move $U$ to lie in bridge position with respect to $\Sigma$, realizing property (3). (Again, the bridge index $b''$ of this unlink will change during what follows.) Since this ambient isotopy was supported away from $\widehat\beta$, it can be combed-out (above and below) via a family of isotopies that are supported away from the gradient product $\widehat\beta[2,4]$; so $\mathcal F$ is still the realizing surface. Next, after an ambient isotopy that fixes $\widehat\beta\sqcup U$ set-wise (and point-wise near $\Sigma$), we can arrange that $\frak b'$ lies in $H_1\cup_\Sigma\overline H_3$. (Think of the necessity of sliding the ends of $\frak b'$ along $\beta_3$ to extract it from $Y_3$, while isotoping freely the unattached portion of $\frak b'$ to the same end.) This time, we need only comb-out towards $h^{-1}(0)$. Using the obvious Morse function associated to $(H_1\cup_\Sigma \overline H_3)\setminus\nu(B)$, we can flow $\frak b'$, in the complement of $\widehat\beta\sqcup U$, so that the cores of the bands lie as an immersed collection of arcs $y$ in $\Sigma\setminus\nu(\bold x)$. At this point, we can perturb the bridge surface $\Sigma$ relative to $\widehat\beta\sqcup U$ to arrange that the cores $y$ be embedded in $\Sigma$. For details as to how this is achieved, we refer the reader to Figure~10 (and the corresponding discussion starting on page~17) of~\cite{MeiZup_17_Bridge-trisections-of-knotted}. Now that the cores $y_*$ of $\frak b'$ are embedded in $\Sigma$, we can further perturb $\Sigma$ relative to $\widehat\beta\sqcup U$ (as in Figure~11 of~\cite{MeiZup_17_Bridge-trisections-of-knotted}) to achieve that $\frak b'\cap\Sigma$ is precisely the cores of $\frak b'$. Thus, we have that the bands $\frak b'$ satisfy property (4). A further perturbation of $\Sigma$ relative to $\widehat\beta\sqcup U$ produces, for each band $\upsilon$ of $\frak b'$, a dualizing bridge disk $\Delta_\upsilon$, as required by property (5). (See Figure~12 of~\cite{MeiZup_17_Bridge-trisections-of-knotted}.) However, at this point it is possible that the $c_1$--component unlink $U'' = (\widehat\beta\sqcup U)_{\frak b'}$ is \emph{not} in standard $(b,v)$--bridge position; more precisely, it is possible that components of $U''$ intersect $Y_3$ in more than one strand. On the other hand, we automatically have that $U''\cap Y_3$ is a $v$--braid, since the band resolutions changing $\mathcal L\cup U$ into $U''$ were supported away from $Y_3$.
Moreover, we know that $U''\cap H_i$ is a $(b,v)$--tangle; this follows from the proof of Lemma~3.1 of~\cite{MeiZup_17_Bridge-trisections-of-knotted}. Thus, we must modify $U''$ in order to obtain an unlink in standard position. To do so, we will produce a new collection $\frak b''$ of bands such that $U' = U''_{\frak b''}$ is a $(c_1+v)$--component unlink in $(b,v)$--bridge position. We call the bands $\frak b''$ \emph{helper bands}. We will then let $\frak b = \frak b'\sqcup\frak b''$, and the proof will be complete. Since $(Y_3,\beta_3)$ is a $v$--braid, there is a collection of bridge triangles $\Delta$ for $\beta_3$. Let $\omega = \Delta\cap(P_1\cup_B\overline P_3)$. Let $\frak b''$ denote the collection of $v$ bands whose cores are the arcs $\omega$ and that are framed by the two-sphere $P_1\cup_B\overline P_3$. By a minor isotopy that fixes $U''$ set-wise (and point-wise away from a neighborhood of $\partial\omega$), we consider $\frak b''$ as lying in the interior of $H_1\cup_\Sigma\overline H_3$. Thus, $\frak b''$ is a collection of bands for $\mathcal T_1\cup_\bold x\overline\mathcal T_3$. See Figure~\ref{fig:helpers} for two simple examples. \begin{figure}[h!] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{helper1} \caption{} \label{fig:helpers1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{helper2} \caption{} \label{fig:helpers2} \end{subfigure} \caption{Adding extra bands to ensure that $U'$ is in standard $(b,v)$--bridge position.} \label{fig:helpers} \end{figure} Let $U' = U''_{\frak b''}$. Let $J$ denote the components of $U'$ containing the strands of $\beta_3$. Since the helper bands $\frak b''$ were created from the bridge triangles of $\Delta$, we find that $J$ bounds a collection of $v$ disjoint meridional disks for $B$. In particular, $J$ is a $v$--component unlink in $v$--braid position with respect to $B$. Let $K = U'\setminus J$, and note that $K$ is isotopic (disregarding the Heegaard double structure) to the unlink $U''$. It follows that $K$ is a $c_1$--component unlink in bridge position with respect to $\Sigma$. Therefore, $U'$ is a $(c_1+v)$--component unlink in standard $(b,v)$--bridge position, as required by property (6) of Definition~\ref{def:bridge-braided}. Now, to wrap up the construction, we let $\frak b = \frak b'\cup\frak b''$. While we have arranged that the bands of $\frak b'$ are in the right position with respect to the Heegaard splitting, we must now repeat the process of perturbing the bridge splitting in order to level the helper bands $\frak b''$. The end result is that the bands of $\frak b$ satisfy properties (4) and (5) of Definition~\ref{def:bridge-braided}. In the process, we have not changed the fact that properties (1)--(3) and (6) are satisfied, though we may have further increased the parameters $b'$ and $b''$ (and, thus, $b=b'+b''$) during this latest bout of perturbing. We complete the proof by noting that $|U|=c_3$, $|U'|=c_1+v$, and $|\frak b|=n+v$. \end{proof} \begin{remark} \label{rmk:helpers} A key technical step in the proof of Proposition~\ref{prop:to_BBB_realizing} was the addition of the so-called \emph{helper bands} $\frak b''$ to the original set $\frak b'$ of bands; these extra bands were necessary to ensure that $U'$ was in standard position. In the proof, $\frak b''$ consisted of $v$ bands; in practice, one can make do with a subset of these $v$ bands.
This can be seen in the two simple examples of Figure~\ref{fig:helpers}, where the addition of only one band (in each example) suffices to achieve standard bridge position. In Figure~\ref{fig:helpers1}, the addition of the single band shown transforms an unknot component of $U''$ that is in 2--braid position into a pair of 1--braids (one of which is perturbed) in the link $U'$. In Figure~\ref{fig:helpers2}, an unknot component that is not braided at all is transformed to the same result. In each of these examples, the addition of a second band corresponding to the second arc of $\omega$ would be superfluous. From a Morse-theoretic perspective, the helper bands correspond to cancelling pairs of minima and saddles: the minima are the meridional disks bounded by $J$. Using more bands from $\frak b''$ than is strictly necessary results in a surface with more minima (and bands) than are actually required to achieve the desired bridge-braided band presentation. Below, when we convert the bridge-braided band presentation to a bridge trisection, we will see that the superfluous bands and minima have the effect that the bridge trisection produced is perturbed -- see Section~\ref{sec:stabilize}. Another way of thinking about the helper bands is that they ensure that the trivial disk-tangle $\mathcal D_1$ in the resulting bridge trisection has enough vertical patches. \end{remark} Before proving that a bridge-braided band presentation can be converted to a bridge trisection, we pause to give a few examples illustrating the process of converting a band presentation into a bridge-braided band presentation. \begin{example} \label{ex:F81} \textbf{(Figure-8 knot Seifert surface)} Figure~\ref{fig:f81} shows a band presentation for the genus one Seifert surface for the figure-8 knot, together with a gray dot representing an unknotted curve about which the knot will be braided; this braiding is shown in Figure~\ref{fig:f82}. Note that the resolution of the bands at this point would yield an unknot (denoted $U''$ in the proof of Proposition~\ref{prop:to_BBB_realizing}) that is in 3--braid position. Thus, at least two helper bands are needed. In Figure~\ref{fig:f83} we have attached three helper bands, as described in the proof of Proposition~\ref{prop:to_BBB_realizing}. Note that the cores of these bands are simultaneously parallel to the arcs one would attach to form the braid closure, and the disks exhibiting this parallelism correspond to the bridge triangles in the proof. In Figure~\ref{fig:f84}, all five bands have been leveled so that they are framed by the bridge sphere, intersecting it only in their cores. In addition, each band is dualized by a bridge disk for $\mathcal T_3$. Three of these bridge disks are obvious. The remaining two are only slightly harder to visualize; one can choose relatively simple disks corresponding to any two of the three remaining flat arcs. \begin{figure}[h!]
\begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.6\linewidth]{f81} \caption{} \label{fig:f81} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=.5\linewidth]{f82} \caption{} \label{fig:f82} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=.9\linewidth]{f83} \caption{} \label{fig:f83} \end{subfigure}% \begin{subfigure}{.45\textwidth} \centering \includegraphics[width=.8\linewidth]{f84} \caption{} \label{fig:f84} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\linewidth]{f85} \caption{} \label{fig:f85} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\linewidth]{f86} \caption{} \label{fig:f86} \end{subfigure} \caption{(A)--(D) The process of converting a band presentation for the genus one Seifert surface for the figure-8 knot into a bridge-braided band presentation. (E) A tri-plane diagram corresponding to the bridge-braided band presentation of~(D). (F) The pairwise unions of the seams of the corresponding bridge trisection. See Figure~\ref{fig:fig8} for a second instantiation of this example.} \label{fig:f8} \end{figure} Figure~\ref{fig:f85} shows a tri-plane diagram for the bridge trisection that can be obtained from the bridge-braided band presentation given in Figure~\ref{fig:f84} according to Proposition~\ref{prop:band_to_bridge}. (See Section~\ref{sec:tri-plane} for precise details regarding tri-plane diagrams.) Figure~\ref{fig:f86} shows the pairwise unions of the seams of this bridge trisection. Relevant to the present discussion is the fact that the latter two unions each contain a closed, unknotted component. The fact that the red-blue union contains such a component is related to the fact that we chose to use three helper bands, when two would suffice. The fact that the green-blue union contains such a component is related to the fact that the bridge splitting in Figure~\ref{fig:f84} is excessively perturbed. We leave it as an exercise to the reader to deperturb the bridge splitting of Figure~\ref{fig:f84} to obtain a simpler bridge-braided band presentation. \end{example} \begin{example} \label{ex:f82} \textbf{(Figure-8 knot Seifert surface redux)} As discussed in Remark~\ref{rmk:helpers}, it is often not necessary to append $v$ helper bands. The frames of Figure~\ref{fig:fig8} are analogous to those of Figure~\ref{fig:f8}, with the main change being that only two of the three helper bands are utilized. The two innermost bands from Figure~\ref{fig:f83} have been chosen, and they have each been slid once over the original bands from Figure~\ref{fig:fig82} to make the subsequent pictures slightly simpler. \begin{figure}[h!]
\begin{subfigure}{.2\textwidth} \centering \includegraphics[width=.64\linewidth]{fig81} \caption{} \label{fig:fig81} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=.72\linewidth]{fig82} \caption{} \label{fig:fig82} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=.72\linewidth]{fig83} \caption{} \label{fig:fig83} \end{subfigure}% \begin{subfigure}{.4\textwidth} \centering \includegraphics[width=.7\linewidth]{fig84} \caption{} \label{fig:fig84} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{fig85} \caption{} \label{fig:fig85} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{fig86} \caption{} \label{fig:fig86} \end{subfigure} \caption{(A)--(D) The process of converting a band presentation for the genus one Seifert surface for the figure-8 knot into a bridge-braided band presentation. (E) A tri-plane diagram corresponding to the bridge-braided band presentation of~(D). See Figure~\ref{fig:f8} for another instantiation of this example.} \label{fig:fig8} \end{figure} Since fewer bands are included, the bridge splitting required to level and dualize them is simpler. In this case, the perturbing in Figure~\ref{fig:fig84} is minimal. In light of these variations, we see in Figure~\ref{fig:fig86} that the pairwise unions of the seams of the bridge trisection contain no closed components, implying the bridge trisection is not perturbed -- see Section~\ref{sec:stabilize}. \end{example} \begin{example} \label{ex:steve} \textbf{(Stevedore knot ribbon disk)} Figure~\ref{fig:steve1} shows a band presentation for a ribbon disk for the stevedore knot, together with a gray dot representing an unknotted curve about which the knot is braided in Figure~\ref{fig:steve2}. Note that the result of resolving the band in Figure~\ref{fig:steve2} is a 4--braiding of the 2--component unlink, with each component given by a 2--braid. Thus, at least two helper bands are required to achieve bridge-braided band position in this example; Figure~\ref{fig:steve3} shows two such bands that suffice. (See Remark~\ref{rmk:helpers2} below.) \begin{figure}[h!] \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.8\linewidth]{steve1} \caption{} \label{fig:steve1} \end{subfigure}% \begin{subfigure}{.225\textwidth} \centering \includegraphics[width=.48\linewidth]{steve2} \caption{} \label{fig:steve2} \end{subfigure}% \begin{subfigure}{.225\textwidth} \centering \includegraphics[width=.8\linewidth]{steve3} \caption{} \label{fig:steve3} \end{subfigure}% \begin{subfigure}{.4\textwidth} \centering \includegraphics[width=.7\linewidth]{steve4} \caption{} \label{fig:steve4} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=.9\linewidth]{steve5} \caption{} \label{fig:steve5} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=.9\linewidth]{steve6} \caption{} \label{fig:steve6} \end{subfigure} \caption{(A)--(D) The process of converting a band presentation for a ribbon disk for the stevedore knot into a bridge-braided band presentation. (E) A tri-plane diagram corresponding to the bridge-braided band presentation of~(D). 
(F) A second tri-plane diagram, obtained from the first via a sequence of tri-plane moves.} \label{fig:steve} \end{figure} Figure~\ref{fig:steve4} gives a bridge-braided band presentation for the ribbon disk, with the caveat that the helper bands do not appear to be leveled as shown. However, we claim that such a leveling is possible: First, note that the left helper band can be isotoped so that its core lies in the bridge sphere without self-intersection. Depending on how one chooses to do this, the core may intersect the core of the dark blue band (the original fission band for the ribbon disk). However, since this latter band is dualized by a bridge disk for $\mathcal T_3$, there is an isotopy pushing the helper band off the fission band. At this point, the left helper band and the fission band are both level, disjoint, and dualized by bridge disks. Now, we note that the right helper band can be isotoped so that its core lies in the bridge sphere without self-intersection. To do this, however, we must slide the right helper band over the fission band so that their endpoints (attaching regions) are disjoint. Again, the core may intersect the cores of the other two bands, but since the other two bands are each dualized by bridge disks, we may push the core of the right helper band off the cores of the other two bands. The end result is that all three bands lie in the required position. Figure~\ref{fig:steve5} shows a tri-plane diagram for the bridge trisection corresponding to the bridge-braided band position from Figure~\ref{fig:steve4}. It is worth observing that it was not necessary to carry out the leveling of the bands described in the previous paragraph; it suffices simply to know that it can be done. Had we carried out the leveling described above, the result would have been a tri-plane diagram that could be related to the one given by a sequence of interior Reidemeister moves. Figure~\ref{fig:steve6} shows a tri-plane diagram that is related to the tri-plane diagram of Figure~\ref{fig:steve5} by tri-plane moves. See Section~\ref{sec:tri-plane} for details regarding these moves. \end{example} \begin{remark} \label{rmk:helpers2} There is a subtle aspect to Figure~\ref{fig:steve3} that is worth pointing out. Suppose instead that the left helper band were chosen to cross over the braid in the two places where it crosses under. It turns out that this new choice is still a helper band but would fail to result in a bridge-braided band position. To be precise, let $\mathcal T$ denote the braid in Figure~\ref{fig:steve3}, which we think of as a 4--stranded tangle, and let $\frak b$ denote this new choice of bands -- i.e., three bands that are identical to the ones shown in Figure~\ref{fig:steve3}, except that the left helper band passes above $\mathcal T$ in two places, rather than under. The resolution $\mathcal T_\frak b$ is a new 4--stranded tangle. Regardless of any concerns about bridge position that could be alleviated by perturbing $\mathcal T$, it is necessary that $\mathcal T_\frak b$ be a 4--braid. However, this is not the case in this example. In fact, $\mathcal T_\frak b$ is not even a trivial tangle! The reader can check that $\mathcal T_\frak b$ is the split union of two trivial arcs, together with a 2--stranded tangle $\mathcal T'$ that has the square knot as a closure. So, the ``helper bands'' of the $\frak b$ presently being considered are not actually helper bands in the sense that they don't transform $U''$ into an unlink $U'$ in standard position, as required.
Of course, by the proof of Proposition~\ref{prop:to_BBB_realizing}, we know that we can augment $\frak b$ by adding two more helper bands, resulting in a total of five bands, so that the result can be bridge-braided. On the other hand, Figure~\ref{fig:steve} shows that it is possible to achieve a bridge-braided band position with fewer than four helper bands; comparison of Figures~\ref{fig:f8} and~\ref{fig:fig8} gives another example of this. Precisely when this is possible and precisely how one chooses a more efficient set of helper bands of this sort is not clear; we pose the following question. \end{remark} \begin{question} Does there exist a surface $\mathcal F$ in $B^4$ such that every $(b,v)$--bridge-braided band presentation of $\mathcal F$ requires $v$ helper bands? \end{question} Such a surface would have the property that every bridge trisection contains some flat patches. For this reason, it cannot be ribbon, due to the results of Subsection~\ref{subsec:Ribbon} below. Having discussed in detail the above examples, we now return our attention to the goal of bridge trisecting surfaces. \begin{proposition}\label{prop:band_to_bridge} Let $\mathcal F\subset B^4$ be the realizing surface for a $(b,\bold c;v)$--bridge-braided band presentation $(\widehat\beta,U,\frak b)$. Then, $\mathcal F$ admits a $(b,\bold c;v)$--bridge trisection $\mathbb T_{(\widehat\beta,U,\frak b)}$. \end{proposition} \begin{proof} As in Proposition~\ref{prop:band_pres}, we imagine that the 2--complex $\mathcal L\cup U\cup \frak b$ corresponding to the bridge-braided band presentation $(\widehat\beta, U, \frak b)$ is lying in the level set $B^4_{\{2\}}$, which inherits the Heegaard double structure $(H_1,H_3,Y_3)$. Assume that $\mathcal F$ is the corresponding realizing surface. We modify this 2--complex so that the bands $\frak b$ lie in the interior of $H_3$, rather than centered on $\Sigma$. Let $\epsilon>0$, and assume that the resolution of the bands $\frak b$ for $\mathcal L\cup U$ occurs in $H_3(2-\epsilon,2)$. So, $\mathcal F\{2\} = \mathcal L\cup U$, while $\mathcal F\{2-\epsilon\} = U'$. Let $(P_3^+,\bold x_3^+)$ denote a slight push-off of $(P_3,\bold x_3)$ into $(H_3,\mathcal T_3)$. Let $(Y^-_3,\beta^-_3)$ denote the corresponding contraction of $(Y_3,\beta_3)$, and let $(H^+_3,\mathcal T^+_3)$ denote the corresponding expansion of $(H_3,\mathcal T_3)$. In other words, we remove a (lensed) collar of $P_3$ from $Y_3$ and add it to $H_3$. We will now describe the pieces of a bridge trisection for $\mathcal F$. Figure~\ref{fig:Morse_to_tri} serves as a guide to understanding these pieces. Define: \begin{enumerate} \item $(\Sigma',\bold x') = (\Sigma,\bold x)\{2\}\cup B[2,4]$; \item $(H_1',\mathcal T_1') = (H_1,\mathcal T_1)\{2\} \cup (P_1,\bold x_1)[2,4]$; \item $\overline{(H_2',\mathcal T_2')} = (\Sigma,\bold x)[2-\epsilon,2] \cup (H_3^+,\mathcal T_3^+)\{2-\epsilon\} \cup (P_3^+,\bold x_3^+)[2-\epsilon,4]$; \item $(H_3',\mathcal T_3') = (H_3,\mathcal T_3)\{2\} \cup (P_3,\bold x_3)[2,4] $; \item $(Z_1',\mathcal D_1') = (B^4,\mathcal F)_{[0,2-\epsilon]}\cup((H_1,\mathcal T_1)[2-\epsilon,2])\cup(Y_3^-,\beta_3^-)[2-\epsilon,2]$; \item $(Z_2',\mathcal D_2') = ((B^4,\mathcal F)_{[2-\epsilon,2]}\cap H_3^+[2-\epsilon,2]) \cup ((Y_3\setminus\text{Int}(Y_3^-),\beta_3\setminus\text{Int}(\beta_3^-))[2,4])$; and \item $(Z_3',\mathcal D_3') = (B^4,\mathcal F)_{[2,4]}\cap(H_1\cup_\Sigma\overline H_3)[2,4]$. \end{enumerate} \begin{figure}[h!]
\centering \includegraphics[width=.5\textwidth]{Morse_to_tri} \caption{A schematic illustrating how to obtain a bridge trisection from a bridge-braided band presentation; codimension two objects are not shown.} \label{fig:Morse_to_tri} \end{figure}
It is straightforward to verify that the pairs (1)-(7) have the right topology, except in the case of (3) and (6), where slightly more care is needed. For (3), the claim is that $(H_2',\mathcal T_2') \cong (H_2, (\mathcal T_2)_\frak b)$ is a trivial $(b,v)$--tangle. For (6), the claim is that the trace $(Z_2',\mathcal D_2')$ of this band attachment is a trivial $(c_2,v)$--disk-tangle. Both of these claims follow from the fact that each band of $\frak b$ is dualized by a bridge disk for $\mathcal T_3$; this is essentially Lemma~3.1 of~\cite{MeiZup_17_Bridge-trisections-of-knotted}. Finally, it only remains to verify that the pieces (1)-(7) intersect in the desired way. This is straightforward to check, as well. \end{proof}
\begin{remark} \label{rmk:or1} Care has been taken to track the orientations throughout this section so that the orientations of the pieces of the bridge trisection produced in Proposition~\ref{prop:band_to_bridge} agree with the orientation conventions given in Subsection~\ref{subsec:Formal}. For example, the union $H_1\cup_\Sigma\overline H_3$ appearing in the bridge-braided band presentation set-up of Definition~\ref{def:bridge-braided} gets identified with a portion of $B^4\{2\}$ in the proof of Proposition~\ref{prop:band_to_bridge}, where it is oriented as the boundary of $B^4[0,2]$. This agrees with the convention that $\partial Z_1 = H_3\cup_\Sigma H_1\cup Y_3$, so $\partial(Z_2\cup Z_3) = Y_1\cup H_1\cup_\Sigma \overline H_3\cup Y_2$. See Figure~\ref{fig:two-thirds}. \end{remark}
\begin{proposition}\label{prop:bridge_to_band} If $\mathcal F$ admits a $(b,\bold c;v)$--bridge trisection, then $\mathcal F = \mathcal F_{(\widehat\beta, U,\frak b)}$ for some $(b,\bold c;v)$--bridge-braided band presentation $(\widehat\beta,U,\frak b)$. \end{proposition}
\begin{proof} Suppose $\mathcal F$ is in bridge position with respect to $\mathbb T_0$. Consider the link $L_3 = \beta_3\cup\mathcal T_3\cup\overline\mathcal T_1 = \partial \mathcal D_3$. Let $\overline L$ denote the vertical components of $L_3\setminus\text{Int}(\beta_3) = \mathcal T_3\cup_\bold x\overline\mathcal T_1$, and let $U$ denote the flat components. Then we have $\partial\mathcal D_3 = \overline L\cup\beta_3\cup U$; in particular, $\overline L$ is parallel to $\beta_3$ (as oriented tangles) through the vertical disks of $\mathcal D_3$. Let $\mathcal L$ be the closed one-manifold given by $$\beta_1\cup\beta_2\cup \overline L.$$ By the above reasoning, $\mathcal L$ is boundary parallel to the boundary braid $\beta_1\cup\beta_2\cup\beta_3 = \widehat\beta = \partial\mathcal F$ via the vertical disks of $\mathcal D_3$. \begin{figure}[h!] \centering \includegraphics[width=.3\textwidth]{two-thirds} \caption{Two-thirds of a trisection, with induced orientations on the boundary.} \label{fig:two-thirds} \end{figure}
Let $Y = Y_1\cup H_1 \cup \overline H_3\cup Y_2$ and note that $Y$ has the structure of a standard Heegaard-double decomposition $(H_1, H_3, Y_1\cup Y_2)$ on $S^3 = \partial(Z_1\cup Z_2)$ and is oriented as the boundary of $Z_1\cup Z_2$, which induces the opposite orientations on the 3--balls $H_1$ and $H_3$ as does $Z_3$. See Figure~\ref{fig:two-thirds}. It will be with respect to this structure that we produce a bridge-braided band presentation for $\mathcal F$.
Note that $\mathcal L\cap (Y_1\cup Y_2)$ is already a $v$--braid, giving condition (1) of the definition of a bridge-braided band presentation. Similarly, conditions (2) and (3) have been met given the position of $\overline L\cup U$ with respect to the Heegaard splitting $H_1\cup_\Sigma \overline H_3$. Next, we must produce the bands $\frak b$. This is done in the same way as in Lemma~3.3 of~\cite{MeiZup_17_Bridge-trisections-of-knotted}. We consider the bridge splitting $(H_2,\mathcal T_2)\cup_{(\Sigma,\bold x)}\overline{(H_3,\mathcal T_3)}$, which is standard -- i.e., the union of a perturbed braid and a bridge split unlink. Choose shadows $\mathcal T_2^*$ and $\mathcal T_3^*$ on $\Sigma$ for these tangles. Note that we choose shadows only for the flat strands in each tangle, not for the vertical strands. Because the splitting is standard, we may assume that $\mathcal T_2^*\cup\mathcal T_3^*$ is a disjoint union of $c_2$ simple closed curves $C_1,\ldots,C_{c_2}$, together with some embedded arcs, in the interior of $\Sigma$. For each closed component $C_i$, choose a shadow $\bar \tau_i^*\subset(\mathcal T_2^*\cap C_i)$. Let $$\omega^* = \mathcal T_2^*\setminus\left(\bigcup_{i=1}^{c_2}\bar\tau_i^*\right).$$ In other words, $\omega^*$ consists of the shadow arcs of $\mathcal T_2^*$, less one arc for each closed component of $\mathcal T_2^*\cup\mathcal T_3^*$. Note that $|\omega^*| = b-c_2$. The arcs of $\omega^*$ will serve as the cores of the bands $\frak b$ as follows. Let $\frak b = \omega^*\times I$, where the interval is in the vertical direction with respect to the Heegaard splitting $H_1\cup_{\overline\Sigma} \overline H_3$. In other words, $\frak b$ is a collection of rectangles with vertical edges lying on $\overline L\cup U$ and a horizontal edge in each of $H_1$ and $\overline H_3$ that is parallel through $\frak b$ to $\omega^*$. We see that condition (4) is satisfied. Note that the arcs $\omega^*$ came from chains of arcs in $\mathcal T_2^*\cup\mathcal T_3^*$, so each one is adjacent to a shadow arc in $\mathcal T_3^*$. This is obvious in the case of the closed components, since each such component must be an even-length chain of shadows alternating between $\mathcal T_2^*$ and $\mathcal T_3^*$. Similarly, each non-closed component consists of alternating shadows. This follows from the fact that these arcs of shadows correspond to vertical components of $\overline L$, each of which must have the same number of bridges on each side of $\Sigma$. These adjacent shadow arcs in $\mathcal T_3^*$ imply that $\frak b$ is dual to a collection of bridge disks for $\mathcal T_3$, as required by condition (5). Finally, let $U' = \mathcal L_\frak b$, which should be thought of as lying in $H_1\cup \overline H_2\cup Y_1$. In fact, $U' = \mathcal T_1\cup\overline \mathcal T_2\cup\beta_1$, so it is the standard link $L_{c_1,w}$ in the standard Heegaard-double structure on $\partial Z_1$. Thus, (6) is satisfied, and the proof is complete. \end{proof}
The following example illustrates the proof of Proposition~\ref{prop:bridge_to_band}.
\begin{example} \label{ex:square} Figure~\ref{fig:square1} shows a tri-plane diagram for a surface that we will presently determine to be the standard ribbon disk for the square knot, as described by the band presentation in Figure~\ref{fig:square7}. The first step to identifying the surface is to identify the boundary braid. In the proof of Proposition~\ref{prop:bridge_to_band}, this was done by considering the union $\beta_1\cup\beta_2\cup\overline L$.
Diagrammatically, this union can be exhibited by the following three-part process: (1) Start with the cyclic union $$\mathcal T_1\cup\overline\mathcal T_3\cup\mathcal T_3\cup\overline\mathcal T_2\cup\mathcal T_2\cup\overline\mathcal T_1$$ of the seams of the bridge trisection; see Figure~\ref{fig:square3}. (2) Discard any components that are not braided; there are no such components in the present example, though there would be if this process were repeated with the tri-plane diagram in Figure~\ref{fig:f85} -- a worthwhile exercise. (3) Straighten out (deperturb) near the intersections $\mathcal T_3\cap\overline\mathcal T_2$ and $\mathcal T_2\cap\overline\mathcal T_1$; see Figure~\ref{fig:square4}. If we continued straightening out near $\mathcal T_1\cap\overline\mathcal T_3$, we would obtain a braid presentation for the boundary link; see Subsection~\ref{subsec:tri-plane_braid} for a discussion relating to this point. Presently, however, it suffices to consider the 1--manifold $\beta_1\cup\beta_2\cup\overline L$ shown in Figure~\ref{fig:square4}, which we know to be isotopic (via the deperturbing near $\mathcal T_1\cap\overline\mathcal T_3$) to the boundary braid.
\begin{figure}[h!] \centering \begin{tabular}{ccccc} \multirow{3}{*}{ \begin{subfigure}{.21\textwidth} \setcounter{subfigure}{2} \centering \includegraphics[width=.9\linewidth]{square3} \caption{} \label{fig:square3} \end{subfigure}% } & \multirow{3}{*}{ \begin{subfigure}{.21\textwidth} \centering \includegraphics[width=.9\linewidth]{square4} \caption{} \label{fig:square4} \end{subfigure}% } & \multicolumn{3}{c}{ \begin{subfigure}{.48\textwidth} \setcounter{subfigure}{0} \centering \includegraphics[width=\linewidth]{square1} \caption{} \label{fig:square1} \end{subfigure}% } \\[.75in] & & \multicolumn{3}{c}{ \begin{subfigure}{.48\textwidth} \centering \includegraphics[width=.5\linewidth]{square2} \caption{} \label{fig:square2} \end{subfigure}% } \\[1.5in] & & \begin{subfigure}{.15\textwidth} \setcounter{subfigure}{4} \centering \includegraphics[width=.9\linewidth]{square5} \caption{} \label{fig:square5} \end{subfigure}% & \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.6\linewidth]{square6} \caption{} \label{fig:square6} \end{subfigure}% & \begin{subfigure}{.15\textwidth} \centering \includegraphics[width=.6\linewidth]{square7} \caption{} \label{fig:square7} \end{subfigure}% \par\vspace{2mm} \end{tabular} \caption{The process of converting the tri-plane diagram (A) into a bridge-braided band presentation (E) in order to identify the underlying surface, which in this case can be seen to be the standard ribbon disk for the square knot (G).} \label{fig:square} \end{figure}
Having identified the boundary braid, we must identify a set of bands that will exhibit a bridge-braided band presentation corresponding to the original bridge trisection. Following the proof of Proposition~\ref{prop:bridge_to_band}, these bands will come from a subset of the shadows $\mathcal T_2^*$. To this end, shadows for the tangles $\mathcal T_2$ and $\mathcal T_3$ are shown in Figure~\ref{fig:square2}. If there are closed components, one shadow of $\mathcal T_2^*$ is discarded from each such component. In the present example, this step is not necessary; again, consider repeating this exercise with the tri-plane diagram from Figure~\ref{fig:f85}. So, the set $\omega^*$ of cores of the bands we are looking for consists precisely of the blue shadows of Figure~\ref{fig:square2}.
In Figure~\ref{fig:square4} these shadows have been thickened vertically into bands that are framed by the bridge sphere $\mathcal T_1\cap\overline\mathcal T_3$. In Figure~\ref{fig:square5}, this picture has been simplified, and the bands have been perturbed into $\overline\mathcal T_3$. In Figure~\ref{fig:square6}, the bridge splitting structure has been forgotten, and the boundary braid is clearly visible. At this point, we see that one band (light blue) is a helper band and can be discarded. At last, in Figure~\ref{fig:square7}, we recover an efficient band presentation for the surface originally described by the tri-plane diagram of Figure~\ref{fig:square1}. \end{example}
\begin{example} \label{ex:mono} \textbf{(2--stranded torus links)} Figure~\ref{fig:mono1} shows a tri-plane diagram corresponding to a bridge trisection of the M\"obius band bounded in $S^3$ by the $(2,3)$--torus knot; see Figure~\ref{fig:mono4} for the band presentation. However, this example could be generalized by replacing the four half-twists in the first diagram $\mathbb P_1$ with $n$ half-twists for any $n\in\mathbb{Z}$, in which case the surface described would be the annulus (respectively, the M\"obius band) bounded by the $(2,n)$--torus link when $n$ is even (respectively, the $(2,n)$--torus knot when $n$ is odd).
\begin{figure}[h!] \begin{subfigure}{.4\textwidth} \centering \includegraphics[width=.9\linewidth]{2-braid_triplane} \caption{} \label{fig:mono1} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=.8\linewidth]{mono1} \caption{} \label{fig:mono2} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=.8\linewidth]{mono2} \caption{} \label{fig:mono3} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=.8\linewidth]{mono3} \caption{} \label{fig:mono4} \end{subfigure}% \caption{Recovering the boundary braid (D) from a tri-plane diagram (A), with bands tracked. The surface described is the M\"obius band bounded by the right-handed trefoil in $S^3$.} \label{fig:mono} \end{figure}
In any event, Figures~\ref{fig:mono2}--\ref{fig:mono4} give cross-sections of the bridge trisected surface with concentric shells of $B^4$, as described in Example~\ref{ex:monodromy} below. In this example, we also track the information about bands encoded in the tri-plane diagram; cf. Figure~\ref{fig:square} and Example~\ref{ex:square}. In slight contrast to the square knot examples, the shadows of $\mathcal T_2$ are quite simple, so the bands are easy to include. In Figure~\ref{fig:mono3}, it becomes apparent that the right band (light blue) is a helper band and can be disregarded. A shadow diagrammatic analysis of this example is given in Example~\ref{ex:Mob_sh}. \end{example}
\begin{theorem} \label{thm:four-ball} Let $\mathbb T_0$ be the standard trisection of $B^4$, and let $\mathcal F\subset B^4$ be a neatly embedded surface with $\mathcal L = \partial \mathcal F$. Fix an index $v$ braiding $\widehat\beta$ of $\mathcal L$. Suppose $\mathcal F$ has a handle decomposition with $c_1$ cups, $n$ bands, and $c_3$ caps. Then, for some $b\in\mathbb{N}_0$, $\mathcal F$ can be isotoped to be in $(b,\bold c;v)$--bridge trisected position with respect to $\mathbb T_0$, such that $\partial\mathcal F = \widehat\beta$, where $c_2=b-n$.
\end{theorem}
\begin{proof} By Proposition~\ref{prop:to_BBB_realizing}, $\mathcal F = \mathcal F_{(\widehat\beta,U,\frak b)}$ for some bridge-braided band presentation $(\widehat\beta,U,\frak b)$ of type $(b,\bold c;v)$. By Proposition~\ref{prop:band_to_bridge}, $\mathcal F$ admits a bridge trisection of the same type. \end{proof}
\subsection{Bridge-braided ribbon surfaces} \label{subsec:Ribbon} \ By construction, a $(b,\bold c;v)$--bridge-braided ribbon presentation $(\widehat\beta,\frak b)$ will have $c_3=0$. The next lemma shows that this fact can be used to systematically decrease the number $c_1$ of components of the unlink $U'$, at the expense of increasing the index $v$ of the braid $\widehat\beta$.
\begin{lemma}\label{lem:decrease_c1} If $\mathcal F$ is the realizing surface for a $(b,(c_1,c_2,0);v)$--bridge-braided ribbon presentation $(\widehat\beta,\frak b)$ with $c_1>0$, then $\mathcal F$ is the realizing surface for a $(b,(c_1-1,c_2,0);v+1)$--bridge-braided ribbon presentation $(\widehat\beta^+,\frak b)$, where $\widehat\beta^+$ is a Markov perturbation of $\widehat\beta$. The Markov perturbation can be assumed to be positive. \end{lemma}
\begin{proof} Suppose that $(\widehat\beta, \frak b)$ is a bridge-braided ribbon presentation with respect to the standard Heegaard double structure $(H_1,H_3,Y_3)$ on $S^3$, as in Definition~\ref{def:bridge-braided}. We orient $\widehat\beta$ so that it winds counterclockwise about the braid axis $B=\partial \Sigma$. This induces an orientation on the arcs of $\overline L=\mathcal T_1\cup_\bold x\overline \mathcal T_3$, which induces an orientation on the bridge points $\bold x$: A bridge point $x\in\bold x$ is \emph{positive} if an oriented arc of $\overline L$ passes from $H_1$ to $\overline H_3$ through $x$. Since $c_3=0$, every point of $\bold x$ can be oriented in this way. Recall from the proof of Proposition~\ref{prop:band_to_bridge} that we can perturb the bands of $\frak b$, which originally intersect $\Sigma$ in their core arcs, into the interior of $H_3$ so that they may be thought of as bands for the tangle $\mathcal T_3$. Let $\mathcal T_2 = (\mathcal T_3)_\frak b$, and let $L' = \mathcal T_1\cup_\bold x\overline\mathcal T_2$. Utilizing the assumption that $c_1>0$, let $J$ be a flat component of $U'$. Let $x$ be a positive point of $\bold x$ with $x\in J$. See Figure~\ref{fig:Markov_drag1}. Such a point exists, since $J$ contains a flat arc of $\mathcal T_1$, and the endpoints of this arc have differing signs. We perturb $\Sigma$ at $x$ to produce a new bridge splitting $\mathcal T_1'\cup_{\bold x'}\overline\mathcal T_3'$, which we consider as $\mathcal T_i' = \mathcal T_i\cup\tau_i$, where $\tau_i$ is the new flat strand near $x$. If $\Delta_i$ was a bridge system for $\mathcal T_i$, then $\Delta_i' = \Delta_i\cup D_i$ is a bridge system for $\mathcal T_i'$, where $D_i$ is a bridge semi-disk for $\tau_i$. See Figure~\ref{fig:Markov_drag2}, and note that there may or may not be a band attached to $\mathcal T_3$ near $x$. \begin{figure}[h!]
\begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Markov_drag1} \caption{} \label{fig:Markov_drag1} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Markov_drag2} \caption{} \label{fig:Markov_drag2} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Markov_drag3} \caption{} \label{fig:Markov_drag3} \end{subfigure} \caption{Modifying a bridge trisection of a ribbon surface to remove a flat patch at the expense of Markov-stabilizing the boundary braid.} \label{fig:Markov_drags} \end{figure}
Now, we have that $x' = \tau_1\cap\tau_3$ is negative. Let $x'_i = \partial\tau_i\setminus x$ denote the positive points introduced by this perturbation. Let $\lambda = \tau_1\cup_x\tau_3$. Note that we can assume there is no band of $\frak b$ incident to either $\tau_i$. The bridge splitting $\mathcal T_1'\cup_{\bold x'}\overline\mathcal T_3'$ is perturbed at $x'$. We will swap this perturbation for a Markov perturbation by dragging the point $x'$ towards and through the boundary $B$ of $\Sigma$. Let $\omega$ be an embedded arc in $\Sigma$ connecting $x'$ to $B$ such that $\text{Int}(\omega)\cap \bold x = \emptyset$. Since $\omega$ is dualized by each of the two small bridge semi-disks $D_i\subset \Delta_i'$, we can assume that $\text{Int}(\omega)\cap\Delta_i' = \emptyset$. Change $(\widehat\beta,\frak b)$ by an ambient isotopy that is supported in a tubular neighborhood of $\omega$ and that pushes $x'$ along $\omega$ towards and past $B$. This is a finger move of $\lambda$ along $\omega$. (Note that the surface $\mathcal F$ is locally a product of $\lambda$ near $x'$.) Let $\lambda'$ denote the end result of this finger move; i.e., a portion of $\lambda$ has been pushed out of $H_1\cup_\Sigma\overline H_3$ into $Y_3$. Let $\tau_i'' = \lambda'\cap H_i$. Let $\tau_{13}'' = \lambda'\cap Y_3$. Let $D_i''$ denote the bridge triangle resulting from applying the ambient isotopy to $D_i$. We see immediately that the $\tau_i''$ are vertical, and that $\Delta_i'' = (\Delta_i'\setminus D_i)\cup D_i''$ is a bridge system for $\mathcal T_i'' = (\mathcal T_i'\setminus\tau_i)\cup\tau_i''$. It's also clear that $\tau_{13}''$ is a vertical strand in $Y_3$. We make the following observations, with an eye towards Definition~\ref{def:bridge-braided}: \begin{enumerate} \item $\beta_3'' = \beta_3\cup\tau_{13}''$ is a $(v+1)$--braid. \item $L'' = \mathcal T_1''\cup_{\bold x''}\overline\mathcal T_3''$ is a perturbing of a $(v+1)$--braid. \item We still have $c_3=0$; the $\mathcal T_i''$ are $(b-v-1)$--perturbings of $(v+1)$--braids. \item The bands $\frak b$ can still be isotoped to intersect $\Sigma$ in their cores. \item The bridge disks $\Delta_3''$ dualize the bands $\frak b$. \item $(\widehat\beta)_\frak b$ has one fewer flat component. \end{enumerate} Thus, we have verified that conditions (1)-(6) of Definition~\ref{def:bridge-braided} are still satisfied, with the only relevant differences being that each tangle has an additional vertical strand and the flat component $J$ of $U'$ is now vertical. It follows that we have produced a bridge-braided ribbon presentation $(\widehat\beta^+,\frak b)$ for $\mathcal F$, where $\widehat\beta^+$ is a Markov perturbation of $\widehat\beta$. \end{proof}
\begin{remark}\label{rmk:why_no_U} The hypothesis that $c_3=0$ in the above lemma was necessary to ensure that the process described in the proof resulted in a Markov perturbation of the boundary.
If $c_3>0$, then it is possible that each point $x\in\bold x\cap J$ lies on a (flat) component of $U$. If the proof were carried out in this case, it would have the effect of changing the link type from $\mathcal L$ to the split union of $\mathcal L$ with an unknot on the boundary of $\mathcal F$. This is reflective of the general fact that a non-ribbon $\mathcal F$ with boundary $\mathcal L$ can be thought of as a ribbon surface for the split union of $\mathcal L$ with an unlink. \end{remark}
Recall that $\bold c$ is an ordered partition of type $(c,3)$ for some $c\in\mathbb{N}_0$; in particular, $c=c_1+c_2+c_3$.
\begin{lemma}\label{lem:decrease_all_c} If $\mathcal F$ is the realizing surface for a $(b,\bold c;v)$--bridge-braided band presentation $(\widehat\beta,U,\frak b)$ with $c_i=0$ for some $i$, then $\mathcal F$ is the realizing surface for a $(b,0;v+c)$--bridge-braided ribbon presentation $(\widehat\beta^{++},\frak b'')$, where $\widehat\beta^{++}$ is a Markov perturbation of $\widehat\beta$. \end{lemma}
\begin{proof} Suppose $\mathcal F$ is the realizing surface for a $(b,\bold c;v)$--bridge-braided band presentation $(\widehat\beta,U,\frak b)$ with $c_i=0$ for some $i$. By Proposition~\ref{prop:band_to_bridge}, $\mathcal F$ admits a $(b,(c_1,c_2,c_3);v)$--bridge trisection filling $\widehat\beta$. By re-labeling the pieces, we can assume that $c_3=0$. By Proposition~\ref{prop:bridge_to_band}, this gives us a $(b,(c_1,c_2,0);v)$--bridge-braided ribbon presentation $(\widehat\beta,\frak b')$. Note that while the braid type $\widehat\beta$ hasn't changed, the bands $\frak b$ may have, and the intersection of $\widehat\beta$ with the pieces of the standard Heegaard-double decomposition may have, as well. Nonetheless, we can apply Lemma~\ref{lem:decrease_c1} iteratively to decrease $c_1$ to zero, at the cost of Markov-perturbing $\widehat\beta$ into a $(v+c_1)$--braid $\widehat\beta^+$. Passing back to a $(b,(0,c_2,0);v+c_1)$--bridge trisection filling $\widehat\beta^+$ via Proposition~\ref{prop:band_to_bridge}, re-labelling, and applying Proposition~\ref{prop:bridge_to_band}, we extract a $(b,(c_2,0,0);v+c_1)$--bridge-braided ribbon presentation $(\widehat\beta^+,\frak b'')$. Again, the bands and the precise bridge splitting may have changed. However, a second round of applications of Lemma~\ref{lem:decrease_c1} allows us to decrease $c_2$ to zero, at the cost of Markov perturbing $\widehat\beta^+$ into a $(v+c_1+c_2)$--braid $\widehat\beta^{++}$. Note that we have Markov perturbed a total of $c = c_1+c_2$ times. \end{proof}
\begin{theorem} \label{thm:ribbon} Let $\mathbb T_0$ be the standard trisection of $B^4$, and let $\mathcal F\subset B^4$ be a neatly embedded surface with $\mathcal L = \partial \mathcal F$. Let $\widehat\beta$ be an index $v$ braiding of $\mathcal L$. Then, the following are equivalent. \begin{enumerate} \item $\mathcal F$ is ribbon. \item $\mathcal F$ admits a $(b,\bold c;v)$--bridge trisection filling $\widehat\beta$ with $c_i=0$ for some $i$. \item $\mathcal F$ admits a $(b,0;v+c)$--bridge trisection filling a Markov perturbation $\widehat\beta^+$ of $\widehat\beta$. \end{enumerate} \end{theorem}
\begin{proof} Assume (1). Since $\mathcal F$ is ribbon, it admits a $(b,(c_1,c_2,0);v)$--bridge-braided ribbon presentation $(\widehat\beta,\frak b)$, by Proposition~\ref{prop:to_BBB_realizing}. By Proposition~\ref{prop:band_to_bridge}, this can be turned into a $(b,(c_1,c_2,0);v)$--bridge trisection filling $\widehat\beta$, which implies (2). Assume (2).
The bridge trisection filling $\widehat\beta$ with $c_i=0$ for some $i$ gives a bridge-braided ribbon presentation $(\widehat\beta,\frak b')$ with $c_i=0$ for the same $i$. By Lemma~\ref{lem:decrease_all_c}, there is a $(b,0;v+c)$--bridge-braided ribbon presentation $(\widehat\beta^{++},\frak b'')$ for $\mathcal F$, where $\widehat\beta^{++}$ is a Markov perturbation of $\widehat\beta$. By Proposition~\ref{prop:band_to_bridge}, this gives a $(b,0;v+c)$--bridge trisection of $\mathcal F$ filling $\widehat\beta^{++}$. This implies (3), where $\widehat\beta^{++}$ is denoted by $\widehat\beta^+$ for simplicity. Assume (3). The $(b,0;v+c)$--bridge trisection filling $\widehat\beta^+$ gives rise to a bridge-braided ribbon presentation $(\widehat\beta^+,\frak b'')$ of the same type, by Proposition~\ref{prop:bridge_to_band}, such that $\mathcal F = \mathcal F_{(\widehat\beta^+,\frak b'')}$. However, a band presentation of a surface is precisely a handle-decomposition of the surface with respect to the standard Morse function on $B^4$. It follows that $\mathcal F$ can be built without caps; hence, $\mathcal F$ is ribbon, and (1) is implied. Note for completeness that (2) can be seen to imply (1) by the argument immediately above, and that (3) implies (2) trivially. \end{proof}
\section{Tri-plane diagrams} \label{sec:tri-plane}
A significant feature of the theory of trisections (broadly construed) is that it gives rise to new diagrammatic representations for four-dimensional objects (manifolds and knotted surfaces therein). In this section, we describe the diagrammatic theory for bridge trisections of surfaces in the four-ball. Recall the notational set-up of Subsection~\ref{subsec:Special}. Let $(H,\mathcal T)$ be a tangle with $H\cong B^3$. Let $E\subset H$ be a neatly embedded disk with $\partial \mathcal T\subset \partial E$. By choosing a generic projection of $H$ onto $E$, we can represent $(H,\mathcal T)$ by a \emph{tangle diagram}. Since $H\cong B^3$, the lensed cobordism structure on $(H,\mathcal T)$ discussed in Subsection~\ref{subsec:Compression} can be thought of as inducing the hemispherical decomposition of $\partial H\cong S^2$. So, we refer to $\partial_+H$ and $\partial_-H$ as the \emph{southern} and \emph{northern} boundaries. This induces a decomposition of $\partial E$ into a \emph{northern arc} and a \emph{southern arc}. See Figure~\ref{fig:moves} for examples of $(1,2)$--tangle diagrams.
\begin{definition} A \emph{$(b,\bold c;v)$--tri-plane diagram} is a triple $\mathbb P = (\mathbb P_1,\mathbb P_2,\mathbb P_3)$ such that $\mathbb P_i$ is a $(b,v)$--tangle diagram and the union $\mathbb P_i\cup\overline{\mathbb P_{i+1}}$ is a tangle diagram for a split union of a $v$--braid with a $c_i$--component unlink. (Note that $\overline{\mathbb P_i}$ is the diagram $\mathbb P_i$ with crossing information reversed.) The southern arcs (and the $2b+v$ points $\bold x$ that they contain) are assumed to be identified. We denote the $v$ points contained in the northern arc of $\mathbb P_i$ by $\bold y_i$; the three northern arcs are not identified. \end{definition}
A tri-plane diagram describes a bridge trisected surface in the following way. Let $(H_i,\mathcal T_i)$ be tangles corresponding to the tangle diagrams $\mathbb P_i$. Then the triple of tangle diagrams can be thought of as describing the union $$(H_1,\mathcal T_1)\cup(H_2,\mathcal T_2)\cup(H_3,\mathcal T_3)$$ of these tangles, where $(H_i,\mathcal T_i)\cap\overline{(H_{i+1},\mathcal T_{i+1})} = (\Sigma,\bold x)$.
This explains the identification of the southern portions of the tangle diagrams in the definition. Now, by definition, each union $(H_i,\mathcal T_i)\cup\overline{(H_{i+1},\mathcal T_{i+1})}$ is the split union of a braid with an unlink of $c_i$ components inside a 3--ball. By Lemma~\ref{lem:LP}, there is a unique way to glom on to this 3--ball a $(c_i,v)$--disk-tangle $(Z_i,\mathcal D_i)$, where $Z_i\cong B^4$. Therefore, the union $$(Z_1,\mathcal D_1)\cup(Z_2,\mathcal D_2)\cup(Z_3,\mathcal D_3)$$ is a bridge trisected surface in $B^4$. The existence of bridge trisections for surfaces in $B^4$ was established in the previous section (Theorem~\ref{thm:four-ball}). Since bridge trisections are determined by their spine (Corollary~\ref{coro:spine}), this gives the following.
\begin{corollary} \label{coro:tri-plane} Every neatly embedded surface in $B^4$ can be described by a tri-plane diagram. \end{corollary}
\begin{proof} By Theorem~\ref{thm:four-ball}, every such surface in $B^4$ can be put in bridge position with respect to the genus zero trisection $\mathbb T_0$. The corresponding bridge trisection is determined by its spine $$(H_1,\mathcal T_1)\cup(H_2,\mathcal T_2)\cup(H_3,\mathcal T_3).$$ This spine can be represented by a tri-plane diagram by choosing a triple of disks $E_i\subset H_i$ whose boundaries agree and choosing generic projections $H_i\twoheadrightarrow E_i$ that induce tangle diagrams for the $\mathcal T_i$. \end{proof}
The union $E_1\cup E_2\cup E_3$ of disks that appeared in the proof above is called a \emph{tri-plane} for the bridge trisection. We consider bridge trisections up to ambient isotopy, and an ambient isotopy of a bridge trisection can change the induced tri-plane diagram. These changes can manifest in the following three ways, which we collectively call \emph{tri-plane moves}. See Figure~\ref{fig:moves} for an illustration of each move.
\begin{figure}[h!] \begin{subfigure}{\textwidth} \centering \includegraphics[width=.5\linewidth]{move_1} \caption{} \label{fig:b=11} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=.5\linewidth]{move_2} \caption{} \label{fig:b=13} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{\textwidth} \centering \includegraphics[width=.5\linewidth]{move_3} \caption{} \label{fig:b=12} \end{subfigure} \caption{(A) A tri-plane diagram. (B) The result of applying to (A) a bridge sphere braid transposition at the third and fourth bridge points. (C) The result of applying to (B) a page braid transposition in the first tangle and a Reidemeister move in the third tangle.} \label{fig:moves} \end{figure}
An \emph{interior Reidemeister move} on $\mathbb P$ is a Reidemeister move that is applied to the interior of one of the tangle diagrams $\mathbb P_i$. Interior Reidemeister moves correspond to ambient isotopies of the surface that are supported away from $\partial B^4$ and away from the core surface $\Sigma$. They also reflect the inherent indeterminacy of choosing a tangle diagram to represent a given tangle.
A \emph{core (braid) transposition} is performed as follows: Pick a pair of adjacent bridge points $x,x'\in\bold x$, recalling that $x$ and $x'$ are (identified) points in the southern arc of each of the three tangle diagrams. Apply a braid transposition to all three tangle diagrams that exchanges $x$ and $x'$. This introduces a crossing in each tangle diagram; the introduced crossing should have the same sign in each diagram.
This bridge sphere braiding corresponds to ambient isotopies of the surface that are supported in a neighborhood of the core surface $\Sigma$. Note that this gives an action of the braid group $\mathcal M(D^2,\bold x)$ on the set of tri-plane diagrams.
A \emph{page (braid) transposition} is performed as follows: Pick a pair of adjacent points $y,y'\in\bold y_i$ in the northern arc of one of the tangle diagrams. Apply a braid transposition to this tangle diagram that exchanges $y$ and $y'$. In contrast to a core transposition, the braid transposition is applied to only one of the three diagrams, rather than simultaneously to all of them. Page transpositions correspond to ambient isotopies of the surface that are supported near $\partial B^4$.
Interior Reidemeister moves and core transpositions featured in the theory of bridge trisections of closed surfaces in the four-sphere described in~\cite{MeiZup_17_Bridge-trisections-of-knotted}. See, in particular, Lemma~7.4 for more details.
\begin{proposition} \label{prop:tri-plane_moves} Suppose $\mathbb P$ and $\mathbb P'$ are tri-plane diagrams corresponding to isotopic bridge trisections. Then $\mathbb P$ and $\mathbb P'$ can be related by a finite sequence of tri-plane moves. \end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:tri-plane_moves}] As in the proof of Lemma~7.4 of~\cite{MeiZup_17_Bridge-trisections-of-knotted}, it suffices to assume that the two tri-plane diagrams $\mathbb P$ and $\mathbb P'$ are induced by different choices of tri-planes $E_1\cup E_2\cup E_3$ and $E_1'\cup E_2'\cup E_3'$ for a fixed bridge trisection. Furthermore, we can switch perspective and assume that the tri-planes agree, but the seams of the tangles do not. Thus, assume we have a fixed $\mathcal E =E_1\cup E_2\cup E_3$ within $\mathcal H = H_1\cup H_2\cup H_3$ and that we have two sets of seams $\mathcal T=\mathcal T_1\cup\mathcal T_2\cup\mathcal T_3$ and $\mathcal T'=\mathcal T_1'\cup\mathcal T_2'\cup\mathcal T_3'$ determining a pair of isotopic spines in $B^4$. Note that the southern endpoints of the $\mathcal T_i$ and the $\mathcal T_i'$ are both contained in the southern arc $\partial E_i\cap\text{Int}(B^4)$, while all the northern endpoints are contained in the northern arc $\partial E_i\cap \partial B^4$. Without loss of generality, we assume the northern (resp., southern) endpoints of $\mathcal T_i$ agree with the northern (resp., southern) endpoints of $\mathcal T_i'$ for each $i$.
As in the proof of Lemma~7.4 of~\cite{MeiZup_17_Bridge-trisections-of-knotted}, if $f_t$ is an ambient isotopy of $\mathcal H$ such that $f_0$ is the identity and $f_1(\mathcal T) =\mathcal T'$, then $f_t$ induces a loop in the configuration space of the bridge points $\bold x=\mathcal T\cap\Sigma$. In this setting, $f_t$ also induces, for each $i\in\mathbb{Z}_3$, a loop in the configuration space of the points $\bold y_i=\mathcal T_i\cap \partial_- H_i$ in the disk $\partial_- H_i$. We write $f_t$ as $f_t^\Sigma\cup f_t^1\cup f_t^2\cup f_t^3\cup f_t'$, where $f_t^\Sigma$ agrees with $f_t$ in a small neighborhood of $\Sigma$ and is the identity outside of a slightly larger neighborhood of $\Sigma$; $f_t^i$ agrees with $f_t$ in a small neighborhood of $\partial_- H_i$ and is the identity outside a slightly larger neighborhood of $\partial_- H_i$; and $f_t'$ is supported away from the small neighborhoods of $\Sigma$ and $\partial_- H_i$. Since these can be isolated to a single region near $\partial_- H_i$ for some $i$, they are independent of each other.
Since $f_t^\Sigma$ corresponds to a braiding of the bridge points $\bold x$, there are tri-plane diagrams $\mathbb P$ and $\mathbb P^\Sigma$ corresponding to $\mathcal T$ and $\mathcal T^\Sigma = f_1^\Sigma(\mathcal T)$ that are related by a sequence of core transpositions. Continuing, there is a tri-plane diagram $\mathbb P''$ corresponding to $\mathcal T''=(f_1^1\cup f_1^2\cup f_1^3)(\mathcal T^\Sigma)$ that is related to $\mathbb P^\Sigma$ by a sequence of page transpositions. Finally, the tri-plane diagram $\mathbb P'$ corresponds to $f_1'(\mathcal T'')$ and is related to $\mathbb P''$ by a sequence of interior Reidemeister moves. In total, $\mathbb P$ and $\mathbb P'$ are related by a sequence of tri-plane moves, as desired. \end{proof}
\subsection{Recovering the boundary braid from a tri-plane diagram} \label{subsec:tri-plane_braid} \ We now describe how to recover the boundary braid $(S^3,\mathcal L) = \partial(B^4,\mathcal F)$ from the data of a tri-plane diagram for $(B^4,\mathcal F)$. This process is illustrated in the example of the Seifert surface for the figure-8 knot in Figure~\ref{fig:monodromy}; cf. Figure~\ref{fig:f8} for more details regarding this example. See also Figure~\ref{fig:square} for another example.
Let $\mathbb P = (\mathbb P_1,\mathbb P_2,\mathbb P_3)$ be a tri-plane diagram for a surface $(B^4,\mathcal F)$. Let $\mathcal E = (E_1,E_2,E_3)$ denote the underlying tri-plane. Let $\partial_-E_i$ and $\partial_+E_i$ denote the northern and southern boundary arcs of these disks, respectively, and let $S^0_i = \partial_-E_i\cap\partial_+E_i$ denote their 0--sphere intersections. Recall that, diagrammatically, the arcs $\partial_+E_i$ correspond to the core surface $\Sigma$ of the trisection, which is a disk, and the 0--spheres $S^0_i$ correspond to the unknot $B = \partial\Sigma$, which we think of as the binding of an open-book decomposition of $S^3$ with three disk pages given by the $P_i$. Recall that $\Sigma$ is isotopic rel-$\partial$ to each of the $P_i$ via the arms $H_i$.
With this in mind, consider the planar link diagram $\circ\widehat\mathbb P$ obtained as follows. First, form the cyclic union $$\mathbb P_1\cup\overline\mathbb P_3\cup\mathbb P_3\cup\overline\mathbb P_2\cup\mathbb P_2\cup\overline\mathbb P_1\cup\mathbb P_1,$$ where $\mathbb P_{i+1}$ and $\overline\mathbb P_i$ are identified along their southern boundaries, $\overline\mathbb P_i$ and $\mathbb P_i$ are identified along their northern boundaries, and the two copies of $\mathbb P_1$ are identified point-wise. Note that the cyclic ordering here is the opposite of what one might expect. This important subtlety is explained in the proof of Proposition~\ref{prop:tri-plane_braid} below. The corresponding union of the disks of the tri-plane $$E_1\cup \overline E_3\cup E_3\cup \overline E_2\cup E_2\cup \overline E_1\cup E_1$$ is topologically a two-sphere $S^2$. In particular, the 0--spheres $S^0_i$ have all been identified with a single 0--sphere $S^0$, which we think of as the poles of the two-sphere. We represent this two-sphere in the plane by cutting open along $\partial_-E_1$ and embedding the resulting bigon so that the $E_i$ and $\overline E_i$ lie in the $yz$--plane with $E_3\cap\overline E_2$ on the $y$--axis. See Figure~\ref{fig:monodromy2}. In this way, the diagram $\circ\widehat\mathbb P$ encodes a link in a three-sphere. The unknotted binding $B$ in $S^3$ can be thought of as the unit circle in the $xy$--plane. (The positive $x$--axis points out of the page.)
Each longitudinal arc on $S^2$, including the northern and southern arcs of each $E_i$, corresponds to a distinct page, giving six in all. However, the ambient three-sphere in which this link lives is not $S^3=\partial B^4$, as the proof of Proposition~\ref{prop:tri-plane_braid} will make clear. Note that the diagram $\circ\widehat\mathbb P$ will have only two types of connected components: (1) components that meet each disk $E_i$ and are homotopically essential in $S^2\setminus\nu(S^0)$, and (2) components that are null-homotopic and are contained in some pair $E_{i+1}\cup\overline E_i$. Components of type (1) will correspond to the boundary link $(S^3,\mathcal L)$, while components of the second kind will correspond to split unknots. The components of type (1) are not braided in the sense of being everywhere transverse to the longitudinal arcs of $S^2$ but, as we shall justify below, they become braided after a sequence of Reidemeister moves and isotopies that are supported away from $S^0$. Define $\circ\mathbb P$ to be the result of discarding all components of type (2) from $\circ\widehat\mathbb P$, then straightening out the arcs of type (1) until they give a braid diagram in the sense that they are everywhere transverse to the longitudinal arcs connecting the poles $S^0\subset S^2$.
\begin{proposition} \label{prop:tri-plane_braid} Suppose $\mathbb P = (\mathbb P_1,\mathbb P_2,\mathbb P_3)$ is a tri-plane diagram for $(B^4,\mathcal F)$. Then the diagram $\circ\mathbb P$ is a braid diagram for the boundary link $(S^3,\mathcal L) = \partial(B^4,\mathcal F)$. \end{proposition}
\begin{proof} Consider the spine $H_1\cup H_2\cup H_3$ of the genus zero trisection $\mathbb T_0$ of $B^4$. Let $N$ be a small lensed neighborhood of this spine inside $B^4$. Here, the qualifier `lensed' has the effect that $N\cap\partial B^4$ is unchanged: $$N\cap\partial B^4 = P_1\sqcup P_2\sqcup P_3.$$ We can decompose $\partial N$ into six pieces: $$\partial N = H_1^+\cup H_3^-\cup H_3^+\cup H_2^-\cup H_2^+\cup H_1^-,$$ where the pieces intersect cyclically in the following manner: the $H_{i+1}^+\cap H_i^- = \Sigma_{i}^-$ are the three obvious push-offs of $\Sigma$ into $\partial N$, and $H_i^-\cap H_i^+ = P_i$. Because $B = \partial \Sigma =\partial \Sigma_i^- = \partial P_i$, it follows that $\partial N$ is a closed 3--manifold. In fact, there is an obvious `radial' diffeomorphism $N\to B^4$ that pushes $H_{i+1}^+\cup H_i^-$ onto $Y_i$ in an \emph{orientation-preserving} way. To unpack this last statement, recall that $Z_i$ induces an orientation on its boundary such that $$\partial Z_i = H_i\cup_\Sigma \overline H_{i+1}\cup Y_i.$$ In $\partial N$, we have corresponding pieces $H_{i+1}^+\cup_{\Sigma^-_i}H_i^-$, but the correspondences $$H_i\leftrightarrow H_i^-, \hspace{.5cm} \overline H_{i+1}\leftrightarrow H_{i+1}^+, \hspace{.5cm} \text{and} \hspace{.5cm} \Sigma\leftrightarrow\Sigma_i^-$$ all reverse orientation. This is because the outward normal to $N$ points into $Z_i$. Figure~\ref{fig:Morse1} provides a potentially helpful schematic.
Bringing the surface $\mathcal F$ into the picture, we have the identification $$(\partial N,\partial N\cap\mathcal F) = (H_1,\mathcal T_1)\cup\overline{(H_2,\mathcal T_2)}\cup(H_2,\mathcal T_2)\cup\overline{(H_3,\mathcal T_3)}\cup(H_3,\mathcal T_3)\cup\overline{(H_1,\mathcal T_1)}.$$ If $\mathcal E = E_1\cup E_2\cup E_3$ was our original tri-plane, then there are obvious disks $E_i^\pm\subset H_i^\pm$ onto which $\partial N\cap\mathcal F$ can be projected.
As discussed in the text preceding this proposition, the union of the $E_i^\pm$ is a two-sphere, which can be identified with the plane. Adopting this identification, we find that the induced diagram $\circ\widehat\mathbb P$ is a planar diagram for $\partial N\cap \mathcal F$. Recall that, by definition, $\mathbb P_{i+1}\cup\overline\mathbb P_i$ is a diagram for (the mirror of) a split union of a braid with an unlink. Thus, the total union $\circ\widehat\mathbb P$ is (currently) a diagram for the split union of a closed braid with three unlinks. Note that although the diagram describes a closed (geometric) braid, the diagram may not be braided. See Figure~\ref{fig:monodromy2}.
It remains to observe how this diagram changes as the neighborhood $N$ is enlarged until it fills up all of $B^4$ and $\partial N$ coincides with $S^3 = \partial B^4$. Two things happen in the course of this. First, the unlinks will shrink to points and disappear as the neighborhood $N$ is enlarged to encompass the flat patches of the trivial disk-tangles that cap them off. Second, the portions of the diagram corresponding to the closed braid will `straighten out', meaning they will deperturb until the diagram is an honest braid diagram. Finally, the neighborhood $N$ will coincide with all of $B^4$, the union of the $E_i^\pm$ will live in $S^3$, and the diagram $\circ\mathbb P$ will correspond to a braid diagram for $\mathcal L = \partial \mathcal F$, as desired. \end{proof}
\begin{figure}[h!] \begin{subfigure}{\textwidth} \centering \includegraphics[width=.8\linewidth]{f85} \caption{} \label{fig:monodromy1} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.8\linewidth]{monodromy1} \caption{} \label{fig:monodromy2} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.8\linewidth]{monodromy2} \caption{} \label{fig:monodromy3} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.8\linewidth]{monodromy3} \caption{} \label{fig:monodromy4} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.8\linewidth]{monodromy4} \caption{} \label{fig:monodromy5} \end{subfigure} \caption{Recovering the boundary braid (E) from a tri-plane diagram (A). Compare with Figure~\ref{fig:f8}.} \label{fig:monodromy} \end{figure}
\begin{example} \label{ex:monodromy} Figure~\ref{fig:monodromy2} shows the diagram $\circ\widehat\mathbb P$ corresponding to the tri-plane diagram in Figure~\ref{fig:monodromy1}. (This tri-plane diagram corresponds to the Seifert surface for the figure-8 knot; see Figure~\ref{fig:f8} for more details.) The two black dots represent the braid axis, and each arc connecting these dots corresponds to a disk page of the braid axis. As described in the proof of Proposition~\ref{prop:tri-plane_braid}, the sequence of Figures~\ref{fig:monodromy2}--\ref{fig:monodromy5} can be thought of as describing the cross-section of the bridge trisected surface with concentric shells in $B^4$, starting with the boundary of a regular neighborhood of the spine of the trisection of $B^4$ and terminating in the boundary of $B^4$. Moving from Figure~\ref{fig:monodromy2} to Figure~\ref{fig:monodromy3}, the cross-section changes only by isotopy, revealing clearly the presence of two unknotted, type (2) components. In the transition to Figure~\ref{fig:monodromy4}, these components cap off and disappear. In the transition to Figure~\ref{fig:monodromy5}, the flat structure is forgotten as we deperturb.
The end result is the boundary of the surface, described by a braid. \end{example}
For more examples, see Figures~\ref{fig:square} and~\ref{fig:mono}, which were discussed in Examples~\ref{ex:square} and~\ref{ex:mono}, respectively.
\section{Shadow diagrams} \label{sec:shadow}
Consider a $(g,b;\bold p,\bold f,\bold v)$--tangle $(H,\mathcal T)$. Let $\Delta$ be a bridge disk system for $\mathcal T$. We now fix some necessary notation. \begin{itemize} \item Let $\Sigma = \partial_+H$. \item Let $\alpha\subset\Sigma$ be a defining set of curves for $H$, disjoint from $\Delta$. \item Let $\frak a$ denote a collection of neatly embedded arcs, disjoint from $\Delta$ and $\alpha$, such that surgering $\Sigma$ along $\alpha$ and $\frak a$ results in a disjoint union of disks. We assume $|\frak a|$ is minimized. \item Let $\mathcal T^*$ denote the shadows of the flat strands of $\mathcal T$ -- i.e., those coming from the bridge semi-disks. \item Let $\mathcal A^*$ denote the shadows for the vertical strands -- i.e., those coming from the bridge triangles. \item Let $\bold x = \mathcal T\cap\Sigma$. \end{itemize} The tuple $(\Sigma,\alpha,\mathcal T^*,\bold x)$ is called a \emph{tangle shadow} for the pair $(H,\mathcal T)$. The tuple $(\Sigma,\alpha,\frak a,\mathcal T^*,\mathcal A^*,\bold x)$ is called an \emph{augmented tangle shadow} for the pair $(H,\mathcal T)$. We will say that an augmented tangle shadow is an \emph{augmenting} of the underlying tangle shadow. Figure~\ref{fig:std_diag} shows a pair of augmented tangle shadows: One is found by considering the red, pink, and orange arcs and curves, while the other is found by considering the dark blue, light blue, and orange arcs and curves. Note that we consider tangle shadows up to isotopy rel-$\partial$.
\begin{figure}[h!] \centering \includegraphics[width=.8\linewidth]{std_diag} \caption{A pair of (augmented) tangle shadows that, taken together, give a standard (augmented) splitting shadow. The relevant parameters for each handlebody are $g=6$, $n=2$, $m=3$, $\bold p=(0,1)$, $\bold f = (2,1)$. The relevant parameters for each tangle are $b=16$ and $\bold v=(0,2,1)$. The arcs and curves of the $\alpha_i$ and $\mathcal T_i^*$ for $i=1,2$ are shown in red and blue, respectively, while the arcs of the $\mathcal A_i^*$ are shown in pink and light blue, respectively, and the arcs of $\frak a_1=\frak a_2$ are shown in orange.} \label{fig:std_diag} \end{figure}
\begin{lemma} \label{lem:tangle_shadow} A tangle shadow determines a tangle $(H,\mathcal T)$. \end{lemma}
\begin{proof} Given a tangle shadow $(\Sigma,\alpha,\mathcal T^*,\bold x)$, let $H$ be the lensed cobordism obtained from the spread $\Sigma\times[0,1]$ by attaching 3--dimensional 2--handles along the curves $\alpha\times\{1\}$. Let $\mathcal T\subset H$ be obtained by perturbing the interiors of the shadows $\mathcal T^*\times\{0\}$ into the interior of $H$ to obtain the flat strands of $\mathcal T$, and extending the marked points $\bold x$ to vertical arcs $\bold x\times[0,1]$ using the product structure of the spread to obtain the vertical strands of $\mathcal T$. This completes the proof. \end{proof}
As a matter of convention, we have assumed without loss of generality that the curves and arcs of $\alpha\cup\frak a\cup\mathcal T^*\cup\mathcal A^*$ are all pairwise disjoint; it is not strictly necessary, for example, to assume $\alpha\cap\mathcal T^*=\emptyset$, but this can always be achieved.
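In symbols, the construction in the proof of Lemma~\ref{lem:tangle_shadow} can be summarized schematically as follows; this is only a sketch, which suppresses the lensed structure and the perturbation of the shadow interiors, and which (as above) takes the spread to be the product $\Sigma\times[0,1]$:
$$H \;=\; \big(\Sigma\times[0,1]\big)\cup\big(\text{3--dimensional 2--handles along }\alpha\times\{1\}\big), \qquad \mathcal T \;=\; \big(\mathcal T^*\times\{0\}\big)\cup\big(\bold x\times[0,1]\big),$$
with the pushed-in copy of $\mathcal T^*\times\{0\}$ providing the flat strands and $\bold x\times[0,1]$ the vertical strands.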
Given a tangle shadow $(\Sigma,\alpha,\mathcal T^*,\bold x)$, we recall two standard moves: Let $\alpha_1$ and $\alpha_2$ be two curves in $\alpha$, and let $\omega$ be an embedded arc in $\Sigma$ connecting $\alpha_1$ to $\alpha_2$ such that $\text{Int}(\omega)\cap(\alpha\cup\mathcal T^*\cup\bold x)=\emptyset$. Then $N=\nu(\alpha_1\cup\omega\cup\alpha_2)$ is a pair of pants. Let $\alpha_1'$ be the boundary component of $N$ not parallel to $\alpha_1$ or $\alpha_2$. Then $\alpha' = (\alpha\setminus\{\alpha_1\})\cup\{\alpha_1'\}$ is a new defining set of curves for $H$. We say that $\alpha'$ is obtained from $\alpha$ by a \emph{curve slide} of $\alpha_1$ over $\alpha_2$ along $\omega$.
Now let $\tau_1^*$ be an arc of $\mathcal T^*$ and let $\alpha_2$ be a curve in $\alpha$ (respectively, the boundary of a regular neighborhood of another arc $\tau_2^*$ of $\mathcal T^*$). Let $\omega$ be an embedded arc in $\Sigma$ connecting $\tau_1^*$ to $\alpha_2$ such that $\text{Int}(\omega)\cap(\alpha\cup\mathcal T^*\cup\bold x)=\emptyset$. Let $(\tau_1^*)'$ denote the arc obtained by banding $\tau_1^*$ to $\alpha_2$ using the surface-framed neighborhood of $\omega$. Then, $(\mathcal T^*)' = (\mathcal T^*\setminus\tau_1^*)\cup(\tau_1^*)'$ is a new collection of shadows for the flat strands of $\mathcal T$. We say that $(\mathcal T^*)'$ is obtained from $\mathcal T^*$ by an \emph{arc slide} of $\tau_1^*$ over $\alpha_2$ (respectively, $\tau_2^*$) along $\omega$. Two tangle shadows for $(H,\mathcal T)$ are called \emph{slide-equivalent} if they can be related by a sequence of curve slides and arc slides.
Given an augmented tangle shadow $(\Sigma,\alpha,\frak a,\mathcal T^*,\mathcal A^*,\bold x)$, we have further moves. Similar to above, we have arc slide moves that allow one to slide arcs of $\frak a$ or $\mathcal A^*$ over arcs and curves of $\alpha$ and $\mathcal T^*$. Note that we do not allow an arc or curve of any type to slide over an arc of $\frak a$ or $\mathcal A^*$. Two (augmented) tangle shadows that are related by a sequence of these two types of moves are called \emph{slide-equivalent}. The following is a modest generalization of a foundational result of Johannson~\cite{Joh_95_Topology-and-combinatorics-of-3-manifolds}, and follows from a standard argument.
\begin{proposition} \label{prop:Joh} Two (augmented) tangle shadows for a given tangle are slide-equivalent. \end{proposition}
A tuple $(\Sigma,\alpha_1,\alpha_2,\mathcal T_1^*,\mathcal T_2^*,\bold x)$ is called a \emph{splitting shadow} if each tuple $(\Sigma,\alpha_i,\mathcal T_i^*,\bold x)$ is a tangle shadow. A splitting shadow gives rise to a bridge splitting of a pair $(M,K)$ in the same way that a tangle shadow gives rise to a tangle. Recall the notion of a standard bridge splitting of $(M,K)$ from Subsection~\ref{subsec:Bridge}. If a splitting shadow corresponds to a standard bridge splitting, then the tangle shadows $(\Sigma,\alpha_i,\mathcal T_i^*, \bold x)$ are (respectively, for $i=1,2$) slide-equivalent to tangle shadows $(\Sigma,\alpha_i',(\mathcal T_i^*)', \bold x)$ such that $(\Sigma,\alpha_1',\alpha_2')$ is a standard Heegaard diagram (cf. Subsection~\ref{subsec:Heegaard}) and $(\mathcal T_1^*)'\cup(\mathcal T_2^*)'$ is a neatly embedded collection of polygonal arcs and curves such that the polygonal curves bound disjointly embedded disks. We call such a splitting shadow $(\Sigma,\alpha_1,\alpha_2,\mathcal T_1^*,\mathcal T_2^*,\bold x)$ \emph{standard}.
Figure~\ref{fig:std_diag} shows a standard splitting shadow (ignore the pink, light blue, and orange arcs for now). Two splitting shadows are called \emph{slide-equivalent} if the two pairs of corresponding tangle shadows are slide-equivalent.
\begin{definition} A \emph{$(g,\bold k, f,\bold c;\bold p, \bold f, \bold v)$--shadow diagram} is a tuple $(\Sigma, \alpha_1, \alpha_2, \alpha_3,\mathcal T_1^*,\mathcal T_2^*,\mathcal T_3^*,\bold x)$, such that the tuple $(\Sigma,\alpha_i,\alpha_{i+1},\mathcal T_i^*,\mathcal T_{i+1}^*,\bold x)$ is slide-equivalent to a standard splitting shadow for each $i\in\mathbb{Z}_3$. Two shadow diagrams are called \emph{slide-equivalent} if the three pairs of corresponding tangle shadows are slide-equivalent. \end{definition}
\begin{figure}[h!] \centering \includegraphics[width=.5\linewidth]{steve_shadow} \caption{A shadow diagram for the bridge trisection given in Figure~\ref{fig:steve}, which corresponds to a ribbon disk for the stevedore knot.} \label{fig:steve_shadow} \end{figure}
\begin{proposition} \label{prop:shadow} A $(g,\bold k, f,\bold c;\bold p, \bold f, \bold v)$--shadow diagram uniquely determines the spine of a $(g,\bold k, f,\bold c;\bold p, \bold f, \bold v)$--bridge trisection. Any two shadow diagrams for a fixed bridge trisection are slide-equivalent. \end{proposition}
\begin{proof} First, note that a shadow diagram determines the spine of a bridge trisection. This follows immediately from the definition of a shadow diagram and Lemma~\ref{lem:tangle_shadow}. The first claim follows from the fact that a bridge trisection is determined up to diffeomorphism by its spine. The second claim follows from Proposition~\ref{prop:Joh}. \end{proof}
Since bridge trisections are determined by their spines (Corollary~\ref{coro:spine}), we find that any surface $(X,\mathcal F)$ can be described by a shadow diagram.
\begin{corollary} \label{coro:shadow_describe} Let $X$ be a smooth, orientable, compact, connected four-manifold, and let $\mathcal F$ be a neatly embedded surface in $X$. Then, $(X,\mathcal F)$ can be described by a shadow diagram. \end{corollary}
\subsection{Recovering the boundary braid from a shadow diagram} \label{subsec:shadow_braid} \ We now see how to recover the information about the boundary of a bridge trisected pair $(X,\mathcal F)$. By augmenting a shadow diagram for the bridge trisection, we will recover this information in the form of an abstract open-book braiding, as defined in Subsection~\ref{subsec:OBD}. What follows is based on the monodromy algorithm described by Castro, Gay, and Pinz\'on-Caicedo in~\cite{CasGayPin_18_Diagrams-for-relative-trisections} and is closely related to the notion of an arced relative trisection diagram, as described in~\cite{GayMei_18_Doubly-pointed}.
To start, we return our attention to pairs of augmented tangle shadows. A tuple $(\Sigma,\alpha_1,\alpha_2,\frak a, \mathcal T_1^*,\mathcal T_2^*,\mathcal A_1^*,\mathcal A_2^*,\bold x)$ is called a \emph{standard augmented splitting shadow} if \begin{itemize} \item For each $i=1,2$, $(\Sigma,\alpha_i,\frak a, \mathcal T_i^*,\mathcal A_i^*,\bold x)$ is an augmented tangle shadow; \item $(\Sigma,\alpha_1,\alpha_2, \mathcal T_1^*,\mathcal T_2^*,\bold x)$ is a standard splitting shadow; \item The components of $\mathcal T_1^*\cup\mathcal T_2^*\cup\mathcal A_1^*\cup\mathcal A_2^*$ intersecting $\partial\Sigma$ bound disjointly embedded polygonal disks, each of which intersects $\partial \Sigma$ in a single point.
\end{itemize} See Figure~\ref{fig:std_diag} for an example of a standard augmented splitting shadow.
\begin{definition}[\textbf{\emph{augmented shadow diagram}}] An \emph{augmented $(g,\bold k, f,\bold c;\bold p, \bold f, \bold v)$--shadow diagram} is a tuple $(\Sigma, \alpha_1, \alpha_2, \alpha_3,\frak a_1, \mathcal T_1^*,\mathcal T_2^*,\mathcal T_3^*,\mathcal A_1^*,\bold x)$, such that the tuple $(\Sigma, \alpha_1, \alpha_2, \alpha_3,\mathcal T_1^*,\mathcal T_2^*,\mathcal T_3^*,\bold x)$ is a shadow diagram, and $(\Sigma, \alpha_1, \frak a_1,\mathcal T_1^*,\mathcal A_1^*,\bold x)$ is an augmented tangle shadow.
A \emph{fully augmented $(g,\bold k, f,\bold c;\bold p, \bold f, \bold v)$--shadow diagram} is a tuple $$\left(\Sigma, \alpha_1, \alpha_2, \alpha_3,\frak a_1,\frak a_2,\frak a_3,\frak a_4, \mathcal T_1^*,\mathcal T_2^*,\mathcal T_3^*,\mathcal A_1^*,\mathcal A_2^*,\mathcal A_3^*,\mathcal A_4^*,\bold x\right),$$ such that the tuple $(\Sigma, \alpha_1, \alpha_2, \alpha_3,\mathcal T_1^*,\mathcal T_2^*,\mathcal T_3^*,\bold x)$ is a shadow diagram, the tuples $(\Sigma, \alpha_1, \frak a_1,\mathcal T_1^*,\mathcal A_1^*,\bold x)$ and $(\Sigma, \alpha_1, \frak a_4,\mathcal T_1^*,\mathcal A_4^*,\bold x)$ are augmented tangle shadows for the same tangle, and the following hold: \begin{enumerate} \item For each $i=1,2$, the diagram $$\left(\Sigma, \alpha_i,\alpha_{i+1}, \frak a_i, \frak a_{i+1}, \mathcal T_i^*, \mathcal T_{i+1}^*, \mathcal A_i^*, \mathcal A_{i+1}^*, \bold x\right)$$ is slide-equivalent to a diagram $$\left(\Sigma, \alpha_i', \alpha_{i+1}', \frak a_i', \frak a_{i+1}', (\mathcal T_i^*)', (\mathcal T_{i+1}^*)', (\mathcal A_i^*)', (\mathcal A_{i+1}^*)', \bold x\right)$$ that is a standard augmented splitting shadow, with $\frak a_i'=\frak a_{i+1}'$. \item The diagram $$\left(\Sigma, \alpha_3,\alpha_1, \frak a_3, \frak a_4, \mathcal T_3^*, \mathcal T_1^*, \mathcal A_3^*, \mathcal A_4^*, \bold x\right)$$ is slide-equivalent to a diagram $$\left(\Sigma, \alpha_3'', \alpha_1'', \frak a_3'', \frak a_4'', (\mathcal T_3^*)'', (\mathcal T_1^*)'', (\mathcal A_3^*)'', (\mathcal A_4^*)'', \bold x\right)$$ that is a standard augmented splitting shadow and satisfies $\frak a_3''=\frak a_4''$. \end{enumerate} We say that an augmented shadow diagram is an \emph{augmenting} of the underlying shadow diagram and that a fully augmented shadow diagram is a \emph{full-augmenting} of the underlying (augmented) shadow diagram. \end{definition}
We now describe how the data of an augmented shadow diagram allows us to recover the boundary open-book braiding $(Y,\mathcal L) = \partial(X,\mathcal F)$ of the corresponding bridge trisected pair $(X,\mathcal F)$. First, we note the following crucial connection between augmented shadow diagrams and fully augmented shadow diagrams.
\begin{proposition} \label{prop:aug} There is an algorithmic way to complete an augmented shadow diagram to a fully augmented shadow diagram, which is unique up to slide-equivalence. \end{proposition}
\begin{proof} Start with an augmented shadow diagram $(\Sigma, \alpha_1, \alpha_2, \alpha_3,\frak a_1, \mathcal T_1^*,\mathcal T_2^*,\mathcal T_3^*,\mathcal A_1^*,\bold x)$. Restrict attention to the splitting shadow $(\Sigma,\alpha_1,\alpha_2,\mathcal T_1^*,\mathcal T_2^*,\bold x)$. By definition, this diagram is slide-equivalent to a standard splitting shadow $(\Sigma,\alpha_1',\alpha_2',(\mathcal T_1^*)',(\mathcal T_2^*)',\bold x)$. Choose a sequence of arc and curve slides realizing this equivalence.
Whenever a slide involving the arcs and curves of $\alpha_1\cup\mathcal T_1^*$ would be performed along an arc $\omega$ that intersects $\frak a_1\cup\mathcal A_1^*$, first slide the offending arcs of $\frak a_1\cup\mathcal A_1^*$ out of the way using the same slide-arc $\omega$. Now the splitting shadow has been standardized, but the arcs of $\frak a_1\cup\mathcal A_1^*$ may intersect the curves and arcs of $\alpha_2'\cup(\mathcal T_2^*)'$. Intersections of $\frak a_1\cup\mathcal A_1^*$ with the curves of $\alpha_2'$ can be removed via slides over the curves of $\alpha_1'$ dual to curves of~$\alpha_2'$. Recall that the closed components of $(\mathcal T_1^*)'\cup(\mathcal T_2^*)'$ are embedded polygonal curves, while the non-closed components are embedded polygonal arcs. Moreover, the arcs of $\mathcal A_1^*$ connect one end of each polygonal arc to $\partial\Sigma$. Intersections of (the interior of) $\frak a_1\cup\mathcal A_1^*$ with the polygonal curves of $(\mathcal T_1^*)'\cup(\mathcal T_2^*)'$ can be removed via slides over the arcs of $(\mathcal T_1^*)'$ included in these polygonal curves. Intersections of (the interior of) $\frak a_1\cup\mathcal A_1^*$ with the polygonal arcs of $(\mathcal T_1^*)'\cup(\mathcal T_2^*)'$ can be removed via slides over the arcs of $(\mathcal T_1^*)'$ included in these polygonal arcs, provided one is careful to slide towards the end of the polygonal arc that is not attached to~$\mathcal A_1^*$. Once the described slides have all been carried out, the collections $\frak a_1$ and $\mathcal A_1^*$ of arcs will have been transformed into new collections, which we denote by $\frak a_1'$ and $(\mathcal A_1^*)'$, respectively. The key fact is that $\frak a_1'$ and $(\mathcal A_1^*)'$ are disjoint (in their interiors) from the arcs and curves of $\alpha_2'\cup(\mathcal T_2^*)'$. Set $\frak a_2=\frak a_1'$, and note that $\frak a_2$ has the desired property of being (vacuously) slide-equivalent to $\frak a_2' = \frak a_1'$. To define $\mathcal A_2^*$, note that at this point the union of the polygonal arcs of $(\mathcal T_1^*)'\cup(\mathcal T_2^*)'$ with $(\mathcal A_1^*)'$ is a collection of embedded `augmented' polygonal arcs, each of which intersects $\partial\Sigma$ in a single point. Let $\mathcal A_2^*$ be the collection of arcs obtained by pushing each augmented polygonal arc off itself slightly, while preserving its endpoint that lies in the interior of $\Sigma$. See Figure~\ref{fig:push-off}. This can be thought of as sliding the endpoint of $(\mathcal A_1^*)'$ that lies in the interior of $\Sigma$ along the polygonal arc of $(\mathcal T_1^*)'\cup(\mathcal T_2^*)'$ that it intersects until it reaches the end. Having carried out these steps, we have that $(\Sigma, \alpha_1', \alpha_2',\frak a_1', \frak a_2, (\mathcal T_1^*)',(\mathcal T_2^*)',(\mathcal A_1^*)',\mathcal A_2^*,\bold x)$ is a standard augmented splitting shadow, as desired. \begin{figure}[h!] \centering \includegraphics[width=.25\linewidth]{push-off} \caption{Obtaining $\mathcal A_2^*$ from $(\mathcal A_1^*)'$.} \label{fig:push-off} \end{figure} Next, we repeat the process outlined in the first two paragraphs, starting this time with the splitting shadow $(\Sigma,\alpha_2',\alpha_3,(\mathcal T_2^*)',\mathcal T_3^*,\bold x)$: Standardize the splitting shadow, and include the arcs of $\frak a_2\cup\mathcal A_2^*$ in the slides when necessary. Perform additional slides to obtain the new collections of arcs $\frak a_2'$ and $(\mathcal A_2^*)'$ whose interiors are disjoint from all other arcs and curves. 
Let $\frak a_3 = \frak a_2'$, and obtain $\mathcal A_3^*$ from $(\mathcal A_2^*)'$ in the same way as before, so that the new diagram $(\Sigma, \alpha_2'', \alpha_3',\frak a_2', \frak a_3, (\mathcal T_2^*)'',(\mathcal T_3^*)',(\mathcal A_2^*)',\mathcal A_3^*,\bold x)$ is a standard augmented splitting shadow, as desired. Note that $(\Sigma,\alpha_2'',(\mathcal T_2^*)'',\bold x)$ is slide-equivalent to the original diagram $(\Sigma,\alpha_2,\mathcal T_2^*,\bold x)$. Finally, repeat the process once more, starting with the splitting shadow $(\Sigma,\alpha_3',\alpha_1',(\mathcal T_3^*)',(\mathcal T_1^*)',\bold x)$ and performing slides until we can obtain new collections $\frak a_4$ and $\mathcal A_4^*$ from the modified collections $\frak a_3'$ and $(\mathcal A_3^*)'$, as before. At this point, there is a minor wrinkle. We are not finished once we set $\frak a_4 = \frak a_3'$ and obtain $\mathcal A_4^*$ from $(\mathcal A_3^*)'$ as before. The reason is that these choices for $\frak a_4$ and $\mathcal A_4^*$ might not be compatible with the original tangle shadow $(\Sigma, \alpha_1,\mathcal T_1^*,\bold x)$; rather, these choices are compatible with the slide-equivalent tangle shadow $(\Sigma, \alpha_1'', (\mathcal T_1^*)'',\bold x)$. To remedy this issue, we perform the slides to change this latter tangle shadow to the former one, and we carry $\frak a_4$ and $\mathcal A_4^*$ with us along the way, sliding them over arcs and curves when necessary. In an abuse of notation, we denote the results of this transformation by $\frak a_4$ and $\mathcal A_4^*$. In summary, we have produced the collections of arcs $\frak a_2$, $\frak a_3$, $\frak a_4$, $\mathcal A_2^*$, $\mathcal A_3^*$, and $\mathcal A_4^*$ required to fully augment the original augmented shadow diagram. \end{proof} Following Castro, Gay, and Pinz\'on-Caicedo, we refer to the above algorithm as the \emph{monodromy algorithm}. What follows is a generalization of the discussion of~\cite[Section~3]{GayMei_18_Doubly-pointed}; see also~\cite[Section~4]{CasGayPin_18_Diagrams-for-relative-trisections} and~\cite[Section~2]{CasOzb_19_Trisections-of-4-manifolds-via-Lefschetz}. Given an augmented shadow diagram $\mathfrak D=(\Sigma, \alpha_1, \alpha_2, \alpha_3,\frak a_1, \mathcal T_1^*,\mathcal T_2^*,\mathcal T_3^*,\mathcal A_1^*,\bold x)$, let $(H,\mathcal T)$ denote the tangle determined by the tangle shadow $(\Sigma,\alpha_1,\mathcal T_1^*,\bold x)$. Let $(P,\bold y)_\mathfrak D = \partial_-(H,\mathcal T)$. We call $(P,\bold y)_\mathfrak D$ the \emph{page} of the shadow diagram. Fix an identification $\text{Id}\colon(P,\bold y)_\mathfrak D\to(\Sigma_{\bold p,\bold f},\bold x_{\bold p,\bold f})$. We use the standard Morse structure on $H$ to consider $\frak a_1$ and $\mathcal A_1^*$ as lying in $P$. Consider the arcs $\frak a=\text{Id}(\frak a_1)$, which cut the standard surface into a collection of disks, and the arcs $\mathcal A^*=\text{Id}(\mathcal A^*_1)$, which connect the marked points to the boundary in the standard pair. Let $\mathfrak D^+=\left(\Sigma, \alpha_1, \alpha_2, \alpha_3,\frak a_1,\frak a_2,\frak a_3,\frak a_4, \mathcal T_1^*,\mathcal T_2^*,\mathcal T_3^*,\mathcal A_1^*,\mathcal A_2^*,\mathcal A_3^*,\mathcal A_4^*,\bold x\right)$ be a full-augmenting of $\mathfrak D$. We consider the arcs $\frak a_4$ and $\mathcal A^*_4$ as lying in $P$, as well. Consider the arcs $\frak a'=\text{Id}(\frak a_4)$ and the arcs $(\mathcal A^*)'=\text{Id}(\mathcal A^*_4)$. 
Let $\phi_\mathfrak D$ be the automorphism of $(\Sigma_{\bold p,\bold f},\bold x_{\bold p,\bold f})$ satisfying $\phi_\mathfrak D(\frak a\cup\mathcal A^*) = \frak a'\cup(\mathcal A^*)'$, noting that $\phi_\mathfrak D$ is unique up to isotopy. We call $\phi_\mathfrak D$ the \emph{monodromy} of the shadow diagram. It is straight-forward to check that the monodromy $\phi_\mathfrak D$ of an augmented shadow diagram $\mathfrak D$ depends only on the underlying shadow diagram (not on the choice of augmentation). The relevance of $\phi_\mathfrak D$ is given in the following proposition; we refer the reader to Subsection~\ref{subsec:OBD} for relevant notation and terminology regarding open-book decompositions and braidings. The following is a generalization of~\cite[Theorem~5]{CasGayPin_18_Diagrams-for-relative-trisections} and~\cite[Lemma~3.1]{GayMei_18_Doubly-pointed}. \begin{proposition} \label{prop:monodromy} Suppose that $\mathfrak D$ is a shadow diagram for a bridge trisection $\mathbb T$ of a pair $(X,\mathcal F)$. Let $\phi_\mathfrak D$ denote the monodromy of the shadow diagram, and let $(Y_{\phi_\mathfrak D},\mathcal L_{\phi_\mathfrak D})$ denote the model open-book braiding corresponding to the abstract open-book braiding $(\Sigma_{\bold p,\bold f},\bold x_{\bold p,\bold f},\phi_\mathfrak D)$. Then, there is a canonical (up to isotopy) diffeomorphism $$\psi_\mathfrak D\colon \partial(X,\mathcal F)\to(Y_{\phi_\mathfrak D},\mathcal L_{\phi_\mathfrak D}).$$ \end{proposition} \begin{proof} Let $(H_1,\mathcal T_1)\cup(H_2,\mathcal T_2)\cup(H_3,\mathcal T_3)$ denote the spine of the bridge trisection determined by the diagram~$\mathfrak D$; cf. Proposition~\ref{prop:spine} and Proposition~\ref{prop:shadow}. Fix an identification $\psi\colon(P_1,\bold y_1)\to(\Sigma_{\bold p,\bold f},\bold x_{\bold p,\bold f})$ and regard this latter pair as a page $(P,\bold y)\times\{0\}$ in the model open-book braiding $(Y_{\phi_\mathfrak D},\mathcal L_{\phi_\mathfrak D})$, which we think of as $(P,\bold y)\times_{\phi_\mathfrak D}S^1$. Choose an augmenting of $\mathfrak D$ by picking arcs $\frak a_1$ and $\mathcal A_1^*$, which we consider as having been isotoped vertically to lie in $(P_1,\bold y_1)$. Let $\frak a\times\{0\}$ and $\mathcal A^*\times\{0\}$ denote the arcs on $(P,\bold y)\times\{0\}$ that are the images of $\frak a_1$ and $\mathcal A_1^*$ under $\psi$. Apply the monodromy algorithm of Proposition~\ref{prop:aug} to obtain a full-augmenting of $\mathfrak D$. Consider the arcs $\frak a_1'$, $(\mathcal A_1^*)'$, and $(\mathcal A_2^*)'$ coming from the standard augmented splitting diagram for $$(M_1,K_1)=(H_1,\mathcal T_1)\cup_{(\Sigma,\bold x)}\overline{(H_2,\mathcal T_2)},$$ noting that, regarded as arcs in $(P_1,\bold y_1)$, $\frak a_1$ and $\frak a_1'$ are isotopic rel-$\partial$, as are $\mathcal A_1^*$ and $(\mathcal A_1^*)'$. These arcs determine the identity map $\text{Id}_{(M_1,K_1,\Sigma)}$ described in Lemma~\ref{lem:BridgeDouble}. In particular, this gives a unique extension of $\psi$ to a diffeomorphism from the spread $(Y_1,\beta_1)$ in $\partial(X,\mathcal F)$ to the spread $(P,\bold y)\times[0,1/3]$ in $(Y_{\phi_\mathfrak D},\mathcal L_{\phi_\mathfrak D})$. 
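For bookkeeping, and under the natural assumption that the circle direction of the model open-book braiding is parametrized so that the three spreads appear in cyclic order, the identifications produced in this step and in the two analogous steps below take the schematic form
$$(Y_i,\beta_i)\;\cong\;(P,\bold y)\times\left[\tfrac{i-1}{3},\tfrac{i}{3}\right],\qquad i=1,2,3,$$
and these assemble into the map constructed next; this is only a summary of the construction, not an additional hypothesis. 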
Repeating the step described above ($i=1$) for $i=2$ and $i=3$ allows us to extend this diffeomorphism to a map $\psi_\mathfrak D$ whose domain is the entire boundary $$\partial(X,\mathcal F) = (Y_1,\beta_1)\cup(Y_2,\beta_2)\cup(Y_3,\beta_3)$$ and whose codomain is $(P,\bold y)\times[0,1]$, equipped with the identification $x\sim\phi'(x)$, where $\phi'$ must take the arcs $\frak a_4\cup\mathcal A_4^*$ to the arcs $\frak a_1\cup\mathcal A_1^*$, in order for $\psi_\mathfrak D$ to be continuous. However, this implies that $\phi'$ is isotopic rel-$\partial$ to $\phi_\mathfrak D$, by definition, and we have that $\psi_\mathfrak D$ respects the original identification space structure on $(Y_{\phi_\mathfrak D},\mathcal L_{\phi_\mathfrak D})$, hence is a diffeomorphism, as desired. \end{proof} \begin{figure}[h!] \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_1} \caption{} \label{fig:Mob_sh_1} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_2} \caption{} \label{fig:Mob_sh_2} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_3} \caption{} \label{fig:Mob_sh_3} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_4} \caption{} \label{fig:Mob_sh_4} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_5} \caption{} \label{fig:Mob_sh_5} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_6} \caption{} \label{fig:Mob_sh_6} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_7} \caption{} \label{fig:Mob_sh_7} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_8} \caption{} \label{fig:Mob_sh_8} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_9} \caption{} \label{fig:Mob_sh_9} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_10} \caption{} \label{fig:Mob_sh_10} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_11} \caption{} \label{fig:Mob_sh_11} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{Mob_sh_12} \caption{} \label{fig:Mob_sh_12} \end{subfigure} \caption{A shadow diagram (A), an augmented shadow diagram (B), and a fully augmented shadow diagram (C) for a bridge trisection for the M\"obius band bounded by the right-handed trefoil in $S^3$. (E)--(K) illustrate the process described by the monodromy algorithm of Proposition~\ref{prop:aug}, used to find the full-augmenting (C) of the augmented shadow diagram (B). We recover the braiding induced on the boundary of the bridge trisection by studying (L), which shows the arcs $\frak a$ and $\frak a'$ in the page $(P,\bold y)$.} \label{fig:Mob_sh} \end{figure} \begin{example} \label{ex:Mob_sh} \textbf{(M\"obius band for the trefoil)} Figure~\ref{fig:Mob_sh_1} shows a shadow diagram corresponding to the bridge trisection of the M\"obius band bounded by the right-handed trefoil in $S^3$ that was discussed in Example~\ref{ex:mono}. Since this is a $(2;0,2)$--bridge trisection, we have that $(P,\bold y) = \partial_-(H_1,\mathcal T_1)$ is a disk with two distinguished points in its interior. 
This pair is shown in Figure~\ref{fig:Mob_sh_4}, together with a pair of arcs that connect the points $\bold y$ to $\partial P$. Using the Morse function on $(H_1,\mathcal T_1)$, these arcs can be flowed rel-$\partial$ to lie in $\Sigma$, as shown in Figure~\ref{fig:Mob_sh_5}. In Figure~\ref{fig:Mob_sh_6}, the shadows for $(H_2,\mathcal T_2)$ have been added, making a splitting shadow for $(M_1,K_1)$, which is a geometric 2--braid in $D^2\times I$, one component of which is twice-perturbed, while the other is not perturbed. In Figure~\ref{fig:Mob_sh_7}, a slide of an arc of $\mathcal A^*_1$ has been performed to arrange that all arcs are disjoint in their interiors, and the arcs of $\mathcal A_2^*$ have been obtained, as described in the proof of Proposition~\ref{prop:aug}; this is an augmented splitting shadow for $(M_1,K_1)$. Figure~\ref{fig:Mob_sh_8} shows a splitting shadow for $(M_2,K_2)$, with $\mathcal A_2^*$ remembered, and since all arcs are disjoint in their interiors, the arcs of $\mathcal A_3^*$ have been derived. Figure~\ref{fig:Mob_sh_9} shows a splitting shadow for $(M_3,K_3)$, with the arcs of $\mathcal A_3^*$ remembered, and Figure~\ref{fig:Mob_sh_10} is obtained from this diagram by arc slides of arcs from $\mathcal T_3^*\cup\mathcal A_3^*$, before $\mathcal A_4^*$ is obtained. In Figure~\ref{fig:Mob_sh_11}, the arcs of $\mathcal A_1^*$ and $\mathcal A_4^*$ are shown with the arcs of $\mathcal T_1^*$ in $\Sigma$. Figure~\ref{fig:Mob_sh_12} shows the result of flowing $\mathcal A_1^*\cup\mathcal A_4^*$ up to the page $(P,\bold y)$. \begin{figure}[h!] \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_1} \caption{} \label{fig:tref_disk_1} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_2} \caption{} \label{fig:tref_disk_2} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_3} \caption{} \label{fig:tref_disk_3} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_4} \caption{} \label{fig:tref_disk_4} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_5} \caption{} \label{fig:tref_disk_5} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_6} \caption{} \label{fig:tref_disk_6} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_7} \caption{} \label{fig:tref_disk_7} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_8} \caption{} \label{fig:tref_disk_8} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_9} \caption{} \label{fig:tref_disk_9} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_10} \caption{} \label{fig:tref_disk_10} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_11} \caption{} \label{fig:tref_disk_11} \end{subfigure}% \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_12} \caption{} \label{fig:tref_disk_12} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \includegraphics[width=.9\linewidth]{tref_disk_13} \caption{} \label{fig:tref_disk_13}% \end{subfigure} \caption{A shadow diagram (A), an augmented shadow 
diagram (B), and a fully augmented shadow diagram (C) for a bridge trisection for the disk bounded by the right-handed trefoil in $(\mathbb{CP}^2)^\circ$. (D)--(M) illustrate the process described by the monodromy algorithm of Proposition~\ref{prop:aug}, used to find a full-augmenting of a shadow diagram and recover the braiding induced on the boundary of the bridge trisection.} \label{fig:tref_disk} \end{figure} Figure~\ref{fig:Mob_sh_12} allows us to see that the braiding induced on the boundary of the bridge trisection is diffeomorphic to the abstract open-book $(P,\bold y, \tau^3)$, where $P$ is a disk, $\bold y$ is two points, and $\tau$ is a right-handed Dehn twist about a curve parallel to $\partial P$ in $P\setminus\nu(\bold y)$. This derivation is a shadow diagram version of the calculation of this braiding given in Example~\ref{ex:mono} and Figure~\ref{fig:mono}. \end{example} \begin{example} \label{ex:tref_disk} \textbf{(Disk for the trefoil in $(\mathbb{CP}^2)^\circ$)} Figure~\ref{fig:tref_disk_1} shows a shadow diagram corresponding to a bridge trisection of a disk bounded by the right-handed trefoil in $(\mathbb{CP}^2)^\circ$, the result of removing a neighborhood of a point from $\mathbb{CP}^2$. The two circles represent the feet of a handle for the surface $\Sigma$ and are identified via vertical reflection. If one forgets the bridge points $\bold x$ and all shadow arcs, one obtains a $(1,0;0,1)$--trisection diagram for this four-manifold. The bridge trisection itself is of type $(2,(0,1,0);2)$; the union of the blue and green shadows includes a bigon. As in the previous example, we have that $(P,\bold y) = \partial_-(H_1,\mathcal T_1)$ is a disk with two distinguished points in its interior. This pair is shown in Figure~\ref{fig:tref_disk_4}, together with a pair of arcs that connect the points $\bold y$ to $\partial P$. Using the Morse function on $(H_1,\mathcal T_1)$, these arcs can be flowed rel-$\partial$ to lie in $\Sigma$, as shown in Figure~\ref{fig:tref_disk_5}. In Figure~\ref{fig:tref_disk_6}, the shadows for $(H_2,\mathcal T_2)$ have been added, giving a splitting shadow for $(M_1,K_1)$, which is a geometric 2--braid in $D^2\times I$, one component of which is twice-perturbed with respect to the once-stabilized Heegaard splitting of this spread. In Figure~\ref{fig:tref_disk_7}, a number of arc slides of $\mathcal T_1^*\cup\mathcal A^*_1$ have been performed to arrange that all arcs and curves are disjoint in their interiors, save the standard curve pair $\alpha_1\cup\alpha_2$. From this standard splitting shadow, the arcs of $\mathcal A_2^*$ have been obtained, as described in the proof of Proposition~\ref{prop:aug}. Figure~\ref{fig:tref_disk_8} shows a splitting shadow for $(M_2,K_2)$, with $\mathcal A_2^*$ remembered. Figure~\ref{fig:tref_disk_9} shows the standard augmented splitting shadow resulting from a number of arc slides, together with the arcs of $\mathcal A_3^*$. Figure~\ref{fig:tref_disk_10} shows a splitting shadow for $(M_3,K_3)$, with the arcs of $\mathcal A_3^*$ remembered, and Figure~\ref{fig:tref_disk_11} shows a slide-equivalent standard splitting shadow, with $\mathcal A_\circ^*$ derived. In Figure~\ref{fig:tref_disk_12}, the arcs of $\mathcal A_1^*$ and $\mathcal A_\circ^*$ are shown with the arcs and curves of the original tangle shadow for $(H_1,\mathcal T_1)$ in $\Sigma$. Figure~\ref{fig:tref_disk_13} shows the result of flowing $\mathcal A_1^*\cup\mathcal A_\circ^*$ up to the page $(P,\bold y)$. 
Figure~\ref{fig:tref_disk_13} allows us to see that the braiding induced on the boundary of the bridge trisection is diffeomorphic to the abstract open-book $(P,\bold y, \tau^3)$, where $P$ is a disk, $\bold y$ is two points, and $\tau$ is a right-handed Dehn twist about a curve parallel to $\partial P$ in $P\setminus\nu(\bold y)$. This proves that this bridge trisection corresponds to a surface bounded by the right-handed trefoil in $(\mathbb{CP}^2)^\circ$. From the bridge trisection parameters, we conclude that the surface is a disk, since it has Euler characteristic one and is connected. \begin{figure}[h!] \centering \includegraphics[width=.5\linewidth]{genus_one_ex} \caption{A three-dimensional rendering of the shadow diagram in Figure~\ref{fig:tref_disk_1} corresponding to the disk bounded by the right-handed trefoil in $(\mathbb{CP}^2)^\circ$.} \label{fig:genus_one_ex} \end{figure} A three-dimensional rendering for this example is given in Figure~\ref{fig:genus_one_ex}. The ambient 3--manifold is $S^3 = \partial(\mathbb{CP}^2)^\circ$, equipped with the Heegaard-page structure coming from the compressionbody $H_{1,0,1}$. The right-handed trefoil is in 2--braid position, and perturbed twice with respect to the genus one Heegaard surface $\Sigma$. The closed curve shown in blue is the belt-sphere for the 2--handle that is attached to a 0--cell $B^4$ to build $(\mathbb{CP}^2)^\circ$. The curve lies on $\Sigma$ with surface-framing $-1$. This reflects the fact that $(\mathbb{CP}^2)^\circ$ can be thought of as being built from $\overline{S^3}\times[-1,0]$ by attaching a $(+1)$--framed 2--handle along the corresponding curve in the mirror manifold $\overline{S^3}\times\{-1\}$, before capping off with a 0--handle below. A single band is shown for the boundary knot, but this band is a helper-band in the sense of Remarks~\ref{rmk:helpers} and~\ref{rmk:helpers2} and Subsection~\ref{subsec:braiding_presentations} more generally. In fact, relative to the Morse function on $(\mathbb{CP}^2)^\circ$, the disk bounded by the trefoil can be (and has been) assumed to have no saddle points, just a single minimum. However, the Morse function on $(\mathbb{CP}^2)^\circ$ coming from the bridge trisection will require the disk to be built from a pair of vertical disks (since we require a 2--braid on the boundary), and the helper-band joins these disks together. Compare with the Morse-theoretic proof of Theorem~\ref{thm:general}. \end{example} \section{Gluing bridge trisected surfaces and shadow diagrams} \label{sec:gluing} In this section, we describe how to glue bridge trisected surfaces along portions of their boundary in a way that respects the bridge trisection structure. The gluing of trisections was first discussed by Castro~\cite{Cas_17_Trisecting-smooth-4--dimensional}, with further development given by Castro and Ozbagci~\cite{CasOzb_19_Trisections-of-4-manifolds-via-Lefschetz} and by the author and Gay~\cite{GayMei_18_Doubly-pointed}. We conclude this section with some examples of simple gluings of bridge trisected pairs with disconnected boundary, as well as a more complicated example involving the surfaces bounded by the right-handed trefoil discussed above. We refer the reader to Section~\ref{sec:shadow} for necessary concepts related to shadow diagrams. 
The development below generalizes previous work to the setting of bridge trisections for four-manifold pairs and is complicated by the fact that we allow the four-manifolds being glued to have multiple boundary components and for the gluings to involve proper submanifolds of these boundaries. To account for this, we will allow our gluing maps to be \emph{partial diffeomorphisms}, which means that they may be defined on proper subsets of their domain. This subset is called the \emph{domain of definition} of the map; the image of the domain of definition is called the \emph{range}, and may be a proper subset of the codomain. The domain of definition and range of our partial diffeomorphisms will always be closed submanifolds of the domain and codomain, respectively. Let $\mathbb T$ be a bridge trisection of a pair $(X,\mathcal F)$, and let $\mathfrak D$ be a shadow diagram for $\mathbb T$. Let $(P,\bold y) = \partial_-(H_1,\mathcal T_1)$, and let $\phi_\mathfrak D\colon(P,\bold y)\to(P,\bold y)$ be the monodromy automorphism determined by $\mathfrak D$ according to Proposition~\ref{prop:aug}. Let $\psi_\mathfrak D\colon\partial(X,\mathcal F)\to(Y_{\phi_\mathfrak D},\mathcal L_{\phi_\mathfrak D})$ be the diffeomorphism given by Proposition~\ref{prop:monodromy}, where $(Y_{\phi_\mathfrak D},\mathcal L_{\phi_\mathfrak D})$ is the model pair of the abstract open-book $(P,\bold y,\phi_\mathfrak D)$. We note that both $\phi_\mathfrak D$ and $\psi_\mathfrak D$ depend on the underlying bridge trisection $\mathbb T$, and are determined up to post-composing with an automorphism of $(P,\bold y)$. Thus, we might as well denote these maps by $\phi_\mathbb T$ and $\psi_\mathbb T$; we will adopt either decoration, depending on whether we wish to emphasize the shadow diagram or the underlying bridge trisection. We work in the generality of bridge trisected pairs with disconnected boundary, so we emphasize the decomposition $$(Y,\mathcal L) = (Y^1,\mathcal L^1)\sqcup\cdots\sqcup(Y^n,\mathcal L^n)$$ of $(Y,\mathcal L) = \partial(X,\mathcal F)$ into connected components of $Y$; for any connected component $Y^j$ of $Y$, we may have $\mathcal L^j$ disconnected -- i.e. a link. Thus, we have corresponding decompositions of the pairs $(P,\bold y)$, $(P_{\phi_\mathbb T},\bold y_{\phi_\mathbb T})$, and $(Y_{\phi_\mathbb T},\mathcal L_{\phi_\mathbb T})$, and of the maps $\phi_\mathbb T$ and $\psi_\mathbb T$. Our first result is that bridge trisections that induce diffeomorphic braidings on some portion of their boundaries can be glued along those boundaries to obtain a new bridge trisection. By a \emph{diffeomorphism of open-book braidings} we mean a diffeomorphism of three-manifold pairs that restricts to a diffeomorphism of pages (hence, commutes with the monodromies). \begin{proposition} \label{prop:glue_tri} Let $\mathbb T'$ and $\mathbb T''$ be bridge trisections for pairs $(X',\mathcal F')$ and $(X'',\mathcal F'')$. Suppose we have an orientation-reversing partial diffeomorphism of open-book braidings $\Psi\colon\partial(X',\mathcal F')\to\partial(X'',\mathcal F'')$. Then the pair $(X,\mathcal F) = (X',\mathcal F')\cup_\Psi(X'',\mathcal F'')$ inherits a canonical bridge trisection $\mathbb T = \mathbb T'\cup_\Psi\mathbb T''$. 
\end{proposition} \begin{proof} Let $(Y',\mathcal L')$ and $(Y'',\mathcal L'')$ denote the domain of definition and range of $\Psi$, respectively, noting that these are closed (possibly proper) submanifolds of $\partial(X',\mathcal F')$ and $\partial(X'',\mathcal F'')$, respectively. After potentially changing $\Psi$ by an isotopy through diffeomorphisms of open-book braidings, we can assume that $\Psi(P_i',\bold y_i') = (P_i'',\bold y_i'')$ for each $i\in\mathbb{Z}_3$. We will verify that gluing the various corresponding pieces of $\mathbb T'$ and $\mathbb T''$ together according to $\Psi$ results in a collection of pieces giving a bridge trisection of $(X,\mathcal F)$. Consider the restriction of $\Psi$ to the binding $B'$ of the open-book decomposition of $(Y',\mathcal L')$, recalling that $B' = \partial(\Sigma',\bold x')$ and $B'' =\Psi(B') = \partial(\Sigma'',\bold x'')$. Let $(\Sigma,\bold x) = (\Sigma',\bold x')\cup_\Psi(\Sigma'',\bold x'')$, which is simply the union of two surfaces with marked points and boundary along closed subsets of their respective boundaries, hence a new surface with marked points and (possibly empty) boundary. Consider the restriction of $\Psi$ to the pages $P_i'$ for each $i\in\mathbb{Z}_3$, recalling that $(P_i',\bold y_i') = \partial(H_i',\mathcal T_i')$ and $(P_i'',\bold y_i'') =\Psi(P_i',\bold y_i') = \partial(H_i'',\mathcal T_i'')$. Let $(H_i,\mathcal T_i) = (H_i',\mathcal T_i')\cup_{\Psi_{(P_i',\bold y_i')}}(H_i'',\mathcal T_i'')$, noting that $$\partial(H_i,\mathcal T_i) = (\Sigma,\bold x)\cup_B\left((\partial_-(H_i',\mathcal T_i')\setminus(P_i',\bold y'))\sqcup(\partial_-(H_i'',\mathcal T_i'')\setminus(P_i'',\bold y''))\right).$$ (A word of caution regarding notation: The fact that we are considering gluings along potentially strict subsets of the boundaries complicates the exposition notationally. For example, earlier in the paper, we would have written $(P_i',\bold y_i') = \partial_-(H_i',\mathcal T_i')$, but here we regard $(P_i',\bold y_i')\subset\partial_-(H_i',\mathcal T_i')$ as the portion of $\partial_-(H_i',\mathcal T_i')$ lying in the domain of definition.) For each $i\in\mathbb{Z}_3$, let $\frak a_i'$ be a neatly embedded collection of arcs in $P_i'\setminus\bold y_i'$ such that surgery along the arcs reduces $P_i'$ to a collection of disks with the same number of connected components as $P_i'$. Moreover, we require that $\frak a_i'$ and $\frak a_{i+1}'$ be isotopic rel-$\partial$ in $Y'\setminus\mathcal L'$ via an isotopy that is monotonic with respect to the open-book structure. Let $\frak a_i'' = \Psi(\frak a_i')$. For each $i\in\mathbb{Z}_3$, let $\mathcal A_i'$ be an embedded collection of arcs connecting the points of $\bold y_i'$ to $\partial P_i'$, and assume, as before, that $\mathcal A_i'$ and $\mathcal A_{i+1}'$ are isotopic via an isotopy that fixes $\mathcal A_i'\cap\partial P_i'$ and is monotonic with respect to the open-book-braiding structure; the free endpoints of $\mathcal A_i'$ will move along $\mathcal L'$. Let $\mathcal A_i'' = \Psi(\mathcal A_i')$. Using the Morse structure on $(H_i',\mathcal T_i')$, flow the arcs of $\frak a_i'$ and $\mathcal A_i'$ down to $\Sigma'$, and denote the results by $(\frak a_i^*)'$ and $(\mathcal A_i^*)'$, respectively. Let $E_i'$ and $T_i'$ denote the traces of the respective isotopies, noting that the $E_i'$ are compression disks for the $H_i'$, and that the $T_i'$ are bridge triangles for the vertical strands $\bold y_i'\times[0,1]$. 
Do the same for $\frak a_i''$ and $\mathcal A_i''$ to obtain $(\frak a_i^*)''$ and $(\mathcal A_i^*)''$ on $\Sigma''$, with corresponding traces $E_i''$ and $T_i''$. Let $D_i'$ and $D_i''$ be collections of neatly embedded disks in $H_i'$ and $H_i''$, respectively, such that surgery along $D_i'$ and $D_i''$ reduces $H_i'$ and $H_i''$, respectively, to spreads $\partial_-(H_i',\mathcal T_i')\times[0,1]$ and $\partial_-(H_i'',\mathcal T_i'')\times[0,1]$. For each connected component of $(P_i',\bold y')$, pick a disk of $D_i'$ adjacent to that component in the sense that one of the two scars resulting from surgery along the chosen disk lies in the corresponding component of $(P_i',\bold y')\times[0,1]$. (Equivalently, the chosen disk is the cocore of a 1--handle connecting the component of $(P_i',\bold y')\times[0,1]$ to another component of the spread obtained by surgery.) Let $F_i'\subset D_i'$ denote the chosen disks. Then, we claim that $$D_i = (D_i'\setminus F_i')\sqcup (E_i'\cup_\Psi E_i'')\sqcup D_i''$$ is a collection of compression disks in $H_i$ such that surgery along $D_i$ reduces $H_i$ to $$(\partial_-(H_i')\setminus P_i')\sqcup(\partial_-(H_i'')\setminus P_i'').$$ To see that this is the case, note that the result of surgering $H_i$ along $D_i\sqcup F_i'$ is precisely $$((\partial_-(H_i')\setminus P_i')\times[0,1]) \sqcup(\sqcup_{m'}D^2\times[0,1]) \sqcup((\partial_-(H_i'')\setminus P_i'')\times[0,1]),$$ where $m'$ is the common number of connected components of $Y_i'$, $P_i'$, and $F_i'$. The effect of removing the disks of $F_i'$ from this collection of compression disks is to attach 1--handles, one for each $D^2\times[0,1]$ in the above decomposition, connecting the $m'$ copies of $D^2\times[0,1]$ to the rest of the spread. It follows that $H_i$ is a compressionbody with $\partial_+H_i = \Sigma$ and $\partial_-(H_i) = (\partial_-(H_i')\setminus P_i')\sqcup(\partial_-(H_i'')\setminus P_i'')$, as desired. Moreover, let $\Delta_i'$ and $\Delta_i''$ be bridge disks for the flat strands of $\mathcal T_i'$ and $\mathcal T_i''$, respectively. Then, $$\Delta_i = \Delta_i'\sqcup(T_i'\cup_\Psi T_i'')\sqcup\Delta_i''$$ is a collection of bridge semi-disks and triangles for the strands of $\mathcal T_i'\cup_\Psi\mathcal T_i''$ in $H_i$. The key thing to note here is that the bridge triangles $T_i'$ for the vertical strands $\bold y_i'\times[0,1]$ glue to the corresponding bridge triangles $T_i''$ for the vertical strands of $\bold y_i''\times[0,1]$ along the identified arcs $\mathcal A_i'\cup_\Psi\mathcal A_i''$ to give bridge disks for the new flat strands $(\bold y_i'\times[0,1])\cup_\Psi(\bold y_i''\times[0,1])$. Finally, consider the restriction of $\Psi$ to the spreads $(Y_i',\beta_i')$ cobounded by $(P_i',\bold y_i')$ and $(P_{i+1}',\bold y_{i+1}')$ in $(Y',\mathcal L')$, recalling that $(Y_i',\beta_i') = (Z_i',\mathcal D_i')\cap\partial(X',\mathcal F')$, and noting that $\Psi(Y_i',\beta_i') = (Y_i'',\beta_i'')$. Let $(Z_i,\mathcal D_i) = (Z_i',\mathcal D_i')\cup_\Psi(Z_i'',\mathcal D_i'')$ for each $i\in\mathbb{Z}_3$. We claim that the fact that the $(Z_i,\mathcal D_i)$ are trivial disk-tangles follows easily from the detailed argument just given that the $(H_i,\mathcal T_i)$ are trivial tangles. The reason is that a trivial disk-tangle $(Z,\mathcal D)$ can be naturally viewed as the lensed product $(H,\mathcal T)\times[0,1]$ such that the decomposition of $\partial(H,\mathcal T) = (S,\bold x)\cup_{\partial S}(P,\bold y)$ gives rise to a bridge-braid structure on $\partial(Z,\mathcal D)$. 
Precisely, the lensed product $(H_{g,\bold p,\bold f},\mathcal T_{b,\bold v})\times[0,1]$ is $(Z_{g,k;\bold p,\bold f},\mathcal D_{c;\bold v})$, where $k = g+p+f-n$ and $n$ is the length of the partition $\bold p$. The structure on the boundary is that of a symmetric Heegaard double. Moreover, we have that $\partial_-(Z,\mathcal D) = \partial_-(H,\mathcal T)\times[0,1]$, so gluing two trivial disk-tangles along a portion of their negative boundaries is the same as gluing the corresponding trivial tangles (of which the trivial disk-tangles are lensed products) along the corresponding portions of their negative boundaries, then taking the product with the interval. Succinctly, the gluings along portions of the negative boundaries commute with the taking of the products with the interval. Therefore, the $(Z_i,\mathcal D_i)$ are trivial disk-tangles, as desired. It remains only to verify that $(Z_i,\mathcal D_i)\cap(Z_{i-1},\mathcal D_{i-1}) = (H_i,\mathcal T_i)$ and $(H_i,\mathcal T_i)\cap(H_{i+1},\mathcal T_{i+1}) = \Sigma$, but this is immediate. \end{proof} \begin{remark} \label{rmk:self_glue} It is interesting to note that Proposition~\ref{prop:glue_tri} holds in the case that $\mathbb T'=\mathbb T''$ and $\Psi$ is a (partial) \emph{self}-gluing! See Example~\ref{ex:torus1} below. \end{remark} Having established how to glue bridge trisections from the vantage point of bridge trisected pairs, we now turn our attention to understanding gluings diagrammatically. Suppose that $\mathbb T'$ and $\mathbb T''$ are bridge trisections of pairs $(X',\mathcal F')$ and $(X'',\mathcal F'')$ with augmented shadow diagrams $\mathfrak D'$ and $\mathfrak D''$, respectively. Let $f\colon\partial(\Sigma',\frak a_1',(\mathcal A_1^*)')\to\partial(\Sigma'',\frak a_1'',(\mathcal A_1^*)'')$ be an orientation-reversing \emph{partial} diffeomorphism. We call $\mathfrak D'$ and $\mathfrak D''$ \emph{gluing compatible} if there is an orientation-reversing \emph{partial} diffeomorphism $$\psi_f(\mathfrak D',\mathfrak D'')\colon(P_1',\bold y_1')\to(P_1'',\bold y_1'')$$ that extends $f$ and commutes with the monodromies of the diagrams -- i.e., $\psi_f(\mathfrak D',\mathfrak D'')\circ\phi_{\mathfrak D'} = \phi_{\mathfrak D''}\circ\psi_f(\mathfrak D',\mathfrak D'')$ -- where these compositions are defined. In this case, we call $f$ a \emph{compatible (partial) gluing}. The map $\psi_f(\mathfrak D',\mathfrak D'')$ determines an orientation-reversing (partial) diffeomorphism $$\Upsilon_f(\mathfrak D',\mathfrak D'')\colon (Y_{\phi_{\mathfrak D'}},\mathcal L_{\phi_{\mathfrak D'}})\to(Y_{\phi_{\mathfrak D''}},\mathcal L_{\phi_{\mathfrak D''}})$$ of abstract open-book braidings. So, we can define a (partial) gluing map $\Psi_f(\mathfrak D',\mathfrak D'')\colon\partial(X',\mathcal F')\to\partial(X'',\mathcal F'')$ of the bridge trisected pairs by $$\Psi_f(\mathfrak D',\mathfrak D'') = \psi_{\mathfrak D''}^{-1}\circ\Upsilon_f(\mathfrak D',\mathfrak D'')\circ\psi_{\mathfrak D'}.$$ Again, we are interested in partial boundary-gluings, so we reiterate that the above caveats regarding the domain and codomain apply to $\Psi_f(\mathfrak D',\mathfrak D'')$. Given this set-up, we can now describe how gluing shadow diagrams corresponds to gluing bridge trisected four-manifold pairs. 
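Before stating the result, we note for readability that the composition defining $\Psi_f(\mathfrak D',\mathfrak D'')$ above unwinds as the chain of maps
$$\partial(X',\mathcal F')\xrightarrow{\;\psi_{\mathfrak D'}\;}(Y_{\phi_{\mathfrak D'}},\mathcal L_{\phi_{\mathfrak D'}})\xrightarrow{\;\Upsilon_f(\mathfrak D',\mathfrak D'')\;}(Y_{\phi_{\mathfrak D''}},\mathcal L_{\phi_{\mathfrak D''}})\xrightarrow{\;\psi_{\mathfrak D''}^{-1}\;}\partial(X'',\mathcal F''),$$
with the standing caveat that each map is only defined on the relevant domain of definition; this is merely a restatement of the definition, not an additional hypothesis. 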
\begin{proposition} \label{prop:glue_diag} Suppose that $\mathbb T'$ and $\mathbb T''$ are bridge trisections of four-manifold pairs $(X',\mathcal F')$ and $(X'',\mathcal F'')$, respectively, and that the corresponding fully-augmented shadow diagrams $\mathfrak D'$ and $\mathfrak D''$ admit a compatible gluing $f$. Let $\mathfrak D = \mathfrak D'\cup_f\mathfrak D''$, and let $(X,\mathcal F) = (X',\mathcal F')\cup_{\Psi_f(\mathfrak D',\mathfrak D'')}(X'',\mathcal F'')$. Then, $\mathfrak D$ is a fully-augmented shadow diagram for the bridge trisection on $(X,\mathcal F)$ given in Proposition~\ref{prop:glue_tri}, once it is modified in the following ways: \begin{enumerate} \item The arcs of $(\frak a_4)'\sqcup(\mathcal A_4^*)'$ and $(\frak a_4)''\sqcup(\mathcal A_4^*)''$ whose endpoints lie in the domain of definition and range of $f$ should be deleted. \item If $\partial X''$ is disconnected, then, for each component $Y''$ of the range of $\Psi_f(\mathfrak D',\mathfrak D'')$ there is a subcollection of curves of $\alpha_i''$, for each $i\in\mathbb{Z}_3$, that separate the components of $\partial \Sigma''$ corresponding to $Y''$ from the other components of $\partial\Sigma''$. Throw out one curve from the subcollection of curves corresponding to each connected component of the range of $\Psi_f(\mathfrak D',\mathfrak D'')$. \item If $\partial X''$ is connected but $\partial X'$ is disconnected, then, for each component $Y'$ of the domain of definition of $\Psi_f(\mathfrak D',\mathfrak D'')$ there is a subcollection of curves of $\alpha_i'$, for each $i\in\mathbb{Z}_3$, that separate the components of $\partial \Sigma'$ corresponding to $Y'$ from the other components of $\partial\Sigma'$. Throw out one curve from the subcollection of curves corresponding to each connected component of the domain of definition of $\Psi_f(\mathfrak D',\mathfrak D'')$. \end{enumerate} \end{proposition} The first modification required above is a minor issue. If this is not done, then the would-be-deleted arcs give rise to extra shadows and curves that are redundant in the encoding of the trivial tangle $(H_1,\mathcal T_1)$. The next two modifications are more serious, and are required to ensure that the resulting diagram is a shadow diagram. The rationale was made clear in the proof of Proposition~\ref{prop:glue_tri}, where this precise discarding was carried out at the level of compression disks. Note that only one of the final two modifications will need to be made in practice. \begin{proof} The proof of the proposition follows from the proof of Proposition~\ref{prop:glue_tri}, as applied to the gluing $\Psi_f(\mathfrak D',\mathfrak D'')$. \end{proof} We conclude this section with some examples illustrating gluings of bridge trisected four-manifold pairs. \begin{example} \label{ex:sphere} First, we recall the bridge trisected surfaces bounded by the right-handed trefoil discussed in Examples~\ref{ex:Mob_sh} and~\ref{ex:tref_disk}. Let $\mathfrak D'$ denote the fully-augmented shadow diagram in Figure~\ref{fig:sphere1}, which corresponds to a bridge trisection of the pair $(X',\mathcal F')$, where $\mathcal F'$ is a disk bounded by the right-handed trefoil in $X'=(\mathbb{CP}^2)^\circ$. Let $\mathfrak D''$ denote the fully-augmented shadow diagram in Figure~\ref{fig:sphere2}, which corresponds to the pair $(X'',\mathcal F'')$, where $\mathcal F''$ is the M\"obius band bounded by the left-handed trefoil in $S^3$, which we imagine as being perturbed so that its interior lies in $X''=B^4$. 
Note that $\mathfrak D''$ is the mirror of the diagram shown in Figure~\ref{fig:Mob_sh_3}. Orientations for the boundaries of the diagrams are shown. \begin{figure}[h!] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{tref_disk_3_1} \caption{} \label{fig:sphere1} \end{subfigure}% \begin{subfigure}{.4\textwidth} \centering \includegraphics[width=.7\linewidth]{Mob_sh_3_1} \caption{} \label{fig:sphere2} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{.7\textwidth} \centering \includegraphics[width=.9\linewidth]{gluing1} \caption{} \label{fig:sphere3} \end{subfigure}% \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.9\linewidth]{gluing2} \caption{} \label{fig:sphere4} \end{subfigure} \caption{(A) A shadow diagram for the disk bounded by the right-handed trefoil in $(\mathbb{CP}^2)^\circ$. (B) A shadow diagram for the M\"obius band bounded by the right-handed trefoil in $B^4$. (C) The result of gluing these diagrams via the unique compatible gluing: a shadow diagram for a projective plane in $\mathbb{CP}^2$. (D) is obtained from (C) by deperturbing along the indicated shadows.} \label{fig:sphere} \end{figure} These bridge trisections induce open-book braidings on the boundaries of their corresponding manifold pairs that are orientation-reversing diffeomorphic. Both open-book braidings have disk page and boundary link in 2--braid position: For $\mathfrak D'$, the monodromy is three \emph{positive} half-twists about the two braid points. This was described in Example~\ref{ex:tref_disk} and Figure~\ref{fig:tref_disk}. However, for $\mathfrak D''$, the half-twists are \emph{negative}, since $\mathfrak D''$ is the mirror of the diagram discussed in Example~\ref{ex:Mob_sh} and Figure~\ref{fig:Mob_sh}. Let $f\colon\partial\mathfrak D'\to\partial\mathfrak D''$ be the orientation-reversing diffeomorphism that matches the endpoints of the arcs $(\mathcal A^*_1)'$ with those of $(\mathcal A^*_1)''$. There is an orientation-reversing diffeomorphism $\psi_f(\mathfrak D',\mathfrak D'')\colon (P_1',\bold y_1')\to(P_2'',\bold y_2'')$ that extends $f$; simply pick the obvious diffeomorphism relating the pair in Figure~\ref{fig:tref_disk_4} to the mirror of the pair in Figure~\ref{fig:Mob_sh_4}. It follows that $f$ is a compatible gluing, corresponding to an orientation-reversing diffeomorphism $\Psi_f(\mathfrak D',\mathfrak D'')$. Let $(X,\mathcal F) = (X',\mathcal F')\cup_{\Psi_f(\mathfrak D',\mathfrak D'')}(X'',\mathcal F'')$. By Proposition~\ref{prop:glue_diag}, the diagram $\mathfrak D=\mathfrak D'\cup_f\mathfrak D''$ shown in Figure~\ref{fig:sphere3} is a shadow diagram for $(X,\mathcal F)$. Observe how the arcs $(\mathcal A_4^*)'$ and $(\mathcal A_4^*)''$ have been discarded in accordance with the first modification required by Proposition~\ref{prop:glue_diag}. (The second and third modifications are not necessary in this example, since $\partial X'$ and $\partial X''$ are connected.) A brief examination reveals that this diagram can be deperturbed three times, using the indicated shadows. (See Section~\ref{sec:stabilize} for details about perturbation.) Doing so produces the diagram of Figure~\ref{fig:sphere4}. We have that $X\cong\mathbb{CP}^2$ and $\mathcal F\cong\mathbb{RP}^2$, but it is not true that $(X,\mathcal F)\cong(\mathbb{CP}^2,\mathbb{RP}^2)$, where the latter pair is the projectivization of the standard pair $(\mathbb{C}^3,\mathbb{R}^3)$. 
The standard projective pair $(\mathbb{CP}^2,\mathbb{RP}^2)$ is depicted in Figure~2 of~\cite{MeiZup_18_Bridge-trisections-of-knotted}. One way to distinguish these two pairs is to note that $\mathcal F$ has normal Euler number +6, while $\mathbb{RP}^2$ has normal Euler number +2. Moreover, $\pi_1(X\setminus\nu(\mathcal F))\cong\mathbb{Z}/2\mathbb{Z}$, while $\pi_1(\mathbb{CP}^2\setminus\nu(\mathbb{RP}^2))\cong 1$. These facts are left as exercises for the reader. \end{example} \begin{example} \label{ex:annulus} Consider the shadow diagram $\mathfrak D'$ shown in Figure~\ref{fig:annulus1}, which corresponds to a bridge trisection of the cylinder pair $(X',\mathcal F')=(S^3\times I, S^1\times I)$. The underlying trisection of $S^3\times I$ can be thought of as follows. If one ``trisects" $S^3$ into three three-balls, which meet pairwise along disk pages of the open-book decomposition with unknotted binding -- so that the triple intersection of the three-balls is this binding -- then the trisection of $S^3\times I$ can be thought of as the product of this ``trisection" of $S^3$ with the interval, and the core $\Sigma$ is simply the product of the binding with the interval. So, the diagram $\mathfrak D'$ can be thought of as a bridge trisection for a copy $\mathcal F'$ of $\Sigma$. To carry this out, the copy $\mathcal F'$ of the annular core must be perturbed relative to the original copy $\Sigma$ of the core. We leave it as an exercise to the reader to verify that $\mathfrak D'$ describes the cylinder pair, as claimed. \begin{figure}[h!] \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.8\linewidth]{annulus1} \caption{} \label{fig:annulus1} \end{subfigure}% \begin{subfigure}{.6\textwidth} \centering \includegraphics[width=.9\linewidth]{annulus2} \caption{} \label{fig:annulus2} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{.6\textwidth} \centering \includegraphics[width=.8\linewidth]{annulus3} \caption{} \label{fig:annulus3} \end{subfigure}% \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.8\linewidth]{annulus4} \caption{} \label{fig:annulus4} \end{subfigure} \caption{(A) A shadow diagram for $S^3\times I$. (B) A copy of this diagram and a copy of its mirror, with compatible gluing $f$ indicated. (C) The result of the gluing, $S^3\times I$. (D) is obtained from (C) by deperturbing along the indicated shadows.} \label{fig:annulus} \end{figure} Now, let $\mathfrak D''$ denote a mirror copy of $\mathfrak D'$ that corresponds to a second copy of the cylinder pair: $(X'',\mathcal F'')=(S^3\times I,S^1\times I)$. Each of the two boundary components of both $(X',\mathcal F')$ and $(X'',\mathcal F'')$ has an induced open-book braiding whose page is a disk with one braid point. Let $f\colon\partial\mathfrak D'\to\partial\mathfrak D''$ be the orientation-reversing partial diffeomorphism shown in Figure~\ref{fig:annulus2} -- i.e., $f$ maps the boundary component $S^1\times\{1\}$ of $\mathfrak D'$ to the boundary component $S^1\times\{0\}$ of $\mathfrak D''$. Trivially, $f$ extends to an orientation-reversing partial diffeomorphism $\psi_f(\mathfrak D',\mathfrak D'')\colon (P_1',\bold y_1')\to(P_2'',\bold y_2'')$ between the page pairs corresponding to the chosen boundary components of $\mathfrak D'$ and $\mathfrak D''$. Thus, we have an orientation-reversing partial diffeomorphism $\Psi_f(\mathfrak D',\mathfrak D'')\colon\partial(X',\mathcal F')\to\partial(X'',\mathcal F'')$. 
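Concretely, and purely as a restatement of the choice of $f$ under the product identifications $X'\cong X''\cong S^3\times I$, the domain of definition and range of this partial diffeomorphism are
$$\left(S^3\times\{1\},\,S^1\times\{1\}\right)\subset\partial(X',\mathcal F')\qquad\text{and}\qquad\left(S^3\times\{0\},\,S^1\times\{0\}\right)\subset\partial(X'',\mathcal F''),$$
respectively. 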
Let $(X,\mathcal F) = (X',\mathcal F')\cup_{\Psi_f(\mathfrak D',\mathfrak D'')}(X'',\mathcal F'')$. By Proposition~\ref{prop:glue_diag}, the diagram $\mathfrak D = \mathfrak D'\cup_f\mathfrak D''$ shown in Figure~\ref{fig:annulus3} is a shadow diagram for $(X,\mathcal F)$. Note that one curve of each color has been discarded in accordance with modification (2). As before, the diagram obtained from gluing can be deperturbed. (This is a common phenomenon when gluing shadow diagrams.) The diagram obtained after deperturbing, shown in Figure~\ref{fig:annulus4}, is diffeomorphic to the original diagram $\mathfrak D'$. Of course, $(X,\mathcal F)\cong(S^3\times I,S^1\times I)$. In this example, modification (1) of Proposition~\ref{prop:glue_diag} is implicit; the arcs $\frak a_4'$, $(\mathcal A_4^*)'$, $\frak a_4''$, and $(\mathcal A_4^*)''$ were never drawn and were never needed. More interestingly, we see how modification (2) is required. The curves of $\mathfrak D''$ have been discarded upon gluing. Had this not been done, there would have been parallel curves in $\alpha_i$ for each $i\in\mathbb{Z}_3$. This would imply that $P_i = \partial_-H_i$ would have a two-sphere component, which is not allowed. \end{example} \begin{example} \label{ex:torus1} Finally, we consider two more compatible gluings involving $\mathfrak D'$. First, let $\mathfrak D''$ denote a mirror copy of $\mathfrak D'$, and let $f\colon\partial\mathfrak D'\to\partial\mathfrak D''$ be the compatible gluing shown in Figure~\ref{fig:torus1}. This compatible gluing is similar to the one explored in Example~\ref{ex:annulus}, but this time $f$ is not a partial diffeomorphism: it is defined on all of $\partial\mathfrak D'$. The induced gluing $\Psi_f(\mathfrak D',\mathfrak D'')$ matches the two boundary components of $(X',\mathcal F')$ with the corresponding components of $(X'',\mathcal F'')$. As a result, $(X,\mathcal F) = (X',\mathcal F')\cup_{\Psi_f(\mathfrak D',\mathfrak D'')}(X'',\mathcal F'')$ is the closed four-manifold pair $(S^3\times S^1,S^1\times S^1)$, and the diagram $\mathfrak D = \mathfrak D'\cup_f\mathfrak D''$ for this pair is shown in Figure~\ref{fig:torus2}. As in Example~\ref{ex:annulus}, the redundant arcs have been suppressed, and the curves $\alpha_i''$ have been discarded upon gluing. Also, we can again deperturb, arriving at the diagram of Figure~\ref{fig:torus4}. \begin{figure}[h!] \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.8\linewidth]{annulus1} \caption{} \label{fig:torus0} \end{subfigure}% \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.7\linewidth]{torus1} \caption{} \label{fig:torus1} \end{subfigure}% \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.7\linewidth]{torus2} \caption{} \label{fig:torus2} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.7\linewidth]{torus3} \caption{} \label{fig:torus3} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=.7\linewidth]{torus4} \caption{} \label{fig:torus4} \end{subfigure} \caption{(A) A shadow diagram for $S^3\times I$. (B) A copy of this diagram and a copy of its mirror, with compatible gluing $f$ indicated. (C) The result of the gluing, $S^3\times S^1$. (D) A compatible self-gluing of the diagram. (E) The result of the self-gluing, $S^3\times S^1$. (E) is obtained from (C) by deperturbing along the indicated shadows.} \label{fig:torus} \end{figure} Now, let $f$ denote the compatible self-gluing shown in Figure~\ref{fig:torus3}. 
The induced self-map of $(S^3\times I, S^1\times I)$ is $\Psi_f(\mathfrak D')\colon(S^3\times\{0\},S^1\times\{0\})\to(S^3\times\{1\},S^1\times\{1\})$. The diagram resulting from the compatible self-gluing $f$ is the diagram of Figure~\ref{fig:torus4}, which describes $(S^3\times S^1,S^1\times S^1)$, as noted before. \end{example} \section{Classification and Examples} \label{sec:class} In this section, we classify $(b,\bold c;v)$--bridge trisections in the trivial cases where one or more of the parameters is sufficiently small. Then, we present families of examples representing more interesting choices of parameters and pose questions about further possible classification results. To get started, we discuss the connected sum and boundary connected sum operations, then we introduce some notions of reducibility for bridge trisections. \subsection{Connected sum of bridge trisections} \label{subsec:sum} \ Given trisections $\mathbb T'$ and $\mathbb T''$ for four-manifolds $X'$ and $X''$, it is straight-forward to see that there is a trisection $\mathbb T=\mathbb T'\#\mathbb T''$ describing $X'\#X''$. Let $\varepsilon\in\{',''\}$. All that needs to be done is to choose the points $x^\varepsilon\in X^\varepsilon$ that determine the connected sum to lie on the respective cores. Having done so, the pieces of the trisection $\mathbb T$ can be described as follows: $\Sigma = \Sigma'\#\Sigma''$, $H_i = H_i'\natural H_i''$, and $Z_i = Z_i'\natural Z_i''$. Note that $\mathbb T$ is independent of the choice of points made above. \begin{remark} \label{rmk:sum_glue} Note that the connected sum operation, as described, is a very simple example of a gluing of trisections, as described in detail in Section~\ref{sec:gluing}. Each of $\mathbb T^\varepsilon\setminus\nu(x^\varepsilon)$ is automatically a trisection with one new boundary component diffeomorphic to $S^3$. If $\mathfrak D^\varepsilon$ is a shadow diagram for $\mathbb T^\varepsilon$, then $\mathfrak D^\varepsilon\setminus\nu(x^\varepsilon)$ is a diagram for $\mathbb T^\varepsilon\setminus\nu(x^\varepsilon)$ after a simple modification is made in the case that $\partial X^\varepsilon\not=\emptyset$: In this case, a curve $\delta$ must be added to each of the $\alpha_i$ that is parallel to the curve $\partial\nu(x^\varepsilon)$ where $\Sigma'$ and $\Sigma''$ were glued together. (This is a separating reducing curve in the sense of Definition~\ref{def:reduce}, below.) \end{remark} There is a complication in extending this interpretation to connected sum of bridge trisections with boundary that was not present in discussions of the connected sum of \emph{closed} bridge trisections elsewhere in the literature. The na\"ive idea is to simply choose the connected sum points $x^\varepsilon$ to be bridge points. This works for closed bridge trisections, because every bridge point is incident to a flat strand in each of the three trivial tangles. This is not the case for bridge trisections with boundary. To convince oneself of the problem, try to form the connected sum of two bridge trisections, each of which is a copy of the bridge trisection described in Figure~\ref{fig:b=11}, which corresponds to the standard positive M\"obius band. It is simply not possible: The removal of an open neighborhood around any bridge point has the effect that one of the trivial tangles will no longer be trivial, since it will have a strand with no endpoints on $\Sigma$. 
One might think that perturbing the bridge trisection (see Subsection~\ref{subsec:interior_perturbation} below) would fix the problem by creating a bridge point that is incident to flat strands in each arm; however, the problem persists due to consideration of the vertical patches. Since vertical patches are only allowed to be incident to one component of $\partial X$, we cannot puncture our bridge trisection at a bridge point that is incident to a vertical patch. The next lemma makes precise when puncturing a bridge trisection at a bridge point produces a new bridge trisection and indicates how to form the connected sum of bridge trisections. \begin{lemma} \label{lem:punc} Let $\mathbb T$ be a bridge trisection for a pair $(X,\mathcal F)$, and let $x$ be a bridge point. Then, $\mathbb T\setminus\nu(x)$ is a bridge trisection for the pair $(X\setminus\nu(x),\mathcal F\setminus\nu(x))$ if and only if $x$ is incident to a flat patch of $\mathcal D_i$ for each~$i\in\mathbb{Z}_3$. If $\mathfrak D = (\Sigma,\alpha_1,\alpha_2,\alpha_3,\mathcal T_1^*,\mathcal T_2^*,\mathcal T_3^*,\bold x)$ is a shadow diagram for $\mathbb T$, then a shadow diagram for $\mathbb T\setminus\nu(x)$ can be obtained as follows: Let $\delta = \partial\nu(x)$ in $\mathfrak D$. Let $\delta_i$ denote the result of sliding $\delta$ off the arc $\tau_i^*$ of $\mathcal T_i^*$ that is incident to $x$. Let $\Sigma' = \Sigma\setminus\nu(x)$, $\alpha_i' = \alpha_i\cup\delta_i$, $(\mathcal T_i^*)' = \mathcal T_i^*\setminus\tau_i^*$, and $\bold x' = \bold x\setminus\{x\}$. Then, there are two cases: If $\partial X=\emptyset$, then $$\mathfrak D' = (\Sigma',\alpha_1,\alpha_2,\alpha_3,(\mathcal T_1^*)',(\mathcal T_2^*)',(\mathcal T_3^*)',\bold x')$$ is a shadow diagram for $\mathbb T\setminus\nu(x)$. If $\partial X\not=\emptyset$, then $$\mathfrak D' = (\Sigma',\alpha_1',\alpha_2',\alpha_3',(\mathcal T_1^*)',(\mathcal T_2^*)',(\mathcal T_3^*)',\bold x')$$ is a shadow diagram for $\mathbb T\setminus\nu(x)$. \end{lemma} \begin{proof} If $x$ is incident to a flat patch of $\mathcal D_i$ for each $i\in\mathbb{Z}_3$, then it is straight-forward to verify that the pieces of $\mathbb T\setminus\nu(x)$ form a bridge trisection. The main substantive changes are that (1) the number of components of $\partial X$, $\partial\Sigma$, and $\partial_-H_i$ all increase by one; and (2) for each $i\in\mathbb{Z}_3$, the flat strand of $\mathcal T_i$ incident to $x$ becomes a vertical strand and the flat patch of $\mathcal D_i$ incident to $x$ becomes a vertical patch. Conversely, if $x$ is incident to a vertical patch $D\subset\mathcal D_i$ for some $i\in\mathbb{Z}_3$, then $\mathcal D_i\setminus\nu(x)$ is no longer a trivial disk-tangle, since $D\setminus\nu(x)$ is neither vertical nor flat, as it intersects multiple components of $\partial X$. If $\partial X=\emptyset$, then the $H_i$ are handlebodies and the $H_i'$ are compressionbodies with $\partial_-H_i'\cong D^2$. In this case, the curves $\alpha_i$ still encode $H_i'$ without modification. If $\partial X\not=\emptyset$, then $\partial_-H_i' \cong \partial_-H_i\sqcup D^2$. In this case, $\delta$ must be added to $\alpha_i$ in order to encode the fact that the new component of $\partial_-H_i'$ is disjoint from the original ones. As curves in a defining set, $\delta$ and $\delta_i$ serve the same role, since they are isotopic. The only reason for pushing $\delta$ off $\tau_i^*$ is to satisfy our convention that the shadow arcs be disjoint from the defining set of curves for the handlebody. 
The shadow arcs $\tau_i^*$ are deleted regardless of whether $\partial X$ is empty, since these shadows correspond to flat strands that become vertical strands upon removal of $\nu(x)$. \end{proof} \begin{example} \label{ex:conn_sum} Consider the shadow diagram $\mathfrak D$ shown in Figure~\ref{fig:conn_sum1}, which corresponds to a bridge trisection of the trivial disk in the four-ball. Figure~\ref{fig:conn_sum2} shows the diagram corresponding to the bridge trisection $\mathbb T' = \mathbb T\setminus\nu(x)$ for $(X',\mathcal F') = (B^4\setminus\nu(x),D^2\setminus\nu(x))$. Note that this diagram is equivalent to that of Figures~\ref{fig:annulus1} and~\ref{fig:torus0}. \end{example} \begin{figure}[h!] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{conn_sum1} \caption{} \label{fig:conn_sum1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.67\linewidth]{conn_sum2} \caption{} \label{fig:conn_sum2} \end{subfigure} \caption{(A) A shadow diagram for a bridge trisection of $(B^4,D^2)$. (B) The diagram obtained by puncturing at the bridge point $x$.} \label{fig:conn_sum} \end{figure} In light of this lemma, it is clear that we can obtain a bridge trisection for the connected sum of surfaces by choosing the connected sum points to be bridge points incident only to flat patches. Though such bridge points need not always exist (see the M\"obius band example referenced above), they can be created via interior perturbation -- at most one in each direction. The punctured trisections $\mathbb T^\varepsilon\setminus\nu(x^\varepsilon)$ can be canonically glued along the new three-sphere-unknot boundary component, according to the techniques of Section~\ref{sec:gluing}. Note that if at least one of $X'$ and $X''$ has boundary, then at least one of the curves $\delta_i'$ or the curves $\delta_i''$ should be discarded upon gluing, as dictated by Propositions~\ref{prop:glue_tri} and~\ref{prop:glue_diag}. Compare Example~\ref{ex:conn_sum} to Example~\ref{ex:annulus}. So far, we have viewed the connected sum of bridge trisections as a special case of gluing bridge trisections, and it has been noted that, for this approach to work, we must form the connected sum at bridge points that are incident to flat patches in each disk-tangle. However, it is possible to work in a slightly more general way so that the punctured objects need not be bridge trisections themselves, but their union will be a bridge trisection of the connected sum. \begin{lemma} \label{lem:conn_sum} Let $\mathbb T'$ and $\mathbb T''$ be bridge trisections for pairs $(X',\mathcal F')$ and $(X'',\mathcal F'')$, respectively, and let $x'$ and $x''$ be bridge points such that, for each $i\in\mathbb{Z}_3$, one of $x'$ or $x''$ is incident to a flat patch of $\mathcal D_i'$ or $\mathcal D_i''$, respectively. Then, the result $$\mathbb T = (\mathbb T'\setminus\nu(x'))\cup(\mathbb T''\setminus\nu(x''))$$ obtained by removing open neighborhoods of the $x^\varepsilon$ from the $\mathbb T^\varepsilon$ and gluing along the resulting boundaries so that the corresponding trisection pieces are matched is a bridge trisection for $(X,\mathcal F) = (X',\mathcal F')\#(X'',\mathcal F'')$. \end{lemma} \begin{proof} Let $D_i^\varepsilon$ be the patch of $\mathcal D_i^\varepsilon$ containing $x^\varepsilon$ for each $i\in\mathbb{Z}_3$ and each $\varepsilon\in\{',''\}$. Let $D_i = D_i'\cup_{\partial\nu(x^\varepsilon)}D_i''$.
Then $$\mathcal D_i = \mathcal D_i'\cup_{\partial\nu(x^\varepsilon)}\mathcal D_i'' = (\mathcal D_i'\setminus D_i')\sqcup(\mathcal D_i''\setminus D_i'')\sqcup D_i.$$ For each $i\in\mathbb{Z}_3$, one of the $D_i^\varepsilon$ will be flat, so $D_i$ will be flat or vertical, according to whether the other of the $D_i^\varepsilon$ is flat or vertical. In any event, each disk of $\mathcal D_i$ has at most one critical point, so we have a trivial disk-tangle, since the boundary sum of trivial disk-tangles is a trivial disk-tangle. A similar argument shows that the arms of $\mathbb T$ are just the boundary sum of the arms of the $\mathbb T^\varepsilon$ and that each strand is vertical or flat, as desired. The details are straight-forward to check. \end{proof} Note that while the parameters $g$ and $\bold k$ are additive under connected sum, the parameters $b$ and $\bold c$ are $(-1)$--subadditive (e.g., $b = b'+b''-1$ and $c_i = c_i'+c_i''-1$). In the case that the $(X^\varepsilon,\mathcal F^\varepsilon)$ have non-empty boundary, the boundary parameters $\bold p$, $\bold f$, $\bold v$, and $n$ are all additive, since we are discussing connected sum at an \emph{interior point} of the pairs. Unlike the case of the connected sum of two four-manifold trisections, here, the resulting bridge trisection is highly dependent on the choice of bridge points made above. \subsection{Boundary connected sum of bridge trisections} \label{subsec:bound_sum} \ Now consider the operation of boundary connected sum of four-manifolds. We start with the set-up as above, but now we choose the summation points to be points $y^\varepsilon$ lying in components $K^\varepsilon$ of the bindings $\partial \Sigma^\varepsilon$ for each $\varepsilon\in\{',''\}$. In this case, the pieces of the trisection $\mathbb T = \mathbb T'\natural\mathbb T''$ can be described as follows: $\Sigma = \Sigma'\natural\Sigma''$, $H_i = H_i'\natural H_i''$, $Z_i = Z_i'\natural Z_i''$, $B = B'\#B''$, $P_i = P_i'\natural P_i''$, and $Y_i = Y_i'\natural Y_i''$. And in this case, $g$, $\bold k$, and $\bold p$ are additive, while $\bold f$ and $n$ are $(-1)$--subadditive, and $\mathbb T$ is highly dependent on the choice of binding component $K^\varepsilon$ made above. The situation becomes more complicated when we consider boundary connected sum of bridge trisected pairs. The issue here is that $\mathcal F^\varepsilon\cap\partial\Sigma^\varepsilon =\emptyset$, so we cannot choose the $y^\varepsilon$ to lie simultaneously on $\partial\Sigma^\varepsilon$ and on $\mathcal F^\varepsilon$. Our approach is to first perform the boundary connected sum of the ambient four-manifolds, as just described, then consider the induced bridge trisection of the split union $(X,\mathcal F'\sqcup\mathcal F'')$ of surface links. We now describe a modification of this bridge trisection that will produce a bridge trisection of $(X,\mathcal F'\natural\mathcal F'')$. Suppose that we would like to form the boundary connected sum of $(X',\mathcal F')$ with $(X'',\mathcal F'')$ at points $y^\varepsilon\in\partial\mathcal F^\varepsilon$. Without loss of generality, we can assume that $y^\varepsilon\in \mathcal F^\varepsilon\cap P_i^\varepsilon$; in relation to the open-book structure on (the chosen component of) $\partial X^\varepsilon$, we assume that $y^\varepsilon$ lies on the page $P_i^\varepsilon$. Henceforth, our model is dependent on the choice of $i\in\mathbb{Z}_3$. Choose arcs $\omega^\varepsilon$ connecting the points $y^\varepsilon$ to the chosen binding components $K^\varepsilon\subset B^\varepsilon$.
Let $z^\varepsilon$ denote the points of $\omega^\varepsilon\cap K^\varepsilon$. Form the boundary connected sum of the ambient four-manifolds at the points $z^\varepsilon$, as described above, so that $\mathcal F'\sqcup\mathcal F''$ is in bridge position with respect to $\mathbb T$. Note that the arcs $\omega^\varepsilon$ give rise to an arc $\omega$ in the page $P_i$ connecting the points $y^\varepsilon$. Use the height function on $H_i$ to flow $\omega$ down to the core $\Sigma$. Let $Q$ represent the square traced out by this isotopy, and let $\omega_* = Q\cap\Sigma$. Let $N$ be a regular neighborhood of $Q$ in $X$. We will change $\mathcal F'\sqcup\mathcal F''$ to $\mathcal F'\natural\mathcal F''$ in a way that will produce a bridge trisection for the latter from the bridge trisection of the former, and this change will be supported inside $N$. See Figure~\ref{fig:bcs_schem1} for a (faithful) schematic of this set-up. The figures depict the case of $i=1$. \begin{proposition} \label{prop:bcs} A bridge trisection for $(X,\mathcal F) = (X'\natural X'',\mathcal F'\natural\mathcal F'')$ can be obtained from the bridge trisection of $(X,\mathcal F'\sqcup\mathcal F'')$ described above by replacing the local neighborhood $N$ of $Q$ shown in Figure~\ref{fig:bcs_schem1} with the local neighborhood $N'$ shown in Figure~\ref{fig:bcs_schem2}. The replacement can be seen in a shadow diagram as the local replacement of the portion of the diagram supported near $\omega_*$ shown in Figure~\ref{fig:bcs_schem3} with the portion shown in Figure~\ref{fig:bcs_schem4}. \end{proposition} \begin{proof} Near $\omega_*$, the neighborhood $N$ is precisely the $(0;0,2)$--bridge trisection of two copies of the trivial disk in $B^4$. To recover all of $N$, we extend upward along $Q$. Because $\omega$ was lowered to $\Sigma$ along a pair of vertical strands of $(H_i,\mathcal T_i'\sqcup\mathcal T_i'')$, we see that the entirety of $N$ is still just the 2--bridge trisection of two copies of the trivial disk. In other words, $N$ is isolating, in a bridge-trisected way, a small disk from each of the $\mathcal F^\varepsilon$. \begin{figure}[h!] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{bcs_schem1} \caption{The neighborhood $N$.} \label{fig:bcs_schem1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{bcs_schem2} \caption{The neighborhood $N'$.} \label{fig:bcs_schem2} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{.5\textwidth} \centering \includegraphics[height=.7in]{bcs_schem3} \caption{Local shadow diagram before...} \label{fig:bcs_schem3} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[height=.7in]{bcs_schem4} \caption{...and after boundary connected sum.} \label{fig:bcs_schem4} \end{subfigure} \caption{The trisected local neighborhood (A) is exchanged for the trisected local neighborhood (B) to carry out an ambient boundary connected sum of surface-links. The local change is depicted with shadow diagrams in the change from (C) to (D). Note that, globally, the pink shadow arcs necessarily correspond to vertical strands of $\mathcal T_1$, while the light blue and light green shadow arcs may correspond (globally) to either flat or vertical strands.} \label{fig:bcs} \end{figure} Now, to perform the (ambient) boundary connected sum of the $\mathcal F^\varepsilon$ at the points $y^\varepsilon$, we must attach a half-twisted band $\frak b$ connecting these points.
(It should be half-twisted because $\partial \mathcal F'$ and $\partial\mathcal F''$ are braided about $B$; the half-twist will ensure that the result $\partial\mathcal F'\#\partial\mathcal F''$ is still braided about $B$.) We also assume that the core of $\frak b$ lies in $P_i$. The change effected by attaching the half-twisted band is localized to the neighborhood $N$. Therefore, it suffices to understand how $N$ is changed. Although we are describing an ambient boundary connected sum of surfaces in a four-manifold $X$ that may be highly nontrivial, the neighborhood $N$ is a four-ball, so it makes sense to import the bridge-braided band presentation technology from Section~\ref{sec:four-ball}. Figure~\ref{fig:bcs_band1} shows a bridge-braided ribbon presentation for~$N$, together with the half-twisted band~$\frak b$. Figure~\ref{fig:bcs_band2} shows the effect of attaching the band, together with the dual band; this is a ribbon presentation for the boundary connected sum of the two disks in $N$. Figure~\ref{fig:bcs_band3} shows a bridge-braided ribbon presentation for this object, which we denote by $N'$. Note that the boundaries of $N$ and $N'$ are both 2--braids and are identical, except where they differ by a half-twist. As stated before, we assume this difference is supported near $P_i$. (Note that in the schematic of Figure~\ref{fig:bcs_schem2}, the half-twist is shown in the spread $Y_{i-1}$, rather than in $P_i$, due to the reduction in dimension. Similarly, in the frames of Figures~\ref{fig:bcs_band1} and~\ref{fig:bcs_band2}, the band $\frak b$ and the crossing are illustrated away from $P_i$.) \begin{figure}[h!] \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.8\linewidth]{bcs_band1} \caption{} \label{fig:bcs_band1} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.8\linewidth]{bcs_band2} \caption{} \label{fig:bcs_band2} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \includegraphics[width=.8\linewidth]{bcs_band3} \caption{} \label{fig:bcs_band3} \end{subfigure}% \caption{(A) A ribbon presentation for $N$, together with the band $\frak b$ realizing the boundary connected sum; (B) A ribbon presentation for $N'$, the result of the boundary connected sum; (C) a bridge-braided ribbon presentation for $N'$.} \label{fig:bcs_band} \end{figure} The neighborhood $N'$ is the $(1;0,2)$--bridge trisection of the spanning disk for the unknot that induces the braiding of the unknot as a $(2,1)$--curve in the complement of the (unknotted) binding. The corresponding bridge-braided ribbon presentation has one band, which is a helper band in the sense of Remarks~\ref{rmk:helpers} and~\ref{rmk:helpers2}. This helper band is the dual band to $\frak b$. Because $\partial N$ and $\partial N'$ are identical away from a neighborhood of $\omega$, we can cut $N$ out and glue in $N'$ to realize the attaching of $\frak b$; i.e., to realize the ambient boundary connected sum. \end{proof} \subsection{Notions of reducibility} \label{subsec:reduce} \ We now discuss three notions of reducibility for trisections of pairs that we will show correspond to the connected sum and boundary connected sum operations discussed above. These properties are distinct from, but related to, the properties of being stabilized or perturbed, which are discussed in Section~\ref{sec:stabilize}. \begin{definition} \label{def:reduce} Let $\mathbb T$ be a bridge trisection for a pair $(X,\mathcal F)$.
Let $\delta\subset\Sigma\setminus\nu(\bold x)$ be an essential simple closed curve. \begin{enumerate} \item The curve $\delta$ is called a \emph{reducing curve} if, for each $i\in\mathbb{Z}_3$, there exists a disk $E_i\subset H_i\setminus\nu(\mathcal T_i)$ with~$\partial E_i = \delta$. \item The curve $\delta$ is called a \emph{decomposing curve} if, for each $i\in\mathbb{Z}_3$, there exists a disk $E_i\subset H_i$ with~$\partial E_i = \delta$ and with $|E_i\cap\mathcal T_i|=1$. A decomposing curve is called \emph{trivial} if it bounds a disk in $\Sigma$ containing a single bridge point. \item An embedded three-sphere $S\subset X$ is a \emph{trisected reducing sphere} if $Z_i\cap S$ is a three-ball and $H_i\cap S$ is a disk for each~$i\in\mathbb{Z}_3$, and $\Sigma\cap S$ is a reducing curve. \item An embedded three-sphere-unknot pair $(S,K)\subset(X,\mathcal F)$ is a \emph{(nontrivial) trisected decomposing sphere pair} if $$(Z_i\cap S,\mathcal D_i\cap S)\cong(B^3,I)$$ is a trivial 1--strand tangle in a three-ball for each~$i\in\mathbb{Z}_3$, and $\Sigma\cap S$ is a (nontrivial) decomposing curve. \item A trisection is \emph{reducible} (resp., \emph{decomposable}) if it admits a reducing curve (resp., a nontrivial decomposing curve). \setcounter{saveenum}{\value{enumi}} \end{enumerate} Let $\eta\subset\Sigma\setminus\nu(\bold x)$ be an essential, neatly embedded arc. \begin{enumerate} \setcounter{enumi}{\value{saveenum}} \item The arc $\eta$ is called a \emph{reducing arc} if, for each $i\in\mathbb{Z}_3$, there exists a neatly embedded arc $\eta_i\subset P_i$ and a disk $E_i\subset H_i\setminus\nu(\mathcal T_i)$ with $\partial E_i = \eta\cup\eta_i$. \item A neatly embedded three-ball $B\subset X\setminus\mathcal F$ is a \emph{trisected boundary-reducing ball} if, for all $i\in\mathbb{Z}_3$, we have $Z_i\cap B$ is a three-ball and $H_i\cap B$ is a disk, and $\Sigma\cap B$ is a reducing arc. \item A trisection is \emph{boundary-reducible} if it admits a reducing arc. \end{enumerate} \end{definition} \begin{lemma} \label{lem:reduce} If a trisection $\mathbb T$ is reducible, decomposable, or boundary-reducible, then $\mathbb T$ admits, respectively, a trisected reducing sphere, a nontrivial trisected decomposing sphere pair, or a trisected boundary-reducing ball. \end{lemma} \begin{proof} What follows is closely based on the proof of Proposition~3.5 from~\cite{MeiSchZup_16_Classification-of-trisections-and-the-Generalized}, where reducing curves are assumed (implicitly) to be separating, and some clarification is lacking. Here, we give added detail and address the latter two conditions, which are novel. Suppose $\mathbb T$ is either reducible or decomposable, with reducing or decomposing curve $\delta$ bounding disks $E_i$ in the $H_i$. Let $R_i = E_i\cup_\delta\overline E_{i+1}$ be the given two-sphere in $H_i\cup_\Sigma\overline H_{i+1}\subset \partial Z_i$. Recall (Subsection~\ref{subsec:DiskTangles}) that $Z_i$ is built by attaching 4--dimensional 1--handles to the lensed product $Y_i\times[0,1]$ along $Y_i\times\{1\}$. A priori, the $R_i$ may not be disjoint from the belt spheres of the 1--handles in $Z_i$; however, by~\cite{Lau_73_Sur-les-2-spheres-dune-variete}, it can be arranged via handleslides and isotopies of the 1--handles that $R_i$ is disjoint from the belt spheres. Thus, we can assume that either (1) $R_i$ is parallel to a belt sphere, or (2) $R_i$ is contained in $Y_i\times\{1\}$. These cases correspond to whether $\delta$ is non-separating or separating, respectively.
In case (1), $R_i$ bounds the cocore of the 1--handle, which is a three-ball in $Z_i$. In case (2), since $Y_i$ is irreducible, $R_i$ bounds a three-ball in $Y_i$ whose interior can be perturbed into $Z_i$. In either case, we get a three-ball $B_i$ in $Z_i$ whose boundary is $E_i\cup_\delta\overline E_{i+1}$, and the union $S_\delta = B_1\cup B_2\cup B_3$ gives a trisected three-sphere. In the case that $\delta$ is reducing, we are done: $S_\delta$ is a trisected reducing sphere. In the case that $\delta$ is a decomposing curve, it remains to show that $S_\delta\cap\mathcal F$ is unknotted and $B_i\cap\mathcal F$ is a trivial arc; the former is implied by the latter, which we now show. Note that $B_i$ and $\mathcal D_i$ are both neatly embedded in $Z_i$ and that $\mathcal D_i$ is boundary parallel. Using the boundary parallelism of $\mathcal D_i$, we can arrange that a component $D$ of $\mathcal D_i$ intersects $B_i$ if and only if $D$ intersects $R_i = \partial B_i$. It follows that there is a unique component $D\subset \mathcal D_i$ that intersects $B_i$. If we isotope $D$ to a disk $D_*\subset\partial Z_i$, then we find that $D_*\cap R_i$ consists of an arc and some number of simple closed curves. By an innermost curve argument, we may surger $D_*$ to obtain a new disk $D_*'$ such that $D_*'\cap R_i$ consists solely of an embedded arc. Since $D_*'$ and $D_*$ have the same boundary, they are isotopic rel-$\partial$ in $Z_i$ by Proposition~\ref{prop:triv_disks}. Reversing this ambient isotopy, we can arrange that $B_i\cap\mathcal D_i = B_i\cap D$ consists of a single arc. Moreover, this arc is trivial, since it is isotopic to the arc $R_i\cap D_*$ in $\partial Z_i$, and $R_i$ is a decomposing sphere for either (a) the unknot $\partial D$ or (b) the unknotted, vertical strand $D\cap(H_i\cup_\Sigma\overline H_{i+1})$. Either way, $R_i$ cuts off an unknotted arc. Thus, $(S_\delta,K)$ can be constructed to be a decomposing sphere for the trisection, as desired, where $K$ is the three-fold union of the trivial arcs $B_i\cap\mathcal F$. Now suppose that $\mathbb T$ is boundary-reducible, with reducing arc $\eta$ and arcs $\eta_i$ such that $\eta\cup\eta_i$ bounds a disk $E_i\subset H_i$. Consider the neatly embedded 2--disk $R_i = E_i\cup_\eta \overline E_{i+1}$ in $H_i\cup_\Sigma\overline H_{i+1}\subset\partial Z_i$. Let $B_i$ be the trace of a small isotopy that perturbs the interior of $R_i$ into $Z_i$. Then the union $B_\eta = B_1\cup B_2\cup B_3$ is a trisected three-ball. Since $\eta$ is a reducing arc, we are done. \end{proof} \begin{remark}[\textbf{Regarding non-separating curves}] \label{rmk:nonsep} Reducing curves are almost always separating in the following sense. Suppose that $\delta$ is a non-separating reducing curve. Then there is a curve $\eta\subset\Sigma$ that is dual to $\delta$. Let $\delta' = \partial\nu(\delta\cup\eta)$. Then $\delta'$ is a separating reducing curve, unless it is inessential (i.e., parallel to a boundary component of $\Sigma$ or null-homotopic in $\Sigma$). This only occurs if $\Sigma$ is the core of the genus one trisection for $S^1\times S^3$ or for its puncture, $(S^1\times S^3)^\circ$. In any event, the neighborhood $\nu(S_\delta\cup\eta)$, where $S_\delta$ is the reducing sphere corresponding to $\delta$ as in Lemma~\ref{lem:reduce} above, is diffeomorphic to $(S^1\times S^3)^\circ$.
If $\delta$ is a non-separating decomposing curve with corresponding decomposing pair $(S_\delta,K_\delta)$, then $K_\delta$ can be separating or non-separating as a curve in $\mathcal F$. If $K_\delta$ is non-separating, then we can surger $(X,\mathcal F)$ along the pair $(S_\delta,K_\delta)$ to obtain a new pair $(X',\mathcal F')$. That the surgery of $\mathcal F$ along $K_\delta$ can be performed ambiently uses the fact that $K_\delta$ is an unknot in $S_\delta$, hence bounds a disk in $X\setminus\mathcal F$. Working backwards, there is an $S^0\subset\mathcal F'\subset X'$ along which we can surger $(X',\mathcal F')$ to obtain $(X,\mathcal F)$. It follows that $X = X'\#(S^1\times S^3)$ and $\mathcal F$ is obtained from $\mathcal F'$ by tubing. Diagrammatically, the surgery from $(X,\mathcal F)$ to $(X',\mathcal F')$ is realized by surgering $\Sigma$ along $\delta$. Note that this tubing is not necessarily trivial in the sense that it may or may not be true that $(X,\mathcal F) = (X',\mathcal F')\#(S^1\times S^3,S^1\times S^1)$. \end{remark} A bridge trisection satisfying one of the three notions of reducibility decomposes in a natural way. See Subsection~\ref{subsec:sum} for a detailed discussion of connected sum and boundary connected sum operations. In particular, in what follows, we let $\mathbb T'\#\mathbb T''$ denote the connected sum of trisections, regardless of whether or not the connected summing point is a bridge point. \begin{proposition} \label{prop:reduce} Let $\mathbb T$ be a bridge trisection for a pair $(X,\mathcal F)$. \begin{enumerate} \item If $\mathbb T$ admits a separating reducing curve, then there exist pairs $(X',\mathcal F')$ and $(X'',\mathcal F'')$ with trisections $\mathbb T'$ and $\mathbb T''$ such that $\mathbb T = \mathbb T'\#\mathbb T''$ and $$(X,\mathcal F) = (X'\#X'',\mathcal F'\sqcup\mathcal F'').$$ \item If $\mathbb T$ admits a nontrivial, separating decomposing curve, then there exist pairs $(X',\mathcal F')$ and $(X'',\mathcal F'')$ with trisections $\mathbb T'$ and $\mathbb T''$ such that $\mathbb T = \mathbb T'\#\mathbb T''$ and $$(X,\mathcal F) = (X'\#X'',\mathcal F'\#\mathcal F'').$$ \item If $\mathbb T$ admits a separating reducing arc, then there exist pairs $(X',\mathcal F')$ and $(X'',\mathcal F'')$ with trisections $\mathbb T'$ and $\mathbb T''$ such that $\mathbb T = \mathbb T'\natural\mathbb T''$ and $$(X,\mathcal F) = (X'\natural X'',\mathcal F'\sqcup\mathcal F'').$$ \end{enumerate} \end{proposition} \begin{proof} If $\mathbb T$ admits a separating reducing curve $\delta$, then it admits a separating trisected reducing sphere $S_\delta$, by Lemma~\ref{lem:reduce}. Cutting open along $S_\delta$ and capping off the two resulting three-sphere boundary components with genus zero trisections of $B^4$ results in two new trisections $\mathbb T'$ and $\mathbb T''$ for pairs $(X',\mathcal F')$ and $(X'',\mathcal F'')$, as desired in part (1). For part (2), we proceed as above, except we cap off with two genus zero 0--bridge trisections of $(B^4,D^2)$ to achieve the desired result. (If any of the disks $E_i$ bounded by $\delta$ in the $H_i$ intersect vertical strands $\tau_i$, then we can perturb to make these intersecting strands flat. If such perturbations are performed before cutting, they can be undone with deperturbation after gluing. This is related to the discussion immediately preceding Lemma~\ref{lem:conn_sum}.) If $\mathbb T$ admits a separating reducing arc $\eta$, then it admits a separating trisected reducing ball $B_\eta$, by Lemma~\ref{lem:reduce}.
Cutting open along $B_\eta$ results in two new trisections $\mathbb T'$ and $\mathbb T''$ for pairs $(X',\mathcal F')$ and $(X'',\mathcal F'')$, as desired in part (3). \end{proof} \begin{remark}[\textbf{Boundary-decomposing arcs}] \label{rmk:bda} Conspicuously absent from the above notions of reducibility is a characterization of what might be referred to as boundary-decomposability -- in other words, a characterization of when we have $$(X,\mathcal F) = (X'\natural X'',\mathcal F'\natural\mathcal F'').$$ The obvious candidate for such a notion would be the existence of a neatly embedded, essential arc $\eta\subset\Sigma$, similar to the one involved in the notion of boundary-reducibility, but where the disks $E_i$ each intersect the respective $\mathcal T_i$ in precisely one point. However, a lengthy examination of such arcs reveals that they rarely correspond to surfaces that are boundary connected sums in the desired way. Indeed, many of the examples given later in this section admit such arcs, but are not boundary-connected sums of bridge trisected surfaces. We have been unable to find a satisfying characterization of when this occurs. \end{remark} \subsection{Classification for small parameters} \label{subsec:small} \ As a first example, consider the $(4,(2,4,2);3)$--bridge trisection shown in Figure~\ref{fig:c=b}, which is the boundary sum of a 1--bridge trisection, a perturbed 3--bridge trisection, and three 0--bridge trisections, and which corresponds to $(B^4,S^2\sqcup S^2\sqcup D^2\sqcup D^2\sqcup D^2)$. It turns out that such a bridge trisection is obtained whenever $c_i=b$ for some $i\in\mathbb{Z}_3$. (Recall that $\partial\mathcal D_i$ contains a flat $b$--bridge $c_i$--component unlink, so $b\geq c_i$ for all $i\in\mathbb{Z}_3$.) \begin{figure}[h!] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{c=b1} \caption{} \label{fig:c=b1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{c=b2} \caption{} \label{fig:c=b2} \end{subfigure} \caption{A shadow diagram (A) and schematic tri-plane diagram (B) for the unique $(4,(2,4,2);3)$--bridge trisection, which is totally reducible.} \label{fig:c=b} \end{figure} \begin{proposition} \label{prop:c=b} Let $\mathbb T$ be a $(b,\bold c;v)$--bridge trisection for a pair $(B^4,\mathcal F)$. If $b=c_i$ for some $i\in\mathbb{Z}_3$, then $c_{i+1}=c_{i+2}=c$ and $$(B^4,\mathcal F) = \left(B^4,(\sqcup_cS^2)\sqcup(\sqcup_vD^2)\right),$$ and $\mathbb T$ is the boundary sum of $c$ genus zero bridge trisections of $(B^4,S^2)$, each of which is a finger perturbation of the 1--bridge trisection, and $v$ genus zero 0--bridge trisections of $(B^4,D^2)$. \end{proposition} \begin{proof} Suppose without loss of generality that $c_2=b$. By Proposition~\ref{prop:bridge_to_band}, $\mathcal F$ admits a $(b,\bold c;v)$--bridge-braided band presentation. In particular, $\mathcal F$ can be built with $n = b-c_2=0$ bands. It follows that $c_1=c_3$. It also follows that the flat disks of $(Z_2,\mathcal D_2)$ are given as products on the $b$ flat strands of $(H_2,\mathcal T_2)$. We can assume that the union of the red and blue shadow arcs is a collection of $c_1$ embedded polygons in $\Sigma$, since they determine a $b$--bridge $c_1$--component unlink in $H_1\cup_\Sigma\overline H_2$. We can also assume that the green shadow arcs coincide with the blue shadow arcs, due to the product structure on the flat disks of $\mathcal D_2$. See Figure~\ref{fig:c=b1}.
Let $\delta$ be a simple closed curve in $\Sigma\setminus\nu(\bold x)$ that separates the red/blue polygons from the bridge points that are adjacent to no shadow arc. (Note that, here, every bridge point is adjacent to either 0 or 3 shadow arcs by the above considerations.) Then $\delta$ is a reducing curve for $\mathbb T$ so that $\mathbb T=\mathbb T^1\#\mathbb T^2$, where $\mathbb T^1$ is a $(b,\bold c)$--bridge trisection for a pair $(S^4,\mathcal F^1)$ and $\mathbb T^2$ is a $(0,0;v)$--bridge trisection for a pair $(B^4,\mathcal F^2)$. Because the blue and green shadow arcs coincide, each polygon is a finger perturbation of the 1--bridge splitting of $(S^4,S^2)$, and $\mathcal F^1 = \sqcup_cS^2$. Moreover, $\mathbb T^1$ admits $c-1$ reducing curves that completely separate the polygons. It follows that $\mathbb T^1$ is a connected sum of perturbations of the 1--bridge trisection of $(S^4,S^2)$, as desired. Finally, the bridge trisection $\mathbb T^2$ admits $v-1$ reducing arcs that cut it up into $v$ copies of the genus zero 0--bridge trisection of $(B^4,D^2)$, as desired. \end{proof} Having dispensed with the case when $c_i=b$ for some $i\in\mathbb{Z}_3$, we consider the case when $b=1$ and, in light of the above, $c_i=0$ for all $i\in\mathbb{Z}_3$. Two simple examples of such bridge trisections are given in Figure~\ref{fig:b=1}. \begin{figure}[h!] \begin{subfigure}{.5\textwidth} \centering \includegraphics[height=20mm]{b=11} \caption{} \label{fig:b=11} \setcounter{subfigure}{2} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[height=20mm]{b=13} \caption{} \label{fig:b=13} \setcounter{subfigure}{1} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{.5\textwidth} \centering \includegraphics[height=15mm]{b=12} \caption{} \label{fig:b=12} \setcounter{subfigure}{3} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[height=15mm]{b=14} \caption{} \label{fig:b=14} \end{subfigure} \caption{(A) and (B) The $(1,0;1)$--bridge trisection corresponding to the standard (positive) M\"obius band $(B^4,M^2)$. (C) and (D) The $(1,0;2)$--bridge trisection corresponding to the unknotted disk $(B^4,D^2)$ with (positive) Markov stabilized, unknotted boundary.} \label{fig:b=1} \end{figure} For a more interesting family of examples, consider the $(2,4)$--torus link $T_{2,4}$, which bounds the union of the trivial M\"obius band $M^2$ and the trivial disk $D^2$. (Imagine Figure~\ref{fig:T243} with the three parallel circles replaced with a single circle.) Now, consider the surface $\mathcal F_v$ obtained by replacing the $D^2$ with $v-1$ parallel, trivial disks; Figure~\ref{fig:T243} shows the case of $v=4$. A $(1,0;v)$--bridge trisection $\mathbb T_v$ for $(B^4,\mathcal F_v)$ is shown in Figures~\ref{fig:T241} and~\ref{fig:T242}. Note that when $v=1$, $\mathbb T_v$ corresponds to the trivial (positive) M\"obius band with unknotted boundary and was given diagrammatically in Figures~\ref{fig:b=11} and~\ref{fig:b=12}. \begin{figure}[h!]
\centering \begin{tabular}{cc} \multirow{2}{*}{ \begin{subfigure}{.4\textwidth} \setcounter{subfigure}{0} \centering \includegraphics[width=.5\linewidth]{T243} \caption{} \label{fig:T243} \end{subfigure}% } & \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{T241} \caption{} \label{fig:T241} \end{subfigure}% \\[20mm] & \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.6\linewidth]{T242} \caption{} \label{fig:T242} \end{subfigure}% \end{tabular} \caption{(A) The surface $\mathcal F_v$, shown for $v=4$. (B) and (C) Diagrams for the $(1,0;v)$--bridge trisection $\mathbb T_v$ of $(B^4,\mathcal F_v)$.} \label{fig:T24} \end{figure} One can check using the techniques of Subsection~\ref{subsec:tri-plane_braid} that the bridge trisection $\mathbb T_v$ induces the $v$--braiding of $\partial \mathcal F_v$ given in Artin generators by $$(\sigma_1\sigma_2\cdots\sigma_{v-2}\sigma_{v-1}^2\sigma_{v-2}\cdots\sigma_2\sigma_1)^2.$$ In other words, one strand wraps twice around the other $v-1$ strands; for instance, when $v=4$, this braid word is $(\sigma_1\sigma_2\sigma_3^2\sigma_2\sigma_1)^2$. The link $\partial\mathcal F_v$ can be obtained by taking the $(v-1,0)$--cable of one component of $T_{2,4}$. \begin{proposition} \label{prop:b=1} The bridge trisection $\mathbb T_v$ is the unique (up to mirroring) totally irreducible $(1,0;v)$--bridge trisection. \end{proposition} \begin{proof} Suppose that $\mathbb T$ is a totally irreducible $(1,0;v)$--bridge trisection, and consider a shadow diagram for~$\mathbb T$. Since $b=1$, there is a unique shadow arc of each color. Since $c=0$, the union of any two of these shadow arcs is a connected, embedded, polygonal arc in $\Sigma$. There are two cases: Either the union of the three shadow arcs is a circle, or the union of the three shadow arcs is a Y-shaped tree. Suppose the union is a Y-shaped tree. Let $\eta$ be an arc connecting the tree to $\partial\Sigma$, and let $\omega$ be the arc boundary of the union of $\eta$ and the tree. In other words, $\omega$ is a neatly embedded arc in $\Sigma\setminus\nu(\bold x)$ that separates the tree from the rest of the diagram. If the rest of the diagram is nonempty, then $\omega$ is a reducing arc for the bridge trisection, and we have $\mathbb T = \mathbb T^1\natural\mathbb T^2$, where $\mathbb T^1$ is a $(1,0;2)$--bridge trisection (with Y-shaped shadow diagram) and $\mathbb T^2$ is a $(0,0;v)$--bridge trisection, with $v>0$. This contradicts the assumption that $\mathbb T$ was totally irreducible. If $v=0$ (i.e., the rest of the diagram is empty), then $\mathbb T=\mathbb T^1$ is the Markov perturbation of the genus zero 0--bridge trisection and is shown in Figure~\ref{fig:b=14}, so $\mathbb T$ is not totally irreducible, another contradiction. Now suppose that the union of the three shadow arcs is a circle, and let $D\subset\Sigma$ denote the disk that the union bounds. Suppose there is a bridge point in $\Sigma\setminus D$. Then there is a reducing arc separating the bridge point from $D$, so $\mathbb T$ is boundary-reducible, a contradiction. So, the $v-1$ bridge points that are not adjacent to a shadow arc are contained in $D$. Therefore, the shadow diagram is the one given in Figure~\ref{fig:T242} or, in the case that $v=1$, in Figure~\ref{fig:b=12}. This completes the proof. \end{proof} Having walked through these modest classification results, we now present some families of examples, as well as some questions and conjectures about further classification results. \begin{example} \label{ex:punc} Consider the three $(2,0;1)$--bridge trisections shown in Figure~\ref{fig:punc}, which correspond to the punctured torus and two different Klein bottles.
All three surfaces are isotopic into $S^3$ and are bounded by the unknot. The two Klein bottles decompose as boundary connected sums of M\"obius bands bounded by the unknot in $S^3$. The Klein bottle depicted in Figures~\ref{fig:punc3} and~\ref{fig:punc4} is the boundary connected sum of two positive M\"obius bands; the Klein bottle depicted in Figures~\ref{fig:punc5} and~\ref{fig:punc6} is the boundary connected sum of a positive and a negative M\"obius band. \begin{figure}[h!] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{punc1} \caption{} \label{fig:punc1} \end{subfigure}% \begin{subfigure}{.4\textwidth} \centering \includegraphics[width=.6\linewidth]{punc2} \caption{} \label{fig:punc2} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{punc3} \caption{} \label{fig:punc3} \end{subfigure}% \begin{subfigure}{.4\textwidth} \centering \includegraphics[width=.6\linewidth]{punc4} \caption{} \label{fig:punc4} \end{subfigure} \par\vspace{5mm} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{punc5} \caption{} \label{fig:punc5} \end{subfigure}% \begin{subfigure}{.4\textwidth} \centering \includegraphics[width=.6\linewidth]{punc6} \caption{} \label{fig:punc6} \end{subfigure} \caption{Three $(2,0;1)$--bridge trisections for surfaces bounded by the unknot and isotopic into $S^3$. (A) and (B) describe a punctured torus; (C) and (D) describe the boundary connected sum of two positive M\"obius bands; (E) and (F) describe the boundary connected sum of a positive and a negative M\"obius band.} \label{fig:punc} \end{figure} These three bridge trisections can be obtained by taking the three unique $(3,1)$--bridge trisections of closed surfaces in $S^4$ and puncturing at a bridge point. \end{example} \begin{conjecture} \label{conj:punc} There are exactly three (up to mirroring) totally irreducible $(2,0;1)$--bridge trisections. \end{conjecture} \begin{example} \label{ex:2-braid} Consider the $(2,0;2)$--bridge trisection shown in Figure~\ref{fig:2-braid41}, which corresponds to the annulus in $S^3$ bounded by the $(2,4)$--torus link. Compare with Example~\ref{ex:mono} and Figure~\ref{fig:mono1}. Replacing the three positive half-twists with $n$ half-twists for some $n\in\mathbb{Z}$ gives a surface in $S^3$ bounded by the $(2,n)$--torus link that is a M\"obius band if $n$ is odd and an annulus if $n$ is even. One interesting aspect of the case when $n$ is even relates to the orientation of the boundary link. The boundary link, which is the $(2,n)$--torus link, inherits an orientation as a 2--braid. It also inherits an orientation from the spanning annulus that the bridge trisection describes. These orientations don't agree! In other words, the bridge trisections of the spanning annuli for these links induce a braiding of the links, but this braiding is not coherent with respect to the orientation of the links induced by the annuli. Compare with Example~\ref{ex:T24coherent} below. \end{example} \begin{figure}[h!]
\begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.9\linewidth]{2-braid41} \caption{} \label{fig:2-braid41} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.5\linewidth]{2-braid42} \caption{} \label{fig:2-braid42} \end{subfigure} \caption{Diagrams for a $(2,0;2)$--bridge trisection of the planar surface bounded by the $(2,n)$--torus link in $S^3$; shown is $n=4$.} \label{fig:2-braid4} \end{figure} \begin{conjecture} \label{conj:2-braid} Every $(2,0;2)$--bridge trisection is diffeomorphic to one of those described in Example~\ref{ex:2-braid} and in Figure~\ref{fig:2-braid4}. \end{conjecture} \begin{example} \label{ex:T24coherent} Figure~\ref{fig:3-braid41} gives a $(3,0;3)$--bridge trisection for the annulus in $S^3$ bounded by the $(2,4)$--torus link. In contrast to the bridge trisection for this surface discussed in Example~\ref{ex:2-braid} and illustrated in Figure~\ref{fig:2-braid4}, this bridge trisection induces a coherent 3--braiding of the boundary link. This example could be generalized to give an $(n+1,0;n+1)$--bridge trisection for the annulus bounded by the $(2,n)$--torus link for any even $n\in\mathbb{Z}$. \end{example} \begin{figure}[h!] \begin{subfigure}{.6\textwidth} \centering \includegraphics[width=.9\linewidth]{3-braid41} \caption{} \label{fig:3-braid41} \end{subfigure}% \begin{subfigure}{.4\textwidth} \centering \includegraphics[width=.7\linewidth]{3-braid42} \caption{} \label{fig:3-braid42} \end{subfigure} \caption{Diagrams for a $(3,0;3)$--bridge trisection of the planar surface bounded by the $(2,n)$--torus link in $S^3$; shown is $n=4$.} \label{fig:3-braid4} \end{figure} \section{Proof of Theorem~\ref{thm:general}} \label{sec:gen_proof} We now make use of the general framework outlined in Section~\ref{sec:general} to give a proof of Theorem~\ref{thm:general}, which we restate for convenience. We adopt the notation and conventions of Definition~\ref{def:Trisection}. \begin{theorem} \label{thm:general} Let $\mathbb T$ be a trisection of a four-manifold $X$ with $\partial X = Y$, and let $(B,\pi)$ denote the open-book decomposition of $Y$ induced by $\mathbb T$. Let $\mathcal F$ be a neatly embedded surface in $X$; let $\mathcal L = \partial \mathcal F$; and fix a braiding $\widehat\beta$ of $\mathcal L$ about $(B,\pi)$. Then, $\mathcal F$ can be isotoped to be in bridge trisected position with respect to $\mathbb T$ such that $\partial \mathcal F = \widehat\beta$. If $\mathcal L$ already coincides with the braiding $\widehat\beta$, then this isotopy can be assumed to restrict to the identity on $Y$. \end{theorem} Note that if $X$ is closed, then Theorem~\ref{thm:general} is equivalent to Theorem~1 of~\cite{MeiZup_18_Bridge-trisections-of-knotted}. For this reason, we assume henceforth that $Y=\partial X\not=\emptyset$. We will prove Theorem~\ref{thm:general} using a sequence of lemmata. Throughout, we will disregard orientations. All isotopies are assumed to be smooth and ambient. First, we describe the existence of a Morse function $\Phi_\mathbb T$ on (most of) $X$ that is well-adapted to the trisection $\mathbb T$. We will want to think of $X$ as a lensed cobordism from $Y_1$ to $Y_2\cup_{P_3}Y_3$.
\begin{lemma}\label{lem:trisection_to_Morse} There is a self-indexing Morse function $$\Phi_\mathbb T\colon X\setminus\nu(P_1\cup_B P_2\cup_BP_3)\to [0,4]$$ such that \begin{enumerate} \item $\Phi_\mathbb T$ has no critical points of index zero or four; \item $Y_1\setminus\nu(P_1\cup_B \overline P_2) = \Phi_\mathbb T^{-1}(0)$; \item $(H_1\cup_\Sigma \overline H_2)\setminus\nu(P_1\cup_B \overline P_2) = \Phi_\mathbb T^{-1}(1.5)$; \item $\Phi_\mathbb T(H_3\setminus\nu(P_3))\subset[1.5,2.5)$; \item $Y_3\setminus\nu(P_3\cup_B \overline P_1) = \Phi_\mathbb T^{-1}(4)$; and \item The index $j$ critical points of $\Phi_\mathbb T$ are contained in $\text{Int}(Z_j)$. \end{enumerate} \end{lemma} Note that if $\Phi_\mathbb T(x)\geq 2.5$, then $x\in Z_3$. \begin{proof} The existence of the Morse function and property (1) are standard consequences of the cobordism structure. The other properties are easy and commonly discussed within the theory of trisections; see~\cite{GayKir_16_Trisecting-4-manifolds}, for example. The set-up is made evident by the schematics of Figure~\ref{fig:Morse}. \end{proof} \begin{figure}[h!] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.6\linewidth]{tri_schem} \caption{} \label{fig:Morse1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\linewidth]{Morse_schem} \caption{} \label{fig:Morse2} \end{subfigure} \caption{Passing from a trisection to a natural Morse function on $X\setminus\nu(P_1\cup P_2\cup P_3)$.} \label{fig:Morse} \end{figure} Now, $Z_1$ is the result of attaching four-dimensional 1--handles to the lensed product $Y_1\times I$. The core $\Sigma$ can be assumed to satisfy $\Phi_\mathbb T(\Sigma\setminus\nu(B)) = 1.5$, and, together with $P_1$ and $P_2$, it gives a standard Heegaard double decomposition of $\partial Z_1$. The attaching circles of the four-dimensional 2--handles are assumed to be contained in a (1--complex) spine of the compressionbody $H_2$, with the result of Dehn surgery thereupon being $H_3$. The trace of this 2--handle attachment is $Z_2$, and $Z_3$ is the (lensed cobordism) trace of attaching four-dimensional 3--handles to $H_3\cup_\Sigma \overline H_1$, the result of which is $Y_3$. (Note that $Z_2$ is not quite a lensed cobordism from this perspective, since $Y_2$ is a vertical portion of its boundary $\partial Z_2 = H_2\cup Y_2\cup \overline H_3$.) For the remainder of the section, we let $\Phi = \Phi_\mathbb T$. Let $\Phi_i = \Phi\vert_{Z_i}$ for $i=1,2,3$. Recall the standard Morse function on $Z_i\cong Z_{g,k_i,\bold p, \bold f}$ that was discussed in Subsection~\ref{subsec:DiskTangles}. By the above discussion, we have the following consequence of Lemma~\ref{lem:trisection_to_Morse}. \begin{corollary}\label{coro:std_Morse} If $i=1$ or $i=3$, then $\Phi_i$ is a standard Morse function on $Z_i\cong Z_{g,k_i,\bold p, \bold f}$. \end{corollary} Presently, we will begin to isotope $\mathcal F$ to lie in bridge trisected position with respect to $\mathbb T$. \begin{lemma}\label{lem:transversifying} After an isotopy of $\mathcal F$ that is supported near $\partial X$, we can assume that $\mathcal L=\widehat\beta$. \end{lemma} \begin{proof} By the Alexander Theorem~\cite{Ale_20_Note-on-Riemann-spaces} or the generalization due to Rudolph~\cite{Rud_83_Constructions-of-quasipositive-knots}, $\mathcal L$ can be braided with respect to the open-book decomposition $(B,\pi)$. 
By the Markov Theorem~\cite{Mar_35_Uber-die-freie-Aquivalenz} or its generalization to closed 3--manifolds~\cite{Sko_92_Closed-braids-in-3-manifolds,Sun_93_The-Alexander-and-Markov-theorems}, any two braidings of $\mathcal L$ with respect to $(B,\pi)$ are isotopic. Thus, by an isotopy of $\mathcal F$ that is supported near $Y$, we can assume that $\mathcal L$ is given by the braiding $\widehat\beta$. \end{proof} Any modifications made to $\mathcal F$ henceforth will be isotopies that restrict to the identity on $Y$. Let $\Phi_\mathcal F$ denote the restriction of $\Phi$ to $\mathcal F$. (Note that by choosing a small enough collar $\nu(Y)$ in $X$, we can assume that $\mathcal F\cap\nu(Y) = \mathcal L\times I$. By a small isotopy of $\mathcal F$ rel-$\partial$, we can assume that $\Phi_\mathcal F$ is Morse.) \begin{lemma}\label{lem:Morse_Ff} After an isotopy of $\mathcal F$ rel-$\partial$, we can assume that $\Phi_\mathcal F\colon \mathcal F\to \mathbb{R}$ is Morse and that \begin{enumerate} \item the minima of $\Phi_\mathcal F$ occur in $Z_1$, \item the saddles of $\Phi_\mathcal F$ occur in $\Phi^{-1}(1.5)$, and \item the maxima of $\Phi_\mathcal F$ occur in $Z_3$. \end{enumerate} \end{lemma} \begin{proof} That the critical points can be rearranged as desired follows from an analysis of their various ascending and descending manifolds. A detailed analysis of this facet of (embedded) Morse theory can be found in~\cite{Bor}. Here, we simply make note of the key points. The ascending (unstable) membrane of a maximum of $\Phi_\mathcal F$ is one-dimensional; think of a vertical arc emanating from the maximum and terminating in $Y_3$. (Vertical means the intersection with each level set is either a point or empty.) Generically, such an arc will be disjoint from $\mathcal F$ and will be disjoint from the descending spheres of the critical points of $\Phi$ (which have index one, two, or three) in each level set. Thus, the gradient flow of $\Phi$ can be used to push the maxima up (and the minima down), and we obtain that the minima lie below $\Phi^{-1}(1.5)$ (i.e., in $Z_1$) and that the maxima lie above $\Phi^{-1}(2.5)$ (i.e., in $Z_3$). Having arranged the extrema in this way, we move on to consider the saddles. The ascending membranes of the saddles of $\Phi_\mathcal F$ are two-dimensional, while the descending spheres of the index one critical points of $\Phi$ are zero-dimensional. Thus, we can flow the saddles up past the index one critical points of $\Phi$, until they lie in $\Phi^{-1}(1.5)$. Symmetrically, we can flow saddles down past the index three critical points of $\Phi$ to the same result. \end{proof} Let $\mathcal D_i = \mathcal F\cap Z_i$ for $i=1,2,3$. Assume that $\widehat\beta$ is a braiding of $\mathcal L$ of multi-index $\bold v$. \begin{lemma}\label{lem:Dd_1/3} If $\Phi_\mathcal F$ has $c_1$ minima and $c_3$ maxima, then $\mathcal D_1$ is a $(c_1,\bold v)$--disk-tangle, and $\mathcal D_3$ is a $(c_3,\bold v)$--disk-tangle. \end{lemma} \begin{proof} By Corollary~\ref{coro:std_Morse}, $\Phi_1$ is a standard Morse function on $Z_1$. By Lemma~\ref{lem:one_min}, since $(\Phi_1)\vert_{\mathcal D_1}$ has $c_1$ minima and no other critical points, and since $\mathcal F\cap Y_1 = \widehat\beta\cap Y_1$ is a $\bold v$--thread, this implies that $\mathcal D_1$ is a $(c_1,\bold v)$--disk-tangle. The corresponding result holds for $\mathcal D_3$, after turning $\Phi_3$ and $(Z_3,\mathcal D_3)$ upside down.
\end{proof} Next, we see that the trisection $\mathbb T$ can be isotoped to ensure the intersections $\mathcal T_i = \mathcal F\cap H_i$ are trivial tangles for $i=1,2,3$. \begin{lemma}\label{lem:Tt_i} After an isotopy of $\mathbb T$, we can assume that each $\mathcal T_i$ is a $(b,\bold v)$--tangle, for some $b\geq 0$. \end{lemma} \begin{proof} The level set $\Phi^{-1}(1.5)$ is simply $M = (H_1\cup_\Sigma \overline H_2)\setminus\nu(\overline P_1\cup_B P_2)$. The intersection $\mathcal F\cap\Phi^{-1}(1.5)$ is a 2--complex $L\cup\frak b$, where $L$ is a neatly embedded one-manifold, and $\frak b$ is a collection of bands. Here, we are employing the standard trick of flattening $\mathcal F$ near each of the saddle points of $\Phi_\mathcal F$. (See Subsection~\ref{subsec:band_pres} for a precise definition of a band.) We have a Heegaard splitting $(\Sigma;H_1,H_2)$ that induces a Morse function $\Psi\colon\Phi^{-1}(1.5)\to\mathbb{R}$. In what follows, we will perturb this splitting (i.e., homotope this Morse function) to improve the arrangement of the 2--complex $L\cup\frak b$. First, we perturb $\Sigma$ so that it becomes a bridge surface for $L$. At this point, we have arranged that $\mathcal T_1$ and $\mathcal T_2$ are $(b',\bold v)$--tangles, for some value $b'$ that will likely be increased by what follows. Next, we can perturb $\Sigma$ until the bands $\frak b$ can be isotoped along the gradient flow of $\Psi$ so that their cores lie in $\Sigma$. We can further perturb $\Sigma$ until $\frak b\cap\Sigma$ consists solely of the cores of $\frak b$, which are embedded in $\Sigma$; said differently, the bands of $\frak b$ are determined by their cores in $\Sigma$, together with the surface-framing given by the normal direction to $\Sigma$ in $\Phi^{-1}(1.5)$. Then, we can further perturb $\Sigma$ until each band is dualized by a bridge semi-disk for $\mathcal T_2$. The details behind this approach were given in the proof of Theorem~1.3 (using Figures~10-12) of~\cite{MeiZup_17_Bridge-trisections-of-knotted} and discussed in~\cite{MeiZup_18_Bridge-trisections-of-knotted}. Finally, we isotope $\Sigma$ so that $\frak b$ is contained in $H_2$; in other words, we push the bands slightly into $H_2$ so as to be disjoint from $\Sigma$. Since each band of $\frak b$ is dualized by a bridge semi-disk for $\mathcal T_2$, the result $\mathcal T_3 = (\mathcal T_2)_\frak b$ of resolving $\mathcal T_2$ using the bands of $\frak b$ is a new trivial tangle. The proof of this claim is explained in detail in Lemma~3.1 and Figure~8 of~\cite{MeiZup_17_Bridge-trisections-of-knotted}. (Though it is not necessary, we can even perturb $\Sigma$ so that each band of $\frak b$ is dualized by a bridge disk at \emph{both} of its endpoints, as in the aforementioned Figure~8.) Note that all of the perturbations of $\Sigma$ were supported away from $\nu(P_1\cup_B P_2)$, so each of the $\mathcal T_i$ contained precisely $\bold v$ vertical strands throughout. In the end, each is a $(b,\bold v)$--tangle for some $b\geq0$. \end{proof} Finally, we verify that $\mathcal D_2$ is a trivial disk-tangle in $Z_2$. \begin{lemma}\label{lem:Dd_2} If $c_2 = b - |\frak b|$, then $\mathcal D_2$ is a $(c_2,\bold v)$--disk-tangle. \end{lemma} \begin{proof} As in the preceding lemma, this follows exactly along the lines of Lemma~3.1 of~\cite{MeiZup_17_Bridge-trisections-of-knotted}, with only slight modification to account for the vertical strands.
This is particularly easy to see if one assumes that each band of $\frak b$ meets dualizing disks at each of its endpoints, as in the aforementioned Figure~8. \end{proof} Thus, we arrive at a proof of Theorem~\ref{thm:general}. \begin{proof}[Proof of Theorem~\ref{thm:general}] After performing the isotopies of $\mathcal F$ and $\mathbb T$ outlined in the lemmata above, we have arranged that, for $i=1,2,3$, the intersection $\mathcal D_i = \mathcal F\cap Z_i$ is a $(c_i,\bold v)$--disk-tangle in $Z_i$ (Lemmata~\ref{lem:Dd_1/3} and~\ref{lem:Dd_2}) and the intersection $\mathcal T_i = \mathcal F\cap H_i$ is a $(b,\bold v)$--tangle (Lemma~\ref{lem:Tt_i}). Thus, $\mathcal F$ is in $(b,\bold c;\bold v)$--bridge trisected position with respect to $\mathbb T$, where $\bold c = (c_1, c_2, c_3)$, and the ordered partition $\bold v$ comes from the multi-index $\bold v$ of the braiding $\widehat\beta$ of $\mathcal L = \partial \mathcal F$. \end{proof} \section{Stabilization operations} \label{sec:stabilize} In this section we describe various stabilization and perturbation operations that can be used to relate two bridge trisections of a fixed four-manifold pair. We encourage the reader to refer back to the discussion of connected sums and boundary connected sums of bridge trisections presented in Section~\ref{sec:class}. \subsection{Stabilization of four-manifold trisections} \label{subsec:stabilization} \ First, we'll recall the original stabilization operation of Gay and Kirby~\cite{GayKir_16_Trisecting-4-manifolds}, as developed in~\cite{MeiSchZup_16_Classification-of-trisections-and-the-Generalized}. \begin{definition}[\textbf{\emph{core stabilization}}] Let $\mathbb T$ be a $(g,\bold k;\bold p, \bold f)$--trisection for a four-manifold $X$, and let $\omega$ be an arc in $\text{Int}(\Sigma)$. Fix an $i\in\mathbb{Z}_3$. Perturb the interior of $\omega$ into $H_{i+1} = Z_i\cap Z_{i+1}$, and let $\Sigma'$ denote the surface obtained by surgering $\Sigma$ along $\omega$. Then, $\Sigma'$ is the core of a $(g+1,\bold k';\bold p, \bold f)$--trisection $\mathbb T'$ for $X$, where $\bold k' = \bold k$, except that $k'_i = k_i+1$, which is called the \emph{core $i$--stabilization} of $\mathbb T$. \end{definition} The importance of this operation rests in the following result of Gay and Kirby. \begin{theorem}[\cite{GayKir_16_Trisecting-4-manifolds}] \label{thm:GK_unique} Suppose that $\mathbb T$ and $\mathbb T'$ are two trisections of a fixed four-manifold $X$, and assume that either $\partial X = \emptyset$ or $\mathbb T$ and $\mathbb T'$ induce isotopic open-book decompositions on each connected component of $\partial X$. Then, $\mathbb T$ and $\mathbb T'$ become isotopic after they are each core stabilized some number of times. \end{theorem} Performing a core $i$--stabilization is equivalent to forming the (interior) connected sum with a simple trisection of $S^4$. Let $\mathbb T_i$ denote the genus one trisection of $S^4$ with $k_i=1$. See~\cite{MeiSchZup_16_Classification-of-trisections-and-the-Generalized} for details. \begin{proposition}\label{prop:core_stab} $\mathbb T'$ is a core $i$--stabilization of $\mathbb T$ if and only if $\mathbb T' = \mathbb T\#\mathbb T_i$. \end{proposition} Next, we recall the stabilization operation for trisections that corresponds to altering the induced open-book decomposition on the boundary by the plumbing of a Hopf band.
Let $\mathbb T_{\text{Hopf}}^+$ (respectively, $\mathbb T_{\text{Hopf}}^-$) denote the genus one trisection of $B^4$ that induces the open-book decomposition on $S^3$ with binding the positive (respectively, negative) Hopf link. \begin{definition}[\textbf{\emph{Hopf stabilization}}]\label{def:Hopf} Let $\mathbb T$ be a $(g,\bold k;\bold p,\bold f)$--trisection for a four-manifold $X$. Let $\omega\subset (\Sigma\setminus\alpha_i)$ be a neatly embedded arc, which we consider in $P_i$; let $j$ denote the index of the page component $P_i^j$ containing $\omega$. Let $\mathbb T^\pm$ denote the trisection obtained by plumbing $\mathbb T$ to $\mathbb T_\text{Hopf}^\pm$ along the projection of $\omega$, as in Figure~\ref{fig:plumbing}. We call $\mathbb T^\pm$ the \emph{positive/negative Hopf $(i,j)$--stabilization} of $\mathbb T$ along $\omega$. \end{definition} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{plumbing} \caption{The positive Hopf stabilization $\mathbb T^+$ of a trisection $\mathbb T$ along an arc $\omega$ in the core of $\mathbb T$.} \label{fig:plumbing} \end{figure} By a \emph{plumbing} of trisections, we mean a plumbing of pages along the projection of arcs to the pages. Diagrammatically, this is represented by plumbing the relative trisection diagrams along the corresponding arcs in the core surface, as in Figure~\ref{fig:plumbing}. This induces boundary connected sums at the level of the three-dimensional and four-dimensional pieces of the trisections and plumbing at the level of the core surfaces and pages. Hopf stabilization was first studied in the setting of trisections by Castro~\cite{Cas_16_Relative-trisections-of-smooth} and Castro--Gay--Pinz\'on-Caicedo~\cite{CasGayPin_18_Diagrams-for-relative-trisections}. We rephrase their main result in the more general setting of the present article. \begin{proposition}[\cite{CasGayPin_18_Diagrams-for-relative-trisections}, Corollary 17] Let $\mathbb T$ be a $(g,\bold k;\bold p,\bold f)$--trisection for a four-manifold $X$ inducing an open-book decomposition $(B,\pi)$ on $\partial X$. Then a positive (resp., negative) Hopf stabilization $\mathbb T^\pm$ is a $(g+1,\bold k;\bold p',\bold f')$--trisection of $X$ inducing a positive (resp., negative) Hopf stabilization of $(B,\pi)$, where $\bold f'$ is obtained from $\bold f$ by either increasing or decreasing the value of $f_j$ by one, and $\bold p'$ is obtained from $\bold p$ by either decreasing or increasing the value of $p_j$ by one, according, in each case, to whether or not $\omega$ spans distinct boundary components of $P_i^j$. \end{proposition} The upshot of this proposition is that, to the extent that open-book decompositions of three-manifolds are related by Hopf stabilization and destabilization, any two trisections of a compact four-manifold can be related by a sequence of Hopf stabilizations and core stabilizations. Giroux and Goodman proved that two open-book decompositions on a fixed three-manifold have a common Hopf stabilization if and only if the associated plane fields are homotopic~\cite{GirGoo_06_On-the-stable-equivalence-of-open}, answering a question of Harer~\cite{Har_82_How-to-construct-all-fibered-knots}. From this, together with Theorem~\ref{thm:GK_unique}, we can state the following. \begin{corollary} \label{coro:rel_tri_unique} Suppose that $\mathbb T$ and $\mathbb T'$ are two trisections of a fixed four-manifold $X$.
Assume that $\partial X \not= \emptyset$ and that for each component of $\partial X$, the open-book decompositions induced by $\mathbb T$ and $\mathbb T'$ have associated plane fields that are homotopic. Then, $\mathbb T$ and $\mathbb T'$ become isotopic after they are each core stabilized and Hopf stabilized some number of times. \end{corollary} Recently, Piergallini and Zuddas showed there is a complete set of moves that suffice to relate any two open-book decompositions on a given three-manifold~\cite{PieZud_18_Special-moves-for-open}. By giving trisection-theoretic versions of each move, Castro, Islambouli, Miller, and Tomova were able to prove a strengthened uniqueness theorem for trisected manifolds with boundary~\cite{CasIslMil_19_The-relative-L-invariant-of-a-compact}. \subsection{Interior perturbation of bridge trisections} \label{subsec:interior_perturbation} \ Having overviewed stabilization operations for four-manifold trisections, we now discuss the analogous operations for bridge trisections. To avoid confusion, we will refer to these analogous operations as \emph{perturbation operations}; they will generally correspond to perturbing the bridge trisected surface relative to the core surface. Throughout, the obvious inverse operation for a perturbation will be referred to as a \emph{deperturbation}. We begin by recalling the perturbation operation for bridge trisections first introduced in~\cite{MeiZup_17_Bridge-trisections-of-knotted} and invoked in~\cite{MeiZup_18_Bridge-trisections-of-knotted}. This perturbation operation requires the existence of a flat disk in $\mathcal D_i$. To distinguish this operation from the subsequent one, we append the adjective ``Whitney''. \begin{definition}[\textbf{\emph{Whitney perturbation}}] Let $\mathcal F$ be a neatly embedded surface in a four-manifold $X$ such that $\mathcal F$ is in $(b,\bold c;\bold v)$--bridge trisected position with respect to a trisection $\mathbb T$ of $X$. Let $D\subset\mathcal D_i$ be a flat disk, and let $D_*\subset Y_i$ be a disk that has no critical points with respect to the standard Morse function on $Y_i$ and that is isotopic rel-$\partial$ to $D$, via a three-ball $B$. Let $\Delta$ be a neatly embedded disk in $B$ that intersects $D_*$ in a vertical strand. Let $\mathcal F'$ denote the surface obtained by isotoping $\mathcal F$ via a Whitney move across $\Delta$. Then, $\mathcal F'$ is in $(b+1,\bold c';\bold v)$--bridge trisected position with respect to $\mathbb T$, where $\bold c'=\bold c$, except that $c_i'=c_i+1$. This Whitney move is called a \emph{Whitney $i$--perturbation}. \end{definition} See Figures~14 and~23 of~\cite{MeiZup_18_Bridge-trisections-of-knotted} for a visualization of a Whitney perturbation. The usefulness of Whitney perturbations is made clear by the following result, which was proved in~\cite{MeiZup_17_Bridge-trisections-of-knotted} in the case that $\mathbb T$ has genus zero (so $X=S^4$) and in~\cite{HugKimMil_18_Isotopies-of-surfaces-in-4-manifolds} in the general case. \begin{theorem}[\cite{MeiZup_17_Bridge-trisections-of-knotted,HugKimMil_18_Isotopies-of-surfaces-in-4-manifolds}] \label{thm:HKM} Fix a four-manifold $X$ and a trisection $\mathbb T$ of $X$. Let $\mathcal F,\mathcal F'\subset X$ be isotopic closed surfaces, and suppose $\mathbb T_\mathcal F$ and $\mathbb T_{\mathcal F'}$ are bridge trisections of $\mathcal F$ and $\mathcal F'$ induced by~$\mathbb T$.
Then, there is a sequence of interior (Whitney) perturbations and deperturbations relating $\mathbb T_\mathcal F$ and $\mathbb T_{\mathcal F'}$. \end{theorem} Even without the presence of a flat disk, there is still a perturbation operation available. Despite being called a ``finger'' perturbation, the following perturbation is not an inverse to the Whitney perturbation. The adjectives ``Whitney'' and ``finger'' are simply descriptive of how the surface is isotoped relative to the core to achieve the perturbation. However, it is true that the inverse to a Whitney perturbation (or a finger perturbation) is a finger deperturbation. \begin{definition}[\textbf{\emph{finger perturbation}}] Let $\mathcal F$ be a neatly embedded surface in a four-manifold $X$ such that $\mathcal F$ is in $(b,\bold c;\bold v)$--bridge trisected position with respect to a trisection $\mathbb T$ of $X$. Fix a bridge point $x\in\bold x$, and let $N$ be a small neighborhood of $x$, so $N\cap\mathcal F$ is a small disk. Let $\omega\subset \partial N$ be a trivial arc connecting $\mathcal T_i$ to $\Sigma$. Perform a finger-move of $\mathcal F$ along $\omega$, isotoping a small bit of $\mathcal F$ toward and through $\Sigma$, as in Figure~\ref{fig:finger}. Let $\mathcal F'$ denote the resulting surface. Then, $\mathcal F'$ is in $(b+1,\bold c';\bold v)$--bridge position with respect to $\mathbb T$, where $\bold c'=\bold c$, except that $c_i'=c_i+1$. This finger move is called a \emph{finger $i$--perturbation}. \end{definition} \begin{figure}[h!] \centering \includegraphics[width=.8\textwidth]{finger} \caption{A local picture corresponding to a finger 1--perturbation.} \label{fig:finger} \end{figure} Note that the disk of the disk-tangle $\mathcal D_i$ containing the bridge point $x$ is neither required to be flat nor vertical in the definition of a finger perturbation. However, if this disk \emph{is} flat, then the operation is the simplest form of a Whitney perturbation, corresponding to the case where the vertical strand in $D_*$ is boundary parallel through vertical strands. The simplicity of the finger perturbation operation is expressed by the following proposition. Let $\mathbb T_{S^2}^i$ denote the 2--bridge trisection of the unknotted two-sphere satisfying $c_i = 2$. \begin{proposition} If the bridge trisection $\mathbb T_\mathcal F'$ is obtained from the bridge trisection $\mathbb T_\mathcal F$ via a finger $i$--perturbation, then $\mathbb T_\mathcal F' = \mathbb T_\mathcal F\#\mathbb T_{S^2}^i$. \end{proposition} The proof is an immediate consequence of how bridge trisections behave under connected sum. Note that a Whitney perturbation corresponds to a connected sum as in the proposition if and only if it is a finger perturbation; in general, a Whitney perturbation cannot be described as the result of a connected sum of bridge trisections. For example, the unknotted two-sphere admits a $(4,2)$--bridge trisection that is not a connected sum of (nontrivial) bridge trisections, even though it is (Whitney) perturbed. \subsection{Markov perturbation of bridge trisections} \label{subsec:Markov_perturbation} \ Let $\mathbb T_{D^2}$ denote the 0--bridge trisection of the unknotted disk $D^2$ in $B^4$. \begin{definition}[\textbf{\emph{Markov perturbation}}] Let $\mathbb T'$ be a $(b,\bold c;\bold v)$--bridge trisection of a neatly embedded surface $(X',\mathcal F')$, and let $\mathbb T''$ be the 0--bridge trisection of $(B^4, D^2)$.
Choose points $y^\varepsilon\in\mathcal T_i^\varepsilon\cap P_i^\varepsilon$ for $\varepsilon\in\{',''\}$. Let $(X,\mathcal F)=(X',\mathcal F')\natural (B^4, D^2)$, and let $\mathbb T = \mathbb T'\natural\mathbb T''$. Then $\mathbb T$ is a $(b+1,\bold c;\bold v')$--bridge trisection of $(X,\mathcal F) = (X',\mathcal F')$, where $\bold v = \bold v'$, except that $v^j = (v^j)'+1$, where $y^1\in \mathcal L^j$. The bridge trisection $\mathbb T'$ is called the \emph{Markov $i$--perturbation} of $\mathbb T$. \end{definition} \begin{figure}[h!] \centering \includegraphics[width=.6\textwidth]{markov_shadow} \caption{Shadow diagrams depicting the local process of Markov 3--perturbation.} \label{fig:markov_shadow} \end{figure} In justification of this definition: That $\mathbb T'$ is a new bridge trisection follows from Proposition~\ref{prop:bcs}. That $\mathcal F'$ is isotopic to $\mathcal F$ follows from the fact that we are forming the boundary connected sum with a trivial disk. That $\mathcal L'$ is obtained from $\mathcal L$ via a Markov perturbation follows from our understanding of a Markov perturbation as the trivial connected sum of a braided link with a meridian of a component of the binding -- see Subsection~\ref{subsec:OBD}. Note that the left-most blue and green arcs are shown in light blue and light green to indicate that they might correspond to flat or vertical strands. The pink arcs correspond to vertical strands. The importance of this operation is due to the Generalized Markov Theorem, which states that any two braidings of a given link with respect to a fixed open-book decomposition can be related by an isotopy that preserves the braided structure, except at finitely many points in time at which the braiding is changed by a Markov stabilization or destabilization~\cite{Mar_35_Uber-die-freie-Aquivalenz,Sko_92_Closed-braids-in-3-manifolds,Sun_93_The-Alexander-and-Markov-theorems}. See Subsection~\ref{subsec:OBD}. Taken together, the stabilization and perturbation moves described in this section should suffice to relate any two bridge trisections of a fixed four-manifold pair. \begin{conjecture} Let $\mathbb T_1$ and $\mathbb T_2$ be bridge trisections of a given surface $(X,\mathcal F)$ that are diffeomorphic as trisections of $X$. Then, there are diffeomorphic bridge trisections $\mathbb T_1'$ and $\mathbb T_2'$ such that $\mathbb T_\varepsilon'$ is obtained from $\mathbb T_\varepsilon$ via a sequence of moves, each of which is of one of the following types: \begin{enumerate} \item core stabilization \item Hopf stabilization \item relative double twist \item interior perturbation/deperturbation \item Markov perturbation/deperturbation \end{enumerate} \end{conjecture} To prove this conjecture, it should suffice to carefully adapt the techniques of~\cite{HugKimMil_18_Isotopies-of-surfaces-in-4-manifolds} from the setting of isotopy of closed four-manifold pairs equipped with Morse functions to the setting of isotopy rel-$\partial$ of four-manifold pairs with boundary. The following is a diagrammatic analog to this conjecture. \begin{conjecture} Suppose that $\mathfrak D^1$ and $\mathfrak D^2$ are shadow diagrams for a fixed surface-link $(X,\mathcal F)$.
Then, $\mathfrak D^1$ and $\mathfrak D^2$ can be related by a finite sequence of moves, each of which is of one of the following types: \begin{enumerate} \item core stabilization/destabilization \item Hopf stabilization/destabilization \item relative double twist \item interior perturbation/deperturbation \item Markov perturbation/deperturbation \item arc and curve slides \item isotopy rel-$\partial$ \end{enumerate} \end{conjecture} \bibliographystyle{amsalpha}
\section{Introduction and results} The evolution of the Quark-Gluon Plasma (QGP) produced in heavy ion collisions is fairly well described, after a short period of thermalization and before hadronization, by relativistic hydrodynamics, consistently with the strongly coupled regime of such system \cite{arsene}. Numerical simulations of the hydrodynamic evolution of the QGP require as input the value of the transport coefficients. While recent simulations indicate that the evolution of the QGP should be quite insensitive to most of the second order coefficients (see for example \cite{Luzum:2008cw,Rajagopal:2009yw,Song:2009rh}), it is definitely sensitive to the value of the shear viscosity \cite{lista}, and can be influenced in a sizable way by the bulk viscosity and possibly the relaxation times \cite{Rajagopal:2009yw,Song:2009rh,Denicol:2010tr}. Moreover, a complete characterization of the Quark-Gluon Plasma of QCD up to second order still requires the knowledge of the whole set of coefficients. There are currently no first-principle reliable calculations of almost all the second order coefficients for QCD at strong coupling: lattice results give some estimates of the viscosities and the shear relaxation time \cite{Meyer:2007dy}, but they are affected by considerable uncertainties (see for example \cite{Moore:2008ws}). In fact, actual simulations, lacking solid data for QCD, make often use, as benchmark values of the transport coefficients, of the ones derived from the gravitational dual of ${\cal N}=4$ SYM \cite{Baier:2007ix} (in some cases together with the bound on the bulk viscosity proposed in \cite{buchelbound} and a relation for the relaxation times from \cite{Romatschke:2009kr}). While the ${\cal N}=4$ SYM values for the ``shear'' coefficients are expected to be in the right ballpark for QCD, they still concern an exactly conformal theory, and in particular the bulk viscosity and many of the second order coefficients are not determined. In order to improve this situation, the first step is to break conformal invariance. Since QCD is approximately conformal in the temperature window $1.5 T_c \lesssim T \lesssim 4 T_c$, the conformality breaking effects can be treated perturbatively. In this situation, probably the simplest way of modeling QCD holographically is by a theory where conformality is slightly broken by a marginally relevant operator. The aim of this note is to point out that, in such a scenario, all the transport coefficients up to second order for the uncharged plasma are given in terms of a single parameter (weighting the conformality breaking) by making use of the results in \cite{Kanitscheider:2009as,Romatschke:2009kr}: they are collected in Table \ref{relations}. In particular, the behavior of the shear and bulk relaxation times is briefly discussed in section \ref{secresults}. There are surely more precise ways of modeling holographically QCD (none of which is of course completely correct). Nevertheless, the model considered in this note has the considerable advantage of full calculability, providing one of the few examples in which all the second order transport coefficients are determined. Moreover, the results in Table \ref{relations} hold for any theory with gravity dual, where conformality is broken at leading order by a marginally (ir)relevant operator (dual to a scalar with the simplest possible potential (\ref{potential})), including the cascading plasma \cite{Gubser:2001ri} and the D3D7 plasmas \cite{Bigazzi:2009bk}. 
\subsection{Notation} Uncharged relativistic hydrodynamics is determined, up to second order in the derivative expansion, by seventeen transport coefficients, fifteen of which are possibly independent \cite{Baier:2007ix}, \cite{Romatschke:2009kr}. On a general space with metric $g_{\mu\nu}$, the energy momentum tensor \begin{equation}\label{tmunu} T^{\mu\nu}=\varepsilon u^\mu u^\nu + p \Delta^{\mu\nu} + \pi^{\mu\nu} + \Delta^{\mu\nu}\Pi \,, \qquad {\rm where}\qquad \Delta^{\mu\nu}=g^{\mu\nu}+u^\mu u^\nu\,, \end{equation} is determined by the energy density $\varepsilon$, fluid velocity $u^\mu$ ($u^\mu u_\mu=-1$), the transport coefficients in its ``viscous shear'' part: \begin{eqnarray}\label{shear} \pi^{\mu\nu}&=&-\eta \sigma^{\mu\nu} +\eta \tau_\pi \Bigl[\langle D \sigma^{\mu\nu}\rangle + \frac{\nabla \cdot u}{3}\sigma^{\mu\nu} \Bigr] + \kappa \Bigl[ R^{<\mu\nu>}-2 u_\alpha u_\beta R^{\alpha <\mu\nu> \beta} \Bigr] + \lambda_1 \sigma^{<\mu}_{\lambda} \sigma^{\nu>\lambda} \nonumber \\ &&+ \lambda_2 \sigma^{<\mu}_{\lambda} \Omega^{\nu>\lambda} + \lambda_3 \Omega^{<\mu}_{\quad \lambda} \Omega^{\nu>\lambda} + \kappa^* 2 u_\alpha u_\beta R^{\alpha <\mu\nu> \beta} \nonumber \\ && + \eta \tau_\pi^* \frac{\nabla \cdot u}{3}\sigma^{\mu\nu} + \lambda_4 \nabla^{<\mu} \log{s} \nabla^{\nu >} \log{s} \end{eqnarray} and in its ``viscous bulk'' part: \begin{eqnarray}\label{bulk} \Pi &=&-\zeta (\nabla \cdot u) + \zeta \tau_\Pi D (\nabla \cdot u) + \xi_1 \sigma^{\mu\nu}\sigma_{\mu\nu}+ \xi_2 (\nabla \cdot u)^2 + \xi_3 \Omega^{\mu\nu}\Omega_{\mu\nu} + \xi_4 \nabla_{\mu}^{\perp} \log{s} \nabla^{\mu}_{\perp} \log{s} \nonumber \\ && + \xi_5 R + \xi_6 u^\alpha u^\beta R_{\alpha \beta}\,, \end{eqnarray} while the pressure is given by the equation of state $p(\varepsilon)$. The various structures in (\ref{shear}) and (\ref{bulk}), apart from the obvious Riemann and Ricci tensors and scalar curvature ($R^{\mu\nu\rho\sigma}, R^{\mu\nu}, R$), are given by: \begin{eqnarray} D &\equiv & u^\mu\nabla_\mu\,, \qquad \nabla^{\mu}_{\perp} \equiv \Delta^{\mu\nu} \nabla_{\nu}\,, \qquad \sigma^{\mu\nu}\equiv \nabla^{\mu}_{\perp} u^\nu + \nabla^{\nu}_{\perp} u^\mu -\frac23 \Delta^{\mu\nu}(\nabla \cdot u)\,, \nonumber\\ \Omega^{\mu\nu} &\equiv & \frac12 (\nabla^{\mu}_{\perp} u^\nu - \nabla^{\nu}_{\perp} u^\mu)\,, \qquad \end{eqnarray} and for a generic tensor $A^{\mu\nu}$ it was used the notation: \begin{equation} \langle A^{\mu\nu} \rangle = A^{<\mu\nu>} \equiv \frac12 \Delta^{\mu\alpha}\Delta^{\nu\beta}(A_{\alpha\beta}+A_{\beta\alpha})-\frac13 \Delta^{\mu\nu} \Delta^{\alpha\beta}A_{\alpha\beta}\,. \end{equation} Finally, $s$ is the entropy density, while the speed of sound will be denoted as $c_s^2=dp/d\varepsilon$. The shear viscosity $\eta$ and the second order coefficients $\tau_\pi$ (``shear'' relaxation time), $\kappa$, $\lambda_1, \lambda_2, \lambda_3$ are the only ones defined in conformal fluids, as the one of ${\cal N}=4$ SYM. All the others coefficients, i.e. the bulk viscosity $\zeta$ and the second order coefficients $\kappa^*, \tau_\pi^*, \lambda_4, \tau_\Pi$ (``bulk'' relaxation time), $\xi_1, \xi_2, \xi_3, \xi_4, \xi_5, \xi_6$, are only defined in non-conformal plasmas. \subsection{The estimate}\label{secresults} \setcounter{table}{0} Consider a gravity dual model for QCD at large temperature, where the leading conformality breaking effect is captured by adding to the five dimensional metric a non trivial dilaton profile, dual to a marginally relevant operator. 
The main observation of this note is that, for the simplest scalar potential, the transport coefficients are completely determined in terms of a single parameter. Defining: \begin{equation}\label{Delta} \delta \equiv (1-3c_s^2)\,, \end{equation} at first order in $\delta$ the transport coefficients are given in Table \ref{relations}. \begin{table}[h] \begin{center} \begin{tabular}{||c|c||c|c||c|c||} \hline & & & & & \\ $ \frac{\eta}{s} $ & $\frac{1}{4\pi}$ & $T\tau_{\pi} $ & $ \frac{2-\log{2}}{2\pi} + \frac{3(16-\pi^2)}{64\pi}\delta $ & $ \frac{T\kappa}{s} $ & $ \frac{1}{4\pi^2}\Bigl(1-\frac34 \delta \Bigr) $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T \lambda_1}{s} $ & $\frac{1}{8\pi^2}\Bigl(1+\frac34 \delta \Bigr) $ & $\frac{T \lambda_2}{s} $ & $-\frac{1}{4\pi^2}\Bigl( \log{2}+\frac{3\pi^2}{32}\delta \Bigr) $ & $\frac{T \lambda_3}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T\kappa^*}{s} $ & $-\frac{3}{8\pi^2}\delta $ & $T\tau_{\pi}^* $ & $-\frac{2-\log{2}}{2\pi}\delta $ & $\frac{T \lambda_4}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{\zeta}{\eta} $ & $\frac23 \delta $ & $T\tau_{\Pi} $ & $\frac{2-\log{2}}{2\pi} $ & $\frac{T \xi_{1}}{s} $ & $\frac{1}{24\pi^2}\delta $ \\ & & & & & \\ \hline \hline & & & & & \\ $ \frac{T \xi_{2}}{s} $ & $\frac{2-\log{2}}{36\pi^2}\delta $ & $\frac{T \xi_{3}}{s} $ & $0 $ & $\frac{T \xi_{4}}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T \xi_{5}}{s} $ & $\frac{1}{12\pi^2}\delta $ & $\frac{T \xi_{6}}{s} $ & $\frac{1}{4\pi^2}\delta $ & & \\ & & & & & \\ \hline \end{tabular} \end{center} \caption{The transport coefficients, in the notation of (\ref{tmunu})-(\ref{bulk}), for a marginally (ir)relevant deformation of a conformal theory, at leading order in the deformation parameter $\delta \equiv (1-3c_s^2)$. The holographic equation of state is $\varepsilon=3(1+\delta)p$.}\label{relations} \end{table} This result, which is the main content of this note, follows directly from \cite{Kanitscheider:2009as}, \cite{Romatschke:2009kr} (which already contains a part of the relations in Table \ref{relations}\footnote{See also \cite{tutti1,buchelbound,gubserspeed,tutti2,Kanitscheider:2009as,tutti3,gubserspeed2,noi}.}) and will be derived in section \ref{proof}. Possibly the main novel results contained in Table \ref{relations} concern the two relaxation times $\tau_{\pi}, \tau_{\Pi}$. Specifically, at leading order in the conformality breaking, the bulk relaxation time $\tau_{\Pi}$ is \emph{not} proportional to the bulk viscosity. The behavior of the shear relaxation time $\tau_{\pi}$ is instead more interesting, since it depends on the speed of sound. For a phenomenologically realistic behavior of the latter, $\tau_{\pi}$ is \emph{decidedly increasing} when reducing the temperature. In particular, it increases \emph{faster} than $\tau_{\Pi}$. Moreover, using the above results it is easy to verify that the relation \begin{equation} 4\,\lambda_1 + \lambda_2 = 2\,\eta\, \tau_{\pi}\,, \label{relhy} \end{equation} holds, at first order in $\delta$. It has been shown in \cite{erd,hy} that (\ref{relhy}) is satisfied in all the known examples of conformal plasmas (in $d\ge 4$ spacetime dimensions, with of without conserved global charges) with dual gravity description. 
Our results provide a unique validity check of the above relation in non-conformal settings.\footnote{We thank Todd Springer for this observation.} While the present system can model at best the regime of QCD away from the critical temperature, where there are certainly other ways of modeling QCD, it would be unexpected if the qualitative behavior of the transport coefficients described above turned out to be drastically different. In order to give an illustrative example of numerical estimates of the coefficients, we have to chose one input parameter. As in \cite{Song:2009rh}, we use the results for the speed of sound from the lattice study in \cite{Katz:2005br}. We consider a temperature $T\sim 1.5 T_c$, which is a reasonable value for the RHIC experiment. Then from \cite{Katz:2005br} we read $c_s^2\sim 0.283$ from which we get the numbers in Table \ref{heiz1}. \begin{table}[h] \begin{center} \begin{tabular}{||c|c||c|c||c|c||} \hline & & & & & \\ $ \frac{\eta}{s} $ & $\frac{1}{4\pi}$ & $T\tau_{\pi} $ & $0.222 $ & $ \frac{T\kappa}{s} $ & $0.022 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T \lambda_1}{s} $ & $0.014 $ & $\frac{T \lambda_2}{s} $ & $-0.021 $ & $\frac{T \lambda_3}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T\kappa^*}{s} $ & $-0.006 $ & $T\tau_{\pi}^* $ & $-0.031 $ & $\frac{T \lambda_4}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{\zeta}{\eta} $ & $0.101 $ & $T\tau_{\Pi} $ & $0.208 $ & $\frac{T \xi_{1}}{s} $ & $0.001 $ \\ & & & & & \\ \hline \hline & & & & & \\ $ \frac{T \xi_{2}}{s} $ & $0.001 $ & $\frac{T \xi_{3}}{s} $ & $0 $ & $\frac{T \xi_{4}}{s} $ & $0 $ \\ & & & & & \\ \hline \hline & & & & & \\ $\frac{T \xi_{5}}{s} $ & $0.001 $ & $\frac{T \xi_{6}}{s} $ & $0.004 $ & & \\ & & & & & \\ \hline \end{tabular} \end{center} \caption{The transport coefficients at $T\sim 1.5 T_c$ and $c_s^2 \sim 0.283$.} \label{heiz1} \end{table} The reported values provide corrections up to $17\%$ to the conformal ones (when the latter are defined). In particular $2\pi T\tau_\pi= 1.394$ is a bit larger than the conformal value (1.307) and more similar to the one used in \cite{Song:2009rh}. The numerical difference is by definition not very large, but sizable. Obviously, increasing the temperature reduces this difference and at $T \sim 3 T_c$, which could be a significant temperature for LHC, the corrections to the conformal values are below $10\%$. Let us conclude this section by describing the approximations involved in applying these relations to QCD. First of all, QCD does not have a purely gravitational dual. Nevertheless, experience teaches that simple gravity models of metric plus scalar are in good quantitative agreement with the lattice results for certain observables. In particular, we are interested in the regime, relevant in the early stages of the QGP evolution, at $T>T_c$ where QCD is nearly conformal and strongly coupled. Moreover, in the hydrodynamic regime the gravity description and actual QCD are in good agreement (e.g. the result for the shear viscosity). On the other hand, in QCD the gluon condensate is marginally relevant in the asymptotically free regime, while in the experimental regime we are interested in, the theory is strongly coupled and this operator can be expected to have developed a sizable anomalous dimension.\footnote{This situation could be modeled with a scalar dual to a relevant operator \cite{gubserspeed,gubserspeed2}. 
In this case the computation of the second order coefficients is highly more complicated.} The other caveat concerns the effects of the flavors and the chemical potential, which are not accounted for in Table \ref{relations}, but are expected to give the latter subleading corrections. In view of these considerations, the relations in Table \ref{relations} can provide a fair estimate\footnote{Better than the one provided by ${\cal N}=4$ SYM \cite{Baier:2007ix}.} of the initial behavior of the hydrodynamic evolution at RHIC and LHC. \section{Derivation}\label{proof} The leading conformality breaking effects of a source for a marginally (ir)relevant operator can be captured in the dual gravitational setting by a so-called Chamblin-Reall model. Consider an effective five dimensional theory with metric plus a single scalar $\phi$ with potential $V(\phi)$. It models the breaking of conformality at leading order in a small parameter $\epsilon$ if $V(\phi)|_{\epsilon=0}=V_0$, where the negative cosmological constant $V_0$ allows for and $AdS$ solution. The operator dual to $\phi$ (on the unperturbed $AdS$ solution) is of dimension four, that is it is marginally (ir)relevant, if $\partial_{\phi}^2 V(\phi)|_{\phi=0}={\cal O}(\epsilon^{1+\alpha})$ with positive $\alpha$. The simplest such class of models, and the one we are interested in, is given by: \begin{equation}\label{potential} V(\phi) = V_0 + \epsilon \phi + {\cal O}(\epsilon^{1+\alpha})\,. \end{equation} At leading order: \begin{equation} V(\phi) \sim V_0 e^{\epsilon \phi/V_0}\,, \end{equation} i.e. the model is in the Chamblin-Reall class \cite{Chamblin:1999ya}.\footnote{To be precise, with unit $AdS$ radius, $V_0=-12$ and, from the calculation of the speed of sound, $\epsilon^2=96\delta$ \cite{gubserspeed}.} For this class of models, the proof of the relations in Table \ref{relations} follows directly from \cite{Kanitscheider:2009as}. Let us summarize it. The starting point is the fact that Chamblin-Reall models in $d+1$ dimensions, for particular values of the coefficient of the exponential in the potential, can be obtained from dimensional reduction on a $2\sigma-d$ torus of pure gravity plus cosmological constant in $2\sigma+1$ dimensions. This happens when the parameter $\sigma$, which determines together with $d$ the coefficient in the exponential, is semi-integer. For these values of $\sigma$, one can then start from the well-known $AdS_{2\sigma+1}$ solution and its dual hydrodynamic energy-momentum tensor, and obtain the hydrodynamic energy-momentum tensor for the dual to the Chamblin-Reall model by simple toroidal dimensional reduction. The crucial observation in \cite{Kanitscheider:2009as} is that, from the point of view of the theory in $d+1$ dimensions, the equations are smooth in the parameter $\sigma$. This allows for the computation of the hydrodynamic energy-momentum tensor for arbitrary values of $\sigma>d/2$.\footnote{At $\sigma=d/2$ the action is singular \cite{Kanitscheider:2009as}.} The procedure is as follows. One starts from a Chamblin-Reall model in $d+1$ dimensions for whatever $\sigma>d/2$ and performs the continuation (which is smooth) to the nearest value $\tilde\sigma$ which is semi-integer. The latter theory is the compactification of a theory admitting a $AdS_{2\tilde\sigma+1}$ solution, so its dual energy-momentum tensor, which will be a function of $\tilde\sigma$, can be calculated straightforwardly. 
This energy-momentum tensor can thus be continued (smoothly) back to the one of the theory corresponding to the original value $\sigma$. In particular, all the transport coefficients for this theory will automatically be determined by the conformal ones in the higher dimensional theory, modulo an overall constant (the volume of the torus) which can be fixed knowing just one coefficient. Let us see concretely how this procedure is implemented. One can determine $\sigma$ by the fact that the equation of state in these models is $\varepsilon=(2\sigma-1)p$ \cite{Kanitscheider:2009as}, so that $\sigma=2+3\delta/2$ in the notation of Table \ref{relations}.\footnote{And $\sigma=2+\epsilon^2/64$ in the notation of (\ref{potential}).} Thus, for a small deformation $\delta$ of a conformal theory, $\tilde\sigma=5/2$ and the relevant starting solution is $AdS_{2\tilde\sigma+1}=AdS_{6}$, whose dual conformal hydrodynamics was considered in \cite{Bhattacharyya:2008mz}. Let us write the results of \cite{Bhattacharyya:2008mz} in the present notation: \begin{eqnarray}\label{ads6} \eta^{(2\tilde\sigma)}&=&\frac{s^{(2\tilde\sigma)}}{4\pi}\,, \nonumber \\ \kappa^{(2\tilde\sigma)}&=&\frac{\eta^{(2\tilde\sigma)}\tilde\sigma}{\pi T(2\tilde\sigma-2)}\,, \nonumber \\ \tau_{\pi}^{(2\tilde\sigma)}&=&\frac{\tilde\sigma}{2\pi T}\Bigl(1-\int_1^{\infty} \frac{y^{2\tilde\sigma-2}-1}{y(y^{2\tilde\sigma}-1)}dy\Bigr) \,,\nonumber \\ \lambda_1^{(2\tilde\sigma)}&=& \frac{\eta^{(2\tilde\sigma)}\tilde\sigma}{4\pi T}\,, \nonumber \\ \lambda_2^{(2\tilde\sigma)}&=& -\frac{\eta^{(2\tilde\sigma)}\tilde\sigma}{\pi T}\int_1^{\infty} \frac{y^{2\tilde\sigma-2}-1}{y(y^{2\tilde\sigma}-1)}dy \,, \nonumber \\ \lambda_3^{(2\tilde\sigma)}&=& 0\,. \end{eqnarray} The procedure to obtain the desired coefficients involves reducing the energy momentum tensor on a circle ($2\tilde\sigma-d=1$) of volume $V$, continuing it back to $\sigma$ and expanding it at first order in $\delta$ \cite{Kanitscheider:2009as}; examples of results of this procedure are (the arrows denote the analytic continuation): \begin{eqnarray}\label{reduction} \eta &=&\eta^{(2\tilde\sigma)}V \quad \rightarrow \quad \eta^{(2\tilde\sigma)}V \,, \\ \kappa &=&\kappa^{(2\tilde\sigma)}V\quad \rightarrow \quad \frac{\eta (4+3\delta)}{2\pi T (2+3\delta)} \sim \frac{\eta}{\pi T}\Bigl( 1-\frac{3}{4}\delta \Bigr) \,, \\ \tau_{\pi} &=&\tau_{\pi}^{(2\tilde\sigma)}V \quad \rightarrow \quad \frac{(4+3\delta)V}{4\pi T}\Bigl(1-\int_1^{\infty} \frac{y^{2+3\delta}-1}{y(y^{4+3\delta}-1)}dy\Bigr)\sim \frac{V}{\pi T}\Bigl[ \frac{2-\log{2}}{2}+\frac{3(16-\pi^2)}{64}\delta \Bigr] \,,\nonumber\label{taupi} \\ \\ \lambda_1 &=& \lambda_1^{(2\tilde\sigma)}V \quad \rightarrow \quad \frac{\eta}{2\pi T}\Bigl( 1+\frac{3}{4}\delta \Bigr) \,, \\ \lambda_2 &=& \lambda_2^{(2\tilde\sigma)}V \quad \rightarrow \quad -\frac{\eta (4+3\delta)}{2\pi T} \int_1^{\infty} \frac{y^{2+3\delta}-1}{y(y^{4+3\delta}-1)}dy \sim -\frac{\eta}{\pi T}\Bigl( \log{2}+\frac{3\pi^2}{32}\delta \Bigr) \,, \\ \lambda_3 &=& 0\,,\\ \zeta & = & 2\eta^{(2\tilde\sigma)}V\frac{2\tilde\sigma-d}{(2\tilde\sigma-1)(d-1)} \quad \rightarrow \quad 2\eta\frac{3\delta}{3(3+3\delta)}\sim \frac{2}{3}\eta \delta\,, \end{eqnarray} where the leading ``conformal'' term in (\ref{taupi}) fixes the value $V=1$ \cite{Baier:2007ix}. The other coefficients in Table \ref{relations} are obtained in the same way. 
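As a simple cross-check of the relations collected in Table \ref{relations}, one can evaluate them numerically at a chosen value of the speed of sound. The short script below is only a minimal sketch of such an evaluation (it is not part of the holographic computation itself); at $c_s^2\simeq 0.283$, i.e. $\delta\simeq 0.151$, it reproduces the representative numbers quoted in Table \ref{heiz1}.
\begin{verbatim}
# Evaluate the first-order relations of Table 1 at c_s^2 = 0.283
# (delta = 1 - 3 c_s^2 ~ 0.151), the value used for Table 2.
from math import pi, log

cs2 = 0.283
d = 1.0 - 3.0 * cs2          # conformality-breaking parameter delta
ln2 = log(2.0)

coeffs = {
    "eta/s":        1.0 / (4 * pi),
    "T tau_pi":     (2 - ln2) / (2 * pi) + 3 * (16 - pi**2) / (64 * pi) * d,
    "T kappa/s":    (1 - 0.75 * d) / (4 * pi**2),
    "T lambda_1/s": (1 + 0.75 * d) / (8 * pi**2),
    "T lambda_2/s": -(ln2 + 3 * pi**2 / 32 * d) / (4 * pi**2),
    "T kappa*/s":   -3 * d / (8 * pi**2),
    "T tau_pi*":    -(2 - ln2) / (2 * pi) * d,
    "zeta/eta":     2.0 * d / 3.0,
    "T tau_Pi":     (2 - ln2) / (2 * pi),
    "T xi_1/s":     d / (24 * pi**2),
    "T xi_2/s":     (2 - ln2) / (36 * pi**2) * d,
    "T xi_5/s":     d / (12 * pi**2),
    "T xi_6/s":     d / (4 * pi**2),
}
for name, value in coeffs.items():
    print("%-14s %+.4f" % (name, value))
print("2 pi T tau_pi = %.3f" % (2 * pi * coeffs["T tau_pi"]))
\end{verbatim}
One can verify, for instance, that the printed values of $T\tau_\pi$ and $\zeta/\eta$ agree with the entries $0.222$ and $0.101$ of Table \ref{heiz1}.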
Let us conclude by stressing again the fact that the relations in Table \ref{relations} are valid for any theory where conformality is broken at leading order by a marginally (ir)relevant deformation, with the dual scalar having the potential (\ref{potential}). These theories\footnote{For a recent study of their shear spectral sum rule see \cite{spr}.} include the cascading plasmas \cite{Gubser:2001ri} and the D3D7 plasmas \cite{Bigazzi:2009bk}. The latter are the first examples of holographic plasmas including the effects of dynamical flavors in a completely controllable framework. In this case, the relations in Table \ref{relations} match precisely the coefficients calculated in \cite{noi}\footnote{The small parameter in \cite{noi} was denoted as $\epsilon_h^2=6\delta$.} and complete the determination of all the second order transport coefficients in those plasmas. \vskip 15pt \centerline{\bf Acknowledgments} \vskip 10pt \noindent We are grateful to D. Mayerson, T. Springer and J. Tarrio for discussions. This work is supported by the FWO -Vlaanderen, project G.0235.05 and by the Federal Office for Scientific, Technical and Cultural Affairs through the Interuniversity Attraction Poles Programme (Belgian Science Policy) P6/11-P. { \it F. B. and A. L. C. would like to thank the Italian students, parents, teachers and scientists for their activity in support of public education and research.}
\section{Introduction} A merger of binary neutron stars (NSs) is one of the most promising targets of direct detection of gravitational waves (GWs). Such a detection, reaching out to a few hundred Mpc, is expected to come true in the very near future with the second generation of ground-based GW detectors (Abadie et al. 2010; Nissanke et al. 2013). To realize this expectation, observations of electromagnetic counterparts of the GW signals are needed, which can help to confirm the position, time, redshift, and astrophysical properties of the GW sources. During a NS-NS merger event, the rapidly rotating compact system could eject a highly collimated relativistic jet and a quasi-isotropic sub-relativistic outflow, in which various kinds of electromagnetic emission could be produced. Specifically, internal and external dissipations of the jet energy can result in a short-duration gamma-ray burst (GRB) and its multi-wavelength afterglow emission, which could be the most attractive electromagnetic counterparts in view of their high brightness (see Nakar 2007, Berger 2014 for reviews). However, unfortunately, the detection probability of an associated GRB is significantly suppressed by the small opening angle of the jet (Fong et al. 2014), although a late and weak orphan afterglow could still be observed off-axis (van Eerten \& MacFadyen 2011; Metzger \& Berger 2012). Alternatively, more and more attention has been paid to the emission originating from the isotropic ejecta, e.g., thermal emission due to the diffusion of internal energy of the ejecta (called mergernova by Yu et al. 2013; Li \& Paczynski 1998; Kulkarni 2005; Rosswog 2005; Metzger et al. 2010) and non-thermal emission due to the shock interaction between the ejecta and the ambient medium (Nakar \& Piran 2011; Metzger \& Berger 2012; Piran et al. 2013; Gao et al. 2013). The isotropic merger ejecta is probably highly neutron-rich, which makes it possible to effectively synthesize nuclei much heavier than $^{56}$Ni via rapid neutron capture processes (r-processes; Rosswog et al. 1999, 2014; Roberts et al. 2011; Goriely et al. 2011; Korobkin et al. 2012; Bauswein et al. 2013a; Hotokezaka et al. 2013, 2015; Wanajo et al. 2014; Goriely 2015; Just et al. 2015; Martin et al. 2015). The radioactive decays of these newly synthesized elements can heat the ejecta to produce a detectable thermal emission. However, being limited by the low mass of the ejecta (no more than a few percent of a solar mass) and the element synthesis efficiency, the luminosity of a radioactive-powered mergernova is expected to be not much higher than $\sim10^{41}\rm erg~s^{-1}$. Thus these phenomena are widely known as kilonovae or macronovae (Kulkarni 2005; Metzger et al. 2010; Barnes \& Kasen 2013; Tanaka \& Hotokezaka 2013; Grossman et al. 2014; Kasen et al. 2015a). Such a luminosity limit can be breached if a more powerful energy source is provided by the central post-merger compact object. By invoking a remnant supra-massive NS that spins extremely rapidly and is highly magnetized, Yu et al. (2013) and Metzger \& Piro (2014) investigated the characteristics of mergernova emission with an energy injection from the spin-down of the NS. It was found that the peak luminosity of such a NS-powered mergernova, which of course depends on the lifetime of the NS and the collimation of the NS wind, could sometimes be comparable to and even exceed the luminosity of ordinary supernovae, but with a much shorter duration on the order of a few days.
Excitingly, some unusual rapidly evolving and luminous transients were discovered by some recent observations such as the Pan-STARRS1 Medium Deep Survey (Drout et al. 2014). The characteristics of these transients are basically consistent with the predictions for NS-powered mergernovae (Yu et al. 2015), although multi-wavelength cross-identifications are still demanded. Recently some multi-wavelength studies have already been carried out on observational candidates of mergernovae. For example, being indicated by the shallow-decay afterglows of GRB 130603B, the widely-discussed infrared excess after this GRB is argued to be probably powered by a remnant NS (Fan et al. 2013) rather than by radioactivities as usually considered (Tanvir et al. 2013; Berger et al. 2013). More directly, by considering of the high-energy emission from a NS wind (Yu et al. 2010; Zhang 2013), which can partly leak from a preceding merger ejecta at late time, a NS-powered mergernova is expected to be possibly accompanied by a late X-ray re-brightening. Very recently Gao et al. (2015) found that the late optical and X-ray bumps after GRB 080503 provided a perfect sample for such multi-wavelength features. In this paper, going ahead, we would reveal another possible X-ray signature prior to a NS-powered mergernova, which is caused by the breakout of the shock arising from the early interaction between the NS wind and the merger ejecta. This X-ray precursor emission would play an essential role in future mergernova identifications and in GW detections. \section{The model} \subsection{Remnant neutron star and merger ejecta} A merger would happen inevitably in a NS-NS binary when the gravity between the NSs can no longer be supported by angular momentum because of GW radiation release. After the merger, in some situations, a supra-massive NS rather than a black hole could be left with an extremely rapid differential rotation. This is supported indirectly by many emission features of short GRBs and afterglows including extended gamma-ray emission, X-ray flares, and plateaus (Dai et al. 2006; Fan \& Xu 2006; Zhang 2013; Giacomazzo \& Perna 2013; Rowlinson et al. 2010, 2013). The presence of such a supra-massive NS is permitted by both theoretical simulations (Bauswein et al. 2013b; Hotokezaka et al. 2011) and observational constraints (e.g., the present lower limit of the maximum mass of Galactic NSs is precisely set by PSR J0348+0432 to $2.01\pm0.04M_{\odot}$; Antoniadis et al. 2013). During the first few seconds after the birth of a remnant NS, on one hand, a great amount of neutrinos can be emitted from the very hot NS. On the other hand, differential rotation of the NS can generate multipolar magnetic fields through some dynamo mechanisms (Duncan \& Thompson 1992; Price \& Rosswog 2006; Cheng \& Yu 2014), making the NS to be a magnetar. Consequently, during the very early stage, the neutrino emission and ultra-strong magnetic fields together could drive a continuous baryon outflow (Dessart et al. 2009; Siegel et al. 2014; Siegel \& Ciolfi 2015), which provides an important contribution to form an isotropic merger ejecta. A few of seconds later, the NS eventually enters into an uniform rotation stage and meanwhile a stable dipole structure of magnetic fields can form. From then on, the NS starts to lose its rotational energy via a Poynting flux-dominated wind. 
The spin-down luminosity carried by the wind can be estimated with the magnetic dipole radiation formula as $L_{\rm sd}=L_{\rm sd,i}\left(1+{t_{\rm}}/{t_{\rm md}}\right)^{-2}$, where $L_{\rm sd,i}=10^{47}\mathcal R^{6}_{\rm s,6}\mathcal B^{2}_{14}\mathcal P^{-4}_{\rm i,-3}\,\rm erg\ s^{-1}$, $t_{\rm md}=2\times10^{5}\mathcal R^{-6}_{\rm s,6}\mathcal B^{-2}_{14}\mathcal P^{2}_{\rm i,-3}\,\rm s$, and the zero point of time $t$ is set at the beginning of the magnetic dipole radiation which is somewhat later than the NS formation by several seconds. Here $\mathcal R_{\rm s}$, $\mathcal B$, and $\mathcal P_{\rm i}$ are the radius, dipolar magnetic filed strength, and initial spin periods of the NS, respectively, and the convention $Q_{x}=Q/10^x$ is adopted in cgs units. During a merger event, although the overwhelmingly majority of matter falls finally into the central remnant NS, there is still a small fraction of matter ejected outwards, e.g., the baryon wind blown from the NS mentioned above. Besides that component, a quasi-isotropic merger ejecta can also be contributed by a wind from a short-lived disk surrounding the NS, by an outflow from the colliding interface between the two progenitor NSs, and by a tidal tail due to the gravitational and hydrodynamical interactions. The latter two components are usually called dynamical components. These ejecta components differ with each other in masses, electron fractions, and entropies. It is difficult and nearly impossible to describe precisely the specific constitutes and distributions of a merger ejecta, which depend on the relative sizes of the two progenitor NSs, equation of states of NS matter, and magnetic field structures. In any case, according to the numerical simulations in literature, on one hand, the mass of dynamical components could range from $\sim10^{-4}M_{\odot}$ to a few times $\sim0.01M_{\odot}$ (Oechslin et al. 2007; Bauswein et al. 2013a; Hotokezaka et al. 2013; Rosswog 2013). On the other hand, in presence of a remnant supra-massive NS, the mass of a neutrino-driven wind is found to be at least higher than $3.5\times10^{-3}\, M_{\odot}$ (Perego et al. 2014), and the mass-loss rate due to ultrahigh magnetic fields is about $10^{-3}-10^{-2}M_{\odot}\,\rm s^{-1}$ during the first $1-10$ s (Siegel et al. 2014). Therefore, by combining all of these contributions (see Rosswog 2015 for a brief review), we would take $M_{\rm ej}=0.01M_{\odot}$ as a reference value for the total mass of an ejecta. Furthermore, we would adopt power-law density and velocity distributions of this mass as follows (Nagakura et al. 2014): \begin{eqnarray} \rho_{\rm ej}(r,t)=\frac{(\delta-3)M_{\rm ej}}{4\pi r_{\rm max}^{3}}\left[\left(\frac{r_{\rm min}}{r_{\rm max}}\right)^{3-\delta}-1\right]^{-1}\left({r\over r_{\max}}\right)^{-\delta},\label{Eq.rhoa} \end{eqnarray} and \begin{eqnarray} {v_{\rm ej}(r,t)}={v_{\rm max}}\frac{r}{r_{\rm max}(t)},{~~\rm for~~}r\leq r_{\max}(t), \end{eqnarray} where $r$ is the radius to the central NS, $v_{\rm max}$ is the maximum velocity of the head of ejecta which is probably on the order of $\sim 0.1c$, and the slope $\delta$ ranges from 3 to 4 according to the numerical simulation of Hotokezaka et al. (2013). We fix $\delta=3.5$ as in Nagakura et al. (2014). The variation of $\delta$ within a wide range in fact cannot significantly affect the primary results of this paper except for an extremely high value (e.g. $\delta>5$). 
The maximum radius of ejecta can be calculated by $r_{\max}(t)\approx r_{\rm max,i}+v_{\max}t$, where $r_{\rm max,i}\approx v_{\max}\Delta t$ with $\Delta t$ being the time on which the dipolar magnetic field is stabilized. Correspondingly, the minimum radius reads $r_{\rm min}(t)=r_{\rm min,i}+v_{\rm min}t$ and its initial value could be determined by an escape radius as $r_{\rm min,i}=\left({2GM_{\rm c}r_{\rm max,i}^{2}}/{v_{\max}^{2}}\right)^{1/3}$, where $G$ is the gravitational constant and $M_{\rm c}$ is the mass of the remnant NS. The huge energy released from a remnant NS (i.e. millisecond magnetar) would eventually drive an ultra-relativistic wind mixing Poynting flux and leptonic plasma, which catches up with the preceding merger ejecta very quickly. On one hand, if this wind always keeps Poynting-flux-dominated even until it collides with the ejecta, the material at the bottom of ejecta could be heated by absorbing low-frequency electromagnetic waves from the Poynting component. On the other hand, more probably, some internal dissipations (e.g. the ICMART processes; Zhang \& Yan 2011) could take place in the NS wind to produce non-thermal high-energy emission. Subsequently, a termination shock could be formed at the interface between the wind and ejecta, if the wind magnetization has become sufficiently low there (Mao et al. 2010). As a result, the bottom of ejecta can be heated by absorbing high-energy photons from the emitting wind region and/or by transmitting heat from the neighbor hot termination-shock region. Additionally, even if we arbitrarily assume an extreme situation that all of the wind energy is completely reflected from the interface, the bottom material of ejecta would also be heated due to adiabatic compression by the high pressure of the wind. Therefore, in any case, the energy carried by the wind can always be mostly injected into the bottom of ejecta and heat the material there. \subsection{Shock heating and emission} When the bottom of a merger ejecta is heated by an injected NS wind, a pressure balance can be built naturally between the wind and the ejecta bottom. On one hand, the pressure balance can gradually extend to larger radii through thermal diffusion, which however happens very slowly for an extremely optical thick ejecta. On the other hand, the high pressure of the bottom material can lead itself to expand adiabatically and to get a high speed. This speed would result in the formation of a forward shock propagating outwards into the ejecta. By denoting the radius and speed of a shock front by $r_{\rm sh}$ and $v_{\rm sh}$, respectively, the increase rate of the mass of shocked ejecta can be calculated by \begin{eqnarray} \frac{dM_{\rm }}{dt}=4\pi r_{\rm sh}^{2}\left[v_{\rm sh}-v_{\rm ej}(r_{\rm sh},t)\right]\rho_{\rm ej}(r_{\rm sh},t), \end{eqnarray} where $v_{\rm ej}(r_{\rm sh},t)$ and $\rho_{\rm ej}(r_{\rm sh},t)$ are the velocity and density of the upstream material just in front of the shock. Obviously, we also have $dr_{\rm sh}=v_{\rm sh}dt$. As the propagation of the shock, its bulk kinetic energy, which is previously gained from adiabatic acceleration, can again be partly converted into internal energy of the newly shocked material. 
The rate of this shock heating effect can be written as \begin{eqnarray} H_{\rm sh}^{}={1\over 2}\left[v_{\rm sh}-v_{\rm ej}(r_{\rm sh},t)\right]^2\frac{dM_{\rm}}{dt}.\label{Hsh} \end{eqnarray} Then the total internal energy accumulated by the shock, $U_{\rm sh}$, can be derived from \begin{eqnarray} \frac{dU_{\rm sh}^{}}{dt^{}}=H_{\rm sh}^{}-{P_{\rm sh}^{} }\frac{d (\epsilon V_{\rm}^{})}{dt^{}}-L_{\rm sh}^{}.\label{Ush} \end{eqnarray} Here $P_{\rm sh}^{} =U_{\rm sh}^{}/(3\epsilon V_{\rm}^{})$ is an average pressure, $V_{\rm}^{}\sim{(4/3)\pi r_{\rm sh}^3}$ is the volume of the whole shocked region experiencing an adiabatic expansion, and the fraction $\epsilon$ is introduced by considering that the shock-accumulated heat is mostly deposited at a small volume immediately behind the shock front. The product of this average pressure and the corresponding volume represents the cooling effect due to adiabatic expansion. $L_{\rm sh}^{}$ is the luminosity of shock thermal emission, which is caused by the diffusion and escaping of the shock heat. Approximately following a steady diffusion equation $L=(4\pi r^2/3\kappa \rho_{\rm })(\partial u/\partial r)c$, where $\kappa$ is opacity, $\rho$ and $u$ are densities of mass and internal energy, respectively, we roughly estimate the luminosity of shock thermal emission by \begin{eqnarray} L_{\rm sh}\approx {r_{\rm max}^2U_{\rm sh}^{}c\over\epsilon r_{\rm sh}^3+ (r_{\rm max}^3-r_{\rm sh}^3)}\left[{1-e^{-(\epsilon\tau_{\rm sh}+\tau_{\rm un})}\over \epsilon\tau_{\rm sh}+\tau_{\rm un}}\right],\label{shemi} \end{eqnarray} where $\tau_{\rm sh}= {\kappa M_{\rm }}/{4\pi r_{\rm sh}^{2} }$ and $\tau_{\rm un}= \int_{r_{\rm sh}}^{r_{\max}} \kappa \rho_{\rm ej} dr$ are the optical depths of the shocked and unshocked ejecta, respectively. Here the former optical depth, which only influences the decrease of the shock emission after breakout, is calculated by considering that the most of shocked material is concentrated within a thin shell behind the shock front (Kasen et al. 2015b). The value of parameter $\epsilon$ can be fixed by equating the shock luminosity during breakout to the simultaneous heating rate, because after that moment freshly-injected shock heat can escape from the ejecta nearly freely. The opacity of merger ejecta is predicted to be on the order of magnitude of $\sim (10-100)~\rm cm^2 g^{-1}$, which results from the bound-bound, bound-free, and free-free transitions of lanthanides synthesized in the ejecta (Kasen et al. 2013). This value is much higher than the typical one of $\kappa = 0.2 ~\rm cm^2 g^{-1}$ for normal supernova ejecta. In this paper we take $\kappa = 10~\rm cm^2 g^{-1}$. Fairly speaking, some reducing effects on the opacity could exist, e.g., (1) the lanthanide synthesis in the wind components of ejecta could be blocked by neutrino irradiation from the remnant NS by enhancing electron fractions (Metzger \& Fern\'{a}ndez 2014) and (2) the lanthanides in the dynamical components could be ionized by the X-ray emission from the NS wind (Metzger \& Piro 2014). \subsection{Shock dynamics} The temporal evolution of shock thermal emission, as presented in Equation (\ref{shemi}), is obviously dependent on the dynamical evolution of the shock. Due to the slow thermal transmission in the optical thick ejecta, a significant pressure/temperature gradient must exist in the ejecta during shock propagation, which would lead the shock to be continuously accelerated. 
Therefore, the dynamical evolution of such a radiation-mediated shock is completely different from that of the internal and external shocks in GRB situations. For an optically thin GRB ejecta, a pressure balance can be built throughout the whole shocked region simultaneously with the shock propagation. In that case, the shock velocity can be simply derived from shock jump conditions (Dai 2004; Yu \& Dai 2007; Mao et al. 2010). On the contrary, in the present optically thick case, a detailed dynamical calculation of the shock in principle requires an elaborate description of the energy and mass distributions of the ejecta (see Kasen et al. 2015b for a 1D hydrodynamical simulation of a similar process), which is however beyond the scope of an analytical model. Nevertheless, a simplified and effective dynamical equation can still be obtained from the energy conservation of the system. The total energy of the shocked region can be written as\footnote{For simplicity, in this paper we do not take into account the relativistic effects that were considered in Yu et al. (2013) for some extremely light ejecta.} $E={1\over2}M_{\rm}v_{\rm sh}^{2}+U^{}$, where $U^{}$ is the total internal energy of the shocked region. For a radiation-mediated shock, the value of $U$ should be much higher than $U_{\rm sh}^{}$. In principle, the concept ``shocked region'' used here can generally include the NS wind regions, because the mass of wind leptons is drastically smaller than that of the shocked ejecta and, more importantly, the energy released from the NS is continuously distributed in both the wind and ejecta through thermal diffusion, which makes them behave as a whole. By ignoring the relatively weak energy supply by radioactivities, the variation of the total energy can be written as \begin{eqnarray} {dE}=(\xi L_{\rm sd}-L_{\rm e})dt+{1\over 2}v_{\rm ej}^2(r_{\rm sh},t){dM_{\rm }},\label{conservation} \end{eqnarray} where $\xi$ represents the energy injection efficiency from the NS wind, which could be much smaller than one if the NS wind is highly anisotropic, and $L_{\rm e}$ is the total luminosity of the thermal emission of the merger ejecta. As a general expression, the specific form of the energy injection is not taken into account. Substituting the expression of $E$ into Equation (\ref{conservation}), we can get the dynamical equation of the forward shock as \begin{eqnarray} \frac{dv_{\rm sh}}{dt}={1\over{M_{\rm}v_{\rm sh}}}\left[(\xi L_{\rm sd}-L_{\rm e})-{1\over2}\left(v_{\rm sh}^2-v_{\rm ej}^2\right)\frac{dM_{\rm}}{dt}- \frac{dU_{\rm}^{}}{dt^{}}\right].\label{vsh} \end{eqnarray} In order to clarify the expressions of $L_{\rm e}$ and ${dU_{\rm}^{}/dt^{}}$, we denote $\tilde{U}^{}=U^{}-U_{\rm sh}^{}$, which represents the internal energy excluding the shock-accumulated part. The evolution of this internal energy component can be given by \begin{eqnarray} \frac{d\tilde{U}^{}}{dt^{}}=\xi L_{\rm sd}^{}-\tilde{P}^{}\frac{dV_{\rm}^{}}{dt^{}}-L_{\rm mn}^{},\label{Us} \end{eqnarray} where the adiabatic cooling is also calculated with an average pressure as $\tilde{P}^{}=\tilde{U}^{}/3V_{\rm}^{}$ and \begin{eqnarray} L_{\rm mn}\approx{\tilde{U}^{}c\over r_{\rm max}}\left[{1-e^{-(\tau_{\rm sh}+\tau_{\rm un})}\over \tau_{\rm sh}+\tau_{\rm un}}\right]. \end{eqnarray} The above expression is different from Equation (\ref{shemi}) because the majority of the internal energy of the shocked region is deposited in the innermost part of the region, which is much deeper than the shock front.
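Although this set of coupled equations has to be integrated numerically, its structure is simple enough to be handled with standard tools. The following minimal sketch collects the input functions introduced above (the spin-down luminosity and the density and velocity profiles of the ejecta) and integrates the shock equations, writing the total emitted luminosity as the sum of the two components $L_{\rm sh}$ and $L_{\rm mn}$ defined above. The remnant mass $M_{\rm c}$, the heat-concentration fraction $\epsilon$, the small seed values used to start the integration, and the solver settings are assumptions of this sketch only, not the choices behind the figures discussed below.
\begin{verbatim}
# Minimal numerical sketch of the wind-driven shock model described above.
# Fiducial parameters follow the text and the Figure 1 caption; the remnant
# mass M_c, the fraction epsilon, the seed values and the solver settings
# are assumptions made only to obtain a self-contained, runnable example.
import numpy as np
from scipy.integrate import solve_ivp

c, G, Msun = 2.998e10, 6.674e-8, 1.989e33         # cgs constants

xiLsd_i, t_md = 1.0e47, 1.0e4                     # xi*L_sd,i [erg/s], t_md [s]
dt0, M_ej, v_max = 2.0, 0.01 * Msun, 0.1 * c      # Delta t, ejecta mass, v_max
slope, kappa = 3.5, 10.0                          # density slope delta, opacity
M_c, eps = 2.5 * Msun, 0.3                        # assumed NS mass and epsilon

r_max_i = v_max * dt0
r_min_i = (2.0 * G * M_c * r_max_i**2 / v_max**2) ** (1.0 / 3.0)
v_min = v_max * r_min_i / r_max_i                 # homologous profile (assumed)

L_sd = lambda t: xiLsd_i / (1.0 + t / t_md) ** 2  # magnetic dipole spin-down
r_max = lambda t: r_max_i + v_max * t
r_min = lambda t: r_min_i + v_min * t

def rho_ej(r, t):                                 # power-law density profile
    rm = r_max(t)
    norm = (slope - 3.0) * M_ej / (4.0 * np.pi * rm**3)
    norm /= (r_min(t) / rm) ** (3.0 - slope) - 1.0
    return norm * (r / rm) ** (-slope)

v_ej = lambda r, t: v_max * r / r_max(t)          # homologous velocity profile

def tau_un(r_sh, t):                              # optical depth ahead of shock
    rm = r_max(t)
    C = rho_ej(rm, t) * rm**slope
    return kappa * C * (r_sh ** (1 - slope) - rm ** (1 - slope)) / (slope - 1.0)

def rhs(t, y):                      # y = [r_sh, v_sh, M, U_sh, U_tilde]
    r_sh, v_sh, M, U_sh, U_t = y
    rm, vej = r_max(t), v_ej(r_sh, t)
    dM = 4 * np.pi * r_sh**2 * max(v_sh - vej, 0.0) * rho_ej(r_sh, t)
    H_sh = 0.5 * (v_sh - vej) ** 2 * dM           # shock heating rate
    V, dV = 4 * np.pi * r_sh**3 / 3.0, 4 * np.pi * r_sh**2 * v_sh
    t_sh = kappa * M / (4 * np.pi * r_sh**2)
    x1 = eps * t_sh + tau_un(r_sh, t)             # shock (breakout) emission
    L_sh = rm**2 * U_sh * c / (eps * r_sh**3 + rm**3 - r_sh**3) \
           * (1 - np.exp(-x1)) / x1
    x2 = t_sh + tau_un(r_sh, t)                   # mergernova emission
    L_mn = U_t * c / rm * (1 - np.exp(-x2)) / x2
    dU_sh = H_sh - U_sh / (3 * V) * dV - L_sh
    dU_t = L_sd(t) - U_t / (3 * V) * dV - L_mn
    dv = (L_sd(t) - L_sh - L_mn - 0.5 * (v_sh**2 - vej**2) * dM
          - (dU_sh + dU_t)) / (M * v_sh)          # energy-conservation form
    return [v_sh, dv, dM, dU_sh, dU_t]

def breakout(t, y):                 # stop when the shock reaches the ejecta head
    return y[0] - r_max(t)
breakout.terminal = True

y0 = [r_min_i, v_min, 1e-6 * M_ej, 1e40, 1e40]    # assumed small seed values
sol = solve_ivp(rhs, (1.0, 1e6), y0, events=breakout, method="LSODA", rtol=1e-6)
if sol.t_events[0].size:
    print("shock reaches the ejecta head at t ~ %.3g s" % sol.t_events[0][0])
\end{verbatim}
The terminal event simply stops the integration once the shock front reaches the head of the ejecta, which corresponds to the shock breakout discussed in the next section.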
Then we have $L_{\rm e}=L_{\rm sh}+L_{\rm mn}$. The emission component represented by $L_{\rm mn}$ actually is the mergernova emission discussed in Yu et al. (2013). By substituting Equations (\ref{Hsh}), (\ref{Ush}), and (\ref{Us}) into (\ref{vsh}), we can obtain another form of the dynamical equation as \begin{eqnarray} \frac{dv_{\rm sh}}{dt}={1\over{M_{\rm}v_{\rm sh}}}\left({U\over 3V}\right){dV\over dt},\label{vsh2} \end{eqnarray} which is just the expression adopted in Kasen et al. (2015b) to calculate the shock breakout for super-luminous supernovae. This equation can be easily understood in the framework of adiabatic acceleration of a ``fireball''. As discussed above, what accelerates the ejecta material are actually the local internal pressures at different radii, which are much lower than the pressure of the NS wind due to the significant delay of pressure transmission. The work done by these varying pressures can be effectively estimated with an average internal pressure of $(U/3V)$ with respect to a volume variation of $dV=4\pi r_{\rm sh}^2v_{\rm sh}dt$. With the above dynamical equation, the energy conservation and partition of the system can be well described. By considering that the internal energy at the time $t$ is on the order of magnitude $U\sim L_{\rm sd}t$, Equation (\ref{vsh2}) can naturally determine a kinetic energy also on the same order of magnitude ${1\over2}Mv_{\rm sh}^2\sim L_{\rm sd}t$. \begin{figure} \centering\resizebox{0.8\hsize}{!}{\includegraphics{fig1.eps}}\caption{Evolutions of velocities (top), bolometric luminosities (middle), and emission temperature (bottom). In the middle panel, the injected spin-down luminosity ($\xi L_{\rm sd}$) and the shock heating rate ($H_{\rm sh}$) are also presented for reference. The model parameters are taken as $\xi L_{\rm sd,i}=10^{47} \rm erg~s^{-1}$, $t_{\rm md}=10^{4}$ s, $\Delta t=2$ s, $M_{\rm ej}=0.01M_{\odot}$, $v_{\rm max}=0.1c$, $\delta=3.5$, and $\kappa=10~\rm cm^2g^{-1}$.}\label{fitting} \end{figure} \section{Results and analyses} A supra-massive NS surviving from a merger event is believed to initially spin with a Keplerian limit period of about $1$ ms. This corresponds to a rotational energy of several times $10^{52}$ erg with a high stellar mass, which could be much higher in view of the rapid differential rotation. Most of this energy is probably consumed very quickly to generate and amplify magnetic fields, to drive a short GRB, and maybe also to radiate GWs. The duration of this violent stage should not be much longer than the duration of the short GRB. So we take $\Delta t=2$ s, which is the boundary dividing long and short GRBs. When a steady magnetic dipole radiation begins, the spin period could have been reduced to a few milliseconds, corresponding to an energy of $\sim10^{51-52}$ erg. This energy supply could be further discounted for the merger ejecta by the parameter $\xi$, if a considerable fraction is collimated within a small cone to power an extended gamma-ray emission and X-ray afterglow plateau after the GRB. \begin{figure} \centering\resizebox{\hsize}{!}{\includegraphics{fig2.eps}}\caption{Three chromatic light curves for photon energies of $h \nu=0.1$ keV (soft X-ray; solid), 6 eV (UV; dashed), and 2 eV (optical; dash-dotted), respectively, where the bolometric light curve (dotted) is presented for reference.}\label{} \end{figure} \begin{figure} \centering\resizebox{\hsize}{!}{\includegraphics{fig3.eps}}\caption{Cumulations of different energy components.
Solid black line: total energy provided by the NS; Dotted red line: kinetic energy of shocked ejecta; Dashed magenta line: internal energy $\tilde{U}$; Dash-dotted blue line: shock-produced internal energy $U_{\rm sh}$; Dash-dot-dotted cyan line: energy of shock emission; Short-dashed orange line: energy of mergernova emission. }\label{} \end{figure} For typical values of spin-down luminosity and timescale as $\xi L_{\rm sd,i}=10^{47} \rm erg~s^{-1}$ and $t_{\rm md}=10^{4}$ s, we present a representative numerical result in Figure 1. As shown in the top panel, the shock initially moves slowly, with a velocity much smaller than the maximum velocity of the ejecta, and experiences a gradual acceleration process. When the shock velocity exceeds the maximum ejecta velocity by a few tens of percent, shock breakout happens, as indicated by the sharp peak in the middle panel. It takes the shock a remarkably long time ($\sim 10^3$ s) to break out of the ejecta. This period is much longer than the shock breakout time given in some previous works ($\sim1-10$ s; Gao et al. 2013; Wang et al. 2015; Siegel \& Ciolfi 2015a,b), because there the shock velocity was significantly overestimated by using shock jump conditions with an assumed global pressure balance. However, as discussed above, such a global pressure balance actually cannot be built very quickly for a radiation-mediated shock. The internal energy produced by the shock only occupies a very small fraction of the total internal energy behind the shock. The shock jump conditions could be satisfied only after a very long acceleration or after the ejecta becomes nearly optically thin, before which the shock has already crossed the whole ejecta. The middle panel of Figure 1 also shows that the shock breakout emission is very luminous, with a luminosity comparable to that of the subsequent bright mergernova emission peaking at a few days. Nevertheless, since the shock breakout and mergernova emission are produced at very different radii, the corresponding emission temperature ($\sim10^5$ K) of the former can be much higher than that of the latter ($\sim10^4$ K), as presented in the bottom panel. Here an effective temperature is defined by $T_{\rm e}=(L_{\rm e}/4\pi r_{\rm max}^2\sigma)^{1/4}$ with $\sigma$ being the Stefan-Boltzmann constant. In more detail, we further plot three chromatic light curves in Figure 2, by assuming a black-body spectrum\footnote{In Yu et al. (2013), this spectrum is incorrectly written with an internal temperature rather than the effective surface temperature used here, although the integrated bolometric luminosity there is still correct because of the reduction by optical depth. More strictly, the surface temperature should in fact be defined at a photosphere radius, beyond which photons can escape freely, rather than at the maximum radius. Nevertheless, before the mergernova peak emission, the difference between these two radii is actually always negligible (e.g., see Equation \ref{rshbo}). Thus we approximately use the maximum radius here for simplicity. The shift of the photosphere in the ejecta could significantly influence the mergernova emission mainly after the peak time (Wang et al. 2015).} \begin{eqnarray} \nu L_{\nu}=\frac{8\pi^{2}r^{2}_{\rm max}}{h^{3}c^{2}}\frac{(h\nu)^{4}}{{\exp}(h\nu/kT_{\rm e})-1} \end{eqnarray} with temperatures given in the bottom panel of Figure 1, where $h$ is the Planck constant and $k$ the Boltzmann constant.
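For reference, this black-body formula is straightforward to evaluate once $T_{\rm e}(t)$ and $r_{\max}(t)$ are known from the model; a minimal helper of this kind is sketched below, where the temperature and radius used in the example call are assumed round numbers of the order of those around shock breakout, not values read from the actual numerical solution.
\begin{verbatim}
# nu*L_nu of the black-body spectrum above (cgs units). Evaluating it at
# h*nu = 0.1 keV, 6 eV and 2 eV with the model outputs T_e(t) and r_max(t)
# yields the three chromatic light curves of Figure 2.
from math import pi, expm1

h, k_B, c_light, eV = 6.626e-27, 1.381e-16, 2.998e10, 1.602e-12

def nu_L_nu(h_nu_eV, T_e, r_max):
    x = h_nu_eV * eV
    return 8 * pi**2 * r_max**2 / (h**3 * c_light**2) * x**4 / expm1(x / (k_B * T_e))

# example call with assumed round numbers (T_e ~ 1e5 K, r_max ~ 1e13 cm)
for e in (100.0, 6.0, 2.0):
    print("h nu = %5.1f eV : nu L_nu = %.2e erg/s" % (e, nu_L_nu(e, 1.0e5, 1.0e13)))
\end{verbatim}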
As shown, while the mergernova emission falls into the ultraviolet band, the shock breakout is mainly concentrated within soft X-rays, for the adopted model parameters. Finally, in order to verify some of the energy arguments mentioned above, we plot the temporal evolutions of the different energy components in Figure 3. It is shown that, although the energy released from the NS is initially injected into the ejecta in the form of internal energy, the majority of this internal energy is finally converted into the kinetic energy of the ejecta. As a result, during the whole optically thick period, the internal and kinetic energies remain basically comparable to each other. The internal energy produced by the shock is obviously less than the injected one but, at the shock breakout time, the instantaneous release of this small amount of energy can still temporarily outshine the emission component due to thermal diffusion. \begin{figure*} \centering\resizebox{\hsize}{!}{\includegraphics{fig4a.eps}\includegraphics{fig4b.eps}}\caption{Variations of the luminosity (left) and peak photon energy (right) of the shock breakout emission in the $\xi L_{\rm sd,i}-t_{\rm md}$ parameter space. The shaded region in the top-right corner is forbidden because an unrealistically high rotational energy would be required, while in the bottom-left shaded region the shock breakout is buried in the mergernova emission. }\label{} \end{figure*} For a straightforward understanding of the characteristics of the shock breakout emission, here we provide some analytical estimates. Physically, shock breakout happens when the dynamical time becomes longer than the timescale on which photons diffuse from the shock front to the outermost surface of the merger ejecta. Hence we can in principle solve for the shock breakout time $t_{\rm bo}$ from the equation \begin{eqnarray} t_{\rm bo}=t_{\rm d}&=&\left({r_{\rm max}-r_{\rm sh,bo}\over \lambda}\right)^2{\lambda\over c}\nonumber\\ &=&{(r_{\rm max}-r_{\rm sh,bo})^2\kappa \rho_{\rm ej}\over c}, \end{eqnarray} where $\lambda=1/(\kappa \rho_{\rm ej})$ is the photon mean free path. Approximately we get \begin{eqnarray} r_{\rm sh,bo}\approx r_{\max}\left(1-{ r_{\max}\over r_{*}}\right),\label{rshbo} \end{eqnarray} where $r_{*}\approx(v_{\max}\kappa M_{\rm ej}/c)^{1/2}=4.5\times10^{15}{\rm cm}$ $(v_{\max}/0.1c)^{1/2}(\kappa/10{\rm cm^{2}g^{-1}})^{1/2}(M_{\rm ej}/0.01M_{\odot})^{1/2}$. This means that, when shock breakout happens, the shock radius is already very close to the maximum radius of the ejecta. By invoking $U\sim\xi L_{\rm sd}t$, $P=U/(3V)$, and an acceleration rate $a\sim4\pi r_{\rm sh}^2P/M$, we can estimate the breakout radius by $r_{\rm sh,bo}\sim{1\over2}at_{\rm bo}^{2}\sim(\xi L_{\rm sd}/2M_{\rm ej})^{1/2}t_{\rm bo}^{3/2}$, where $M\approx M_{\rm ej}$ is adopted because $r_{\rm sh,bo}\approx r_{\max}$. Then, from the equation $r_{\rm sh,bo}\approx r_{\max}=v_{\max}t_{\rm bo}$, we can derive the shock breakout time as \begin{eqnarray} t_{\rm bo}&\sim& {2M_{\rm ej}v_{\max}^2\over\xi L_{\rm sd}}\nonumber\\ &=&3600{\rm s}\left({v_{\max}\over0.1c}\right)^{2}\left({M_{\rm ej}\over0.01M_{\odot}}\right)\left({\xi L_{\rm sd}\over10^{47}\rm erg~s^{-1}}\right)^{-1}. \end{eqnarray} Furthermore, we get $v_{\rm sh,bo}\sim2v_{\max}$ and $r_{\rm sh,bo}\sim{2M_{\rm ej}v_{\max}^3/\xi L_{\rm sd}}=1.1\times10^{13}\rm cm$, which is indeed much smaller than $r_*$.
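As a quick numerical cross-check (our own sketch, not part of the original analysis), the fiducial parameters quoted in Figure 1 indeed reproduce the order-of-magnitude numbers given above:
\begin{verbatim}
# Analytic breakout estimates for xi*L_sd = 1e47 erg/s, M_ej = 0.01 M_sun,
# v_max = 0.1c and kappa = 10 cm^2/g (CGS units throughout).
c, M_sun = 2.998e10, 1.989e33
L_sd  = 1e47                 # erg/s, already including the factor xi
M_ej  = 0.01 * M_sun         # g
v_max = 0.1 * c              # cm/s
kappa = 10.0                 # cm^2/g

t_bo    = 2 * M_ej * v_max**2 / L_sd        # breakout time, ~3.6e3 s
r_sh_bo = 2 * M_ej * v_max**3 / L_sd        # breakout radius, ~1.1e13 cm
r_star  = (v_max * kappa * M_ej / c)**0.5   # r_*, ~4.5e15 cm

print(t_bo, r_sh_bo, r_star)
\end{verbatim}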
Then the luminosity and temperature of the shock breakout can be estimated as \begin{eqnarray} L_{\rm sh,bo}&\approx&H_{\rm sh}\sim 2\pi r_{\rm sh,bo}^2(v_{\rm sh,bo}-v_{\rm max})^3\rho_{\rm ej}\nonumber\\ &\sim& 8\times10^{45}{\rm erg~s^{-1}}\left({\xi L_{\rm sd}\over10^{47}\rm erg~s^{-1}}\right), \end{eqnarray} and \begin{eqnarray} T_{\rm sh,bo}&=&\left({L_{\rm sh,bo}\over 4\pi r_{\rm max}^2\sigma}\right)^{1/4}\sim6\times10^5{\rm K} \nonumber\\ && \times\left({v_{\max}\over0.1c}\right)^{-3/2}\left({M_{\rm ej}\over0.01M_{\odot}}\right)^{-1/2}\left({\xi L_{\rm sd}\over10^{47}\rm erg~s^{-1}}\right)^{3/4}. \end{eqnarray} The above analytical expressions qualitatively exhibit the physical mechanisms and parameter dependencies of the shock breakout emission, although the numbers given here are somewhat higher than the numerical ones because a linear acceleration is assumed. It is indicated that the dependence of the shock breakout emission on the density profile of the merger ejecta (i.e., the parameter $\delta$) is very weak. In more detail, in Figure 4 we present the variations of the shock breakout luminosity $L_{\rm sh,bo}$ and the peak photon energy $\varepsilon_{\rm p}=4kT_{\rm sh,bo}$ in the $\xi L_{\rm sd,i}-t_{\rm md}$ parameter space. It is indicated that, for a significantly bright shock breakout emission, a large amount of energy must be released from the NS within a sufficiently short time. \section{Conclusion and discussions} Mergernovae, in particular those powered by a remnant supra-massive NS, are among the most promising electromagnetic counterparts of GW signals from NS-NS mergers. The discovery of NS-powered mergernovae could also substantially modify and expand our conventional understanding of supernova-like transient phenomena. Therefore, how to identify a mergernova in future searches and observations is a natural and essential question. Undoubtedly, a multi-wavelength method is necessary and helpful for answering this question. In this paper, we show that a shock breakout can be driven by the early interaction between the merger ejecta and the subsequent NS wind. Such a breakout appears a few hours after the merger, preceding the mergernova emission as a precursor. The breakout emission would be mainly in soft X-rays, with a luminosity of $\sim(10^{44}-10^{46})~\rm erg~s^{-1}$, corresponding to an X-ray flux of a few$\times (10^{-11}-10^{-9})~\rm erg~s^{-1}cm^{-2}$ for a distance of $\sim 200$ Mpc, which can be above the sensitivity of many current and future telescopes, e.g., the Swift X-ray telescope (Burrows et al. 2005), the Einstein Probe (Yuan et al. 2015), etc. More optimistically, such X-ray shock breakout emission could already have appeared in some X-ray afterglows of short GRBs, probably manifesting as an early X-ray flare. It will be interesting to search for such candidates. \acknowledgements This work is supported by the National Basic Research Program of China (973 Program, grant 2014CB845800), the National Natural Science Foundation of China (grant No. 11473008), and the Program for New Century Excellent Talents in University (grant No. NCET-13-0822).
\section{INTRODUCTION\vspace{-2ex}} \normalsize Recent advancements in materials studies and the field of terahertz spectroscopy have led to the widespread use of optics, emitters, and equipment for the THz frequency range. This has resulted in improvements in device performance and has created a demand for the development of less expensive, but superior equipment \cite{thzimportance,terahertzreview}. When probing the properties of a material, polarized electromagnetic radiation allows for the easy isolation of certain information in a transmission or reflection measurement \cite{polarizermathandapp}. Polarizers are an important optical component because the accuracy of measurements can depend on the degree to which the electromagnetic waves are polarized. \par Due to the longer wavelength of THz spectroscopy, polarizers for the THz range can be made from a wire grid. The most common commercial wire grid polarizers are made from tungsten wire \cite{useofwiregridpolarizers} and they tend to be expensive \cite{thorlabs}. If a high degree of polarization is required, it is likely that multiple polarizers will need to be used in precise measurements. Although there have been recent advances in the technology \cite{firstterahertzdevice, micrometer, micrometerr, Nanotubes, analysis}, there is interest in developing cheaper and more effective polarizers for THz spectroscopies. \par In this paper, we describe a simple and inexpensive printing technique to create polarizers made from lines of silver ink printed on Kapton film. This technique has also been used to print small circuits, electronics \cite{inkjet, printingtech} and - in a related fashion - THz range metamaterial structures \cite{Metamaterials}. Using time domain terahertz spectroscopy (TDTS), we studied the effect of three parameters on the capability of polarizers: the spacing between the silver ink lines (G), the width of the ink lines (W), and the number of passes with the ink-jet printer (L). We also studied the effect that stacking multiple polarizers had on the degree of polarization (DoP). \par \section{METHODS AND MATERIALS\vspace{-2ex}} Numerous proof-of-concept polarizer prototypes were inkjet printed on polyimide substrate (Dupont Kapton 500 HN, 5 mil). Each sample was 1 cm $\times$ 1 cm in size and up to 10 were printed at once. Sun Chemical SunTronic EMD5730 ink was printed with a drop spacing of 20 $\mu m$, and each pattern layer took approximately 2 to 4 minutes to print. After printing, the samples were heated on hot plate at 90$\degree$C for 30 minutes to evaporate the solvent. Once dry, samples were sintered at 200$\degree$C for 2 hours.\par \begin{figure}[!ht] \center \includegraphics[width = 0.67\linewidth]{Polarizer.jpg} \caption{ \small An image of the 40 $\mu m$ gap, 40 $\mu m$ width polarizer with 4 layers (40G/40W/4L). The period of the polarizer is 80 $\mu m$; however, the gap between the lines is slightly smaller than the expected 40 $\mu m$ due to running of excess silver ink while printing. Implications of this are discussed in the text.} \end{figure} \normalsize In TDTS, an infrared femtosecond laser pulse is split into two paths that sequentially excite a pair of photoconductive antenna called "Auston" switches. The first switch generates a mostly vertically-polarized THz pulse, which then travels through the sample. The second antenna receives the THz pulse and measures its electric field \cite{tdts}. 
Calculating the transmission of the polarizer with the printed lines oriented perpendicular ($T_{pass}$) and parallel ($T_{polarize}$) to the polarization of the incoming THz radiation allows for the calculation of the DoP of the polarizer. We define \par \begin{equation} \label{DoP} DoP = \frac{T_{pass} - T_{polarize}}{T_{pass} + T_{polarize}}. \end{equation} To perform the measurements, the polarizers were sandwiched between two metal disks, each with an 8 mm aperture, for easy alignment. This assembly was placed in a rotation optic stage to allow for manual angle adjustments. When measuring more than one polarizer at a time, multiple metal-disk-and-polarizer sandwiches were placed in a single rotation optic stage. Conventional wire grid polarizers were placed in front of and behind the polarizer in the THz beam path to remove all horizontal components of the incident electric field. The manual alignment of stacked polarizers was performed under an optical microscope. \par The TDTS spectrometer on which the measurements were performed reliably measures over the frequency range of 0.2 - 2.2 THz \cite{polarizermathandapp}; however, the polarizers used in this study limit this range. To determine the effective range of measurement, multiple scans for each polarizer geometry were compared. If there was at least 97$\%$ agreement between the scans at a certain frequency, this frequency was considered within the effective range of measurement and the effective range of polarization. This range was determined to be at least 0.3 THz to 1 THz for all polarizers. \section{RESULTS AND DISCUSSION\vspace{-2ex}} Measurements were carried out on 8 different polarizers, each with one of the 8 possible combinations of the parameters studied. The gap (G) between the printed lines was either 40 $\mu m$ or 80 $\mu m$. The width (W) of the lines was also either 40 $\mu m$ or 80 $\mu m$. The number of printed layers (L) was either 2 or 4. By measuring the various combinations of these polarizers, we determined which parameters were optimal for polarization. \par As mentioned previously, the observed gap size was smaller than expected, especially for the 4-layer polarizers. With the 80 $\mu m$ width samples there was significant bleeding between ink lines, which likely reduced the transmission and DoP of these polarizers. \par \begin{figure}[!ht] \center \includegraphics[width = 0.67\linewidth]{DoPFrequencyDependence.jpg} \caption{ \small A comparison of the degree of polarization (DoP) of polarizers with different parameters over the effective frequency range. The red line depicts two 40 $\mu m$ gap, 40 $\mu m$ width, 4 layer (40G/40W/4L) polarizers stacked and shows a very high DoP along most of the frequency range.} \end{figure} \normalsize Fig. 2 shows a comparison of the DoP as a function of frequency for a sampling of the polarizers measured in this experiment. These polarizers represent the range of polarization quality and show the effectiveness of stacking polarizers. The red curve depicts two stacked 40 $\mu m$ gap, 40 $\mu m$ width, 4 layer (40G/40W/4L) polarizers. The resulting DoP is nearly identical to $1 - (1 - DoP)^2$, where DoP is that of a single 40G/40W/4L polarizer. This indicates that stacked polarizers behave as a series of independent polarizers.
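As a minimal illustration (our own sketch, with hypothetical transmission values rather than measured ones), the DoP of Eq.~(\ref{DoP}) and the stacking relation quoted above can be evaluated as follows:
\begin{verbatim}
# DoP from pass/polarize transmissions (Eq. 1) and the stacking rule
# 1-(1-DoP)^n quoted in the text for n independently acting polarizers.
def dop(t_pass, t_polarize):
    return (t_pass - t_polarize) / (t_pass + t_polarize)

def stacked_dop(single_dop, n):
    return 1.0 - (1.0 - single_dop) ** n

# Hypothetical transmissions, for illustration only:
d1 = dop(t_pass=0.90, t_polarize=0.05)
print(d1, stacked_dop(d1, 2))
\end{verbatim}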
\par \begin{figure}[!ht] \center \includegraphics[width = 0.32\linewidth]{THzGap.jpg} \includegraphics[width = 0.315\linewidth]{THzWidth.jpg} \includegraphics[width = 0.32\linewidth]{THzLayers.jpg} \caption{\small Degree of polarization (DoP) at 0.7 THz as a function of the (a) gap size, (b) ink line width, and (c) number of printed layers.} \end{figure} \normalsize Fig. 3 depicts the result of holding two parameters constant and varying the third. In Fig. 3a the gap between lines is varied and a strong DoP dependence is revealed; with a decrease in the gap, there is an increase in DoP. Fig. 3b also shows an inverse relationship between DoP and the width. Fig. 3c shows a slight increase in polarization with an increase in the number of layers printed; however, this may be attributed to the increase in the number of layers causing slight ink bleeding and larger ink line width. As long as the ink lines are unbroken and opaque to THz radiation with just 2 layers, the polarization may not be strongly dependent on the number of layers. \par Effective medium theory can be used to model the dependence of the properties of the polarizers. Here we use the same approach as detailed in Ref. \cite{analysis}. First, we define $T_{polarize}$ and $T_{pass}$: $T_{polarize} \approx \frac{4n_{\parallel}}{(1+n_{\parallel})^2}$, $T_{pass} \approx \frac{4n_{\bot}}{(1+n_{\bot})^2}$. Here, $n_{\parallel}$ is the effective index of refraction when the polarizer's conductive ink lines are parallel to the electric field of the THz radiation, and $n_{\bot}$ is the effective index of refraction for the perpendicular alignment. By approximating the ink lines as long rectangular prisms and the wavelength of the THz radiation as much greater than the period ($G + W$) of the polarizers, we can use a low-order approximation given by the following two equations, \begin{equation} \label{parallel index of refraction} n_{\parallel} = \sqrt{n_{0}^2(1-D) + n_m^2D} \end{equation} \begin{equation} \label{perp index of refraction} n_{\bot} = \sqrt{\frac{n_0^2 n_m^2}{n_m^2(1-D) + n_0^2D}} \end{equation} where $D = \frac{W}{G + W}$, $n_m$ is the index of refraction of the ink, and $n_0$ is the index of refraction of the material between the ink lines. Using these equations, it can be shown that $n_{\parallel} \approx n_{m} \sqrt{D}$ and $n_{\bot} \approx \frac{n_0}{\sqrt{1-D}}$, with $n_0 \approx 1$. Substituting back into Eq. (\ref{DoP}), we find that \begin{equation} \label{DoP approximation} DoP \approx \frac{\sqrt{G}(G+W) + n_m^2 W \sqrt{G} - n_m G \sqrt{W} - n_m\sqrt{W}(G+W)}{\sqrt{G}(G+W) + n_m^2 W \sqrt{G} + n_m G \sqrt{W} + n_m\sqrt{W}(G+W) + 4n_m\sqrt{GW(G+W)}} \end{equation} By fixing $W$ and increasing $G$, we find that DoP decreases, and by fixing $G$ and increasing $W$, DoP decreases as well for reasonable values of $n_m$. Additionally, this equation implies that the DoP should decrease more quickly for increasing $G$ than for increasing $W$. This is consistent with our results. \begin{figure}[!ht] \center \includegraphics[width=0.49\linewidth]{StackedPolarizersDoP.jpg} \includegraphics[width=0.49\linewidth]{THz_StackedDoP.jpg} \caption{\small (a) Plot showing the effect on 1-DoP of stacking 40G/40W/4L polarizers (the DoP increases as the number of polarizers increases). (b) 1-DoP at 0.7 THz as a function of the number of stacked polarizers. Each additional polarizer reduces 1-DoP by nearly one order of magnitude.} \end{figure} \normalsize
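To make the effective-medium expressions above concrete, the following sketch (our own; the ink index $n_m$ is a placeholder, not a measured value) evaluates the predicted DoP for the two gap sizes studied here at a fixed 40 $\mu m$ line width, reproducing the strong gap dependence seen in Fig. 3a:
\begin{verbatim}
# Effective-medium DoP estimate: effective indices for parallel and
# perpendicular alignment, the corresponding transmissions, and the DoP.
import numpy as np

def effective_dop(G, W, n_m, n_0=1.0):
    D = W / (G + W)                         # metal filling fraction
    n_par  = np.sqrt(n_0**2 * (1 - D) + n_m**2 * D)
    n_perp = np.sqrt(n_0**2 * n_m**2 / (n_m**2 * (1 - D) + n_0**2 * D))
    t_pol  = 4 * n_par  / (1 + n_par)**2
    t_pass = 4 * n_perp / (1 + n_perp)**2
    return (t_pass - t_pol) / (t_pass + t_pol)

for G in (40.0, 80.0):                      # gap in microns, width fixed
    print(G, effective_dop(G, W=40.0, n_m=50.0))
\end{verbatim}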
Fig. 4 depicts the result of investigating the effect on DoP of stacking the best individual polarizer: 40G/40W/4L. Fig. 4a shows the effect of stacking polarizers over the effective THz frequency range. For each polarizer added, 1-DoP is reduced by nearly one order of magnitude across the entire effective frequency range. This proceeds until the amount of light passed through starts approaching the noise level, which occurred with 4 stacked polarizers. A frequency cut at 0.7 THz was taken and 1-DoP plotted against the number of stacked polarizers (Fig. 4b). There is a linear relationship between the log of 1-DoP and the number of stacked polarizers. Assuming this linear relationship holds, we can predict with reasonable accuracy the DoP for larger numbers of polarizers. Additionally, the fit implies an 86$\%$ decrease in 1-DoP for each additional polarizer. The inaccuracy in the fit for $n = 4$ could be due to the measurement of 4 stacked polarizers nearing the noise floor of our measurement technique. \par Since the polarizers were printed on Kapton films, and this material is nearly transparent in the THz frequency range, there is little reduction in transmission due to the polarizers. At 0.7 THz, the highest and lowest measured transmissions were those of the 40W and 4L polarizers, with transmissions of 0.88 $\pm$ 0.02 and 0.74 $\pm$ 0.17, respectively. The transmission for a single 40G/40W/4L polarizer at 0.7 THz is 0.89 $\pm$ 0.01, with an average transmission across the effective frequency range of 0.93 $\pm$ 0.02. Using this value to calculate the expected transmission for 4 stacked 40G/40W/4L polarizers results in a transmission of 0.75. This is similar to the measured value of 0.792 $\pm$ 0.003. Since these polarizers are inexpensive and easy to make with the correct machinery, 10-15 of these polarizers can be stacked to achieve ultra-high extinction ratios. Stacking, for example, 10 of these polarizers may only reduce transmission to 56$\%$ and could yield an extinction ratio of over $2 \times 10^{8}$:1, instead of the 2300:1 achieved with 4 polarizers. This far surpasses any individual THz range polarizer and could, with correct alignment, be set up in a single optic much like a wire-grid polarizer. This stack of polarizers would be less fragile, have an ultra-high DoP/extinction ratio, and cost much less than current commercial polarizers. \section{CONCLUSION\vspace{-2ex}} We have characterized the polarizing capability in the THz frequency range of novel polarizers made from silver-nanoparticle ink printed on Kapton film. We found that a decrease in the gap between ink lines, a decrease in the ink line width, and an increase in the number of layers result in a higher degree of polarization within the parameter limits of our experiment and the printing method used. We demonstrated that these simple polarizers, when stacked, are comparable to commercial polarizers. The polarizer topology made of lines with a gap of 40 $\mu m$ and a width of 40 $\mu m$ is highly effective in the 0.3-1 THz frequency range. Further reduction in gap size may also increase the degree of polarization, and further stacking of these polarizers may push the achievable degree of polarization even higher.\par The THz instrumentation development was funded by the Gordon and Betty Moore Foundation through Grant GBMF2628 to NPA. The authors would like to thank the National Science Foundation, the Defense Threat Reduction Agency, and the Semiconductor Research Corporation for their support with the printing method used in this work.
\section{Introduction} JavaScript is one of the fundamental technologies underpinning the world wide web today. From its humble beginnings as a scripting language to support basic interactive content, it has matured to the point where it powers large applications for multi-billion dollar businesses. In addition to client-side JavaScript run in the user's browser, server-side JavaScript is becoming increasingly popular for high-throughput, low-latency web applications. The JavaScript language (actually ECMAScript) is one of the most popular languages today\footnote{\url{https://github.com/blog/2047-language-trends-on-github}}. The increasing complexity of JavaScript applications and the deployment in environments with high performance requirements has driven the development of JavaScript compilers that produce more efficient, highly-optimised code. These compilers are complex pieces of software themselves and make a plethora of configurable parameters available to the user -- one size does not fit all, and how exactly a piece of code should be optimised may depend on the particular application and execution environment. While many JavaScript code optimisers exist, these usually focus on ``compressing'' the source code so that it can be transferred from the server to the browser more efficiently by means of source-level transformations. Such optimisations do not affect the semantics of the code nor how it is compiled, and they usually do not improve performance in terms of running time. There are many benefits to optimising not only the code but also the JavaScript compiler for particular applications. On mobile devices, power consumption is a major issue, and optimised code can help reduce it. On a server, reducing the running time of a software component means that more transactions can be supported on the same hardware. On a desktop machine, lag in interactions can be reduced and user interfaces be made more responsive. In the vast majority of applications, the JavaScript compiler is run in its default configuration, which has been chosen by its developers to achieve robust performance across a broad range of use cases. While these default settings will provide reasonable performance in most situations, we demonstrate that often, substantial gains can be realised easily, by searching the configuration space for compiler parameter settings that better optimise the JavaScript code produced for a particular application. With little effort, applications can be made more efficient and consume less resources. This effect even holds for the large, heterogeneous benchmark sets used during development of the JavaScript engines themselves, upon which the default settings are purportedly based. Compared to traditional applications of automated algorithm configuration, JavaScript code often runs for only relatively short periods of time, but very frequently. The same code can be run millions of times a day, each time a website is loaded or a request is made to a server. Even small improvements translate to massive aggregate savings in resources. We apply state-of-the-art machine learning techniques for automatic parameter configuration to the two main JavaScript compilers. On a range of popular and representative benchmarks, we show that performance can be improved by more than 35\% even with relatively modest configuration effort, without any modification to the JavaScript source code under consideration, or of the JavaScript engine running it, other than the change in parameter configuration. 
\section{Background} The idea of optimising the configuration of a compiler for a particular application or set of applications is not new. The Milepost GCC project~\cite{fursin_milepost_2011} is perhaps the most prominent example and uses machine learning to dynamically determine the best level of optimisation. In an iterative process, it can improve execution time, code size, compilation time and other metrics. The approach has been integrated into the widely-used GCC compiler. Other approaches that optimise the code generation for C programs include \cite{haneda_automatic_2005,pan_fast_2006,plotnikov_automatic_2013}. While most of these optimise the GCC compiler, there exists some work on LLVM as well~\cite{fursin_collective_2014}. Another focus of research for automatic dynamic optimisation of compiled code has been the Jikes Java compiler~\cite{alpern_jikes_2005}. \citet{hoste_automated_2010}~use multi-objective evolutionary search to identify configurations that are Pareto-optimal in terms of compilation time and code quality. \citet{cavazos_method-specific_2006}~learn logistic regression models that predict the best optimisation to apply to a method. \citet{kulkarni_mitigating_2012}~use artificial neural networks to determine the order in which a set of optimisations should be applied during compilation. A major concern with all compiler configuration optimisation approaches is the computational effort required to determine a good or optimal configuration. If this is too large, any benefits gained through the optimisation may be negated. One approach to reducing the initial overhead is to move the configuration process online and to learn to identify good configurations over successive compilations, but other approaches have been explored in the literature (see, e.g.~\cite{thomson_reducing_2010,ansel_siblingrivalry_2012,tartara_continuous_2013}). Compilers that translate JavaScript to native code are relatively new compared to compilers for more established languages likes C. While they are also highly optimised and, in the case of JavaScriptCore through the use of the LLVM framework, leverage at least some of the benefits decades of optimisation effort has brought to compilers for other languages, we believe that performance improvements over the default configuration are to be gained more easily here. Furthermore, due to the widespread use of JavaScript in applications with hundreds of millions of end users (such as web browsers), any performance improvements are likely to be impactful. \subsection{JavaScript Optimisation} Existing JavaScript optimisers, such as Google's Closure Tools\footnote{\url{https://developers.google.com/closure/}} and Yahoo's YUI compressor\footnote{\url{https://yui.github.io/yuicompressor/}}, focus on source code transformations that do not alter the syntax or semantics of the code, but compress the representation by shortening identifiers, removing white space or inlining code. The aim of these optimisations is to reduce the size of the code that has to be transferred from the server to the user's browser, thereby reducing the load time of the page. It focuses on efficiency \emph{before} the code is run, but does nothing to improve performance \emph{while} the code is running. Indeed, many of those tools and techniques are not specific to JavaScript, but are also applied to other resources that are transferred to the client when a web page is loaded, such as Cascading Stylesheets (CSS). 
In contrast, what we propose here leverages the specific configuration options of JavaScript engines to optimise the actual runtime behaviour and efficiency of the code. One attractive aspect of our approach is that it naturally complements any extensions implemented to an existing JavaScript engine (by performing our automated configuration procedure again), and is able to search for improving engine configurations while consuming commodity compute cycles, without significant impact on development and engineering effort. Running an automated configuration procedure on a commodity compute cluster for a week is significantly cheaper than the salary of even a single engineer for the same period, and optimising the engine configuration automatically frees up human development resources, which can then be used to further enhance the JavaScript engine with new or improved optimisation mechanisms. \todo{CF}{I feel as if there is a small amount more to say here regarding the suitability of JavaScript engine parameters to automated algorithm configuration. Specifically, that we are in essence solving extremely similar (or truly identical!) problem instances potentially hundreds of millions of times. This would be extremely strange in, say, the SAT setting. We do mention this a bit in the introduction, but I think it should be stressed and this might be a good spot.} \todo{LK}{I don't think that this makes JS inherently more suitable -- we can't expect the massive gains seen in other domains. For real-world stuff, this is probably even worse as a large part will be IO.} \subsection{Automated Algorithm Configuration} Most software has switches, flags and options through which the user can control how it operates. As the software becomes more complex or is used to solve more challenging and diverse problems, the number of these options also tends to increase. While some of these parameters control the input/output behaviour of a given piece of software or algorithm, others merely affect efficiency in terms of resource use. The algorithm configuration problem is concerned with finding the best parameter configuration for a given algorithm on a set of inputs, where the definition of ``best'' can vary, depending on the given application scenario. In many practical cases, the goal is to achieve better performance, and this is how we use algorithm configuration here -- we want to achieve the same functionality, but with reduced resource requirements. Specifically, in this work we focus on minimizing the CPU time required, but in principle, any scalar measure of performance can be used. Finding the best parameter configuration for a given algorithm is a long-standing problem. Humans tend to be bad at solving it -- evaluating parameter configurations requires substantial effort, and interactions between parameters may be complex and unintuitive. \citet{minton_automatically_1996} notes that, \begin{quote} ``Unlike our human subjects, [the system] experimented with a wide variety of combinations of heuristics. Our human subjects rarely had the inclination or patience to try many alternatives, and on at least one occasion incorrectly evaluated alternatives that they did try.'' \end{quote} Fortunately, there exist many automated procedures for algorithm configuration. Perhaps the simplest approach is to try all combinations of parameter values. 
This approach is known as a full factorial design in the statistics literature on experimental design and as grid search in computer science (specifically, in machine learning); its main disadvantage lies in its high cost -- the number of configurations to be evaluated grows exponentially with the number of parameters and their values. For most practical applications, including the ones we consider in the following, complete grid search is infeasible. A commonly used alternative is simple random sampling: Instead of evaluating every combination of parameter values, we randomly sample a small subset. This is much cheaper in practice and achieves surprisingly good results~\cite{bergstra_random_2012}. Indeed, in machine learning, random sampling is a widely used method for hyper-parameter optimisation. Unfortunately, when searching high-dimensional configuration spaces, random sampling is known to achieve poor coverage and can waste substantial effort evaluating poorly performing candidate configurations. A more sophisticated approach to algorithm configuration is provided by so-called racing methods~\cite{birattari_racing_2002}, which iteratively evaluate candidate configurations on a series of inputs and eliminate candidates as soon as they can be shown to significantly fall behind the current leader of this race. Local search based configurators, on the other hand, iteratively improve a given configuration by applying small changes and avoid stagnation in local optima by means of diversification techniques (see, e.g., \cite{hutter_paramils_2009}). More recently, model-based algorithm configuration methods have gained prominence. These are based on the key idea of constructing a model of how the parameters affect performance; this empirical performance model is then used to select candidate configurations to be evaluated and updated based on the results from those runs. Arguably the best known model-based configurator (and the current state of the art) is SMAC~\cite{hutter_sequential_2011}, which we use in the following. SMAC and the other general-purpose algorithm configuration methods mentioned above have been applied with great success to a broad range of problems, including propositional satisfiability~\cite{hutter_automatic_2007}, mixed integer programming~\cite{hutter_automated_2010}, machine learning classification and regression~\cite{thornton_auto-weka_2013}, and improving the performance of garbage collection in Java~\cite{lengauer_taming_2014}. The existence of effective algorithm configuration procedures has implications for the design and development of high-performance software. Namely, rather than limiting design choices and configurable options to make it easier (for human developers) to find good settings, there is now an incentive to introduce, expose and maintain many design choices, and to let automated configuration procedures find performance-optimized configurations for specific application contexts. This is the core idea behind the recent \emph{Programming by Optimization (PbO)} paradigm~\cite{pbo}. However, if software is not developed using specific tools supporting PbO, the application of automated configuration procedures requires the manual specification of a \emph{configuration space} based on the definitions of and constraints on all configurable parameters. 
For complex and highly parameterised software, such as the JavaScript engines we consider in this work, this process can be somewhat involved, since it not only involves collecting the names and domains (i.e., permissible values) for all parameters, but also conditionality relations between them (e.g., parameter $a$'s value only matters if parameter $b$ has value $x$), and constraints that rule out certain configurations (e.g., configurations known to cause crashes or faulty behaviour). Furthermore, in typical applications of automated algorithm configuration, developers need to carefully construct a set of `training' inputs that is representative of those encountered in the intended application context of the algorithm or software to be configured. If automated configuration is applied to produce a performance-optimised configuration using training inputs unlike those seen in typical use, the resulting configuration is unlikely to perform as well in the actual application as on the training set used as the basis for configuration. (This, of course, also holds for manual configuration, but the effect tends to become more pronounced if more effective optimisation methods are used.) Interestingly, JavaScript engine parameter optimisation (and, more generally, certain flavours of compiler optimisation) differs from most other applications of automated algorithm configuration, in that it makes sense to use a training set consisting of a single input in the form of a program source, whose performance is to be optimised by means of compilation and execution with specific engine parameters. \todo{HH}{Is it just compilation or also execution?CF: Fixed.} Consider a popular Node.js application running a JavaScript workload that does not change appreciably for each request it receives. Any performance increases on that particular workload are of immediate, significant benefit, and performance decreases on other hypothetical workloads are irrelevant. These situations are ideal for our approach, as they allow for the performance gains achieved in offline performance optimisation to be leveraged across potentially hundreds of millions of future runs of the software thus optimised. \section{Automated Configuration of JavaScript Engines} \subsection{JavaScript Engines} We consider two state-of-the-art JavaScript engines in this work; JavaScriptCore\footnote{\url{https://trac.webkit.org/wiki/JavaScriptCore}} and Google's V8\footnote{\url{https://code.google.com/p/v8/}}. This choice was motivated by the popularity and availability of these engines, rather than absolute performance. We note that our goal was not to compare the performance of the two engines, but rather to investigate to what extent the default configuration of each can be improved. JavaScriptCore (or JSC) is an optimising JavaScript virtual machine developed as the JavaScript engine for WebKit; it is used in Apple's Safari browser on both OS X and iOS, as well as in many other Apple software projects, web browsers, and in a WebKit extension of Node.js. It contains a low-level interpreter (LLInt), a simpler baseline just-in-time (JIT) compiler, another JIT compiler optimizing for low latency (DFG JIT), and a JIT compiler optimizing for high throughput (FTL JIT). All of these components can be active simultaneously for different blocks of code, based on execution thresholds, and blocks can be optimised (and deoptimised) between them many times. 
In fact, a recursive function can be executing in different JITs (or the LLInt) simultaneously at different levels of the recursion. Our JSC parameter space contains 107 parameters (Table~\ref{tab:compiler-parameter-spaces}), where most of the parameters have numerical domains. These numerical parameters mostly control counters and thresholds for activating various functionality, and for triggering optimisation/deoptimisation between the LLInt and the various JITs. \begin{table} \begin{center} \begin{tabular}{r@{\hskip 2em}r@{\hskip 1em}rrr} \input{tables/compiler-parameter-spaces} \end{tabular} \caption{ For each of the two JavaScript engines considered in this work, we give the total number of parameters in the configuration space as well as how many have Boolean, integer and real-valued domains, respectively. (There are no parameters with categorical domains in either configuration space.) } \label{tab:compiler-parameter-spaces} \end{center} \end{table} The V8 JavaScript engine was initially developed for Google's Chrome browser and is now used in other web browsers such as Opera, in server-side applications using projects like Node.js\footnote{\url{https://nodejs.org/}} and as a library embedded in other software applications. V8 is somewhat unique in that it does not contain an interpreter, but instead compiles JavaScript code blocks directly to native machine code when they are first encountered, which is then optimised continuously over the course of running on a given input. Our interpretation of V8's parameter configuration space contains 173 parameters, primarily Boolean choices to enable or disable specific functionality. The remaining integer parameters control various aspects of that functionality, including inlining levels, loop unrolling, garbage collection thresholds and stack frame sizing. In order to specify the parameter configuration space for our two JavaScript engines, JSC and V8, we determined the name and type of each parameter, based on the documentation and command-line parser source code. Unfortunately, domains for the numerical parameters are not specified by the developers, and only few conditional dependencies are explicitly described. We therefore had to resort to educated guesses; when in doubt, we aimed to err on the side of larger domains (within reason). Each space was then refined by sampling 100\,000 random configurations and running the engines on a simple problem instance to check for segmentation faults and other abnormal behaviour. For both engines, many crashing configurations were thus identified, leading to iterative refinement of the configuration spaces through domain reduction as well as by adding forbidden parameter combinations and conditional parameter dependencies. \subsection{Benchmark Instances} We have selected four benchmark sets containing heterogeneous JavaScript problem instances, identified as relevant to the JavaScript engine development community and end users. We aimed to avoid bias towards benchmark sets preferred by particular development teams. In particular, we included the benchmark sets developed by the developers of JSC and V8. Our benchmark suite comprises the Octane 2.0~\footnote{\url{https://developers.google.com/octane/}}, SunSpider 1.0.2~\footnote{\url{https://www.webkit.org/perf/sunspider/sunspider.html}}, Kraken 1.1~\footnote{\url{http://krakenbenchmark.mozilla.org}} and Ostrich~\cite{khan_using_2014} benchmark sets. 
We created harnesses that allowed us to execute and measure these benchmarks programmatically, outside of a browser environment. We note that the techniques we use here readily extend to browser-based settings, although the integration effort would be higher. Octane 2.0 is Google's JavaScript compiler benchmark suite and includes 18 real-world benchmarks that range over different types of tasks, including a 2D physics engine, a PDF rendering engine, a portable game system emulator and a regular expression generator, as well as instances testing, e.g., node allocation and reclamation. The SunSpider 1.0.2 benchmark set was developed by the Web\-Kit team and contains 26 problem instances representing a variety of different tasks that are relevant to real-world applications, including string manipulation, bit operations, date formatting and cryptography. Kraken 1.1 was developed by Mozilla and contains 14 problem instances that were extracted from real-world applications and libraries. These benchmarks primarily cover web-specific tasks (e.g., JSON parsing), signal processing (e.g., audio and image processing), cryptography (e.g., AES, PBKDF2, and SHA256 implementations) and general computational tasks, such as combinatorial search. Ostrich is based on benchmark suites for important numerical computation tasks, such as OpenDwarf~\cite{feng_opencl_2012}. While the other benchmarks focus on the types of computations that are common on the web, Ostrich provides a way to measure performance on computations that are becoming increasingly relevant as JavaScript gains in popularity and is deployed in new contexts. \subsection{Experimental Setup} All of the experiments reported in the following were performed using a single Microsoft Azure Cloud instance of type ``G5'' running a standard installation of Ubuntu 15.04. This instance type has two 16-core processors with a total of 448GB of RAM; it is the sole user of the underlying hardware, based on a one-to-one mapping to two Intel Xeon E5-2698A v3 processors. We use JavaScriptCore r188124 and V8 version 4.6.40, release builds compiled from source using GCC 4.9.2. Our version of the SMAC configurator is v2.10.03\footnote{\url{http://www.cs.ubc.ca/labs/beta/Projects/SMAC/}}, run using Oracle Java JDK 1.8.0\_25. For each of our configuration scenarios, we performed 25 independent runs of SMAC with a 1 CPU-day runtime cutoff, allocating a maximum of 60 CPU seconds to each run on a particular problem instance. The objective value to be minimised by SMAC is the so-called \emph{Penalised Average Runtime} (PAR) score, which penalises timed-out and crashing runs by assigning them an objective value of 10 times the runtime cutoff (PAR-10), and otherwise assigns an objective value equal to the CPU time used. This greatly disincentivizes bad and invalid configurations, in order to bias the configurator against selecting them. The incumbent configuration with the best PAR-10 score reported by SMAC after termination was selected as the final result of the configuration process, and a subsequent validation phase was performed to run both the JSC/V8 default configuration and the optimised configuration selected by our procedure on the entire problem instance set. For these validation runs, we perform 100 runs per configuration and benchmark instance, and compute the PAR-10 score across all runs for each configuration.
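For concreteness, the PAR-10 computation described above can be summarised in a few lines; the following sketch is our own illustration and is not SMAC's implementation:
\begin{verbatim}
# PAR-10: timed-out or crashing runs are charged 10x the cutoff,
# successful runs their measured CPU time; the score is the mean.
def par10(runtimes, statuses, cutoff=60.0):
    scores = [t if s == "ok" else 10.0 * cutoff
              for t, s in zip(runtimes, statuses)]
    return sum(scores) / len(scores)

# e.g. two successful runs and one timeout under a 60 s cutoff:
print(par10([1.2, 0.9, 60.0], ["ok", "ok", "timeout"]))   # 200.7
\end{verbatim}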
We require repeated runs to obtain statistically stable results: individual runs are very short and subject to substantial noise from the environment, e.g.\ operating system jobs and contention for shared memory. Through repeated runs and averaging, we achieve more realistic results that are less affected by very short and very long outlier runs. \section{Empirical Results} The purpose of our experiments is twofold. First, we intend to demonstrate that the performance \emph{across a set of diverse benchmarks} can be improved by using a different parameter configuration than the default. This would indicate that compiler developers may want to adjust the default settings with which they ship their compilers, or that users who focus on particular types of applications may wish to do so themselves. It also demonstrates the potential for techniques that periodically adjust the configuration of the JavaScript engine based on the types of JavaScript code run recently. The second part of our experiments focusses on \emph{specific individual benchmarks} and shows that performance can be improved significantly by specialising the compiler configuration to the specific piece of code to be run, rather than being forced to accept tradeoffs due to competing requirements of a heterogeneous set of benchmarks. This finding can be exploited in two ways: users who run the same piece of JavaScript code over and over again (e.g., in a server-side JavaScript application) can benefit from offline tuning, while at the same time, very short online configuration runs for code that a user's browser accesses frequently can potentially optimise its performance. \subsection{Results on Benchmark Sets} As can be seen in Table~\ref{tab:full-set-configuration-perf}, we obtained substantial performance improvements for JavaScriptCore (JSC) on the Ostrich, Octane and SunSpider benchmark sets, indicating that the default configuration of JSC leaves room for optimisation. This is not the case for V8, where we did not find significant improvements for any of our benchmark sets, which suggests that the default parameter values are already well optimised. This may seem disappointing, but needs to be viewed in light of the fact that compiler developers test against these same benchmarks, and have much incentive, through constant competition, to be successful in their efforts to find the best configurations. It is therefore remarkable that we achieved sizeable performance gains for JSC, even on the SunSpider benchmark developed by the WebKit team (as noted earlier, WebKit uses the JSC engine). \begin{table*} \begin{center} \begin{tabular}{l@{\hskip 2em}rrrr@{\hskip 1em}rrrr} \input{tables/full-set-configuration-perf} \end{tabular} \caption{ Validation results using 100 runs per problem instance for each of our 4 configuration scenarios using complete instance sets.
We give results for the JavaScriptCore and V8 default configurations, as well as for the best configuration obtained by SMAC (as identified by training performance). We note that the configurations found for JSC exhibit significant performance improvements over the entire instance set, while those for V8 show only marginal improvement over the defaults. } \label{tab:full-set-configuration-perf} \end{center} \end{table*} \subsection{Results on Individual Benchmark Instances} When configuring the JavaScript engine parameters for individual instances from our benchmark sets, we obtain much greater improvements than for the complete sets. We selected the five most promising individual instances for the experiments in this section to keep the resource requirements moderate. We chose the instances based on where we observed performance improvements in the experiments that optimised the configuration across the entire benchmark sets. Three of these instances are taken from the Ostrich set: graph-traversal, sparse-linear-algebra, and structured-grid, and two instances stem from the Octane set: PDFjs and Splay. Results from these experiments are shown in Table~\ref{tab:individual-instance-configuration-perf}, and we show additional empirical cumulative distribution functions of running time and scatter plots for the default \emph{vs} optimised configuration in Figure~\ref{fig:ostrich-sparse-rtd-scatter} and Figure~\ref{fig:octane-rtd-scatter}. On Ostrich graph-traversal or structured-grid, not shown in the table and figures, we have not obtained significant performance improvements for either of the two engines. \begin{table*} \begin{center} \begin{tabular}{l@{\hskip 2em}rrrr@{\hskip 1em}rrrr} \input{tables/individual-instance-configuration-perf} \end{tabular} \caption{ Validation results using 100 runs per problem instance for 3 configuration scenarios using a single problem instance, one from our Ostrich set (Sparse Linear Algebra) and two from the Octane set (Splay and PDFjs). We omit two other experiments on Ostrich instances (Graph Traversal and Structured Grid), where neither compiler showed any improvement after configuration. We give results for the JavaScriptCore and V8 default configurations, as well as for the best configuration obtained by SMAC (as identified by training performance). } \label{tab:individual-instance-configuration-perf} \end{center} \end{table*} \begin{figure*} \begin{center} \subfigure[JSC - Octane - PDFjs]{ \includegraphics[width=0.4\textwidth]{plots/jsc-octane2-pdfjs-single-8core-rtd} \label{fig:jsc-octane-pdfjs-rtd} } \subfigure[JSC - Octane - PDFjs]{ \includegraphics[width=0.4\textwidth]{plots/jsc-octane2-pdfjs-single-8core-scatter} \label{fig:jsc-octane-pdfjs-scatter} } \\ \subfigure[JSC - Octane - Splay]{ \includegraphics[width=0.4\textwidth]{plots/jsc-octane2-splay-single-8core-rtd} \label{fig:jsc-octane-splay-rtd} } \subfigure[JSC - Octane - Splay]{ \includegraphics[width=0.4\textwidth]{plots/jsc-octane2-splay-single-8core-scatter} \label{fig:jsc-octane-splay-scatter} } \caption{ For the Octane Splay and PDFjs invidual-instance configuration scenarios, we show empirical CDFs of runtime for 100 runs on the respective problem instance, along with scatter plots vs. the default configuration. 
} \label{fig:octane-rtd-scatter} \end{center} \end{figure*} \begin{figure*} \begin{center} \subfigure[JSC - Ostrich - Sparse LA]{ \includegraphics[width=0.4\textwidth]{plots/jsc-ostrich-sparse-single-8core-rtd} \label{fig:jsc-ostrich-sparse-rtd} } \subfigure[JSC - Ostrich - Sparse LA]{ \includegraphics[width=0.4\textwidth]{plots/jsc-ostrich-sparse-single-8core-scatter} \label{fig:jsc-ostrich-sparse-scatter} } \\ \subfigure[V8 - Ostrich - Sparse LA]{ \includegraphics[width=0.4\textwidth]{plots/v8-ostrich-sparse-single-8core-rtd} \label{fig:v8-ostrich-sparse-rtd} } \subfigure[V8 - Ostrich - Sparse LA]{ \includegraphics[width=0.4\textwidth]{plots/v8-ostrich-sparse-single-8core-scatter} \label{fig:v8-ostrich-sparse-scatter} } \caption{ Considering the Ostrich Sparse Linear Algebra individual-instance configuration scenario, we show empirical cumulative distribution functions (CDFs) of runtime for 100 runs on the respective problem instance, along with scatter plots vs. the default configuration. The CDFs show, as a function of time, the empirically observed probability that a run completes within that amount of time; that is, each run that finishes at a particular time increases the empirical probability at and beyond that time. } \label{fig:ostrich-sparse-rtd-scatter} \end{center} \end{figure*} Overall, the performance improvements in these individual-instance configuration scenarios are surprisingly pronounced. JavaScriptCore achieves a relative performance improvement of 35.23\% over the default configuration on the Octane Splay benchmark, and of 14.76\% on Octane PDFjs. For V8, we observed a 10.13\% improvement over the default on Ostrich sparse-linear-algebra. Overall, these results are remarkable, as even new code optimisation methods often result in performance improvements of only single-digit percentages. We hypothesize that these problem instances differ sufficiently from the other instances in their respective benchmark sets that the resulting configurations cannot successfully be used across those entire sets, but are very effective on the individual instance in question. We present some preliminary results towards identifying the source of these improvements in the following. \subsection{Time to Find Improving Configurations} Even when considering the remarkable performance improvements seen in our individual-instance configuration experiments, there may be some concern about the time required to find these improving configurations, given that we used 25 independent SMAC runs of 1 CPU day to achieve these. Upon further investigation, in all of our individual-instance configuration scenarios, the final optimised configuration was found in less than 3 CPU hours of runtime, with initial improvements over the default configuration typically found in less than 5 CPU minutes. Longer runtimes are required for the complete instance set configuration scenarios, but even in those cases, the final configuration was found in less than 6 CPU hours, with initial improving configurations typically being found in less than 1 CPU hour. In practice, a much smaller configuration budget would be sufficient to achieve qualitatively similar results.
In fact, we observed the first improvements after only a few minutes of configuration. \subsection{Changed Parameter Values} In order to better understand the source of our individual-instance performance improvements, we empirically analysed the parameters changed from their default values using \emph{ablation analysis}~\cite{fawcettHoos2015}. This approach has been previously used successfully to assess the importance of parameter changes observed in applications of automated algorithm configuration techniques to propositional satisfiability, mixed-integer programming and AI-planning problems. Ablation analysis greedily constructs a path through the parameter configuration space from the default to a given target configuration, selecting at each stage the single parameter modification resulting in the greatest performance improvement. The order of the resulting modifications reflects the relative contribution to the overall performance improvements obtained by the configuration process, where later changes may occasionally achieve bigger improvements that would not have been possible before earlier modifications to the default configuration. The three parameter modifications resulting in the greatest performance improvement for the Octane Splay and PDFjs instances are shown in Table~\ref{tab:jsc-octane-splay-ablation} and Table~\ref{tab:jsc-octane-pdfjs-ablation}, respectively. For JavaScriptCore on Octane Splay, the parameter changes that achieved the most significant improvements are related to object tracking and garbage collection. For the Octane PDFjs benchmark instance, the configuration process resulted in modifications to various parameters controlling memory management and the aggressiveness of the code optimisation. We note that numberOfGCMarkers is important in both cases, where the value is changed to 1 from a default of 7. This parameter controls the amount of parallelism in the garbage collector. Here, reduced parallelism avoids overhead and improves overall performance. While the portion of the relative improvement indicated in the tables is approximate due to the nature of the ablation analysis procedure, it appears that in both cases, over 90\% of the observed relative improvement can be explained by the modification of the three parameters shown. This is consistent with previous results using ablation analysis, where in many scenarios, the vast majority of the improvement was observed to be achieved by modifying a small set of parameters. Of course, identifying these parameters in \emph{post hoc} ablation analysis is much easier than determining them within the configuration process that gave rise to the optimised configurations thus analysed. \begin{table*} \begin{center} \begin{tabular}{l@{\hskip 2em}r@{\hskip 2em}rrr} distance from default & parameter modified & from & to & approx. portion of rel. impr. \\ \midrule 1 & numberOfGCMarkers & 7 & 1 & 38\% \\ 2 & minCopiedBlockUtilization & 0.9 & 0.196 & 47\% \\ 3 & collectionTimerMaxPercentCPU & 0.05 & 0.292 & 6\% \\ \end{tabular} \caption{ Parameters modified from the respective default settings for JavaScriptCore in order to achieve the three highest marginal performance gains on the Octane 2.0 instance ``Splay'', as determined by ablation analysis. Reported marginal improvement is only approximate, as the ablation analysis procedure is not performing the full 100 runs per instance validation as in our configuration experiments. 
\begin{table*} \begin{center} \begin{tabular}{l@{\hskip 2em}r@{\hskip 2em}rrr} distance from default & parameter modified & from & to & approx. portion of rel. impr. \\ \midrule 1 & likelyToTakeSlowCaseMinimumCount& 20 & 56 & 41\% \\ 2 & numberOfGCMarkers & 7 & 1 & 40\% \\ 3 & forceDFGCodeBlockLiveness & false & true & 16\% \\ \end{tabular} \caption{ Parameters modified from the respective default settings for JavaScriptCore in order to achieve the three highest marginal performance gains on the Octane 2.0 instance ``pdfjs'', as determined by ablation analysis. The reported marginal improvements are only approximate, as the ablation analysis procedure does not perform the full 100-runs-per-instance validation used in our configuration experiments. } \label{tab:jsc-octane-pdfjs-ablation} \end{center} \end{table*}

\subsection{Performance under Different Loads} Modern computers have multiple processors, with multiple CPU cores each, and it is desirable to run multiple processes simultaneously in order to take full advantage of the processing power thus provided. However, other factors, such as shared caches, memory bandwidth and the I/O subsystem, can affect performance negatively if too many processes are vying for resources. In order to investigate to what extent such factors may impact our experimental setup, we ran different workload configurations. First, we utilized all 32 cores of the machine used for our experiments by running 32 benchmark experiments in parallel. Second, we ran only 8 experiments in parallel, leaving the remaining cores for operating system processes. \todo{HH}{I've reworded, but still find it unclear what was done. Was it the case that in the first case, 32 runs of the same benchmark instance were done in parallel? If so, we should state it clearly.} The results show that there are significant differences. The graph-traversal instance of the Ostrich benchmark set requires a large amount of memory and sufficient memory bandwidth. With the machine fully loaded, we observe that we easily find a parameter configuration that performs better than the default. On the lightly loaded machine we are unable to do so, and the benchmark runs significantly faster than on the fully loaded machine, even with the improved configuration. This clearly indicates a memory bottleneck that can be mitigated through configuration. The default configuration of JavaScriptCore performs well on the SunSpider, Kraken and Octane benchmarks on the fully-loaded machine, and we were unable to find a better configuration of parameter settings. On the lightly loaded machine, on the other hand, we did find better configurations for SunSpider and Octane. This may indicate that the JavaScriptCore default configuration is optimised for a highly-loaded machine, which is an unlikely scenario when the engine is used inside a browser on a user's desktop or laptop machine. The fact that JavaScriptCore and V8 exhibit different behaviour with respect to how easy it is to improve on their default configurations on machines with different load suggests that the benchmarking and tuning performed by the respective development teams may use different experimental setups. This result shows that the optimisation of compiler flags should be done not only for the machine that the code will be run on, but also for the expected load on that machine -- configuring for a lightly loaded machine will yield different results than configuring for a heavily loaded one.
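As an illustration of how such a load experiment can be scripted, the following minimal sketch launches a chosen number of simultaneous runs of an engine binary on a benchmark and records their wall-clock times. The engine path, benchmark script and command line are placeholders, not the exact setup used in our experiments.

\begin{verbatim}
import subprocess, time
from concurrent.futures import ThreadPoolExecutor

ENGINE = "./jsc"           # placeholder path to the JavaScript engine binary
BENCHMARK = "pdfjs.js"     # placeholder benchmark driver script
FLAGS = []                 # engine parameter settings under test

def timed_run(_):
    start = time.perf_counter()
    subprocess.run([ENGINE, *FLAGS, BENCHMARK], check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

def run_under_load(n_parallel, n_runs):
    # n_parallel simultaneous engine processes emulate a given machine load
    with ThreadPoolExecutor(max_workers=n_parallel) as pool:
        return list(pool.map(timed_run, range(n_runs)))

for load in (8, 32):
    times = run_under_load(load, n_runs=load)
    print(load, "parallel runs, mean wall-clock time:", sum(times) / len(times))
\end{verbatim}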
Furthermore, there is much promise in switching between different configurations based on machine load. \begin{table*} \begin{center} \scriptsize \begin{tabular}{l@{\hskip 2em}r@{\hskip 1em}rrrrrrrrrr} \input{tables/machine-load-stability} \end{tabular} \caption{ Using the Octane pdfjs problem instance, we performed 100 independent runs of the 25 SMAC configurations for JSC, as well as the JSC default configuration. This was repeated 3 times with the same random seeds, first allowing 32 simultaneous runs and again allowing 8 simultaneous runs. We give the PAR10 score for the default configurations, as well as for the 10 best configurations by validation score in each experiment (along with the configuration ID for each). The configuration ID for the ``best training'' configuration of JSC on this instance is 14. It is clear that the best configurations are quite different in the case of 32 simultaneous runs, even with a fixed instance and seeds. As this variability disappears in the case of 8 simultaneous runs, we attribute it to noise from the load (and subsequent cache contention, etc.). } \label{tab:machine-load-stability} \end{center} \end{table*} \section{Conclusions} JavaScript is ubiquitous in the modern world wide web and increasingly spreading into other areas that have traditionally been dominated by other programming languages. It is used client-side in web browsers as well as server-side in backend applications. Performance increasingly matters in practical JavaScript, as applications grow in size and complexity. In part, the success of JavaScript has been due to the availability of highly optimised compilers that produce efficient code that can be executed with minimal overhead. Just-in-time compilation and dynamic optimisations further increase the performance of the code. However, contemporary compilers have a large number of parameters, most of which are only poorly documented. While the default configuration of these parameters provides good performance in most cases, the parameter values need to be optimised for the application at hand to get the best performance in all cases. Exploring this huge and complex parameter space is a daunting task. We apply a state-of-the-art, general-purpose automated configuration procedure with an excellent track record in applications in machine learning and combinatorial optimisation to the problem of finding the best parameter configuration for JavaScript engines for a particular set of problem instances. Sequential model-based optimisation leverages state-of-the-art techniques from statistics, optimisation and machine learning to efficiently and automatically explore the parameter space of an algorithm and to home in on promising configurations quickly. Our experimental evaluation shows that notable performance improvements can be achieved through automated configuration. Specifically, we demonstrate that the performance of JavaScriptCore can be substantially improved on 3 out of 4 heterogeneous benchmark sets in common use for JavaScript compiler benchmarking. We also show that JavaScriptCore (and to a lesser extent V8) can be specialised to obtain runtime gains of up to 35\% on tasks such as PDF rendering. This is particularly significant as we are optimising code that is run millions of times. In contrast, algorithm configuration for combinatorial optimisation problems considers the different setting where each problem instance needs to be solved only once. 
We believe that our results are promising and that our approach enables many interesting applications and follow-up work. We are currently planning additional work, including a broader set of experiments, additional analysis of the parameter space structure, a deeper investigation into the effect of machine load on runtime performance and configuration, and an investigation of the transferability of these configuration results to machines other than those used for training. \subsection*{Acknowledgements} Part of this research was supported by a Microsoft Azure for Research grant. HH also acknowledges support through an NSERC Discovery Grant. \clearpage
\section{INTRODUCTION} \label{Introduction} It is well known that coatings are subjected to \textit{residual stresses}, already present at the end of the deposition process. These stresses have two main origins: \textit{intrinsic stresses}, which are due to the deposition process itself and depend on the deposition conditions and on the mismatch of properties (e.g. lattice parameter) between the coating and the substrate materials \cite{Marques1998}; and \textit{thermal stresses}, due to a thermal expansion mismatch between the coating and the substrate, which depend on the elastic properties of the deposited and base materials and usually arise when the sample is cooled down to room temperature after deposition. When the coated components operate at variable temperatures, additional thermal stresses are generated. These stresses, typically intensifying at the film-substrate interface, can lead to coating failure by either cracking or delamination. Predicting and monitoring these stresses is crucial to guarantee the operational integrity of the coated devices. This requires the knowledge of the elastic moduli and of the coefficient of thermal expansion (CTE) of the materials. In the case of coatings this cannot be taken for granted: firstly, because the thermomechanical properties, which depend on the specific film structure and morphology, can be significantly different from those of the corresponding bulk form and depend on the deposition process; secondly, because, for coatings, direct measurement can be a challenging task. \newline A wide range of techniques is available to investigate the elastic properties of coatings, namely nanoindentation \cite{Ferrè2013, UlHamid} and various acoustic-based techniques \cite{BeghiBook}, including Brillouin spectroscopy \cite{Besozzi2016, Sumanya2017, Ozsdolay2017, Faurie2017}, while little is known about the CTE of films. The standard techniques adopted to measure the CTE of bulk materials (e.g. dilatometry \cite{Huang2005, Jackson2016}) are usually not viable for coatings. Several unconventional techniques have been proposed, such as X-ray diffraction \cite{Zoo2006, Bartosik2017, Lei2017}, ellipsometry \cite{Singh2004} and different optical-based techniques \cite{James2001}. Among them, the optical implementation of the substrate curvature (SC) technique has been shown to be one of the most promising methods \cite{Lei2017, Hang2010, Lima1999, Woehrl2009, Knepper2007, Dutta2016}. This method exploits laser beams to detect changes in the curvature radius of the coating-substrate system upon temperature variations \cite{Chason2001}. The CTE of the coating can then be deduced if the CTE of the substrate and the elastic properties of both the film and the substrate are known (see section 2). \newline In this work, we investigate the CTE and the residual stresses of tungsten (W) coatings deposited by Pulsed Laser Deposition (PLD). W coatings are of particular interest in a wide range of technological applications, such as in microelectronic and optoelectronic devices, as absorption layers in X-ray lithography \cite{Chen2005, Radic2004, Kobayashi}, and in nuclear fusion energy \cite{Boucalossi2014, Ruset2013, Guilhem2016}. Thanks to the high versatility of PLD in tuning many process parameters (e.g. background gas pressure during deposition, laser fluence on target), both mono-elemental and multi-elemental coatings can be grown with tailored nanostructure, from amorphous to nanocrystalline, and morphology, from porous to compact \cite{Besozzi2016, Pezzoli2015, Luo2017, Li2015}.
Here, we focus on W-based coatings with three different nanostructures, namely (i) nanocrystalline W (n-W), (ii) ultra-nano-crystalline W (u-n-W) and (iii) amorphous-like W (a-W), with the aim of highlighting the correlation between the thermal expansion behavior, the residual stresses and the structural properties of the materials. \newline For the coating characterization, we develop an optimized SC setup that allows the CTE determination over a wide range of temperatures (25 - 1000 $^{\circ }$C). An ad-hoc designed vacuum chamber is equipped with an optical system that drives a 2D pattern of parallel laser beams onto the surface of the coated substrate and detects the reflected beams with a CMOS sensor. The beam positions, when the sample is thermally bent, allow the direct determination of the substrate-coating curvature as a function of temperature. From the curvature measurements, the residual stresses and the CTE of the coatings are derived, under the Stoney approximation \cite{Stoney}, for known elastic moduli, which have already been measured by Brillouin spectroscopy (BS) \cite{Besozzi2016}. \section{The principle of obtaining the residual stress and the CTE of the coatings} Upon a temperature variation, the mismatch in the CTE between the coating and the substrate, combined with the dilation constraint represented by the film adhesion to the substrate, leads to a progressive bending of the sample. The total bending depends on the difference between the CTEs of the two materials, on their thicknesses and their elastic moduli, and obviously on the temperature itself; it is well described by the continuum mechanics theory for multilayers \cite{2002Hsueh}. In the case of a bilayer formed by a film much thinner than the substrate, such that the stress within the film can be taken as approximately uniform, the stress within the coating can be expressed in terms of the bending curvature radius $R$ as: \begin{equation} \sigma _{f}(T)=\frac{E_{s}}{1-\nu _{s}}\frac{t_{s}^{2}}{6\,t_{f}}\left(\frac{1}{R(T)}-\frac{1}{R_{0}}\right) \label{eq1} \end{equation} In Eq.~\ref{eq1} the sub-indexes $s$ and $f$ stand for substrate and film respectively, $t$ is the thickness, $R(T)$ the radius of curvature at temperature $T$ and $R_{0}$ the initial radius of curvature at a reference temperature. $E$ is the Young modulus and $\nu$ the Poisson's ratio. Eq.~\ref{eq1} is often known as Stoney's equation \cite{Stoney, Janssen2009}. If the total film stress is only due to the thermal component, it is given by: \begin{equation} \sigma _{f}=\sigma _{thermal}=\frac{E_{f}}{1-\nu _{f}}(CTE_{f}-CTE_{s})\,\Delta T\quad , \label{eq2} \end{equation} and, taking the derivative of eq.~\ref{eq2} with respect to temperature: \begin{equation} \frac{d\sigma _{f}}{dT}=\frac{E_{f}}{1-\nu _{f}}(CTE_{f}-CTE_{s})\quad , \label{eqboh} \end{equation} such that \begin{equation} CTE_{f}=CTE_{s}+\frac{d\sigma _{f}}{dT}\frac{1-\nu _{f}}{E_{f}} \label{eqCTE} \end{equation} \begin{figure}[!t] \centering \includegraphics[width = 0.6\columnwidth]{1} \caption{a) Schematic principle of two initially parallel laser beams reflected by a curved surface. b) The reflected beams are collected by the collecting objective and recorded by the CMOS.} \label{Fig4} \end{figure} Once the film stress is derived from the curvature measurement by eq.~\ref{eq1}, eq.~\ref{eqCTE} is exploited to derive the CTE of the coatings. Equations \ref{eq1}-\ref{eqCTE} are valid only if the elastic moduli are considered as temperature independent.
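As a minimal numerical illustration of eqs.~\ref{eq1}-\ref{eqCTE}, the following sketch converts a set of curvature radii measured at different temperatures into film stresses and extracts the mean CTE from the slope $d\sigma_f/dT$, assuming temperature-independent moduli throughout. The curvature values are placeholders rather than measured data; the substrate constants are those quoted for the Si substrate in the following sections and the film biaxial modulus is that of n-W from Tab.~\ref{Tab2}.

\begin{verbatim}
import numpy as np

# Substrate (Si) and film (n-W) properties; M_f = E_f/(1 - nu_f) is the
# biaxial modulus of the film (values quoted in the text and in the table).
E_s, nu_s, t_s = 160e9, 0.28, 500e-6     # Pa, -, m
t_f, M_f, CTE_s = 400e-9, 527e9, 2.7e-6  # m, Pa, 1/K

# Placeholder curvature radii R(T); not measured data.
T = np.array([25.0, 200.0, 400.0, 600.0])    # deg C
R = np.array([34.0, 27.0, 21.0, 16.5])       # m
R0 = R[0]

# Stoney's equation: film stress from the curvature change.
sigma_f = E_s / (1 - nu_s) * t_s**2 / (6 * t_f) * (1.0 / R - 1.0 / R0)

# Mean CTE from the slope of the stress-temperature curve.
dsigma_dT = np.polyfit(T, sigma_f, 1)[0]
CTE_f = CTE_s + dsigma_dT / M_f
print("mean film CTE = %.2e 1/K" % CTE_f)
\end{verbatim}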
A more realistic approach would clearly consider this temperature dependence. However, it is not trivial to obtain the temperature dependence of the elastic moduli, in particular in the case of coatings, since they can completely differ from those of the corresponding bulk materials. For this reason, here, the obtained CTE refers to the mean value of the CTE over the imposed temperature range. \newline Residual stresses can possibly be superposed to $\sigma _{f}$. If, instead of $R(T)$ and $R_{0}$, the curvature radii after and before deposition are considered in eq.~\ref{eq1}, the residual stress can be determined. \newline The key experimental step is, evidently, the correct determination of the sample curvature; it is performed exploiting an array of parallel laser beams. The procedure of obtaining the curvature is analyzed in the simplest case of two parallel laser beams, with an initial spacing $D_{0}$, that impinge on a sufficiently reflective surface at two points A and B, at a nominal incidence angle $\theta $ with respect to the normal to the surface (see Fig.~\ref{Fig4}a). If the surface is flat ($R=\infty $), reflection occurs at an angle $2\theta $ and the two beams are again parallel, at distance $D_{0}$. If the surface has a convex shape with a finite radius $R$, the two reflected beams are no longer parallel. Simple reflection implies that the angle $\alpha $ between the normals to the surface at the reflection points A and B is related to the nominal incidence angle $\theta$ as: \begin{equation} \sin \alpha =\frac{D_{0}}{2R\cos \theta } \label{eq3} \end{equation} and that the angle between the two reflected laser beams is $4\alpha$. The beams are finally detected by a CMOS sensor, supported by a measurement arm of length $A$. The beams produce on the CMOS screen two spots, at distance d = d$(R)$; for a perfectly flat surface d = d$(R=\infty )$ = d$_{\infty }=D_{0}$. It is intuitive, and it is detailed in the Appendix, that the absolute sensitivity $d$d$/d(1/R)$ increases with the arm length $A$. However, a larger arm length also implies a stronger sensitivity to vibrations and the need of a larger sensor (although d$_{\infty }$ does not increase). The insertion of a converging lens, of focal length $F$, in the measurement arm has been considered, with the objectives of reducing the spot distances, to allow a smaller CMOS sensor, and of limiting the arm length $A$, without losing sensitivity. The lens is at distance $L$ from the sample, and the light sensor is at a further distance $K$: $A=L+K$ (see Fig.~\ref{Fig4}b). In a typical experiment the sample has an initial radius of curvature $R_{i}$, due to the residual stresses, so that the beams are reflected with an angle $\alpha _{i}$; the distance between the spots is shifted by $\Delta $d$_{i}$ from d$_{\infty }$. Imposing a temperature variation, the curvature changes to the final value $R_{f}$, the angle changes to $\alpha_{f}$, and the distance between the spots undergoes a further shift $\Delta$d$_{f}$. The change of the curvature radius can be derived from the final displacement $\Delta$d$_{f}$ exploiting the classical matrix optics used in ray tracing algorithms. This method adopts two approximations: the paraxial one, i.e. the smallness of the deviation angle of the beam with respect to the optical axis of the system, and the thin lens one. Both approximations are fully appropriate: firstly, since (see eq.
\ref{eq3}) $D_{0}\sim 1$ cm and $R$ is at least several meters, $\sin \alpha \precsim 10^{-3}$; secondly, the radius of curvature of the adopted collecting lens is much larger than its thickness. The beams on the sample surface and on the CMOS screen are related by a transfer matrix as follows: \begin{equation} \begin{bmatrix} z^{\prime } \\ \alpha ^{\prime } \end{bmatrix} = \begin{bmatrix} 1-K/F & L(1-K/F)+K \\ -1/F & 1-L/F \end{bmatrix} \begin{bmatrix} z \\ 2\alpha \end{bmatrix} \label{eq4} \end{equation} where $z$, $\alpha $, $z^{\prime }$ and $\alpha^{\prime }$ are the distance of the beam from the optical axis and its deviation angle, respectively on the sample and on the CMOS. Eq.~\ref{eq4} gives \begin{equation} z^{\prime }=(1-K/F)z+2\alpha \lbrack L(1-K/F)+K] \label{eq5} \end{equation} and $\Delta $d$_{f}$ is given by (see Fig.~\ref{Fig4}b) \begin{equation} \Delta d_{f}=2 \times (z^{\prime }(2\alpha _{f})-z^{\prime }(2\alpha_{i}))\quad . \label{eq6} \end{equation} Combining with eq.~\ref{eq3} we obtain \begin{equation} \frac{\Delta d_{f}}{D_{0}}B=\left(\frac{1}{R_{f}}-\frac{1}{R_{i}}\right) \label{eq7} \end{equation} where $B=(\cos (\theta ))/(2[L(1-K/F)+K])$ is a pure geometrical factor that depends on the angle of incidence of the beams, on the arm length and on the presence of the focusing lens. If the lens is removed, eq.~\ref{eq7} becomes the standard equation for measuring the curvature change of a sample by a 2D array of parallel laser beams \cite{Hang2010, Chason2001}: \begin{equation} \frac{\Delta d_{f}}{D_{0}}\frac{\cos \theta }{2A}=\left(\frac{1}{R_{f}}-\frac{1}{R_{i}}\right) \label{eq8} \end{equation} \begin{figure}[!t] \centering \includegraphics[width = 0.6\columnwidth]{2} \caption{Relative sensitivity $d(\Delta d_{f}/$d$_{\infty })/d(1/R)$ [m] map for different lens positions $L$, sample curvature radii $R$, and for fixed arm length $A = 0.4$ m and focal length $F = 0.1$ m. Continuous yellow lines delimit the white region where the measurements are not possible (see Appendix). Dashed green lines indicate the positions $L = 0.24$ m and $L = 0.34$ m of Fig. \ref{Fig_A1}c and Fig. \ref{Fig_A1}b respectively. The right border ($L = 0.4$ m) is the lens-less case of Fig. \ref{Fig_A1}a.} \label{Fig_x} \end{figure} In our experimental setup, $K$, $L$ and $F$ can be varied; an optimization process has been performed, as detailed in the Appendix. Both cases $K<F$ and $K>F$ have been considered. The image on the CMOS sensor can be shrunk, such that the spot distance for a flat specimen, d$(R=\infty )=$ d$_{\infty }$, becomes smaller than $D_{0}$. The absolute sensitivity $d\left(\Delta d_{f}\right) /d(1/R)$ has to be assessed against the physical pixel size of the sensor; however, the performance of the experimental configuration is better characterized by the relative sensitivity $d(\Delta d_{f}/$d$_{\infty })/d(1/R)$, which has to be assessed against the sensor resolution, in terms of the number of sensor pixels. Fig.~\ref{Fig_x} presents the relative sensitivity obtained for a fixed $A=0.4$ m and a fixed $F=0.1$ m, varying $L$ between $0.16$ m and $0.4$ m (the latter distance is the lens-less case). The distance $L=0.34$ m has been identified, which allows an image shrinkage by a factor of more than 2 (as shown in the Appendix), and therefore a significantly smaller sensor, with a relative sensitivity which is larger, by over 20\%, than that of the lens-less case with the same $A$ (as shown by Fig.~\ref{Fig_x}).
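The effect of the collecting lens on the relative sensitivity can be illustrated with a few lines of code. The following minimal sketch, written under the same paraxial and thin-lens approximations, evaluates the geometrical factor $B$ of eq.~\ref{eq7}, the image shrinkage d$_{\infty }/D_{0}$ and the resulting relative sensitivity for the lens position adopted here and for the lens-less limit; it is an illustration of the formulas above, not the optimization procedure of the Appendix.

\begin{verbatim}
import math

theta = math.radians(60)      # angle of incidence
A, F = 0.4, 0.1               # arm length and focal length (m)

def sensitivity(L):
    # Relative sensitivity d(delta_d_f / d_inf) / d(1/R), in m, for lens at distance L.
    K = A - L                                           # lens-to-sensor distance
    shrink = abs(1 - K / F)                             # d_inf / D_0, image shrinkage
    B = math.cos(theta) / (2 * (L * (1 - K / F) + K))   # geometrical factor
    return 1.0 / (B * shrink), shrink

with_lens, shrink = sensitivity(L=0.34)
no_lens, _ = sensitivity(L=A)           # K = 0 recovers the lens-less configuration
print("image shrinkage factor: %.1f" % (1 / shrink))
print("sensitivity gain over the lens-less case: %.0f%%"
      % (100 * (with_lens / no_lens - 1)))
\end{verbatim}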
This configuration ($A=0.4$ m, $F=0.1$ m and $L=0.34$ m) is adopted in our measurements; it is suitable up to strong curvatures ($R$ down to 5 m or even less). As can be seen from Fig.~\ref{Fig_x}, if the curvature is not very strong (e.g. $R$ above 10 m) the lens can be shifted to slightly smaller values of $L$, obtaining a further boost of the relative sensitivity. \newline Operationally, a small array of laser beams is adopted. From the image collected by the CMOS sensor, the positions of the spots due to the various beams are obtained by standard image analysis procedures, namely the centroid determination, and eq.~\ref{eq7} is exploited, taking for the value of $\Delta d_{f}/D_{0}$ the ratio averaged over all pairs of adjacent spots. \newline The obtained curvature radius gives, by eq.~\ref{eq1}, the total stress in the sample, from which, by eq.~\ref{eqCTE}, the average $CTE_{f}$ over the imposed temperature range can be obtained. It must be remembered that the measured CTE refers to the \emph{in-plane} component of the linear thermal expansion coefficient. In anisotropic samples the in-plane component can significantly differ from the out-of-plane CTE, which must be determined by other techniques. \newline As discussed in the following section and in Appendix B, different noise sources related to the experimental setup can severely affect the measurement accuracy. However, surface roughness can also result in inaccuracies of the CTE determination when, instead of the uncoated substrate surface, the film surface is probed. Roughness can induce imperfections in the shape of the reflected beam spots, possibly affecting the accuracy of the spot centroid calculation. In our case, PLD coatings on flat silicon substrates show a very low surface roughness (i.e. a few nanometers) which, together with the averaging over multiple frames, limits this error source. Thickness inhomogeneities, on the other hand, generally influence the measurement accuracy, resulting in an apparent increase or decrease of the curvature. All the W coatings analyzed in this work are characterized by a high planarity, of the order of $\pm$ 10\% of the mean film thickness. This means that, for 400 nm thick coatings, a variation of $\pm$ 40 nm can result in an apparent change of curvature radius of $\pm$ 400 m. This value is an order of magnitude higher than the commonly measured curvature radii (i.e. a few tens of meters), thus introducing only a small error in the CTE computation. Thickness inhomogeneities become critical, instead, for micrometer-thick coatings. \section{EXPERIMENTAL SETUP} \label{Experimental techniques} The schematic diagram of the apparatus developed for the CTE measurement is shown in Fig.~\ref{Fig1}. It consists of three main parts: a set of laser optics, a vacuum chamber for thermal annealing processes and a sensor for measuring the laser beam positions. \subsection{Laser beams array generation and collection systems} The laser beam array is generated by coupling a laser diode ($\sim$ 5 mW output, 630 nm wavelength) and a pair of etalons. The first etalon multiplies the input laser beam along one direction, while the second etalon, oriented at 45$^\circ$ with respect to the first one, replicates the 1D array along the other direction, producing a 2D array of parallel laser beams. \begin{figure}[!t] \centering \includegraphics[width = 0.7\columnwidth]{3} \caption{Schematic diagram of the apparatus for CTE measurements.
The heating module is schematically shown in the inset of Fig.~\ref{Fig3}.} \label{Fig1} \end{figure} In our case, we create a 2 x 2 array of 1 cm equally spaced laser beams, for a total coverage of a 1 x 1 cm$^2$ measurement area. The array strikes the center of the polished substrate surface at an angle of incidence of 60$^\circ$. The measurement position is kept constant during the entire analysis process. The reflected beams are recorded by the CMOS camera, through a collecting lens, as discussed above. The adopted camera is characterized by a 4/3, 1.3 Megapixel sensor with a 1024 x 1248 digitized image. The acquisition rate of the sensor is 10 fps in the full format, but it can be increased up to 200 fps if only certain regions of interest are selected. In this way, multiple measurements for a given temperature step can be acquired, so that the signal can be averaged over successive frames in order to reduce the overall noise. The positions of the beams on the CMOS are followed by determining the centroid of each laser spot. The centroids are determined by a classical centroid-of-intensity algorithm, which weights the position of each pixel in the irradiated area by its intensity, normalized to the total intensity of the spot. It has to be noted that the accuracy of the method is deeply affected by noise sources (see Appendix B), such as vibrations from the vacuum system or gas flow and fluctuations of the pixel intensity. The use of a multiple laser beam array is thus crucial to guarantee a high measurement accuracy. With respect to standard laser scanning systems, where a single laser beam is rastered across the entire sample surface \cite{Flinn1987}, the use of a multiple beam array results in measurements that do not depend on the absolute position of each laser spot on the sensor \cite{Floro1996}. The laser beams strike the sample surface all at the same time, and the differential beam spacing between adjacent spots, which is less sensitive to sample vibrations than the absolute position of the beams, is adopted to measure the change of sample curvature. With our setup, we obtain an accuracy of the beam spacing measurement of 0.09 pixels, which with our sensor of 6.66 x 6.66 $\mu$m/pixel corresponds to a maximum deviation of $\pm$0.5 $\mu$m (see Appendix B). \subsection{Vacuum chamber and heating module} \begin{figure}[!t] \centering \includegraphics[width = 0.5\columnwidth]{4} \caption{Temperature measurement of the heating module during the maximum temperature ramp adopted for this work. The straight line refers to the measurement by the K-type thermocouple at point A; the circle scatter points indicate the temperature measured by the pyrometer on top of the heating plate (point B).} \label{Fig3} \end{figure} The vacuum chamber equipped with the laser beam array generation and collecting systems described above accommodates the heating module for controlled thermal annealing processes (see Fig.~\ref{Fig1}). The vacuum chamber is a 300 mm spherical chamber equipped with two 2” viewport flanges at 60$^\circ$ orientation for the input and the output of the laser beams. Two other symmetrical flanges provide connections with the vacuum system and the gas inlet for controlled atmosphere treatments. Coupled rotary and turbomolecular pumps are exploited to guarantee a base pressure of 5 x 10$^{-6}$ mbar during each thermal treatment. The heating module can reach up to 1200 $^\circ$C.
The temperature is measured by a thermocouple (type K) placed under the sample in the middle of the holding plate (5 mm thick). A standard temperature ramp is shown in Fig.~\ref{Fig3}. The uniformity of temperature along the plate thickness and lateral dimension has been assessed by a pyrometer measurement (red circle marks). The heating and cooling rates are fixed at 40 $^\circ$C/min and 20 $^\circ$C/min respectively. The temperature is measured every 0.5 s by an external acquisition system triggered with the CMOS data acquisition by an ad-hoc developed LabVIEW interface. In this way, the centroid positions of each spot are automatically synchronized with the temperature data. \begin{table*}[t] \centering {\relsize{-2} \begin{tabular}{lccccccc} \toprule Structure & Morphology & Deposition conditions & Composition & D (nm) & $\rho$ (g cm$^{-3}$) & Thickness ($\mu$m) & $M$ (GPa)\\ \midrule n-W & Compact & Vacuum & metallic W & 16 & 18 & 0.4 & 527\\ u-n-W & Compact & Vacuum & W 90$\%$ - Ta 10$\%$ & 11 & 13 & 0.4 & 500 \\ u-n-W & Compact & He 70Pa annealed & metallic W & 7 & 12 & 0.38 & 353 \\ a-W & Compact & He 70Pa & metallic W & $<$ 2 nm & 11 & 0.41 & 227 \\ a-W & Compact & $O_2$ 5Pa & W-O & $<$ 2 nm & 11 & 1 & 265 \\ a-W & Porous & He 100Pa & metallic W & $<$ 2 nm & 9 & 0.43 & 189 \\ \bottomrule \end{tabular} } \caption{Samples investigated in this work. The biaxial modulus $M$ is derived from Brillouin spectroscopy \cite{Besozzi2016}, the coating thickness from SEM analysis, the crystallite dimension $D$ from XRD \cite{Besozzi2016, DellasegaJAP}, and the mass density $\rho$ as described in the text. The coatings are deposited by PLD in vacuum or in presence of background gases (He, O$_2$) at different partial pressures, as reported in \cite{Pezzoli2015, DellasegaJAP}.} \label{Tab2} \end{table*} \section{RESULTS AND DISCUSSION} Firstly, we tested the performance of our experimental setup by investigating the CTE of different coating materials. In particular, we analyzed thermally evaporated silver (Ag) films similar to the ones investigated in \cite{Lima1999}. The Ag coatings have been deposited on a Si(100) 500 $\mu$m thick double side polished substrate. The thicknesses of the coatings have been determined by Scanning Electron Microscopy (SEM), all being sub-micrometric. Since the Ag films have been deposited in conditions very similar to those of \cite{Lima1999}, the biaxial modulus of the deposited material has been taken as 50 $\pm$ 10 GPa. The CTE of the Ag films was investigated in the 25 - 150 $^\circ$C temperature range. We obtained a CTE of 38 $\pm$ 4 x 10$^{-6}$ K$^{-1}$, which fits well the literature result of 33 $\pm$ 4 x 10$^{-6}$ K$^{-1}$. Starting from this result, we proceeded with the characterization of the W based coatings as described below. \subsection{Samples preparation, structural and elastic properties} All the nanostructured W coatings were deposited by PLD on silicon (Si) (100) substrates 500 $\mu$m thick. For the $\sigma_f$ computation of eq.~\ref{eq1} we consider $E$ = 160 GPa, $\nu$ = 0.28 and CTE = 2.7 x 10$^{-6}$ K$^{-1}$ for this type of substrate \cite{Okada1984}. The tailored nanostructure is obtained by tuning the background gas pressure (i.e. helium (He) and oxygen (O$_2$)) during deposition. For more details about the deposition process of these coatings refer to \cite{Besozzi2016, Pezzoli2015, DellasegaJAP}. As summarized in Tab.~\ref{Tab2}, the change of film nanostructure from nanocrystalline (n-W) to amorphous-like (a-W) is achieved by increasing the background gas pressure from vacuum conditions to 100 Pa of He pressure.
While up to 75 Pa of He the a-W morphology is still compact, at 100 Pa of He pressure the a-W starts to become porous. O$_2$ is also exploited to obtain the a-W structure. As reported in \cite{Pezzoli2015}, at O$_2$ pressures of 5 Pa the W/O ratio is sub-stoichiometric (i.e. about 2.4) and the film preserves its metallic nature. As confirmed by X-ray Diffraction (XRD) analysis, the W-O sample analyzed in this work is characterized by an amorphous-like structure. Finally, the ultra-nano-crystalline (u-n-W) structure is formed by thermal annealing of pure a-W coatings above their recrystallization temperature (i.e. 650 $^\circ$C) or by adding tantalum (Ta) in solid solution during coating deposition. The detailed study of the recrystallization behavior of a-W and of the effect of Ta alloying on the coating structure is reported in \cite{Besozzi2016}. Here, we limit our study to an a-W coating thermally annealed at 650 $^\circ$C and to a u-n-W coating with 10$\%$ Ta concentration. The morphology and structural evolution of the coatings when going from n-W to compact and porous a-W are highlighted by the Scanning Electron Microscopy (SEM) analysis summarized in Fig.~\ref{FigSEM}. SEM analysis was also exploited to determine the coating thicknesses, which are summarized in Tab.~\ref{Tab2}. \begin{figure*}[!t] \centering \includegraphics[width = 0.8\columnwidth]{5} \caption{From the left: SEM top view and cross section images of the nanocrystalline columnar W, ultra-nano-crystalline W, compact amorphous-like W and porous amorphous-like W samples analyzed in this work.} \label{FigSEM} \end{figure*} \newline These three different nanostructures are characterized by different mean crystallite sizes ($D$), determined by XRD analysis using the Scherrer equation. As shown in Tab.~\ref{Tab2}, $D$ goes from 16 nm in the case of n-W to below 2 nm for a-W, assuming values between 7 and 11 nm in the u-n-W samples. The film mass density $\rho$ of the pure nanostructured W coatings was determined by quartz microbalance measurements during deposition \cite{DellasegaJAP}. $\rho$ decreases from 18 g cm$^{-3}$ to 9 g cm$^{-3}$ when going from n-W to a-W. In contrast, the mass densities of the W-Ta and of the annealed u-n-W samples were derived from the lever rule and from numerical simulations respectively, as described in \cite{Besozzi2016}. No direct measurement of $\rho$ for the W-O coating is available. However, since its nanostructure and morphology were found to be similar to those of pure a-W, $\rho$ is fixed at the same value of 11 g cm$^{-3}$. \newline The elastic properties (i.e. the biaxial modulus $M = E/(1-\nu)$) of each W coating have been determined by Brillouin spectroscopy. For a detailed description of the derivation method see \cite{Besozzi2016}. As can be seen in Tab.~\ref{Tab2}, $M$ is strictly related to the changes of film mass density and crystallite dimension, going from 527 GPa for n-W to 189 GPa for a-W. It has been shown \cite{Besozzi2016} that in the regions where the mass density does not significantly change, $D$ is the key parameter that affects the elastic behavior of the material, and vice versa. This explains why different a-W and u-n-W samples show different values of $M$. \subsection{Residual stress of nanostructured W coatings} The initial state of stress of the coatings is obtained by measuring the curvature change between the uncoated and the coated Si wafer at room temperature. In the case of n-W, for instance, we found an initial state of compressive residual stress of 684 $\pm$ 42 MPa.
This compressive stress is in agreement with the residual stresses found in other coatings deposited by PLD. As pointed out in different works \cite{Bonelli2000, Teixeira2002, Ganne2002, Lackner2004, Cibert2008}, the higher the energy of the ablated particles, the higher the compressive residual stresses found in the PLD coating. For this reason, it is clear that the columnar nanocrystalline W samples, which are deposited in vacuum conditions, are characterized by a higher compressive residual stress than the amorphous ones. In the case of a-W coatings, indeed, the presence of a background gas during deposition implies a loss of the ablated particle energy before impinging on the substrate. As a result, the particles reaching the substrate are not energetic enough to be as closely packed as in the columnar films; the coatings grow with completely different structures and morphologies, and they are characterized by a lower state of stress. We thus evaluate the residual stresses of our PLD W coatings. Since the stress is strictly related to the coating thickness, we report in Fig.~\ref{FigRes} only the residual stresses of the coatings with approximately the same thickness (i.e. 400 nm). They are plotted versus the film mass density. As can be seen, the observed trend is fully consistent with the explanation proposed herein. The residual stress, indeed, drops from 684 MPa for n-W, where the highest mass density is observed, to around 80 MPa for the porous a-W structure, whose mass density is about 50$\%$ of the n-W one. \begin{figure}[!t] \centering \includegraphics[width = 0.6\columnwidth]{6} \caption{Residual stresses of the nanostructured W coatings plotted versus film mass density. A lower density of the film, which is related to a lower energy of the ablated particles, implies a lower state of compressive stress.} \label{FigRes} \end{figure} \subsection{CTE characterization of nanostructured W coatings} \begin{figure}[!t] \centering \includegraphics[width = 0.7\columnwidth]{7} \caption{Thermal cycles performed on the a-W sample. The black and the red lines are the first and the second thermal cycles respectively. The blue dashed line represents the linear interpolation of the stress-temperature curve adopted to derive the mean CTE of the coating (see eq.~\ref{eqboh}).} \label{Fig5} \end{figure} For the CTE characterization, all the elastic properties of both the film and the substrate are considered temperature independent. In this way, only a mean value of the CTE over the imposed temperature range can be determined by this method. The maximum temperature reached for each sample has been chosen depending on the recrystallization temperature of each phase. It has been shown \cite{Besozzi2016, Pezzoli2015} that the crystallization process of a-W starts even at 450 $^\circ$C, which is well below the bulk W recrystallization temperature (i.e. 1400 $^\circ$C). Therefore, a-W coatings are annealed below 400 $^\circ$C in order to avoid the formation of the $\alpha$-W phase, hindering the crystallite growth that triggers the formation of the u-n-W structure. For n-W and u-n-W no phase changes are observed below 1000 $^\circ$C. However, the choice of the maximum annealing temperature is also strictly dependent on the type of substrate material. In the case of Si, it is known that above 650 $^\circ$C tungsten silicide can form at the W-Si interface \cite{Tsaur1984}, deeply affecting the CTE measurements.
For this reason, all the measurements for the n-W and u-n-W samples are limited to a maximum temperature of 650 $^\circ$C. \newline As an example, Fig.~\ref{Fig5} shows the thermal stress cycles measured for the a-W sample. The sample is heated from room temperature to about 360 $^\circ$C and then cooled down again to room temperature. This cycle has been performed twice. The thermal stress is plotted versus the annealing temperature. As can be seen, the negative sign of the thermal stress corresponds to a compressive state of stress in the coating. This is always the case when the film has a higher CTE than the substrate material: the coating tries to dilate but is constrained by the substrate, developing a compressive stress which grows as the annealing temperature increases. On the contrary, the stress becomes tensile during the cooling cycle of the sample. A clear difference can be found between the first and the second thermal cycle around the maximum annealing temperature. During the first heating cycle the compressive thermal stress grows linearly with temperature up to around 290 $^\circ$C. Above 290 $^\circ$C a clear nonlinear behavior is observed. This trend is exhausted during cooling, when the tensile state of stress again starts to grow linearly with the cooling temperature. This characteristic feature can be explained by the onset of stress relaxation processes, which lead to plastic deformation in the 290 - 360 $^\circ$C temperature range. The relaxation processes could indicate the onset of grain growth or the triggering of defect diffusion processes, which can be present in a-W even at very low temperatures. The relaxation process is driven by an enhanced surface and bulk atomic diffusion, which continues until the atoms reach their equilibrium positions. The consequent volume shrinkage associated with the developed plastic flow results in the development of a tensile state of stress, which is highlighted in the stress-temperature curve of Fig.~\ref{Fig5} by the deviation from linearity during heating. Once plastic deformation has taken place, the sample is not able to recover the same state of stress during cooling. Due to the irreversible changes of the layer structure caused by relaxation, the nonlinear part of the stress curve cannot be used to derive the CTE of the material. This trend is not observed when the sample is subjected to a second thermal cycle between the same temperatures, clearly indicating that no more relaxation processes take place. In this case, the total stress-temperature curve (i.e. heating and cooling) can be fitted by a linear regression in order to derive the contribution $d\sigma /dT$ of eq.~\ref{eqCTE} (dotted blue line). Once the slope is determined, using the biaxial modulus of the film summarized in Tab.~\ref{Tab2}, the mean CTE of the coating is obtained. A similar behavior of the thermal stress upon heating is found in all the analyzed coatings. However, we observe that the higher the initial residual stress, the higher the number of cycles needed to exhaust the stress relaxation process and to obtain a linear trend of stress versus temperature during both heating and cooling. \newline The results are shown in Fig.~\ref{Fig6}a. All the CTE values of the coatings lie above the bulk value of 4.2 x 10$^{-6}$ K$^{-1}$ reported in the literature for polycrystalline W \cite{Nix}. A clear dependence of the CTE on the nanostructure is found.
n-W has a mean CTE of 5.1 x 10$^{-6}$ K$^{-1}$, which is close to the bulk one. The u-n-W samples are characterized by an increase of the CTE to around 6.6 x 10$^{-6}$ K$^{-1}$. Finally, for the a-W samples, the CTE reaches a maximum value of 8.9 x 10$^{-6}$ K$^{-1}$, which is almost twice the bulk one. The mean value and the error bars associated with each point are evaluated from the multiple measurements performed on each sample. The uncertainty related to the CTE derivation from the geometric values via Stoney's equation (see eq.~\ref{eq1}) obviously also depends on the uncertainties of the elastic moduli and of the thicknesses, which must be independently measured. \newline Nanocrystalline metals are known to show a higher CTE with respect to their crystalline counterpart \cite{Lima1999, Lu1995, Marques2003}. With respect to crystalline bulk W, the presence of a higher fraction of interfaces between the small grains deeply affects the properties of the material \cite{Daniel}. The weaker bonding of grain boundary atoms modifies the interatomic potential, lowering it and making it more asymmetrical. The net result is a favoured movement of the atoms around their lattice positions upon heating. This means an enhancement of the CTE, which is thus strictly related to the volume fraction of grain boundaries. It has been shown that, in the case of nanocrystalline metallic films, the CTE at grain boundaries can even reach 2 - 5 times the crystalline value \cite{Birringer1988, Klam1987}. This dependence of the CTE on the crystallite dimension is shown in Fig.~\ref{Fig6}b. As can be seen, the overall observed behavior is that the CTE grows as $D$ decreases. The n-W sample shows a CTE 1.2 times the bulk one, increasing up to 2.1 times for the a-W samples. This trend is qualitatively and quantitatively in accordance with the dependence on $D$ of the CTE of some metallic films reported in the literature. As reported in \cite{Lu1995}, for example, copper films with 8 nm grains show a 1.8 times higher CTE than the corresponding monocrystalline structure. In our case, u-n-W coatings with $D$ between 7 and 11 nm are characterized by a CTE 1.6 $\pm$ 0.1 times the bulk W one. However, when the amorphous regime is reached, the dependence of the CTE on the grain boundary fraction is no longer sufficient to explain its further increase up to 2.1 times the crystalline value. The monotonically increasing behaviour of the CTE in the a-W region is noteworthy, since the investigation of the CTE of amorphous materials still leads to controversial results in the literature. In some cases \cite{Lima1999, Magisa2014, Tong1992}, starting from the coarse grained structure, an increase of the CTE is observed as $D$ decreases but, when the amorphous region is reached, a drop of the CTE is obtained. On the other hand, other works \cite{Lu1995, Daniel, Hay2006, Miller2010}, in accordance with the trend observed in this work, reported a CTE of the amorphous phase still higher than that of the nanocrystalline one. This is consistent with a higher mean interatomic distance, which means a lower binding energy. On the other hand, the mean interatomic potential can be affected by the density of defects, which in turn is related to the tensile or compressive residual stress \cite{Zoo2006, Chaplot}. The porosity of the film can be a key parameter in driving the thermal expansion of the coating, inducing preferred dilatation directions, with the net result of an increase of the material CTE \cite{Miller2010}.
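To put the grain-boundary argument above on a more concrete footing, the following minimal sketch estimates the grain-boundary volume fraction for the measured crystallite sizes using a simple geometrical model (cubic grains of size $D$ with a boundary layer of assumed thickness $\delta \approx 1$ nm) and a rule-of-mixtures CTE. Both the model and the grain-boundary CTE enhancement factor are illustrative assumptions, not quantities fitted to our data.

\begin{verbatim}
# Illustrative estimate: grain-boundary volume fraction for cubic grains of
# size D with boundary thickness delta, and a simple rule-of-mixtures CTE.
CTE_BULK = 4.2e-6   # 1/K, literature value for bulk polycrystalline W (cited above)
GB_FACTOR = 3.0     # assumed CTE enhancement at grain boundaries (within the 2-5x range)
DELTA = 1.0e-9      # assumed grain-boundary thickness (m)

def gb_fraction(D, delta=DELTA):
    return 1.0 - ((D - delta) / D) ** 3

def cte_estimate(D):
    f = gb_fraction(D)
    return f * GB_FACTOR * CTE_BULK + (1.0 - f) * CTE_BULK

for name, D in [("n-W", 16e-9), ("u-n-W", 11e-9), ("u-n-W (annealed)", 7e-9)]:
    print("%s: GB fraction %.2f, estimated CTE %.1fe-6 1/K"
          % (name, gb_fraction(D), cte_estimate(D) * 1e6))
\end{verbatim}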
\begin{figure}[!t] \centering \includegraphics[width = 0.5\columnwidth]{8} \caption{a) CTE of the analyzed samples. b) Dependence of the CTE on the crystallite dimension of the samples. c) The ratio between the measured and the bulk CTE plotted versus the ratio of the coating and bulk stiffness. The three regions of n-W, u-n-W and a-W are highlighted by the green, the blue and the red dotted lines respectively.} \label{Fig6} \end{figure} In our case, for the a-W samples, in particular for those deposited at 70 and 100 Pa of He, the higher porosity degree of the films with respect to the other samples can be identified as the main parameter justifying the increase of the CTE above the u-n-W values. This increasing porosity is reflected in the measured drop of the film mass density, which goes from 18 g cm$^{-3}$ in n-W to 9 g cm$^{-3}$ in the amorphous region. The CTE trend we observed for W in the different n-W, u-n-W and a-W regions is consistent with the behavior of other bcc metals reported in the literature, such as chromium and tantalum \cite{Knepper2007, Daniel}. \newline Finally, as the variations of $D$ and $\rho$ modify the thermal expansion properties of the material, they also affect the elastic behavior of the samples \cite{Besozzi2016}. Here, referring to Fig.~\ref{Fig6}c, we want to highlight the relationship between the CTE and the stiffness of the nanostructured coatings. In Fig.~\ref{Fig6}c the CTE is plotted versus the film-to-bulk stiffness ratio (i.e. E$_{film}$/E$_{bulk}$). Again, the three n-W, u-n-W and a-W regions are clearly distinguishable. The n-W sample is characterized by around 92$\%$ of the bulk stiffness; in the u-n-W region the stiffness ratio goes from 80$\%$ to 60$\%$, while for a-W it goes below 47$\%$, down to 27$\%$. As a general qualitative trend, the softer the material, the higher the CTE. This well known stiffness-CTE behaviour is reported in several literature works \cite{Zoo2006, Lima1999, Marques2003}. However, this relationship deviates from the linear one that could be expected from eq.~\ref{eq2}. The deviation from linear proportionality can thus be attributed again to the interplay between the crystallite dimension and the mass density of the material. \section{CONCLUSIONS} In this work we performed a systematic study of the residual stresses and of the coefficient of thermal expansion of nanostructured W based coatings deposited by PLD, with the aim of elucidating the correlation between the CTE, the residual stresses, the structural (i.e. crystallite dimension, mass density) and the elastic properties of the materials. In particular we analyzed pure W, W-tantalum and W-O coatings with different nanostructures. In order to obtain the residual stress and the CTE of the coatings, we developed a novel experimental setup based on the thermally induced substrate curvature method. All the W coatings deposited by PLD are characterized by a compressive residual state of stress. The stress is strictly correlated to the specific nanostructure, becoming lower when going from n-W to a-W, due to the lower energy of the ablated particles. We found that all the analyzed samples show a higher CTE than the corresponding bulk form. n-W shows a CTE of 5.1 x 10$^{-6}$ K$^{-1}$, u-n-W a CTE of 6.6 x 10$^{-6}$ K$^{-1}$, while a-W a CTE between 6.6 x 10$^{-6}$ K$^{-1}$ and 8.9 x 10$^{-6}$ K$^{-1}$. The CTE is thus deeply affected by the crystallite size, growing as the crystallite dimension decreases and the fraction of grain boundaries increases.
This trend is fully consistent with the behavior observed for other bcc metals, such as chromium and tantalum. In the amorphous region, where $D$ does not substantially change, the CTE further increases. Here, we highlight the relationship between the CTE and the film mass density. The higher porosity degree, which characterizes the amorphous coatings, plays a pivotal role in giving preferential dilatation directions, favouring thermal expansion. In addition, in accordance with the literature, we observed that as the material becomes softer, the CTE of the coating increases. \section*{Acknowledgments} This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No 633053, and from the MISE-ENEA `\textit{Accordo di Programma}' (AdP), PAR2015 and PAR2016. The views and opinions expressed herein do not necessarily reflect those of the European Commission. The research leading to these results has also received funding from the European Research Council Consolidator Grant ENSURE (ERC-2014-CoG No. 647554). \bibliographystyle{model1a-num-names}
\section{Introduction} Negative refraction phenomenon has attracted many interests since its prediction by Veselago a half century ago.~\cite{veselago1968electrodynamics} Veselago predicted that if a material possesses simultaneous negative dielectric constant ($\varepsilon$) and magnetic permeability ($\mu$), it will give a negative refractive index. The negative refractive index will lead to some unusual properties of the light, such as negative refraction and reversed Doppler and Cherenkov effects.~\cite{veselago1968electrodynamics,cubukcu2003electromagnetic,pendry2000negative,habe2015spin,cheianov2007focusing,smith2004metamaterials} By utilizing negative refraction, in which the light will be bent in an unusual way with an angle of refraction negative to the normal direction of the material surface, one may be able to construct a superlens whose resolution is smaller than the light wave length.~\cite{cubukcu2003electromagnetic,pendry2000negative,smith2004metamaterials} A better Cherenkov radiation detector can also be realized based on the material having a negative refractive index, which is useful in the field of accelerator physics.~\cite{lu2003vcerenkov,ziemkiewicz2015cherenkov} However, materials having simultaneous negative $\varepsilon$ and $\mu$ have not been found in nature so far. To realize negative refraction, many researchers developed artificial structures that are called as metamaterials.~\cite{shalaev2007optical,boltasseva2008fabrication,padilla2006negative,valentine2008three} These structures usually contain an array of split ring resonators~\cite{smith2004metamaterials,ishikawa2005negative,moser2005terahertz,bilotti2007design} or dielectric photonic crystals with periodically modulated $\varepsilon$ and $\mu$,~\cite{cubukcu2003electromagnetic,smith2004metamaterials,parimi2003photonic} which are often complicated to fabricate. To overcome the difficulties, in this paper we predict that negative refraction can take place in a bulk Weyl semimetal (WSM) even without having negative $\mu$ and without constructing complicated structure. The WSM is a three-dimensional material having a pair of Dirac cones separated in the $k$ space in its energy dispersion shown in Fig.~\ref{fig1}(a).~\cite{hofmann2016surface,burkov2011weyl,vazifeh2013electromagnetic,koshino2016magnetic,ominato2015quantum} An example of the WSM is pyrochlore ($\mathrm{Eu_2Ir_2O_7}$).~\cite{hofmann2016surface,sushkov2015optical} In each cone, the valence and conduction bands coincide at the so-called Weyl nodes. The presence of a pair of separated Dirac cones is the consequence of symmetry breaking in the WSM, which induces the Hall current, even without magnetic field.~\cite{hofmann2016surface,burkov2011weyl,vazifeh2013electromagnetic} This phenomenon is known as the anomalous Hall effect, which is responsible for the tensor form of the dielectric function of the WSM~\cite{zyuzin2015chiral,hofmann2016surface}. In this work, we predict that the EM wave can propagate through WSM even though the frequency is smaller than plasmon frequency. This propagation requires the refractive index of WSM to be negative in order to conserve the energy, that will be shown in this paper. \section{Model and Methods} The electromagnetic response of WSM can be derived from the formula of action for the electromagnetic field.~\cite{zyuzin2015chiral,vazifeh2013electromagnetic,grushin2012consequences} Here, we will give brief derivation of the electromagnetic response of WSM represented by electric displacement vector $\textbf{D}$. 
A more detailed derivation is given by Zyuzin and Burkov~\cite{zyuzin2012topological,zyuzin2012weyl} or by Hosur and Qi.~\cite{hosur2013recent} The action of the electromagnetic field is given by \begin{equation} S_{\theta}=-\frac{e^2}{8\pi^2\hbar}\int dt d\textbf{r}\partial_\gamma\theta\epsilon^{\gamma\nu\rho\eta}A_\nu\partial_\rho A_\eta, \label{eq:action} \end{equation} where $A_\nu$ is the electromagnetic potential, $\epsilon^{\gamma\nu\rho\eta}$ is the Levi-Civita tensor and each index $\gamma,\nu,\rho,\eta$ takes the values $0,1,2,3$. The term $\theta$ is called the axion angle, given by $\theta=2(\textbf{b}\cdot\textbf{r})$, where $\textbf{b}$ is the wave vector separating the Weyl nodes [see Fig.~\ref{fig1}(a)]. The current density $j_\nu$ is given by varying the action with respect to the electromagnetic potential, \begin{align} j_\nu\equiv\frac{\delta S_\theta}{\delta A_\nu}=\frac{e^2}{4\pi^2\hbar}\partial_\gamma\theta\epsilon^{\gamma\nu\rho\eta} \partial_\rho A_\eta. \label{eq:current} \end{align} By writing $\textbf{E}=-(\nabla A_0)-\partial_0\textbf{A}$, Eq.~(\ref{eq:current}) gives the Hall current $\textbf{j}=\frac{e^2}{4\pi^2\hbar}\nabla\theta\times \textbf{E}$, which appears as an additional term in $\textbf{D}$ with respect to normal metals, namely the second term of Eq.~(\ref{eq:displacemet}). We can write the electric displacement vector as follows, \begin{equation} \textbf{D}=\varepsilon_0\varepsilon_b\left(1-\frac{\omega_p^2}{\omega^2}\right)\textbf{E}+\frac{ie^2}{4\pi^2\hbar\omega}(\nabla\theta)\times\textbf{E}, \label{eq:displacemet} \end{equation} where $\omega_p$ is the plasmon frequency and $\varepsilon_b$ is the background dielectric constant. Hereafter, we consider a particular value of the dielectric constant, $\varepsilon_b=13$, which was measured in pyrochlore.~\cite{hofmann2016surface,sushkov2015optical} The first term of Eq.~(\ref{eq:displacemet}) is the Drude dielectric function, similar to that of normal metals (NMs). The appearance of a Hall current without an external magnetic field is known as the anomalous Hall effect, given by the second term of Eq.~(\ref{eq:displacemet}). The anomalous Hall current depends only on the structure of the electron dispersion of the WSM, represented by $\theta$. Due to the anomalous Hall effect, the dielectric tensor has non-zero off-diagonal terms and can be written as \begin{equation} \varepsilon = \begin{bmatrix} \varepsilon_1 & 0 &i\varepsilon_2\\ 0 & \varepsilon_1 & 0\\ -i\varepsilon_2 & 0 & \varepsilon_1 \end{bmatrix} \label{eq:tensor} \end{equation} where we assume that $\textbf{b}$ lies in the $y$ direction, $\textbf{b}=b\mathbf{\hat{y}}$, and $\varepsilon_1$ and $\varepsilon_2$ are expressed by \begin{align} \varepsilon_1 &= \varepsilon_0\varepsilon_b \left(1-\frac{1}{\Omega^2}\right)\label{eq:e1},\\ \varepsilon_2 &= \varepsilon_0\varepsilon_b \left(\frac{\Omega_{b}}{\Omega}\right)\label{eq:e2}, \end{align} with $\Omega=\omega/\omega_p$ and $\Omega_{b}=e^2b/(2\pi^2\varepsilon_0\varepsilon_b\hbar\omega_p)$ as dimensionless quantities. We take $\Omega_{b}=0.5$ as a fixed parameter throughout this paper, unless otherwise mentioned. Similar to NMs, in the WSM we have $\varepsilon_1>0~(\varepsilon _1<0)$ if $\Omega>1~(\Omega <1)$. \begin{figure}[t] \centering\includegraphics[width=85mm]{Fig1} \caption{(a) Schematic of the energy dispersion of a WSM showing a pair of Dirac cones with two Weyl nodes represented by dots, separated by the wave vector $b$.
(b) A TM wave coming to $xy$ surface of WSM at angle $\theta_i$ and transmitted to WSM at angle $\theta_t$.} \label{fig1} \end{figure} In order to calculate the reflection and transmission spectra of a bulk WSM, we will determine the refractive index of the WSM $(n_w)$. Suppose that we have a transverse magnetic (TM) wave incident at angle $\theta_i$ from vacuum to a WSM as shown in Fig.~\ref{fig1}(b) where $E^{i}$, $E^{r}$ and $E^{t}$ ( $H^{i}$, $H^{r}$ and $H^{t}$) are the incident, reflected and transmitted electric (magnetic) fields, respectively. The transmitted wave propagates toward positive $z$ direction inside WSM, while the reflected wave propagates toward negative $z$ direction. Due to the vanishing $\varepsilon_{xy}$ and $\varepsilon_{zy}$, the direction of electric field inside WSM does not rotate. By using Eq.~\eqref{eq:tensor}, we can write down the equation $\textbf{D}=\mathbf{\hat{\varepsilon}} \textbf{E}$ for the TM wave inside the WSM as follows, \begin{equation} \begin{bmatrix} D^{t}_x\\D^{t}_y\\D^{t}_z \end{bmatrix} = \begin{bmatrix} \varepsilon_1 & 0 & i\varepsilon_2\\ 0 & \varepsilon_1 & 0\\ -i\varepsilon_2 & 0 & \varepsilon_1 \end{bmatrix} \begin{bmatrix} E^{t}_x \\ 0 \\ E^{t}_z \end{bmatrix} \label{eq:D} \end{equation} where $D^{t}$ and $E^{t}$ are the displacement and electric fields inside the WSM. From Maxwell's equations, we get a differential equation for the EM wave as follows; \begin{equation} \nabla\times\nabla\times\textbf{E}^t =-\nabla^2\textbf{E}^t+\nabla \left(\nabla\cdot\textbf{E}^t\right)=\omega^2\mu_0\textbf{D}^t. \label{eq:waveeq} \end{equation} Since the solutions of $\textbf{E}^t$ and $\textbf{D}^t$ are proportional to $\exp\left[i\omega n_w/c~\left(\textbf{s}\cdot\textbf{r}\right)\right]$, where $\textbf{s}=\left(\sin\theta_t,0,\cos\theta_t\right)$ is the unit wave vector, we can obtain from Eq.~(\ref{eq:waveeq}), \begin{equation} \frac{1}{\mu_0}\left(\frac{n_w}{c}\right) \left[\textbf{E}^t-\textbf{s} \left(\textbf{s}\cdot\textbf{E}^t\right)\right]=\textbf{D}^t. \label{eq:waveeq2} \end{equation} From Eqs.~(\ref{eq:D}) and~(\ref{eq:waveeq2}), we get the following relations, \begin{equation} E^t_x = \frac{\varepsilon_1D^t_x - i\varepsilon_2D^t_z}{\varepsilon_1^2 - \varepsilon_2^2} ,~\textrm{and}\quad E^t_z = \frac{i\varepsilon_2D^t_x + \varepsilon_1D^t_z}{\varepsilon_1^2 - \varepsilon_2^2}. \label{eq:E} \end{equation} Inserting Eq.~(\ref{eq:E}) to Eq.~(\ref{eq:waveeq2}), we obtain simultaneous equations of $E^t_x$ and $E^t_z$ as follows: \begin{widetext} \begin{equation} \label{eq:det} \begin{bmatrix} (1-s_x^2)\varepsilon_1-is_xs_z\varepsilon_2- \mu_0\left(\frac{c}{n_w}\right)^2\left(\varepsilon_1^2-\varepsilon_2^2\right) & -i\left(1-s_x^2\right)\varepsilon_2-s_xs_z\varepsilon_1 \\ i\left(1-s_z^2\right)\varepsilon_2-s_xs_z\varepsilon_1 & (1-s_z^2)\varepsilon_1+is_xs_z\varepsilon_2-\mu_0 \left(\frac{c}{n_w}\right)^2\left(\varepsilon_1^2-\varepsilon_2^2\right) \end{bmatrix} \begin{bmatrix} E^t_x\\E^t_z \end{bmatrix} =0 \end{equation} \end{widetext} In order to have nontrivial solutions of $\textbf{E}^t$, the determinant of the $2\times 2$ matrix in Eq.~(\ref{eq:det}) should vanish: \begin{equation} \frac{\mu_0c^2\left(\varepsilon_1^2-\varepsilon_2^2\right)}{n_w^4} \left[-n_w^2\varepsilon_1+c^2\mu_0 \left(\varepsilon_1^2-\varepsilon_2^2\right)\right]=0. 
\label{eq:det2} \end{equation} from which, we obtain $n_w$, \begin{align} n_w = \pm c\sqrt{\mu_0\left(\varepsilon_1^2-\varepsilon_2^2\right)/\varepsilon_1} \equiv n_w^{\pm}, \label{eq:nw} \end{align} where the $n_w^{+}$ ($n_w^{-}$) solution corresponds to the positive (negative) wave vector inside the WSM. If we put $\varepsilon_2=0$ in Eq.~(\ref{eq:nw}), we can obtain the refractive index of NM. \begin{figure}[h] \includegraphics[width=85mm]{Fig2} \caption{ (a) The refractive index of WSM for TM wave $(n_{w})$ as a function of $\Omega$ for the positive solution of Eq.~(\ref{eq:nw}) ($n_w^{+}$). Solid and dashed lines are the real and imaginary parts of $n_{w}^{+}$, respectively. We use $\Omega_b=0.5$ for the WSM. The plot is divided into four regions. Inset: The real part of refractive index $(n)$ for the WSM compared with a normal metal (NM). (b) The refractive index of WSM for TM wave $(n_{w})$ as a function of $\Omega$ for the negative solution of Eq.~(\ref{eq:nw}) ($n_w^{-}$). (c) Schematics of EM wave propagations to the WSM for all the four regions of panel (a) and (b).} \label{fig2} \end{figure} In Fig.~\ref{fig2}(a) and (b) we plot $n_{w}$ as a function of $\Omega$ for the positive solution of Eq.~(\ref{eq:nw}) [Fig.~\ref{fig2}(a)] and the negative solution of Eq.~(\ref{eq:nw}) [Fig.~\ref{fig2}(b)]. The solid and the dashed lines correspond to the real and imaginary parts of $n_w$, respectively. It is noted that $n_w^{\pm}$ at each frequency is either purely real or purely imaginary, because we neglect the effects of the impurity and scattering of charge in Eq.~(\ref{eq:e1}). Therefore, the wave vector $\omega n_w^{\pm}/c$ can be either real or imaginary depending on $n_w^{\pm}$. The real (imaginary) wave vector represents a propagating (decaying) wave. Here we divide our results into four regions as shown in Fig.~\ref{fig2}(c): region I $\left(0~\leq~\Omega~\leq \Omega_-\right)$, region II $\left(\Omega_-~\leq~\Omega~\leq 1\right)$, region III $\left(~1\leq~\Omega~\leq \Omega_+\right) $ and region IV $\left(~\Omega_+\leq\Omega\right) $, where $\Omega_{\pm}$ are frequencies that give $n_w=0$ [Eqs.~(\ref{eq:e1}), (\ref{eq:e2}), (\ref{eq:nw})]. \begin{align} \Omega_{\pm}=1/2\left(\pm\Omega_{b}+\sqrt{4+\Omega_{b}^2}\right). \label{eq:opm} \end{align} As defined before, $\Omega_{b}=e^2b/(2\pi^2\varepsilon_0\varepsilon_b\hbar\omega_p)$, where $b=0.37\textrm{\AA}^{-1}$ for pyrochlore~\cite{sushkov2015optical} and plasmon frequency is given by~\cite{hofmann2016surface} \begin{align} \omega_p=\sqrt{\frac{4\alpha}{3\pi}}\frac{E_{\textrm{F}}}{\hbar} \label{eq:wp} \end{align} with $\alpha=\frac{e^2}{\hbar v_{\textrm{F}}\varepsilon_0\varepsilon_b}$ and $v_{\textrm{F}}=4\times 10^{7}~\textrm{cm/s}$.~\cite{sushkov2015optical} From Fig.~\ref{fig2}(a), it is important to point out that we may have a propagating wave even at frequencies smaller than plasmon frequency $(\Omega< 1)$, in the shaded region II, which is in contrast with NM where an EM wave can propagate if $\Omega> 1$ [see inset of Fig.~\ref{fig2}(a)]. As shown in the inset of Fig.~\ref{fig2}(a), the refractive indices of WSM and NM differ only near $\Omega\simeq 1$. At $\Omega\gg 1$, they both converge to the value of $n\approx\sqrt{\varepsilon_b}$. It is important to note that the negative solution of Eq.~(\ref{eq:nw}) ($n_w^{-}$) is assigned to have propagating wave toward positive $z$ direction in the region II, which will be shown later. Let us calculate the reflection and transmission spectra. 
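Before doing so, it is useful to make the four regions concrete: Eqs.~(\ref{eq:e1}), (\ref{eq:e2}), (\ref{eq:nw}) and (\ref{eq:opm}) are simple to evaluate numerically. The following minimal sketch (our own illustration with hypothetical function names, not part of the original analysis) reproduces the qualitative structure of Fig.~\ref{fig2} for $\Omega_b=0.5$ and $\varepsilon_b=13$:
\begin{verbatim}
import numpy as np

eps_b, Omega_b = 13.0, 0.5   # background permittivity and b-parameter used in the text

def eps_rel(Omega):
    """Relative permittivities eps1/eps0 and eps2/eps0 of the WSM."""
    return eps_b*(1.0 - 1.0/Omega**2), eps_b*Omega_b/Omega

def nw_squared(Omega):
    """Square of the refractive index n_w in its dimensionless form."""
    e1, e2 = eps_rel(Omega)
    return (e1**2 - e2**2)/e1

# Region boundaries Omega_- and Omega_+ where n_w vanishes
Om_m = 0.5*(-Omega_b + np.sqrt(4.0 + Omega_b**2))
Om_p = 0.5*( Omega_b + np.sqrt(4.0 + Omega_b**2))
print(f"Omega_- = {Om_m:.2f}, Omega_+ = {Om_p:.2f}")   # ~0.78 and ~1.28

for Om in (0.3, 0.85, 1.2, 3.0):   # one sample frequency in each region I-IV
    kind = "propagating (real n_w)" if nw_squared(Om) > 0 else "evanescent (imaginary n_w)"
    print(f"Omega = {Om:4.2f}: n_w^2 = {nw_squared(Om):+8.2f} -> {kind}")
\end{verbatim}
Only region II ($\Omega_-\leq\Omega\leq 1$) and region IV ($\Omega\geq\Omega_+$) give a real $n_w$, in agreement with Fig.~\ref{fig2}.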
In NM with applied external magnetic field, the polarization of EM wave undergoes rotation as it enters the material if the direction of propagation is parallel to the direction of applied external magnetic field making the wave polarization not linear. In our case of WSM, we choose the propagation direction of \textit{the purely TM wave} $(E_y=0)$ to be \textit{perpendicular} to the "effective applied magnetic field", which is in the direction of the $\textbf{b}=b\mathbf{\hat{y}}$. Therefore, we expect no rotation of polarization and the wave polarization keeps linear as TM wave. This fact can also be deduced from the vanishing $\varepsilon_{xy}$ and $\varepsilon_{zy}$. As shown in Fig.~\ref{fig1}(b), the incident, reflected, and transmitted electric fields $\textbf{E}_{i}$, $\textbf{E}_{r}$ and $\textbf{E}_{t}$ can be written as \begin{align} \textbf{E}^{i}(z) &= \left(\cos\theta_i, 0, -\sin\theta_i\right)E^{i}_{0}\exp(ik_{vz}z),\label{eq:efield1}\\ \textbf{E}^{r}(z) &= \left(-\cos\theta_i, 0, -\sin\theta_i\right)E^{r}_{0}\exp(-ik_{vz}z),\label{eq:efield2}\\ \textbf{E}^{t}(z) &= \left(\cos\theta_t, 0, -\sin\theta_t\right)E^{t}_{0}\exp(ik_{wz}^{\pm}z),\label{eq:efield} \end{align} with $k_{vz}=(\omega/c)~\cos\theta_i$ and $k_{wz}^{\pm}=\omega (n_{w}^{\pm}/c)~\cos\theta_t$. The angles $\theta_i$ and $\theta_t$ are related each other by the Snell's law $\sin\theta_i=n_w^{\pm}\sin\theta_t$. The magnetic fields in the $y$ direction can be obtained from the relations $H^{i,r}_y=i\omega\int \varepsilon_0 E^{i,r}_x dz$ and $H^{t}_y=i\omega\int D^{t}_x dz$, where $D^{t}_x=\varepsilon_1E^{t}_x+i\varepsilon_2E^{t}_z$ is obtained from Eq.~(\ref{eq:D}). Then, the magnetic fields can be written as \begin{align} \textbf{H}^{i}(z) =& \frac{\omega\varepsilon_0}{k_{vz}} \left(0,\cos\theta_i,0\right) E^{i}_{0}\exp(ik_{vz}z),\label{eq:hfield1}\\ \textbf{H}^{r}(z) =& \frac{\omega\varepsilon_0}{k_{vz}} \left(0,\cos\theta_i,0\right) E^{r}_{0}\exp(-ik_{vz}z),\label{eq:hfield2}\\ \textbf{H}^{t}(z)=&\frac{\omega}{k_{wz}^{\pm}} \left(0,\varepsilon_1\cos\theta_t-i\varepsilon_2\sin\theta_t,0\right) E^{t}_{0}\nonumber\\&\times \exp(ik_{wz}^{\pm}z). \label{eq:hfield} \end{align} After defining the EM fields in both media, we can write down boundary conditions of the EM wave at incidence surface $\left(z=0\right)$ as follows, \begin{align} &E^{i}_{0}\cos\theta_i-E^{r}_{0}\cos\theta_i = E^{t}_{0}\cos\theta_t,\label{eq:bond1} \end{align} and \begin{align} &\frac{\omega\varepsilon_0}{k_{vz}} \left(E^{i}_{0} \cos\theta_i+E^{r}_{0}\cos\theta_i\right) \notag \\ &= \frac{\omega}{k_{wz}^{\pm}}\left(\varepsilon_1E^{t}_{0} \cos\theta_t-i\varepsilon_2E^{t}_{0}\sin\theta_t\right)\label{eq:bond2}, \end{align} where Eqs.~(\ref{eq:bond1}) and (\ref{eq:bond2}) describe the continuity for the tangential components of electric fields and magnetic fields at $z=0$, respectively. Reflection coefficient $r=E^{r}_{0}/E^{i}_{0}$ and transmission coefficient $t=E^{t}_{0}/E^{i}_{0}$ are given by \begin{align} r&=1-t \frac{\cos\theta_t}{\cos\theta_i}, \label{eq:rp} \end{align} and \begin{align} t&=\frac{2k_{wz}^{\pm}\varepsilon_0\cos\theta_i}{k_{vz} \left(\varepsilon_1\cos\theta_t - i\varepsilon_2\sin\theta_t\right) + k_{wz}^{\pm}\varepsilon_0\cos\theta_t}. \label{eq:tp} \end{align} \section{Results and Discussion} In Fig.~\ref{fig3}, we plot the reflection probability defined by $R=\left|r\right|^2$ as a function of $\theta_i$ for region I and III (that is $\Omega=0.3$ and 1.2, respectively). 
Fig.~\ref{fig3}(a) shows $R$ if we use $n_w^{+}$ and Fig.~\ref{fig3}(b) shows $R$ if we use $n_w^{-}$. From Fig.~\ref{fig3}, we can see that the incident EM wave will be totally reflected $R=1$ for all $\theta_i$ for both $n_w^{\pm}$ as shown in Fig.~\ref{fig3}(a) and (d), due to the purely imaginary $n_w^{\pm}$ given in Fig.~\ref{fig2}(a). The $\textbf{E}^{t}$ is decaying inside WSM, hence no transmitted energy into WSM. The most interesting case is region II, where we predict that WSM acquires a negative refractive index. In region II, we have a real $n_w^{\pm}$, which means that the wave propagation inside WSM is allowed, even though the wave frequency is smaller than the plasmon frequency. \begin{figure}[h!] \includegraphics[width=85mm]{Fig3} \caption{ The reflection probability (R) of the EM wave in a WSM for region I [$\Omega=0.3$] and region III [$\Omega=1.2$] for (a) $n_w^{+}$ and (b) $n_w^{-}$.} \label{fig3} \end{figure} Normally, we use $n_w^{+}$ which gives a positive value of $k_{wz}^{\pm}$ because the transmitted wave propagates toward positive $z$ direction [see Eq.~(\ref{eq:efield})]. However, $n_w^{+}$ for the transmitted wave in region II gives an unphysical $R>1$ , which means that at the point of incidence there is a flux of energy coming from the WSM side. We can infer from Eqs.~(\ref{eq:rp}) and (\ref{eq:tp}) that $R>1$ if $n_w^{+}$ is selected for region II. The reflection coefficient $r$ can be written as \begin{align} r&=\frac{A-B-iC}{A+B-iC}\\ &=r_{1}+ir_{2} \label{eq:rr} \end{align} where $A=\varepsilon_1\cos\theta_i\cos\theta_t$, $B= n_w^{\pm}\varepsilon_0\cos^2\theta_t$, $C=\varepsilon_2\cos\theta_i\sin\theta_i$. The reflection probability can be obtained from $R=r_{1}^2+r_{2}^2$, where we define \begin{align} r_{1}&=\frac{(A+B)(A-B)+C^2}{(A+B)^2+C^2}\label{eq:rria}\\ r_{2}&=\frac{2CB}{(A+B)^2+C^2}. \label{eq:rrib} \end{align} $R>1$ if either $r_{1}>1$ or $r_{2}>1$. Let us investigate the case of $r_1$. From Eq.~(\ref{eq:rria}), we can define the requirement in order to have $r_1<1$ giving us physically sound $R<1$, otherwise we will have unphysical $R>1$, \begin{align} \left|A-B\right|&<\left|A+B\right|\\ &\textrm{or}\nonumber\\ \left|\varepsilon_1\cos\theta_i- n_w^{\pm}\varepsilon_0\cos\theta_t\right|&<\left|\varepsilon_1\cos\theta_i+ n_w^{\pm}\varepsilon_0\cos\theta_t\right|. \label{eq:req} \end{align} To better visualize Eq.~(\ref{eq:req}), we plot $\left|A-B\right|$ and $\left|A+B\right|$ as a function of $\Omega$. From Fig.~\ref{fig4}(a), where $n_w^{+}$ is selected, $\left|A-B\right|>\left|A+B\right|$ in region II, which does not fulfill Eq.~(\ref{eq:req}) giving the unphysical $R>1$. On the other hand, from Fig.~\ref{fig4}(b), where $n_w^{-}$ is selected, $\left|A-B\right|<\left|A+B\right|$ in region II, which fulfills Eq.~(\ref{eq:req}) and we can have physically correct $R<1$. This negative solution ($n_w^{-}$) should be selected only for region II, because if we apply $n_w^{-}$ to region IV, we have an unphysical $R>1$, which is shown by Fig.~\ref{fig4}(b), in which $\left|A-B\right|>\left|A+B\right|$ for region IV. We argue later that the reason why $n_w^{-}$ is selected in region II for having transmitted wave toward positive $z$-direction, is due to the energy conservation. \begin{figure}[h!] \includegraphics[width=70mm]{Fig4} \caption{$\left|A-B\right|$ and $\left|A+B\right|$ as a function of $\Omega$ if we use (a) $n_w^{+}$ and (b) $n_w^{-}$. 
In region II, $n_w^{-}$ is selected to fulfill Eq.~(\ref{eq:req}), while in region IV, $n_w^{+}$ is selected. Otherwise, we will have unphysical $R>1$ in both regions. } \label{fig4} \end{figure} The negative refractive index of the WSM in region II will cause the wave to be refracted negatively, which means that the refracted angle $\theta_t$ is negative. The negative refractive index also means that the wave vector of the transmitted wave ($k_{wz}^{-}$) is negative.~\cite{saleh1991fundamentals,smith2000negative,ramakrishna2005physics,woodley2006backward} The negative wave vector does not mean that the transmitted wave propagates backward, which would violate the conservation of energy. The direction of propagation is better determined by the direction of the Poynting vector. By using Eqs.~(\ref{eq:efield}) and (\ref{eq:hfield}) at $z=0$, the power per unit cross section transmitted in the direction of $z$ can be expressed as \begin{align} I_t &=\textbf{S}_t\cdot\hat{\textbf{z}}\nonumber\\ &=\frac{1}{2}\textrm{Re}\left[\textbf{E}^{t}(0) \times\textbf{H}^{\textbf{*}t} (0)\right]\cdot \hat{\textbf{z}}\notag\\ &=\frac{c\left|t\right|^2\left| E^{i}_{0}\right|^2}{2n_w^{\pm}} \varepsilon_1\cos\theta_t. \label{eq:intensity} \end{align} In order to have the transmitted power propagate toward the positive $z$ direction, Eq.~(\ref{eq:intensity}) should have a positive value. Since $\varepsilon_1<0$ in region II [Eq.~(\ref{eq:e1})], while $\left|t\right|^2$, $\left| E^{i}_{0}\right|^2$, and $\cos\theta_t >0$, $n_w^{\pm}$ has to be \textit{negative} $(n_w^{-})$ in order to have $I_t>0$. On the other hand, $n_w^{+}$ is selected in region IV, because $\varepsilon_1>0$. We refer to the transmitted wave as a backward wave because the transmitted wave vector points towards the negative $z$-direction, as shown in Fig.~\ref{fig5}; otherwise it is a forward wave. In short, the negative refraction is needed for the propagation of the EM wave with frequency smaller than the plasmon frequency in order to \textit{conserve energy}. To show the negative refraction more explicitly, we calculate the tangential component of the transmitted Poynting vector with respect to the interface. The tangential component of the Poynting vector is given by \begin{align} \textbf{S}_t\cdot\hat{\textbf{x}}&=\frac{1}{2}\textrm{Re}\left[\textbf{E}^{t}(0) \times\textbf{H}^{\textbf{*}t} (0)\right]\cdot \hat{\textbf{x}}\nonumber\\ &=\frac{c\left|t\right|^2\left| E^{i}_{0}\right|^2}{2(n_w^{\pm})^2} \varepsilon_1\sin\theta_i. \label{eq:sx} \end{align} Because in region II $\varepsilon_1<0$ and all other terms are positive, $\textbf{S}_t\cdot\hat{\textbf{x}}<0$, which means that we have negative refraction. Therefore, in region II, we expect the light to be transmitted as a backward wave with negative refraction, as shown in Fig.~\ref{fig5}. It is also interesting to compare our case with hyperbolic metamaterials. The negative refraction phenomenon in WSM is similar to that in hyperbolic metamaterials, where we can obtain negative refraction without having negative magnetic permeability.
In hyperbolic metamaterials, due to the anisotropy of its dielectric tensor with respect to crystal axis, where the parallel and perpendicular component of dielectric tensor are opposite sign $(\bar{\bar{\varepsilon}}=\varepsilon_\perp\hat{\textbf{x}}\hat{\textbf{x}}+\varepsilon_\parallel[\hat{\textbf{y}}\hat{\textbf{y}}+\hat{\textbf{z}}\hat{\textbf{z}}]$, with $\varepsilon_\perp<0, \varepsilon_\parallel>0)$, the light can be refracted negatively as a forward wave.~\cite{poddubny2013hyperbolic,belov2003backward} This refraction phenomenon can also take place in bulk Rashba system, which can act as hyperbolic metamaterial at certain frequency range.~\cite{shibata2016theory} Therefore, due to the forward transmitted wave, in hyperbolic metamaterial the negative refraction can take place without having negative effective refractive index. This situation is different from our case for WSM, where the negative refraction takes place with backward transmitted wave, similar to Veselago medium. \begin{figure}[t] \includegraphics[width=60mm]{Fig5} \caption{The negative refraction in WSM. $\textbf{S}_t$ is the transmitted Poynting vector. $\textbf{k}_i, \textbf{k}_r, \textbf{k}_t$ are incident, reflected and transmitted wave vectors, respectively.} \label{fig5} \end{figure} If we use $\Omega_{b}=0.5$ we have $\omega_p=800~\textrm{THz}$ and the corresponding region II can be found within $\left(625\leq\omega\leq 800~\textrm{THz}\right)$. If we use $\omega_p=9~\textrm{THz}$ which is measured in experiment,~\cite{sushkov2015optical} $\Omega_{b}=44$ and the corresponding region II can be found within $\left(0.2\leq\omega\leq 9~\textrm{THz}\right)$. \begin{figure}[h!] \includegraphics[width=85mm]{Fig6} \caption{The $R$ and $T$ spectra as a function of $\theta_i$ shown as solid and dashed line, respectively, for (a) region II ($\Omega=0.85$), (b) region IV ($\Omega=3$). In (a) the negative solution of $n_w$ is used, while in (b) the positive one is used. In both cases, $R+T=1$. (c) The $R$ and $T$ spectra as a function of $\Omega$ with fixed $\theta_i=20^\circ$ shown as solid and dashed line, respectively. In shaded regions II and IV, the wave is transmitted. However, only in region II we expect that negative refraction could occur. } \label{fig6} \end{figure} Using Eq.~(\ref{eq:intensity}), the transmission probability $T$ is given by \begin{align} T = \frac{I_t}{I_i} = \frac{1}{n_w^{\pm}} \frac{\varepsilon_1}{\varepsilon_0} \frac{\cos\theta_t}{\cos\theta_i}\left|t\right|^2, \label{eq:t} \end{align} where $I_i=(c/2)\left| E^{i}_{0}\right|^2\varepsilon_0\cos\theta_i$ is the incident intensity. The reflection probability $R$ is given by $R=\left|r\right|^2$. In Figs.~\ref{fig6}(a) and \ref{fig6}(b) we show the $R$ and $T$ spectra for region II ($\Omega=0.85$) and region IV ($\Omega=3$), where the EM wave propagation is allowed. In the case of region II, we adopt the $n_w^{-}$, while in the case of region IV, we adopt $n_w^{+}$. In region IV, the WSM acts as a NM for $\Omega>1$. Figure~\ref{fig6}(b) shows $R=0$ at $\theta_i=\arctan~n_w$, which corresponds to the Brewster angle. In both cases, we found $R+T=1$. In Fig.~\ref{fig6}(c), we plot the $R$ and $T$ spectra as a function of $\Omega$ at a fixed incident angle $\theta_i=20^\circ$. In region II, we expect that the negative refraction can take place. In NM, all EM wave is reflected in the region II due to the imaginary transmitted wave vector. 
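As a consistency check of Eqs.~(\ref{eq:rp}), (\ref{eq:tp}) and (\ref{eq:t}), the energy balance $R+T=1$ can also be verified numerically. The sketch below is our own illustration (not part of the original analysis, with function names of our choosing); it works with the relative permittivities $\varepsilon_1/\varepsilon_0$ and $\varepsilon_2/\varepsilon_0$ and selects $n_w^{-}$ in region II and $n_w^{+}$ in region IV, as discussed above:
\begin{verbatim}
import numpy as np

eps_b, Omega_b = 13.0, 0.5

def eps_rel(Omega):
    return eps_b*(1.0 - 1.0/Omega**2), eps_b*Omega_b/Omega

def R_and_T(Omega, theta_i_deg, branch):
    """R and T of a TM wave, following the r, t and T expressions above."""
    e1, e2 = eps_rel(Omega)
    nw = branch*np.sqrt((e1**2 - e2**2)/e1 + 0j)   # n_w^+ or n_w^-
    ti = np.radians(theta_i_deg)
    si, ci = np.sin(ti), np.cos(ti)
    st = si/nw                                     # Snell's law
    ct = np.sqrt(1.0 - st**2)
    t = 2.0*nw*ct*ci/(ci*(e1*ct - 1j*e2*st) + nw*ct**2)
    r = 1.0 - t*ct/ci
    return abs(r)**2, float(np.real((e1/nw)*(ct/ci)*abs(t)**2))

for Omega, branch, label in [(0.85, -1, "region II, n_w^-"),
                             (3.00, +1, "region IV, n_w^+")]:
    R, T = R_and_T(Omega, 20.0, branch)
    print(f"{label}: R = {R:.3f}, T = {T:.3f}, R+T = {R+T:.3f}")
\end{verbatim}
For both test frequencies the sum $R+T$ returns unity to numerical precision, while choosing the opposite branch in region II gives $R>1$, illustrating the argument made above.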
The region II of WSM, the $R$ gradually decreases with increasing $\Omega$ because the transmitted wave vector acquires real value, which signifies the transmission of the incident wave to WSM. After reaching the minimum of $R$ at $\Omega=0.9$, the reflection probability increases gradually up to $R=1$ at $\Omega=1$, above which the transmitted wave vector has only imaginary value that makes $T=0$. It is important to note that the negative refraction in WSM occurs only in region II, which has frequency range close to $\omega_p$, which can be seen in Figs.~\ref{fig2} and~\ref{fig6}(c). Because $\omega_p$ depends on $E_{\textrm{F}}$ [See Eq.~(\ref{eq:wp})], by controlling the $E_{\textrm{F}}$, we can control the frequency, where negative refraction occurs, which will be discussed as below. \begin{figure}[t] \includegraphics[width=85mm]{Fig7} \caption{(a) The $\Lambda_-$ as a function of $E_{\textrm{F}}$. (b) The real part of $n_w^{-}$ for $E_{\textrm{F}}=0.2~\textrm{eV}$ and $0.9~\textrm{eV}$. } \label{fig7} \end{figure} It is also useful to have a parameter that gives us information whether or not we have negative refraction for a given $E_{\textrm{F}}$. By using Eq.~(\ref{eq:wp}), the frequency range of region II, where we expect the negative refraction, can be rewritten as $\Lambda_-<\frac{\hbar\omega}{E_{\textrm{F}}}<\sqrt{\frac{4\alpha}{3\pi}}$, where $\Lambda_-=\sqrt{\frac{4\alpha}{3\pi}}\Omega_-$ and $\sqrt{\frac{4\alpha}{3\pi}}=0.423$. $\Omega_-$ is given by Eq.~(\ref{eq:opm}). $\Lambda_-$ is plotted in Fig.~\ref{fig7}(a) as a function of $E_{\textrm{F}}$. Hence, by taking ratio of EM wave energy ($\hbar\omega$) and $E_{\textrm{F}}$, we can predict whether the negative refraction occurs by using Fig.~\ref{fig7}(a). In Fig.~\ref{fig7}(b), we plot the $n_w^{-}$ as a function of frequency $(\omega)$ in a real unit for $E_{\textrm{F}}=0.2~\textrm{eV}$ and $0.9~\textrm{eV}$. Increasing $E_{\textrm{F}}$ will shift the region II and region IV to higher frequency. The frequency range of region II monotonically increases with increasing $E_{\textrm{F}}$. Note that $\alpha$ depends on $v_{\textrm{F}}$ [see Eq.~(\ref{eq:wp})]. \section{Conclusion} In conclusion, we have shown theoretically that negative refraction can occur in the WSM, which is justified from its reflection spectra. The refractive index of WSM is negative at a specific frequency range close to the plasmon frequency. The negative refractive index is required for the propagation of TM EM wave with frequency smaller than the plasmon frequency in the direction perpendicular to the separation of Weyl nodes to conserve the energy and to obtain the physically correct solution. We suggest that by using only the WSM, it is not necessary to make a complicated structure of metamaterials to obtain negative refraction. It would be desired if the phenomenon could be measured in future experiments. \begin{acknowledgments} M.S.U is supported by the MEXT scholarship, Japan. A.R.T.N. acknowledges the Leading Graduate School in Tohoku University. R.S. acknowledges JSPS KAKENHI Grant Numbers JP25107005 and JP25286005. \end{acknowledgments}
\section{Introduction}\label{sec_introduction} The interaction of an intense laser pulse with an over-dense plasma, possessing a sharp density gradient, can result in accelerate charged particles with relativistic velocities \citep{wilks92,wilks97,kruer85, brunel87,brunel88,kruer,ren06,beg97}. The irradiation of structured targets \citep{kluge12, jiang14}, such as periodic grooves (gratings) on a metal surface, by ultra short laser pulses are of particular interest for generating intense Surface Plasma Waves (SPWs), that can store the laser energy and efficiently accelerate electrons. In this scenario, high energy transfer from the laser to the plasma is achieved when the frequency and wavelength of the interacting laser pulse match those given by the SPW's dispersion relation \citep{raether88, kaw70, bigongiari11}. The high intensity and ultra-short laser-plasma interaction regime ($\leq 10^{19}$ W/cm$^2$ and $\leq100 f$s), showed that a significant percentage of electrons trapped in the SPW can be accelerated along the surface in the range of $\sim 10$ MeV~\citep{ceccotti13,riconda15,naseri13,Willingale11,Willingale13,fedeli:16}. High charge electron bunches (up to $\sim 650$ pC) were also observed \citep{fedeli:16, cantono18, raynaud20, zhu20, marini} with applications including the generation of bright sources of ultra-short pulsed X-rays, ultra-fast electron diffraction, tabletop electron accelerators, and ultra-fast electron spectroscopy \cite{Azamoum, liu, tokita,lupetti}. Recently, a scheme exploiting up to date laser techniques was proposed for controlling the duration and amplitude of SPWs by which a laser with an intensity of a few $ 10^{19}$ W/cm$^2$ and a pulse duration of a few tens of $f$s should be able to accelerate electrons up to $\sim 70$MeV \cite{marini}. Surprisingly, in these experiments and simulations, the non-relativistic cold dispersion relation successfully defined the conditions of the SPW excitation with laser beam intensity up to $\sim 10^{19}$ W/cm$^2$. Extending the regime of ultra-high laser intensity interaction beyond $10^{21}$ W/cm$^2$ can result in surface waves with extremely large amplitudes at the over-dense plasma surface, potentially allowing to obtain unprecedentedly high currents of energetic electrons as well as emitting radiation with interesting characteristics. However, the excitation and survival of these SPWs in the ultra-high laser intensity regime remains an open question, as in this limit the plasma grating can evolve on relatively short time scales, and nonlinear effects can affect the dispersion relation in the relativistic regime. In this paper we determine the conditions for improving laser-plasma energy transfer as well as accelerating charged particles by the SPW excitation mechanism in an over-dense plasma with a grating, in the ultra-high laser intensity regime of interaction. We employed 2D Particle-In-Cell (PIC) simulations for laser intensities ranging from $10^{16}$ to $10^{22}$ W/cm$^2$, for various angles of incidence. The influence of both the plasma density and the grating depth of the modulated plasma surface were investigated since previous studies identified them as important parameters in SPW excitation \citep{fedeli:16, cantono18, raynaud20, zhu20, marini}. The paper is organized as follows: section \ref{parameter_simul} describes the PIC simulation setup with parameters closely corresponding to recent experiments \citep{fedeli:16, cantono18}. 
Section \ref{opt_angle} analyses SPW excitation as a function of laser incidence and intensity. The results are then compared to analytical values obtained by the dispersion relation for cold SPWs and a heuristic relativistic correction. The importance of considering high density plasma to maintain SPW excitation in the ultra relativistic regime is shown. Section \ref{elec} studies the behavior of accelerated electrons along the plasma surface. A strong correlation is demonstrated between the angle of SPW excitation and the laser's angle of incidence that optimizes electron acceleration along the plasma surface. The section \ref{depth} investigates the influence of the grating depth at higher laser intensities. Then, in the last section our conclusions are presented. \section{Parameter of the simulations}\label{parameter_simul} 2D3V PIC simulations have been performed with the open-source code SMILEI \citep{smilei}. The geometry is depicted in Fig. \ref{fig:LaserPulseSetup} where the plasma lies in the $(x,y)$ plane for $x\geq 0$, its surface being along the $y$ direction. \begin{figure}[ht] \begin{center} \includegraphics[width=4.cm]{fig1.pdf} \caption{Simulation set-up: the laser beam is focused thought an angle $\theta_{inc}$ over the interface of the plasma target with constant electron density $n_0$, grating depth $h$ and period $d$. Here, the red-blue scale represents the magnetic field amplitude of the laser pulse impinging over the target.} \label{fig:LaserPulseSetup} \end{center} \end{figure} The driven laser is a $P-$polarized Gaussian pulse with a waist equal to $5\lambda_0$ ($=4\mu$m) and a pulse duration equal to $\tau_L=10\lambda_0/c$ ($\simeq27$ fs) full width at half maximum (FHWM), where $c$ is the speed of light in vacuum, and $\lambda_0=0.8\mu m$ is the chosen laser wavelength. The laser pulse impinges the plasma interface through an angle $\theta_ {inc}$ in relation to the normal surface along the $x-$direction. The plasma grating has constant electron density $n_0$ with a sinusoidal-modulated vacuum-plasma interface located at $x_g(y) = (h/2)\,\sin(2\pi y/d)$ where $h$ is the grating depth and $d$ the period. In all cases studied, we considered $d=2\lambda_0$ ($=1.6\mu$m) and we used $h=0.1 \lambda_0$ ($=0.08\mu$m) or $0.4 \lambda_0$ ($=0.32\mu$m) for the grating depth. The plasma consists of electrons with a small initial temperature of $T_e = 50{\rm eV}$ as well as a neutralizing background of ions free to move in the space with initial temperature $T_i/(ZT_e)=0.1$, where $Z=1$ is the atomic number. In the systematic study we have performed, we selected two values for the plasma density: $n_0=100n_c$ and $n_0=200 n_c$ where $n_c= \epsilon_0 m_e \omega_0^2/e^2$ ($\omega_0$ is the laser frequency and $\epsilon_0$ the vacuum permittivity). These values are chosen in order to study the theoretical dependence on the plasma density and are compatible with the plasma density obtained in experiments by ionizing solid gratings \citep{ceccotti13, fedeli:16, cantono18, zhu20}. Additionally, we varied the laser field strength (normalized vector potential $a_0 \equiv e E_0/(m_e c\,\omega_0)$) from $a_0=0.1$ ($ \sim \times 10^{16}$ W/cm$^2$) to $a_0=50$ ($\sim 4\times 10^{21}$ W/cm$^2$) as may be reached on forthcoming multi-petawatt laser systems, see {\it e.g.} Refs.~\cite{apollon,jeong14}. 
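For reference, the physical values quoted above (the critical density for $\lambda_0=0.8~\mu$m, the pulse duration in femtoseconds) and the grating profile $x_g(y)$ can be reproduced with a few lines. This is a small illustrative sketch of the derived quantities only, not the simulation input deck:
\begin{verbatim}
import numpy as np
from scipy.constants import c, e, m_e, epsilon_0

lam0 = 0.8e-6                            # laser wavelength [m]
omega0 = 2.0*np.pi*c/lam0                # laser angular frequency [rad/s]
n_c = epsilon_0*m_e*omega0**2/e**2       # critical density [m^-3]
print(f"n_c = {n_c*1e-6:.2e} cm^-3, 100 n_c = {100*n_c*1e-6:.2e} cm^-3")

# Pulse duration tau_L = 10 lambda0/c (FWHM), quoted as ~27 fs in the text
print(f"tau_L = {10*lam0/c*1e15:.1f} fs")

# Sinusoidal vacuum-plasma interface x_g(y) = (h/2) sin(2 pi y / d)
d, h = 2.0*lam0, 0.1*lam0
y = np.linspace(0.0, 2.0*d, 9)
print("x_g(y)/lambda0 =", np.round(0.5*h*np.sin(2.0*np.pi*y/d)/lam0, 3))
\end{verbatim}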
For any given ($n_0,a_0$), we have performed a parametric scan varying the incidence angle of the laser from typically $\theta_{\rm inc} = 28^{\circ}$ to $\theta_{\rm inc} = 50^{\circ}$ in order to extract the optimal condition for SPW excitation. In these simulations, the box extends over $20\lambda_0$ ($=16\mu$m) in the $x$-direction [roughly $16\lambda_0$ ($=12.8\mu$m) of vacuum and $4\lambda_0$ ($=3.2\mu$m) of plasma], and $64\lambda_0$ ($=51.2\mu$m) in the $y$-direction. The spatial resolution was set to $\Delta x = \Delta y = \lambda_0/128$ ($=0.00625\mu$m). The simulation time step is chosen to be $\Delta t = 0.95~\Delta x/\sqrt{2}$ that corresponds to 95\% of the Courant– Friedrich– Lewy (CFL) condition for the standard finite-difference time-domain (FDTD) solver~\cite{nuter2014}. Every cells contains initially $16$ randomly distributed particles of each species (electrons and ions). Electromagnetic field boundary conditions are injecting/absorbing in $x$ and periodic in $y$. Particle boundary conditions in $x$ are reflecting (left) or thermalizing (right), and periodic in $y$. The simulations were run over until particles or radiation get the position $y=60\lambda_0$ ($=48\mu$m), which determines the final simulation time $t=t_f$. Notice that $t_f$ varies according to the laser incidence angle and it gets larger as $\theta_{inc}$ increases. \section{Resonance condition for SPW excitation at high intensity}\label{opt_angle} In order to evidence the condition for SPW excitation as function of the laser intensity, we perform a set of simulations with intensity corresponding to $a_0$ varying from $a_0=0.1$ to $a_0=50$ and incident angle ranging from $\theta_{inc}=28^{\circ}$ to $50^{\circ}$. The plasma grating period and depth are kept constant. Initially, the depth is chosen as $h=0.1\lambda_0$, so that corrections to the dispersion relation due to finite depth are negligible. The SPW dispersion relation in the cold plasma non relativistic limit is \citep{kaw70}: \begin{eqnarray}\label{eq:dispRel} \frac{c^2k^2}{\omega^2} = \frac{\omega_p^2/\omega^2-1}{\omega_p^2/\omega^2-2}, \end{eqnarray} $k$ and $\omega$ are the SPW wavelength and the frequency, and $\omega_p$ is the plasma frequency. In the presence of high-intensity lasers plasma interaction, and in particular when the laser electric field $E_0$ becomes of the order of $m_e c \omega_0/e$ ({\it i.e.} for a normalized vector potential $a_0 \equiv e E_0/(m_e c\,\omega_0) \gtrsim 1$), it has been proposed \citep{Akhiezer, macchi01,siminos12, raynaud18, macchi18} to correct the response of the electrons by considering an effective electron mass $m_e \rightarrow \gamma_0\,m_e$, with $\gamma_0 \simeq \sqrt{1+a_0^2/2}$ the Lorentz factor of an electron in a plane wave with normalized vector potential $a_0$. In the case of SPW excitation by the laser, we thus consider a heuristic correction to the dispersion relation by replacing $\omega_p^2/\omega^2 \equiv \omega_p^2/\omega_0^2$ by $\omega_p^2/(\gamma_0\,\omega_0^2)$. As a consequence, correcting the phase-matching condition leads a $a_0$-dependent optimal angle of incidence for the surface plasma wave excitation: \begin{eqnarray}\label{eq:optAngle} \theta_{\rm opt}(a_0) = \arcsin\left( \sqrt{\frac{n_0/(\gamma_0 n_c)-1}{n_0/(\gamma_0 n_c)-2}} - \frac{\lambda_0}{d}\right)\,. \end{eqnarray} This results in an optimal angle, $\theta_{opt}$ that increases with the amplitude of the SPW field. For $a_0 \gg 1$ it depends on the parameter $n_0/(\gamma_0 n_c)\sim \sqrt{2} n_0/(a_0 n_c) $. 
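Equation~(\ref{eq:optAngle}) is straightforward to evaluate. The short sketch below (our own illustration; the function name is hypothetical) gives the heuristic resonance angle for the two densities used here and shows how it drifts upward with $a_0$, which is the scaling tested against the simulations below:
\begin{verbatim}
import numpy as np

def theta_opt(a0, n_over_nc, d_over_lam=2.0):
    """Heuristic SPW resonance angle [deg] with the relativistic correction."""
    gamma0 = np.sqrt(1.0 + a0**2/2.0)     # cycle-averaged Lorentz factor
    x = n_over_nc/gamma0                  # n0/(gamma0 n_c)
    return np.degrees(np.arcsin(np.sqrt((x - 1.0)/(x - 2.0)) - 1.0/d_over_lam))

for n_over_nc in (100, 200):
    vals = ", ".join(f"{theta_opt(a0, n_over_nc):.1f}" for a0 in (0.1, 1, 10, 30))
    print(f"n0 = {n_over_nc} n_c: theta_opt = {vals} deg for a0 = 0.1, 1, 10, 30")
\end{verbatim}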
In order to verify the validity of this scaling, we considered two electron densities, $n_0 = 100n_c$ and $200n_c$. As detailed in the following we find in simulations that at high intensity the resonance is quite broad. Although for values of $n_0/(a_0 n_c) \lesssim 10$ the correction to the dispersion relation can improve the coupling of the laser with plasma. We notice no further improvement for higher value of $a_0$, and the resonance angle becomes roughly independent of $a_0$. We can then conclude that Eq.~\eqref{eq:optAngle} does not hold at ultra-high intensity. To show this let us recall that SPW are TM-modes, so their signature can be sought by inspecting the Fourier transform of the $B_z$ component of the magnetic field. Taking into account that the SPW and incident/reflected laser waves have different dispersion relations, filtering in ($k_x,k_y$) Fourier space allows to extract the component associated to the SPW. Then an inverse Fourier transform is done to obtain the $B_z$ component of the SPW magnetic field in the reconstructed real space domain. \begin{figure}[ht] \begin{center} \includegraphics[width=8.cm]{fig2.pdf} \caption{SPW $B_z$ field amplitude evolution for $a_0=20$, $n_0=200n_c$ and $h=0.1\lambda_0$, and laser incidence angle in between $30^\circ$ and $45^\circ$, $t=0$ corresponds to the instant of time when the laser pulse reaches the plasma. \label{fig:SPWevolution01}} \end{center} \end{figure} The time evolution of the maximum amplitude of the SPW $B_z$ field normalized to $a_0$ for a typical case ($a_0=20$, $n_0=200n_c$, $h=0.1\lambda_0$ and different values of the laser incidence angle between $30^\circ$ and $45^\circ$) is reproduced in Fig.~\ref{fig:SPWevolution01}. The field component reaches a maximum around $t=12\lambda_0/c$ for an incidence angle of $33^\circ$, named hereafter $\theta_{opt}$ with $t=0$ corresponding to the time when the laser pulse reaches the plasma surface. We notice that the SPW field amplitude does not become larger than the laser field $a_0$, as opposed to what has been found for longer pulses and lower intensities \cite{bigongiari11}. In this short pulse regime ($\simeq27f$s) the SPW excitation does not have time to reach the stationary regime. From the figure we can also see that the resonance condition is not sharp. A laser incident at angles close to the optimal values excite a field with very similar behaviour to the optimal one. This is also due, as discussed in Ref.~\cite{raynaud20}, to the fact that the width of the incident laser transverse profile induces a spectral mode distribution of the SPW which induces an angular width for the $\theta_{opt}$ equals here to $\sim 4^\circ$. In Fig.~\ref{nfig3} we report the optimum laser incidence $\theta_{opt}$ (red dots and error-bar) as a function of $a_0$ for the two plasma densities considered. The $\theta_{opt}$ is obtained by considering for each $a_0$ the angle that corresponds to the peak value of SPW $B_z$ in time (following the same procedure that is illustrated in Fig. \ref{fig:SPWevolution01}). In the panels, the error-bars measure the uncertain measuring the peak value of SPW $B_z$ while the gray shadow identifies the region where $\max|B_{z}^{SP}|\gtrsim 0.85 \max|B_{z}^{SP}|$. As $a_0$ increases, and in particular for $a_0 \gtrsim n_0/(10 n_c) $, the incertitude in determining the optimum angle of the SPW $B_z$ becomes large since many angles correspond more or less to the same maximum value of the field. 
Moreover, when increasing $a_0$, the normalised amplitude of the field $B^{SP}_z/a_0$ decreases. We notice that going from $a_0\sim 1$ to $a_0 \sim n_0/(10 n_c) $ results in a reduction of the field amplitude of $\approx45\%$. Further increasing $a_0$ and taking $a_0 \sim n_0/(4 n_c) $ results in a field amplitude reduction of $\approx60\%$ in relation to the field observed when $a_0=1$ (not shown here). \begin{figure}[!ht] \begin{center} \includegraphics[width=4.2cm]{nfig3a.pdf} \includegraphics[width=4.2cm]{nfig3b.pdf} \caption{In red (rounds) laser angles of incidence that optimizes the SPW $B_z$ field amplitude as a function of the laser strength parameter $a_0$ for (a) $n_0= 100n_c$, and (b) $n_0=200n_c$. The gray region represents the laser angles of incidence where $\max|B_{z}^{SP}|\gtrsim 0.85 \max|B_{z}^{SP}|$. In blue (squares) we report the results from simulations assuming immobile ions. In both cases, $h=0.1\lambda_0$. The solid (dashed) black line represents the expected value obtained using the dispersion relation for the cold SPW limit with the heuristic relativistic correction as a function of $a_0$ ($a_0/5$) (see the discussion in the text).} \label{nfig3} \end{center} \end{figure} In Fig. \ref{nfig3} we also plot in black the expected value obtained using Eq. (\ref{eq:optAngle}). As anticipated, while at first the values obtained in the simulations fit the equation, for larger values of $a_0$ the resonance angle becomes roughly independent of $a_0$. The threshold, noted $a_{0,T}$ in the following, is about $a_{0,T}=10$ in the case when $n_0=100n_c$ and increases up to $20$ when $n_0=200n_c$ (or, equivalently, $n_0/(a_{0,T} n_c) \sim 10$). As we can see, even if Eq. (\ref{eq:optAngle}) does not hold, the parameter $n_0/(a_0 n_c)$ is a relevant quantity to describe the laser plasma coupling and the SPW excitation. More importantly, this parameter shows the importance of considering higher density plasma to maintain SPW excitation in the ultra relativistic regime. In Eq. (\ref{eq:optAngle}) the heuristic correction to the dispersion relation is obtain using the laser parameter $a_0$. In the present simulations the SPW maximum field amplitude is always smaller than $a_0$, and typically, as shown in Fig. \ref{fig:SPWevolution01}, of the order of $a_0/5$. Therefore, for reference we also report in dashed black line in Fig. \ref{nfig3} the result from Eq.~\eqref{eq:optAngle} considering $a_0/5$ instead of $a_0$ in the $\gamma_0$ function. \begin{figure}[ht] \begin{center} \includegraphics[width=4.2cm]{fig4a.pdf} \includegraphics[width=4.2cm]{fig4b.pdf} \caption{SPW $B_z$ field amplitude evolution at $\theta_{inc}=33^{\circ}$ with time for (a) $n=100n_c$, $a_0=27$, and (b) $n=200n_c$, $a_0=50$. $t=0$ corresponds to the instant of time when the laser pulse reaches the plasma. } \label{nfig4} \end{center} \end{figure} Increasing $a_0$ increases the laser pressure, which may alter the grating and suppress the SPW excitation. To check the importance of this effect and to verify if the relativistic correction of the dispersion relation (Eq.~\eqref{eq:optAngle}) is recovered, we also performed a set of simulations with immobile ions (represented by blue squares in Fig.~\ref{nfig3} and a blue dashed line in Fig.~\ref{nfig4}). As we can see, the optimal angle is barely modified when the ions are immobile. 
However, as shown in Fig.~\ref{nfig4} where we plot the SPW field amplitude evolution with time for two densities and $a_0>a_{0,T}$, in the case of immobile ions the SPW field survives a longer time and peaks to higher values. This means that the grating deformation affects the SPW field on time scales larger than few laser periods ($\sim 12\lambda_0/c$ here). Above $a_{0,T}$ the damping of the SPW by the electrons is large, resulting in strong electron acceleration along the surface trapped in the SPW \citep{riconda15,fedeli:16, raynaud20}. In the next section of this paper we consider the SPW evolution as related to the electron dynamics along the grating. \section{Electron acceleration along the plasma surface}\label{elec} As mentioned in the introduction, SPW excitation resulting from high intensity ultra-short laser plasma interaction ($\leq 10^{19}$ W/cm$^2$ and $\leq100 f$s) has been shown to be an efficient way to increase the acceleration of high charge electron bunches along the plasma surface up to $\sim 10$MeV and $\sim 650$pC \citep{cantono18,riconda15,naseri13,Willingale11,marini,Willingale13,fedeli:16,raynaud20,zhu20}. Using the same laser intensities and plasma densities as in the previous section, we will first analyze the maximum energy of the electrons that propagate along the plasma surface as a function of the laser angle of incidence. The results are summarized in Fig. \ref{fig3new} where we report the optimal laser's angle of incidence, $\theta^e_{opt}$ (which optimizes the formation of high energetic electron bunches propagating along the plasma surface) as a function of the laser strength parameter $a_0$ for (a) $n_0= 100n_c$, and (b) $n_0=200n_c$ (case $h=0.1\lambda_0$). To identify the electrons that propagate along the surface, we have defined the emission angle $\phi_e=\tan^{-1}(p_y/p_x)$ and selected electrons with $\phi_e=90^{\circ}\pm 3^{\circ}$. The bars indicates the range of angles of the laser incidence giving the highest electron energy. This was determined by analyzing for each angle the energy spectrum of the electrons propagating along ($\phi_e=90^{\circ}\pm 3^{\circ}$) the plasma surface. Notice that for the angles considered in the error bar the electron peak energy is about the same within a percentage of up to $10\%$. \begin{figure}[ht] \begin{center} \includegraphics[width=4.2cm]{fig5a.pdf} \includegraphics[width=4.2cm]{fig5b.pdf} \caption{In red (bars), angle of incidence of the laser that optimize electron bunches energy propagating along the plasma surface ($\theta_{opt}^e$) as a function of the laser strength parameter $a_0$ for (a) $n_0= 100n_c$, and (b) $n_0=200n_c$. In blue (bars) is reported results from simulations assuming immobile ions. In both cases, $h=0.1\lambda_0$. The solid black line reports the optimal angle of SPW excitation obtained using the dispersion relation for cold SPW with the heuristic relativistic correction (see the discussion in the text).} \label{fig3new} \end{center} \end{figure} As before, we have considered both mobile and immobile ions with the same color code as in Fig.~\ref{nfig3} (red - mobile, blue -immobile). Comparing Fig.~\ref{nfig3} and Fig.~\ref{fig3new} we find at low laser intensity a strong correlation between the optimum angle of SPW excitation and the laser angle of incidence that optimize the electron acceleration along the plasma surface. 
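The selection of surface electrons described above (emission angle $\phi_e=\tan^{-1}(p_y/p_x)$ within $90^{\circ}\pm 3^{\circ}$) amounts to a simple post-processing step. The snippet below sketches it with synthetic momentum data standing in for the PIC output, so the numbers it prints are purely illustrative:
\begin{verbatim}
import numpy as np

# Synthetic stand-in for the macro-particle momenta (units of m_e c)
rng = np.random.default_rng(0)
px = rng.normal(0.0, 5.0, 100_000)
py = rng.normal(20.0, 10.0, 100_000)

# Emission angle; propagation along the surface (+y) corresponds to 90 deg
phi_e = np.degrees(np.arctan2(py, px))
along_surface = np.abs(phi_e - 90.0) <= 3.0

gamma = np.sqrt(1.0 + px**2 + py**2)      # p_z neglected in this 2D sketch
energy_MeV = 0.511*(gamma - 1.0)

counts, edges = np.histogram(energy_MeV[along_surface], bins=100)
peak = 0.5*(edges[:-1] + edges[1:])[counts.argmax()]
print(f"fraction within +/- 3 deg of the surface: {along_surface.mean():.3f}")
print(f"peak of their energy spectrum: {peak:.1f} MeV")
\end{verbatim}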
The optimum angle giving the highest energy of the electron bunch propagating along the surface is $\sim 31^{\circ}$ for $a_0\sim 1$ and increases slightly up to $\sim 33^{\circ}$ with $a_0$ until it reaches $a_{0,T}$. This confirms the robustness of the SPW excitation in this range of intensity. Above $a_{0,T}$, we observe for the realistic simulations (mobile ions) that the laser incidence angle that optimizes the electron bunch propagating along the surface is no longer the same one that optimizes the SPW field. The transition occurs for $a_0$ around $20$ if the plasma density is $n_0 = 100n_c$, and around $30$ if $n_0 = 200n_c$. However, when considering simulations with immobile ions (blue bars) we recover the result of the previous Fig.~\ref{nfig3}: the optimal angle for electron acceleration coincides with the optimal angle for SPW excitation. This shows that the electron dynamics are sensitive to the grating deformation. Other acceleration mechanisms along and across the surface, associated with laser absorption, have been suggested \citep{Macchi19}: indeed we find that above $a_{0,T}$ acceleration by the SPW is not the main mechanism of electron acceleration, and the angular distribution of the fast electrons is much wider. This will be discussed in more detail in the next section, but we can anticipate that the analysis of the electron phase space confirms this hypothesis, since above $a_{0,T}$ the electron velocity distribution does not show the characteristic behaviour of acceleration by the SPW, namely bunches with periodicity equal to the SPW wavelength, directed along the surface \cite{raynaud04}. Finally, we checked the effect of the laser on the plasma surface by examining the spatial ion density distribution on two different time scales. Both an increase in plasma density due to the radiation pressure and an expansion of the plasma are observed (not reported here). On the short time scale (comparable to the laser pulse duration), the diffraction grating is distorted and the plasma is pushed, which results in a large increase in the local plasma density. On the second, longer time scale, a few cycles after the laser-plasma interaction, the plasma expansion creates an under-dense region in front of the target. This might also have a major effect on the laser absorption mechanism and on the optimal angle for electron acceleration. The effect of an under-dense sheet in front of the plasma surface has been investigated in Ref.~\citep{zhu20}. To overcome the possible limitation of SPW-laser coupling at high laser intensity, we now consider the influence of the target grating depth which, when chosen appropriately, can significantly improve the acceleration by SPW. \section{Recovery of SPW acceleration by adapting the grating depth}\label{depth} In laser-solid interaction, and also at high laser intensity where a plasma is created, it is well known that the ratio between the target grating depth and the grating periodicity plays a major role in the SPW excitation \cite{Hutley, raynaud18}. Thus here, in order to find the optimum grating parameters for SPW excitation in the ultra-high laser intensity regime ($a_0\geq 25$), we have redone the PIC simulations increasing the grating depth of the plasma to $h=0.4\lambda_0$.
\begin{figure}[ht] \begin{center} \includegraphics[width=4.2cm]{fig6a.pdf} \includegraphics[width=4.2cm]{fig6b.pdf} \caption{Optimal angle of incidence of the laser that optimize electron bunches energy propagating along the plasma surface ($\theta_{opt}^e$) as a function of the laser strength parameter $a_0$ for (a) $n_0= 100n_c$, and (b) $n_0=200n_c$ (case $h=0.1\lambda_0$ in red and $h=0.4\lambda_0$ in green). The black line reports the expected value obtained using the dispersion relation for cold SPW with the heuristic relativistic correction (see the discussion in the text).}\label{fig7new} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=8.cm]{fig7a.pdf} \includegraphics[width=8.cm]{fig7b.pdf} \caption{SPW $B_z$ field amplitude evolution with time for $a_0=30$, $n_0=200n_c$, laser incidence angle in between $30^\circ$ and $45^\circ$, and $h=0.1\lambda_0$ (a) and $h=0.4\lambda_0$ (b). $t=0$ corresponds to the instant of time when the laser pulse reaches the plasma.} \label{fig:SPWevolution} \end{center} \end{figure} In Fig. \ref{fig7new} we compare the optimal angle of incidence of the laser that optimize electron bunches energy propagating along the plasma surface (bars) found in the previous section for $h=0.1\lambda_0$ (in red) with the one found for $h=0.4\lambda_0$ (in green) keeping unchanged the other parameters. As we can see in the case $h=0.4\lambda_0$ the optimum angle for particle acceleration remains between $30^{\circ}$ and $36^{\circ}$ and coincides with the optimum angle for SPW excitation as presented in Fig. 3. As in section III the best laser incidence angle to excite highly energetic electron bunches stay roughly constant and does not scale with the laser strength. This is illustrated as an example by the simulations at $a_0=30$. In Fig. \ref{fig:SPWevolution} we plot the maximum $B_z$ field amplitude evolution in time for different values of laser incidence angle and $h=0.1\lambda_0$ (a) and $h=0.4\lambda_0$ (b). Comparing Fig. \ref{fig:SPWevolution}a, where $a_0=30$ and $h=0.1\lambda_0$, the time evolution of the field is quite similar to that observed in Fig. \ref{fig:SPWevolution01} where $a_0=20$ and $h=0.1\lambda_0$. However, when we increase the grating's depth, the value of the field amplitude is larger and the optimal angles ($31^{\circ}-33^{\circ}$) coincide with the optimal angles for electron acceleration in Fig. \ref{fig7new}b. As a consequence with the deeper grating we expect both that the electrons are mainly accelerated by the SPW and that the maximum energy gained by the electrons is higher than if the grating is shallow. \begin{figure}[ht] \begin{center} \includegraphics[width=8.cm]{fig8.pdf} \caption{Maximum value of gamma factor, $\gamma_f$ along the target's surface, observed at the end of simulation as a function of a function of the laser strength parameter $a_0$ for $n_0= 100n_c$, and $n_0=200n_c$ (case $h=0.1\lambda_0$ in red and $h=0.4\lambda_0$ in green). The dashed lines represents the general tendency of the results. }\label{figspec} \end{center} \end{figure} In Fig. \ref{figspec} we show the maximum value of the gamma factor, $\gamma_f$ along the target's surface, for the electrons observed at the end of simulation as a function of the laser strength parameter $a_0$, taking $\theta_{inc}=\theta^e_{opt}$ and the parameters used in the Fig. \ref{fig7new}. 
As expected we observe that the energy transfer is better when the gratings are deeper ($h = 0.4 \lambda_0$) than when they are shallow ($h =0.1 \lambda_0$) in the high-intense regime. The red dotted line is the function $\gamma_f=1+5.1a_0$ that fits the data when $h=0.1\lambda_0$ and the green dashed curve is the function $\gamma_f=1+9.3a_0$ that fits the data when $h=0.4\lambda_0$. A more detailed analysis of the electron dynamics can be inferred from their energy distributions as a function of the propagation angle and from their phase space ($p_y/m_e c$,$y/\lambda_0$). If $h=0.4\lambda_0$ and $\theta_{inc}=33^{\circ}$, a large amount of highly energetic electrons propagates along the surface $\phi_e=90^{\circ}$ (Fig. \ref{fig:particleacceleration2}(a)), and the phase space shows bunches distanced by a wavelength (Fig.~\ref{fig:particleacceleration2}(b)), consistent with the SPW acceleration mechanism.\\ % \begin{figure}[ht!] \begin{center} \includegraphics[width=3.9cm]{fig9a.pdf} \includegraphics[width=3.69cm]{fig9b.pdf}\\ \includegraphics[width=3.9cm]{fig9c.pdf} \includegraphics[width=3.69cm]{fig9d.pdf}\\ \includegraphics[width=3.9cm]{fig9e.pdf} \includegraphics[width=3.69cm]{fig9f.pdf}\\ \vspace{0.5cm} \includegraphics[width=8 cm]{fig9g.pdf}\\ \caption{For $\theta_{inc}=33^{\circ}$, $a_0= 30$, $n_0=200n_c$ and $h=0.4\lambda_0$, (a) electron energy distribution at $t=t_f$. The plasma surface is along $90^{\circ}$, the red arrow shows the direction of the incident laser beam and the green arrow the reflected one; (b) phase space ($p_y/m_e c$,$y/\lambda_0$) of the electrons in the simulation box; the panels (c) and(d) [(e) and (f)] represent the same as the panels (a) and (b) for $h=0.1\lambda_0$ and $\theta_{inc}=33^{\circ}$ [$\theta_{inc}=45^{\circ}$]; (g) spectrum of the electron bunches along the surface for the tree parameter sets discussed.} \label{fig:particleacceleration2} \end{center} \end{figure} This is very different from the case with $h=0.1\lambda_0$, and $\theta_{inc}=33^{\circ}$ reported in Fig. \ref{fig:particleacceleration2} (c) and (d) or $h=0.1\lambda_0$, and $\theta_{inc}=45^{\circ}$ reported in Fig. \ref{fig:particleacceleration2} (e) and (f). We observe for these lasts two parameters sets that the faster electrons are accelerated mainly along the direction of the incident and reflected laser beam and fewer electron are found propagating along the surface at $90^{\circ}$. Moreover a large amount of fast electrons are pushed inside the plasma. It is worth to point out that although the peak energy is reduced in this configuration, the laser plasma coupling is still large so that this configuration might be a way to enhance TNSA at the rear of the thin target \citep{heron20}. In such a limit, the SPW field when present (Fig. \ref{fig:particleacceleration2} (c) and (d)) is weak and the SPW wave is no longer the predominant acceleration mechanism. This might be attributed to the grating deformation due to laser pressure which prevents laser-SPW coupling. We can thus conclude that a deeper grating allows to recover the exciting of SPW in the ultra high intensity laser regime and acceleration along a preferential direction. This effect is evident in Fig. \ref{fig:particleacceleration2}(g) when comparing the electron's spectra (selecting only the ones emitted parallel to the target $\phi_e=90^{\circ}\pm 6^{\circ}$) for $h=0.1\lambda_0$ (in blue) and $h=0.4\lambda_0$ (in red), with $\theta_{inc}=33^{\circ}$ in both cases. 
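Coming back for a moment to the trend lines of Fig.~\ref{figspec}, they can be compared directly; the short sketch below (illustrative only, with the conversion to MeV added by us) makes the roughly factor-of-two gain of the deeper grating explicit:
\begin{verbatim}
# Empirical trend lines quoted in the text: gamma_f = 1 + 5.1*a0 for h = 0.1 lambda0
# and gamma_f = 1 + 9.3*a0 for h = 0.4 lambda0; 0.511 MeV is the electron rest energy.
for a0 in (10, 30, 50):
    g_shallow = 1.0 + 5.1*a0
    g_deep = 1.0 + 9.3*a0
    print(f"a0 = {a0:2d}: gamma_f ~ {g_shallow:5.0f} vs {g_deep:5.0f}"
          f" ({0.511*(g_shallow - 1):5.0f} vs {0.511*(g_deep - 1):5.0f} MeV),"
          f" ratio ~ {g_deep/g_shallow:.1f}")
\end{verbatim}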
The electron energy obtained when increasing the grating depth is increased by a factor of two for the deepest grating and the optimal angle. Instead, for $h=0.1\lambda_0$ the energy spectrum changes very little between $\theta_{inc}=33^{\circ}$ and $45^{\circ}$ (in green), even if, when comparing the phase space ($p_y/m_e c$,$y/\lambda_0$) for both incident angles [Fig.~\ref{fig:particleacceleration2} (d) and (f)], we observe a small signature of the SPW excitation (bunching of the phase space) that is lost at $45^{\circ}$. \begin{figure}[ht!] \begin{center} \includegraphics[width=3.9cm]{fig10a.pdf} \includegraphics[width=3.9cm]{fig10b.pdf}\\ \caption{Electron energy distribution at $t=t_f$, $\theta_{inc}=33^{\circ}$ and $h=0.4\lambda_0$, for $a_0=100$ (a) and $a_0=200$ (b). The plasma surface is along $90^{\circ}$, the red arrow indicates the direction of the incident laser beam, and the green arrow, the reflected one. Note that although the ratio $\gamma_f/a_0$ is about the same in both panels, $\gamma_f$ is about $800$ in (a) and $1600$ in (b).} \label{fig:temp} \end{center} \end{figure} To conclude this section, we verified that for $h=0.4\lambda_0$ the SPW is still excited even at significantly higher laser intensities. In Fig.~\ref{fig:temp} the electron emission spectra for two extreme laser conditions, (a) $a_0=100$ and (b) $200$, are shown. There, the plasma density is equal to $n_0=200n_c$. From the panels, we observe a large increase of the electron energy, reaching $\gamma_f/a_0\sim 7$--$8$ ($\gamma_f\approx 800$ for $a_0=100$ and $\gamma_f\approx1600$ for $a_0=200$), even if for the largest laser strength $a_0=200$ [Fig.~\ref{fig:temp} (b)] the angular spread of the electrons tends to increase. Our results show that, even in the very high-intensity regime of interaction, there is good evidence that SPW excitation and the consequent electron acceleration are still present when the diffraction grating is correctly chosen. However, they do not account for additional processes that may set in at extreme intensities, such as radiation reaction or quantum effects (like pair creation)~\cite{niel18}. These processes are under investigation and remain beyond the scope of this work. \section{Conclusion}\label{sec_conclusions} In this work, we considered a laser pulse impinging on an over-dense plasma whose surface presents a periodic modulation (grating), in order to generate large-amplitude Surface Plasma Waves (SPWs). Key parameters were obtained for optimising laser-plasma coupling in the ultra-relativistic regime ($\sim10^{22}$ W/cm$^2$). A systematic study as a function of the laser incidence angle and intensity, $a_0$, employing SMILEI Particle-In-Cell simulations, showed that at ultra-high laser intensities ($a_0\ge30$) the SPW resonance angle becomes roughly independent of $a_0$. A strong correlation was also observed between the optimum SPW excitation angle and the laser's angle of incidence that optimizes electron acceleration along the plasma surface. The production of highly energetic electron bunches is analysed, as well as the appropriate values of plasma density and surface shape to ensure SPW survival at ultra-high laser intensity. Furthermore, the parameter $n_0/(a_0 n_c)$ is shown to be crucial for describing laser-plasma coupling and SPW excitation, as it highlights the importance of considering higher density plasmas to maintain SPW excitation in the ultra-relativistic regime.
Finally, since a high-intensity laser illuminating the grating inevitably distorts it, increasing the grating depth provides a more robust condition for SPW excitation. This may be a way to obtain unprecedentedly high currents of energetic electrons, as well as radiation with interesting characteristics, thereby paving the way to new experiments on forthcoming multi-petawatt laser systems. \vspace{1cm} \section*{Acknowledgement} P.S.K. was supported by the CEA NUMERICS program, which has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 800945. Financial support from Grant No. ANR-11-IDEX-0004-02 Plas@Par is acknowledged. Simulations were performed on the Irene-SKL machine hosted at TGCC-France, using High Performance Computing resources from GENCI-TGCC (Grant No. 2018-x2016057678). We acknowledge PRACE for awarding us access to Irene-SKL. Technical support from the SMILEI dev-team was also greatly appreciated.
\section{Conclusions \label{sec6}} While the Minimal Supersymmetric Standard Model (MSSM) with the most general flavour structure has been extensively studied in the context of collider signatures, the possibility of squark flavour mixing has so far not been considered for observables related to dark matter. However, as the LHC is running and more precise cosmological and astrophysical experiments are taking data or being set up, it will become more and more important to take such effects into account when studying the interplay between collider and astroparticle phenomenology. In the case of neutralino dark matter in supersymmetric theories, flavour violating couplings can influence the (co)annihilation cross section, and in consequence the predicted relic density, in different ways. The strongest effect is due to the modified mass spectrum of the squarks, the lightest squark becoming lighter with increasing flavour non-diagonal terms in the mass matrices. The exchange of squarks in neutralino pair annihilation as well as coannihilation with a squark then become important. Another effect comes from the fact that the couplings of neutralinos to squarks are no longer diagonal in flavour space. This opens new (co)annihilation channels, such as $\tilde{\chi}_1^0\tilde{\chi}_1^0 \to c\bar{t}$ or $\tilde{\chi}_1^0\tilde{u}_1 \to ch^0 (cg, cZ^0)$, which can give sizeable contributions to the annihilation cross section already for moderate flavour violation parameters. Considering flavour mixing in the sector of right-handed up-type squarks, we have shown that the modified squark masses and flavour contents have a strong impact on the (co)annihilation modes. New annihilation channels are opened due to the presence of non-diagonal couplings in flavour space. These new contributions may become numerically important in particular regions of the parameter space. As a consequence, new regions that are compatible with the relic density constraint open up. We emphasize that these new regions are not excluded by the rather strong constraints imposed by flavour physics observables. Moreover, effects of lepton flavour violation on neutralino dark matter have recently been discussed in Ref.\ \cite{LFV_CoAnn}. A brief study of the corresponding LHC phenomenology has shown that the clean signature $pp \to c\bar{t} + E_{\rm T}^{\rm miss}$, which has recently been studied in Ref.\ \cite{NMFV_Squark2}, can only be realized for moderate flavour mixing, where the lightest squark mass does not come too close to the neutralino mass. For rather large flavour violation, however, this channel is closed and NMFV-signatures arise through the production and decay of gluinos rather than squarks. Such signatures include the production of a top quark in association with charm-jets and may yield a few hundred events at the LHC with $\sqrt{s}=14$ TeV and an integrated luminosity of 100 fb$^{-1}$. Since the annihilation cross section of the neutralino also governs the particle fluxes, flavour violating couplings would also have an impact on indirect detection of dark matter. In particular, additional $\tilde{c}$--$\tilde{t}$ mixing, as discussed in this paper, would change the spectrum of photons originating from dark matter annihilation. The impact of flavour mixing is, however, expected to be very small compared to the astrophysical uncertainties in this context. Direct dark matter detection might also be influenced by the discussed flavour mixing.
Here, the scattering of a neutralino off a nucleus can proceed through squark exchange, such that the charm-content in the nucleon becomes relevant if the lightest squark is a mixture of stop and scharm. In the same way, flavour mixing in the sector of down-type squarks would increase the importance of the strange quark in the nucleon. Detailed studies of direct or indirect detection of dark matter in the context of flavour violation are, however, beyond the scope of this work. \begin{acknowledgments} The authors would like to thank W.~Porod, A.~Pukhov and F.~Staub for their help concerning the computer programs used and S.~Kraml for helpful discussions. This work is supported by the Helmholtz Alliance for Astroparticle Physics and by the Landes-Exzellenzinitiative Hamburg. The work of Q.L.B.\ is supported by a Ph.D.\ grant of the French Ministry for Education and Research. \end{acknowledgments} \section{Introduction \label{sec1}} Among the numerous extensions of the standard model of particle physics, supersymmetry ranks among the most popular ones. In particular, the Minimal Supersymmetric Standard Model (MSSM) is probably the best studied scenario of new physics. It allows one to cure the hierarchy problem by stabilizing the Higgs mass and leads to gauge coupling unification. Moreover, it includes promising candidates for dark matter, whose presence remains the most compelling observational evidence for physics beyond the standard model. Nevertheless, several open questions remain, e.g., concerning the flavour structure of the theory. While models with minimal flavour violation (MFV) \cite{Hall, D'Ambrosio, Altmannshofer} assume that the mechanism of flavour violation is the same as in the standard model, the framework of non-minimal flavour violation (NMFV) allows for new sources of flavour mixing, depending on the exact mechanism of supersymmetry breaking. In the former case, the rotation of the Yukawa couplings from gauge to mass eigenstates remains the only source of flavour violation, and thus all flavour-violating interactions are parameterized through the CKM- and PMNS-matrices as in the standard model. For NMFV, the terms originating from the additional sources are not related to these matrices, such that they are considered as additional parameters at the SUSY scale. In recent years, supersymmetric scenarios beyond minimal flavour violation have received considerable attention in the community, especially in the context of signatures at current or future colliders. Concerning (s)quark flavour violation, the production and subsequent decays of squarks and gluinos at the Large Hadron Collider (LHC) have been studied, e.g., in Refs.\ \cite{NMFV_mSUGRA, NMFV_GMSB, HurthPorod, WienGluino, NMFV_Squark1, NMFV_Squark2}. Apart from the production of superpartners at colliders, the flavour-violating terms also appear in the (co)annihilation cross section of the neutralino, which is needed in the calculation of its relic density for a given scenario. In the case of neutralino pair annihilation into quarks, the squarks appear as internal propagators. Additional flavour-violating terms can then increase the relative contributions of these diagrams since the mass splitting of the squarks is modified. Moreover, NMFV allows for efficient annihilation into final states that are forbidden in the case of MFV. Flavour violating effects are also important in the case of coannihilations of a neutralino with a squark, since the latter is then an external particle.
The importance of such processes crucially depends on the mass difference of neutralino and squark. The increased mass splitting of the squarks can therefore have an important impact on coannihilation processes. Finally, also in this case new final states are opened, leading to additional coannihilation channels. Recently, the impact of non-minimal flavour violation in the sector of sleptons on the coannihilation of a neutralino with a slepton has been discussed in Ref.\ \cite{LFV_CoAnn}. The aim of the present paper is to provide a study of quark flavour violation in the context of neutralino dark matter. In this context, possible flavour-mixing effects are generally not considered in the literature. We present a detailed analysis of neutralino pair annihilation and neutralino-squark coannihilation in the MSSM beyond MFV. In Sec.\ \ref{sec2}, we will briefly introduce the MSSM with NMFV in the sector of squarks and discuss its parameterization. The role of generation mixing in the context of neutralino (co)annihilations is discussed in detail in Sec.\ \ref{sec3}. Sec.\ \ref{sec4} is then devoted to numerical examples in the context of neutralino (co)annihilation and its relic density. A discussion of LHC phenomenology for the corresponding scenarios follows in Sec.\ \ref{sec5}. Finally, conclusions are given in Sec.\ \ref{sec6}. \section{LHC phenomenology \label{sec5}} Finally, we discuss the collider phenomenology corresponding to CMSSM scenarios that feature new annihilation or coannihilation channels induced through flavour violating elements. Typical signatures for quark flavour violation in the context of squark production at hadron colliders have been discussed in Refs.\ \cite{HurthPorod, NMFV_Squark1, NMFV_Squark2}. A particularly promising process is the production of the lightest squark-antisquark pair, and their subsequent decay into charm- and top-quarks. The rather clean signature $pp \to \tilde{u}_1 \tilde{u}_1^* \to c \bar{t} (t \bar{c}) \tilde{\chi}_1^0 \tilde{\chi}_1^0$, where the neutralinos would manifest as sizeable missing energy, might lead to up to $10^4$ events at the LHC with $\sqrt{s}=14$ TeV and an integrated luminosity of 100 fb$^{-1}$ \cite{NMFV_Squark2}. Alternatively, new contributions to the decay $t\to c\gamma$ can increase its branching ratio to as much as 10$^{-6}$ and thus render it detectable, e.g., in $t\bar{t}$ production at the LHC, as has been pointed out in Refs.\ \cite{deDivitiis1997, Delepine2004}. In the case in which $\delta_{23}^{u,LR}$ is strongly constrained, e.g.\ by the $B_s$ mixing, and only $\delta_{23}^{u,RR}$ is large, several years of high-luminosity operation might, however, be required. In order to evaluate the production of squarks and gluinos at the LHC, we have computed the relevant cross sections using the Monte-Carlo package {\tt WHIZARD 1.95} together with the associated matrix element generator {\tt O'MEGA} \cite{WhizardOmega}, where the MSSM with the most general generation mixing as discussed in Sec.\ \ref{sec2} has been implemented \cite{NMFV_Squark2}. We have employed the {\tt CTEQ6L} \cite{CTEQ} set for the parton distribution functions, the factorization scale being set to the average of the produced masses. Finally, the branching ratios of squarks and gluinos have been obtained using {\tt SPheno} \cite{SPheno}. In the first graph of Fig.\ \ref{fig:LHC}, we show the obtained dominant production modes of squarks and gluinos at the LHC with $\sqrt{s}=14$ TeV for the example scenario already discussed in Sec.\ \ref{sec4a}. 
In the case of MFV ($\delta^{u,\rm RR}_{23}=0$), gluino pair production is dominant due to the colour structure, while squark-antisquark production is the subdominant channel. Due to its lighter mass, the production of $\tilde{u}_1$ is preferred over that of $\tilde{u}_2$. We therefore focus on the production and decay of the lightest squark and the gluino. \begin{figure} \includegraphics[width=0.45\textwidth]{figures/xsec_su1_14TeV_P1.eps} \includegraphics[width=0.45\textwidth]{figures/signal_lhc_14TeV_P1.eps} \caption{Dominant production cross-sections of up-type squarks and gluinos (left) and resulting NMFV signal cross-section (right) for the CMSSM scenario $m_0=1500$ GeV, $m_{1/2}=680$ GeV, $A_0 = -500$ GeV, $\tan\beta=10$, and $\mu>0$ as a function of the NMFV-parameter $\delta^{u,\rm RR}_{23}$ at the LHC with $\sqrt{s}=14$ TeV. The notation in the legend of the right panel is according to Eqs.\ (\ref{eq:signal0}) and (\ref{eq:signal2}), taking into account all possible combinations of (s)quarks and anti(s)quarks.} \label{fig:LHC} \end{figure} Since the gluino couplings are flavour-diagonal, the gluino pair production cross section is practically independent of the NMFV-parameter $\delta^{u,\rm RR}_{23}$. In contrast, squark-antisquark pair production receives new contributions in a similar way as the neutralino pair annihilation discussed in Sec.\ \ref{sec3}. Since $\tilde{u}_1$ now has a sizeable $\tilde{c}$-admixture, initial states containing charm quarks can contribute. Moreover, the lighter squark in the $t$-channel propagator enhances this channel. Finally, the phase space is also increased due to the decreased squark mass. Taking into account all these effects, the production of $\tilde{u}_1\tilde{u}_1^*$ becomes the dominant channel for $\delta^{u,\rm RR}_{23} \gtrsim 0.5$ and reaches production cross sections of up to $10^3$ fb for $\delta^{u,\rm RR}_{23} \sim 0.95$. For lower mixing parameters, gluino pair production remains numerically most important. Similar arguments hold for the associated production of a gluino and a squark. For the same reasons as given above, the production of $\tilde{u}_1\tilde{g}$ is enhanced for large flavour mixing as compared to the MFV case. Note that, although the charge conjugated channel $pp \to \tilde{g}\tilde{u}_1^*$ is not shown in Fig.\ \ref{fig:LHC} (left), it is taken into account in the following calculation of event rates. The associated production of $\tilde{g}$ and heavier squarks is negligible in this context. The second lightest squark $\tilde{u}_2$ is almost exclusively $\tilde{t}_L$, such that its production remains insensitive to the discussed flavour mixing in the right-right sector. \begin{figure} \includegraphics[width=0.45\textwidth]{figures/BR_su1_duRR_P1} \includegraphics[width=0.45\textwidth]{figures/BR_gl_duRR_P1} \caption{Branching ratios of the lightest up-type squark (left) and the gluino (right) for the scenario of Fig.\ \ref{fig:LHC} as a function of the NMFV-parameter $\delta^{u,\rm RR}_{23}$.} \label{fig:BR} \end{figure} The branching ratios of squarks and gluinos are also affected by the flavour-violating elements, as can be seen from Fig.\ \ref{fig:BR}. Since the lightest squark $\tilde{u}_1$ is a pure stop-like state in the case of MFV, it dominantly decays into $\tilde{\chi}^+_1 b$ and $\tilde{\chi}^0_i t$ ($i=1,2,3,4$).
Note that the decay into $\tilde{\chi}_2^0$ is relatively small, since $\tilde{\chi}_2^0$ is almost purely wino-like and couples only to the tiny $\tilde{t}_L$-component of $\tilde{u}_1$. For increasing $\delta^{u,\rm RR}_{23}$, the $\tilde{c}_R$-content increases, and decays into final states including second generation quarks open up. Decays into heavier neutralinos are kinematically forbidden for $\delta^{u,\rm RR}_{23} \gtrsim 0.5$. Although $\tilde{t}_R$ remains the dominant flavour in $\tilde{u}_1$ (see Fig.\ \ref{fig:CMSSM4}), its decay into $\tilde{\chi}_1^0 t$ is closed for $\delta^{u,\rm RR}_{23} \gtrsim 0.9$, since the mass difference between squark and neutralino is smaller than the top mass (see also Fig.\ \ref{fig:CMSSM4}). The only remaining decay mode is then $\tilde{u}_1 \to \tilde{\chi}_1^0 c$. As a consequence, the NMFV signature \begin{equation} pp \to \tilde{u}_1 \tilde{u}_1^* \to c \bar{t}\,(t \bar{c})\,\tilde{\chi}_1^0 \tilde{\chi}_1^0 \label{eq:signal0} \end{equation} discussed in Ref.\ \cite{NMFV_Squark2} cannot be realized for large $\delta^{u,\rm RR}_{23} \gtrsim 0.9$. This channel can have sizeable event rates only for smaller flavour mixing parameters. This can be seen in the second graph of Fig.\ \ref{fig:LHC}, where we show the production cross section combined with the relevant branching ratios in order to estimate the signal rate at the LHC. For such important flavour mixing, a moderately sizeable signal rate can, however, stem from gluino pair and from gluino-squark production. In the former case, NMFV final states can be achieved through \begin{equation} pp \to \tilde{g} \tilde{g} \to c \tilde{u}_1 \, t \tilde{u}_1 \to c c c t \,\tilde{\chi}^0_1 \tilde{\chi}^0_1, \label{eq:signal1} \end{equation} where one of the produced gluinos decays into a top quark. As can be seen in Fig.\ \ref{fig:BR}, this decay mode remains allowed (and even dominant) for all values of $\delta^{u,\rm RR}_{23}$. The notation of the final state in Eq.\ (\ref{eq:signal1}) is understood to include all possible combinations of quarks and antiquarks. The second possibility, mediated through associated production of a squark and a gluino, gives rise to signal events of the type \begin{equation} pp \to \tilde{u}_1 \tilde{g} \to c \tilde{\chi}^0_1 \,t \tilde{u}_1 \to c c t \,\tilde{\chi}^0_1 \tilde{\chi}^0_1, \label{eq:signal2} \end{equation} where again the gluino decays into a top quark and the process is understood to include all possible combinations of (s)quarks and anti(s)quarks. In the second graph of Fig.\ \ref{fig:LHC}, we show the mentioned signal cross sections as a function of the NMFV-parameter $\delta^{u,\rm RR}_{23}$. As discussed above, the signature in Eq.\ (\ref{eq:signal0}) increases with $\delta^{u,\rm RR}_{23}$, but drops when $m_{\tilde{u}_1} - m_{\tilde{\chi}_1^0} < m_{\rm top}$, i.e.\ in the region where coannihilations with the lightest squark are most important. In this region, flavour violating signatures can be expected from the processes in Eqs.\ (\ref{eq:signal1}) and (\ref{eq:signal2}). They feature, however, cross sections that are smaller by about two orders of magnitude. For the LHC with $\sqrt{s}=14$ TeV and an integrated luminosity of 100 fb$^{-1}$, the strongest signal in Eq.\ (\ref{eq:signal0}) can in our example scenario lead to up to about $10^4$ events. As discussed in Ref.\ \cite{NMFV_Squark2}, this signature is rather clean and not subject to important backgrounds. 
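As a rough illustration of how such event numbers follow from a production cross section, the integrated luminosity, and the decay branching fractions, the following minimal sketch uses purely illustrative placeholder values of the order of those shown in Fig.~\ref{fig:LHC}; it is not a reproduction of our actual scan results.
\begin{verbatim}
# Back-of-the-envelope signal estimate: N = sigma * L * (branching factor).
# All numerical inputs below are illustrative placeholders.
def expected_events(sigma_fb, lumi_fb_inv, br_factor=1.0):
    """Expected number of signal events before detector effects."""
    return sigma_fb * lumi_fb_inv * br_factor

lumi = 100.0   # integrated luminosity in fb^-1, as assumed in the text

# Squark pair production of O(10^3) fb with a combined branching factor
# of O(0.1) into the c/t + neutralino final states:
print(expected_events(1.0e3, lumi, 0.1))    # -> O(10^4) events
# A gluino-induced channel with a signal cross section of a few fb:
print(expected_events(3.0, lumi))           # -> a few hundred events
\end{verbatim}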
The region where the correct relic density is achieved through efficient coannihilation, however, forbids this particular channel. The other potentially interesting channels may then lead to a few hundred events each. Note that the event rate, the exact dependence on the NMFV-parameters, as well as the dark matter relic density are scenario-dependent. We can, however, expect a similar behaviour for other scenarios in the NMFV-MSSM. \section{The MSSM beyond minimal flavour violation \label{sec2}} In the standard model, the only source of flavour violation is the Yukawa sector, since the diagonalization of the Yukawa couplings leads to a mismatch between flavour and mass eigenstates of quarks and leptons. The flavour structure of the quark sector is very well described by the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which only appears in charged currents, while flavour changing neutral currents are strongly suppressed. In supersymmetric theories with minimal flavour violation (MFV), the Yukawa matrices remain the only source of flavour violation, so that all flavour violating interactions of squarks are also related to the CKM-matrix. However, new sources of flavour violation may be present in supersymmetric models, especially if they are embedded in a grand unification framework. Depending on the exact realization and the involved representations, specific relations to the Yukawa matrices can lead to flavour non-diagonal entries in the soft-breaking terms. These are not related to the CKM-matrix, and the corresponding framework is in consequence referred to as non-minimal flavour violation (NMFV). Considering the most general flavour structure, the squark mass matrices at the electroweak scale take the form \begin{equation} {\cal M}^2_{\tilde{q}} ~=~ \left( \begin{array}{cc} {\cal M}^2_{\tilde{q},\rm LL} & {\cal M}^2_{\tilde{q},\rm LR} \\[2mm] {\cal M}^2_{\tilde{q},\rm RL} & {\cal M}^2_{\tilde{q},\rm RR} \end{array} \right) \label{eq:massmatrix} \end{equation} for $q=u,d$, respectively. Their diagonal blocks are given by \begin{eqnarray} {\cal M}^2_{\tilde{d},{\rm RR}} &=& M^2_{\tilde{D}} + m^2_d + e_d m_Z^2 \sin^2\theta_W \cos 2\beta, \label{eq:RR}\\ {\cal M}^2_{\tilde{d},{\rm LL}} &=& M^2_{\tilde{Q}} + m^2_d + m_Z^2 \cos 2\beta (I_d-e_d \sin^2\theta_W),\\ {\cal M}^2_{\tilde{u},{\rm RR}} &=& M^2_{\tilde{U}} + m^2_u + e_u m_Z^2 \sin^2\theta_W \cos 2\beta,\\ {\cal M}^2_{\tilde{u},{\rm LL}} &=& V_{\rm CKM} M^2_{\tilde{Q}} V_{\rm CKM}^{\dag} + m^2_u + m_Z^2 \cos 2\beta (I_u-e_u \sin^2\theta_W), \end{eqnarray} where $M_{\tilde{Q}}$, $M_{\tilde{U}}$, and $M_{\tilde{D}}$ are the soft-breaking mass terms of the squarks. The diagonal mass matrices of up- and down-type quarks are denoted $m_u$ and $m_d$. Due to the SU(2) symmetry, the left-left entries are related through the CKM-matrix $V_{\rm CKM}$. The above expressions also involve the mass $m_Z$ of the Z-boson, the fractional electric charge $e_q$ and the weak isospin $I_q$ of the (s)quark, the weak mixing angle $\theta_W$, and the Higgs-mixing parameter $\beta$ defined through the ratio of the vacuum expectation values of the two Higgs doublets, $\tan\beta=v_u/v_d$.
The off-diagonal blocks of the matrix in Eq.\ (\ref{eq:massmatrix}) are given by \begin{eqnarray} {\cal M}^2_{\tilde{u},{\rm RL}} ~=~ \big({\cal M}^2_{\tilde{u},{\rm LR}}\big)^{\dag} &=& \frac{v_u}{\sqrt{2}} T_U - \mu^* m_u \cot\beta ,\label{eq:RLu} \\ {\cal M}^2_{\tilde{d},{\rm RL}} ~=~ \big({\cal M}^2_{\tilde{d},{\rm LR}}\big)^{\dag} &=& \frac{v_d}{\sqrt{2}} T_D - \mu^* m_d \tan\beta , \label{eq:RLd} \end{eqnarray} where $\mu$ is the Higgs mass parameter. The trilinear matrices $T_{U,D}$ are related to the soft-breaking matrices $A_{u,d}$ and the respective Yukawa matrices $Y_{u,d}$ through $\left( T_{U,D} \right)_{ij} = \left( A_{u,d} \right)_{ij} \left( Y_{u,d} \right)_{ij}$. At the GUT scale, the usual CMSSM condition $\left( A_u \right)_{33} = \left( A_d \right)_{33} = A_0$ applies, and the numerical values for $A_{u,d}$ at the SUSY scale are obtained through renormalization group running. All parameters appearing in Eqs.\ (\ref{eq:RR}) to (\ref{eq:RLd}) are understood to be in the super-CKM basis, where the neutral currents are flavour-diagonal and the quark (but not the squark) fields are in the mass eigenstate basis \cite{Gabbiani, Hagelin}. In order to have a scenario-independent and dimensionless parameterization of flavour-mixing, the off-diagonal entries are usually normalized to the diagonal ones according to \begin{eqnarray} \delta^{\rm LL}_{ij} &=& \big(M^2_{\tilde{Q}}\big)_{ij}\,/\,\sqrt{ \big(M^2_{\tilde{Q}}\big)_{ii} \big(M^2_{\tilde{Q}}\big)_{jj} }, \\ \delta^{u,\rm RR}_{ij} &=& \big(M^2_{\tilde{U}}\big)_{ij}\,/\,\sqrt{ \big(M^2_{\tilde{U}}\big)_{ii} \big(M^2_{\tilde{U}}\big)_{jj} }, \\ \delta^{d,\rm RR}_{ij} &=& \big(M^2_{\tilde{D}}\big)_{ij}\,/\,\sqrt{ \big(M^2_{\tilde{D}}\big)_{ii} \big(M^2_{\tilde{D}}\big)_{jj} }, \\ \delta^{u,\rm RL}_{ij} &=& \frac{v_u}{\sqrt{2}}\big(T_U\big)_{ij}\,/\,\sqrt{ \big(M^2_{\tilde{Q}}\big)_{ii} \big(M^2_{\tilde{U}}\big)_{jj} }, \\ \delta^{d,\rm RL}_{ij} &=& \frac{v_d}{\sqrt{2}}\big(T_D\big)_{ij}\,/\,\sqrt{ \big(M^2_{\tilde{Q}}\big)_{ii} \big(M^2_{\tilde{D}}\big)_{jj} }, \\ \delta^{u,\rm LR}_{ij} &=& \frac{v_u}{\sqrt{2}}\big(T_U^{\dag}\big)_{ij}\,/\,\sqrt{ \big(M^2_{\tilde{U}}\big)_{ii} \big(M^2_{\tilde{Q}}\big)_{jj} }, \\ \delta^{d,\rm LR}_{ij} &=& \frac{v_d}{\sqrt{2}}\big(T_D^{\dag}\big)_{ij}\,/\,\sqrt{ \big(M^2_{\tilde{D}}\big)_{ii} \big(M^2_{\tilde{Q}}\big)_{jj} }. \end{eqnarray} The normalization factor is defined in terms of the corresponding diagonal elements of the soft-breaking matrices. We emphasize that the following numerical analysis is based on the diagonalisation of the full $6\times 6$ mass matrices. This is realized by introducing two rotation matrices, such that \begin{equation} {\cal R}_{\tilde{q}} {\cal M}^2_{\tilde{q}} {\cal R}_{\tilde{q}}^{\dag} ~=~ {\rm diag}\left( m^2_{\tilde{q}_1}, \dots, m^2_{\tilde{q}_6} \right) \label{eq:massdiag} \end{equation} with the mass order $m_{\tilde{q}_1} \le \dots \le m_{\tilde{q}_6}$ for $q=u,d$, respectively. The rotation matrices appear in the couplings of squarks with other particles, and, in consequence, the flavour-violating elements will influence observables like decay widths or production and annihilation cross sections. Analytical expressions for couplings including squark generation mixing can, e.g., be found in Refs.\ \cite{NMFV_mSUGRA, NMFV_GMSB, NMFV_Squark1}. We shall discuss the relevant couplings for our analysis in more detail in Sec.\ \ref{sec3}. A large variety of experimental measurements puts constraints on the parameter space of new physics models. 
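Before turning to these constraints, the following minimal numerical sketch (with purely illustrative soft masses, not a point of our scans) shows how a non-zero $\delta^{u,\rm RR}_{23}$ enters the $\tilde{c}_R$--$\tilde{t}_R$ block of the mass matrix and how its diagonalization, in the spirit of Eq.\ (\ref{eq:massdiag}), splits and mixes the mass eigenstates.
\begin{verbatim}
import numpy as np

# Illustrative 2x2 restriction of the up-type squark mass matrix to the
# (c_R, t_R) subspace in the super-CKM basis; all entries in GeV^2.
M_U22, M_U33 = 1500.0**2, 1000.0**2      # diagonal soft masses squared
delta_RR_23  = 0.9                        # normalized off-diagonal entry

off_diag = delta_RR_23 * np.sqrt(M_U22 * M_U33)
M2 = np.array([[M_U22,    off_diag],
               [off_diag, M_U33   ]])

# eigh returns eigenvalues in ascending order; the eigenvectors play the
# role of (the relevant block of) the rotation matrix in Eq. (massdiag).
masses2, R = np.linalg.eigh(M2)
m1, m2 = np.sqrt(masses2)
print(f"m_u1 = {m1:.0f} GeV, m_u2 = {m2:.0f} GeV")
print("flavour content of the lightest state (|c_R|^2, |t_R|^2):",
      np.round(R[:, 0]**2, 2))
\end{verbatim}
Increasing $\delta^{u,\rm RR}_{23}$ drives the lighter eigenvalue far below the diagonal entries and gives the lightest state a sizeable charm admixture, which is precisely the effect exploited in the (co)annihilation analysis below.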
Below, we summarize all the constraints that are considered (at the 95\% confidence level) in this study. The most important one is naturally the relic density of cold dark matter. Combining data from the WMAP satellite and other cosmological measurements, the relic density of dark matter in the universe is constrained to \cite{WMAP} \begin{equation} \Omega_{\rm CDM}h^2 = 0.1126 \pm 0.0036, \label{eq:WMAP} \end{equation} where $h$ denotes the present Hubble expansion rate $H_0$ in units of 100 km s$^{-1}$\,Mpc$^{-1}$. Furthermore, searches for superpartners at LEP and the Tevatron lead to the following mass limits for Higgs bosons, neutralinos, charginos, squarks, and gluinos: $m_{h^0} > 114.4$ GeV, $m_{\tilde{\chi}^0_1} > 46$ GeV, $m_{\tilde{\chi}^{\pm}_1} > 94$ GeV, $m_{\tilde{t}_1} > 96$ GeV, $m_{\tilde{g}} > 308$ GeV \cite{PDG}. Moreover, recent results from the Large Hadron Collider (LHC) lead to more stringent limits for squarks and gluinos within the constrained MSSM \cite{LHClimits}. These limits are based on the hypothesis of minimal flavour violation, so that we do not take them into account explicitly in the present study. Note, however, that the scenarios considered in the following feature rather heavy gluinos. For the lightest Higgs boson, we require $m_{h^0} > 111.4$~GeV, taking into account a theoretical uncertainty of 3 GeV \cite{SPheno}. Moreover, precision measurements in the sector of D-, B-, and K-mesons constrain some of the flavour-violating elements in the mass matrices. In particular, flavour mixing involving the first generation of squarks is severely limited \cite{Gabbiani, Hagelin, Ciuchini}. We therefore focus on flavour mixing between the second and third generation squarks. The most relevant constraints on such flavour mixing are listed in Tab.\ \ref{tab1} together with the current experimental measurements and the theoretical error estimates. If not indicated otherwise, they are taken from Refs.\ \cite{PDG, HFAG}. They include branching ratios of rare decays, B-meson oscillation measurements, the electroweak $\rho$-parameter, and the anomalous magnetic moment of the muon. For the latter, taking into account recent calculations which bring the standard model theoretical expectation closer to the experimentally measured value \cite{Bodenstein}, we use only the upper bound given in \cite{PDG} as a constraint. In the following study, the most important limits are imposed through the precise measurements of the rare decay $b\to s\gamma$ and the B-meson oscillation parameter $\Delta M_{B_s}$. \begin{table} \caption{Experimental constraints on the MSSM parameter space, in particular on quark flavour violating elements.} \bigskip \begin{tabular}{|c|ccc|} \hline & Exp.\ value & Exp.\ error & Theor.\ uncertainty \\ \hline $10^4 \times {\rm BR}(b\to s\gamma)$ & $3.55$ & $\pm 0.26$ & $\pm 0.23$ \cite{bsgNLO}\\ $10^8 \times {\rm BR}(B_s\to \mu^+ \mu^-)$ & $<5.6$ \cite{LHCb}& & \\ $\Delta M_{B_s}$ [ps$^{-1}$] & $17.77$ & $\pm 0.12$ & $\pm 3.3$ \cite{Deltambs} \\ $\Delta \rho$ &$<0.0012$ \cite{Deltarho}& & \\ $10^{11} \times \Delta a_{\mu}$ &$255$& $\pm 80$ & \\ \hline \end{tabular} \label{tab1} \end{table} \section{Numerical analysis \label{sec4}} The following numerical analyses are mainly based on the constrained MSSM with the five parameters $m_0$, $m_{1/2}$, $A_0$, $\tan\beta$, and sgn($\mu$). We also consider variants of this model featuring non-universal Higgs or gaugino masses.
Starting from the high-scale parameters, the soft-breaking terms at the scale $Q=1$~TeV \cite{SPA} are obtained through renormalization group running using the public program {\tt SPheno~3} \cite{SPheno}. At the same scale, we introduce the non-diagonal entries in the squark mass matrices as discussed in Sec.\ \ref{sec2}. The physical mass spectrum is then calculated again using {\tt SPheno}, which takes into account the general flavour structure. The same code is also used for the evaluation of the constraining observables mentioned in Sec.\ \ref{sec2}, again taking into account squark generation mixing. For the standard model parameters, we refer the reader to Ref.\ \cite{PDG}. The pole mass of the top-quark is taken to be $m_{\rm top} = 173.1$~GeV according to recent measurements from D0 and CDF \cite{TopMass}. The CKM-matrix is taken in the usual Wolfenstein parametrization with the recent values $\lambda=0.2253$, $A=0.808$, $\bar{\rho}=0.132$, and $\bar{\eta}=0.341$ \cite{PDG}. Making use of the SUSY Les Houches Accord \cite{SLHA}, the mass spectrum and related mixing parameters are transferred to the public program {\tt micrOMEGAs~2.4} \cite{micrOMEGAs} in order to evaluate the relic density of the neutralino. The calculation of the annihilation cross section is done by the program {\tt CalcHEP} \cite{CalcHEP}, where we have implemented the MSSM with squark generation mixing as discussed in Sec.\ \ref{sec2}. The corresponding model files have been obtained using the package {\tt SARAH} \cite{SARAH}. We also include important effects from the running strong coupling constant and running quark masses, as they are also included in the default implementation of the MSSM in {\tt micrOMEGAs} / {\tt CalcHEP}. \subsection{Constrained MSSM \label{sec4a}} In order to illustrate the numerical influence of flavour violating elements, we start by analyzing the neutralino relic density within the constrained MSSM (CMSSM), where we allow for flavour violation between the second and third generation of up-type squarks in the right-right chiral sector. In Fig.\ \ref{fig:CMSSM1}, we show typical scans of the $m_0$-$m_{1/2}$ plane for fixed values of $A_0 = -500$ GeV and $\tan\beta=10$ and for positive values of $\mu$. The cosmologically favoured region of parameter space according to Eq.\ (\ref{eq:WMAP}) together with the relevant constraints discussed in Sec.\ \ref{sec2} are shown for the case of minimal flavour violation (MFV, $\delta^{u,\rm RR}_{23} = 0$) and for the case of important off-diagonal elements, $\delta^{u,\rm RR}_{23} = 0.98$. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/Fig3_left} \includegraphics[width=0.45\textwidth]{figures/Fig3_right} \end{center} \caption{Cosmologically favoured region and related exclusion limits in the ($m_0$, $m_{1/2}$) plane of the CMSSM for $\delta^{u,\rm RR}_{23} = 0$ (left) and $\delta^{u,\rm RR}_{23} = 0.98$ (right).} \label{fig:CMSSM1} \end{figure} In the case of MFV, the most stringent constraints on this parameter plane are due to a charged dark matter candidate (low $m_0$), tachyonic solutions of the renormalization group equations (high $m_0$ and low $m_{1/2}$) as well as the constraints from $b\to s\gamma$ and the lightest Higgs mass (low mass region). 
The cosmologically favoured region of parameter space is divided into several distinct regions: the so-called focus point region (high $m_0$, not visible here), the resonance of the light Higgs boson (low $m_{1/2}$ and moderate $m_0$), and the coannihilation region (close to the exclusion due to a charged dark matter candidate), where the neutralino mass is close to the stau mass. In the corresponding figure for the NMFV-case, we depict the same constraints together with the relative contribution from new (co)annihilation channels as discussed in Sec.\ \ref{sec3}. In this case, the new channels are neutralino pair annihilation into a mixed charm-top final state and coannihilation of the neutralino with the lightest squark $\tilde{u}_1$. In the corresponding coannihilation region ($m_{1/2} \gtrsim 450$ GeV), where the relic density constraint is fulfilled, the mass difference between the lightest squark and the neutralino is about $30$ GeV, as can be seen from the left panel of Fig.\ \ref{fig:CMSSM2}, which shows the cosmologically favoured regions of parameter space in the plane of the physical masses. The dominant annihilation processes are then $\tilde{\chi}^0_1 \tilde{u}_1 \to g t$ ($30 \%$) and $\tilde{u}_1 \tilde{u}_1 \to g g$ ($25 \%$). Two other important processes are neutralino annihilation into pairs of top quarks ($10 \%$) and $\tilde{\chi}^0_1 \tilde{u}_1 \to g c$ ($15 \%$). Note that the presence of a charm quark in the final state is a genuine effect of flavour violation. Indeed, as a consequence of the off-diagonal elements in the squark mass matrices, the lightest up-type squark is here a mixture of $\tilde{t}_R$ and $\tilde{c}_R$ (with a small admixture of $\tilde{t}_L$), opening up the (co)annihilation into charm quarks. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/Fig4_left} \includegraphics[width=0.45\textwidth]{figures/Fig4_right} \end{center} \caption{Cosmologically favoured region and related exclusion limits for $\delta^{u,\rm RR}_{23} = 0.98$ in the ($m_{\tilde{\chi}^0_1}$, $m_{\tilde{u}_1}-m_{\tilde{\chi}^0_1}$) plane for fixed $A_0=-500$~GeV and $\tan\beta=10$ (left) and in the ($\delta^{u,\rm RR}_{23}$, $A_0$) plane for fixed $m_0=200$ GeV and $m_{1/2}=400$ GeV (right).} \label{fig:CMSSM2} \end{figure} For lower masses (e.g.\ $m_0 \sim 200$ GeV and $m_{1/2} \sim 400$ GeV), coannihilation processes such as $\tilde{\chi}^0_1 \tilde{u}_1 \to g t/c$ are still important ($20 \%$). However, the squark now being much lighter ($m_{\tilde{u}_1} \sim 190$ GeV), the squark pair annihilation $\tilde{u}_1 \tilde{u}_1 \to gg$ is subdominant. Moreover, the neutralino mass of $m_{\tilde{\chi}_1^0} \sim 160$ GeV (see Fig.\ \ref{fig:CMSSM2} left) forbids annihilation into top quark pairs. As a consequence, the flavour violating process $\tilde{\chi}^0_1 \tilde{\chi}^0_1 \to t \bar{c} (c \bar{t})$, which is kinematically allowed and enhanced by the rather light squark in the $t$-channel propagator, becomes important ($40 \%$). This is represented by the green area in the left part of the plot. Notice the cut at $m_{1/2} \approx 420$ GeV, which corresponds to $m_{\tilde{\chi}^0_1} \approx m_t$. For $m_{\tilde{\chi}^0_1} > m_t$, neutralino annihilation into top quark pairs is kinematically allowed, and the $t\bar{c} (c\bar{t})$ final state is suppressed. This can also be seen in relation to the physical neutralino and squark masses in the left panel of Fig.\ \ref{fig:CMSSM2}.
For low $m_{1/2}$ but large $m_0$, the squark is heavier, so that coannihilation is not relevant and neutralino annihilation into $t\bar{c} (c\bar{t})$ is less important. Therefore, even if the relative contribution of this channel is still important, its absolute contribution is not large enough to satisfy the relic density constraint. In the region excluded by $\textnormal{BR}(b \to s \gamma)$, most of the deviation from the standard model value comes from large negative chargino contributions due to the smallness of the stop and/or chargino mass. There is, however, no significant effect coming from the flavour violating parameter $\delta^{u,\rm RR}_{23}$, since $\textnormal{BR}(b \to s \gamma)$ constrains mainly flavour violation in the left-left sector. Let us now discuss the interplay of helicity mixing and additional flavour mixing. The former is induced through the trilinear matrices $T_{U}$ (see Eq.\ (\ref{eq:RLu})) and thus the GUT-scale parameter $A_0$, while the latter is included at the electroweak scale through the parameter $\delta^{u,\rm RR}_{23}$. In the case of MFV, i.e.\ for $\delta^{u,\rm RR}_{23}=0$, a rather large $|A_0|$ is needed in order to decrease the stop mass close to the neutralino mass and therefore allow for efficient coannihilation. For sizeable additional flavour mixing, the coannihilation is important already for lower values of $A_0$, since the squark mass splitting is then increased by the off-diagonal elements in the mass matrix. This is illustrated in the right graph of Fig.\ \ref{fig:CMSSM2}, where the constraints, cosmologically favoured regions, and different contributions to the annihilation cross section are shown in the ($A_0$,$\delta^{u,\rm RR}_{23}$) plane. The mass splitting of the squarks depends strongly on both of these parameters, which therefore play complementary roles in lowering the light stop mass. As a consequence, as explained above, one of these parameters has to be large in order to allow for an important coannihilation contribution. On the other hand, the flavour violating effects are only related to $\delta^{u,\rm RR}_{23}$. Therefore, the flavour violating neutralino annihilation processes depend mainly on this parameter. The only possibility to simultaneously satisfy the relic density and BR$(b \to s \gamma)$ constraints is a very large $\delta^{u,\rm RR}_{23}$ together with a rather low $A_0$. This is explained by the strong dependence of BR$(b \to s \gamma)$ on the squark mass spectrum, and therefore on $A_0$. On the contrary, and as explained above, BR$(b \to s \gamma)$ does not depend on flavour mixing among right-handed up-type squarks, and the mass effects become important only for very large values of $\delta^{u,\rm RR}_{23}$. It has been checked that the other constraints described in Tab.\ \ref{tab1} are fulfilled in the whole parameter space shown in Figs.\ \ref{fig:CMSSM1} and \ref{fig:CMSSM2}. Moreover, the calculated spectrum is compatible with the mass limits given in Sec.\ \ref{sec2}, except for the stop in some regions where it is the LSP (i.e.\ regions that are already excluded).
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/MSUGRA8} \end{center} \caption{Cosmologically favoured region and related exclusion limits in the $(\delta_{23}^{u,RR},\delta_{23}^{u,LR})$ plane for fixed $m_0=200$ GeV, $m_{1/2}=400$ GeV, and $A_0=-500$ GeV.} \label{fig:CMSSM2_2} \end{figure} Next, we relax the assumption that only the parameter $\delta_{23}^{u,RR}$ is large, {\em i.e.}\ of ${\cal O}(1)$, while all others are small, which might not be very natural. We therefore show in Fig.\ \ref{fig:CMSSM2_2} the cosmologically favoured region and related exclusion limits in the $(\delta_{23}^{u,RR}, \delta_{23}^{u,LR})$ plane for fixed $m_0=200$ GeV, $m_{1/2}=400$ GeV, and $A_0=-500$ GeV. We observe that the second flavour-violating parameter $\delta_{23}^{u,LR}$ can reach values up to $0.15$ before being constrained by the lower Higgs mass bound of $111.4$ GeV. Similarly, the RL and LL parameters (not shown) are restricted by the FCNC process $b\to s\gamma$ to values below $0.15$ and $0.1$, respectively, as would be the LR parameter if one applied this limit at the two (not three) sigma level. In Fig.\ \ref{fig:CMSSM3} we show, for a given parameter point, the neutralino relic density and the contributing processes as a function of the flavour-violation parameter $\delta^{u,\rm RR}_{23}$. While in the case of MFV this scenario is cosmologically strongly disfavoured, with $\Omega_{\tilde{\chi}_1^0}h^2 \gtrsim 20$, the relic density decreases with increasing flavour mixing and reaches the favoured value of $\Omega_{\tilde{\chi}_1^0}h^2 \approx 0.11$ for $\delta^{u,\rm RR}_{23} \sim 0.98$. For low values of $\delta^{u,\rm RR}_{23}$, the annihilation is dominated by lepton final states (about 75\%), which, however, do not lead to a sufficiently enhanced annihilation cross section. The subleading channel is annihilation into top-quark pairs (about 25\%). For $\delta^{u,\rm RR}_{23} \gtrsim 0.2$, flavour violation effects start to manifest themselves by opening the channel $\tilde{\chi}_1^0 \tilde{\chi}_1^0 \to c \bar{t} (t \bar{c})$. The relative contribution of this process amounts to almost 40\% at $\delta^{u,\rm RR}_{23} \sim 0.8$. For $\delta^{u,\rm RR}_{23} > 0.5$, the annihilation into top-quarks is significantly enhanced due to the lighter squark in the $t$-channel propagator, so that this channel remains more important than the newly opened annihilation into top- and charm-quarks. All contributions from neutralino pair annihilation drop at $\delta^{u,\rm RR}_{23} \sim 0.95$, when the squark $\tilde{u}_1$ becomes light enough for efficient coannihilation. The corresponding total relative contribution amounts to about 60\%. When the squark becomes even lighter, squark pair annihilation into gluon pairs also plays an important role (see Eq.\ (\ref{eq:CoAnn})), leading to relative contributions of up to about 90\%. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/Fig5_left} \includegraphics[width=0.45\textwidth]{figures/Fig5_right} \end{center} \caption{Relic density of the neutralino (left) and contributing processes (right) as a function of $\delta^{u,\rm RR}_{23}$ for $m_0 = 1500$ GeV, $m_{1/2} = 680$ GeV, $A_0 = -500$ GeV, $\tan\beta = 10$, and $\mu > 0$.} \label{fig:CMSSM3} \end{figure} For the scenario discussed here, the favoured relic density of the neutralino is achieved through important coannihilation contributions at rather large values of the flavour mixing parameter $\delta^{u,\rm RR}_{23}$.
Note that, depending on the exact parameter point under consideration and the corresponding relic density in the MFV case, this can also happen for lower values of $\delta^{u,\rm RR}_{23}$. In the same way, the enhancement of the total cross section through the new contributions from $c\bar{t} (t\bar{c})$ final states can be sufficient to achieve $\Omega_{\tilde{\chi}^0_1}h^2 \sim 0.11$. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/fig_msup_P1_duRR23} \includegraphics[width=0.45\textwidth]{figures/fig_su1mix_P1_duRR23} \end{center} \caption{Masses of the two lightest up-type squarks, gluino, and lightest neutralino (left) and flavour decomposition of lightest up-type squark (right)as a function of $\delta^{u,\rm RR}_{23}$ for $m_0 = 1500$ GeV, $m_{1/2} = 680$ GeV, $A_0 = -500$ GeV, $\tan\beta = 10$, and $\mu > 0$.} \label{fig:CMSSM4} \end{figure} For completeness, we show in Fig.\ \ref{fig:CMSSM4} the masses of the two lightest up-type squarks, the gluino, and the lightest neutralino as a function of the NMFV-parameter $\delta^{u,\rm RR}_{23}$ as well as the flavour decomposition for the same scenario as discussed above. The squark mass splitting is increased due to the additional off-diagonal entries in the mass matrix, so that the mass of $\tilde{u}_1$ decreases. For large flavour mixing, it comes close to the neutralino mass, leading to the important coannihilation as seen in Fig.\ \ref{fig:CMSSM2}. The masses of $\tilde{u}_2$ ($=\tilde{c}_L$), the neutralino and the gluino remain practically unaffected by the considered generation mixing. \subsection{Non-universal gaugino masses \label{sec4b}} When considering $SO(10)$ grand unification theories (GUT), the properties of the SUSY breaking mechanism are related to the breaking of an $SU(5)$ subgroup into the standard model gauge group $SU(3) \times SU(2) \times U(1)$. The relations between the gaugino masses $M_i$ ($i=1,2,3$) at the unification scale are given by the embedding coefficients of the standard model groups in $SU(5)$. In particular, the unification constraint $M_i = m_{1/2}$ of the CMSSM can be relaxed without spoiling the unification of the gauge couplings. Three independent parameters are then needed to fully parameterize the gaugino sector. A possible set is the wino mass $M_2$ together with the two dimensionless variables $x_1 = M_1/M_2$ and $x_3 = M_3/M_2$. The case $x_1=x_3=1$ corresponds to the CMSSM discussed above. Previous studies have shown that non-universal gaugino mass models have an interesting dark matter phenomenology \cite{DM_NUGM, DM_NUGM2}. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/Fig7_up_left} \includegraphics[width=0.45\textwidth]{figures/Fig7_up_right} \includegraphics[width=0.45\textwidth]{figures/Fig7_low_left} \includegraphics[width=0.45\textwidth]{figures/Fig7_low_right} \end{center} \caption{Top: Constraints in the ($m_0$, $M_{2}$) plane for $x_1 = 1/2$, $x_3 = 7/4$, $\delta^{u,\rm RR}_{23} = 0$ (left) and $\delta^{u,\rm RR}_{23} = 0.95$ (right) in NUGM. Bottom: Constraints in the ($x_{1}$, $x_{3}$) plane for $m_0 = 320$ GeV, $M_{2} = 700$ GeV, $\delta^{u,\rm RR}_{23} = 0$ (left) and $\delta^{u,\rm RR}_{23} = 0.95$ (right).} \label{fig:NUGM} \end{figure} We start by showing the relevant constraints in the ($m_0$, $M_{2}$) plane for $A_0 = 0$ GeV, $\tan \beta = 10$, $\mu >0$, $x_1 = 1/2$ and $x_3 = 7/4$ in the upper panels of Fig.\ \ref{fig:NUGM}. 
The flavour violating parameter $\delta^{u,\rm RR}_{23}$ is set to zero (left) and to $0.95$ (right). In both cases, one WMAP-favoured region is situated around the resonance of the lightest Higgs boson (for $M_{2} \approx 300$ GeV). The neutralino-stau coannihilation region is also present in both cases for $m_0 \approx 100$ GeV, next to the stau-LSP region. For $M_{2} \gtrsim 700$ GeV, due to the large mass splitting for $\delta^{u,\rm RR}_{23} = 0.95$, the squark-LSP region and its neighbouring coannihilation region are present, as is the case in the CMSSM (see Fig.\ \ref{fig:CMSSM1} right). In this case, a region with a sizeable relative contribution from neutralino pair annihilation into $t \bar{c}$ is also present. Note that, due to the non-universality, the neutralino can be lighter as compared to the CMSSM. Therefore, this region is bounded at a certain value of $M_2$, since below this bound the neutralino is not heavy enough to kinematically allow top production. An upper bound for this region at a higher value of $M_2$ is also observable, since above this value the neutralino is heavy enough to produce top quark pairs. As a result, this region lies in the range $450 \lesssim M_2 \lesssim 850$ GeV. The region excluded by the $b \to s \gamma$ branching ratio is significantly larger for $\delta^{u,\rm RR}_{23} = 0.95$ ($m_0 < 1200$ GeV, $M_{2} < 150$ GeV) than for $\delta^{u,\rm RR}_{23} = 0$ ($m_0 < 450$ GeV, $M_{2} < 100$ GeV). For $\delta^{u,\rm RR}_{23} = 0.95$ this region is similar to the one excluded by the light Higgs mass. We then show in the lower panels of Fig.\ \ref{fig:NUGM} the constraints in the ($x_1$,$x_3$) plane for a particular point of the parameter space for $\delta^{u,\rm RR}_{23} = 0$ (left) and $\delta^{u,\rm RR}_{23} = 0.95$ (right). Note that the chosen point, for $\delta^{u,\rm RR}_{23} = 0.95$ (i.e.\ the right panel) and $x_1 = 1/2$, $x_3 = 7/4$, corresponds to a point where the relic density lies in the WMAP interval (see the upper right panel of Fig.\ \ref{fig:NUGM}). For $\delta^{u,\rm RR}_{23} = 0$, different allowed regions are visible. One is the very low $x_1$ ($\approx 0.2$) region, in which the neutralino annihilates mainly to pairs of $b$ quarks or tau leptons via a light Higgs resonance. Then there is a diagonal line corresponding to the heavy Higgs resonance (the neutralino mass increases with $x_1$ and the heavy Higgs mass with $x_3$). For large $x_1$ ($\gtrsim 1.8$) the neutralino becomes mainly wino and annihilates strongly into $W$ boson pairs, thus decreasing the relic density below the lower limit. In the favoured ellipse-shaped region for $x_1 \simeq 1.6$, $x_3 \simeq 0.8$, the heavy Higgs resonance and the $W$ boson final states contribute in such a way that the relic density is compatible with the required interval. For $\delta^{u,\rm RR}_{23} = 0.95$ it is immediately visible that new allowed regions appear. In the low-$x_3$ region, annihilation of neutralinos into $t\bar{c}$ (and $t\bar{t}$ for $x_1 \gtrsim 0.6$) gives the main contributions. However, these new contributions essentially do not modify the shape of the allowed region compared to the $\delta^{u,\rm RR}_{23} = 0$ case. Moreover, we note that this region is excluded by BR$(b \to s \gamma)$. The interesting region lies at larger $x_3$, where the coannihilation of the neutralino with the lightest squark appears.
In this region, the mass difference between the neutralino and the lightest squark is approximately $30$ GeV, and the main contributions are neutralino-squark coannihilation into a top or charm quark and a gluon, and squark pair annihilation into gluons. While the neutralino mass increases with $x_1$, the lightest squark mass increases with $x_3$ up to a certain point and then decreases, which explains the particular arc shape of this region. Therefore, this coannihilation region is relatively symmetric with respect to $x_3 \approx 1.1$, and for each favoured point with $x_3 < 1.1$ there is a corresponding favoured point with $x_3 > 1.1$. As explained below, the $x_3 < 1.1$ part of this region is more strongly constrained by BR$(b \to s \gamma)$, which explains our choice of the $x_3$ value in the upper plots. Compared to the CMSSM, the predicted value of BR$(b \to s \gamma)$ is much less constraining, first because of the chosen value of $A_0$, but also because of the spectrum modification induced by the non-universal gaugino masses. This is illustrated in the lower right panel of Fig.\ \ref{fig:NUGM}, where $m_0$ and $M_2$ have been fixed and the (dis)favoured regions are shown in the ($x_1$,$x_3$) plane. It is striking that BR$(b \to s \gamma)$ has a strong dependence on $x_3$, which influences the whole SUSY spectrum through the renormalization group running. Most of the non-standard-model contribution to BR$(b \to s \gamma)$ comes from chargino and charged Higgs loops, the former being negative, and the latter positive. At low $x_3$, the (negative) chargino contribution is dominant, which leads to a branching ratio far below the standard model prediction. With increasing $x_3$, sparticles become heavier, and the absolute values of all contributions decrease. However, the absolute value of the chargino contribution comes closer to the charged Higgs contribution, which leads to important cancellations, and the branching ratio gets very close to the standard model value. For large $x_3$, the chargino contribution (in absolute value) becomes smaller than the Higgs contribution. The sum is then positive, but rather small as the masses are large. Among the other observables given in Tab.\ \ref{tab1}, the ones which exclude some regions of the parameter space presented here are $\Delta a_{\mu}$ and $\Delta \rho$, but only in the very low mass region which is already partially excluded by $\textnormal{BR}(b \to s \gamma)$. For the plots in the ($m_0$, $M_{2}$) plane, the neutralino mass limit excludes the region $M_{2} < 250$ GeV and the chargino mass limit excludes the region $M_{2} < 150$ GeV. The stop mass limit also excludes some regions in which the stop is the LSP (i.e.\ regions that are already excluded). For the plots in the ($x_{1}$, $x_{3}$) plane, the neutralino mass limit excludes the region $x_1 < 0.1$. \subsection{Non-universal Higgs masses \label{sec4c}} Similarly to the mechanism leading to non-universal gaugino masses in $SO(10)$ SUSY GUTs, depending on the exact representation to which the Higgs doublets belong, their corresponding SUSY breaking masses $m_{H_D}$ and $m_{H_U}$ need not necessarily be the same. In non-universal Higgs mass models they can therefore be treated as independent parameters at the high scale \cite{DM_NUHM2, DM_NUHM3}. For $m_{H_U} = m_{H_D} = m_0$ the standard CMSSM is recovered. We start by studying the ($m_0$, $m_{1/2}$) plane for fixed $m_{H_U} = 1250$ GeV, $m_{H_D} = 2290$ GeV, $A_0=0$, $\tan\beta=10$, and $\mu>0$.
The resulting excluded and cosmologically favoured regions are shown in Fig.\ \ref{fig:NUHM} (upper panels), again for both the case of MFV ($\delta^{u,\rm RR}_{23}=0$) and a rather large flavour mixing parameter $\delta^{u,\rm RR}_{23}=0.95$. For $\delta^{u,\rm RR}_{23} = 0$, the only allowed regions arise from the coannihilation of the neutralino with the superpartners of the tau or neutrinos and from the annihilation of neutralino pairs into $W^{\pm}$-bosons and top quark pairs due to the high higgsino component. The latter region is actually divided into two parts, parallel to the excluded region where certain squared sfermion masses become negative. For $\delta^{u,\rm RR}_{23} = 0.95$, new contributions from $\tilde{\chi}_1^0 \tilde{\chi}_1^0 \to c \bar{t} (t \bar{c})$ make their appearance, as it is also the case in the discussed CMSSM and NUGM scenarios. Here, however, important coannihilations of the neutralino with the lightest squark are present, leading to a completely modified picture with respect to the MFV case. Among the WMAP favoured regions, only the one due to coannihilation survives the $b \to s \gamma$ constraint. The discussion of this constraint (as all the constraints given in Tab.\ \ref{tab1}) is here similar to the CMSSM one. Let us now study the ($m_{H_U}$, $m_{H_D}$) plane for fixed values of $m_0=900$ GeV and $m_{1/2}=700$ GeV, shown in Fig.\ \ref{fig:NUHM} (lower panels). For $\delta^{u,\rm RR}_{23} = 0$ the situation is quite similar as described above. Two parallel allowed regions, where the neutralino is strongly higgsino and annihilating through the light Higgs resonance, are present. In addition, two other allowed regions, corresponding to the heavy neutral Higgs resonance, are observed for large $M_{H_U}$. Allowing for flavour violation ($\delta^{u,\rm RR}_{23} = 0.95$), two large additional allowed regions appear, where neutralino-squark coannihilation processes are dominant (up to $50 \%$ including light squark annihilation into gluons). Note that the corresponding WMAP-favoured areas are very large as compared to $\delta^{u,\rm RR}_{23} = 0$. The flavour violating annihilation channel $\tilde{\chi}_1^0 \tilde{\chi}_1^0 \to c \bar{t} (t \bar{c})$ is less important in this case. Again, all constraints have been checked to be fulfilled in the shown parameter space, except for the light neutralino/chargino mass in the region which lies close to the unphysical region. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/Fig8_up_left} \includegraphics[width=0.45\textwidth]{figures/Fig8_up_right} \includegraphics[width=0.45\textwidth]{figures/Fig8_low_left} \includegraphics[width=0.45\textwidth]{figures/Fig8_low_right} \end{center} \caption{Top: Constraints in the ($m_0$, $m_{1/2}$) plane for $\delta^{u,\rm RR}_{23} = 0$ (left) and $\delta^{u,\rm RR}_{23} = 0.95$ (right) in NUHM. Bottom: Constraints in the ($m_{H_U}$, $m_{H_D}$) plane for $\delta^{u,\rm RR}_{23} = 0$ (left) and $\delta^{u,\rm RR}_{23} = 0.95$ (right).} \label{fig:NUHM} \end{figure} \section{Impact on the relic density of dark matter \label{sec3}} The relic abundance in our universe of a stable dark matter candidate can be evaluated by solving the Boltzmann equation \begin{equation} \frac{{\rm d}n}{{\rm d}t} ~=~ -3 H n - \langle \sigma_{\rm ann}v \rangle \left( n^2 - n^2_{\rm eq} \right) , \end{equation} where $n$ is the number density of the relic particle, $H$ the (time-dependent) Hubble expansion rate, and $n_{\rm eq}$ the number density in thermal equilibrium. 
All information concerning the particle physics model parameters is contained in the annihilation cross section $\sigma_{\rm ann}$ multiplied with the relative velocity $v$ of the annihilating particles. This product has to be convolved with the velocity distribution of the non-relativistic dark matter particle in order to obtain the thermally averaged cross section $\langle \sigma_{\rm ann}v \rangle$. Denoting the mass of the dark matter candidate by $m_0$ and taking into account a set of $N$ potentially co-annihilating particles with masses $m_i$ ($i=1,\dots,N$) such that $m_0 \leq m_1 \leq \dots \leq m_N$, the thermally averaged annihilation cross section can be written as \cite{GriestSeckel, EdsjoGondolo} \begin{equation} \langle \sigma_{\rm ann}v \rangle ~=~ \sum_{i,j=0}^N \langle\sigma_{ij}v_{ij}\rangle \frac{n^{\rm eq}_i n^{\rm eq}_j}{n_{\rm eq}^2} ~=~ \sum_{i,j=0}^N \langle\sigma_{ij}v_{ij}\rangle \frac{g_i g_j}{g^2_{\rm eff}} \left( \frac{m_i m_j}{m_0^2} \right)^{3/2} {\rm exp}\left\{-\frac{(m_i+m_j-2m_0)}{T}\right\}. \label{eq:CoAnn} \end{equation} Here, $n^{\rm eq}_i$ denotes the equilibrium density of the particle $i$ and $i=0$ refers to the dark matter candidate. The cross sections $\sigma_{ij}$ relate to the different coannihilation processes within the ensemble of particles and $v_{ij}$ is the relative velocity between the particles $i$ and $j$. Moreover, $g_i$ denotes the number of degrees of freedom of particle $i$ and $g_{\rm eff}$ is a normalization factor. From Eq.\ (\ref{eq:CoAnn}) it becomes immediately clear that the mass differences between the annihilating particles play a crucial role. Due to the exponential suppression, the coannihilation between two given particles $i$ and $j$ will only lead to a significant contribution, if the two masses $m_i$ and $m_j$ are nearly degenerate \cite{GriestSeckel}. In the following discussion, we assume that the lightest neutralino is the lightest supersymmetric particle (LSP) and therefore the dark matter candidate. In wide regions of the MSSM parameter space, the pair annihilation of two neutralinos into standard model particles is the dominant process. The diagrams for annihilation into quarks, i.e.\ where flavour violation in the (s)quark sector can become relevant, are shown in Fig.\ \ref{fig:diagrams1}. At the tree-level, squarks can then appear only in internal propagators in case of annihilation into quark-antiquark pairs, i.e.\ $\tilde{\chi}^0_1 \tilde{\chi}^0_1 \to q\bar{q}$ through the exchange of a squark in the $t$- or $u$-channel \cite{DM_mSUGRA, DM_NUHM}. \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{figures/neuneu-ff_Hs_2} \includegraphics[width=0.3\textwidth]{figures/neuneu-ff_Zs_2} \includegraphics[width=0.3\textwidth]{figures/neuneu-ff_Ft_2} \end{center} \caption{Feynman diagrams for the annihilation of neutralinos into fermion pairs through the exchange of a neutral Higgs boson $H^0_i = h^0, H^0, A^0$ (left), a $Z^0$-boson (centre), or a sfermion (right). The corresponding $u$-channel diagram, obtained through crossing, is not shown.} \label{fig:diagrams1} \end{figure} Going beyond minimal flavour violation, the mass splitting of the involved squarks is increased due to the additional off-diagonal elements in the mass matrix. In particular, the lightest squark mass eigenstate (purely stop-like in the CMSSM with MFV) becomes lighter with increasing flavour mixing. Its contributions to neutralino pair annihilation through $t$- or $u$-channel exchange are therefore enhanced. 
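To make the exponential sensitivity of Eq.\ (\ref{eq:CoAnn}) to the mass splitting explicit, the following minimal sketch evaluates the relative Boltzmann weight multiplying a given coannihilation cross section $\sigma_{ij}$. The masses, degrees of freedom, the choice of a typical freeze-out temperature $T \approx m_0/25$, and the frozen value of $g_{\rm eff}$ are illustrative assumptions, not outputs of the full relic density calculation.
\begin{verbatim}
import numpy as np

def coann_weight(m_i, m_j, m_0, T, g_i, g_j, g_eff):
    """Boltzmann weight multiplying sigma_ij in Eq. (CoAnn)."""
    return (g_i * g_j / g_eff**2) * (m_i * m_j / m_0**2)**1.5 \
           * np.exp(-(m_i + m_j - 2.0 * m_0) / T)

# Illustrative inputs: a 500 GeV neutralino (g=2), a coannihilating squark
# (g=6) with varying mass splitting, T ~ m_0/25, and g_eff held constant.
m_chi, g_chi, g_sq, g_eff = 500.0, 2, 6, 8
T = m_chi / 25.0

for dm in (10.0, 30.0, 60.0, 100.0):   # squark-neutralino mass splitting [GeV]
    w = coann_weight(m_chi, m_chi + dm, m_chi, T, g_chi, g_sq, g_eff)
    print(f"dm = {dm:5.1f} GeV -> relative weight = {w:.2e}")
\end{verbatim}
The weight falls roughly as $\exp(-\Delta m/T)$, so a splitting of a few tens of GeV already suppresses the coannihilation channel by an order of magnitude; this is why the reduced $\tilde{u}_1$ mass induced by large flavour mixing is so effective.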
Apart from the impact on the squark mass eigenvalues, the flavour-violating terms discussed in Sec.\ \ref{sec2} directly affect the neutralino-squark-quark coupling, which is present in the $t$- or $u$-channel diagram. The analytical expressions for the left- and right-handed parts of this coupling are given by \cite{NMFV_mSUGRA} \begin{eqnarray} \label{eq:NeuSqQ} L_{\tilde{\chi}^0_i \tilde{u}_j u_k} &=& \Big[ \left( e_u - T_u \right) \sin\theta_W {\cal N}_{i1} + T_u \cos\theta_W {\cal N}_{i2} \Big] {\cal R}^{\tilde{u}*}_{jk} + \frac{m_{u_k} \cos\theta_W}{2m_W \cos\beta} {\cal N}_{i4} {\cal R}^{\tilde{u}*}_{j(k+3)}, \\ L_{\tilde{\chi}^0_i \tilde{d}_j d_k} &=& \Big[ \left( e_d - T_d \right) \sin\theta_W {\cal N}_{i1} + T_d \cos\theta_W {\cal N}_{i2} \Big] {\cal R}^{\tilde{d}*}_{jk} + \frac{m_{d_k} \cos\theta_W}{2m_W \sin\beta} {\cal N}_{i3} {\cal R}^{\tilde{d}*}_{j(k+3)}, \\ -R^*_{\tilde{\chi}^0_i \tilde{u}_j u_k} &=& e_u \sin\theta_W {\cal N}_{i1} {\cal R}^{\tilde{u}}_{jk} + \frac{m_{u_k} \cos\theta_W}{2m_W \cos\beta} {\cal N}_{i4} {\cal R}^{\tilde{u}}_{j(k+3)}, \\ -R^*_{\tilde{\chi}^0_i \tilde{d}_j d_k} &=& e_d \sin\theta_W {\cal N}_{i1} {\cal R}^{\tilde{d}}_{jk} + \frac{m_{d_k} \cos\theta_W}{2m_W \sin\beta} {\cal N}_{i3} {\cal R}^{\tilde{d}}_{j(k+3)}, \end{eqnarray} with the same notations as in Sec.\ \ref{sec2}. Flavour mixing effects arise through the squark rotation matrix ${\cal R}^{\tilde{q}}$ ($q=u,d$). This can allow for new annihilation channels, that are closed in the case of minimal flavour violation. Such channels can, e.g., be $\tilde{\chi}\tilde{\chi} \to c\bar{c}$ through exchange of a squark $\tilde{u}_1$ which is now a mixture of $\tilde{c}$ and $\tilde{t}$. In the case of MFV, this final state is only possible through exchange of a heavier $\tilde{c}$ and therefore suppressed. Another example is annihilation into a mixed final state, $\tilde{\chi}\tilde{\chi} \to c\bar{t}$, which is forbidden in MFV. The discussed enhancements and new channels increase the total annihilation cross section, which in turn decreases the predicted relic density of the neutralino. The diagrams with $s$-channel exchange of a Higgs or gauge boson remain insensitive to squark flavour mixing. \begin{figure} \begin{center} \includegraphics[width=0.3\textwidth]{figures/neuF-fH_fs_2} \includegraphics[width=0.3\textwidth]{figures/neuF-fH_Ft_2} \includegraphics[width=0.3\textwidth]{figures/neuF-fH_Gu_2} \bigskip \\ \includegraphics[width=0.3\textwidth]{figures/neuF-fV_fs_2} \includegraphics[width=0.3\textwidth]{figures/neuF-fV_Ft_2} \includegraphics[width=0.3\textwidth]{figures/neuF-fV_Gu_2} \end{center} \caption{Feynman diagrams for the coannihilation of a neutralino with a sfermion into a fermion together with a Higgs boson $H=h^0,H^0,A^0,H^{\pm}$ (top) or a gauge boson $V=\gamma, Z^0, W^{\pm}, g$ (bottom). These processes proceed through the exchange of a fermion (left), a sfermion (centre) or a gaugino (right). The $u$-channel diagram is not present in case of the gluonic final state.} \label{fig:diagrams2} \end{figure} Let us now turn to the case of neutralino-squark coannihilation. The possible final states are a quark together with a Higgs or a gauge boson. The relevant Feynman diagrams at the tree-level are depicted in Fig.\ \ref{fig:diagrams2}. The main impact from non-minimal flavour violation will be through the modified squark mass spectrum. 
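To get a feeling for how strongly such a modified mass spectrum feeds into Eq.\ (\ref{eq:CoAnn}), the following short sketch evaluates the relative Boltzmann weight of a neutralino-squark coannihilation channel for several mass splittings; the neutralino mass and the typical freeze-out temperature $T\approx m_{\tilde{\chi}_1^0}/25$ are illustrative assumptions.
\begin{verbatim}
# Relative Boltzmann weight of a coannihilation channel in the thermal
# average above: w ~ (m_i m_j / m_0^2)^(3/2) * exp(-(m_i + m_j - 2 m_0)/T).
# The neutralino mass and the splittings are illustrative assumptions.
import numpy as np

m0 = 500.0                 # LSP mass [GeV]
T  = m0 / 25.0             # typical freeze-out temperature

def weight(mi, mj):
    return (mi * mj / m0**2)**1.5 * np.exp(-(mi + mj - 2.0 * m0) / T)

for dm in (5.0, 20.0, 50.0, 100.0):        # squark-neutralino splitting
    print(f"dm = {dm:5.1f} GeV -> weight {weight(m0, m0 + dm):.2e}")
\end{verbatim}
A splitting of a few per cent of the neutralino mass already suppresses the channel by an order of magnitude, which is why the near-degeneracy induced by large flavour mixing has such a visible effect in the scans discussed above.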
As already stated above, the mass difference between neutralino and the lightest squark enters the calculation of the corresponding coannihilation cross section exponentially. When the squark mass approaches the neutralino mass due to increasing flavour mixing, this can significantly enhance the contribution from the corresponding coannihilation with respect to the case of minimal flavour violation. Again, also the flavour-violating couplings can have subdominant effects on the coannihilation processes. Each of the diagrams depicted in Fig.\ \ref{fig:diagrams2} contains the squark-quark-neutralino coupling already discussed above. Moreover, the couplings of squarks to Higgs- and massive gauge bosons are sensitive to flavour-violating effects. In the mass eigenstate basis, the couplings of squarks to a $Z^0$-boson are given by \cite{NMFV_Squark1} \begin{eqnarray} C_{Z^0 \tilde{q}_j \tilde{q}_k} &=& - i \frac{g_2}{\cos \theta_W} (p_j + p_k)_{\mu} \left[ \sum_{i=1}^3 I_q {\cal R}^{\tilde{q}*}_{ij} {\cal R}^{\tilde{q}}_{ik} - e_{q} \sin^2\theta_W \delta_{jk} \right] \label{eq:SqSqZ} \end{eqnarray} for $q=u,d$. Here, $p_j$ and $p_k$ denote the momentum of $\tilde{q}_j$ and $\tilde{q}_k$, respectively. The interactions of squarks with a photon or a gluon are flavour-diagonal and are therefore not discussed in detail here. The couplings of two up-type squarks with the light scalar Higgs boson are given by \cite{NMFV_Squark1} \begin{eqnarray} C_{h^0 \tilde{u}_j \tilde{u}_k} & = & -\frac{g_2}{2 m_W}\ \sum_{i=1}^3 \bigg[ m_W^2 \sin(\alpha+\beta) \Big[ (1-\frac{1}{3} \tan^2\theta_W) {\cal R}^{\tilde{u}}_{ji} {\cal R}^{\tilde{u}*}_{ki} + \frac{4}{3} \tan^2\theta_W {\cal R}^{\tilde{u}}_{j(i+3)} {\cal R}^{\tilde{u}*}_{k(i+3)} \Big] \nonumber \\[0.2cm] & & + 2 \frac{\cos\alpha}{\sin\beta} \Big[ {\cal R}^{\tilde{u}}_{ji}\ m^2_{u_i} {\cal R}^{\tilde{u}*}_{ki} + {\cal R}^{\tilde{u}}_{j(i+3)} m^2_{u_i} {\cal R}^{\tilde{u}*}_{k(i+3)} \Big] + \frac{\sin\alpha}{\sin\beta} \Big[ \mu^* {\cal R}^{\tilde{u}}_{j(i+3)} m_{u_i} {\cal R}^{\tilde{u}*}_{ki} + \mu {\cal R}^{\tilde{u}}_{ji} m_{u_i} {\cal R}^{\tilde{u}*}_{k(i+3)} \Big] \nonumber \\[0.2cm] & & + \frac{\cos\alpha}{\sin\beta}\, \frac{v_u}{\sqrt2} \sum_{l=1}^3 \Big[ {\cal R}^{\tilde{u}}_{j(i+3)}\ ({T}_U)_{il}\ {\cal R}^{\tilde{u}*}_{kl} + {\cal R}^{\tilde{u}}_{ji}\ ({T}_U^\dagger)_{il}\ {\cal R}^{\tilde{u}*}_{k(l+3)} \Big] \bigg] . \label{eq:SqSqH} \end{eqnarray} From this expression, the coupling to the heavy scalar Higgs is obtained through the replacements $h^0 \to H^0$ and $\alpha \to \alpha+\pi/2$. Moreover, couplings of down-type squarks to the neutral scalar Higgses are obtained by replacing $\tilde{u}_i \to \tilde{d}_i$ and $\sin\beta \to \cos\beta$. Finally, the couplings of up-type squarks to a pseudoscalar Higgs-boson are given by \cite{NMFV_Squark1} \begin{equation} C_{A^0 \tilde{u}_j \tilde{u}_k} = -i \frac{g_2}{2 m_W}\ \sum_{i=1}^3 \Big[ \mu^* {\cal R}^{\tilde{u}}_{j(i+3)} m_{u_i} {\cal R}^{\tilde{u}*}_{ki} + \cot\beta \frac{v_u}{\sqrt2} \sum_{l=1}^3 {\cal R}^{\tilde{u}}_{j(i+3)} (T_U)_{il} {\cal R}^{\tilde{u}*}_{kl} + \textnormal{h.c} \Big]. \label{eq:SqSqA} \end{equation} Again, the expressions for down-type squarks can easily be obtained through $\tilde{u}_i \to \tilde{d}_i$ and $\cot\beta \to \tan\beta$. The effects of the modified mass eigenvalues and the modified couplings are superimposed. 
Since the two effects are linked together through their same origin (see Eq.\ (\ref{eq:massdiag})), the separate impacts on the (co)annihilation cross section and the neutralino relic density cannot be disentangled. However, some general features can be expected. The effect of the modified squark mass eigenvalues on coannihilation is expected to be stronger than in the case of neutralino pair annihilation due to the exponential factor already mentioned above. Moreover, the squark is here an external particle, and the impact of its mass on the phase space is more important than the mass in the $t$- or $u$-channel propagator. The impact of the modified flavour contents of the involved squarks, i.e.\ the effect of the rotation matrix in the coupling, is expected to be smaller than the mass effect. This is again due to the exponential factor in Eq.\ (\ref{eq:CoAnn}). Note also that, the mixing being unitary, the newly opened channels can be (partially) compensated by the simultaneous diminution of other contributions. The compensating contribution can, however, turn out to be forbidden in specific kinematical configurations and the impact of the new contributions can be significant. This is in particular the case when the neutralino is too light to annihilate into top-quark pairs, i.e.\ for $m_{\tilde{\chi}_1^0} < m_t$. The flavour violating elements lead then to a $\tilde{c}$ admixture in the lightest squark, which then allows for neutralino pair annihilation into top and charm quarks. Note that there can also be coannihilation of a neutralino with an up-(down-)type squark into a charged Higgs boson $H^{\pm}$ or a W-boson together with a down-(up-)type quark. In this case, the $u$-channel diagram includes a chargino propagator and in consequence the corresponding chargino-squark-quark coupling, while the $s$- and $t$-channel diagrams involve couplings of up- and down-type squarks to the charged Higgs or W-boson. Analytical expressions for these couplings can be found in Refs.\ \cite{NMFV_mSUGRA, NMFV_Squark1}. Since they are rather similar (with obvious replacements, e.g., concerning gaugino mixing) to the interactions given in Eqs.\ (\ref{eq:NeuSqQ}) to (\ref{eq:SqSqH}), they are not displayed in detail here. Note, however, that these couplings explicitly depend on the CKM-matrix. The general argumentation given above remains unchanged.
\section{Introduction} \setcounter{equation}{0} In classical Newton's theory of gravity, gravity obeys the inverse square law\cite{01}. That is, the gravitational force between two mass points $m_1$ and $m_2$ is \begin{equation} f= \frac{G_N m_1 m_2}{r^2}, \end{equation} where $G_N$ is the Newtonian gravitational constant and $r$ is the distance between the two mass points. The gravitational force is directed along the line connecting the two mass points and is always attractive. Obviously, the above expression is neither invariant nor covariant under Lorentz transformations. \\ In Einstein's general theory of relativity\cite{02,03}, gravity is treated as space-time geometry and is just an effect of the curved space-time. General relativity is a geometric theory of gravity; in its original formulation, the concepts of the space-time metric, affine connection, curvature tensor, geodesic curve, $\cdots$ are used, while the traditional concept of a ``gravitational force'' is not. Only in the post-Newtonian approximation can we clearly identify the counterpart of the traditional gravitational force and its effects\cite{04,05}. \\ Quantum gauge theory of gravity is formulated in the framework of quantum field theory\cite{wu01,wu02,wu03,wu04,wu05}, where gravity is treated as a physical interaction and space-time is always kept flat. This treatment satisfies the fundamental spirit of quantum field theory. Quantum gauge theory of gravity is a perturbatively renormalizable quantum theory. It can be used to address problems related to the quantum behavior of gravitational interactions: unification of fundamental interactions can easily be accomplished in it\cite{12,13,14}, it can be used to explain the possible origin of dark matter and dark energy\cite{15,16}, and it can be used to explain the Podkletnov effects\cite{17} and COW experiments. \\ In the gauge theory of gravity, the gravitoelectromagnetic field is naturally defined through the components of the field strength of the gravitational gauge field. Gravitoelectromagnetism has been studied for more than 130 years. The close analogy between Newton's gravitation law and Coulomb's law of electricity led to the birth of the concept of gravitomagnetism in the nineteenth century\cite{m01,m02,m03,m04}. Later, in the framework of General Relativity, gravitomagnetism was extensively explored\cite{m05,m06,m07}, and recently some experiments have been designed to test gravitomagnetic effects. Some recent reviews on gravitomagnetism can be found in the literature \cite{m08,m09,m10}. In quantum gauge theory of gravity, gravitoelectromagnetism is defined in a more general way, and the gravitoelectromagnetism discussed in the literature \cite{m05,m06,m07,m08,m09,m10} is only a special case of it. \\ In this paper, we discuss some classical effects of gravitational interactions in the framework of the gauge theory of gravity, including the gravitational interaction of two mass points and the gravitational Lorentz force. \\ \section{Pure Gravitational Gauge Field} \setcounter{equation}{0} First, for the sake of completeness, we give a brief introduction to gravitational gauge theory and introduce some notations which are used in this paper. Details on quantum gauge theory of gravity can be found in the literature \cite{wu01,wu02,wu03,wu04,wu05}. In the gauge theory of gravity, the most fundamental quantity is the gravitational gauge field $C_{\mu}(x)$, which is the gauge potential corresponding to the gravitational gauge symmetry.
The gauge field $C_{\mu}(x)$ is a vector in the corresponding Lie algebra, which is called the gravitational Lie algebra, so $C_{\mu}(x)$ can be expanded as \begin{equation} \label{2.10} C_{\mu}(x) = C_{\mu}^{\alpha} (x) \hat{P}_{\alpha}, ~~~~~~(\mu, \alpha = 0,1,2,3) \end{equation} where $C_{\mu}^{\alpha}(x)$ is the component field and $\hat{P}_{\alpha} = -i \frac{\partial}{\partial x^{\alpha}}$ is the generator of the global gravitational gauge group. The gravitational gauge covariant derivative is defined by \begin{equation} \label{2.9} D_{\mu} = \partial_{\mu} - i g C_{\mu} (x) = G_{\mu}^{\alpha} \partial_{\alpha}, \end{equation} where $g$ is the gravitational coupling constant and $G$ is given by \begin{equation} \label{2.11} G = (G_{\mu}^{\alpha}) = ( \delta_{\mu}^{\alpha} - g C_{\mu}^{\alpha} ) =(I -g C)^{\alpha}_{\mu}. \end{equation} The matrix $G$ is an important quantity in the gauge theory of gravity. Its inverse matrix is denoted by $G^{-1}$, \begin{equation} \label{2.12} G^{-1} = \frac{1}{I - g C} = (G^{-1 \mu}_{\alpha}). \end{equation} Using the matrices $G$ and $G^{-1}$, we can define two important quantities \begin{equation} \label{2.13} g^{\alpha \beta} = \eta^{\mu \nu} G^{\alpha}_{\mu} G^{\beta}_{\nu}, \end{equation} \begin{equation} \label{2.14} g_{\alpha \beta} = \eta_{\mu \nu} G_{\alpha}^{-1 \mu} G_{\beta}^{-1 \nu}. \end{equation} \\ The field strength of the gravitational gauge field is defined by \begin{equation} \label{2.16} F_{\mu\nu} \stackrel{\triangle}{=} \frac{1}{-ig} \lbrack D_{\mu}~~,~~D_{\nu} \rbrack. \end{equation} Its explicit expression is \begin{equation} \label{2.17} F_{\mu\nu}(x) = \partial_{\mu} C_{\nu} (x) -\partial_{\nu} C_{\mu} (x) - i g C_{\mu} (x) C_{\nu}(x) + i g C_{\nu} (x) C_{\mu}(x). \end{equation} $F_{\mu\nu}$ is also a vector in the gravitational Lie algebra and can be expanded as \begin{equation} \label{2.18} F_{\mu\nu} (x) = F_{\mu\nu}^{\alpha}(x) \cdot \hat{P}_{\alpha}, \end{equation} where \begin{equation} \label{2.19} F_{\mu\nu}^{\alpha} = \partial_{\mu} C_{\nu}^{\alpha} -\partial_{\nu} C_{\mu}^{\alpha} - g C_{\mu}^{\beta} \partial_{\beta} C_{\nu}^{\alpha} + g C_{\nu}^{\beta} \partial_{\beta} C_{\mu}^{\alpha}. \end{equation} \\ \section{Gravitoelectromagnetic Field} \setcounter{equation}{0} $F_{\mu \nu}^{\alpha}$ is the component field strength of the gravitational gauge field. Define \begin{equation} F^{\alpha}_{ij}= - \varepsilon_{ijk} B^{\alpha}_{k} ~~,~~ F^{\alpha}_{ 0i} = E^{\alpha}_{i}. \label{12.2} \end{equation} Then the field strength of the gravitational gauge field can be expressed as \begin{equation} F^{\alpha} = \left \lbrace \begin{array}{cccc} 0 & E_1^{\alpha} & E_2^{\alpha} & E_3^{\alpha} \\ - E_1^{\alpha} & 0 & -B_3^{\alpha} & B_2^{\alpha} \\ - E_2^{\alpha} & B_3^{\alpha} & 0 & - B_1^{\alpha} \\ - E_3^{\alpha} & -B_2^{\alpha} & B_1^{\alpha} & 0 \end{array} \right \rbrace . \end{equation} This form is quite similar to that of the field strength in electrodynamics, but with an extra group index $\alpha$. The components $E_i^{\alpha}$ of the field strength are called the gravitoelectric field, and $B_i^{\alpha}$ the gravitomagnetic field. Traditional Newtonian gravity is transmitted by $E_i^{\alpha}$. The $\alpha=0$ components $B_i^0$ and $E_i^0$ correspond to the gravitoelectromagnetic field defined in the literature \cite{m05,m06,m07,m08,m09,m10}. \\
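As a small numerical illustration of these definitions, one can check that the two objects defined in eq.(\ref{2.13}) and eq.(\ref{2.14}) are inverse to each other. In the following sketch the field configuration, the value of the coupling constant $g$ and the metric signature $(-,+,+,+)$ are arbitrary illustrative assumptions.
\begin{verbatim}
# Sketch of the definitions of G, G^{-1}, g^{alpha beta} and g_{alpha beta}
# for a sample constant gauge field C_mu^alpha.  Values are illustrative.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])       # flat metric (signature assumed)
g   = 0.1                                  # gravitational coupling (assumed)
C   = 0.3 * np.random.default_rng(0).standard_normal((4, 4))  # C_mu^alpha

G    = np.eye(4) - g * C                   # G_mu^alpha
Ginv = np.linalg.inv(G)                    # G^{-1 mu}_alpha

g_up   = G.T @ eta @ G                     # g^{alpha beta}
g_down = Ginv @ eta @ Ginv.T               # g_{alpha beta}

print(np.allclose(g_up @ g_down, np.eye(4)))   # True: mutually inverse
print(np.allclose(g_up, eta))                  # False unless C = 0
\end{verbatim}
For $C=0$ both objects reduce to the flat metric $\eta$.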
In most cases, the gravitational field is weak and its self-interactions can be neglected. So, eq.(\ref{2.19}) can be simplified to \begin{equation} F_{\mu\nu}^{\alpha} = \partial_{\mu} C_{\nu}^{\alpha} -\partial_{\nu} C_{\mu}^{\alpha}. \end{equation} The gravitomagnetic and gravitoelectric fields then take the familiar forms \begin{equation} B_i^{\alpha} = \partial_k C_j^{\alpha} - \partial_j C_k^{\alpha}, \end{equation} \begin{equation} E_i^{\alpha} = \partial_t C_i^{\alpha} - \partial_i C_0^{\alpha}. \end{equation} From the definitions in eq.(\ref{12.2}), we can prove that \begin{equation} \nabla \cdot \svec{B}^{\alpha} =0, \label{12.5} \end{equation} \begin{equation} \frac{\partial}{\partial t} \svec{B}^{\alpha} + \nabla \times \svec{E}^{\alpha} =0. \label{12.6} \end{equation} From eq.(\ref{12.5}), we know that the gravitomagnetic field is source-free. The gravitomagnetic field is generated by moving objects and transmits gravitomagnetic interactions between rotating objects. \\ \section{Gravitational Lorentz Force} \setcounter{equation}{0} A particle which moves in a gravitomagnetic field will feel a force that is perpendicular to its motion. In electrodynamics, this force is usually called the Lorentz force. As an example, we discuss the gravitational interaction between the gravitational field and a Dirac field. The Lagrangian for the gravitational gauge interactions of the Dirac field is\cite{18,wu06} \begin{equation} {\cal L}_0 = - \bar{\psi} (\gamma^{\mu} D_{\mu} + m) \psi - \frac{1}{4} C^{\mu\nu\rho\sigma}_{\alpha\beta} F_{\mu \nu}^{\alpha} F_{\rho \sigma}^{\beta} , \label{6.2} \end{equation} where \begin{equation} C^{\mu\nu\rho\sigma}_{\alpha\beta} = \frac{1}{4} \eta^{\mu \rho} \eta^{\nu \sigma} g_{\alpha \beta} + \frac{1}{2} \eta^{\mu \rho} G^{-1 \nu}_{\beta} G^{-1 \sigma}_{\alpha} - \eta^{\mu \rho} G^{-1 \nu}_{\alpha} G^{-1 \sigma}_{\beta}. \end{equation} So, the interaction Lagrangian is \begin{equation} {\cal L}_I = g \bar\psi \gamma^{\mu} \partial_{\alpha} \psi C_{\mu}^{\alpha}. \label{12.14} \end{equation} The gravitational energy-momentum of the Dirac field is \begin{equation} T_{g \alpha}^{\mu} = \bar\psi \gamma^{\mu} \partial_{\alpha} \psi. \label{12.15} \end{equation} Substituting eq.(\ref{12.15}) into eq.(\ref{12.14}), we get \begin{equation} {\cal L}_I = g T_{g \alpha}^{\mu} C_{\mu}^{\alpha}. \label{12.16} \end{equation} The interaction Hamiltonian density ${\cal H}_I$ is \begin{equation} {\cal H}_I = - {\cal L}_I = - g T_{g \alpha}^{\mu}(y,\svec{x}) C_{\mu}^{\alpha}(y). \label{12.17} \end{equation} Suppose that the moving particle is a mass point located at $\svec{x} $; in this case \begin{equation} T_{g \alpha}^{\mu}(y,\svec{x}) = T_{g \alpha}^{\mu} \delta(\svec{y} - \svec{x} ), \label{12.18} \end{equation} where $ T_{g \alpha}^{\mu}$ is independent of the space coordinates. Then, the interaction Hamiltonian $H_I$ is \begin{equation} H_I = \int {\rm d}^3 \svec{y} {\cal H}_I (y) = - g \int {\rm d}^3 \svec{y} T_{g \alpha}^{\mu}(y,\svec{x}) C_{\mu}^{\alpha}(y). \label{12.19} \end{equation} The gravitational force that acts on the mass point is \begin{equation} f_i = g \int {\rm d}^3y T^{\mu}_{g \alpha}(y,\svec{x}) F^{\alpha}_{i \mu} +g \int {\rm d}^3y T^{\mu}_{g \alpha}(y,\svec{x}) \frac{\partial}{\partial y^{\mu}} C_i^{\alpha}. \label{12.20} \end{equation} For a quasi-static system, if we omit higher-order contributions, the second term in the above relation vanishes.
For a mass point, using the technique of Lorentz covariance analysis, we can prove that \begin{equation} P_{g \alpha} U^{\mu} = \gamma T_{g \alpha}^{\mu}, \label{12.21} \end{equation} where $U^{\mu}$ is the velocity, $\gamma$ is the Lorentz factor, and $P_{g \alpha}$ is the gravitational energy-momentum. According to eq.(\ref{12.18}), $P_{g \alpha}$ is given by \begin{equation} P_{g \alpha} = \int {\rm d}^3 \svec{y} T_{g \alpha}^0(y) = T_{g \alpha}^0. \label{12.22} \end{equation} Using all these relations and eq.(\ref{12.2}), we get \begin{equation} \label{12.22a} \svec{f} = -g P_{g \alpha} \svec{E}^{\alpha} - g P_{g \alpha} \svec{v} \times \svec{B}^{\alpha}. \end{equation} For a quasi-static system, the dominant contribution to the above relation is \begin{equation} \svec{f} = g M \svec{E}^0 + g M \svec{v} \times \svec{B}^0, \label{12.23} \end{equation} where $\svec{v}= \svec{U}/\gamma$ is the velocity of the mass point. The first term of eq.(\ref{12.23}) is the classical Newtonian gravitational interaction. The second term of eq.(\ref{12.23}) is the Lorentz force. The direction of this force is perpendicular to the direction of the motion of the mass point. When the mass point is at rest or is moving along the direction of the gravitomagnetic field, this force vanishes. The gravitational Lorentz force should have some influence on cosmology, for the rotation of a galaxy will generate a gravitomagnetic field, and this field will affect the motion of stars and the large-scale structure of the galaxy. \\ \section{Repulsive Component of Gravitational Interactions} \setcounter{equation}{0} The classical gravitational interaction is always attractive, but sometimes there are repulsive components in gravity. The gravitational force is given by eq.(\ref{12.22a}). The first term corresponds to the classical gravitational force. It is \begin{equation} \begin{array}{rcl} \svec{f} &=& - g P_{g \alpha} \svec{E}^{\alpha} \\ &&\\ &=& g ( M_1 \svec{E}^0 - P_{gj} \svec{E}^j ), \end{array} \label{12.60} \end{equation} where $M_1$ is the gravitational mass of the mass point which is moving in the gravitational field. Suppose that the gravitational field is generated by another mass point whose gravitational energy-momentum is $Q_g^{\alpha}$ and whose gravitational mass is $M_2$. For a quasi-static gravitational field, we get \begin{equation} \svec{E}^{\alpha} = - \frac{g}{4 \pi r^3} Q_g^{\alpha} \svec{r}. \label{12.61} \end{equation} Substituting eq.(\ref{12.61}) into eq.(\ref{12.60}), we get \begin{equation} \svec{f} = \frac{g^2}{4 \pi r^3} \svec{r} ( - E_{1g} E_{2g} + \svec{P}_g \cdot \svec{Q}_g ), \label{12.62} \end{equation} where $E_{1g}$ and $E_{2g}$ are the gravitational energies of the two mass points. From eq.(\ref{12.62}), we can see that, if $\svec{P}_g \cdot \svec{Q}_g$ is positive, the corresponding momentum-momentum component of the gravitational force is repulsive. Because $E_{1g} E_{2g} \ge \svec{P}_g \cdot \svec{Q}_g$, the total gravitational force between the two mass points is always attractive. \\ Under a Lorentz transformation along a direction perpendicular to the gravitational force $\svec{f}$, the left-hand side of eq.(\ref{12.62}) is invariant. Under this transformation, the gravitational energies of both mass points change, so $E_{1g} E_{2g}$ changes, but the combination $- E_{1g} E_{2g} + \svec{P}_g \cdot \svec{Q}_g$ remains invariant. Therefore, the appearance of this momentum-momentum component of the gravitational force is consistent with the requirement of Lorentz symmetry.
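This property can also be checked numerically. The following small sketch samples pairs of moving mass points (with arbitrary illustrative masses and velocities, in units where $c=1$, and identifying the gravitational energy-momentum with the ordinary kinematic one for this illustration) and confirms that the combination $- E_{1g} E_{2g} + \svec{P}_g \cdot \svec{Q}_g$ in eq.(\ref{12.62}) never becomes positive.
\begin{verbatim}
# Check of the bracket (-E1*E2 + P.Q): it stays non-positive, so the total
# force remains attractive even when the momentum-momentum piece P.Q > 0.
# Masses and velocities are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def four_momentum(m, v):
    gamma = 1.0 / np.sqrt(1.0 - v @ v)        # Lorentz factor, c = 1
    return gamma * m, gamma * m * v           # (energy, 3-momentum)

worst = -np.inf
for _ in range(10_000):
    m1, m2 = rng.uniform(0.1, 10.0, size=2)
    v1 = rng.uniform(-0.57, 0.57, size=3)     # |v| < 1 guaranteed
    v2 = rng.uniform(-0.57, 0.57, size=3)
    E1, P = four_momentum(m1, v1)
    E2, Q = four_momentum(m2, v2)
    worst = max(worst, -E1 * E2 + P @ Q)
print("largest bracket found:", worst, "(never positive)")
\end{verbatim}
The momentum-momentum term $\svec{P}_g \cdot \svec{Q}_g$ alone changes sign with the relative direction of motion, but it is always dominated by $E_{1g} E_{2g}$.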
\\ \section{Discussions} \setcounter{equation}{0} Eq.(\ref{12.22a}) is obtained from the interaction Lagrangian; it is deduced without reference to the dynamics of the gravitational gauge field. It is known that the choice of the Lagrangian of the pure gravitational gauge field is not unique, but this ambiguity does not affect Eq.(\ref{12.22a}). A different choice of this Lagrangian only yields different $\svec{E}^{\alpha}$ and $\svec{B}^{\alpha}$; it does not affect the classical Newtonian gravitational interaction or the gravitational Lorentz force. \\ In classical Newton's theory of gravity, the gravitational force between two objects is always along the line which connects their centers of mass. Because of the gravitational Lorentz force, there exists a force component which is perpendicular to this line. In most cases, this orthogonal component is much weaker than the traditional Newtonian gravity. \\ The gravitational Lorentz force is independent of electric charge, so it is different from the electric Lorentz force. Only charged particles feel the electric Lorentz force, but all particles feel the gravitational Lorentz force. Using this property, we can discriminate the gravitational Lorentz force from the electric Lorentz force, and the gravitomagnetic field from the electromagnetic field. Effects of the gravitational Lorentz force should be observable in astrophysical observations. It is known that some experiments have been designed to test gravitomagnetic effects. \\
\section{Introduction} Powerful concentration and deviation inequalities for suprema of empirical processes have been derived during the last 20 years. These inequalities turned out to be crucial for example in the study of consistency and rates of convergence for many estimators. Unfortunately, the known inequalities are only valid for bounded empirical processes or under strict tail assumptions. So, this paper was prompted by the question of whether useful inequalities can be obtained under considerably weaker assumptions.\\ Let us first set the framework, starting with a brief summary of the known results for bounded empirical processes, or more precisely, for empirical processes indexed by bounded functions. To this end, we consider independent and identically distributed random variables $X_1,...,X_n$ and a countable function class $\mathcal F$ such that $\sup_{f\in\mathcal{F}}\Vert f\Vert_\infty\leq 1$ and $\sup_{f\in\mathcal{F}}|\mathbb{E} f(X_1)|=0$. The quantity of interest is then $Y:=\sup_{f\in\mathcal{F}}|\frac{1}{n}\sum_{i=1}^n f(X_i)|$. Bousquet derives in \citep{Bousquet02} with $\sigma_Y^2:=\sup_{f\in\mathcal{F}}\operatorname{Var}f(X_1)$ and $\nu:=\sigma_Y^2+2\mathbb{E}[Y]$ the exponential deviation inequality \begin{equation*} \mathbb{P}\left(Y-\mathbb{E} Y\geq \sqrt{2x\nu}+\frac{x}{3}\right)\leq e^{-nx}\text{~~~for all~}x>0. \end{equation*} Bousquet's proof is a refinement of Rio's proof in \citep{Rio02} and relies on the entropy method (see for example \citep[Chapter~5.3]{Massart07}). Similar exponential bounds for bounded empirical processes have been found by Klein and Rio \citep{Klein05} and by Massart \citep{Massart00}. These bounds are slightly less sharp, but additionally hold for not necessarily identically distributed random variables and also for $-Y$. We finally mention Talagrand's work \citep{Talagrand96b} that probably provided the spark for the development in this field.\\ Results are also known for possibly unbounded empirical processes that have very weak tails. We consider independent and identically distributed random variables $X_1,...,X_n$ and a function class $\mathcal F$ that fulfills the Bernstein conditions, that is, $\sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^n\mathbb{E} |f(X_i)|^m\leq\frac{m!}{2}K^{m-2},~m=2,3,...$ for a constant $K$. Additionally, we assume $\sup_{i,f\in\mathcal{F}}|\mathbb{E} f(X_i)|=0$ and $\operatorname{card}\mathcal{F}=p$. B\"uhlmann and van de Geer then derive in \citep{Buhlmann11} the exponential deviation inequality \begin{equation*} \mathbb{P}\left(Y\geq Kx +\sqrt{2x} +\sqrt{\frac{2\log(2p)}{n}} +\frac{K\log(2p)}{n}\right)\leq e^{-nx}\text{~~~for all~}x>0 \end{equation*} for $Y$ as above. Other exponential bounds for unbounded empirical processes are given by Adamczak in \citep{Adamczak08} and by van de Geer and Lederer in \citep{vdGeer11b}. These authors assume very weak tails with respect to suitable Orlicz norms.\\ But what if the empirical process is unbounded and does not fulfill the strict tail assumptions mentioned above? There is no hope of deriving exponential bounds as above under considerably weaker assumptions. However, we show in the following that weak moment assumptions are sufficient to obtain useful moment-type concentration inequalities. For this purpose, we consider independent, not necessarily identically distributed random variables $X_1,...,X_n$ and a countable function class $\mathcal F$ with an envelope that has $p$th moment at most $M^p$ for a $p\in[1,\infty)$.
Our main result, Theorem~\ref{lemma.ConIn2.FirstCor}, then implies for the quantity of interest $Y:=\sup_{f\in\mathcal{F}}|\frac{1}{n}\sum_{i=1}^n (f(X_i)-\mathbb{E} f(X_i))|$, and for $1\leq l\leq p$, $\sigma_Y$ as above, and $(\cdot)_+^l:=\left(\max\{0,\cdot\}\right)^l$ \begin{equation*} \mathbb{E} \left[Y-(1+\epsilon)\mathbb{E} {Y}\right]_+^l\leq \left(\left(\frac{64}{\epsilon}+7+\epsilon\right)\frac{lM}{n^{1-\frac{l}{p}}}+\frac{4\sqrt l{\sigma_Y}}{\sqrt n} \right)^l\text{~~~for all~}\epsilon>0 \end{equation*} and \begin{align*} \mathbb{E} \left[(1-\epsilon)\mathbb{E} {Y}-Y\right]_+^l\leq \left(\left(\frac{86.4}{\epsilon}+7-\epsilon\right)\frac{lM}{n^{1-\frac{l}{p}}}+\frac{4.7\sqrt l{\sigma_Y}}{\sqrt n} \right)^l\text{~~~for~}\epsilon\in (0,1]. \end{align*} \noindent We argue in Section~\ref{sec.M1} that this result is especially useful in the common case where the envelope (measured by $M$) is much larger than the single functions (measured by ${\sigma_Y}$).\\ We close this section with a short outline of the paper. In Section~\ref{sec.Guideline}, we give the basic definitions and assumptions. In Section~\ref{sec.M1}, we then state and discuss the main result. This is followed by complementary bounds in Section~\ref{sec.M2}. Detailed proofs are finally given in Section~\ref{sec.proofs}. \section{Random Vectors, Concentration Inequalities and Envelopes} \label{sec.Guideline} We are mainly interested in the behavior of \itshape suprema of empirical processes \normalfont \begin{equation} \label{eq.intro.ep} Y:=\sup_{f\in\mathcal F}\left |\frac{1}{n}\sum_{i=1}^n f(X_i)\right|\text{~~or~~}Y:=\sup_{f\in\mathcal F}\left|\frac{1}{n}\sum_{i=1}^n \left(f(X_i)-\mathbb{E} f(X_i)\right)\right| \end{equation} for large $n$. Here, $X_1,...,X_n$ are independent, not necessarily identically distributed random variables and $\{f:f\in\mathcal{F}\}$ is a countable family of real, measurable functions. In the sequel, we may restrict ourselves to finitely many functions by virtue of the monotone convergence theorem.\\ \itshape Random vectors \normalfont generalize the notion of empirical processes. Let $\mathcal{Z}_1,...,\mathcal{Z}_n$ be arbitrary probability spaces and $\{Z_i(j):\mathcal{Z}_i\to \mathbb{R}, 1\leq j\leq N, 1\leq i\leq n\}$ a set of random variables. We then define the random vectors as $Z(j):=(Z_1(j),...,Z_n(j))^T:\mathcal{Z}_1\times ...\times\mathcal{Z}_n\to\mathbb{R}^n$. For convenience, we introduce their mean as $ P Z(j):=\frac{1}{n}\sum_{i=1}^n\mathbb{E} Z_i(j)$ and their empirical mean as $ \mathbb{P}_nZ(j):=\frac{1}{n}\sum_{i=1}^nZ_i(j).$ Throughout this paper, we then consider the generalized formulation of \eqref{eq.intro.ep} \begin{align}\label{eq.intro.rv} Z&:=\max_{1\leq j\leq N}|\mathbb{P}_n Z(j)|. \end{align} The corresponding results for the empirical processes \eqref{eq.intro.ep} can be found via $Z_i(j):= f_j(X_i)$ or $Z_i(j):=f_j(X_i)-\mathbb{E} f_j(X_i)$ for $\mathcal F=\{f_1,...,f_N\}$.\\ \itshape Concentration inequalities \normalfont are a standard tool to characterize the behavior of the process \eqref{eq.intro.rv} (and thus of \eqref{eq.intro.ep}). For $n\to\infty$, the process \eqref{eq.intro.rv} is typically governed by the central limit theorem. In contrast, concentration inequalities bound the deviation in both directions from the mean or related quantities for finite $n$. Similarly, \itshape deviation inequalities \normalfont bound the deviation in one direction only.
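The objects just introduced, and the flavour of the main result stated above, can be illustrated by a short simulation; the envelope distribution and the choices of $n$, $N$, $l$, $p$ and $\epsilon$ below are illustrative assumptions, and the expectation of $Z$ is itself estimated by Monte Carlo.
\begin{verbatim}
# Monte Carlo illustration: centred heavy-tailed random vectors with envelope
# E_i, the supremum Z = max_j |P_n Z(j)|, and the moment bound of the main
# result.  All distributional choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n, N, l, p, eps = 200, 50, 2, 4.0, 1.0

def envelope(size):
    return rng.pareto(p + 1.0, size=size) + 1.0      # finite p-th moment

def draw_Z():
    E = envelope((n, 1))
    signs = rng.choice([-1.0, 1.0], size=(n, N))
    return np.max(np.abs((signs * E).mean(axis=0)))  # |Z_i(j)| <= E_i, mean 0

big   = envelope(200_000)
M     = np.mean(big**p) ** (1.0 / p)                 # envelope moment bound
sigma = np.sqrt(np.mean(big**2))                     # sigma_Y in this example

Z   = np.array([draw_Z() for _ in range(2_000)])
lhs = np.mean(np.maximum(Z - (1 + eps) * Z.mean(), 0.0) ** l)
rhs = ((64 / eps + 7 + eps) * l * M / n ** (1 - l / p)
       + 4 * np.sqrt(l) * sigma / np.sqrt(n)) ** l
print(f"empirical moment: {lhs:.2e}   bound: {rhs:.2e}")
\end{verbatim}
Even in this simple example the envelope term involving $M$ dominates the bound, which is the regime discussed in Section~\ref{sec.M1}.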
Concentration or deviation inequalities - in contrast to maximal inequalities, for example - provide bounds that depend only on $n$ and moment properties of an envelope and the single functions $f$ (and particularly not on $N$).\\ Let us finally express the moment properties of the envelope. First, we call $\mathcal{E}:=(\mathcal{E}_1,...,\mathcal{E}_n)^T :\mathcal{Z}_1\times ...\times\mathcal{Z}_n\to\mathbb{R}^n$ an \itshape envelope \normalfont if $|Z_i(j)|\leq \mathcal{E}_i$ for all $1\leq j\leq N$ and $1\leq i \leq n$. The basic assumption of this paper is then that there is a $p\in [1,\infty)$ and an $M>0$ such that \begin{equation} \mathbb{E}\mathcal{E}_i^p\leq M^p \end{equation} for all $1\leq i \leq n$. Typically, the envelope is much larger than the single random vectors. In these cases, we have $M\gg \sigma:=\sqrt{\max_{1\leq j \leq N}\frac{1}{n}\sum_{i=1}^n\mathbb{E} Z_i(j)^2}$. \section{Proofs} \label{sec.proofs} In this last section we give detailed proofs. \subsection{Proof of Theorem~\ref{lemma.ConIn2.FirstCor}} The key idea of our proofs is to introduce an appropriate truncation that depends on the envelope of the empirical process. This allows us to split the problem into two parts that can be treated separately: On the one hand, a part corresponding to a bounded empirical process that can be treated by convexity arguments and Massart's results on bounded random vectors \citep{Massart00}. And on the other hand, a part corresponding to an unbounded empirical process that can be treated by rather elementary means.\\ For ease of exposition we present some convenient notation for the truncation first. After deriving two simple auxiliary results, we then turn to the main task of this section: We prove Lemma~\ref{thm.ConIn2.wmass2}, a generalization of Theorem~\ref{lemma.ConIn2.FirstCor}. The main result of this paper, Theorem~\ref{lemma.ConIn2.FirstCor}, is then an easy consequence.\\ A basic tool used in this section is \itshape truncation\normalfont. Before turning to the proofs, we want to give some additional notation for this tool. First, we define the \itshape unbounded \normalfont and the \itshape bounded part of the random vectors \normalfont as \begin{alignat*}{2} \overline{Z}(j)&:=(\overline{Z}_1(j),...,\overline{Z}_n(j))^T:=(Z_1(j)1_{\{\mathcal{E}_1>K\}},...,Z_n(j)1_{\{\mathcal{E}_n>K\}})^T\\ \underline{Z}(j)&:=(\underline{Z}_1(j),...,\underline{Z}_n(j))^T:=(Z_1(j)1_{\{\mathcal{E}_1\leq K\}},...,Z_n(j)1_{\{\mathcal{E}_n\leq K\}})^T. \end{alignat*} Similarly, we define \begin{alignat*}{2} \overline{\mathcal E}&:=(\overline{\mathcal{E}}_1,...,\overline{\mathcal{E}}_n)^T:=(\mathcal{E}_1 1_{\{\mathcal{E}_1>K\}},...,\mathcal{E}_n 1_{\{\mathcal{E}_n>K\}})^T\\ \underline{\mathcal{E}}&:=(\underline{\mathcal{E}}_1,...,\underline{\mathcal{E}}_n)^T:=(\mathcal{E}_1 1_{\{\mathcal{E}_1\leq K\}},...,\mathcal{E}_n 1_{\{\mathcal{E}_n\leq K\}})^T. \end{alignat*} To prevent an overflow of indices, the \itshape truncation level \normalfont $K>0$ is not included explicitly in the notation. The truncation level is, however, given at the relevant places so that there should not be any confusion. Finally, we define the maxima of the truncated random variables as \begin{align*} \overline{Z}:=\max_{1\leq j\leq N}|\mathbb{P}_n \overline{Z}(j)|\text{~~and~~} \underline{Z}:=\max_{1\leq j\leq N}|\mathbb{P}_n \underline{Z}(j)|. \end{align*} Now we derive two simple auxiliary lemmas.
\begin{lemma} \label{prop.ConIn2.moment} Let $l\geq 1$, $W_i:\mathcal{Z}_i\to \mathbb{R}^+_0$ for $1\leq i \leq n$ and $\mathbb{E} W_i^l\leq 1$. Then, \begin{equation*} \mathbb{E}\left[\mathbb{P}_n W\right]^l\leq 1 \end{equation*} for the corresponding random vector $W$ on the product space. \end{lemma} \begin{proof} By the triangle inequality we have \begin{equation*} \left(\mathbb{E}\left[\mathbb{P}_n W\right]^l\right)^\frac{1}{l}= \frac{1}{n}\left(\mathbb{E}\left[\sum_{i=1}^n W_i\right]^l\right)^\frac{1}{l}\leq \frac{1}{n}\sum_{i=1}^n\left(\mathbb{E}\left[ W_i^l\right]\right)^\frac{1}{l}\leq 1. \end{equation*} \end{proof} \begin{lemma} \label{lemma.ConIn.Verg} Under the assumptions of Theorem~\ref{lemma.ConIn2.FirstCor} it holds for $K=n^{\frac{l}{p}}M$ that \begin{equation*} |\mathbb{E}[\underline{Z}-Z]|\leq \frac{M}{n^{l(1-\frac{1}{p})}} \end{equation*} and $\underline\sigma\leq \sigma$ for \begin{align*} {\underline{\sigma}}&:=\sqrt{\max_{1\leq j \leq N}\frac{1}{n}\sum_{i=1}^n\operatorname{Var}\underline{Z}_i(j)}. \end{align*} \end{lemma} \begin{proof} Since $||a|-|b||\leq |a-b|$ for all $a,b\in\mathbb{R}$, it holds that \begin{align*} \left|\mathbb{E}[\underline{Z}-Z]\right|&=\left|\mathbb{E}\left[\max_{1\leq j\leq N}|\mathbb{P}_n \underline{Z}(j)|-\max_{1\leq j\leq N}|\mathbb{P}_n Z(j)|\right]\right|\\ &\leq\mathbb{E}\left[\max_{1\leq j\leq N}||\mathbb{P}_n \underline{Z}(j)|-|\mathbb{P}_n Z(j)||\right]\\ &\leq\mathbb{E}\left[\max_{1\leq j\leq N}|\mathbb{P}_n( \underline{Z}(j)-Z(j))|\right]\\ &=\mathbb{E}\left[\max_{1\leq j\leq N}|\mathbb{P}_n\overline{Z}|\right]\\ &\leq\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^n\overline{\mathcal{E}}_i\right] \end{align*} With H\"older's and Chebyshev's Inequality we obtain for $1\leq i\leq n$ \begin{align*} \mathbb{E} \overline{\mathcal{E}}_i^l =&\mathbb{E}\mathcal{E}_i^l 1_{\{\mathcal{E}_i>K\}}\\ \leq&(\mathbb{E}\mathcal{E}_i^p)^\frac{l}{p} (\mathbb{E} 1_{\{\mathcal{E}_i>K\}})^{1-\frac{l}{p}}\\ \leq&(\mathbb{E}\mathcal{E}_i^p)^\frac{l}{p} \left(\frac{\mathbb{E}\mathcal{E}_i^p}{K^p}\right)^{1-\frac{l}{p}}\\ \leq&\frac{M^p}{K^{p-l}}. \end{align*} These two results yield then the first assertion. The second assertion is straightforward. \end{proof} We can now turn to the harder part of this section. The following lemma is a generalization of Theorem~\ref{lemma.ConIn2.FirstCor}. The derivation of Theorem~\ref{lemma.ConIn2.FirstCor} from this and Lemma~\ref{lemma.ConIn.Verg} is then a simple task. \begin{lemma} \label{thm.ConIn2.wmass2} Let $1\leq l\leq p$ and $\epsilon,K\in\mathbb{R}^+$. Then, \begin{align*} \mathbb{E} \left[Z-(1+\epsilon)\mathbb{E}\underline{Z}\right]_+^l\leq \left( \left(\frac{64}{\epsilon}+5\right)\frac{lK}{n}+\frac{4\sqrt l\underline{\sigma}}{\sqrt n}+\frac{M^{\frac{p}{l}}}{K^{\frac{p}{l}-1}}\right)^l \end{align*} and \begin{align*} \mathbb{E} \left[(1-\epsilon)\mathbb{E}\underline{Z}-Z\right]_+^l\leq\left( \left(\frac{86.4}{\epsilon}+5\right)\frac{lK}{n}+\frac{4.7\sqrt l\underline{\sigma}}{\sqrt n}+\frac{M^{\frac{p}{l}}}{K^{\frac{p}{l}-1}}\right)^l. \end{align*} \end{lemma} \begin{proof} The key idea of the proof is to separate the bounded from the unbounded quantities. On the one hand, we develop bounds for $\mathbb{E}\overline{Z}_+^l$ via elementary means. On the other hand, we develop bounds for $\mathbb{E}\left[\underline{Z}-(1+\epsilon)\mathbb{E}\underline{Z}\right]_+^l$ and $\mathbb{E}\left[(1-\epsilon)\mathbb{E}\underline{Z}-\underline{Z}\right]_+^l$ via convexity arguments and \citep[Theorem~4]{Massart00}. 
They can then be combined to deduce the result.\\ We start with the proof of the first inequality. First, we split $Z$ in a bounded and an unbounded part \begin{align*}\nonumber Z&=\max_{1\leq j \leq N}|\mathbb{P}_n Z(j)|\\\nonumber &=\max_{1\leq j \leq N}|\mathbb{P}_n( \underline{Z}(j)+\overline{Z}(j))|\\\nonumber &\leq\max_{1\leq j \leq N}(|\mathbb{P}_n\underline{Z}(j)|+|\mathbb{P}_n\overline{Z}(j)|)\\ &\leq\underline{Z}+\overline{Z} \end{align*} and deduce with the triangle inequality that \begin{align}\nonumber &\mathbb{E} \left[Z-(1+\epsilon)\mathbb{E}\underline{Z}\right]_+^l\\\nonumber \leq&\mathbb{E} \left[\underline{Z}+\overline{Z}-(1+\epsilon)\mathbb{E}\underline{Z}\right]_+^l\\\nonumber \leq&\mathbb{E} \left[(\underline{Z}-(1+\epsilon)\mathbb{E}\underline{Z})_++\overline{Z}_+\right]^l\\ \leq &\left((\mathbb{E}\left[\underline{Z}-(1+\epsilon)\mathbb{E} \underline{Z}\right]_+^l)^{\frac{1}{l}}+(\mathbb{E}\overline{Z}_+^{l})^\frac{1}{l}\right)^l. \label{eq.ConIn2.splitting2} \end{align} Now, we turn to the development of bounds for $\mathbb{E}\overline{Z}_+^l$. With the help of H\"older's and Chebyshev's Inequalities we obtain as above for $1\leq i\leq n$ \begin{align*} \mathbb{E} \overline{\mathcal{E}}_i^l \leq&\frac{M^p}{K^{p-l}}. \end{align*} That is, \begin{equation*} \mathbb{E}\left[\frac{\overline{\mathcal{E}}_i}{(\frac{M^p}{K^{p-l}})^\frac{1}{l}}\right]^l\leq 1. \end{equation*} We may consequently apply Lemma~\ref{prop.ConIn2.moment} and obtain \begin{equation} \label{eq.ConIn2.Chebby} \mathbb{E} \overline{Z}_+^l=\mathbb{E} \overline{Z}^l\leq \mathbb{E} [\mathbb{P}_n\overline{\mathcal{E}}]^l\leq\frac{M^p}{K^{p-l}}. \end{equation} As a next step, we derive bounds for $\mathbb{E}\left[\underline{Z}-(1+\epsilon)\mathbb{E}\underline{Z}\right]_+^l$. To begin, we set $ J:=(\frac{32}{\epsilon}+2.5)K$, $\sigma:=\underline\sigma$ and define the function $g_l:\mathbb{R}^+\to (1,\infty)$ as \begin{equation*} g_l (x):=\exp\left(\frac{\sqrt{n}(\sqrt{2\sigma^2+Jx^\frac{1}{l}}-\sqrt{2}\sigma)}{\sqrt{2}J}\right)^2. \end{equation*} Note that $g_l$ is strictly increasing and smooth. Moreover, the first and second derivatives are \begin{align*} g_l'(x)=&\frac{2n(\sqrt{2\sigma^2+Jx^\frac{1}{l}}-\sqrt{2}\sigma)}{4J^2\sqrt{2\sigma^2+Jx^\frac{1}{l}}}\frac{J}{l}x^{\frac{1}{l}-1}g_l (x)\\ =& \frac{n}{2l J}\left(1-\frac{\sqrt{2}\sigma}{\sqrt{2\sigma^2+Jx^\frac{1}{l}}}\right)x^{\frac{1}{l}-1}g_l (x) \end{align*} and \begin{align*} g''_l(x)&= \frac{n^2}{4l^2 J^2}\left(1-\frac{\sqrt{2}\sigma}{\sqrt{2\sigma^2+Jx^\frac{1}{l}}}\right)^2x^{\frac{2}{l}-2}g_l (x)\\ &~~~~~~~+ \frac{n}{2l J}\frac{\sqrt{2}\sigma}{2(2\sigma^2+Jx^\frac{1}{l})^\frac{3}{2}}\frac{J}{l}x^{\frac{2}{l}-2}g_l (x) \\ &~~~~~~~+ \frac{n}{2l J}\left(1-\frac{\sqrt{2}\sigma}{\sqrt{2\sigma^2+Jx^\frac{1}{l}}}\right)\left(\frac{1}{l}-1\right)x^{\frac{1}{l}-2}g_l (x)\\ &\geq \frac{nx^{\frac{2}{l}-2}g_l (x)}{4l^2 J}\mbox{\fontsize{10}{10}\selectfont $\left(1-\frac{\sqrt{2}\sigma}{\sqrt{2\sigma^2+Jx^\frac{1}{l}}}\right)\left(\left(1-\frac{\sqrt{2}\sigma}{\sqrt{2\sigma^2+Jx^\frac{1}{l}}}\right){\displaystyle \frac{n}{J}}+2(1-l)x^{-\frac{1}{l}}\right)$}. \end{align*} We now use the lower bound for the second derivative to find an interval, on which the function $g_l$ is convex. 
To this end, we observe that for $\sigma>0$ \begin{equation} \label{eq.ConIn2.ineqx} \left(1-\frac{\sqrt{2}\sigma}{\sqrt{2\sigma^2+Jx^\frac{1}{l}}}\right)\frac{n}{J}+2(1-l)x^{-\frac{1}{l}}\geq 0 \end{equation} is equivalent to \begin{equation*} 1-\frac{1}{\sqrt{1+\frac{J}{2\sigma^2}x^\frac{1}{l}}}\geq \frac{2(l-1)J}{n}x^{-\frac{1}{l}}. \end{equation*} This can be rewritten with the definition $u:=\frac{J}{2\sigma^2}x^\frac{1}{l}>0$ as \begin{equation*} 1-\frac{1}{\sqrt{1+u}}\geq \frac{(l-1)J^2}{n\sigma^2u} \end{equation*} and with the definition $C:=\frac{(l -1)J^2}{n\sigma^2}\geq 0$ as \begin{equation} \label{eq.ConIn2.coninu} 1-\frac{1}{\sqrt{1+u}}\geq \frac{C}{u}. \end{equation} We assume now, that $u\geq C$. Then, \begin{alignat*}{2} &~~~~~~ &1-\frac{1}{\sqrt{1+u}}&\geq \frac{C}{u}\\ &\Leftrightarrow &\sqrt{1+u}\left(1-\frac{C}{u}\right)&\geq 1\\ &\Leftrightarrow &(1+u)(u^2-2Cu+C^2)&\geq u^2\\ &\Leftrightarrow &u^3-2Cu^2+(C^2-2C)u+C^2&\geq 0. \end{alignat*} Considering the equality \begin{equation*} u^2-2Cu+C^2-2C= 0 \end{equation*} with roots $\{C\pm \sqrt{2C}\}$, we deduce that \begin{equation*} u^3-2Cu^2+(C^2-2C)u+C^2\geq 0 \end{equation*} for all $u\geq C+\sqrt{2C}$. Consequently, for $u\geq C+\sqrt{2C}$, Inequality \eqref{eq.ConIn2.coninu} holds true. Hence, if we postulate \begin{equation} \label{ConIn2Reqx} \frac{J}{2\sigma^2}x^\frac{1}{l}\geq \frac{(l -1)J^2}{n\sigma^2}+\sqrt{\frac{2(l -1)J^2}{n\sigma^2}} \end{equation} Equation \eqref{eq.ConIn2.ineqx} holds true. The postulate \eqref{ConIn2Reqx} is equivalent to \begin{equation*} x^\frac{1}{l}\geq \frac{2(l -1)J}{n}+\frac{\sqrt{8(l -1)}\sigma}{\sqrt n} \end{equation*} and to \begin{equation*} x\geq \left(\frac{2(l -1)J}{n}+\frac{\sqrt{8(l -1)}\sigma}{\sqrt n}\right)^l=:I. \end{equation*} Additionally, note that with this condition on $x$, Equation \eqref{eq.ConIn2.ineqx} is also true for $\sigma=0$. So we finally derived, since $\frac{nx^{\frac{2}{l}-2}g_l (x)}{4l^2 J}(1-\frac{\sqrt{2}\sigma}{\sqrt{2\sigma^2+Jx^\frac{1}{l}}})$ is positive, that the function $g_l$ is convex on the domain $(I,\infty)$.\\ This convexity property makes it possible to apply a result of \citep{Massart00}. To show this, we introduce \begin{equation*} X:=(\underline{Z}-(1+\epsilon)\mathbb{E}\underline{Z})_+^l \end{equation*} and find with Jensen's Inequality and the fact that $g_l$ is increasing \begin{equation*} g_l(\mathbb{E} X)\leq g_l(\mathbb{E} [X\vee I])\leq \mathbb{E} g_l(X\vee I). \end{equation*} We used here the notation $a\vee b:=\max\{a,b\}$ for $a,b\in\mathbb{R}$. Massart's Inequality \citep[Theorem~4, (13)]{Massart00} for bounded random vectors translates then to our setting as \begin{equation*} \mathbb{P}(n\underline{Z}\geq (1+\epsilon)n\mathbb{E}\underline{Z}+\sigma\sqrt{8nx}+\left(\frac{32}{\epsilon}+2.5\right)Kx)\leq e^{-x}, \end{equation*} where $\epsilon, x >0$. This is equivalent to \begin{equation} \label{eq.ConIn2.massart} \mathbb{P}(\underline{Z}\geq (1+\epsilon)\mathbb{E}\underline{Z}+\sigma\sqrt{\frac{8x}{n}}+\frac{J}{n}x)\leq e^{-x}. \end{equation} We then deduce (cf. 
\cite{vdGeer11b}) \begin{align*} &\mathbb{E}\exp\left(\frac{\sqrt{n}\left(\sqrt{2\sigma^2+J(X\vee I)^{\frac{1}{l}}}-\sqrt{2}\sigma\right)}{\sqrt{2}J}\right)^2\\ =&\int_0^\infty\mathbb{P}\left(\exp\left(\frac{\sqrt{n}\left(\sqrt{2\sigma^2+J(X\vee I)^{\frac{1}{l}}}-\sqrt{2}\sigma\right)}{\sqrt{2}J}\right)^2> t\right)dt\\ \leq &1+\int_1^\infty\mathbb{P}\left(\exp\left(\frac{\sqrt{n}\left(\sqrt{2\sigma^2+J(X\vee I)^{\frac{1}{l}}}-\sqrt{2}\sigma\right)}{\sqrt{2}J}\right)^2> t\right)dt\\ =&1+\int_1^\infty\mathbb{P}\left(\sqrt{2\sigma^2+J(X\vee I)^{\frac{1}{l}}}> \sqrt{2}\sigma+\sqrt{\frac{2J^2}{n}\log t}\right)dt\\ =&1+\int_1^\infty\mathbb{P}\left(J(X\vee I)^{\frac{1}{l}}> 4\sigma\sqrt{\frac{J^2}{n}\log t}+\frac{2J^2}{n}\log t\right)dt \end{align*} and note that \begin{align*} JI^{\frac{1}{l}}&< 4\sigma\sqrt{\frac{J^2}{n}\log t}+\frac{2J^2}{n}\log t\\ \Leftrightarrow~~~ \frac{2(l -1)J}{n}+\frac{\sqrt{8(l -1)}\sigma}{\sqrt n}&<4\sigma\sqrt{\frac{\log t}{n}}+ \frac{2J}{n}\log t. \end{align*} This is fulfilled if $t\geq e^{l-1}$. Hence, with Massart's Inequality \eqref{eq.ConIn2.massart}, \begin{align*} &\mathbb{E}\exp\left(\frac{\sqrt{n}\left(\sqrt{2\sigma^2+J(X\vee I)^{\frac{1}{l}}}-\sqrt{2}\sigma\right)}{\sqrt{2}J}\right)^2\\ \leq &1+e^{l-1}-1+\int_{e^{l-1}}^\infty\mathbb{P}\left(X^{\frac{1}{l}}> 4\sigma\sqrt{\frac{\log t}{n}}+\frac{2J}{n}\log t\right)dt\\ =&e^{l-1}+\int_{e^{l-1}}^\infty\mathbb{P}\left(\underline{Z}>(1+\epsilon)\mathbb{E}\underline{Z}+4\sigma\sqrt{\frac{\log t}{n}}+ \frac{2J}{n}\log t\right)dt\\ \leq& e^{l-1}+\int_{e^{l-1}}^\infty\exp(-\log t^2)dt < e^{l}. \end{align*} In summary, we have \begin{equation*} g_l(\mathbb{E} X)< e^l. \end{equation*} This is now inverted, observing that for $y \in (1,\infty)$ such that \begin{alignat*}{2} &~~~~~~ &y&=\exp\left( \frac{\sqrt{n}(\sqrt{2\sigma^2+Jx^\frac{1}{l}}-\sqrt{2}\sigma)}{\sqrt{2}J}\right)^2\\ &\Rightarrow&\sqrt{\log{y}}&=\frac{\sqrt{n}(\sqrt{2\sigma^2+Jx^\frac{1}{l}}-\sqrt{2}\sigma)}{\sqrt{2}J}\\ &\Rightarrow&\frac{\sqrt 2 J}{\sqrt n}\sqrt{\log{y}}+\sqrt{2}\sigma&=\sqrt{2\sigma^2+Jx^\frac{1}{l}}\\ &\Rightarrow&\frac{2J^2}{n}\log{y}+\frac{4\sigma J}{\sqrt n}\sqrt{\log y}&=Jx^\frac{1}{l}\\ &\Rightarrow&\left(\frac{2J}{n}\log{y}+\frac{4\sigma}{\sqrt n}\sqrt{\log y}\right)^l&=x. \end{alignat*} This is now applied with $x=\mathbb{E} X$ to obtain \begin{equation} \mathbb{E} X\leq \left(\frac{2lJ}{n}+\frac{4\sqrt{l}\sigma}{\sqrt n}\right)^l. \label{eq.ConIn2.bounded} \end{equation} We are now ready to collect the terms. Inequalities \eqref{eq.ConIn2.splitting2}, \eqref{eq.ConIn2.Chebby} and \eqref{eq.ConIn2.bounded} give \begin{align*} \mathbb{E} \left[Z-(1+\epsilon)\mathbb{E}\underline{Z}\right]_+^l &\leq\left(\frac{2lJ}{n}+\frac{4\sqrt{l}\sigma}{\sqrt n} + \frac{M^{\frac{p}{l}}}{K^{\frac{p}{l}-1}}\right)^l. \end{align*} Hence, since $ J=(\frac{32}{\epsilon}+2.5)K$, \begin{align*} \mathbb{E} \left[Z-(1+\epsilon)\mathbb{E}\underline{Z}\right]_+^l &\leq \left( \left(\frac{64}{\epsilon}+5\right)\frac{lK}{n}+\frac{4\sqrt{l}\sigma}{\sqrt n} + \frac{M^{\frac{p}{l}}}{K^{\frac{p}{l}-1}}\right)^l. \end{align*} This finishes the proof of the first part of the lemma. For the second part, we note that \begin{align*} \underline{Z}&=\max_{1\leq j \leq N}|\mathbb{P}_n \underline{Z}(j)|\\ &=\max_{1\leq j \leq N}|\mathbb{P}_n(Z(j)-\overline{Z}(j))|\\ &\leq\max_{1\leq j \leq N}(|\mathbb{P}_n Z(j)|+|\mathbb{P}_n\overline{Z}(j)|)\\ &\leq Z+\overline{Z} \end{align*} and therefore $Z\geq \underline{Z} - \overline{Z}$.
Consequently, \begin{align*} &\mathbb{E} \left[(1-\epsilon)\mathbb{E}\underline{Z}-Z\right]_+^l\\ \leq&\mathbb{E} \left[(1-\epsilon)\mathbb{E}\underline{Z}-\underline{Z}+\overline{Z}\right]_+^l\\ \leq&\mathbb{E} \left[((1-\epsilon)\mathbb{E}\underline{Z}-\underline{Z})_++\overline{Z}_+\right]^l\\ \leq&\left((\mathbb{E}[(1-\epsilon)\mathbb{E} \underline{Z}-\underline{Z}]_+^l)^\frac{1}{l}+(\mathbb{E}\overline{Z}_+^{l})^\frac{1}{l}\right)^l. \end{align*} One can then proceed as in the first part and use \citep[Theorem~4, (14)]{Massart00}. \end{proof} \begin{proof}[Proof of Theorem \ref{lemma.ConIn2.FirstCor}] Set $K=n^\frac{l}{p}M$ in Lemma~\ref{thm.ConIn2.wmass2} and use Lemma~\ref{lemma.ConIn.Verg} to replace the truncated quantities by the original ones. \end{proof} \begin{comment} \begin{proof}[Proof of Lemma \ref{lemma.ConIn.Bsp}] We first observe that \begin{align*} \mathbb{E} \left[Z-2\mathbb{E}\underline{Z}\right]_+=&\mathbb{E} \left[Z-2\mathbb{E} Z -2\mathbb{E}[\underline{Z}-Z]\right]_+\\ \geq& \mathbb{E} \left[Z-2\mathbb{E} Z\right]_+ -2(\mathbb{E}\left[\underline{Z}-Z\right])_+\\ \geq& \mathbb{E} \left[Z-2\mathbb{E} Z\right]_+ -\frac{2M}{n^{1-\frac{1}{p}}} \end{align*} and similarly \begin{align*} \mathbb{E} \left[\frac{1}{2}\mathbb{E}\underline{Z}-Z\right]_+=&\mathbb{E} \left[\frac{1}{2}\mathbb{E}[\underline{Z}-Z]+\frac{1}{2}\mathbb{E}{Z}-Z\right]_+\\ \geq &\mathbb{E} \left[\frac{1}{2}\mathbb{E}{Z}-Z\right]_+-\frac{M}{2n^{1-\frac{1}{p}}} \end{align*} with the help of Lemma~\ref{lemma.ConIn.Verg}. This is then plugged in Theorem~\ref{lemma.ConIn2.FirstCor} to obtain the result. \end{proof} \end{comment} \subsection{Proof of Theorem~\ref{theorem.Massart2}} Here, we prove Theorem~\ref{theorem.Massart2} with the help of symmetrization and a result of Massart \citep{Massart00}. The proof here is considerably shorter than the proof of Theorem~\ref{lemma.ConIn2.FirstCor}. This is because we do not need tedious convexity arguments. \begin{proof}[Proof of Theorem \ref{theorem.Massart2}] The trick is to use symmetrization and desymmetrization arguments so that we are able to use \citep[Theorem 9]{Massart00} in a favorable way.\\ Beforehand, we define $Z_\epsilon:=\max_{1\leq j\leq N}|\frac{1}{n}\sum_{i=1}^n \epsilon_i Z_i(j)|$ with independent Rademacher random variables $\epsilon_i$. Then, we symmetrize according to \citep[Lemma 2.3.6]{vdVaart00} with the function $\Phi(x)=(x-4\mathbb{E} Z)_+^l$ to obtain \begin{align*} & \mathbb{E}\left[Z-4\mathbb{E} Z\right]_+^l\leq \mathbb{E}\left[2Z_\epsilon-4\mathbb{E} Z\right]_+^l \end{align*} and we desymmetrize with the function $\Phi(x)=x$ to obtain \begin{align*} & \mathbb{E}\left[2Z_\epsilon-4\mathbb{E} Z\right]_+^l\leq \mathbb{E}\left[2Z_\epsilon-\mathbb{E} 2Z_\epsilon\right]_+^l. \end{align*} Hence, \begin{align} \label{eq.Massartzwo1} \mathbb{E}\left[Z-4\mathbb{E} Z\right]_+^l\leq 2^l\mathbb{E}\,\mathbb{E}_\epsilon\left[Z_\epsilon-\mathbb{E} Z_\epsilon\right]_+^l, \end{align} where we write here and in the following $\mathbb{E}_\epsilon$ for the expectation and $\mathbb{P}_\epsilon$ for the probability w.r.t. the Rademacher random variables.
Next, we observe that \begin{align*} &\mathbb{E}_\epsilon\left[Z_\epsilon-\mathbb{E} Z_\epsilon\right]_+^l\\ =&\int_0^\infty\mathbb{P}_\epsilon\left(\left(Z_\epsilon-\mathbb{E} Z_\epsilon\right)_+^l>t\right)dt\\ =&\int_0^\infty\mathbb{P}_\epsilon\left(Z_\epsilon>\mathbb{E} Z_\epsilon + t^\frac{1}{l}\right)dt\\ \leq&\int_0^\infty\mathbb{P}_\epsilon\left(\max_{1\leq j\leq N}\frac{1}{n}\sum_{i=1}^n \epsilon_iZ_i(j)>\mathbb{E} \max_{1\leq j\leq N}\frac{1}{n}\sum_{i=1}^n \epsilon_iZ_i(j) + t^\frac{1}{l}\right)dt\\~~~~~~~~~~&+\int_0^\infty\mathbb{P}_\epsilon\left(\max_{1\leq j\leq N}\text{-}\frac{1}{n}\sum_{i=1}^n \epsilon_iZ_i(j)>\mathbb{E} \max_{1\leq j\leq N}\frac{1}{n}\sum_{i=1}^n \epsilon_iZ_i(j) + t^\frac{1}{l}\right)dt\\ \leq&2\int_0^\infty\mathbb{P}_\epsilon\left(\max_{1\leq j\leq N}\frac{1}{n}\sum_{i=1}^n \epsilon_iZ_i(j)>\mathbb{E} \max_{1\leq j\leq N}\frac{1}{n}\sum_{i=1}^n \epsilon_iZ_i(j) + t^\frac{1}{l}\right)dt. \end{align*} In a final step, we apply Massart's Inequality \citep[Theorem 9]{Massart00} with \begin{equation*} L^2=\max_{1\leq j\leq N}\sum_{i=1}^n\left(2|Z_i(j)|\right)^2\leq 4n\mathbb{P}_n\mathcal{E}^2, \end{equation*} where $\mathbb{P}_n\mathcal{E}^2:=\frac{1}{n}\sum_{i=1}^n\mathcal{E}_i^2$. This yields \begin{align*} &2\int_0^\infty\mathbb{P}_\epsilon\left(\max_{1\leq j\leq N}\frac{1}{n}\sum_{i=1}^n \epsilon_iZ_i(j)>\mathbb{E} \max_{1\leq j\leq N}\frac{1}{n}\sum_{i=1}^n \epsilon_iZ_i(j) + t^\frac{1}{l}\right)dt\\ \leq& 2\int_0^\infty\exp\left(-\frac{nt^\frac{2}{l}}{8\mathbb{P}_n\mathcal{E}^2}\right)dt\\ =&2\left(\frac{8}{n}\right)^\frac{l}{2}(\mathbb{P}_n\mathcal{E}^2)^\frac{l}{2}\int_0^\infty\exp\left(-t^\frac{2}{l}\right)dt\\ =&2\left(\frac{8}{n}\right)^\frac{l}{2}(\mathbb{P}_n\mathcal{E}^2)^\frac{l}{2}\frac{l\Gamma\left(\frac{l}{2}\right)}{2}. \end{align*} With Inequality~\eqref{eq.Massartzwo1} this gives \begin{align*} \mathbb{E}\left[Z-4\mathbb{E} Z\right]_+^l\leq 2^ll\left(\frac{8}{n}\right)^\frac{l}{2}\mathbb{E}\left[\mathbb{P}_n\mathcal{E}^2\right]^\frac{l}{2}\Gamma\left(\frac{l}{2}\right). \end{align*} Finally, because of Lemma~\ref{prop.ConIn2.moment}, it holds that \begin{equation*} \mathbb{E}\left[\mathbb{P}_n\mathcal{E}^2\right]^\frac{l}{2}\leq \mathbb{E}\left[\mathbb{P}_n\mathcal{E}\right]^{l}\leq M^l \end{equation*} and hence \begin{align*} \mathbb{E}\left[Z-4\mathbb{E} Z\right]_+^l\leq l\Gamma\left(\frac{l}{2}\right)\left(\frac{32}{n}\right)^\frac{l}{2}M^l. \end{align*} \end{proof} \subsection{Proof of Theorem~\ref{sec4.Corrollary1}} We eventually derive Theorem~\ref{sec4.Corrollary1} using truncation. After some auxiliary results, we derive Lemma~\ref{lemma.helpwoe}. This Lemma settles the bounded part of the problem. It is then used to proof Lemma~\ref{theorem.mainwoe} which is a slight generalization of the main theorem. Finally, we derive Theorem~\ref{sec4.Corrollary1} as a simple corollary.\\ We begin with two auxiliary lemmas: \begin{lemma} \label{lemma.helpfirst} Let $W$ be a centered random variable with values in $[-A,A]$, $A\geq 0$, such that $\mathbb{E} W^2\leq 1$. Then, \begin{equation*} \mathbb{E} e^\frac{W}{A}\leq 1 + \frac{1}{A^2}. \end{equation*} \end{lemma} \begin{pro} We follow well known ideas (see e.g. \cite[Chapter~14]{Buhlmann11}): \begin{align*} \mathbb{E} e^\frac{W}{A}&= 1 + \mathbb{E}\left [ e^\frac{W}{A}-1-\frac{W}{A}\right]\\ &\leq 1 + \mathbb{E}\left[ e^\frac{|W|}{A}-1-\frac{|W|}{A}\right]\\ &=1+\sum_{m=2}^\infty\frac{\mathbb{E} |W|^m}{m!A^m}\\ &\leq 1+\sum_{m=2}^\infty\frac{A^{m-2}}{m!A^m}\\ &\leq 1 +\frac{1}{A^2}. 
\end{align*} \end{pro} \begin{lemma} \label{lemma.ConIn4.Combi} Let $C_m^n:=|\{ (i_1,...,i_m)^T\in\{ 1,...,n \}^m :\forall j\in \{ 1,...,m \} \exists j'\in\{ 1,...,m \},j'\neq j,i_j=i_{j'}\}|$ for $m,n\in\mathbb{N}$. Then, \begin{equation*} C_m^n\leq m!\left(\frac{n}{2}\right)^{\lfloor \frac{m}{2}\rfloor}. \end{equation*} \end{lemma} \begin{proof} The proof of this lemma is a simple counting exercise. We start with the case $m\leq 2$. One finds easily that $C_1^n=0$ and $C_2^n=n$, which completes the case $m\leq 2$. Next, we consider the case $m>2$. To this end, we note that $C_m^1=1$, $C_3^2=2$ and $C_m^2\leq 2^m\leq m!$ for $m>3$. This completes the cases $n\leq 2$. Now, we do an induction in $n$. So we let $n\geq 2$ and find \begin{align*} C_m^{n+1}=&C_m^n +\frac{m(m-1)}{2!}C_{m-2}^n\\&+\frac{m(m-1)(m-2)}{3!}C_{m-3}^n +...+\frac{m(m-1)...3}{(m-2)!}C_{2}^n+1. \end{align*} By induction, this yields \begin{align*} C_m^{n+1}\leq&m!\Big[\left(\frac{n}{2}\right)^{\lfloor \frac{m}{2}\rfloor}+\frac{1}{2!}\left(\frac{n}{2}\right)^{\lfloor \frac{m-2}{2}\rfloor}\\ &+\frac{1}{3!}\left(\frac{n}{2}\right)^{\lfloor \frac{m-3}{2}\rfloor}+... +\frac{1}{(m-2)!}\left(\frac{n}{2}\right)^{\lfloor \frac{2}{2}\rfloor}\Big] +1. \end{align*} We now assume that $m$ is even. So, \begin{align*} C_m^{n+1}\leq&m!\left[ \left(\frac{n}{2}\right)^\frac{m}{2}+\frac{1}{2!}\left(\frac{n}{2}\right)^{\frac{m}{2}-1}+\frac{1}{3!}\left(\frac{n}{2}\right)^{\frac{m}{2}-2}+...+\frac{1}{(m-2)!}\left(\frac{n}{2}\right)\right]+1\\ =&m!\left[\left(\frac{n}{2}\right)^\frac{m}{2}+\frac{1}{2!}\left(\frac{n}{2}\right)^{\frac{m}{2}-1}+\sum_{j=2}^{\frac{m}{2}-1}\left(\frac{1}{(2j-1)!}+\frac{1}{\left(2j\right)!}\right)\left(\frac{n}{2}\right)^{\frac{m}{2}-j}\right]+1\\ \leq&m!\left[\left(\frac{n}{2}\right)^\frac{m}{2}+\frac{m}{4}\left(\frac{n}{2}\right)^{\frac{m}{2}-1}+\sum_{j=2}^{\frac{m}{2}-1}{\frac{m}{2}\choose j}\left(\frac{1}{2}\right)^j\left(\frac{n}{2}\right)^{\frac{m}{2}-j}+\left(\frac{1}{2}\right)^\frac{m}{2}\right]\\ =&m!\sum_{j=0}^{\frac{m}{2}}{\frac{m}{2}\choose j}\left(\frac{1}{2}\right)^j\left(\frac{n}{2}\right)^{\frac{m}{2}-j}= m!\left( \frac{n+1}{2}\right)^{\lfloor \frac{m}{2}\rfloor}. \end{align*} This completes the proof for $m>2$ with $m$ even. We note finally, that for odd $m>2$ we have $C_m^n<mC_{m-1}^n\leq m!\left(\frac{n}{2}\right)^{\lfloor \frac{m}{2}\rfloor}. $ \end{proof} We now settle the bounded part of the problem. Bounded random variables are in particular subexponential, so one could apply results from \citep{Viens07} for example. But for our purposes, a direct treament as in the following seems to be more suitable. \begin{lemma} \label{lemma.helpwoe} Let $l\in\mathbb{N}$, $p\geq 2$ and $A\geq 2$. Then, for the truncation level $K=\frac{ A}{2}+\sqrt{\frac{A^2}{4}-1}$, \begin{equation*} \mathbb{E} \left[\max_{1\leq j \leq N}(\mathbb{P}_n -P)\underline{Z}(j)-AM\frac{\log(N)}{n}\right]_+^l\leq \left(\frac{M}{A}+\frac{lAM}{n}\right)^{l}. \end{equation*} \end{lemma} \begin{pro} We assume w.l.o.g. $M=1$ and observe that \begin{equation*} \mathbb{E} \left[\underline{Z}_i(j)-\mathbb{E}\underline{Z}_i(j)\right]^2\leq \mathbb{E}\underline{Z}_i(j)^2\leq 1. \end{equation*} Moreover, because of H\"older's and Chebyshev's Inequalities and $K\geq 1$ it holds that \begin{equation*} |\underline{Z}_i(j)-\mathbb{E}\underline{Z}_i(j)|\leq|\underline{Z}_i(j)|+|\mathbb{E}\underline{Z}_i(j)|\leq K+\frac{1}{K}=A. 
\end{equation*} These observations, the independence of the random variables and Lemma~\ref{lemma.helpfirst} yield then \begin{align*} &\mathbb{E} e^\frac{n(\mathbb{P}_n-P) \underline{Z}(j)}{A}\\ =&\mathbb{E} e^\frac{\sum_{i=1}^n(\underline{Z}_i(j)-\mathbb{E}\underline{Z}_i(j))}{A}\\ \leq&\left(1 +\frac{1}{A^2}\right)^n. \end{align*} Next, one checks easily, that the map $x\mapsto e^{x^\frac{1}{l}}$ is convex on the set $[(l-1)^{l},\infty)$. Hence, using Jensen's Inequality again, we obtain \begin{align*} &\mathbb{E} \left[\max_{1\leq j \leq N}(\mathbb{P}_n-P) \underline{Z}(j)-A\frac{\log(N)}{n}\right]_+^{l}\\ \leq& \frac{A^{l}}{n^{l}}\mathbb{E}\left[ \left(\max_{1\leq j \leq N}n(\mathbb{P}_n-P) \underline{Z}(j)/A-\log(N)\right)_+^{l}\vee (l-1)^{l}\right]\\ \leq& \frac{A^{l}}{n^{l}}\log^{l}\left(\mathbb{E} \exp\left(\left(\max_{1\leq j \leq N}n(\mathbb{P}_n - P) \underline{Z}(j)/A-\log(N)\right)_+\vee (l-1)\right)\right)\\ =& \frac{A^{l}}{n^{l}}\log^{l}\left(\mathbb{E} \exp\left(\left(\max_{1\leq j \leq N}n(\mathbb{P}_n - P) \underline{Z}(j)/A-\log(N)\right)\vee (l-1)\right)\right)\\ \leq& \frac{A^{l}}{n^{l}}\log^{l}\left(\max_{1\leq j \leq N}\mathbb{E} \exp(n(\mathbb{P}_n - P) \underline{Z}(j)/A)+ e^{l-1}\right)\\ \leq& \frac{A^{l}}{n^{l}}\log^l\left(\left(1+\frac{1}{A^2}\right)^n+e^{l-1}\right). \end{align*} We finally note that $a+b< eab$ for all $a,b\geq 1$ and find \begin{align*} &\mathbb{E} \left[\max_{1\leq j \leq N}(\mathbb{P}_n-P) \underline{Z}(j)-A\frac{\log(N)}{n}\right]_+^{l}\\ <& \frac{A^{l}}{n^{l}}\log^l\left(\left(1+\frac{1}{A^2}\right)^ne^{l}\right)\\ =& \frac{A^{l}}{n^{l}}\left(\log\left(1+\frac{1}{A^2}\right)^n+\log e^{l}\right)^l\\ \leq&\left(\frac{1}{A}+\frac{lA}{n}\right)^{l}. \end{align*} \end{pro} The results above can now be used to derive a generalization of the main problem. \begin{lemma} \label{theorem.mainwoe} Assume that the random variables $Z_i(j)$ are centered. Then, for $l\in\mathbb{N}$, $p\geq 2$ , $p\geq l$ and $A\geq 2$, \begin{equation*} \label{eq.mainwoe} \mathbb{E} \left[Z-AM\frac{\log(2N)}{n}\right]_+^l\leq \left(2\left(\frac{2}{A}\right)^{p-1}+(l!)^\frac{1}{l}\sqrt{\frac{2}{n}}+\frac{1}{A}+\frac{lA}{n}\right)^{l}M^l. \end{equation*} \end{lemma} \begin{proof} The idea is again to separate the bounded and the unbounded quantities. The part with the unbounded quantities is treated by elementary means and Lemma~\ref{lemma.ConIn4.Combi}. For the bounded part, we use Lemma~\ref{lemma.helpwoe}.\\ First, we assume w.l.o.g. that $M=1$ and set $K=\frac{ A}{2}+\sqrt{\frac{A^2}{4}-1}$. 
Then, we deduce with the triangle inequality that \begin{align}\nonumber &\mathbb{E} \left[\max_{1\leq j \leq N}\mathbb{P}_n Z(j)-A\frac{\log(N)}{ n}\right]_+^l\\\nonumber =&\mathbb{E} \left[\max_{1\leq j \leq N}(\mathbb{P}_n-P)Z(j)-A\frac{\log(N)}{n}\right]_+^l\\\nonumber \leq&\mathbb{E} \left[\max_{1\leq j \leq N}(\mathbb{P}_n-P)\overline{Z}(j)+\max_{1\leq j \leq N}(\mathbb{P}_n-P)\underline{Z}(j)-A\frac{\log(N)}{ n}\right]_+^l\\\nonumber \leq&\mathbb{E} \left[\left[\max_{1\leq j \leq N}(\mathbb{P}_n-P)\overline{Z}(j)\right]_++\left[\max_{1\leq j \leq N}(\mathbb{P}_n-P)\underline{Z}(j)-A\frac{\log(N)}{ n}\right]_+\right]^l\\ \leq&\left(\left(\mathbb{E} \mbox{\fontsize{9}{12}\selectfont $ \left[{\displaystyle \max_{1\leq j \leq N}}(\mathbb{P}_n-P)\overline{Z}(j)\right]_+^l$}\right)^{\frac{1}{l}}+\left(\mathbb{E}\mbox{\fontsize{9}{12}\selectfont $\left[{\displaystyle \max_{1\leq j \leq N}}(\mathbb{P}_n-P)\underline{Z}(j)-A\frac{\log(N)}{ n}\right]_+^{l}$}\right)^\frac{1}{l}\right)^l.\label{eq.ConIn4.splitbeg} \end{align} So, we are able to treat the unbounded and the bounded quantities separately. We begin with the unbounded quantities. We first note that \begin{align*} \left[(\mathbb{P}_n-P)\overline{Z}(j)\right]_+^l \leq\left((\mathbb{P}_n+P)\overline{\mathcal{E}}\right)^l= \left((\mathbb{P}_n-P)\overline{\mathcal{E}}+2P\overline{\mathcal{E}}\right)^l. \end{align*} Hence, \begin{equation}\label{eq.ConIn4.Splitun} \mathbb{E} \left[\max_{1\leq j \leq p}(\mathbb{P}_n-P)\overline{Z}(j)\right]_+^l \leq \left(2P\overline{\mathcal{E}}+(\mathbb{E} \left[(\mathbb{P}_n-P)\overline{\mathcal{E}}\right]^l)^\frac{1}{l}\right)^{l}. \end{equation} H\"older's and Chebyshev's Inequalities are then used to find \begin{align} \label{eq.ConIn4.first} &P\overline{\mathcal{E}}=\frac{1}{n}\sum_{i=1}^n\mathbb{E}\overline{\mathcal{E}}_i \leq \frac{1}{n}\sum_{i=1}^n({\mathbb{E}\mathcal{E}_i^p})^\frac{1}{p}({\mathbb{E} 1_{\{\mathcal{E}_i>K\}}})^{1-\frac{1}{p}} \leq \frac{1}{K^{p-1}}. \end{align} To bound the quantity left, we note that for all $i$ and $p\geq q\in\mathbb{N}$ \begin{align*} \mathbb{E}\left[\overline{\mathcal{E}}_i-\mathbb{E}\overline{\mathcal{E}}_i\right]^q \leq 2^q \end{align*} so that \begin{align*} \mathbb{E}\left[(\overline{\mathcal{E}}_{i_1} - \mathbb{E}\overline{\mathcal{E}}_{i_1})...(\overline{\mathcal{E}}_{i_l} - \mathbb{E}\overline{\mathcal{E}}_{i_l})\right] \leq 2^l. \end{align*} Moreover, it holds that \begin{align*} \mathbb{E}\left[(\overline{\mathcal{E}}_{i_1} - \mathbb{E}\overline{\mathcal{E}}_{i_1})...(\overline{\mathcal{E}}_{i_l} - \mathbb{E}\overline{\mathcal{E}}_{i_l})\right]=0 \end{align*} for all $i_1,...,i_l$ such that there is a $j$ with $i_j\neq i_{j'}$ for all $j'\neq j$. With Lemma~\ref{lemma.ConIn4.Combi}, we then get for $n>1$ \begin{align} &\mathbb{E} \left[(\mathbb{P}_n-P)\overline{\mathcal{E}}\right]^{l}\leq \frac{2^lC_l^n}{n^l}\leq \frac{2^{l} l!}{n^l}\left(\frac{n}{2}\right)^{\lfloor \frac{l}{2}\rfloor} \leq l!\sqrt{\frac{2}{n}}^l.\label{eq.ConIn4.second} \end{align} Clearly, this also holds for $n=1$ and $l=1$. For $n=1$ and $l>1$ we note that \begin{align*} \mathbb{E} \left[(\mathbb{P}_n-P)\overline{\mathcal{E}}\right]^{l}\leq 2^l \leq l! \sqrt 2^l, \end{align*} so that Inequality \eqref{eq.ConIn4.second} holds for all $n$ and $l$ under consideration. 
Inserting then Inequalities \eqref{eq.ConIn4.first} and \eqref{eq.ConIn4.second} in Inequality \eqref{eq.ConIn4.Splitun}, we obtain the result for the unbounded part \begin{align} \label{eq.ConIn4.boundunbound} \mathbb{E} \left[\max_{1\leq j \leq p}(\mathbb{P}_n-P)\overline{Z}(j)\right]_+^l \leq \left(\frac{2}{K^{p-1}}+(l!)^\frac{1}{l}\sqrt{\frac{2}{n}} \right)^l \end{align} Next, we plug the result of Lemma~\ref{lemma.helpwoe} and Inequality~\eqref{eq.ConIn4.boundunbound} in Inequality~\eqref{eq.ConIn4.splitbeg} to derive \begin{align*} &\mathbb{E} \left[ \max_{1\leq j \leq N}\mathbb{P}_n Z(j)-A\frac{\log(N)}{n}\right]_+^l\\ \leq&\left(2\left(\frac{2}{A}\right)^{p-1}+(l!)^\frac{1}{l}\sqrt{\frac{2}{n}}+\frac{1}{A}+\frac{lA}{n}\right)^{l}. \end{align*} Finally, we define $Z(j+N):=\text{-}Z(j)$ for $1\leq j \leq N$. We then get \begin{align*} \mathbb{E} \left[Z-A\frac{\log(2N)}{n}\right]_+^l= &\mathbb{E} \left[ \max_{1\leq j \leq 2N}\mathbb{P}_n Z(j)-A\frac{\log(2N)}{n}\right]_+^l\\ \leq &\left(2\left(\frac{2}{A}\right)^{p-1}+(l!)^\frac{1}{l}\sqrt{\frac{2}{n}}+\frac{1}{A}+\frac{lA}{n}\right)^{l} \end{align*} replacing $N$ by $2N$ in the results above. \end{proof} Theorem~\ref{sec4.Corrollary1} is now a simple corollary. \begin{proof}[Proof of Theorem \ref{sec4.Corrollary1}] Set $A=2\sqrt n$ in Lemma~\ref{theorem.mainwoe}. \end{proof} \section{Main Result} \label{sec.M1} We are mainly concerned with concentration inequalities for unbounded empirical processes that only fulfill weak moment conditions. Additionally, we want to incorporate empirical processes with envelopes that may be much larger than the single functions under consideration.\\ The following theorem is the main result of this paper: \begin{theorem} \label{lemma.ConIn2.FirstCor} For $1\leq l\leq p$ and $\epsilon\in\mathbb{R}^+$ it holds that \begin{equation*} \mathbb{E} \left[Z-(1+\epsilon)\mathbb{E} {Z}\right]_+^l\leq \left(\left(\frac{64}{\epsilon}+7+\epsilon\right)\frac{lM}{n^{1-\frac{l}{p}}}+\frac{4\sqrt l{\sigma}}{\sqrt n} \right)^l \end{equation*} and, if additionally $\epsilon\in (0,1]$, \begin{align*} \mathbb{E} \left[(1-\epsilon)\mathbb{E} {Z}-Z\right]_+^l\leq \left(\left(\frac{86.4}{\epsilon}+7-\epsilon\right)\frac{lM}{n^{1-\frac{l}{p}}}+\frac{4.7\sqrt l{\sigma}}{\sqrt n} \right)^l. \end{align*} \end{theorem} \noindent As discussed in the preceding section, we state our results in terms of random vectors instead of empirical processes. The connection can be made as described. Furthermore, we relinquished slightly better constants to obtain incisive bounds, however, a considerable improvement in $l$ does not seem to be possible. We finally note that the expectation $\mathbb{E} Z$ can be replaced by suitable approximations. Such approximations are usually found with chaining and entropy (see for example \citep{Buhlmann11}, \citep{vdVaart00}, \citep{vdVaart11}) or generic chaining (see for example \citep{Talagrand96}, \citep{Talagrand05}).\\ Let us now have a closer look at the above result. In contrast to the known results given in the introduction, the single functions may be unbounded and may only fulfill weak moment conditions. For the envelope, the moment restrictions are increasing with increasing power $l$, as expected.\\ And what about large envelopes, that is $M\gg {\sigma}$? Theorem~\ref{lemma.ConIn2.FirstCor} separates the part including the size of the envelope (measured by $M$) from the part including the size of the single random vectors (measured by ${\sigma}$). 
For $p>2l$ and $n \gg 1$, a possibly large value of $M$ is counterbalanced by $\frac{1}{n^{1-\frac{l}{p}}}\ll\frac{1}{\sqrt n} $ and thus the influence of large envelopes is tempered. In particular, the term including the size of the envelope can be neglected for $n\to \infty$ if $p$ is sufficiently large.\\ We conclude this section with two straightforward consequences of Theorem~\ref{lemma.ConIn2.FirstCor}. \begin{corollary} Theorem~\ref{lemma.ConIn2.FirstCor} directly implies probability bounds via Chebyshev's Inequality. Under the above assumptions it holds for $x > 0$ \begin{equation*} \mathbb{P}\left(Z\geq (1+\epsilon)\mathbb{E} {Z} +x \right)\leq\min_{1\leq l \leq p}\mbox{\fontsize{12}{12}\selectfont $\frac{\left(\left(\frac{64}{\epsilon}+7+\epsilon\right)\frac{lM}{n^{1-\frac{l}{p}}}+\frac{4\sqrt l{\sigma}}{\sqrt n}\right)^l}{x^l}$} \end{equation*} and similarly \begin{equation*} \mathbb{P}\left(Z\leq (1-\epsilon)\mathbb{E} {Z} -x \right)\leq\min_{1\leq l \leq p}\mbox{\fontsize{12}{12}\selectfont $\frac{\left(\left(\frac{86.4}{\epsilon}+7-\epsilon\right)\frac{lM}{n^{1-\frac{l}{p}}}+\frac{4.7\sqrt l{\sigma}}{\sqrt n}\right)^l}{x^l}$}. \end{equation*} \end{corollary} \begin{corollary} Concrete first order bounds under the above assumptions are for example \label{lemma.ConIn.Bsp} \begin{alignat*}{5} \mathbb{E}& \left[Z-2\mathbb{E}{Z}\right]_+ &\leq&~72\frac{M}{n^{1-\frac{1}{p}}}&+&4\frac{\sigma}{\sqrt n}\\ \text{and~~~}\mathbb{E} &\left[\frac{1}{2}\mathbb{E}{Z}-Z\right]_+ &\leq&179.3\frac{M}{n^{1-\frac{1}{p}}}&+&4.7\frac{\sigma}{\sqrt n}. \end{alignat*} These bounds follow from Theorem~\ref{lemma.ConIn2.FirstCor} by taking $l=1$ and $\epsilon=1$ for the first inequality and $l=1$ and $\epsilon=\frac{1}{2}$ for the second. \end{corollary} \section{Complementary Bounds} \label{sec.M2} In this section, we complement the main result, Theorem~\ref{lemma.ConIn2.FirstCor}, with two additional bounds. These additional bounds can be of interest if $l$ is close to $p$.\\ The first result reads: \begin{theorem}\label{theorem.Massart2} Assume that the random variables $Z_i(j)$ are centered. For $l\geq 1$ and $p\geq l$ it holds that \begin{equation*} \mathbb{E}\left[Z-4\mathbb{E} Z\right]_+^l\leq l~\Gamma\left(\frac{l}{2}\right)\left(\frac{32}{n}\right)^\frac{l}{2}M^l, \end{equation*} where ${\Gamma}$ is the usual Gamma function. \end{theorem} Let us compare Theorem~\ref{theorem.Massart2} with Theorem~\ref{lemma.ConIn2.FirstCor}. On the one hand, the above result does not possess the flexibility of the factor $(1+\epsilon)$ and is a deviation inequality only. On the other hand, the term including the size of the envelope $M$ is independent of $p$ and has a different power of $n$ in the denominator compared to the corresponding term in Theorem~\ref{lemma.ConIn2.FirstCor}. Comparing these two terms in detail, we find that the bound of Theorem~\ref{theorem.Massart2} may be sharper than the corresponding bound in Theorem~\ref{lemma.ConIn2.FirstCor} if $l\leq p < 2l$.\\ We finally give explicit deviation inequalities for $Z$ in the case of finitely many random vectors. For $p\geq 2$, explicit bounds are found immediately by replacing $\mathbb{E} Z$ in Theorem~\ref{lemma.ConIn2.FirstCor} or Theorem~\ref{theorem.Massart2} by the upper bound $\sqrt{\frac{8\log (2N)}{n}}M$ (see \cite{Duembgen09}). Another bound is found by an approach detailed in Section~\ref{sec.proofs}. The bound reads: \begin{theorem} \label{sec4.Corrollary1} Let the random variables $Z_i(j)$ be centered. Then, for $p\geq 2$, $l\in\mathbb{N}$, and $p\geq l$ \begin{equation*} \mathbb{E} \left[Z-2M\frac{\log(2N)}{\sqrt n}\right]_+^l\leq \left(\frac{35l^2}{n}\right)^{\frac{l}{2}}M^l.
\end{equation*} \end{theorem} \noindent This can supersede the bound in Theorem~\ref{theorem.Massart2} for $2\sqrt{\log(2N)}\leq 8\sqrt{2}$.
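To see where this condition comes from, note that replacing $\mathbb{E} Z$ by the upper bound $\sqrt{\frac{8\log (2N)}{n}}M$ in Theorem~\ref{theorem.Massart2} leads to the shift $4\sqrt{\frac{8\log(2N)}{n}}M=8\sqrt{2}\,\frac{\sqrt{\log(2N)}}{\sqrt n}M$, whereas the shift in Theorem~\ref{sec4.Corrollary1} is $2M\frac{\log(2N)}{\sqrt n}$. The latter shift is the smaller one precisely when $2\log(2N)\leq 8\sqrt{2}\sqrt{\log(2N)}$, that is, when $2\sqrt{\log(2N)}\leq 8\sqrt{2}$, or equivalently $\log(2N)\leq 32$.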
\section{Introduction} The notion of toric orbifold was initiated by Davis and Januszkiewicz in their pioneering paper \cite{DJ}. After 19 years it was explicitly studied in \cite{PS} by Poddar and the first author with the name quasitoric orbifold. A toric orbifold $X$ is an orbifold admitting an effective action by a compact torus $T\simeq (\mathbb{S}^1)^n$ with orbit space a simple convex polytope $Q$. A toric orbifold can alternately be constructed from $Q$ and the data encoded by a $\mathbb{Z}^n$-valued function on the set of codimension $1$ faces of $Q$ denoted by $\lambda$ and called the {\it characteristic function}. In particular, when $\lambda$ satisfies the condition $(*)$ in \cite[p. 423]{DJ} the toric orbifold is a manifold and is called a toric manifold. These toric manifolds are now known as quasitoric manifolds, for example see \cite[Section 5.2]{BP}. Moreover, a toric manifold admits a canonical $T$-equivariant cell structure given by means of the height function on $Q$ \cite[Theorem 3.1]{DJ}. In this paper we give some sufficient conditions for a possibly singular toric orbifold to admit a $T$-equivariant cell structure. This uses the concept of a retraction of the polytope $Q$, which was introduced in \cite{BSS}. Motivated by the terminology of divisive weighted projective space due to \cite{HHRW} we call them {\it divisive} toric orbifolds. The equivariant projection $X \longrightarrow pt$ induces a graded $E_{T}^*(pt)$-algebra structure on $E_{T}^*(X)$. Here $E_{T}^*$ denotes the $T$-equivariant generalized cohomology ring. The main aim of this article is to describe $E_T^*(X)$ as an algebra over $E_T^*(pt)$. In particular, we get a description of the $T$-equivariant $K$-ring of $X$ over $K_{T}^*(pt)$. This was motivated by the corresponding results on a divisive weighted projective space in \cite{HHRW}. The main tools here are the methods developed by Harada, Henriques and Holm in \cite{HHH}. They prove, among other results, a precise version of the localization theorem for $G$-equivariant generalized cohomology theories of $G$-manifolds for a topological group $G$, equipped with a $G$-equivariant stratification satisfying some additional conditions \cite[Section 3]{HHH}. Moreover, in Section 4 we develop the notion of piecewise algebra for a characteristic pair $(Q, \lambda)$, see Definition \ref{piec_alg}. In Theorem \ref{piecewise} we prove that the generalized equivariant cohomology ring of a divisive quasitoric orbifold is isomorphic as an $E_T^*(pt)$-algebra to the corresponding piecewise algebra for $(Q, \lambda)$. In particular, we derive that the equivariant cohomology ring $H_{T}^*(X)$ is isomorphic to the piecewise polynomial functions for $(Q, \lambda)$, the equivariant topological $K$-ring $K_{T}^*(X)$ is isomorphic to the piecewise Laurent polynomial functions for $(Q, \lambda)$ and the equivariant cobordism ring $MU_{T}^*(X)$ is isomorphic to the piecewise exponential functions on $(Q, \lambda)$. Note that examples of quasitoric orbifolds include simplicial projective toric varieties. Thus the projective simplicial toric varieties for which the height function on the associated moment polytope gives an equivariant cell structure are examples of divisive quasitoric orbifolds. Thus our main theorem, Theorem \ref{piecewise}, generalizes \cite[Theorem 5.5, Theorem 7.1]{HHRW}.
We refer to \cite{May} for definition and results on $T$-equivariant generalized cohomology theories $E_{T}^*$, \cite{Segal} for $T$-equivariant $K$-theory $K_{T}^*$ and \cite{TD} and \cite{Sinha} for $T$-equivariant complex cobordism theory $MU^*_{T}$. \section{Toric orbifolds and local groups}\label{subsec_def_by_construction} In this section we recall the concept of \emph{characteristic pairs} $(Q,\lambda)$ of \cite{DJ} and \cite{PS}, and explain how they are used to construct \emph{toric orbifolds} $X=X(Q,\lambda)$. We then recall invariant subspaces of $X$ corresponding to a face of $Q$, and local groups of these subspaces. The codimension one faces of a convex polytope are called facets. Let $Q$ be an $n$-dimensional simple convex polytope in ${\bb{R}}^n$ and let $$\mathcal{F}(Q)= \{F_i : i \in \{ 1, \ldots, d \} = I\}$$ be the set of facets of $Q$. \begin{definition}\label{def_characteristic_function} A function $\lambda : \mathcal{F}(Q) \to {\bb{Z}}^n$ is called a characteristic function on $Q$ if $\lambda(F_{i_1}), \ldots, \lambda(F_{i_k})$ are linearly independent primitive vectors whenever the intersection $F_{i_1} \cap \cdots \cap F_{i_k}$ is nonempty. We denote $\lambda_i = \lambda(F_i)$ and we call it a characteristic vector corresponding to the facet $F_i$. The pair $(Q, \lambda)$ is called a characteristic pair. \end{definition} In the above definition, one can check that it suffices to satisfy the linear independence condition at each vertex which is an intersection of $n$ facets of $Q$. An example of a characteristic function is given in Figure \ref{fig-eg1}. Let $F$ be a codimension-$k$ face of $Q$. Since $Q$ is a simple polytope, $F$ is the unique intersection of $k$ facets $F_{i_1}, \ldots, F_{i_k}$. Let $M(F)$ be the submodule of ${\bb{Z}}^n$ generated by the characteristic vectors $\{\lambda_{i_1}, \dots, \lambda_{i_k} \}$. Then, $T_{M(F)} = M(F)_{{\bb{R}}} /M(F)$ is a torus of dimension $k$. We shall adopt the convention that $T_{M(Q)} = 1$. Then there is a group homomorphism $f_{M(F)} \colon T_{M(F)} \to T$ induced by the inclusion $M(F) \hookrightarrow {\bb{Z}}^n$ and if the rank of $M(F)$ is $n$ then $\ker{f_{M(F)}} = {\bb{Z}}^n/M(F)$, see \cite[Section 2]{PS}. Let \begin{equation}\label{def_tf} T_F = \text{Im}\{f_{M(F)} \colon T_{M(F)} \to T\} \end{equation} Define an equivalence relation $\sim$ on the product $T \times Q$ by \begin{equation}\label{equ001} (t, x) \sim (s, y) ~ \mbox{if and only if}~ x=y~ \mbox{and}~ s^{-1}t \in T_F \end{equation} where $F$ is the smallest face containing $x$. The quotient space $$ X(Q, \lambda)=(T \times Q)/\sim$$ has an orbifold structure with a natural $T$-action induced by the group operation, see Section 2 in \cite{PS}. Clearly, the orbit space under the $T$-action on $X(Q, \lambda)$ is $Q$. Let $$\pi: X(Q, \lambda) \to Q ~ \mbox{defined by} ~ \pi([t,p]_{\sim}) = p$$ be the orbit map. The space $X(Q, \lambda)$ is called the toric orbifold associated to the characteristic pair $(Q, \lambda)$. We note that if in addition $\lambda$ satisfies Davis and Januszkiewicz's condition $(*)$ in \cite{DJ}, then $X$ is a manifold called a toric manifold. After analyzing the orbifold structure of $X(Q, \lambda)$, Poddar and the first author, \cite[Subsection $2.2$]{PS}, gave an axiomatic definition of toric orbifolds, which generalizes the axiomatic definition of toric manifolds of \cite{DJ}. They showed that these two definitions of toric orbifolds are equivalent.
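A standard example may be helpful to keep in mind; the following is the basic case in which the construction produces a manifold. \begin{example} Let $Q$ be the $n$-dimensional simplex with facets $F_1, \ldots, F_{n+1}$ and define $\lambda(F_i)=e_i$ for $1\leq i\leq n$ and $\lambda(F_{n+1})=-(e_1+\cdots +e_n)$, where $e_1, \ldots, e_n$ is the standard basis of ${\bb{Z}}^n$. At each vertex the corresponding $n$ characteristic vectors form a basis of ${\bb{Z}}^n$, so $\lambda$ satisfies condition $(*)$ of \cite{DJ} and $X(Q,\lambda)$ is a toric manifold, namely the complex projective space ${\bb{C}} P^n$ with its standard $T$-action, see \cite{DJ}. Other choices of primitive vectors on the facets of the simplex give genuine toric orbifolds, for instance the weighted projective spaces considered in \cite{HHRW}. \end{example}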
In \cite{PS}, the authors also give explicit orbifold charts for $X(Q, \lambda)$. Now we discuss some special invariant subspaces of $X$ following \cite{PS}. Let $F$ be a face of $Q$ of codimension $k$. Then, the preimage $\pi^{-1}(F)$ is a closed invariant subspace. Let \begin{equation} M^*(F)= M(F)_\mathbb{R} \cap \mathbb{Z}^n ~~{\rm and}~~ G_F = M^*(F)/M(F). \end{equation} So $M^*(F)$ is a direct summand of ${\bb{Z}}^n$ and $G_F$ is a finite abelian group. Note that if $F$ is a face of $F'$, then the natural inclusion of $M(F')$ into $M(F)$ induces a surjective homomorphism from $G_F$ to $G_{F'}$, see the proof of \cite[Proposition 4.3]{BSS}. Consider the following projection homomorphism, \begin{equation}\label{eq_lattice_projection} \varrho_F : {\bb{Z}}^n \to \mathbb{Z}^n / M^\ast(F)\cong \mathbb{Z}^{n-k}. \end{equation} Let $\{H_{1}, \ldots, H_{\ell}\}$ be the facets of $F$. Then for each $j \in \{1, \ldots, \ell\}$, there is a unique facet $F_{i_j}$ such that $H_j = F \cap F_{i_j}$. We define a map $$\lambda_{F} : \{H_1, \ldots, H_\ell \} \to \mathbb{Z}^{n-k} \text{ by }$$ $$\lambda_{F}(H_j) = prim(\varrho_F ( \lambda(F_{i_j}))) ~\mbox{for}~j \in \{1, \ldots, \ell \},$$ where $prim(x)$ denotes the primitive vector of $x \in {\bb{Z}}^{n-k}$. Then $\lambda_F$ is a characteristic function on $F$. If $X(F, \lambda_F)$ is the toric orbifold corresponding to the characteristic pair $(F, \lambda_F)$, then \cite[Proposition 3.2]{BSS} says that, as topological spaces, $\pi^{-1}(F)$ and $X(F, \lambda_F)$ are homeomorphic. In the rest of this section, we compute and compare the local groups of $X(Q, \lambda)$ following \cite{BSS}. There is a bijective correspondence between the fixed points of the torus action on the toric orbifold $\pi^{-1}(F)$ and the vertices of $F$. Let $v$ be a vertex of $F \subset Q$. Then $v=F_{i_1} \cap \cdots \cap F_{i_n}$ for a unique collection of facets $F_{i_1}, \ldots, F_{i_n}$ of $Q$. Let $M(v)$ be the submodule of ${\bb{Z}}^n$ generated by $\{\lambda(F_{i_1}), \ldots, \lambda(F_{i_n})\}$. We now consider $v$ as a vertex of the codimension-$k$ face $F$; then $v=H_{j_1} \cap \cdots \cap H_{j_{n-k}}$ for a unique collection of facets $H_{j_1}, \ldots, H_{j_{n-k}}$ of $F$. Let $M_{F}(v)$ be the submodule of $ \mathbb{Z}^{n-k}$ generated by $\{\lambda_{F}(H_{j_1}), \ldots, \lambda_{F}(H_{j_{n-k}})\}$. We define \begin{equation} G_{Q}(v) :={\bb{Z}}^n/M(v), \quad \mbox{and} \quad G_{F}(v) :=\mathbb{Z}^{n-k} /M_{F}(v) \end{equation} for a proper face $F$ of $Q$ containing $v$. These are finite abelian groups. Notice that the orders $|G_Q(v)|$ and $|G_F(v)|$ are obtained by taking the absolute value of the determinant of the matrix whose columns are the corresponding characteristic vectors. In other words, \begin{align*} |G_Q(v)|&=\left| \det \left[ \begin{array}{c|c|c} \lambda(F_{i_1}) &\cdots&\lambda(F_{i_n}) \end{array} \right] \right|, ~~ \text{~if~} v=F_{i_1} \cap \dots \cap F_{i_n} \in V(Q),\\ |G_F(v)|&=\left| \det \left[ \begin{array}{c|c|c} \lambda_F(H_{j_1}) &\cdots&\lambda_F(H_{j_{n-k}}) \end{array} \right] \right|, ~~ \text{~if~} v=H_{j_1} \cap \dots \cap H_{j_{n-k}} \in V(F). \end{align*} \section{CW-structures on toric orbifolds} In this section, we recall the concept of a retraction of a simple polytope, which was introduced in \cite{BSS}. Then we define divisive toric orbifolds, and show that a divisive toric orbifold has an actual CW-structure. We compute an example of a divisive toric orbifold. We adhere to the notations of previous sections.
Let $Q$ be a simple polytope of dimension $n$. Then the collection of its faces is a polytopal complex (see \cite[Definition 5.1]{Zi}) denoted by $\mathcal{L}(Q)$ and $Q$ is its polyhedron. Let $\mathcal{L}$ be a polytopal subcomplex of $\mathcal{L}(Q)$ and let $B_{\ell}$ be its polyhedron. \begin{definition} A vertex $v$ is called a free vertex of $B_{\ell}$ if it has a neighborhood in $B_{\ell}$ which is homeomorphic to ${\bb{R}}^s_{\geq}$ as a manifold with corners for some $0 \leq s \leq \dim(B_{\ell})$. \end{definition} Now we recall the following. \begin{definition} Let $Q$ be a simple polytope and suppose there exists a sequence $\{B_{\ell}, P_{\ell}, v_{\ell}\}_{\ell=1}^m$ such that \begin{itemize} \item $B_{\ell}$ is the polyhedron of a subcomplex of $\mathcal{L}(Q)$ not containing the vertices $v_1, \ldots, v_{\ell -1}$ for $\ell > 1$ and $B_1 = Q$. \item $P_{\ell}$ is a face of $B_{\ell}$ and $v_{\ell}$ is a free vertex of $B_{\ell}$ where $v_{\ell} \in P_{\ell}$. \item $B_m$ is the vertex $v_m$. \end{itemize} Then we say $\{B_{\ell}, P_{\ell}, v_{\ell}\}_{\ell = 1}^m$ is a retraction of $Q$ starting with the vertex $v_1$ and ending at $v_m$. \end{definition} Several examples of retraction sequences of polytopes can be found in \cite[Section 2]{BSS}. Also, Figure \ref{fig-eg5} gives an example of a retraction sequence of a 3-prism. Moreover, \cite[Proposition 2.3]{BSS} shows that every simple polytope has a retraction sequence. \begin{remark}\label{dir_graph} Given a retraction sequence of a simple polytope $Q$ one can define a directed graph on the 1-skeleton of $Q$ in the following way. Let $\{(B_i, P_i, v_i)\}_{i=1}^m$ be a retraction sequence of $Q$. We order the vertices of $Q$ as $v_1 < v_2 < \ldots < v_m$. We assign a direction from $v_s$ to $v_r$ on an edge with end points $v_s, v_r$ if $v_s > v_r$. This directed graph has the following property: if $k$ edges are directed toward the vertex $v_i$ then $\dim P_i =k$. \end{remark} \begin{definition}\label{def_divisive_tor_orb} Let $X$ be a toric orbifold over the simple polytope $Q$ with characteristic function $\lambda$. Then $X$ is called divisive if $Q$ has a retraction $\{(B_i, P_i, v_i)\}_{i=1}^m$ such that $G_{P_i}(v_i)$ is the trivial group for $i=1, \ldots, m-1$. \end{definition} \begin{lemma}\label{cw-of-toic-orbi} If $X$ is a divisive toric orbifold of dimension $2n$ then $X$ has a CW-structure whose cells are invariant under the $T$-action. \end{lemma} \begin{proof} Let $\{H_1, \ldots, H_{s}\}$ be the facets of $P_i$ such that $H_1 \cap \cdots \cap H_{s} = v_i$. Let $U_i$ be the open subset of $P_i$ obtained by deleting all faces of $P_i$ not containing $v_i$. Since $G_{P_i}(v_i)$ is trivial, the vectors $\{\lambda_{P_i}(H_1), \ldots, \lambda_{P_i}(H_{s})\}$ form a basis of ${\bb{Z}}^s$. Then, following Construction 1.5 of \cite{DJ}, $$(T^s \times U_i)/\sim ~= ~ \pi_{P_i}^{-1}(U_i)$$ is equivariantly homeomorphic to a $2s$-dimensional open disk. Now by \cite[Proposition 3.2]{BSS} the subset $\pi_{P_i}^{-1}(U_i) \subset X(P_i, \lambda_{P_i})$ is equivariantly homeomorphic to $\widehat{U}_i:=\pi^{-1}(U_i) \subset X(Q, \lambda)$ for $i = 1, \ldots, m$. Note that $Q= {\displaystyle \bigcup_{i=1}^m U_i}$. Therefore $X(Q, \lambda) = {\displaystyle \bigcup_{i=1}^m \widehat{U}_i}$. This proves the lemma. \end{proof} \begin{remark} Definition \ref{def_divisive_tor_orb} generalizes the definition of divisive weighted projective spaces of \cite{HHRW}.
\end{remark} \begin{example}\label{eg_div_toric_3d} In this example we construct a divisive toric orbifold. Consider the characteristic function $\lambda$ on a 3-prism $Q$ as in Figure \ref{fig-eg1}. One can compute that $G_Q(v_{1})=1$, $G_Q(v_{2})=1$, $G_Q(v_{3})=3$, $G_Q(v_{4})=1$, $G_Q(v_{5})=3$, $G_Q(v_{6})=5$, where, for example, the order of $G_Q(v_{1})$ is the absolute value of the determinant of the matrix with columns $\lambda(F_1), \lambda(F_2), \lambda(F_3)$, and similarly for the others. \begin{figure}[ht] \centerline{ \scalebox{0.75}{ \input{k-thm1.pdf_t} } } \caption{A characteristic function on 3-prism $Q$.} \label{fig-eg1} \end{figure} Now we consider the following retraction of $Q$, see Figure \ref{fig-eg5}. The retraction sequence is given by $\{B_1, B_1, v_{1}\}$, $~\{B_2, F_4, v_{2}\}$, $~\{B_3, F_2 \cap F_4, v_{3}\}$, $\{B_4, F_5, v_{4}\}$, $\{B_5, F_4 \cap F_5, v_{5}\}$, $\{B_6, v_6, v_{6}\}$. We only compute the local group $G_{F_5}(v_{4})$; the computation for the other $G_{P_i}(v_i)$ is similar. In this case $P_i = F_5$ and $v_i = v_4$. So $M(F_5) = \<\lambda(F_5)\>= \<( 1, 1, 4)\>$. Thus $$M^*(F_5) = M(F_5)_{{\bb{R}}} \cap {\bb{Z}}^3 = \<(1, 1, 4)\> = M(F_5) \cong {\bb{Z}}.$$ Consider the basis $\{e_1=(0,1,0), e_2=(0,0,1), e_3=(1,1,4)\}$ of ${\bb{Z}}^3$. Then one gets the projection $$\rho : \mathbb{Z}^3 \to \mathbb{Z}^3/M^*(F_5) = \<e_1, e_2, e_3\>/\<e_3\> \cong {\bb{Z}}^2.$$ The facets of $F_5$ which intersect at $v_{4}$ are $F_2 \cap F_5$ and $F_3 \cap F_5$. Therefore we get $\lambda_{F_5}$ on $F_2 \cap F_5$ and $F_3 \cap F_5$ which is given by $$\lambda_{F_5}(F_2 \cap F_5) = \rho(\lambda(F_2))=(1, 0) ~~\mbox{and}~~ \lambda_{F_5}(F_3 \cap F_5) = \rho(\lambda(F_3))=(0, 1).$$ Therefore $$G_{F_5}(v_{4})= \mathbb{Z}^2/\<\lambda_{F_5}(F_2 \cap F_5), \lambda_{F_5}(F_3 \cap F_5)\> =\mathbb{Z}^2/\<(1, 0), (0, 1)\>=1.$$ Similarly, one can compute that $G_{F_4}(v_{2})=1$, $G_{F_2 \cap F_4} (v_{3})=1$, $G_{F_4\cap F_5}(v_{5})=1$. \begin{figure}[ht] \centerline{ \scalebox{0.55}{ \input{k-thm2.pdf_t} } } \caption{A retraction sequence of 3-prism $Q$.} \label{fig-eg5} \end{figure} \end{example} \section{GKM theory on divisive toric orbifolds} We begin this section by recalling the GKM theory from \cite[Section 3]{HHH}. We shall then verify that these results can be applied to a divisive toric orbifold and hence give a precise description of its equivariant $K$-ring, equivariant cobordism ring and equivariant cohomology ring. Let $X$ be a $G$-space equipped with a $G$-invariant stratification \[X_{m} \subseteq X_{m-1} \subseteq \cdots \subseteq X_1 = X.\] Every $X_i\setminus X_{i+1}$ has a $G$-invariant subspace ${\bf x}_i$ whose neighbourhood carries the structure of the total space of a $G$-equivariant vector bundle $\rho_i=(V_i,\pi_i,{\bf x}_i)$. Let $E_{G}^*(X)$ denote a generalized $G$-equivariant cohomology theory of the $G$-space $X$. Now we make the following assumptions on the $G$-space $X$. \begin{enumerate} \item[(A1)] Each subquotient $X_i/X_{i+1}$ is homeomorphic to the Thom space $Th(\rho_i)$ with corresponding attaching map $\phi_i:S(\rho_i)\longrightarrow X_{i+1}$. \item[(A2)] Every $\rho_i$ admits a $G$-equivariant direct sum decomposition $\displaystyle\bigoplus_{j>i} \rho_{ij}$ into $G$-equivariant subbundles $\rho_{ij}=(V_{ij},\pi_{ij}, {\bf x}_i)$. We allow the case $V_{ij}=0$. \item[(A3)] There exist $G$-equivariant maps $f_{ij} : {\bf x}_i \longrightarrow {\bf x}_j$ such that the restrictions $f_{ij}\circ \pi_{ij}\mid_{S(\rho_{ij})}$ and $\phi_i\mid_{S(\rho_{ij})}$ agree for every $j > i$.
\item[(A4)] The equivariant Euler classes $e_G(\rho_{ij})$ for $j > i$ are not zero divisors and are pairwise relatively prime in $E^*_G({\bf x}_i)$. \end{enumerate} We now recall the precise description of the generalized $G$-equivariant cohomology theory of $X$. \begin{theorem}\cite[Theorem 3.1]{HHH}\label{gkm} Let $X$ be a $G$-space satisfying the four assumptions {\rm A1} to {\rm A4}. Then the restriction map \[\iota^*: E^*_{G}(X)\longrightarrow \prod_{i=1}^m E_{G}^*({\bf x}_i)\] is monic and its image $\Gamma_{X}$ can be described as \[\{(a_i)\in \prod_{i=1}^m E_G^*({\bf x}_i): e_{G}(\rho_{ij})~\mid~a_i-f_{ij}^*(a_j)~\forall~ j>i\}.\] \end{theorem} We first show below that the GKM theory of \cite{HHH} described above can be applied to a divisive toric orbifold $X :=X(Q,\lambda)$. Further, we shall use this to give an explicit description of $K_T^{*}(X)$, $MU^*_{T}(X)$ and $H^*_T(X)$. Let $X(Q,\lambda)$ be a divisive toric orbifold and let $\{B_i, P_i, v_i\}_{i=1}^{m}$ be the corresponding retraction of the polytope $Q$. By Lemma \ref{cw-of-toic-orbi} there is an invariant CW-complex structure on $X(Q,\lambda)$ associated to this retraction. Let $$X_i:= \pi^{-1} (B_{i}), \quad \mbox{and} \quad x_i := \pi^{-1}(v_i),$$ where the $x_i$ are the $T$-fixed points of $X(Q,\lambda)$ for $1 \leq i \leq m$. Thus we have the following $T$-invariant stratification \begin{equation}\label{T-inv_stra} \{x_m\} = X_{m} \subseteq X_{m-1} \subseteq \cdots \subseteq X_1 = X(Q, \lambda) \end{equation} of $X(Q, \lambda)$ such that $X_i \setminus X_{i+1} = \widehat{U}_{i} \cong \mathbb C^{s_i}$, where $s_i = \dim P_{i}$ for $i=1, \ldots, m$ with $X_{m+1}= \emptyset$. Moreover, $x_i \in \widehat{U}_i \subseteq X_i$, $1 \leq i \leq m$. Since $\widehat{U}_{i} \cong {V}_i = \mathbb{C}^{s_i}$ are $T$-stable we have a $T$-representation $\rho_i = ({V}_i, \pi_i, x_i)$, which can alternately be viewed as a $T$-equivariant complex vector bundle over the $T$-fixed point $x_i$ for $1 \leq i \leq m$. \begin{proposition}\label{divass} A divisive toric orbifold $X(Q,\lambda)$ with the $T$-invariant stratification in \eqref{T-inv_stra} satisfies assumptions {\em (A1)} to {\em (A4)} listed above. \end{proposition} \begin{proof} {\bf Checking for (A2)}: We fix $i$ for the time being. Let $v_{i_1}, v_{i_2},\ldots, v_{i_r}$ be the vertices in $Q$, such that there are directed edges $e_{i_1},\ldots, e_{i_r}$ respectively from $v_{i_1}, v_{i_2},\ldots, v_{i_r}$ towards $v_i$ in the directed graph on the $1$-skeleton of $Q$ associated to the retraction $\{(B_i, P_i, v_i)\}_{i=1}^{m}$ as in Remark \ref{dir_graph}. Since $Q$ is a simple polytope, $e_{i_j}=F_{j_1} \cap \cdots \cap F_{j_{n-1}}$ for unique facets $F_{j_1}, \ldots, F_{j_{n-1}}$ of $Q$. The $n\times (n-1)$-matrix $[\lambda(F_{j_1})^t, \cdots, \lambda(F_{j_{n-1}})^t]$ defines a $\mathbb{Z}$-linear map \begin{equation}\label{char_map} \psi_j:\mathbb{Z}^n\longrightarrow \mathbb{Z}^{n-1} \end{equation} acting on the right. Since the image of $\psi_j$ is an abelian group of rank $(n-1)$ (see Definition \ref{def_characteristic_function}), the kernel $\ker \psi_j$ is a one-dimensional ${\bb{Z}}$-submodule of ${\bb{Z}}^n$. So, we have a character \begin{equation}\label{eq_map_chi} \chi_j: T \to S^1 \end{equation} determined by a primitive vector $u_j\in \mathbb{Z}^n$ generating the kernel of $\psi_j$. Note that $\pi^{-1}(e_{i_j})$ is $T$-invariant. We choose the sign of $u_j$ such that the following diagram commutes.
\[ \begin{CD} T \times \pi^{-1}(e_{i_j}) @>{id}>> T \times \pi^{-1}(e_{i_j}) \\ @VV{\rho}V @VV{\rho_{\chi_j}}V \\ \pi^{-1}(e_{i_j}) @>{id}>> \pi^{-1}(e_{i_j}). \end{CD} \] That is, the choice of $u_j$ is compatible with the $T$-action on $X(Q, \lambda)$. Since $\mathbb{R}_{\geq 0}^r \cong U_i \subset Q$ is determined by $e_{i_j}\setminus v_{i_j}$ for $1 \leq j \leq r$, it follows that the representation $\rho_i $ is isomorphic to ${\displaystyle \bigoplus_{j=1}^r \mathbb{C}_{\chi_j}}$ where $\mathbb{C}_{\chi_j}$ is the one dimensional complex representation determined by the character $\chi_j$ for $1 \leq j \leq r$. Let \begin{equation}\label{def_ni} N(i) = \{k ~|~ k > i \mbox{ and there is an oriented edge from} ~ v_k~ \mbox{to} ~v_i\}. \end{equation} Thus if we write $V_{is}:=\mathbb{C}_{\chi_s}$ for $s \in N(i)$ and $V_{is}=0$ for $s > i$ and $s \notin N(i)$ then this verifies assumption (A2).\\ {\bf Checking for (A1)}: Let $N(i):=\{i_1,\ldots, i_r\}$. Consider the neighbourhood $W_i=U_i \cap D$ of $v_i$ in $Q$ where $D$ is a closed disc in $\mathbb{R}^n$ with centre $v_i$ such that $D$ does not contain any other vertex of $Q$. Note that $P_i$ is the smallest face of $Q$ containing the edges $e_{i_j}$ for $i_j \in N(i)$. The link $Link(v_i)$ of $v_i$ in $P_i$ is $P_i \setminus U_i$ where $U_i$ is defined in the proof of Lemma \ref{cw-of-toic-orbi}. So $P_i = Star(v_i) = Link(v_i)\star v_i$. Thus it follows from polyhedral geometry that for every $p \in (P_i \setminus U_i)$, the line segment joining $p$ and $v_i$ meets $W_i \cap \partial D $ at a unique point $y_p$. Moreover, $y_p$ determines $p$ uniquely and vice versa, see Figure \ref{fig-a1}. \begin{figure}[ht] \centerline{ \scalebox{.70}{ \input{ret_su.pdf_t} } } \caption{Correspondence of cell attaching.} \label{fig-a1} \end{figure} This gives a bijective correspondence $g_i : \partial{D} \cap W_i \to P_i \setminus U_i$. Therefore we have the following commutative diagram, \begin{equation}\label{def_attaching} \begin{CD} T \times (\partial{D} \cap W_i) @>id \times g_i>> T \times (P_i \setminus U_i) @.\\ @VVV @VVV @. \\ (T \times (\partial{D} \cap W_i))/\sim @>{\hat{g}_i}>> (T \times (P_i \setminus U_i))/\sim @>{\subseteq}>> X_{i+1}. \end{CD} \end{equation} The map $\hat{g}_i$ sends $[t,y_p]$ to $[t,p]$. This map is well defined because if $y_p$ belongs to the relative interior of a face $F$ of $P_i$ then $p$ also belongs to $F$. Moreover, under the identification of $\widehat{U}_i$ with the complex representation $\rho_i$, $$\widehat{W}_i = \pi^{-1}(W_i)= (T \times W_i)/\sim ~ \subseteq \widehat{U}_i$$ can be identified with the disc bundle $D(\rho_{i})$ and $\partial(\widehat{W}_i) = \pi^{-1}(\partial{D} \cap W_i)$ with the sphere bundle $S(\rho_i)$ of the representation space associated to $\rho_i$. This induces the following homeomorphisms $$X_i/X_{i+1} ~\cong ~ \widehat{W}_i/\partial(\widehat{W}_i) ~\cong ~ D(\rho_i)/S(\rho_i) = Th(\rho_i),$$ where $\widehat{U}_i \subseteq X_i$ maps homeomorphically onto the interior of the disc $\widehat{W}_i$. This verifies assumption (A1).\\ {\bf Checking for (A3)}: Let $f_{ij} : x_i \longrightarrow x_{i_j}$ denote the constant map for every $i_j \in N(i)$ where $N(i)$ is defined in \eqref{def_ni}. Further, if $\hat{\mathcal{E}}_{i_j}:= (\mathcal{E}_{i_j} \setminus v_{i_j}) \subseteq U_i,$ then $\widetilde{\mathcal{E}}_{i_j} = \pi^{-1}(\hat{\mathcal{E}}_{i_j})$ can be identified with the one dimensional sub-representation $\rho_{ij}$ of $\rho_i$ for $j=1, \ldots, r$.
Let $S(\rho_{ij})$ be the circle bundle associated with $\rho_{ij}$. Let $w_{i_j}$ be a point where $\mathcal{E}_{i_j}$ meets $\partial{D} \cap W_i$. Then the attaching map $\hat{g}_i$ (see \eqref{def_attaching}) sends $\pi^{-1}(w_{i_j})$ in $S(\rho_{ij})$ to $x_{i_j}$. Further, $\pi^{-1}(w_{i_j}) \in \widetilde{U}_i$ is mapped to $x_i$ under the canonical projection in the radial direction of the representation $\rho_{ij}$. It follows that the restriction of the map $\hat{g}_i$ and the composition of the projection of $\rho_{ij}$ with $f_{ij}$ on $S(\rho_{ij})$ agree for every $i_j \in N(i)$. This verifies assumption (A3).\\ {\bf Checking for (A4)}: Note that for $1 \leq j \neq j' \leq r$, $\mathcal{E}_{i_j}$ and $\mathcal{E}_{i_{j'}}$ are distinct edges incident at $v_i$ in the simple polytope $Q$. If $v_i = F_{i_1} \cap \cdots \cap F_{i_n}$ then there are $s, s' \in [n] := \{1, \ldots, n\}$ such that $\mathcal{E}_{i_{j}} \cap F_{i_s} = v_i = \mathcal{E}_{i_{j'}} \cap F_{i_{s'}}$. So $$\mathcal{E}_{i_{j}}= \bigcap_{a \neq s} F_{i_a} ~ \mbox{and} ~ \mathcal{E}_{i_{j'}} = \bigcap_{a \neq s'} F_{i_a}.$$ Recall that the $u_j$'s are primitive vectors determined by the kernel of \eqref{char_map}. So we have $\<u_j, \lambda(F_{i_a})\>=0$ for $a \in [n] \setminus \{s\} $ and $\<u_{j'}, \lambda(F_{i_b})\>=0$ for $b \in [n] \setminus \{s'\}$. Now if $u_j = c u_{j'}$ for some $c \in {\bb{Z}}$ then $\<u_j, \lambda(F_{i_a})\> = 0$ for $a=1, \ldots, n$. That implies $u_j = 0$, which contradicts the definition of $u_j$. Thus $u_j$ and $u_{j'}$ are linearly independent. Since $u_j$ is primitive and non-zero for every $j\in N(i)$, the $K$-theoretic equivariant Euler class $$e^{T}(\rho_{ij})=1-e^{-\chi_j}$$ is a non-zero divisor in the representation ring $K^0_T(x_i)=R(T)$, which is an integral domain. Moreover, since $u_j$ and $u_{j'}$ are linearly independent, it follows that $1-e^{-\chi_j}$ and $1-e^{-\chi_{j'}}$ are relatively prime in the U.F.D. $R(T)$. Also, this can be seen more generally for the equivariant Euler class in $MU^*_{T}$ and also for $H^*_{T}( ;\mathbb{Z})$, as the $u_j$'s are primitive, by \cite[Lemma 5.2]{HHH}. \end{proof} \begin{remark}\label{composeconstant} Note that we can define constant maps $f_{ij}:x_i \longrightarrow x_j$ between any two $T$-fixed points of $X=X(Q,\lambda)$, which satisfy $f_{ik}=f_{jk} \circ f_{ij}$ for $1 \leq i,j,k \leq m$. Thus we have the pull-back map of equivariant $K$-theory $f_{ij}^*: K^*_{T}(x_j)\longrightarrow K^*_{T}(x_i)$ which satisfies $f_{ik}^*=f_{ij}^*\circ f_{jk}^*$ for $1\leq i,j,k\leq m$. In fact, since each $x_i$ is a point for $i=1, \ldots, m$, the map $f_{ij}^*$ is the identity map. \end{remark} \section{Equivariant K-theory of divisive toric orbifolds} In this section we shall describe the $T$-equivariant $K$-theory of divisive toric orbifolds. In particular, the following proposition extends \cite[Proposition ]{HHRW} on divisive weighted projective spaces to any divisive toric orbifold. In view of Remark \ref{composeconstant} we have the pull back isomorphisms $$f_{i1}^*:K^*_{T}(x_1)\longrightarrow K^*_{T}(x_i)$$ for $1\leq i\leq m$. This gives $\prod_{i=1}^m K^*_{T}(x_i)$ a canonical $K^*_{T}(x_1)$-algebra structure via the inclusion defined by $(f_{i1}^*(a))$ for $a\in K^*_{T}(x_1)$. \begin{proposition}\label{gkm_rt_algebra} Let $X=X(Q,\lambda)$ be a divisive toric orbifold.
The $T$-equivariant $K$-theory ring $K_T^*(X)$ of $X$ is isomorphic to the $K_{T}^*(x_1)$-subalgebra \begin{equation}\label{gkm_desc_kring} \Gamma_{K_X}=\{(a_i)~:~1-e^{-\chi_j}~\mbox{divides}~ a_i-f^*_{ij}(a_j) \in K^*_{T}(x_i) ~\forall~j\in N(i)\}\subseteq \prod_{i=1}^m K_{T}^*(x_i). \end{equation} \end{proposition} \begin{proof} By Proposition \ref{divass} and Theorem \ref{gkm} above it follows that $K^*_{T}(X)$ is isomorphic to the subring $\Gamma_{X}$ of $\prod_{i=1}^m K_{T}^*(x_i)$. Further, by Remark \ref{composeconstant} it follows that $\Gamma_{X}$ gets the structure of a $K^*_{T}(x_1)$-subalgebra. Indeed for every $a\in K^*_{T}(x_1)$, $(f_{i1}^*(a))\in \Gamma_{X}$ since for every $j\in N(i)$, $$f^*_{i1}(a)-f^*_{ij}(f^*_{j1}(a))=f^*_{i1}(a)-f^*_{i1}(a)=0$$ and hence trivially divisible by $1-e^{-\chi_j}$. \end{proof} \begin{remark} More generally, with the same hypotheses as Proposition \ref{gkm_rt_algebra}, one can get the following for $E= MU, H$ \begin{equation}\label{gkm_rel} \Gamma_{E_X}=\{(a_i)~:~e_{T}({V_{ij}})~\mbox{divides}~a_i-f^*_{ij}(a_j) \in E^*_{T}(x_i) ~\forall~j\in N(i)\}\subseteq \prod_{i=1}^m E_{T}^*(x_i). \end{equation} Here $V_{ij}$ is the $T$-representation corresponding to the primitive character $u\in \mathbb{Z}^n$ orthogonal to $\lambda(F)$ for each of the $(n-1)$ facets $F$ containing the edge $E_{ij}$. When there is no edge between $v_i$ and $v_j$ then $V_{ij}$ is trivial. \end{remark} \section{Piecewise Algebra and its Applications} In this section, we introduce several piecewise algebras associated to a polytope and a characteristic function on this polytope. Let $Q$ be a simple polytope and $\lambda$ a characteristic function on $Q$ as in Definition \ref{def_characteristic_function}. Consider the category $\mbox{CAT}(Q)$ whose objects are the faces $F$ of $Q$ and whose morphisms are their inclusions $$i_{F,F'} : F \hookrightarrow F'.$$ Then $CAT(Q)$ is a small category in which $Q$ is the final object. Given a face $F$, let $U_F$ be the open set of $Q$ obtained by deleting all faces which have empty intersection with $\mathring{F}$ where $\mathring{F}$ is the relative interior of $F$. If $F \subseteq F'$ then $U_{F'} \subset U_F$, which implies the inclusion $$j_{FF'} : \pi^{-1}(U_{F'}) \hookrightarrow \pi^{-1}(U_F).$$ Consider $U : CAT(Q) \to T\mbox{-}Top$ defined by $$U(F) = \pi^{-1}(U_F) ~~ \mbox{and} ~~ U(i_{FF'}) = j_{FF'}.$$ Then $U$ is a contravariant functor and the construction of toric orbifolds implies that colim$U = \bigcup_{F \in \mathcal{L}(Q)}(U(F)) = X(Q, \lambda)$. Now assume $F$ is a proper face of $Q$. Then we can assign the subgroup $T_F$ of $T$ to $F$, where $T_F$ is defined in \eqref{def_tf} for a face $F$ of $Q$. Note that $T_F$ is the stabilizer subtorus corresponding to the set $\pi^{-1}(F)$. If $F \subseteq F'$ is a face inclusion then we have $M(F') \subseteq M(F)$ where $M(F)$ is defined in section \ref{subsec_def_by_construction} for a face $F$ of $Q$. This inclusion of submodules of ${\bb{Z}}^n$ induces the natural projection $$r_{FF'} : T/T_{F'} \to T/T_{F}.$$ Define $V : CAT(Q) \to T\mbox{-}Top$ by $$V(F) = T/T_F ~~ \mbox{and} ~~ V(i_{FF'}) = r_{FF'}.$$ Then $V$ is a contravariant functor. Note that $U$ is objectwise equivariantly equivalent to the diagram $V$ above, since $$\pi^{-1}(U_F) = (T \times U_F)/\sim ~ \cong ({\bb{C}}^k/G_F) \times (T/T_F)$$ where $k = codim (F)$ and $G_F$ is defined in section \ref{subsec_def_by_construction}. Therefore the homotopy colimit of $V$ is equivariantly homotopy equivalent to $X(Q,\lambda)$.
\begin{definition}\label{piec_alg} For any complex oriented equivariant cohomology theory $E^*_{T}(-)$, we have the covariant functor $$EV: \mbox{CAT}(Q)\longrightarrow \mbox{GCALG}_{E},$$ where $\mbox{GCALG}_{E}$ is the category of graded commutative $E_{T}^*$-algebras, defined by $$EV(F):=E^*_{T}(T/T_{F}) \quad \mbox{and} \quad EV(i_{FF'})=r_{FF'}^*.$$ The limit $P_{E}(Q, \lambda)$ of $EV$ is called the $E_{T}^*$-algebra of piecewise coefficients for the pair $(Q, \lambda)$. \end{definition} Similar to \cite[Remark 4.8]{HHRW}, we note that here $P_{E}(Q, \lambda)$ is an $E_{T}^*$-subalgebra of $\prod_{F} EV(F)$, so every piecewise coefficient has one component $f_{F}$ for every face $F$ of $Q$. If $F$ is one of the vertices in $\{v_1,\ldots, v_m\}$ then $$EV(F)= EV(v_i)= E_{T_{F}}^*(pt) = E_T^* (pt),$$ so the corresponding component $f_F$ is an element of $E_{T}^*$ since $T_{F}=T$ in this case; on the other hand if $F=Q$ then $$EV(F)= EV(Q) = E_{T_F}^*(pt)=E^*$$ since $T_{Q}=\{1\}$. Moreover, if $(f_{F})\in P_{E}(Q, \lambda)$ then $r_{F',F}^*(f_{F})=f_{F'}$ whenever $F\subseteq F'$ in $Q$. Sums and products of piecewise coefficients are taken facewise. We have a canonical diagonal inclusion $E_{T}^*\subseteq P_{E}(Q, \lambda)$ as $(r^*_{F}(f))$, for $f\in E_{T}^*$ where $$r_{F}:T/T_{F}\longrightarrow T/T={pt}$$ is the projection induced by the canonical inclusion $T_{F}\subseteq T$, which clearly satisfies the compatibility condition. Also the constants $(0)$ and $(1)$ act as the zero and identity elements in $P_{E}(Q, \lambda)$ respectively. Given an edge $\mathcal{E}$ of $Q$, there is a unique collection of $(n-1)$ facets $F_{i_1}, \ldots, F_{i_{n-1}}$ such that $\mathcal{E} = F_{i_1} \cap \cdots \cap F_{i_{n-1}}$. The integral matrix $[\lambda(F_{i_1})^t, \ldots, \lambda(F_{i_{n-1}})^t]$ of size $n \times (n-1)$ determines a primitive vector $u_{\mathcal{E}}$, see just after \eqref{eq_map_chi}. Moreover, $T$ acts on the orbit $T/T_{\mathcal{E}}$ in $X$ corresponding to the edge $\mathcal{E}$ via the character $\chi_{\mathcal{E}}$. Note that $T/T_{\mathcal{E}}$ can be identified with the irreducible circle representation $S(\chi_{\mathcal{E}})$. Thus we can identify $E^*_{T}(S({\chi_{u_{\mathcal{E}}}}))$ with $E^*_{T}(T/T_{\mathcal{E}})$. Now the inclusion of $S(\chi_{u_{\mathcal{E}}})$ into the unit disc $D(\chi_{u_{\mathcal{E}}})$ determines the equivariant cofiber sequence \[S(\chi_{u_{\mathcal{E}}}) \longrightarrow D(\chi_{u_{\mathcal{E}}}) \longrightarrow S^{\mathcal{E}},\] where $S^{\mathcal{E}}$ denotes the one-point compactification $D(\chi_{u_{\mathcal{E}}})/S(\chi_{u_{\mathcal{E}}})$ with $T$-action. Applying $E^*_{T}$ yields the long exact sequence \begin{equation}\label{les} \cdots E^*_{T}(S^{\mathcal{E}}) \stackrel{.e}{\longrightarrow} E_{T}^*(D(\chi_{u_{\mathcal{E}}})) \longrightarrow E^*_{T}(S(\chi_{u_{\mathcal{E}}})) \longrightarrow \cdots \end{equation} Since $D(\chi_{u_{\mathcal{E}}})$ is equivariantly contractible and the Thom isomorphism applies to the Thom space $S^{\mathcal{E}}$, the homomorphism $.e$ may be interpreted as multiplication by the equivariant Euler class $e_{T}(\chi_{u_{\mathcal{E}}})$. Thus $.e$ is injective for $E= K, MU, H$ since $e^T(\chi_{u_{\mathcal{E}}})$ is a non-zero divisor, see proof of (A4) in Proposition \ref{divass}. Hence (\ref{les}) breaks into short exact sequences yielding isomorphisms: \begin{equation}\label{iso} E^*_{T}/e_{T}(\chi_{u_{\mathcal{E}}}) \cong E^*_{T}(S(\chi_{u_{\mathcal{E}}})) \cong E^*_{T}(T/T_{\mathcal{E}}).
\end{equation} Now, let $F$ be a face of $Q$ of dimension $k$, and write $F$ as the intersection of the facets $F_1, F_2,\ldots, F_{n-k}$. Then consider the $\mathbb{Z}$-linear map \[\psi_{F}:\mathbb{Z}^n\longrightarrow \mathbb{Z}^{n-k},\] defined by the $n\times (n-k)$-matrix $[\lambda(F_1)^t, \ldots, \lambda(F_{n-k})^t]$ of $\mathbb{Q}$-rank $(n-k)$. The kernel of this map is a free $\mathbb{Z}$-module generated by primitive vectors $u_{1},\ldots, u_{k}$ in $\mathbb{Z}^n$. In particular, for every edge $\mathcal{E}$ in $F$, the vector $u_{\mathcal{E}}$ lies in the kernel and is hence a $\mathbb{Z}$-linear combination of $u_1,\ldots, u_{k}$. Since the $u_{i}$'s are pairwise linearly independent, $e_{T}(\chi_{u_i}), 1\leq i\leq k$ are relatively prime in $E^*_{T}(pt)$ for $E=K,MU,H$, see proof of (A4) in Proposition \ref{divass}. Since $T_{F}$ acts trivially on the orbit $T/T_{F}$, $T$ acts on $T/T_{F}$ via the product of circle representations $S(\chi_{u_{1}})\times S(\chi_{u_{2}})\times\cdots\times S(\chi_{u_{k}})$ which is a deformation retract of $T/T_{F}$ as a $T$-representation. Thus we have the isomorphism \begin{equation} E^*_{T}/\langle e^T(\chi_{u_{i}}); 1\leq i\leq k\rangle\simeq E^*_{T}(S(\chi_{u_{1}})\times S(\chi_{u_{2}})\times\cdots\times S(\chi_{u_{k}}))\simeq E^*_{T}(T/T_{F}). \end{equation} Let $$ J_{F} = \langle e^T(\chi_{u_{i}}) ~:~ 1 \leq i \leq k \rangle.$$ Then $J_F$ is an ideal of $E^*_{T}$. Moreover, if $F\subseteq F'$ then $\mbox{kernel}(\psi_{F})\subseteq \mbox{kernel}(\psi_{F'})$. This gives the inclusion of the ideals $J_{F}\subseteq J_{F'}$ of $E^*_{T}$ inducing the projection $$r^*_{FF'}:E^{*}_{T}/J_{F}\longrightarrow E^*_{T}/J_{F'}.$$ This gives rise to a covariant functor $EV:\mbox{CAT}(Q)\longrightarrow GCALG_{E^*_{T}}$ defined by $$EV(F):=E^*_{T}/J_{F} = E^*_{T}(T/T_{F}) \quad \mbox{and} \quad EV(i_{FF'})=r_{FF'}^*$$ for every face $F$ of $Q$ for each $E=K,MU,H$. The limit $P_E(Q, \lambda)$ is the $E^*_{T}$-algebra of piecewise coefficients for the pair $(Q, \lambda)$. \begin{remark} \begin{enumerate} \item If $E=K$, then $e_{T}(\chi_{u_F})=1-e^{u_F}$ is clearly in the ideal generated by $e_{T}(\chi_{u_i})=(1-e^{u_i}); 1\leq i\leq k$. In this case, $P_K(Q, \lambda)$ is called the $R(T)$-algebra of piecewise Laurent polynomials for $(Q, \lambda)$. \item If $E=MU^*$ then $P_{MU}(Q, \lambda)$ is called the $MU^*_T$-algebra of piecewise cobordism forms for $(Q, \lambda)$. \item If $E=H^*$, then $P_{H}(Q, \lambda)$ is called the $H^*_T$-algebra of piecewise polynomials for $(Q, \lambda)$. \end{enumerate} \end{remark} \begin{theorem}\label{piecewise} For any divisive toric orbifold $X(Q,\lambda)$, the ring $E^*_{T}(X(Q,\lambda))$ is isomorphic as an $E^*_{T}$-algebra to $P_{E}(Q, \lambda)$ for each $E=K,MU,H$. \end{theorem} \begin{proof} It suffices to identify the algebra $\Gamma_{K_X}$ defined by (\ref{gkm_desc_kring}) with $\mbox{lim}_{GCALG}EV$. By the universal property of $\mbox{lim}_{GCALG}EV$, we shall find compatible homomorphisms $$h_{F}:\Gamma_{K_X}\longrightarrow EV(F)$$ for every face $F$ of $Q$. Given $a=(a_i)\in \Gamma_{X}$, on the vertices of $Q$ we define $h_{v_j}(a):=a_j$ for each $1\leq j\leq m$. On the edges $E_{ij}$ we let $$ h_{E_{ij}}(a):=a_{i}\equiv a_{j}\pmod {e_{T}(V_{ij})}$$ in $E^*_{T}/J_{E_{ij}}$. This definition extends to $$ h_{F}(a)=a_{i_1}\equiv a_{i_2}\equiv\cdots\equiv a_{i_k} \pmod {J_{F}},$$ where $v_{i_1},\ldots, v_{i_k}$ are the vertices of $F$.
Since $u_{\mathcal{E}_{ij}}$ generates $\mbox{kernel}(\psi_{\mathcal{E}_{ij}})\subseteq \mbox{kernel} (\psi_{F})$, $J_{F}$ contains $e_{T}(V_{ij})$ for each edge $\mathcal{E}_{ij}\subset F$. Since the $1$-skeleton of $F$ is connected, any two vertices $v_{i_{j}}$ and $v_{i_{r}}$ are connected by a path of edges in $F$. It follows that $a_{i_j}-a_{i_r}$ belongs to $J_{F}$, and therefore the map $h_{F}$ is well defined for each face $F$ of $Q$. Furthermore, since $J_{F}$ is an ideal in $E^*_{T}$, it follows that $h_{F}$ is a homomorphism of $E^*_{T}$-algebras. Moreover, the $h_{F}$'s are compatible over $CAT(Q)$. This follows as $F \subseteq F'$ implies $J_{F'}$ is obtained from $J_{F}$ by adjoining $e_{T}(\chi_{u})$ for $\displaystyle u \in \mbox{kernel}(\psi_{F'})\setminus \mbox{kernel}(\psi_{F})$. Thus the corresponding projection $$r^*_{FF'} : E^*_{T}/J_{F}\longrightarrow E^*_{T}/J_{F'}$$ is induced. Thus $h_{F'}=r^*_{FF'} \circ h_{F}$ whenever $F \subseteq F'$. Therefore we have constructed a well defined homomorphism $$ h:\Gamma_{K_X}\longrightarrow P_E(Q, \lambda)$$ of $E^*_{T}$-algebras. We now conclude by showing that the map $h$ is an isomorphism. Given $\displaystyle a\neq a' \in \Gamma_{X}\subseteq \prod_{i=1}^m E^*_{T}(x_i)$, there exists at least one $v_i$ such that $a_i\neq a'_i$. Thus $h_{v_i}(a)\neq h_{v_i}(a')$ in $E^*_{T}(x_i)$. Thus $h$ is injective. Let $(a_{F})$ be an element in the limit $P_E(Q, \lambda)$ of the functor $EV$. Then $(a_F)$ determines $(a_i)$ in $\prod_{i=1}^m E^*_{T}(x_i)$ by restricting to the vertices $a_i:=a_{v_i}\in E^*_{T}$ for $1\leq i\leq m$. Note that $$a_{\mathcal{E}_{ij}} = r^*_{v_i, \mathcal{E}_{ij}}(a_{v_i}) \quad \mbox{and} \quad a_{\mathcal{E}_{ij}} = r^*_{v_j, \mathcal{E}_{ij}}(a_{v_j}).$$ Thus ${\displaystyle a_{v_i}\equiv a_{v_j} \pmod {J_{E_{ij}}},}$ whenever $v_i$ and $v_j$ are connected by an edge $E_{ij}$ in $Q$. Since $J_{E_{ij}}$ is generated by $e_{T}(\chi_{u_{ij}})$ it follows that $a_i-a_j$ is divisible by $e_{T}(\chi_{u_{ij}})$. This implies by (\ref{gkm_desc_kring}) that $(a_i)\in \Gamma_{K_X}$ proving the surjectivity of $h$. The proofs of the other cases are similar, with slight modifications of the maps and their domains and codomains, so we omit them. \end{proof} \begin{remark} \begin{enumerate} \item In Section 4.2 of \cite{BNSS}, the authors constructed the characteristic pair corresponding to a polytopal simplicial complex. So one can define a divisive toric variety using this characteristic pair following Definition \ref{def_divisive_tor_orb}. Therefore, as a consequence, we can get a similar description of the equivariant generalized cohomology theories for divisive toric varieties arising from polytopal simplicial complexes. \item In \cite{DJ}, toric manifolds were studied and an invariant CW-structure of a toric manifold was constructed. So in particular, toric manifolds are divisive toric orbifolds, and hence Theorem \ref{piecewise} holds for this class of manifolds. \item In subsection 4.3 of \cite{BNSS}, the local groups $G_F(v)$ are computed for torus orbifolds, which are a generalization of toric orbifolds. So Definition \ref{def_divisive_tor_orb} can be introduced in this category. Thus one can get a similar description of the equivariant generalized cohomology theories for divisive torus orbifolds. \end{enumerate} \end{remark} {\bf Acknowledgement}: The authors would like to thank Anthony Bahri, Nigel Ray and Jongbaek Song for helpful conversations.
\section{Introduction} Much of algebraic geometry is governed by the numerical properties of the canonical class. Other useful properties such as uniruledness and rational connectivity have also played a major role. Also closed smooth projective varieties, and more generally K\"{a}hler manifolds have been studied extensively from both a topological and symplectic viewpoint. In complex dimension $2$, tools such as Donaldson theory and Seiberg Witten invariants have been used to study such manifolds. For instance in \cite{Witten:Monopoles} algebraic surfaces of general type have plus or minus their canonical class as diffeomorphism invariants. Plurigenera and hence Kodaira dimension for algebraic surfaces are also shown in \cite{FriedmanMorgan:AlgebraicSW} to be diffeomorphism invariants using Seiberg Witten theory. There are many other results for these surfaces. One can also study these varieties from a symplectic perspective. We can use tools such as Gromov-Witten theory to see what the symplectic structure tells us about our algebraic variety. Work by Koll\'{a}r and Ruan (\cite{Kollar:rationalcurves} and \cite{Ruan:virtual}) tells us the property of being uniruled is a symplectic invariant using Gromov Witten theory. Another extremely useful notion in algebraic geometry is rational connectedness and this has been studied from a symplectic viewpoint in \cite{Voisin:rationallyconnected} and \cite{ZTian:rationallyconnected3fold}. Less has been done to study open algebraic varieties from a symplectic perspective, although there has been some work \cite{TianJunZhang:additivity}. Also there isn't as much work in higher dimensions although there is some progress in dimension $3$ (see for instance \cite{Ruan:3folds}). This paper addresses some of these issues. We will be primarily concerned with smooth affine varieties and we will study them from a symplectic perspective. Every smooth affine variety has a symplectic structure coming from some embedding in $\mathbb C^N$ and this is a biholomorphic invariant (see \cite{EliahbergGromov:convexsymplecticmanifolds}). A particular algebraic invariant of $A$ is called the log Kodaira dimension. One can ask to what extent is the log Kodaira dimension a symplectic invariant? Log Kodaira dimension is a number $\overline{\kappa}(A)$ which takes values in $\{-\infty,0,1,\cdots,\text{dim}_\mathbb C A\}$. We say that $A$ is of log general type if $\overline{\kappa}(A) = \text{dim}_\mathbb C A$. A precise definition is given at the start of section \ref{section:logkodaira}. We show that log Kodaira dimension is a symplectic invariant for smooth acyclic affine surfaces (Theorem \ref{theorem:acyclicsurfaceinvariance}). We also show that if $A$ and $B$ are symplectomorphic smooth affine varieties such that: \begin{enumerate} \item $A$ has complex dimension $3$. \item $A$ can be compactified by a smooth normal crossing nef divisor which is linearly equivalent to some smooth divisor \item The log Kodaira dimension of $A$ is $2$. \end{enumerate} then the log Kodaira dimension of $B$ is $\leq 2$ (see Theorem \ref{theorem:logkodairaresultsindimension3}). A projective variety is {\it uniruled} if there is a rational curve passing through every point. Let $P$ and $Q$ be smooth projective varieties compactifying smooth affine varieties $A$ and $B$ respectively. We show that if $A$ is symplectomorphic to $B$ and $P$ is uniruled then $Q$ is also uniruled (Theorem \ref{theorem:birationalinvarianceofuniruledness}). 
In order to prove these theorems we introduce three notions of uniruledness for smooth affine varieties. The first notion is defined for an object called the Liouville domain associated to our affine variety. This is a symplectic invariant and is defined in Section \ref{section:uniruledliouville}. The second notion says that a smooth affine variety is algebraically $k$ uniruled if there is a morphism from $\P^1$ minus at most $k$ points to our variety passing through a generic point and is defined in Section \ref{section:affineuniruled}. The third notion defined in Section \ref{section:affineuniruled2} is more flexible than the second notion as it now involves $J$ holomorphic curves from $\P^1$ minus some points where $J$ is any appropriate almost complex structure. This notion is called compactified $k$ uniruled. We show by using degeneration to the normal cone techniques that the first definition implies the second definition (Theorem \ref{theorem:kuniruledimpliesalgebraicallyuniruled}). Also using other simpler techniques we can show that third definition implies the first one (Theorem \ref{theorem:algebraicuniruledimplesuniruled}). Putting all of this together one gets that if $A$ is symplectomorphic to $B$ and is compactified $k$ uniruled then $B$ is algebraically $k$ uniruled. Because there is a relationship between log Kodaira dimension and uniruledness in low dimensions (see \cite{MiyanishiSugie:affinessurfaces},\cite{Miyanishi:openaffine}, \cite{Kawamata:classification} and \cite{Kishimoto:affinethreefolds}) we obtain our log Kodaira dimension results. Similarly if $P$ and $Q$ are projective with symplectomorphic affine open subsets $A$ and $B$ such that $P$ is uniruled, then one can show that $A$ is compactified $k$ uniruled for some $k$. Hence $B$ is algebraically $k$ uniruled which in turn implies that $Q$ is uniruled. The paper is organized as follows: In Section \ref{section:uniruledliouville} we introduce the reader to uniruled Liouville domains (first definition). These are purely symplectic objects. In Section \ref{section:affineuniruled} we give a purely algebraic definition of uniruledness for smooth affine varieties (second definition) and relate it to uniruled Liouville domains. In Section \ref{section:GWintro} we give an introduction to Gromov Witten invariants, then in Section \ref{section:affineuniruled2} we give a much more flexible definition of uniruledness (third definition) for smooth affine varieties. In Section \ref{section:logkodaira} we use all of the above machinery to prove our log Kodaira dimension invariance results and finally in Section \ref{section:unirulednesscompactifications} we prove that projective varieties with symplectomorphic open affine subsets are either both uniruled or both not uniruled. \bigskip {\it Acknowledgements:} I would like to thank Paul Seidel and Ivan Smith for their help in this project. The author was partially supported by NSF grant DMS-1005365 and also a grant to the Institute for Advanced Study by The Fund for Math. \section{Uniruled Liouville domains} \label{section:uniruledliouville} Throughout this paper we will use the following notation. If $U$ is any subset of a topological space then we write $U^o$ for the interior of $U$. Also if $(N,\omega)$ is a symplectic manifold and $\theta$ is a $1$-form then we write $X_{\theta}$ to be the unique vector satisfying $\iota_{X_{\theta}} \omega = \theta$. 
Let $M$ be a compact manifold with boundary with a $1$-form $\theta_M$ satisfying: \begin{enumerate} \item $\omega_M := d\theta_M$ is a symplectic form. \item The $\omega_M$-dual $X_{\theta_M}$ of $\theta_M$ points outwards along $\partial M$. \end{enumerate} We say that $(M,\theta_M)$ is a {\it Liouville domain} if it satisfies the above properties. Let $J$ be an almost complex structure compatible with the symplectic form $\omega_M$. We say that $J$ is a {\it convex almost complex structure} on $M$ if there is some function $\phi : M \rightarrow \mathbb R$ so that: \begin{enumerate} \item $\partial M$ a regular level set of $\phi$ and $\phi$ attains its maximum on $\partial M$. \item $\theta_M \circ J = d\phi$ near $\partial M$. \end{enumerate} Suppose that $(N, \omega_N)$ is a symplectic manifold and let $J_N$ be an almost complex structure. If $u : S \rightarrow N$ is a $J_N$-holomorphic map from a Riemann surface $S$ to $N$ then the {\it energy of} $u$ is defined to be $\int_S u^* \omega_N$. \begin{defn} Let $k>0$ be an integer and $\lambda > 0$ a real number. We say that a Liouville domain $M$ is {\bf $(k,\Lambda)$-uniruled} if for every convex almost complex structure $J$ on $M$ and every point $p \in M^o$ where $J$ is integrable on a neighbourhood of $p$, there is a proper $J$ holomorphic map $u : S \rightarrow M^o$ to the interior $M^o$ of $M$ passing through this point. We require that $S$ is a genus zero Riemann surface, the rank of $H_1(S,\mathbb Q)$ is at most $k-1$ and the energy of $u$ is at most $\Lambda$. \end{defn} \begin{theorem} \label{theorem:subdomainuniruled} Suppose that $N,M$ are Liouville domains such that $M$ is a codimension $0$ symplectic submanifold of $N$ with the property that there exists some $1$-form $\theta'$ on $N$ so that $\theta'|_M - \theta_M$ is exact and so that $d\theta' = d\theta_N$. If $N$ is $(k,\Lambda)$-uniruled then $M$ is also $(k,\Lambda)$-uniruled. In particular, the above fact is true if $M$ is a codimension $0$ exact submanifold of $N$ or if the inclusion map $M \hookrightarrow N$ is a symplectic embedding and a homotopy equivalence. \end{theorem} Before we prove this theorem we need some preliminary lemmas and definitions. The following definitions are technically not relevant for the theorem above, but one of the lemmas used in proving this theorem will also be used later on in a slightly more general context. A {\it nodal Riemann surface} is a $1$ dimensional complex analytic variety with the property that the only singularities are nodal. We say that it has {\it arithmetic genus $0$} if it can be holomorphically embedded into a simply connected compact nodal Riemann surface. An example of an arithmetic genus zero surface is: $B(1) \cap \{z_1z_2 = 0\} \subset \mathbb C^2$ where $B(1)$ is the open unit ball and $z_1,z_2$ are coordinates for $\mathbb C^2$. Note that a genus zero nodal Riemann surface is a union $S_1,\cdots,S_k$ of smooth Riemann surfaces which only intersect each other at the nodal singularities of $S$. Here $S_1,\cdots,S_k$ are called the {\it irreducible components of $S$}. An {\it arithmetic genus $0$ nodal Riemann surface with boundary} is a closed subset $S$ of a compact arithmetic genus $0$ nodal Riemann surface $C$ with the property that away from the nodes of $C$, $S$ is a Riemann surface with boundary. We require that the closure of this boundary does not intersect the nodes of $C$. This means that the boundary is a union of circles. 
An example of such a holomorphic object would be the closure of $B(1) \cap \{z_1z_2 = 0\}$. Again an arithmetic genus $0$ nodal Riemann surface with boundary is a union of smooth Riemann surfaces with boundary intersecting each other at the nodal singularities away from their boundaries. These smooth Riemann surfaces with boundary are called the {\it irreducible components of $S$}. We can form a graph $\Gamma_S$ whose nodes are the irreducible components of $S$ and if two irreducible components intersect at some point $p$ then we have an edge $E_p$ joining the appropriate nodes. This is called the {\it dual graph} of $S$. The dual graph of every connected arithmetic genus zero compact Riemann surface is a tree. \begin{tikzpicture} [scale=1.0] \draw[color=lightgray] (5,5) ellipse (1 and 0.25); \draw (5,5) node {\tiny A} circle (1); \draw[shift={(5,5)},rotate=35,shift={(-5,-5)}] (7,5)+(0,1) arc (90:270:1); \draw[shift={(5,5)},rotate=130,shift={(-5,-5)}] (7,5)+(0,1) arc (90:270:1); \draw[shift={(5,5)},rotate=35,shift={(-5,-5)}] (7,5) node {\tiny B} ellipse (0.25 and 1); \draw[shift={(5,5)},rotate=130,shift={(-5,-5)}] (7,5) node {\tiny C} ellipse (0.25 and 1); \draw[shift={(5+5,5)},rotate=130,shift={(-5,-5)}] (5,5) -- (7,5) node[below,left] {\tiny C}; \draw[shift={(5+5,5)},rotate=35,shift={(-5,-5)}] (5,5) node[below] {\tiny A} -- (7,5) node[below,right] (7,5) {\tiny B}; \draw[fill=black] (5+5,5) circle (0.05); \draw[shift={(5+5,5)},rotate=35,shift={(-5,-5)},fill=black] (7,5) circle (0.05); \draw[shift={(5+5,5)},rotate=130,shift={(-5,-5)},fill=black] (7,5) circle (0.05); \node at (4,3.5) { Riemann surface.}; \node at (4+5,4.5) { Dual graph. }; \end{tikzpicture} \begin{lemma} \label{lemma:topologyofcurve} Let $M$ be a Liouville domain and $N$ any exact symplectic manifold so that $M$ is a codimension $0$ symplectic submanifold of $N$ with the additional property that there is some $1$-form $\theta'$ on $N$ so that $\theta'|_M - \theta_M$ is exact and so that $d\theta' = d\theta_N$. Let $J$ be a compatible almost complex structure on $N$ so that $J$ restricted to $M$ is a convex almost complex structure. If $u : S \rightarrow N^o$ is a $J$-holomorphic curve with the property that $u^{-1}(M)$ is compact and $S$ is an arithmetic genus zero nodal Riemann surface then the map $H_1(u^{-1}(M^o)) \rightarrow H_1(S)$ is injective. \end{lemma} \proof of Lemma \ref{lemma:topologyofcurve}. By definition we can choose a collar neighbourhood $(1-\epsilon,1] \times \partial M$ of $\partial M$ inside $M$ so that $d\theta_M \circ J= dr$ where $r$ parameterizes the interval $(1-\epsilon,1]$. For $R \in (1-\epsilon,1)$ we define $M_R$ to be $M \setminus \{r > R\}$. We will show that $H_1(u^{-1}(M_R)) \rightarrow H_1(S)$ is injective for generic $R$ and this will prove the theorem because $H_1(u^{-1}(M^o))$ is the direct limit of $H_1(u^{-1}(M_R))$ as $R$ tends to $1$. For generic $R$, $\partial M_R$ is transverse to $u$. This means that $S_R := u^{-1}(M_R)$ is an arithmetic genus $0$ compact nodal Riemann surface with boundary. Also the closure of $S \setminus S_R$ is a possibly non-compact nodal Riemann surface with boundary equal to $\partial S_R$. We will write this closure as $S^c_R$. We let $\theta$ be a $1$-form on $N$ so that $\theta'-\theta$ is exact and so that $\theta = \theta_M$ on a neighbourhood of $M_R$. The maximum principle \cite[Lemma 7.2]{SeidelAbouzaid:viterbo} using the $1$-form $\theta$ tells us that every irreducible component of $S^c_R$ is non-compact. 
Let $S'_1,\cdots,S'_{l'}$ be these irreducible components. These are non-compact Riemann surfaces with compact boundary. Hence they have the property that $H_1(\partial S'_i) \to H_1(S'_i)$ is injective. This in turn implies that $H_1(\partial S^c_R) \to H_1(S^c_R)$ is injective. Because $S$ is the union of $S_R$ and $S^c_R$ along $\partial S^c_R$, we have by a Mayor-Vietoris argument that the map $H_1(S_R) \rightarrow H_1(S)$ is injective. Hence $H_1(u^{-1}(M^o)) \rightarrow H_1(S)$ is injective. \qed \begin{lemma} \label{lemma:convexalmostcomplexstructureexistence} Every Liouville domain $(M,\theta_M)$ has a convex almost complex structure. \end{lemma} \proof of Lemma \ref{lemma:convexalmostcomplexstructureexistence}. By flowing $\partial M$ backwards along $X_{\theta_M}$ we have a neighbourhood $(1-\epsilon,1] \times \partial M$ of $M$ so that $\theta_M = r \alpha_M$ where $r$ parameterizes $(1-\epsilon,1]$ and $\alpha_M$ is the contact form $\theta_M|_{\partial M}$. We define the vector bundle $V_1$ to be the span of the vectors $\frac{\partial}{\partial r}$ and $X_{\theta_M}$ and $V_2$ to be the set of vectors in the kernal of $dr$ and $\theta_M$. Near $\partial M$, we have that $TM = V_1 \oplus V_2$ and $V_1$, $V_2$ are symplectically orthogonal. We define $J$ so that: \begin{enumerate} \item $J$ is compatible with the symplectic form $\omega_M$. \item $J(V_1) = V_1$ and $J(V_2) = V_2$ near $\partial M$. This can be done because these vector spaces are $\omega_M$ orthogonal. \item $J(r\frac{\partial}{\partial r}) = X_{\theta_M}$. \end{enumerate} Here $\theta_M \circ J = dr$ near $\partial M$ and so $J$ is a convex almost complex structure. \qed \proof of Theorem \ref{theorem:subdomainuniruled}. Let $J$ be a convex almost complex structure on $M$. By Lemma \ref{lemma:convexalmostcomplexstructureexistence}, we have that $N$ admits some convex almost complex structure $J_N$. Because the space of all almost complex structures compatible with a symplectic form is contractible we can choose a compatible almost complex structure $J'$ so that $J' = J_N$ near $\partial N$ and $J'|_M = J$. Let $p \in M$ be a point in the interior of $M$ such that $J$ is integrable on a neighbourhood of $p$. Because $N$ is $(k,\Lambda)$-uniruled, we have that there exists a proper holomorphic map $u : S \rightarrow N^o$ passing through $p$ of energy at most $\Lambda$. Also the rank of $H_1(S,\mathbb Q)$ is at most $k-1$. By Lemma \ref{lemma:topologyofcurve}, the rank of $H_1(u^{-1}(N^o),\mathbb Q)$ is also at most $k-1$. Hence $u|_{u^{-1}(N^o)}$ is a proper $J$ holomorphic map in $M^o$ of energy at most $\Lambda$ and passing through $p$, where $|H_1(u^{-1}(N^o),\mathbb Q)| \leq k-1$. This implies that $M$ is $(k,\Lambda)$-uniruled. \qed If $(M,\theta_M)$ is a Liouville domain, then by flowing $\partial M$ backwards along $X_{\theta_M}$ we have a neighbourhood $(1-\epsilon,1] \times \partial M$ of $M$ so that $\theta_M = r \alpha_M$ where $r$ parameterizes $(1-\epsilon,1]$ and where $\alpha_M$ is a contact form on $\partial M$. If we glue $[1,\infty) \times \partial M$ to $M$ along $\partial M$ and extend $\theta_M$ by $r\alpha_M$, then we get a new exact symplectic manifold $\widehat{M}$ called the {\it completion of} $M$. \begin{theorem} \label{theorem:symplecticinvariance} Let $M$, $N$ be two Liouville domains such that $\widehat{M}$ is symplectomorphic to $\widehat{N}$. If $M$ is $(k,\Lambda)$-uniruled then there exists a $\Lambda' > 0$ such that $\widehat{N}$ is $(k,\Lambda')$-uniruled. 
\end{theorem} \proof of Theorem \ref{theorem:symplecticinvariance}. Let $\phi : \widehat{M} \rightarrow \widehat{N}$ by our symplectomorphism. By \cite[Lemma 1]{eliashberg:symplectichomology} we can assume that $\phi$ is an exact symplectomorphism which means that $\phi^* \theta_N = \theta_M + df$ for some function $f$. Let $\Phi_t : \widehat{M} \rightarrow \widehat{M}$ be the time $t$ flow of the vector field $X_{\theta_M}$. Because $X_{\theta_M}$ is equal to $r \frac{\partial}{\partial r}$ near infinity where $r$ is the cylindrical coordinate on $\widehat{M}$, we get that for some $T \geq 0$, $\phi^{-1}(N) \subset \Phi_T(M)$. Because $\Phi_T^* \theta_M = e^T \theta_M$, we get by a rescaling argument that the Liouville domain $\phi_T(M)$ is $(k,e^T \Lambda)$-uniruled. Because $N$ is a codimension $0$ exact symplectic submanifold of $\phi_T(M)$, we have by Lemma \ref{theorem:subdomainuniruled} that $N$ is $(k,\Lambda')$-uniruled where $\Lambda' = e^T \Lambda$. \qed If we have two Liouville domains $(M,\theta_M)$ and $(N,\theta_N)$ then they are {\it Liouville deformation equivalent} if there is a diffeomorphism $\phi : M \rightarrow N$ and a smooth family of $1$-forms $\theta^t_M$ ($t \in [0,1]$) on $M$ with the property that: \begin{enumerate} \item $\theta^0_M = \theta_M$, \item $\theta^1_M = \phi^* \theta_N$, and \item $(M,\theta^t_M)$ is a Liouville domain for each $t$. \end{enumerate} \begin{corollary} \label{corollary:deformationinvariance} Let $M$, $N$ be two Liouville deformation equivalent Liouville domains. If $M$ is $(k,\Lambda)$-uniruled then there exists a $\Lambda' > 0$ such that $\widehat{N}$ is $(k,\Lambda')$-uniruled. \end{corollary} \proof of Corollary \ref{corollary:deformationinvariance}. We will first show that $\widehat{M}$ is symplectomorphic to $\widehat{N}$ (which is a standard fact) and then we will use Theorem \ref{theorem:symplecticinvariance}. Let $\theta^t_M$ be our Liouville deformation on $M$. By construction, we can complete all the Liouville domains $(M,\theta^t_M)$ giving us a manifold $\widehat{M}$ and a smooth family of $1$-forms $\theta^t_M$ on $\widehat{M}$ by abuse of notation. The vector field $X_t$ given by the $d\theta^t_M$-dual of $\frac{d}{dt}(\theta^t_M)$ is integrable. This is because $dr(X_t)$ is less than or equal to some constant times $r$ where $r$ is the cylindrical coordinate on $\widehat{M}$. The time $1$ flow of this vector field gives us a symplectomorphism from$(\widehat{M},\theta^0_M)$ to $(\widehat{M},\theta^1_M)$. So because $\theta_M = \theta^0_M$ and $\phi^* \theta_N = \theta^1_M$ we get that $(\widehat{M},\theta_M)$ is symplectomorphic to $(\widehat{N},\theta_N)$ Hence by Theorem \ref{theorem:symplecticinvariance}, we have that $N$ is $(k,\Lambda')$-uniruled for some $\Lambda' \geq 0$. \qed \section{Uniruled smooth affine varieties} \label{section:affineuniruled} We say that an affine variety $A$ is {\it algebraically $k$-uniruled} if through every point $p \in A$ there is a polynomial map $S \rightarrow A$ passing through $p$ where $S$ is equal to $\P^1$ with at most $k$ punctures. We want to relate this algebraic definition of uniruledness with the one in the last section. In order to do this we need to associate a Liouville domain with $A$. \begin{defn} \label{defn:canonicasymplecticformassociatedtoA} Let $A \subset \mathbb C^N$ be a smooth affine variety in $\mathbb C^N$. 
Then we define the $1$-form $\theta_A$ to be equal to $\sum_{i=1}^N \frac{1}{2}r_i^2 d\vartheta_i$ restricted to $A$ where $(r_i,\vartheta_i)$ are polar coordinates for the $i$th $\mathbb C$ factor. We have that $\omega_A := d\theta_A$ is a biholomorphic invariant \cite{EliahbergGromov:convexsymplecticmanifolds}. Also for $R \gg 1$, $(B(R) \cap A, \theta_A)$ is a Liouville domain by \cite[Lemma 2.1]{McLean:affinegrowth} where $B(R)$ is the closed ball of radius $R$. We will write $(\overline{A},\theta_A)$ for such a Liouville domain and call it a {\bf Liouville domain associated to } $A$. \end{defn} If $A_1,A_2$ are two isomorphic smooth affine varieties then any Liouville domain associated to $A_1$ is Liouville deformation equivalent to any Liouville domain associated to $A_2$ by Lemma \ref{lemma:affineLiouvilledomain} in the Appendix. The problem with the symplectic form $\omega_A$ is that it gives $A$ infinite volume. But we need to compactify $A$ so in order to deal with this we need another symplectic structure on $A$ which is compatible with the compactification $X$ of $A$. \begin{defn} \label{defn:symplecticformonaffinevariety} Let $A$ be a smooth affine variety and $X$ a smooth projective variety such that $X \setminus A$ is a smooth normal crossing divisor (an SNC compactification). Let $L$ be an ample line bundle on $X$ given by an effective divisor $D$ whose support is $X \setminus A$. From now on such a line bundle will be called a {\bf line bundle associated to an SNC compactification $X$ of $A$}. Suppose $|\cdot|$ is some metric on $L$ whose curvature form is a positive $(1,1)$ form. Then if $s$ is some section of $L$ such that $s^{-1}(0) = D$ then we define $\phi_{s,|\cdot|} := -\log{|s|}$ and $\theta_{s,|\cdot|} := -d^c \phi_{s,|\cdot|}$. The two form $d\theta_{s,|\cdot|}$ extends to a symplectic form $\omega_{|\cdot|}$ on $X$ (which is independent of $s$ but does depend on $|\cdot|$). We will say that $\phi_{s,|\cdot|}$ is a {\bf plurisubharmonic function associated to $L$}, $\theta_{s,|\cdot|}$ a {\bf Liouville form associated $L$} and $\omega_{|\cdot|}$ a {\bf symplectic form on $X$ associated to $L$}. \end{defn} The aim of this section is to prove the following: \begin{theorem} \label{theorem:kuniruledimpliesalgebraicallyuniruled} Let $A$ be a smooth affine variety and $\overline{A}$ its associated Liouville domain. Then if $\overline{A}$ is $(k,\Lambda)$-uniruled then $A$ is algebraically $k$-uniruled. \end{theorem} We need some preliminary lemmas before we prove this theorem. \begin{lemma} \label{lemma:algebraicvarietysymplecticforminvariance} Let $L$ be a line bundle associated to an SNC compactification $X$ of $A$ and let $|\cdot|_1$, $|\cdot|_2$ be two metrics on $L$ whose curvature forms are positive $(1,1)$-forms. Then $(A,\omega_{|\cdot|_1})$ is symplectomorphic to $(A,\omega_{|\cdot|_2})$. \end{lemma} \proof of Lemma \ref{lemma:algebraicvarietysymplecticforminvariance}. We have a smooth family of symplectic forms $\omega_t := (1-t)\omega_{|\cdot|_1} + t\omega_{|\cdot|_2}$ on $X$. By a Moser argument we have a smooth family of symplectomorphisms $\phi_t : (X,\omega_{|\cdot|_1}) \to (X,\omega_t)$. Let $D_t := \phi_t^{-1}(D)$ where $D$ is the associated compactification divisor for $A$. We have that $(X \setminus D_0,\omega_{|\cdot|_1})$ is symplectomorphic to $(X \setminus D_1, \omega_{|\cdot|_1})$ by \cite[Lemma 5.15]{McLean:affinegrowth}. Hence $(A,\omega_{|\cdot|_1})$ is symplectomorphic to $(A,\omega_{|\cdot|_2})$. 
\qed Let $(M,\theta_M)$ be an exact symplectic manifold. We say that it is a {\it finite type Liouville manifold} if there is an exhausting function $f$ (i.e. it is proper and bounded below) with the property that $df(X_{\theta_M}) > 0$ outside some compact set. Here $X_{\theta_M}$ is the $\omega_M := d\theta_M$-dual of $\theta_M$. We say that a finite type Liouville manifold $M$ is {\it strongly bounded} if there is some compact set ${\mathcal K} \subset M$ and a constant $T>0$ so that for every point $p$ outside ${\mathcal K}$, the time $T$ flow of $p$ along $X_{\theta_M}$ does not exist. In other words every point outside ${\mathcal K}$ flows to infinity within some fixed finite time. \begin{lemma} \label{lemma:boundedembedding} Let $(M,\theta_M)$ be a strongly bounded finite type Liouville manifold and let $f$ be its exhausting function. Choose $C \gg 1$ so that $df(X_{\theta_M}) >0$ when $f \geq C$. Then there is a $\nu > 0$ and an exact symplectic embedding of $(M,\theta_M)$ into the Liouville domain $(f^{-1}(-\infty,C],\nu \theta_M))$. This embedding is also a homotopy equivalence. \end{lemma} \proof of Lemma \ref{lemma:boundedembedding}. We define $M_C := f^{-1}(-\infty,C]$. Because $(M,\theta_M)$ is strongly bounded and that $df(X_{\theta_M}) >0$ when $f \geq C$, we have a constant $T$ so that every flowline starting at a point $p$ with $f(p) \geq C$ flows to infinity in time less than $T$. Let $\phi_t : M \rightarrow M$ be the time $t$ flow of $-X_{\theta_M}$. We define an embedding $\iota : M \hookrightarrow M_C$ by $\phi_T$. The reason why $\phi_T$ sends $M$ into $M_C$ is because we know that points outside $M_C$ flow to infinity in time less than $T$. We have that $e^T \iota^* \theta_M = \theta_M$. Hence $\iota$ is an exact symplectic embedding of $(M,\theta_M)$ into $(M_C,\nu \theta_M)$ where $\nu = e^T$. \qed \begin{lemma} \label{lemma:affinevarietycontainedinliouvilledomain} Let $L$ be a line bundle associated to an SNC compactification $X$ of a smooth affine variety $A$ and let $\theta_{s,\|\cdot\|}$ be a Liouville form associated to $L$. Then there is some Liouville domain $(M,\theta)$ Liouville deformation equivalent to $(\overline{A},\theta_A)$ and an exact symplectic embedding $(A,\theta_{s,\|\cdot\|}) \hookrightarrow (M,\theta)$ which is also a homotopy equivalence. \end{lemma} \proof of Lemma \ref{lemma:affinevarietycontainedinliouvilledomain}. In order to prove this Lemma we will show that $(A,\theta_{s,\|\cdot\|})$ is a strongly bounded finite type Liouville manifold and then use Lemma \ref{lemma:boundedembedding} to finish off the proof. We write $D := s^{-1}(0)$ and $f_A := \phi_{s,\|\cdot\|} = -\log{\|s\|}$ our plurisubharmonic function associated to $L$. Locally around some point in $D$ we have a holomorphic chart $z_1,\cdots,z_n$ so that $X \setminus A$ is equal to $z_1z_2 \cdots z_k = 0$. The line bundle $L$ is trivial over this chart and $s = z_1^{a_1} \cdots z_k^{a_k}$ with respect to this trivialization for some $a_1,\cdots,a_k > 0$. The metric $\|\cdot\|$ is equal to $e^\rho |\cdot|$ where $|\cdot|$ is the standard Euclidean metric on our trivialization of $L$. So $f_A = -\log{\|s\|} = -\rho - \sum_{i=1}^k a_i \log{|z_i|}$ in the above coordinate chart. We have that $\theta_A = -d^c f_A$ and $\omega_A = -dd^c f_A$. So $df_A(X_{\theta_A}) = \|d f_A\|_J^2$ where $\|\cdot\|_J$ is the metric on the real cotangent bundle of $X$. 
There is some constant $\gamma>0$ so that $\|df_A\|^2_J \geq \gamma |df_A|^2$ where $|\cdot|$ (by abuse of notation) is the standard Euclidean metric with respect to our coordinate chart $z_1, \cdots, z_n$. This implies that: \[df_A(X_{\theta_A}) \geq \gamma \left( -|d\rho|^2 + \sum_{i=1}^k a_k^2 \left|\frac{1}{z_i}\right|^2 \right).\] We can assume that the functions $|z_i|$ are bounded above by some constant. This implies that for any $a>0$, $b \in \mathbb R$, there is a constant $\kappa > 0$ such that if $\text{min}(|z_i|,|z_j|) < \kappa$ then $|\frac{1}{z_i}|^2+|\frac{1}{z_j}|^2 \geq a{\log(|z_i|)\log(|z_j|)}+b$. Because $f_A = -\rho - \sum_{i=1}^k a_i \log{|z_i|}$ we have by the previous two inequalities that \begin{equation} \label{eqn:differentialinequality} df_A(X_{\theta_A}) \geq f_A^2 \end{equation} sufficiently near $D$. Now lets look at a flowline $x(t)$ of $X_{\theta_A}$ near $D$. We have that $y(t) := f_A(x(t))$ satisfies \[ \frac{dy}{dt} \geq y^2\] by equation (\ref{eqn:differentialinequality}). Solving such a differential inequality gives us: \[ y \geq \frac{y(0)}{1-y(0)t} \] whose solution blows up in time less than $\frac{1}{y(0)}$ (as we can assume $y(0) > 0$ because we are near $D$). This implies that if we are inside the region $f_A^{-1}(1,\infty)$ and also sufficiently near $D$ then every flowline of $X_{\theta_A}$ flows off to infinity in time less than $1$. Hence $(A,\theta_A)$ is a strongly bounded finite type Liouville manifold. So by Lemma \ref{lemma:boundedembedding} we have an exact symplectic embedding of $(A,\theta_A)$ into $(M,\theta) := (f_A^{-1}(-\infty,C],\nu \theta_A)$ where $C,\nu \gg 1$. This embedding is a homotopy equivalence. By Lemma \ref{lemma:affineLiouvilledomain}, $\overline{A}$ is Liouville deformation equivalent to $f_A^{-1}(-\infty,C]$ for any $C \gg 0$ which in turn is Liouville deformation equivalent to $(M,\theta)$. \qed The following lemma is technical and will be useful here when $J$ is standard and also useful later on. \begin{lemma} \label{lemma:connecteddomain} Let $A$ be a smooth affine variety and $X$ a smooth projective variety compactifying $A$. We let $J$ be an almost complex structure on $X$ which agrees with the standard complex structure on $X$ near $X \setminus A$. Let $u : S \rightarrow X$ be a $J$ holomorphic map where $S$ is a compact nodal Riemann surface such that no component of $S$ maps entirely in to $X \setminus A$. Let $\phi_{s,\|\cdot\|}$ be some plurisubharmonic function associated to an ample line bundle $L$ on $X$. Then near $u^{-1}(X \setminus A)$ we have that $\phi_{s,\|\cdot\|} \circ u$ has no singularities. \end{lemma} \proof of Lemma \ref{lemma:connecteddomain}. We will define $\phi := \phi_{s,\|\cdot\|} \circ u$. Near $u^{-1}(X \setminus A)$ there is a holomorphic section $s$ of $u^* L$ whose zero set is $u^{-1}(X \setminus A)$ such that $\phi = -\log{\|s\|}$. Let $p \in u^{-1}(X \setminus A)$. We want to show that $\phi$ has no singularities near $p$. After trivializing $u^* L$ near $p$ we have that $s = g(z) z^l$ where $z$ is some coordinate function on $S$ near $p = \{z=0\}$ and $g$ is a non-zero holomorphic function near $p$. Also $\|s\| = e^{-\psi} |s|$ where $|\cdot|$ is the standard Euclidean metric on the trivialization of $L$ and $\psi$ is some function. If we choose polar coordinates $z = re^{i\vartheta}$ then $\phi = \psi - l\log{r} - \log{|g(z)|}$. 
So $-\frac{\partial}{\partial r}(\phi)$ tends to infinity as $r$ tends to $0$ because $-\frac{\partial}{\partial r}(\psi - \log{|g(z)|})$ is bounded but $\frac{\partial}{\partial r}(l\log{r}) = \frac{l}{r}$. Hence $\phi$ has no singularities near $u^{-1}(X \setminus A)$. \qed The next lemma shows us how to relate holomorphic curves inside smooth affine varieties with algebraic curves inside compactifications of these smooth affine varieties. This technique is a degeneration technique. \begin{lemma} \label{lemma:compactnessresultforvarieties} Suppose we have a morphism $\pi : Q \rightarrow \mathbb C$ of smooth quasiprojective varieties with the following properties: \begin{enumerate} \item There is a symplectic form on $Q$ compatible with the complex structure. \item The central fiber $\pi^{-1}(0)$ is equal to $F \cup E$ where $F$ and $E$ have the same dimension and where $F$ is a projective variety. \item $Q \setminus E$ is isomorphic to a product $B \times \mathbb C$ where $B$ is a smooth affine variety. The morphism $\pi$ under the above isomorphism is the projection map $B \times \mathbb C \twoheadrightarrow \mathbb C$. \item There is a sequence $x_i \in \mathbb C \setminus \{0\}$ tending to zero as $i$ tends to infinity and a holomorphic map $u_{x_i} : S_{x_i} \rightarrow \pi^{-1}(x_i)$ for each $i$. Here $S_{x_i}$ is a smooth genus zero Riemann surface with $|H_1(S_{x_i},\mathbb Q)| \leq k-1$. This map is not necessarily a proper map and it has energy bounded above by some constant $\Lambda$ with respect to $\omega$ where $\Lambda$ is independent of $i$. \item There is a neighbourhood $N$ of $F$ with the property that $u_{x_i}|_{u_{x_i}^{-1}(N)}$ is a proper map for $i$ sufficiently large. \item All these curves $u_{x_i}$ pass through some point $p_{x_i} \in \pi^{-1}(x_i)$ where $p_{x_i}$ tends to some point $p \in F \setminus E$ as $i$ tends to $\infty$. \end{enumerate} Then there is a non-trivial holomorphic curve $v : \P^1 \rightarrow F$ with the property that $v^{-1}(E)$ is at most $k$ points in $\P^1$. Also $p$ is contained in the image of $v$. If all the curves $u_{x_i}$ are contained inside some closed subvariety $V$ of $Q$ then $v$ is also contained inside $V$. \end{lemma} \proof of Lemma \ref{lemma:compactnessresultforvarieties}. Choose real compact codimension zero submanifolds with boundary $N_1,N_2$ of $N$ with the property that the interior of $N_1$ contains $F$ and the interior of $N_2$ contains $N_1$. We can perturb the boundaries of these manifolds $N_1$ and $N_2$ so that they are transverse to $u_{x_i}$ for all $i$. Consider the holomorphic curves $u'_{x_i} := u_{x_i}|_{u_x^{-1}(N_2)}$. By a compactness argument \cite{fish:compactness} using the manifold $N_2$ and curves $u'_{x_i}$, we have (after passing to a subsequence), a sequence of compact subcurves $\widetilde{S}_i \subset S_{x_i}$ with the following properties: \begin{enumerate} \item the boundary of $\widetilde{S}_i$ is sent by $u_{x_i}$ outside $N_1$. \item There is a compact surface $\widetilde{S}$ with boundary and a sequence of diffeomorphisms $a_i : \widetilde{S} \rightarrow \widetilde{S}_i$ so that $u_{x_i} \circ a_i$ $C^0$ converge to some continuous map $v' : \widetilde{S} \rightarrow Q$. This continuous map is smooth away from some union of curves $\Gamma$ in the interior of $\widetilde{S}$ and $u'_{x_i} \circ a_i$ converge in $C^{\infty}_{\text{loc}}$ to $v'$ outside $\Gamma$. 
\item The map $v'$ is equal to $v'' \circ \phi$ where $v''$ is a holomorphic map from a nodal Riemann surface $S$ with boundary to $N_2$ and $\phi : \widetilde{S} \rightarrow S$ is continuous surjection, and a diffeomorphism onto its image away from $\Gamma$. The curves $\Gamma$ get mapped to the nodes of $S$ under $\phi$. The map $v''$ sends the boundary of $S$ outside $N_1$. \end{enumerate} Because each of the curves $u_{x_i}$ are contained in $\pi^{-1}(x_i)$ and $x_i$ tends to zero we also get that the image of $v''$ is contained in $\pi^{-1}(0) = F \cup E$. Also because the points $p_{x_i}$ converge to $p \in F$, we have some $s \in S$ such that $v''(s) = p$. Because $v''$ sends the boundary of $S$ outside $N_1$ and that the interior of $N_1$ contains $F$, we have an irreducible component $\P^1 = K$ inside our nodal Riemann surface $S$ where $v''$ maps $K$ to $F$. Also we can assume that $s \in K$. Our map $v$ is defined as $v''$ restricted to $K = \P^1$. We now wish to show that $v^{-1}(E)$ is at most $k$ points. There is a natural exhausting plurisubharmonic function $\rho$ on $B$ which we can construct using Definition \ref{defn:canonicasymplecticformassociatedtoA}. We pull back $\rho$ to $\widetilde{\rho}$ under the natural projection map $P_B : B \times \mathbb C \twoheadrightarrow B$. Here we identify $B \times \mathbb C$ with $Q \setminus E$. We know that $v^{-1}(E)$ is the disjoint union of $l$ points for some $l$. We want to show that $l \leq k$. The image $v(K)$ of $v$ is a one dimensional projective subvariety of $F$, and $v(K) \setminus E$ is a (possibly singular) affine subvariety of $B$. By Lemma \ref{lemma:connecteddomain} we have that $\rho$ restricted to $v(K)$ is non-singular outside a compact set. Hence for $C \gg 1$ we have that $\rho^{-1}(C)$ is transverse to $v$ and that $(\rho \circ v)^{-1}(C)$ is a disjoint union $l$ circles. Also by making $C$ large enough, we have that $v^{-1}(\rho^{-1}(-\infty,C])$ is connected. Because the maps $u_{x_i} \circ a_i$ converge in $C^\infty_{\text{loc}}$ to $v$ near these $l$ circles for $i$ large enough and that these maps $C^0$ converge, we get that the connected component $S'_i$ of $(\rho \circ u_{x_i})^{-1}\left((-\infty,C]\right)$ passing through $p_x$ has $l$ boundary components for $i \gg 1$. Because \begin{enumerate} \item each curve $u_{x_i}$ maps to a smooth affine variety $\pi^{-1}(x_i)$ and \item $(\rho \circ u_{x_i})^{-1}((-\infty,C])$ is compact for $i$ large enough and \item $|H^1(S_{x_i},\mathbb Q)| \leq k-1$, \end{enumerate} we have by Lemma \ref{lemma:topologyofcurve} that $H_1(S'_i,\mathbb Q)$ has rank at most $k-1$ which implies that $S'_i$ has at most $k$ boundary circles for $i \gg 1$. But we know for $i$ sufficiently large that it is also has $l$ boundary components, hence $l \leq k$. This implies that $v^{-1}(E)$ is a union of $\leq k$ points and passes through $p$. Now suppose that all of these curves $u_{x_i}$ are contained inside some closed subvariety $V$. Because $u_{x_i} \circ a_i$ $C^0$ converges to $v'$ we have that $v'$ is a subset of $V$ because $V$ is a closed subset of $Q$. The image of $v$ is contained inside the image of $v'$ which implies that the image of $v$ is contained inside $V$. \qed \proof of Theorem \ref{theorem:kuniruledimpliesalgebraicallyuniruled}. Let $X'$ be some compactification of $A$ by a projective variety. 
By the Hironaka resolution of singularities theorem \cite{hironaka:resolution} we can resolve the compactification $X'$ of $A$ so that it is some smooth projective variety $X$ with the property that $X \setminus A$ is a smooth normal crossing divisor. This variety can be embedded into $\P^N$ so that $X \setminus A$ is equal to $X$ intersected with some linear hypersurface $\P^{N-1}$ in $\P^N$. So we can view $A$ as a subvariety of $\mathbb C^N = \P^N \setminus \P^{N-1}$. We will let $D_X$ be the effective ample divisor given by restricting $\P^{N-1}$ to $X$. We start with $\P^1 \times \P^N$. The divisor $D := \{\infty\} \times \P^N + \P^1 \times \P^{N-1}$ is ample. Let $P := \text{Bl}_{\bracket{0} \times \P^{N-1}} \P^1 \times \P^N$ be the natural blowup map along $\{0\} \times \P^{N-1}$ and let $\widetilde{D}$ be the proper transform of $D$ in $P$. We let $E$ be the exceptional divisor. Then $k \widetilde{D} + (k-1)E$ is ample inside $P$ for $k \gg 1$. Let $\pi : P \rightarrow \P^1$ be the composition of the blowdown map with the projection map to $\P^1$. The fiber $\pi^{-1}(0)$ is a union of two divisors $F + E$ and this is linearly equivalent to $\pi^{-1}(\infty)$. Hence $E$ is linearly equivalent to $\pi^{-1}(\infty)-F$. Let $D'$ be the divisor $k\widetilde{D} + (k-1)(\pi^{-1}(\infty) - F)$. This is ample and the associated line bundle $L_{D'}$ admits a metric $\|\cdot\|$ whose curvature form is a positive $(1,1)$-form. This gives us a symplectic form on $X$. Let $s$ be a meromorphic section of $L_{D'}$ so that $s^{-1}(0) - s^{-1}(\infty) = D'$. We have that $-d^c \log{\|s\|}$ restricted to $\pi^{-1}(x) \setminus \text{support}(D')$ ($x \neq 0$) makes this fiber into a Liouville manifold. We have that $D'$ is the disjoint union of $D'_1 := k\widetilde{D} + (k-1)\pi^{-1}(\infty)$ and $-(k-1)F$. Also $-\log{\|s\|}$ tends to $+\infty$ as we approach $D'_1$ and $-\infty$ as we approach $F$. Hence $P_C := \left( (-\log{\|s\|})^{-1}((-\infty,C]) \right) \cup F$ is a compact submanifold of $X \setminus \text{support}(D'_1)$ for generic $C \gg 1$ whose interior contains $F$. Consider $\P^1 \times X \subset \P^1 \times \P^N$ and let $P_X$ be the proper transform of $\P^1 \times X$ inside $P$. We let $\pi_X$ be the restriction of $\pi$ to $P_X$. We have $A_x := \pi_X^{-1}(x) \setminus \text{support}(D'_1)$ are all isomorphic smooth affine varieties when $x \neq 0$. Also if $X_x := \pi_X^{-1}(x)$ then these isomorphisms extend to isomorphisms $\phi_{x,y} : X_x \rightarrow X_y$ so that $\phi^* L|_{X_y} = L|_{X_x}$. All these affine varieties are isomorphic to $A$. So by Lemma \ref{lemma:algebraicvarietysymplecticforminvariance} we have that all these affine varieties are symplectomorphic with respect to the symplectic form $-dd^c \log{\|s\|}$. Combining this with Lemma \ref{lemma:affinevarietycontainedinliouvilledomain} we then get that all these varieties can be codimension $0$ symplectically embedded into a fixed Liouville domain $(M,\theta)$ which is Liouville deformation equivalent to $\overline{A}$. Also these embeddings are homotopy equivalences. Because $\overline{A}$ is $(k,\Lambda)$ uniruled, we have by Lemma \ref{corollary:deformationinvariance} that $(M,\theta)$ is $(k,\Lambda')$ uniruled for some $\Lambda'>0$. We define $P_A \subset P_X$ to be equal to $P_X \setminus (\text{support}(D'_1) \cup E)$. This is isomorphic to $\mathbb C \times A$. 
Let $p$ be any point in $P_X \cap (F \setminus E)$ and let $x_i \in \mathbb C \setminus \{0\}$, $p_{x_i} \in P_C \cap A_{x_i}$ be a family of points in $A_{x_i}$ which all converge to $p$ as $i$ tends to $\infty$. For every $x_i$ choose a Liouville domain $N_{x_i}$ which is an exact codimension $0$ symplectic submanifold of $A_{x_i}$ containing $P_C \cap A_{x_i}$ and so that the embedding map $N_{x_i} \hookrightarrow A_{x_i}$ is a homotopy equivalence. By Lemma \ref{theorem:subdomainuniruled}, we have that $N_{x_i}$ is $(k,\Lambda')$ uniruled because it can be symplectically embedded into $M$ so that the embedding is a homotopy equivalence. So for each $i$ there is a proper $J$ holomorphic curve $u_{x_i} : S_{x_i} \rightarrow N_{x_i}^0$ where $u_{x_i}$ has energy $\leq \Lambda'$ and $|H_1(S_{x_i},\mathbb Q)|\leq k-1$. In particular $u_{x_i}|_{u_x^{-1}(P_C^o)}$ are properly embedded holomorphic curves inside the interior $P_C^o$ of the compact manifold $P_C$. By Lemma \ref{lemma:compactnessresultforvarieties}, there is an algebraic map $v : \P^1 \rightarrow P_X \cap F$ with the property that $v(q) = p$ for some $q \in \P^1$ and $v^{-1}(E)$ is a union of at most $k$ points. After identifying $A$ with $P_X \cap (F \setminus E)$ we then get that $v|_{v^{-1}(A)}$ is also algebraic and passing through $p$ and $v^{-1}(A)$ is $\P^1$ with at most $k$ punctures. Hence $A$ is algebraically $k$-uniruled. \qed \section{Introduction to Gromov Witten invariants} \label{section:GWintro} Genus $0$ Gromov Witten invariants for general symplectic manifolds have now been defined in many different ways: \cite{FukayaOno:Arnold}, \cite{CieliebakMohnke:symplectichypersurfaces}, \cite{HoferWysockiZehnder:polyfoldapplications1} and \cite{LiTian:sympGW}. Earlier work for special symplectic manifolds such as projective varieties of complex dimension $3$ or less are done in \cite{Ruan:topologicalsigma}, \cite{Ruan:3folds} and \cite{RuanTian:quantum}. Many of the applications of this paper appear in complex dimension $3$ or less. These invariants can also be defined in a purely algebraic way \cite{LiTian:algGW}, \cite{BehrendFantechi:normalcone} and \cite{Behrend:GW} but we will not use these theories here. We will use the Gromov Witten invariants defined for general symplectic manifolds. All of our calculations are done for complex structures where all of the curves in the relevant homology class are regular and unobstructed (and also somewhere injective) and so are relatively easy calculations. Also most of these (or similar) calculations have been done before in \cite{Mcduff:rationalruled}, \cite{Ruan:virtual} and \cite{Kollar:lowdegree}. We start with a compact symplectic manifold $X$, a natural number $k$ and an element $\beta \in H_2(X)$. Let $d := 2 \left( n - 3 + k + c_1(X).\beta \right)$ where $n$ is half the dimension of $X$. Choose $k$ cohomology classes $\alpha_1,\cdots,\alpha_k \in H^*(X,\mathbb Q)$ so that the sum of their degrees is $d$. For any compatible almost complex structure one has the set ${\mathcal M}(\beta,J,k)$ of $J$ holomorphic maps $u : S \rightarrow X$ where $S$ is a genus $0$ compact nodal Riemann surface with $k$ labeled marked points. This nodal curve has to be stable which means that if an irreducible component of this surface maps to a point then that component must have at least three of these marked points. There are natural maps $\text{ev}_i : {\mathcal M}(\beta,J,k) \to X$ which send a curve $u : S \to X$ to $u(x_i)$ where $x_i$ is the $i$th marked point in $S$. 
It turns out (in nice circumstances) that ${\mathcal M}(\beta,J,k)$ is a topological space with a homology class \[\left[ {\mathcal M}(\beta,J,k) \right]^{\text{vir}} \in H_d({\mathcal M}(\beta,J,k),\mathbb Q).\] One then has \[ \langle \alpha_1, \cdots, \alpha_k \rangle^X_{0,\beta} := \int_{\left[ {\mathcal M}(\beta,J,k) \right]^{\text{vir}} } \text{ev}_1^* \alpha_1 \wedge \cdots \wedge \text{ev}_k^* \alpha_k.\] The genus $0$ Gromov Witten invariant \[ \langle \alpha_1, \cdots, \alpha_k \rangle^X_{0,\beta} \in \mathbb Q\] satisfies the following properties: \begin{enumerate} \item \label{item:existenceofcurves} If $\langle \alpha_1, \cdots, \alpha_k \rangle^X_{0,\beta} \neq 0$ for some $\alpha_1,\cdots,\alpha_k$ then for every compatible $J$, there exists a $J$ holomorphic map $u : S \rightarrow X$ from a genus $0$ nodal curve $S$ representing the class $\beta$. \item \label{item:smoothcalculation} Suppose that $X$ is a smooth projective variety with its natural complex structure $J$. Suppose that every rational curve $C$ representing the class $\beta$ is smooth, embedded, and satisfies $H^1(C,T_X|_C) = 0$ where $T_X$ is the tangent sheaf. Then $\langle \alpha_1, \cdots, \alpha_k \rangle^X_{0,\beta} \neq 0$ for some $\alpha_1,\cdots,\alpha_k$. \end{enumerate} The reason why (\ref{item:smoothcalculation}) is true is that ${\mathcal M}(\beta,J,k)$ in this case is a complex manifold of dimension $d$ for every $k$ and $\left[ {\mathcal M}(\beta,J,k) \right]^{\text{vir}}$ is equal to its fundamental class. For $k$ large enough, the map: \[ \text{ev}_1 \times \cdots \times \text{ev}_k : {\mathcal M}(\beta,J,k) \to X^k \] is a holomorphic map which is a branched cover onto its image. If we restrict the natural product symplectic structure $\omega_{X^k} := \omega_1 + \cdots + \omega_k$ on $X^k$ to ${\mathcal M}(\beta,J,k)$ then it is also a symplectic structure on this moduli space away from the branching locus. Hence $\omega^d_{X^k}$ restricted to ${\mathcal M}(\beta,J,k)$ is a positive multiple of the volume form on an open dense subset and so it evaluates non-trivially with the fundamental class. In particular we have that $\omega_1^{i_1} \wedge \cdots \wedge \omega_k^{i_k}$ evaluates non-trivially with the fundamental class for some $i_1,\cdots,i_k$. So if we choose $\alpha_l := \omega_1^{i_l}$ then $\langle \alpha_1, \cdots, \alpha_k \rangle^X_{0,\beta} \neq 0$. This argument is almost exactly the same as an argument at the end of the proof of \cite[Theorem 4.10]{RuanJianxunTianJun:birational}. \section{Uniruledness criteria for affine varieties} \label{section:affineuniruled2} In this section we will give another definition of uniruledness for smooth affine varieties. The main theorem of this section is to show that any smooth affine variety satisfying this uniruledness condition has an associated Liouville domain which is also $(k,\Lambda)$-uniruled. We say that a smooth affine variety $A$ is {\bf compactified $k$-uniruled} if $A$ has some compactification $X$ by a smooth projective variety so that if $D = X \setminus A$ then we have the following properties: \begin{enumerate} \item There is an effective ample divisor $D_X$ on $X$ whose support is $D$ and whose associated line bundle has a metric whose curvature form gives us some symplectic form $\omega_X$ on $X$. \item Let $J$ be an almost complex structure compatible with $\omega_X$ which is the standard complex structure near $D$. 
Then there is a dense set $U_J \subset A$ so that for every point $p \in U_J$ such that $J$ is integrable near $p$ we have a $J$ holomorphic map $u : \P^1 \rightarrow X$ passing through $p$ such that $u^{-1}(D \setminus A)$ is a union of at most $k$ points. \item The energy of this curve is bounded above by some fixed constant $\Lambda$. \end{enumerate} We will now give some easier criteria for being compactified $k$-uniruled. Our symplectic form on $X$ comes from some ample divisor. \begin{lemma} \label{lemma:fibrationgwkuniruled} Suppose that we have a morphism $\pi : X \rightarrow B$ whose generic fiber is $\P^1$ where the base $B$ is projective. Let $\beta \in H_2(X)$ be the class of this curve. Then for every compatible almost complex structure $J$ which is integrable on some open set $U$ containing a point $p$, there is some $J$ holomorphic curve $u : S \rightarrow X$ passing through $p$ representing the class of the fiber. Here $S$ is a genus $0$ nodal curve. \end{lemma} \proof of Lemma \ref{lemma:fibrationgwkuniruled}. Let $F$ be any regular fiber of $\pi$. This is isomorphic to $\P^1$. Blow up $X$ to $\widetilde{X}$ at some point in $F$. Let $\widetilde{F}$ be the proper transform of $F$ inside $X$ and let $\widetilde{\beta} \in H_2(\widetilde{X},\mathbb Q)$ be its respective homology class. The only curve in this homology class is $\widetilde{F}$. If we restrict the tangent bundle $T_{\widetilde{X}}$ to this curve then it is isomorphic to: ${\mathcal O}(2) \oplus {\mathcal O}(-1)^{\oplus n-1}$. Hence $H^1(\widetilde{F},T_{X}|_{\widetilde{F}}) = 0$. By property (\ref{item:smoothcalculation}) there exists $\alpha_1,\cdots,\alpha_k$ such that $\langle \alpha_1, \cdots, \alpha_k \rangle^X_{0,\widetilde{\beta}} \neq 0$. Let $p_i$ be a sequence of points in $U$ converging to $p$ where $p_i$ is contained inside a smooth fiber $F_i$. Because $J$ is integrable in $U$, we can blowup $X$ at $p_i$ giving us a new symplectic manifold $X_i$ along with a compatible almost complex structure so that the blowdown map is holomorphic. Let $\widetilde{F}_i$ be the proper transform of $F_i$ in $X_i$ and $\beta_i \in H_2(X_i,\mathbb Z)$ its respective homology class. Then because $\langle \alpha_1, \cdots, \alpha_k \rangle^X_{0,\beta_i} \neq 0$ for some cohomology classes $\alpha_i$, we have by property (\ref{item:existenceofcurves}) a $J$ holomorphic curve $u'_i : S_i \rightarrow X_i$ representing $\beta_i$. By composing this map with the blowdown map, we get a $J$ holomorphic curve $u_i : S_i \rightarrow X$ passing through $p_i$ and representing $\beta$. By a Gromov compactness argument one then gets a holomorphic curve $u : S_i \to X$ passing through $p$ representing the class $\beta$. \qed \begin{lemma} \label{lemma:nefdivisoruniruled} Suppose that we have a morphism $\pi : X \rightarrow B$ whose generic fiber is $\P^1$ where $B$ is a projective variety. Suppose also that $D'$ is an effective nef divisor whose support is equal to $D$ with the property that $\beta.D' \leq k$. Then $A$ is compactified $k$-uniruled. \end{lemma} \proof of Lemma \ref{lemma:nefdivisoruniruled}. Choose any effective ample divisor $D_X$ whose support is $D$ and let $\omega_X$ be the symplectic form associated to this divisor. Let $J$ be any compatible almost complex structure which is standard near $D$. Let $p$ be any point in $A$ where $J$ is integrable near $p$. 
By Lemma \ref{lemma:fibrationgwkuniruled}, there is a $J$ holomorphic curve $u : S \rightarrow X$ representing the homology class of the fiber passing through $p$. Here $S$ is a nodal curve with irreducible components $S_1,\cdots,S_l$. Let $S_i$ be any component passing through $p$. Then by positivity of intersection we have that $u(S_i).D \leq u(S_i).D'$ and $u(S_i).D' \leq \sum_i u(S_i).D' = u(S).D' \leq k$. Hence $u(S_i).D \leq k$. The energy of the curve $u|_{S_i}$ is bounded above by $\beta.D_X$. Because $S_i$ is isomorphic to $\P^1$ we then get that $A$ is compactified $k$-uniruled. \qed \begin{theorem} \label{theorem:algebraicuniruledimplesuniruled} Suppose that $A$ is a smooth affine variety that is compactified $k$-uniruled. Let $\overline{A}$ be its associated Liouville domain. Then $\overline{A}$ is $(k,\Lambda)$-uniruled for some $\Lambda$. \end{theorem} Before we prove this theorem we need a lemma and a definition. Let $X$ be a smooth projective variety with a smooth normal crossing divisor $D$ so that $X \setminus D$ is affine. A map $u : S \to X$ is said to be a {\it $k$-curve} if every irreducible component $\Sigma$ of $S$ either maps to $D$, or $u^{-1}(D) \cap \Sigma$ is a finite set of size at most $k$. \begin{lemma} \label{lemma:compactnessresult} Let $A$ be a smooth affine variety and $X$ a smooth projective variety compactifying $A$. We equip $X$ with a symplectic form $\omega_{\|\cdot\|}$ coming from some ample line bundle. We let $J$ be a compatible almost complex structure on $X$ which agrees with the standard complex structure on $X$ near $X \setminus A$. Let $u_i : S_i \rightarrow X$ be a sequence of $J$ holomorphic maps where $S_i$ is a connected genus $0$ nodal Riemann surface and where all the $u_i$'s have energy bounded above by some fixed constant. If the $u_i$ are all $k$ curves and Gromov converge to $u : S \rightarrow X$ then $u$ is also a $k$ curve. \end{lemma} \proof of Lemma \ref{lemma:compactnessresult}. Let $S_i^o := u_i^{-1}(A)$ and $S^o := u^{-1}(A)$. We want to show that the rank of $H_1$ of each connected component of $S^o$ is at most $k-1$. Because the $u_i$ Gromov converge, that means that there is a smooth real surface $\widetilde{S}$ and a series of continuous maps $\alpha_i : \widetilde{S} \rightarrow S_i$, $\alpha : \widetilde{S} \rightarrow S$ satisfying: \begin{enumerate} \item $\alpha_i$ and $\alpha$ are diffeomorphisms away from a $1$-dimensional submanifold $\Gamma \subset \widetilde{S}$ and away from the nodes of $S_i$ and $S$. \item $\Gamma$ maps to the nodes of $S$ under $\alpha_0$. \item $u_i \circ \alpha_i$ $C^0$ converge to $\alpha_0 \circ u$ and these maps $C^\infty_{\text{loc}}$ converge away from $\Gamma$. \end{enumerate} Choose an exhausting plurisubharmonic function $\phi : A \rightarrow \mathbb R$ associated to some line bundle $L$, section $s$ and metric $\|\cdot\|$ on $L$. Gromov convergence means that for $c \gg 1$ and $i \gg 1$ we have that every node of $S_i^o$ and also $\alpha_i(\Gamma) \cap S_i^o$ is mapped via $u_i$ to $\phi^{-1}(-\infty,c]$. We can also assume that the same is true for $S^0 := u^{-1}(A)$. For large enough $i$ and for generic $c$ large enough we have $u_i$ is smooth near $\phi^{-1}(c)$ and also transverse to this hypersurface. We can assume the same properties hold for $u$. Let $\Sigma_i$ be a sequence of connected components of $S_i^o$ which converge to a connected component $\Sigma$ of $S^o$. 
We have that $\Sigma_i \cap u_i^{-1}(\phi^{-1}(c))$ is a union of $l_i$ smooth circles in $S_i$ and $\Sigma \cap u^{-1}(\phi^{-1}(c))$ is a union of $l$ circles for some $l_i,l$. This means that $H_1(\Sigma_i \cap u_i^{-1}(\phi^{-1}(-\infty,c))$ has rank $\leq l_i-1$. By Lemma \ref{lemma:topologyofcurve} we then get that $H_1(\Sigma \cap u_i^{-1}(\phi^{-1}(-\infty,c))$ has rank less than or equal to $|H_1(\Sigma_i)| \leq k-1$. Hence $l_i \leq k-1$ for all $i$. So $\Sigma_i \cap u_i^{-1}(\phi^{-1}(c))$ is a union of at most $k$ circles. Because $u_i \circ \alpha_i$ $C^\infty$ converge to $u_i \circ \alpha$ near $\phi^{-1}(c)$ we get that $\Sigma \cap u^{-1}(\phi^{-1}(c))$ is also a union of at most $k$ circles. This is true for all $c$ sufficiently large. Hence $\text{rank}(H_1(\Sigma))\leq k-1$ for each connected component $\Sigma$ of $S^o$. Hence $u$ is a $k$ curve. \qed \proof of Theorem \ref{theorem:algebraicuniruledimplesuniruled}. Because $A$ is compactified $k$-uniruled, we have a compactification $X$ with divisor $D$ so that: \begin{enumerate} \item There is an effective ample divisor $D_X$ on $X$ with support $D$ whose associated line bundle has a metric with curvature form $\omega_X$ on $X$. Here $\omega_X$ is a symplectic form. \item Let $J$ be an almost complex structure compatible with $\omega_X$ which is the standard complex structure near $D$. Then there is a dense set $U_J \subset A$ so that for every point $p \in U_J$ such that $J$ is integrable near $p$ we have a $J$ holomorphic map $u : \P^1 \rightarrow X$ passing through $p$ which is a $k$ curve. \item The energy of this curve is bounded above by some fixed constant $\Lambda'$. \end{enumerate} We have a plurisubharmonic function $\rho := -\log{|s|}$ where $s$ is a section of $L$ with $s^{-1}(0) = D_X$. For $c \gg 1$ we have that $A_c := \rho^{-1}(-\infty,c]$ is a Liouville domain deformation equivalent to $\overline{A}$ by Theorem \ref{lemma:affineLiouvilledomain}. We now let $J$ be any almost complex structure which coincides with the standard one near $D$ and coincides with any convex almost complex structure inside $A_c$. Let $p$ be any point in the interior of $A_c$ where $J$ is integrable on a neighbourhood of $p$. Choose a sequence of points $p_i \in U_J$ converging to $p$. There is a map $u_i : \P^1 \to X$ of energy bounded above by $\Lambda'$ passing through $p_i$ so that $u_i$ is a $k$ curve. There is a subsequence which Gromov converges to a map $v : S \rightarrow X$ of energy bounded above by $\Lambda'$ passing through $p$. Here $S$ is a genus $0$ nodal curve. By Lemma \ref{lemma:compactnessresult}, we then get that $v$ is a $k$ curve. Let $S'$ be an irreducible component of $S$ whose image under $v$ contains $p$, $S'' := S' \cap v^{-1}(A)$ and $\Sigma := S'' \cap v^{-1}(A_c^0)$ where $A_c^0$ is the interior of $A_c$. By Lemma \ref{lemma:topologyofcurve} we have that $|H_1(\Sigma,\mathbb Q)| \leq k$ because $|H_1(S'',\mathbb Q)| \leq k$. Let $u := v|_\Sigma$. The energy of $u$ is bounded above by $\Lambda'$. This implies that $A_c$ is $(k,\Lambda')$-uniruled. By Corollary \ref{corollary:deformationinvariance} we then get that $\overline{A}$ is $(k,\Lambda)$-uniruled for some $\Lambda>0$. \qed \section{Log Kodaira dimension and uniruledness} \label{section:logkodaira} We will now define log Kodaira dimension. Let $L$ be any line bundle on a projective variety $X$. If $L^{\otimes k}$ has no global sections for any $k$ then we define $\kappa(L) := -\infty$. 
Otherwise $L^{\otimes k}$ defines a rational map from $X$ to $\P(H^0(L^{\otimes k}))$ for some $k$. We define $\kappa(L)$ in this case to be maximum dimension of the image of this map over all $k$ where this map is defined. The number $\kappa(L)$ is called the {\it Kodaira dimension of} $L$. If $Q$ is any smooth quasiprojective variety then we define its {\it log Kodaira dimension} $\overline{\kappa}(Q)$ as follows: Choose some compactification of $Q$ by a smooth projective variety $X$ so that the associated compactification divisor $D$ is smooth normal crossing. The {\it log Kodaira dimension} of $Q$ is defined to be $\kappa(K_X + Q)$ where $K_X$ is the canonical bundle of $X$. This an invariant of $Q$ up to algebraic isomorphism. Before we look at smooth affine varieties in dimension $2$ and $3$ we need a lemma relating uniruledness with log Kodaira dimension. \begin{lemma} \label{lemma:logkodairaunirulednessrelation} Suppose that $A$ is algebraically $k$-uniruled. If $k=1$ then $A$ has log Kodaira dimension $-\infty$ and if $k=2$ then $A$ has log Kodaira dimension $\leq \text{dim}_\mathbb C A - 1$. \end{lemma} \proof of Lemma \ref{lemma:logkodairaunirulednessrelation}. First of all, we compactify $A$ to some smooth projective variety $X$. Let $D$ be the compactification divisor. Because $A$ is algebraically $k$-uniruled, we have that $X$ is uniruled by $\P^1$'s. By using the theory of Hilbert schemes (see \cite{Kollar:rationalcurves}) there is a surjective morphism: \[ \text{ev} : M \times \P^1 \twoheadrightarrow X \] where $M$ is a reduced projective variety. We define $D_M := \text{ev}^{-1}(D)$. We let $V$ be the subvariety of $M$ with the property that $q \in M$ is contained in $V$ if and only if $D_M \cap (\{q\} \times \P^1)$ is a set of size at most $k$. Because $A$ is $k$ uniruled we can assume that $M$ satisfies: $\text{ev}(V \times \P^1)$ is dense in $X$ . Hence we have a dominant morphism $(V \times \P^1) \setminus D_M \twoheadrightarrow A$. We will define $W$ be equal to $(V^{\text{sm}} \times \P^1) \setminus D_M$ where $V^{\text{sm}}$ is the smooth part of $V$ which is a non-empty Zariski open subset of $V$. In particular we have a morphism $\pi_W$ from $W$ to $A$ whose image contains a dense open set. We can choose $V' \subset V$ to be a subvariety of complex dimension $\text{dim}(X) -1$ so that the image $\pi_W((V' \times \P^1) \setminus D_M)$ still contains a dense open subset of $A$. We define $W'$ to be $(V' \times \P^1) \setminus D_M$. So $\pi_{W'} := \pi_W|_{W'}$ is a dominant morphism from $W'$ to $A$. The projection map $W' \twoheadrightarrow V'$ has generic fiber equal to $\P^1$ minus at most $k$ points. By the Iitaka Easy Addition Theorem (\cite[Theorem 4]{Iitaka:logarithmic}, \cite[Theorem 11.9]{Iitaka:algebraicgeometry}) we have that the log Kodaira dimension of $W'$ is equal to $-\infty$ if $k=1$ and it is $\leq \text{dim}_\mathbb C(A)-1$ if $k = 2$. Because there is a dominant morphism from $W'$ to $A$, we have by the logarithmic ramification formula (\cite{Iitaka:logarithmic}, \cite[Theorem 11.3]{Iitaka:algebraicgeometry}) that the log Kodaira dimension of $W'$ is greater than or equal to the log Kodaira dimension of $A$. Combining the above two facts we have that if $k=1$ then $A$ has log Kodaira dimension $-\infty$ and if $k=2$ then $A$ has log Kodaira dimension $\leq \text{dim}_\mathbb C(A)-1$. \qed \begin{lemma} \label{lemma:compactifieduniruledlkd} Suppose that $A$ and $B$ are symplectomorphic smooth affine varieties. 
Suppose that $A$ is compactified $k$-uniruled. If $k=1$ then $B$ has log Kodaira dimension $-\infty$ and if $k=2$ then $B$ has log Kodaira dimension $\leq \text{dim}_\mathbb C A - 1$. \end{lemma} \proof of Lemma \ref{lemma:compactifieduniruledlkd}. By Theorem \ref{theorem:algebraicuniruledimplesuniruled} we get that the Liouville domain $\overline{A}$ associated to $A$ is $(k,\Lambda)$ uniruled. Because $B$ is symplectomorphic to $A$ we then get by Theorem \ref{theorem:symplecticinvariance} that the Liouville domain $\overline{B}$ is $(k,\Lambda')$ uniruled. So by Theorem \ref{theorem:kuniruledimpliesalgebraicallyuniruled}, $B$ is algebraically $k$ uniruled. Hence by Lemma \ref{lemma:logkodairaunirulednessrelation}, $B$ has the required log Kodaira dimension. \qed \subsection{Dimension 2} The aim of this section is to prove: \begin{theorem} \label{theorem:acyclicsurfaceinvariance} Let $A$, $B$ be symplectomorphic acyclic smooth affine surfaces. Then they have the same log Kodaira dimension. \end{theorem} Before we prove this we need a compactified uniruled criterion in dimension $2$ and some other preliminary lemmas. \begin{lemma} \label{lemma:countablymanycurves} Let $X$ be any compact symplectic manifold of real dimension $4$ and $J$ any almost complex structure compatible with the symplectic form. Then there is a dense subset of points $U_J \subset X$ with the property that every $J$ holomorphic map $u : \P^1 \to X$ with $u(\P^1) \cap U_J \neq \varnothing$ satisfies $u_*([\P^1])^2 \geq 0$. In fact $U_J$ is a countably infinite intersection of open dense subsets. \end{lemma} \proof of Lemma \ref{lemma:countablymanycurves}. Let $E$ be a homology class satisfying $E.E < 0$. Let $u_i : \P^1 \to X$, $i=1,2$, be two $J$ holomorphic curves representing this class. We have that $(u_1)_*([\P^1]) \cdot (u_2)_*([\P^1])$ is negative. By positivity of intersection we then have that the images of $u_1$ and $u_2$ must coincide. We write $E_J$ for this image. By Sard's theorem, the complement of this image is open and dense. The set of images of $J$ holomorphic curves $u : \P^1 \to X$ with negative self intersection number is $\cup_{E \in H_2, E.E < 0} E_J$. The complement is a countable intersection of open dense subsets, which is also dense. \qed \begin{lemma} \label{lemma:irreducibleholomorphiccurves} Suppose we have a morphism $\pi : X \rightarrow B$ where $X$ is a smooth projective surface and $B$ is a curve. Let $\omega_X$ be a symplectic form associated to an effective ample divisor $D_X$ and $J$ a compatible almost complex structure. Suppose that we have a $J$ holomorphic map $v : \Sigma \to X$ whose fundamental class represents the fiber $[F] \in H_2(X)$, and with the property that every irreducible component $\Sigma'$ of $\Sigma$ satisfies $v_*([\Sigma']).[F] = 0$. Then there is a dense set $U_J \subset X$ with the property that every $J$ holomorphic map $u : S \to X$ where $S$ is a connected nodal curve which intersects $U_J$ and represents $[F]$ has the property that $S$ is irreducible. \end{lemma} \proof of Lemma \ref{lemma:irreducibleholomorphiccurves}. We choose $U_J$ to be the set of points with the property that every $J$ holomorphic curve passing through a point in $U_J$ has some irreducible component with non-negative self intersection number. This is dense by Lemma \ref{lemma:countablymanycurves}. Let $S$ be a union of irreducible components $S_1,\cdots,S_l$. We will suppose without loss of generality that $u(S_1) \cap U_J$ is non-empty, and so that $(u_*([S_1]))^2 \geq 0$.
Suppose for a contradiction, $u_*([S_i]).[F] < 0$ for some $i$. Then by positivity of intersection we have that $u(S_i)$ is contained inside $v(\Sigma')$ for some irreducible component $\Sigma'$ of $\Sigma$. Because $v_*([\Sigma']).[F] = 0$, we have $u_*([S_i]).[F] = 0$ which is a contradiction. Hence $u_*([S_i]).[F] \geq 0$ for each $i$. Because $\sum_i u_*([S_i]).[F] = 0$, this implies that $u_*([S_i]).[F] = 0$ for all $i$. Suppose for a contradiction, $S$ has more than one irreducible component. Because $S$ is connected we have then that $u_*([S_1]).u_*([S_j]) \neq 0$ for some $j \neq 1$, and because $(u_*([S_1]))^2 \geq 0$ we then get that $u_*([S_1]).(\sum_i u_*([S_i])) > 0$ which is impossible because $u_*([S]) = [F]$. Hence $S$ is irreducible. \qed \begin{corollary} \label{corollary:uniruledfibrationproperty} Let $A$ be a smooth affine variety with an SNC compactification $X$, and let $D$ be the associated compactification divisor. Suppose we have a morphism $\pi : X \to B$ satisfying the hypotheses of Lemma \ref{lemma:irreducibleholomorphiccurves} for any compatible $J$ which is standard near $D$. Suppose that a general fiber of $\pi$ intersects $D$ $k$ times. Then $A$ is compactified $k$ uniruled. \end{corollary} \proof of Corollary \ref{corollary:uniruledfibrationproperty}. By Lemma \ref{lemma:irreducibleholomorphiccurves}, there is a dense subset $U_J \subset A$ with the property that any $J$ holomorphic curve $u : S \to X$ representing $F$ passing through $p \in U_J$ has the property that $S$ is irreducible. Let $p$ be any point in $U_J$ such that $J$ is integrable near $p$, then by Lemma \ref{lemma:fibrationgwkuniruled}, we have that there is such a $J$ holomorphic map passing through $p$. Because $S$ is irreducible, it intersects $D$ in at most $k$ points. Putting all of this together gives us that $A$ is compactified $k$ uniruled. \qed \begin{lemma} \label{lemma:nefdivisor} Suppose that $X$ is a smooth projective surface and $D$, $E$ divisors so that: \begin{enumerate} \item $E$ is irreducible and $D \cup E$ is smooth normal crossing. \item If $D'$ is the union of divisors in $D$ not intersecting $E$ then $D'$ is connected and intersects every irreducible component of $D$. \item There is an effective nef divisor $G$ whose support is contained in $D'$. \end{enumerate} Then there is an effective nef divisor $D_X$ whose support is $D'$ so that: \begin{enumerate} \item For every irreducible curve $C$ in $D$ and not in the support of $G$, we have $D_X.C > 0$. \item $D_X.E = 0$. \end{enumerate} \end{lemma} \proof of Lemma \ref{lemma:nefdivisor}. Suppose that $W$ is any effective nef divisor whose support is in $D'$ and contains $\text{support}(G)$. Also suppose every irreducible curve $C$ inside $\text{support}(W)$ but not in $\text{support}(G)$ satisfies $C.W > 0$. Let $C$ be any irreducible curve of $D'$ not contained in the support of $W$. Because $D'$ is connected we can assume that $C.W \neq 0$. We let $W' := \kappa W + C$ for $\kappa \gg 1$. This is an effective nef divisor with larger support than $W$. For $\kappa$ large enough we have $C.W' > 0$. Hence every irreducible curve $C$ inside $\text{support}(W')$ satisfies $C.W' > 0$ if $C$ is not in $\text{support}(G)$. Therefore we can construct effective nef divisors starting with $G$ with larger and larger support until we get an effective nef divisor $D_X$ whose support is equal to $D'$. Every irreducible curve $C$ in $D'$ intersects $D_X$ positively unless it is in the support of $G$. 
Also if $C$ is an irreducible curve in $D$ not contained in $D'$ then it intersects $D'$ and hence $C.D_X>0$. We also have that $D_X. E = 0$. \qed \begin{lemma} \label{lemma:dimension2uniruled3} Let $A = X \setminus D$ be a smooth affine surface where $X$ is a smooth projective variety and $D$ is a connected smooth normal crossing divisor. Suppose that we have a morphism $\pi : X \rightarrow B$ with the following properties: \begin{enumerate} \item \label{item:genericratioanlcurve} The generic fiber is $\P^1$. The base $B$ is a smooth projective curve. \item \label{item:dotproductbound} If $F$ is a fiber then $F.D \leq k$ for some $k$. \item \label{item:specialfibers} There are two different points $b_1,b_2 \in B$ with the following property: $\pi^{-1}(b_i) = E_i \cup F_i$ (as reduced curves) where $E_i$ is an irreducible smooth curve satisfying $E_i.E_i = -1$, and $F_i$ is reduced. \item \label{item:normalcrossingcondition} $F_i \subset D$ but $E_i$ is not contained in $D$. Also $D \cup E_1 \cup E_2$ is a smooth normal crossing divisor. \item \label{item:samedotproductcondition} $E_2.D = E_2.F_2$. \item \label{item:effectivenefcondition} There is an effective nef divisor $G$ with the property that $E_i.G = 0$ and whose support is contained inside $D$. \item \label{item:connectednesscondition} The union of irreducible components of $D$ not intersecting $E_i$ is connected. \end{enumerate} Then $A$ is compactified $k$ uniruled. \end{lemma} \proof of Lemma \ref{lemma:dimension2uniruled3}. Choose any compatible symplectic structure $\omega_X$ coming from an effective ample divisor $D_X$ whose support is $D$. Let $J$ be any compatible almost complex structure which is standard near $D$. We will complete this proof in $3$ steps. In Step 1 we will construct $J$ holomorphic curves representing $[E_i]$ such that no irreducible component is contained in $D$. In Step 2 we will show that each irreducible component of one of the curves from Step 1 has intersection number zero with the fiber. In Step 3 we will construct our $J$ holomorphic curve passing through $p$ and intersecting $D$ at most $k$ times using Corollary \ref{corollary:uniruledfibrationproperty}. {\it Step 1}: By \cite[Lemma 3.1]{Mcduff:rationalruled} there is a $J$ holomorphic map $u_i : \Sigma_i \to X$ from a connected genus $0$ nodal Riemann surface $\Sigma_i$ representing the exceptional class $[E_i] \in H_2(X)$. We assume that no irreducible component of $\Sigma_i$ maps to a point. Let $\Sigma_i^1,\cdots,\Sigma_i^{l_i}$ be the irreducible components of $\Sigma_i$. We will now show that $u_i(\Sigma_i^j)$ is not contained in $D$ for each $i,j$. By using properties (\ref{item:normalcrossingcondition}),(\ref{item:connectednesscondition}) and (\ref{item:effectivenefcondition}) combined with Lemma \ref{lemma:nefdivisor} we have an effective nef divisor $D'_i$ satisfying: \begin{enumerate}[(a)] \item \label{item:positiveintersectionwithDX} For every irreducible curve $C$ in $D$ and not in the support of $G$, we have $D'_i.C > 0$. \item $D'_i.E_i = 0$. \end{enumerate} Suppose for a contradiction that $u_i(\Sigma_i^y) \subset D$ for some $y$. Because $[E_i]^2$ is negative and the intersection product of $E_i$ with any irreducible component of $D'_i$ is non-negative we have that $E_i$ cannot be represented by an effective divisor whose support is in $D'_i$. If $u_i(\Sigma_i^y) \subset D'_i$ then the previous fact tells us that there is some $\Sigma_i^x$ satisfying $(u_i)_*([\Sigma_i^x]).D'_i \neq 0$.
But this is impossible because $D'_i$ is nef and $E_i.D'_i = 0$. Hence $\Sigma_i^y$ is not contained in $D'_i$, so by property (\ref{item:positiveintersectionwithDX}), $(u_i)_*([\Sigma_i^y]).D'_i \neq 0$. This is impossible as $D'_i$ is nef and has intersection number $0$ with $E_i$. Hence $u_i(\Sigma_i^x)$ is not contained in $D$ for all $i,x$. {\it Step 2}: The aim in this step is to show that if $[F]$ is the class of a fiber of $\pi$ then $(u_1)_*([\Sigma_1^i]).[F] = 0$ for all $i$. Suppose for a contradiction that $(u_1)_*([\Sigma_1^1]).[F] \neq 0$. Then because $[E_1].[F] = 0$ we have $(u_1)_*([\Sigma_1^1]).[F] = -\sum_{j=2}^{l_1} (u_1)_*([\Sigma_1^j]).[F]$. So without loss of generality we can assume that $(u_1)_*([\Sigma_1^1]).[F] < 0$. We can represent $[F]$ by $[D_{F_2}] + \kappa (u_2)_*([\Sigma_2])$ by property (\ref{item:specialfibers}) where $D_{F_2}$ is an effective divisor whose support is exactly $F_2$ and $\kappa$ is a positive integer. Because $(u_1)_*([\Sigma_1^1])$ does not map to $D$, we have by positivity of intersection that $(u_1)_*([\Sigma_1^1]).[D_{F_2}] \geq 0$. Hence $(u_1)_*([\Sigma_1^1]). (u_2)_*([\Sigma_2]) < 0$ because $\kappa > 0$. By positivity of intersection this means that $u_1(\Sigma_1^1) \subset u_2(\Sigma_2^l)$ for some $l$. Without loss of generality we will assume that $l=1$. We have that $E_2.F_2 = E_2.D$ by property (\ref{item:samedotproductcondition}) and that $(u_2)_*([\Sigma_2^i]).[D] \geq 1$ because $A$ is an exact symplectic manifold. Because $(u_i)_*([\Sigma_i^j])$ is not contained inside $D$ for all $i,j$, we have $(u_i)_*([\Sigma_i^j]).[F_2] \leq (u_i)_*([\Sigma_i^j]).[D]$. Using the above two facts, \[(u_2)_*([\Sigma_2^1]).[F_2] =\] \[\sum_{j=1}^{l_2}(u_2)_*([\Sigma_2^j]).[D] - \sum_{j=2}^{l_2} (u_2)_*([\Sigma_2^j]).[F_2] \geq (u_2)_*([\Sigma_2^1]).[D] > 0.\] But this means that $(u_1)_*([\Sigma_1^1]).[F_2] \neq 0$ because $\varnothing \neq u_1(\Sigma_1^1) \subset u_2(\Sigma_2^1)$. Hence $E_1.F_2 = (u_1)_*([\Sigma_1]).[F_2] \neq 0$ which is a contradiction because $E_1$ and $F_2$ are in different fibers of $\pi$ by property (\ref{item:specialfibers}). Hence $(u_1)_*([\Sigma_1^i]).[F] = 0$ for all $i$. {\it Step 3}: There is an effective divisor $D_{F_1}$ whose support is $F_1$ and an integer $\kappa'> 0$ with the property that: $[D_{F_1}] + \kappa' (u_1)_*([\Sigma_1])$ represents $[F]$. Each irreducible component of the above curve has intersection number zero with $[F]$ by Step 2 hence by Corollary \ref{corollary:uniruledfibrationproperty}, we get that $A$ is compactified $k$ uniruled. \qed \begin{lemma} \label{lemma:dimension2uniruled4} Let $A = X \setminus D$ be a smooth affine surface where $X$ is a smooth projective variety and $D$ is a connected smooth normal crossing divisor. Let $\pi : X \to B$ be a morphism of projective varieties so that the generic fiber is isomorphic to $\P^1$ and intersects $D$ $k$ times. Let $E$ be a smooth divisor in $X$. Suppose that: \begin{enumerate} \item \label{item:nefdivisornotintersectingE} There is a nef divisor $G$ whose support is in $D$ such that $E.G=0$. \item \label{item:divisorintersectingE} We have $E.E=-1$, $E.D=1$ and $E$ is not contained in $D$. This means that there is a unique irreducible curve $D_E$ in $D$ intersecting $E$. We will assume that $D_E.G \neq 0$. 
\item \label{item:fiber} We have that $D_E \cup E$ is contained in a fiber $\pi^{-1}(b)$ and there is an effective divisor $D_F$ whose support is $\pi^{-1}(b) \cap D$ and a natural number $\kappa > 0$ so that $[D_F] + \kappa[E]$ represents the homology class of a fiber of $\pi$. \end{enumerate} Then $A$ is compactified $k$ uniruled. \end{lemma} \proof of Lemma \ref{lemma:dimension2uniruled4}. This proof will be done in two steps. In Step $1$ we will show that for any almost complex structure $J$ compatible with the symplectic form on $X$ and which is standard near $D$, the homology class $[E]$ can be represented by an irreducible $J$ holomorphic curve. Finally in Step 2 we will use Corollary \ref{corollary:uniruledfibrationproperty}. {\it Step 1}: By \cite[Lemma 3.1]{Mcduff:rationalruled} there is a $J$ holomorphic map $u : \Sigma \to X$ from a connected genus $0$ nodal Riemann surface $\Sigma$ representing the exceptional class $[E] \in H_2(X)$. Let $\Sigma^1,\cdots,\Sigma^l$ be the irreducible components of $\Sigma$. In this step we want to show that $l=1$. Because $u_*([\Sigma^i]).G \geq 0$ for all $i$ and $u_*([\Sigma]).G = 0$, we then get $u_*([\Sigma^i]).G = 0$ for all $i$. Hence by property (\ref{item:divisorintersectingE}), $u(\Sigma^i)$ is not contained in $D_E$ for any $i$. This means that $u_*([\Sigma^i]).D_E \geq 0$ for all $i$. The above statement combined with the fact that $D_E.E = 1$ means that there is exactly one irreducible component $\Sigma^j$ intersecting $D_E$ and this irreducible component intersects $D_E$ with multiplicity $1$. We may as well assume that $\Sigma^j=\Sigma^1$. Let $D_X$ be an effective ample divisor whose support is $D$. Then $E.D_X = u_*([\Sigma]).D_X$, and $u_*([\Sigma^i]).D_X > 0$ for all $i$ because $A$ is an exact symplectic manifold. Let $c>0$ be the coefficient of $D_E$ in $D_X$. Then $cE.D_E = E.D_X$ because $D_E$ is the only irreducible divisor in $D$ intersecting $E$. Also because $u_*([\Sigma^1]).D_E = 1$ we get that $u_*([\Sigma^1]).D_X \geq c$. Hence $u_*([\Sigma^1]).(D_X - cD_E) \geq 0$. Also for $i > 1$ we have that $u_*([\Sigma^i]).D_E = 0$ which implies that $u_*([\Sigma^i]).(D_X - cD_E) = u_*([\Sigma^i]).D_X > 0$. Hence if $l > 1$ we get that $u_*([\Sigma]).(D_X - cD_E) = \sum_i u_*([\Sigma^i]).(D_X - cD_E) > 0$ which contradicts the fact that $E.(D_X - cD_E) = 0$. This means that $l = 1$ and so $\Sigma$ is irreducible. {\it Step 2}: By Step $1$ we have that $\Sigma$ is irreducible and $u_*([\Sigma])$ represents $E$. Hence every irreducible component of the $J$ holomorphic curve $D_F \cup u(\Sigma)$ has intersection number $0$ with a fiber of $\pi$. So by property (\ref{item:fiber}) combined with Corollary \ref{corollary:uniruledfibrationproperty} we then get that $A$ is compactified $k$ uniruled. \qed \bigskip We will give a fairly explicit description of acyclic surfaces of log Kodaira dimension $1$. The constructions come from \cite{GurjarMiyanishi:affinesurfaceslkd1} (see also \cite[Theorem 2.6]{Zaidenberg:1998exot}, \cite{tomDieckPetrie:contractibleaffinesurfaces} and \cite{FlennerZaidenberg:contractiblelkd1}). We start with a line arrangement in $\P^2$ as in Figure \ref{fig:linearrangement}.
\begin{figure}[!h] \centering \begin{tikzpicture}[scale=1.0] \draw (5,5) -- (5,0); \draw (5,8) -- (5,5); \draw (5,5) circle (0.05); \draw (3,8) -- (3+2*8/3,0); \draw (7,8) -- (7-2*8/3,0); \draw (0,2) -- (10,2); \draw[fill=black] (7-2*4.5/3,3.5) node[right] {$p_0$} circle (0.05); \draw[fill=black] (5,2) circle (0.05); \draw[fill=black] (3+2*6/3,2) circle (0.05); \node at (6,1.8) {$\cdots$}; \node[right] at (5,1.8) {$p_1$}; \node[left] at (3+2*6.2/3,1.8) {$p_s$}; \node[left] at (7-2*4/3,4) {$D_0$}; \node[right] at (5,4) {$D_1$}; \node[right] at (3+2*4/3,4) {$D_s$}; \node[right] at (10,2) {$H$}; \end{tikzpicture} \caption{A line arrangement in $\P^2$ with lines $D_0,D_1,\cdots,D_s$ and $H$.} \label{fig:linearrangement} \end{figure} Here we have curves $D_0,D_1,\cdots,D_s,H$ in this line arrangement. Let $W$ be the divisor in $\P^2$ representing this line arrangement. At the point $p_0$ we blow up our surface many times according to the following rules: \begin{enumerate} \item The first blow up must be at $p_0$. \item Each subsequent blow up must be on the exceptional divisor of the previous blow up and at a smooth point of the total transform of $W$. \end{enumerate} At the point $p_i$ where $i > 0$, we blow up in such a way as to resolve the point of indeterminacy of $\frac{x^{m_i}}{y^{n_i}}$ (viewed as a birational map to $\P^1$) where $x,y$ are local coordinates around $p_i$ with $W = \{xy = 0\}$. These are chains of blowups where we only blow up along points $p$ satisfying: \begin{enumerate} \item $p$ is in the exceptional divisor of the previous blowup. \item $p$ is a nodal singular point of the total transform of $W$. \end{enumerate} Let $E_0,E_1,\cdots,E_s$ be the last exceptional curves in these chains of blowups over $p_0,\cdots,p_s$. We let $X$ be equal to $\P^2$ blown up as described above and we let $D$ be the divisor in $X$ equal to the total transform of $W$ minus the last exceptional curves $E_0,\cdots,E_s$. Our surface $A$ is equal to $X \setminus D$. The integers $m_i, n_i$ are coprime and satisfy a certain equation to ensure that our surface $A$ is affine and acyclic of log Kodaira dimension $1$. \begin{lemma} \label{lemms:unirulednessforlkd1} Suppose $A$ is an acyclic surface of log Kodaira dimension $1$. Then it is compactified $2$ uniruled. \end{lemma} \proof of Lemma \ref{lemms:unirulednessforlkd1}. We will use the notation $X,D_0,\cdots,D_s,E_0,\cdots,E_s$, $H$, $p_0,\cdots,p_s$, from before this lemma to describe $A$. We have three cases: \begin{enumerate} \item \label{item:caseblowuponce} $s = 1$. \item \label{item:casemorefibers1} $s > 1$ and $E_i$ intersects $H$ for some $i$. \item \label{item:casemorefibers2} $s > 1$ and $E_i$ does not intersect $H$ for any $i$. \end{enumerate} {\it Case (\ref{item:caseblowuponce}):} Let $\text{Bl}_{p_0}(\P^2)$ be the blowup of $\P^2$ at the point $p_0$. We have a map from $X$ to $\text{Bl}_{p_0}(\P^2)$ which is a sequence of blowdown maps. We also have a fibration $\pi : \text{Bl}_{p_0}(\P^2) \to \P^1$ whose fibers are proper transforms of lines in $\P^2$ passing through $p_0$. Let $\widetilde{\pi} : X \to \P^1$ be the composition of the map from $X$ to $\text{Bl}_{p_0}(\P^2)$ with $\pi$. Because $s = 1$, we have that a generic fiber of $\widetilde{\pi}$ intersects $D$ twice. Also, the proper transform of $D_0$ is a fiber of $\widetilde{\pi}$. So if we choose any almost complex structure $J$ which is equal to the standard one near $D$, we have a fiber represented by the irreducible $J$ holomorphic curve $D_0$. So, $D_0$ has intersection number $0$ with any fiber.
By Corollary \ref{corollary:uniruledfibrationproperty} we then get that $A$ is compactified $2$ uniruled. {\it Case (\ref{item:casemorefibers1}):} Let $q$ be the point where all the divisors $D_0,\cdots,D_s$ intersect and let $\widetilde{q}$ be the corresponding point in $X$. We blow up $\widetilde{q}$ giving us $\widetilde{X}$. Let $\widetilde{D}$ be the total transform of $D$, so $A = \widetilde{X} \setminus \widetilde{D}$. Note that $\widetilde{X}$ is equal to $\text{Bl}_q \P^2$ blown up many times at the points $p_0,\cdots,p_s$. Hence we have a natural blowdown map $\text{Bl}_{\widetilde{X}}: \widetilde{X} \to \text{Bl}_q \P^2$. There is a natural map $\pi' : \text{Bl}_q \P^2 \to \P^1$ whose fibers are proper transforms of lines passing through $q$. Let $\widetilde{\pi}' : \widetilde{X} \to \P^1$ be the composition $\pi' \circ \text{Bl}_{\widetilde{X}}$. We let $\widetilde{E}_0,\cdots,\widetilde{E}_s$ be the proper transforms of $E_0,\cdots,E_s$ in $\widetilde{X}$ respectively. We similarly define $\widetilde{H}$, $\widetilde{D}_i$. Let $E$ be the proper transform of the exceptional divisor of $\text{Bl}_q \P^2$ in $\widetilde{X}$. The image of our morphism $\widetilde{\pi}'$ is $B := \P^1$. We define $b_j \in B$ so that $(\widetilde{\pi}')^{-1}(b_j)$ contains $\widetilde{E}_j$. We have that $\widetilde{E}_i$ intersects $\widetilde{H}$ for some $i$. This is contained in some fiber $(\widetilde{\pi}')^{-1}(b_i)$. We have that $(\widetilde{\pi}')^{-1}(b_i)$ is obtained from $(\pi')^{-1}(b_i)$ by blowing up the point where this fiber intersects $H$ repeatedly. Hence if $R$ is the irreducible component of $(\widetilde{\pi}')^{-1}(b_i)$ that intersects $E$, then it is smooth and has self intersection $-1$. This means that $R + E$ is an effective nef divisor. We have that $\widetilde{E}_0.D = 1$, so let $D_E$ be the unique divisor that intersects $\widetilde{E}_0$. Let $D'$ be the union of irreducible curves in $D$ not intersecting $\widetilde{E}_0$ and let $\Delta'$ be the connected component of $D'$ containing $R \cup E$. Using Lemma \ref{lemma:nefdivisor} with the divisors $\Delta' + D_E$ and $E$ there is a nef divisor $G$ with the property that $G.D_E \neq 0$ and $G.E = 0$. The generic fiber of $\widetilde{\pi}'$ intersects $\widetilde{D}$ twice. Also $D_E \cup \widetilde{E}_0$ is contained in $(\widetilde{\pi}')^{-1}(b_0)$ and also there is an effective divisor $D_F$ whose support is in $(\widetilde{\pi}')^{-1}(b_0) \cap D$ and $\kappa \in \mathbb N$ so that $[D_F] + \kappa [\widetilde{E}_0]$ is homologous to a fiber of $\widetilde{\pi}'$. So by Lemma \ref{lemma:dimension2uniruled4} we get that $A$ is compactified $2$ uniruled. {\it Case (\ref{item:casemorefibers2}):} Because $s>1$ and $E_i$ does not intersect $H$ for any $i$, we get that $E_1$ and $E_2$ exist and do not intersect $H$. We have that $(\widetilde{\pi}')^{-1}(b_j)$ is a union of irreducible curves $F_j$ in $D$ plus $\widetilde{E}_j$. Also $\widetilde{E}_j . \widetilde{D} = \widetilde{E}_j.\widetilde{F}_j$ for all $j$ and $\widetilde{D} \cup \widetilde{E}_1 \cup \widetilde{E}_2$ is a smooth normal crossing divisor. Let $D'_j$ be equal to $\widetilde{D}$ minus the irreducible components of $\widetilde{D}$ intersecting $E_j$. We have that $D'_j$ is connected for each $j$ and every irreducible component of $\widetilde{D}$ which intersects $E_j$ also intersects $D'_j$. We have that $\widetilde{D}$ is connected and $E \cup \widetilde{D}_0$ is disjoint from $E_j$ for each $j$.
Both $E$ and $\widetilde{D}_0$ intersect each other and have self intersection $-1$, which implies that $G := E + \widetilde{D}_0$ is nef and contained in $D'_j$ for each $j$. This does not intersect $E_j$. Hence by Lemma \ref{lemma:dimension2uniruled3} we then get that $A$ is compactified $2$ uniruled. \qed \proof of Theorem \ref{theorem:acyclicsurfaceinvariance}. In order to prove this theorem we only need to show the following fact: {\it If $A$ has log Kodaira dimension $i$ where $i \leq 1$, then $B$ has log Kodaira dimension $ \leq i$.} This is because log Kodaira dimension is at most $2$, so applying this fact with the roles of $A$ and $B$ exchanged forces the two log Kodaira dimensions to agree. By \cite{Fujita:1982alg} we have that the log Kodaira dimension of $A$ is either $-\infty$, $1$ or $2$. Also if it is equal to $-\infty$ then $A = \mathbb C^2$. Suppose the log Kodaira dimension of $A$ is $-\infty$; then $A = \mathbb C^2$. Also $B$ is diffeomorphic to $A$ and hence contractible and simply connected at infinity. By \cite{Ramanujam:affineplane} we then get that $B$ is isomorphic to $\mathbb C^2$ and hence has log Kodaira dimension $-\infty$. Now suppose that $A$ has log Kodaira dimension $1$. By Lemma \ref{lemms:unirulednessforlkd1} we have that $A$ is compactified $2$ uniruled, so by Lemma \ref{lemma:compactifieduniruledlkd}, $B$ has log Kodaira dimension $\leq 1$. Putting everything together gives us that $A$ and $B$ must have the same log Kodaira dimension. \qed \subsection{Dimension 3} \begin{theorem} \label{theorem:logkodairaresultsindimension3} Suppose that $A$ is a smooth affine variety of dimension $3$ such that $A$ admits a compactification $X$ with the following properties: \begin{enumerate} \item The compactification divisor $D$ is smooth normal crossing and nef. \item The linear system $|D|$ contains a smooth member. \end{enumerate} Let $B$ be any smooth affine variety symplectomorphic to $A$. If $\overline{\kappa}(A) = 2$ then $\overline{\kappa}(B) \leq 2$. \end{theorem} \proof of Theorem \ref{theorem:logkodairaresultsindimension3}. By \cite{Kishimoto:affinethreefolds} we have that $A$ admits a $\mathbb C^*$ fibration. In fact we can say more: \cite[Lemma 4.1,4.2,4.3]{Kishimoto:affinethreefolds} says that there is a projective variety $X^{s}$ and a nef divisor $D^{s}$ so that \begin{enumerate} \item $A = X^{s} \setminus D^{s}$. \item There is a morphism $\pi : X^{s} \to W$ of projective varieties with the property that a generic fiber is isomorphic to $\P^1$ and intersects $D^{s}$ twice. \end{enumerate} By \cite{hironaka:resolution} we can blow up $X^{s}$ away from $A$ giving us a smooth projective variety $X$ so that the total transform $D$ of $D^{s}$ is a smooth normal crossing divisor. Let $\widetilde{\pi} : X \to W$ be the composition of $\pi$ with the blowdown map $X \to X^{s}$. Let $D_X$ be the effective divisor which is the pullback of $D^{s}$ under the blowdown map. We have that $D_X$ is nef and that a generic fiber $F$ of $\widetilde{\pi}$ satisfies $F.D_X = 2$. By Lemma \ref{lemma:nefdivisoruniruled}, we then get that $A$ is compactified $2$ uniruled. Hence by Lemma \ref{lemma:compactifieduniruledlkd}, we get that the log Kodaira dimension of $B$ is $\leq 2$. \qed \section{Uniruledness of compactifications} \label{section:unirulednesscompactifications} If a projective variety $X$ has a morphism $f : \P^1 \rightarrow X$ passing through every point $x \in X$ then we say that $X$ is {\it uniruled}.
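For instance (a standard example, recalled here only as an illustration and not taken from the references above), the projective space $\P^n$ is uniruled: through every point $x \in \P^n$ there is a line, that is, a degree one morphism $f : \P^1 \rightarrow \P^n$ whose image contains $x$. Similarly, any product $\P^1 \times Y$ with $Y$ projective is uniruled via the maps $t \mapsto (t,y)$.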
\begin{theorem} \label{theorem:birationalinvarianceofuniruledness} Suppose that two smooth projective varieties $P$ and $Q$ have affine open subsets $A$, $B$ with the property that $A$ is symplectomorphic to $B$. Then $P$ is uniruled if and only if $Q$ is. \end{theorem} \proof of Theorem \ref{theorem:birationalinvarianceofuniruledness}. Suppose that $P$ is uniruled, then we will show that $Q$ is uniruled. Let $D_P$ be an effective ample divisor whose support is $P \setminus A$ and $D_Q$ an effective ample divisor whose support is $Q \setminus B$. By \cite{Ruan:virtual} or \cite{Kollar:lowdegree} we have that $\langle [\text{pt}], \alpha_1, \cdots, \alpha_k \rangle^P_{0,\beta} \neq 0$ for some $\beta \in H_2(P,\mathbb Z)$ and cohomology classes $\alpha_1,\cdots,\alpha_k$. Let $k := \beta. D_P$. This means that any compatible $J$ in $P$ which is standard near $D_P$ has the property that there is some $J$ holomorphic curve $u : \Sigma \to P$ passing through any point $p$. Because $D_P$ is nef, each irreducible component $\Sigma_i$ of $\Sigma$ satisfies $u_*(\Sigma_i).D_P \leq k$. In particular this is true for any irreducible component that passes through $p$. Hence $A$ is compactified $k$-uniruled. So by Theorem \ref{theorem:algebraicuniruledimplesuniruled}, we have that the Liouville domain $\overline{A}$ is $(k,\Lambda)$-uniruled for some $\Lambda > 0$. Because the completion of $\overline{A}$ is symplectomorphic to the completion of $\overline{B}$ we then get by Theorem \ref{theorem:symplecticinvariance} that $\overline{B}$ is $(k,\Lambda')$ uniruled for some $\Lambda' > 0$. Hence by Theorem \ref{theorem:kuniruledimpliesalgebraicallyuniruled}, we have that $B$ is algebraically $k$-uniruled. This implies that its compactification $Q$ is uniruled. By symmetry, if $Q$ is uniruled then $P$ is. Hence $P$ is uniruled if and only if $Q$ is uniruled. \qed \section{Appendix : plurisubharmonic functions on smooth affine varieties} The contents of this appendix are all contained inside the proof of \cite[Lemma 2.1]{McLean:affinegrowth} and the ideas of that proof are contained in \cite[Section 4b]{Seidel:biasedview}. We let $A$ be a smooth affine variety. Here we recall the construction of the Liouville domain $\overline{A}$ (see Definition \ref{defn:canonicasymplecticformassociatedtoA}). Choose any algebraic embedding $\iota$ of $A$ into $\mathbb C^N$ (so it is a closed subvariety). We have $\theta_A := -d^c R = \sum_i\frac{r_i^2}{2} d\vartheta_i$ where $(r_i,\vartheta_i)$ are polar coordinates for the $i$th $\mathbb C$ factor. We have that $d\theta_A$ is equal to the standard symplectic structure on $\mathbb C^N$. By abuse of notation we write $\theta_A$ for $\iota^* \theta_A$, and $\omega_A := d\theta_A$. Here $(\overline{A},\theta_A) := (R^{-1}(-\infty,C],\theta_A)$ for $C \gg 0$. We can also construct other Liouville domains as follows (see Definition \ref{defn:symplecticformonaffinevariety}): Let $X$ be a smooth projective variety such that $X \setminus A$ is a smooth normal crossing divisor (an SNC compactification). Let $L$ be an ample line bundle on $X$ given by an effective divisor $D$ whose support is $X \setminus A$. From now on such a line bundle will be called a {\bf line bundle associated to an SNC compactification $X$ of $A$}. Suppose $|\cdot|$ is some metric on $L$ whose curvature form is a positive $(1,1)$ form. Then if $s$ is some section of $L$ such that $s^{-1}(0) = D$ then we define $\phi_{s,|\cdot|} := -\log{|s|}$ and $\theta_{s,|\cdot|} := -d^c \phi_{s,|\cdot|}$. 
The two-form $d\theta_{s,|\cdot|}$ extends to a symplectic form $\omega_{|\cdot|}$ on $X$ (which is independent of $s$ but does depend on $|\cdot|$). We will say that $\phi_{s,|\cdot|}$ is a {\it plurisubharmonic function associated to $L$}, $\theta_{s,|\cdot|}$ a {\bf Liouville form associated to $L$} and $\omega_{|\cdot|}$ a {\bf symplectic form on $X$ associated to $L$}. From \cite[Section 4b]{Seidel:biasedview}, we have that for $C \gg 1$,\[(A_C,\theta_C) := (\phi_{s,|\cdot|}^{-1}(-\infty,C], \theta_{s,|\cdot|})\] is a Liouville domain. Let $(r_i,\vartheta_i)$ be polar coordinates for the $i$'th factor in $\mathbb C^N$. \begin{lemma} \label{lemma:nosingularitiesofaparticularfunction} If we compactify $\mathbb C^N$ by $\P^N$, there is a section $S$ of ${\mathcal O}(1)$ and a metric $\|\cdot\|$ with the following properties: \begin{enumerate} \item $-\log{\|S\|}|_A$ is equal to $f(R)$ for some non-decreasing smooth function $f : \mathbb R \to \mathbb R$. \item $-\log{\|S\|}|_A$ has no singularities near infinity. \end{enumerate} Hence $R|_A$ has no singularities near infinity. \end{lemma} \proof of Lemma \ref{lemma:nosingularitiesofaparticularfunction}. Let $H := \P^N \setminus \mathbb C^N$ and let $S$ be a section of ${\mathcal O}(1)$ such that $S^{-1}(0) = H$. Let $\|\cdot\|$ be the standard Fubini Study metric on ${\mathcal O}(1)$. We have that $U(N+1)$ acts on $\P^N$ and it naturally lifts to an action on the total space of ${\mathcal O}(1)$. Let $U(N) \subset U(N+1)$ be the natural subgroup that preserves $H$. We have that $\|S\|$ is invariant under this action. Because $-\log{\|S\|}$ is invariant under this action and exhausting, it is equal to $f(R)$ for some non-decreasing smooth function $f : \mathbb R \to \mathbb R$. This is because $U(N)$ acts transitively on the level sets of $R$. Let $X$ be the closure of $A$ in $\P^N$. By \cite{hironaka:resolution} we can blow up $\P^N$ along $H$ so that the proper transform $\widetilde{X}$ of $X$ is smooth and the total transform $\widehat{H}$ of $H \cap X$ inside $\widetilde{X}$ is a smooth normal crossing divisor. We pull back the line bundle ${\mathcal O}(1)|_X$ to a line bundle $L_X$ on $\widetilde{X}$ and also pull back the metric $\|\cdot\|$ and section $S$. We will write $\|\cdot\|_X$ and $S_X$ for the new metric and section. Let $p \in \widehat{H}$ and choose local holomorphic coordinates $z_1,\cdots,z_n$ on $\widetilde{X}$ and a trivialization of $L_X$ around $p$ so that $S_X = z_1^{w_1} \cdots z_n^{w_n}$ ($w_i \geq 0$). The metric $\|.\|_X$ on $L_X$ is equal to $e^{\psi}|.|$ for some function $\psi$ with respect to this trivialization where $|.|$ is the standard metric on $\mathbb C$. So \[-d\log{\|S_X\|_X} = -d\psi - (\sum_i w_i d\log{|z_i|}).\] If we take the vector field $Y := -r_1 \partial_{r_1} - \cdots - r_n \partial_{r_n}$ (where $z_j = r_je^{i\vartheta_j}$), then $d\log{(|z_j|)}(Y) = -1$ and $d\psi(Y)$ tends to zero. Hence $d\log{\|S_X\|_X}$ is non-zero near infinity which implies that $f(R)|_A = -\log{\|S\|_X}$ has no singularities near infinity. \qed \begin{lemma} \label{lemma:affineLiouvilledomain} For $C \gg 1$, $(A_C,\theta_C)$ is Liouville deformation equivalent to $(\overline{A},\theta_A)$. \end{lemma} \proof of Lemma \ref{lemma:affineLiouvilledomain}. By Lemma \ref{lemma:nosingularitiesofaparticularfunction} we have that $R|_A$ has no singularities for $R \geq C$. Let $c \geq C$ and write $A'_c := (R|_A)^{-1}(-\infty,c]$.
Because $c$ is a regular value of $R|_A$, we have that $A'_c$ is a Liouville domain and by definition it is equal to $(\overline{A},\theta_A)$. Let $S$ and $\|\cdot\|$ be the section and metric on ${\mathcal O}(1)$ coming from Lemma \ref{lemma:nosingularitiesofaparticularfunction}. We also have that $f(R) = -\log{\|S\|}$ where $f$ is a smooth function with positive derivative when $R$ is large. Let $A''_c := (-\log{\|S\|})^{-1}(-\infty,c]$. We have that $A'_{f(c)} = A''_c$ for $c \gg 1$. We also have that $t \theta_A + (1-t) \theta_{S,\|\cdot\|}|_A$ is a deformation of Liouville domains from $(A'_{f(c)},\theta_A)$ to $(A''_c, \theta_{S,\|\cdot\|})$. Let $\phi_{s,|\cdot|}$ be a plurisubharmonic function associated to our line bundle $L$ so that $A_C = \phi_{s,|\cdot|}^{-1}(-\infty,C]$. Let $\phi_t := (1-t)\phi_{s,|\cdot|} - t \log{\|S\|}$ and let $A^t_c := \phi_t^{-1}(-\infty,c]$. By using work from \cite[Section 4b]{Seidel:biasedview}, we have for $C$ large enough that $(A^t_C, -d^c \phi_t)$ is a Liouville deformation from $(A''_C,\theta_{S,\|\cdot\|})$ to $(A_C,\theta_C)$. Hence by composing the above two Liouville deformations, we get that $(A_C,\theta_C)$ is Liouville deformation equivalent to $(\overline{A},\theta_A)$ for $C$ large enough. \qed
\section{Introduction} \vspace{-2mm} Expectations for the emergence of artificial intelligence are growing these days, triggered by the recent results in reinforcement learning (RL) using a deep neural network (NN)\cite{DQN,AlphaGo}. Our group has propounded for around 20 years that end-to-end RL from sensors to motors using a recurrent NN (RNN) plays an important role in this emergence\cite{Intech,RLDM17}. In particular, different from ``recognition'', whose inputs are given as sensor signals, or ``control'', whose outputs are given as motor commands, higher functions are very difficult to design by human hands, and much is expected of the function emergence approach through end-to-end RL. Our group has shown that not only recognition and motion, but also memory, prediction, individuality, and also activities similar to those in the monkey brain during tool use emerge\cite{RLDM17}. We have also shown that a variety of communications emerge in the same framework\cite{RLDM17COM}. However, the emergence of what can be called ``thinking'', which is one of the typical higher functions, has not been shown yet. In this paper, the difficulty of the emergence of ``thinking'' is discussed at first. Then our hypothesis that ``exploration'' grows into ``thinking'' through learning is introduced\cite{IJCNN15}. To realize the hypothesis, the use of a chaotic NN (ChNN) in RL and a new deterministic RL for it are introduced\cite{IJCNN15}. Finally, it is shown that the new RL works in a simple task\cite{JCSS}, though that cannot be called ``thinking'' yet and there is still much room for improvement. No other works with a similar direction to ours have been found. \section{Difficulty in Emergence of ``Thinking''} \vspace{-2mm} The definition of ``thinking'' varies depending on the person. However, we can ``think'' even when we close our eyes and ears, and what we think does not change randomly, but logically or rationally. Therefore, we hope many readers can agree that in order to realize ``thinking'', rational multi-stage or flow-type state transitions should be formed. As a kind of dynamic function, we have shown that a variety of functions that require memory emerge in an RNN in simple tasks\cite{RLDM17}. It is not so difficult to form memories as fixed-point convergence dynamics if the initial feedback connection weights are set such that the transition matrix for the connection is the identity matrix or close to it when linearly approximated. That can also solve the vanishing gradient problem in error back propagation. However, state transition needs not only convergence dynamics for association, but also transition dynamics from one state to another. Therefore, we employed a multi-room task in which an agent moves around and, when it pushes a button, one of the doors opens and the agent can move to another room. The sensor signals are the inputs of the agent's RNN, and the RNN was trained based on RL from a reward given when it reached the goal. We expected that the internal state would change drastically between before and after the door opened, though the difference in the sensor signals was not so large. After learning, a large change in the internal state could be observed to some degree, but the learning was very difficult\cite{Sawatsubashi}. Furthermore, when we ``think'', ``inspiration'' or ``discovery'', which is a kind of unexpected but not completely random and still rational transition, must also be essential.
The convergence and transition dynamics seem to contradict each other at a glance, and it seems very difficult to form both dynamics from scratch in a regular RNN. \begin{figure}[b] \center \begin{tabular}{ccc} \begin{minipage}{.35\textwidth} \centering \includegraphics[height=3.4cm]{ForkProblem.eps} \caption{Lower or higher exploration\\ at a fork.} \label{fig:Fork} \end{minipage} \begin{minipage}{.50\textwidth} \centering \includegraphics[height=3.7cm]{ExplorationThinking.eps} \caption{``Exploration'' and ``thinking'' are both a kind of internal dynamics and deeply related to each other.} \label{fig:ExplorationThinking} \end{minipage} \end{tabular} \end{figure} \section{Chaos and Hypothesis: Growth from ``Exploration'' to ``Thinking'' through Learning} \vspace{-2mm} Suppose we are standing at a forked road as shown in Fig. \ref{fig:Fork}. Usually, we choose one from two options: going right or going left. We do not consider many other possible actions such as going straight or dancing. This is not motor(actuator)-level lower exploration, but higher exploration supported by some prior or learned knowledge\cite{Higher}. Furthermore, at a fork, we may wonder, ``this path looks rougher, but that way looks to go away from the destination''. This can be considered as a kind of ``exploration'' and also as a kind of ``thinking''. The place where this wavering in mind occurs is inside the process before making a decision, and learning should be reflected largely in the exploration. The author's group has thought that exploration should be generated inside a recurrent NN (RNN) that generates motion commands\cite{Exploration}. ``Exploration'' and ``thinking'' are both generated as internal dynamics as shown in Fig. \ref{fig:ExplorationThinking}. ``Exploration'' consists of more random-like state transitions. On the other hand, ``thinking'' is a more rational or logical state transition, and sometimes higher exploration or an unexpected but rational state transition such as inspiration or discovery occurs in it. \begin{figure} \centering \includegraphics[height=6.9cm]{AttractorsAndChaos.eps} \vspace{-2mm} \caption{Rough schematic diagram of the combination of attractors and chaotic dynamics. See the text for details.} \label{fig:AttractorsAndChaos} \end{figure} An analogy to such dynamics can be found in chaotic dynamics. In regular associative memory, a fixed-point attractor (basin) is formed around each memorized or learned pattern as shown in the upper left part of Fig. \ref{fig:AttractorsAndChaos}. However, when a ChNN is used in it, transition dynamics among the memorized patterns called ``Chaotic Itinerancy''\cite{Itinerancy} can be seen as the green arrows in the lower left part of Fig. \ref{fig:AttractorsAndChaos}. If rational or logical state transitions are learned, it is expected that flow-type attractors are formed as the red or black arrows in the lower right part of Fig. \ref{fig:AttractorsAndChaos}. It is also expected that, as the green arrows show, (1) inspiration or discovery emerges as irregular transitions among the attractors that still reflect the distance between them, (2) higher exploration emerges at branches of the flow, and (3) in unknown situations, no attractor is formed, and the remaining random-like chaotic dynamics appears until returning to a known situation. Skarda et al. reported that the activities in the olfactory bulb in rabbits become chaotic for unknown stimuli\cite{Skarda}. Osana et al.
have shown the difference in chaotic properties between known and unknown patterns in an associative memory using a ChNN, and also that after an unknown pattern is learned, association to the pattern is formed just as for the other known patterns\cite{Osana}. From the above discussion, we have hypothesized that {\bf ``exploration'' grows into ``thinking'' through learning by forming flow-type attractors on chaotic random-like dynamics and that this can be realized by reinforcement learning using a ChNN}. \section{New Reinforcement Learning (RL) Using a Chaotic Neural Network (ChNN)} In order to realize the above idea, our group has proposed a new reinforcement learning method using a ChNN\cite{IJCNN15}. The positioning of exploration in learning is completely different from the conventional one. Here, the chaotic dynamics inside the ChNN produces exploration factors by itself. Since external random numbers for stochastic action selection are not used, exploration factors cannot be isolated from the output. Then the learning method has to be completely different from the conventional one. Assuming that the motions are continuous, an actor-critic type reinforcement learning architecture is employed. Here, to isolate the chaotic dynamics from the critic, the actor is implemented in a ChNN and the critic is implemented in another regular layered NN as shown in Fig. \ref{fig:Task}. The inputs are the sensor signals for both networks. Here, only the learning of the actor ChNN, which is largely different from conventional reinforcement learning, is explained in comparison with the conventional one using Fig. \ref{fig:ChaosNN}. In our conventional works, as shown in Fig. \ref{fig:ChaosNN}(a), by adding a random number (noise) $rnd_{j,t}$ to each actor output, the agent explores. The actor network is trained by BPTT (Back Propagation Through Time) using the product of the random number and the TD error as the error for each output of the RNN. In the proposed method, there are no external random numbers added to the actor outputs. The network is a kind of RNN, but by setting each feedback connection to a large random value, it can produce chaotic dynamics internally. Because the learning of recurrent connections does not work well, only the connections from inputs to hidden neurons and from hidden neurons to output neurons are trained. One variable $C_{ji}$ named causality trace is put on each connection, and it takes in and maintains the input through the connection according to the change in its output as \begin{equation} C^{[l]}_{ji,t}=(1-|\Delta x^{[l]}_{j,t}|)C^{[l]}_{ji,t-1}+\Delta x^{[l]}_{j,t} x^{[l-1]}_{i,t} \label{Eq:C-Trace-RL} \end{equation} where $x^{[l]}_{j,t}$ is the output of the $j$-th neuron in the $l$-th layer at time $t$ and $\Delta x_t = x_t - x_{t-1}$.\vspace{1mm}\\ Using the causality trace $C_{ji}$ and the TD error ${\hat r}_t$, the weight $w^{[l]}_{ji}$ from the $i$-th neuron in the $(l\!\!-\!\!1\!)$-th layer to the $j$-th neuron in the $l$-th layer is updated with a learning rate $\eta$ as \begin{equation} \vspace{-1mm} \Delta w^{[l]}_{ji,t}=\eta {\hat r_t} C^{[l]}_{ji,t}.
\label{Eq:LearnActor} \end{equation} \begin{figure} \centering \includegraphics[height=6.4cm]{ChaosRL.eps} \caption{Comparison of conventional RL and proposed RL\\ (only actor network) \cite{IJCNN15}} \label{fig:ChaosNN} \end{figure} \section{Learning of Obstacle Avoidance} \begin{wrapfigure}[26]{r}{80mm} \vspace*{-\intextsep} \centering \includegraphics[height=8.4cm]{Task.eps} \caption{Learning of obstacle avoidance for a robot with two wheels and two sensors using a ChNN.} \label{fig:Task} \end{wrapfigure} Since the learning is completely different from the conventional one, it is necessary to show whether the learning works appropriately in a variety of tasks or not. It has already been applied to several tasks\cite{IJCNN15}\cite{Higher}. In this paper, the result for a recent task in which a robot has two wheels and two visual sensors is shown\cite{JCSS}. As shown in Fig. \ref{fig:Task}, there is a $20 \!\!\times \!\!20$ field, and its center is the origin of the field. What the robot has to do is to reach a goal while avoiding an obstacle. The goal is put at the location (0, 8) with radius $r\!\!=\!\!1.0$, and a robot ($r\!\!=\!\!0.5$) and an obstacle ($r\!\!=\!\!1.5$) are put randomly at each episode. The orientation of the robot is also decided randomly. Each of the two omnidirectional visual sensors has 72 cells, and catches only the goal or the obstacle, respectively. A total of 144 sensor signals are the inputs of both the critic and actor networks, and the right and left wheels rotate according to the two actor outputs. When the robot comes into the goal area, a reward is given, and when it collides with the obstacle, a small penalty is given. One episode is defined as the period until arrival at the goal or 1000 steps from the start. Both NNs have three layers including the input layer. The number of hidden neurons is 10 for the critic network and 100 for the actor ChNN. \begin{figure*}[] \centering \includegraphics[scale=0.71]{Results.eps} \vspace{-1mm} \caption{Learning results\cite{JCSS}. In (b) and (c), the initial robot location is at the center and its initial orientation is toward the upper side in the figure. Actually, the goal is located at the same place, but in this figure its location is varied relatively.} \label{fig:Results} \end{figure*} The learning results are shown in Fig. \ref{fig:Results}. Fig. \ref{fig:Results}(a) shows the learning curve, and the vertical axis indicates the number of steps to the goal. The blue line shows its average over every 100 episodes. It can be seen that as the number of episodes increases, the number of steps decreases. Fig. \ref{fig:Results}(b) and (c) show the robot trajectories before and after learning, respectively. Actually, the goal location is fixed and the initial robot location is varied. But to show the initial robot orientation, the robot is drawn at the center of the field with its orientation toward the upper side, and the goal location is varied relatively instead. The size of the obstacle in the figure shows the area where the robot collides with the obstacle and is therefore larger than the actual size. Before learning, the robot explored almost randomly, but after learning, it could reach the goal while avoiding the obstacle. In conventional learning, the randomness of action selection is often decreased as learning progresses, but here, it can be seen that the exploration factors decreased autonomously with learning. However, there are some strange behaviors such as the robot suddenly changing its direction.
It was observed that the actor hidden neurons were likely to have a value around the maximum of 0.5 or the minimum of -0.5. There is still much room to improve the learning method. Fig. \ref{fig:Results}(d) shows the Lyapunov exponent, computed to examine the chaotic property of this system including the environment. The robot is located at one of the 8 locations in Fig. \ref{fig:Results}(c), and the obstacle is also located at one of 8 locations. For each of the $8 \times 8 = 64$ combinations, a small perturbation vector with size $d_{before}$ is added to the internal states of the hidden neurons, and the distance $d_{after}$ between the internal states at the next time step with and without the perturbation is measured. The average of $\ln(d_{after}/d_{before})$ is used as the exponent here. From the figure, it can be seen that the Lyapunov exponent gradually decreases, but since the value stays above 0.0, it is considered that the chaotic property is maintained. In \cite{IJCNN15}, it was observed that when the environment changed, the Lyapunov exponent increased again.
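To make the above Lyapunov-exponent estimate concrete, the following short sketch (in Python with NumPy) computes the average one-step expansion rate $\ln(d_{after}/d_{before})$ over the sampled configurations. It is only an illustration: the function \texttt{step\_hidden}, which should return the hidden-state vector of the actor ChNN at the next time step for the coupled robot--network system, the list of sampled internal states and the perturbation size are assumed placeholders and are not taken from the original implementation.
\begin{verbatim}
import numpy as np

def estimate_lyapunov(step_hidden, sampled_states, d_before=1e-6, seed=0):
    # step_hidden(h): assumed placeholder returning the hidden-state vector at
    # the next time step when the coupled robot/ChNN system starts from state h.
    # sampled_states: internal states recorded for the 8 x 8 = 64 robot/obstacle
    # placements described above.
    rng = np.random.default_rng(seed)
    logs = []
    for h in sampled_states:
        delta = rng.normal(size=h.shape)
        delta *= d_before / np.linalg.norm(delta)   # perturbation of size d_before
        d_after = np.linalg.norm(step_hidden(h + delta) - step_hidden(h))
        logs.append(np.log(d_after / d_before))     # one-step expansion rate
    return float(np.mean(logs))                     # > 0 indicates chaotic behaviour
\end{verbatim}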
\section{Introduction} \label{s0} It has been an important subject in differential geometry to study when a smooth manifold carries a Riemannian metric of positive scalar curvature. A famous theorem of Gromov and Lawson \cite{GL80}, \cite{GL83} states that an area enlargeable manifold (in the sense of \cite{GL83}) does not carry a metric of positive scalar curvature. \begin{defn}\label{t0.1} (Gromov-Lawson \cite{GL83}) One calls a closed manifold $W$ (carrying a metric $g^{TW}$) an enlargeable manifold if for any $\epsilon>0$, there is a covering manifold $\pi:\widehat W_\epsilon\rightarrow W$, with $\widehat W_\epsilon$ being spin, and a smooth map $f:\widehat W_\epsilon\rightarrow S^{\dim W}(1)$ (the standard unit sphere), which is constant near infinity and has non-zero degree, such that for any two-form $\alpha\in \Omega^2(S^{\dim W}(1))$, one has $|f^*(\alpha)|\leq \epsilon |\alpha|$. \end{defn} It is clear that the enlargeability does not depend on the metric $g^{TW}$. \begin{thm}\label{t0.2} Let $W$ be a closed area enlargeable manifold and $M$ an arbitrary spin manifold of equal dimension. Then the connected sum $M\# W$ does not admit any complete metric of positive scalar curvature. \end{thm} When $M$ is closed, Theorem \ref{t0.2} is exactly the Gromov-Lawson theorem \cite{GL80}, \cite{GL83} mentioned at the beginning. When $W=T^n$, Theorem \ref{t0.2} solves the following generalized Geroch conjecture (cf. \cite[Conjecture 1.4]{Zhu22}) in the spin setting. \begin{con}\label{t0.3} For any manifold $M$ of dimension $n$, there is no complete metric of positive scalar curvature on $T^n\# M$. \end{con} The remarkable fact (cf. \cite{Lo99}) is that Conjecture \ref{t0.3} for the case of compact $M$ (first proved by Schoen-Yau \cite{SY79} in dimension $\leq 7$) implies the positive mass theorem for $M$, which in the spin case was proved by Witten \cite{W81} (see also \cite{PT}) using Dirac operators (the classical positive mass theorem in dimension three was first proved by Schoen-Yau \cite{SY79a} using the minimal hypersurface method, which works for dimensions $\leq 7$, cf. \cite{SY79b}), while a proof in the nonspin case for all dimensions is given by Schoen-Yau \cite{SY17} by further developing their minimal hypersurface techniques (see also Lohkamp \cite{Lo16} for another minimal hypersurface approach in the higher dimensional situation). If $3\leq \dim M\leq 7$, then Conjecture \ref{t0.3} has been proved for arbitrary $M$ by Chodosh and Li \cite{CL20}, with the case of $\dim M=3$ also proved by Lesourd-Unger-Yau \cite{LUY20}. A recent paper by Zhu \cite{Zhu22} shows that Conjecture \ref{t0.3} implies the positive mass theorem with arbitrary ends, which in the spin setting has been proved in Cecchini-Zeidler \cite[Theorem B]{CZ21}. Thus Theorem \ref{t0.2} gives an alternate proof of the positive mass theorem with arbitrary ends in the spin setting. Our proof of Theorem \ref{t0.2} is based on deformed Dirac operators as was used in \cite{Z20}. Indeed, by using a trick in \cite{SZ18} (which goes back to \cite{GL83}), we show that Theorem \ref{t0.2} reduces to the situation already considered in \cite{Z20}. \section{A proof of Theorem \ref{t0.2}} \label{s1} Let $W$ be a closed area enlargeable manifold. Let $M$ be a ${\rm spin}$ manifold. Without loss of generality, we assume that $W$ is spin and that $\dim M=\dim W=n$. Let $h^{TW}$ be a metric on $TW$. As in \cite{SZ18} (which goes back to \cite{GL83}), we fix a point $p\in W$. For any $r\geq 0$, let $B^W_p(r)=\{y\in W\,:\, d(p,y)\leq r\}$.
Let $b_0>0$ be a fixed sufficiently small number. Then the connected sum $M\# W$ can be constructed so that the hypersurface $\partial B^W_p(b_0)$, which is the boundary of $B^W_p(b_0)$, cuts $M\# W$ into two parts: the part $W\setminus B^W_p(b_0)$ and the remaining part coming from $M$ (by attaching the boundary of a ball in $M$ to $\partial B^W_p(b_0)$). For any $\epsilon>0$, let $\pi:\widehat W_\epsilon\rightarrow W$ be a covering manifold verifying Definition \ref{t0.1}, carrying lifted geometric data from that of $W$. Let $b_0>0$ be small enough so that for any $p',\,q'\in\pi^{-1}(p)$ with $p'\neq q'$, $\overline{B^{\widehat W_\epsilon}_{p'}(4b_0)}\cap \overline{B^{\widehat W_\epsilon}_{q'}(4b_0)}=\emptyset$. It is clear that one can choose $b_0>0$ not depending on $\epsilon$. Let $h: W\rightarrow W$ be a smooth map such that $h={\rm Id}$ on $W\setminus B^W_p(3b_0)$, while $h( B_p^W(2b_0))=\{p\}$. It lifts to a map $\widehat h_\epsilon:\widehat W_\epsilon\rightarrow \widehat W_\epsilon$ verifying that $\widehat h_\epsilon={\rm Id}$ on $\widehat W_\epsilon\setminus \bigcup_{p'\in\pi^{-1}(p)}B_{p'}^{\widehat W_\epsilon}(3b_0)$, while for any $p'\in\pi^{-1}(p)$, $\widehat h_\epsilon(B_{p'}^{\widehat W_\epsilon}(2b_0))=\{p'\}$. Let $f : \widehat W_\epsilon\rightarrow S^{n}(1)$ be the map given in Definition \ref{t0.1}, where for simplicity we assume that each $ \widehat W_\epsilon$ is compact. Set $\widehat f=f\circ \widehat h_\epsilon:\widehat{ W}_\epsilon\rightarrow S^n(1)$. Then ${\rm deg}(\widehat f)={\rm deg}(f)$ and there is a positive constant $c>0$ (not depending on $\epsilon$) verifying that for any $\alpha\in\Omega^2(S^n(1))$, one has \begin{align}\label{1.1} |\widehat f^*(\alpha)|\leq c\,\epsilon\, |\alpha|. \end{align} The connected sum $M\# W$ lifts naturally to $\widehat W_\epsilon$ where near each $p'\in\pi^{-1}(p)$, we do the lifted connected sum, i.e., do the connected sum $M_{p'}\# B_{p'}^{\widehat W_\epsilon}(2b_0)$, where $M_{p'}$ is a copy of $M$. We denote the resulting manifold by $\widehat M\#\widehat W_\epsilon$. Clearly, any metric on $T(M\# W)$ lifts to a metric on $T(\widehat M\#\widehat W_\epsilon)$. We extend $\widehat f:\widehat W_\epsilon\rightarrow S^n(1)$ to $\widehat f:\widehat M\#\widehat W_\epsilon\rightarrow S^n(1)$ by setting $\widehat f(M_{p'}\# B_{p'}(2b_0))=f(p')$ for any $p'\in\pi^{-1}(p)$. Clearly, $\widehat f:\widehat M\#\widehat W_\epsilon\rightarrow S^n(1)$ is locally constant on $(\widehat M\#\widehat W_\epsilon)\setminus (\widehat W_\epsilon \setminus \cup _{p'\in\pi^{-1}(p)} B_{p'}(b_0))$. Moreover, we still have ${\rm deg}(\widehat f)={\rm deg}(f)$. Let $g^{T(M\# W)}$ be any complete metric on $T(M\# W)$ of positive scalar curvature. Since $ \overline{ W \setminus B_{p}(b_0)}$ is compact in $ M\# W $, there is $\delta>0$ such that the corresponding scalar curvature satisfies \begin{align}\label{1.2} k^{g^{T(M\# W)}}\geq \delta\ \ {\rm on}\ \ (W\setminus B_{p}(b_0))\subset M\# W. \end{align} A similar inequality also holds for the scalar curvature of the lifted metric $g^{T(\widehat M\# \widehat W_\epsilon )}$ on $(\widehat W_\epsilon \setminus \cup _{p'\in\pi^{-1}(p)} B_{p'}(b_0))\subset \widehat M\#\widehat W_\epsilon$. By multiplying $g^{T(\widehat M\# \widehat W_\epsilon )}$ with a constant, we can and will assume that $\delta=n^2$. Moreover, we also see from the compactness of $ \overline{ W \setminus B_{p}(b_0)}$ that (\ref{1.1}) still holds for $g^{T(\widehat M\# \widehat W_\epsilon )}$, possibly with a different $c$.
By taking $\epsilon>0$ small enough, we see from (\ref{1.1}) and (\ref{1.2}) that $\widehat f:( \widehat M\#\widehat W_\epsilon, g^{T(\widehat M\# \widehat W_\epsilon )})\rightarrow S^n(1)$ verifies the area decreasing condition, is locally constant near infinity, and that \begin{align}\label{1.3} k^{g^{T(\widehat M\# \widehat W_\epsilon )}}\geq n^2 \ \ {\rm on\ the\ support\ of} \ {\rm d}f. \end{align} By \cite[Theorems 2.1 and 2.2]{Z20}, one sees that if ${\rm deg}(\widehat f)={\rm deg}(f)\neq 0$, then ${\rm inf}(k^{g^{T(\widehat M\# \widehat W_\epsilon )}})<0 $, which contradicts the assumption that $k^{g^{T(M\#W)}}>0$. $\ $ \noindent{\bf Acknowledgments.} The authors would like to thank Yuguang Shi and Guofang Wang for helpful discussions. W. Zhang was partially supported by NSFC Grant no. 11931007 and the Nankai Zhide Foundation. X. Wang was partially supported by NSFC Grant no. 12101361 and the fundamental research funds of Shandong University, Grant no. 2020GN063.
\section{Introduction} Quantum contextuality is a feature of quantum physics which contradicts the prediction of theories assuming realism and non-contextuality. Like entanglement, quantum contextuality was first considered as a paradox and is now recognized as a quantum resource for quantum information and quantum computation \cite{HWVE, BDBOR}. The Bell-Kochen-Specker Theorem \cite{B,KS}, often called Kochen-Specker Theorem (KS), is a no-go result that proves quantum contextuality by establishing that any Hidden-Variable (HV) theory that reproduces the outcomes of quantum physics should be contextual. In other words, if such an HV theory exists the deterministic functions describing the measurements are context-dependent, i.e. depend on the set of compatible, i.e. mutually commuting, measurements that are performed in the same experiment. It is this property of not being reproducible by any Non-Contextual Hidden Variables (NCHV) theory that we call quantum contextuality. In the 90's David Mermin \cite{M} and Asher Peres \cite{P} proposed operator-based proofs of the Kochen-Specker Theorem using configurations of multi-qubits Pauli observables. Their most famous ``simple'' proof is the Magic Peres-Mermin square whose one example is reproduced in Figure \ref{fig:mermin}. \begin{figure}[!h] \begin{center} \includegraphics[width=5cm]{grid1.pdf} \caption{A Mermin-Peres magic square: Each node of the grid corresponds to a two-qubit Pauli observable with eigenvalues $\pm 1$. We use the shorthanded notation $A\otimes B\simeq AB$ where $A, B$ are Pauli matrices from $\{I, X,Y,Z\}$. Each line/row is a context, i.e. a set of mutually commuting operators. The product of three observables of a context (row/line) is $\pm I_4$ as indicated by the signs. Because there is an odd number of negative lines (here one) it is straightforward to check that there is no context-independent function $f$ which can assign $\pm 1$ to each node and satisfy also the sign constrains.}\label{fig:mermin} \end{center} \end{figure} Just a decade ago, testing quantum contextuality of the Mermin-Peres square in a lab was a great challenge \cite{ARBC}. But, in a recent preprint by Altay Dikme {et al.} \cite{DRLB}, it was shown that measurements predicted by quantum physics for such a configuration of observables can be tested on an online NISQC\footnote{The Noisy Intermediate Scale Quantum Computers used in both this paper and \cite{DRLB} are the online accessible quantum computers provided by IBM through the IBM Quantum Experience \cite{ibmq}.}. The authors also checked that the outcomes of the measurements of their experiences cannot be explained by a NCHV theory. The Mermin-Peres grid has been investigated over the past 15 years from the perspective of finite geometry. It was, for instance proved \cite{HS}, that Mermin's grids, as well as other configurations known as Mermin's pentagrams, were the smallest, in terms of the number of contexts and operators, observable-based proofs of the KS Theorem. The geometry where those contextual configurations live is called the symplectic polar space of rank $N$ and order $2$ and will be denoted as $\mathcal{W}(2N-1,2)$. The correspondence between sets of mutually commuting $N$-qubit Pauli operators and totally isotropic subspaces of the symplectic polar space was established in \cite{SP,T,HOS}. 
This correspondence has been proved to be useful to describe quantum contextuality \cite{PS}, MUBs \cite{SPR}, symmetries of black-hole entropy formulas \cite{LSVP} (as part of the black-hole/qubits correspondence) and other topics connecting quantum physics, error-correcting codes and space-time geometry \cite{LH}. It turns out that $\mathcal{W}(2N-1,2)$ is also useful to define robust quantum contextual inequalities \cite{C} and the purpose of this article is to test quantum contextuality on a NISQC based on the inequalities built from $\mathcal{W}(3,2)$ and $\mathcal{W}(5,2)$. In Section \ref{sec:symplectic}, I recall the geometric construction of the symplectic polar space of rank $N$ and order $2$ that encodes the commutation relations of the generalized $N$-qubit Pauli group. In Section \ref{sec:ibm}, I present the results obtained on the IBM Quantum Experience when measuring one-dimensional contexts of $\mathcal{W}(2N-1,2)$ for $N=2,3$. The results violate strongly the inequalities proposed by Ad\'an Cabello \cite{C} for testing quantum contextuality, i.e. detecting sets of measurements which contradict all NCHV theories. Finally, Section \ref{sec:conclusion} is dedicated to concluding remarks. \section{The symplectic polar space of rank $N$ and order $2$}\label{sec:symplectic} I recall in this section the correspondence between sets of mutually commuting $N$-qubit Pauli operators and totally isotropic subspaces of the symplectic polar space of rank $N$ and order $2$, $\mathcal{W}(2N-1,2)$ \cite{SP,T,HOS}. Let us denote by $\mathcal{P}_N\subset GL_{2^N}(\mathbb{C})$ the $N$-qubit Pauli group. Elements of $\mathcal{P}_N$ are operators $\mathcal{O}$ such that \begin{equation}\label{eq:operator} \mathcal{O}=sA_1\otimes A_2\dots\otimes A_N, \text{ with } s\in \{\pm 1,\pm i\} \text{ and } A_i\in \{I,X,Y,Z\}, \end{equation} with $X,Y,Z$ being the usual Pauli matrices and $I$ the identity matrix. \begin{equation} X=\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}, Y=\begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix}, \text{ and } Z=\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}. \end{equation} Like in Figure \ref{fig:mermin}, one will shorthand the notation of Eq. (\ref{eq:operator}) to $\mathcal{O}=sA_1A_2\dots A_N$. Also recall that the Pauli matrices $\{I,X,Y,Z\}$ can be expressed in terms of the matrix product of $Z$ and $X$ as: \begin{equation} \begin{array}{cc} I=Z^0.X^0\leftrightarrow (0,0) & X=Z^0.X\leftrightarrow(0,1)\\ Y=iZ^1.X^1 \leftrightarrow (1,1) & Z=Z^1.X^0 \leftrightarrow (1,0), \end{array} \end{equation} where $''.''$ is the usual matrix multiplication. Thus Eq. (\ref{eq:operator}) can be expressed as: \begin{equation} \mathcal{O}=s(Z^{\mu_1}.X^{\nu_1})(Z^{\mu_2}.X^{\nu_2})\dots(Z^{\mu_N}.X^{\nu_N}) \text{ with } s\in \{\pm 1,\pm i\}, \mu_i, \nu_j \in \{0,1\}. \end{equation} This leads to the following surjective map: \begin{equation}\label{eq:map} \pi:\left\{\begin{array}{ccc} \mathcal{P}_N & \rightarrow & \mathbb{F}_2^{2N}\\ \mathcal{O}=s(Z^{\mu_1}.X^{\nu_1})(Z^{\mu_2}.X^{\nu_2})\dots(Z^{\mu_n}.X^{\nu_n}) & \mapsto& (\mu_1,\mu_2,\dots,\mu_N,\nu_1,\nu_2,\dots,\nu_N). \end{array}\right. \end{equation} The center of $\mathcal{P}_N$ is $C(\mathcal{P}_N)=\{\pm I_N,\pm i I_N\}$. Thus, $\mathcal{P}_N/C(\mathcal{P}_N)$ is an Abelian group isomorphic to the additive group $\mathbb{F}_2^{2N}$, where $\mathbb{F}_2=\{0,1\}$ is the two-elements field. Indeed, the surjective map given by Eq. (\ref{eq:map}) factors to the isomorphism $\mathcal{P}_N/C(\mathcal{P}_N)\simeq \mathbb{F}_2^{2N}$. 
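To make this correspondence concrete, the following short Python/NumPy sketch (an illustration written for this note, not part of the scripts used for the experiments below; all helper names are mine) implements the map $\pi$ on two-qubit Pauli words and checks numerically that the matrix product in $\mathcal{P}_2$ is turned, up to a phase in $\{\pm 1,\pm i\}$, into addition in $\mathbb{F}_2^{4}$, which is precisely the isomorphism stated above.
\begin{verbatim}
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
MAT = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}
VEC = {'I': (0, 0), 'X': (0, 1), 'Y': (1, 1), 'Z': (1, 0)}   # (mu, nu)
LETTER = {v: k for k, v in VEC.items()}

def matrix(word):
    """Tensor product of one-qubit Pauli matrices, e.g. 'XY' -> X tensor Y."""
    m = np.array([[1.0]], dtype=complex)
    for letter in word:
        m = np.kron(m, MAT[letter])
    return m

def pi(word):
    """pi: Pauli word -> (mu_1,...,mu_N, nu_1,...,nu_N) in F_2^{2N}."""
    mu = [VEC[l][0] for l in word]
    nu = [VEC[l][1] for l in word]
    return np.array(mu + nu) % 2

def word(vec):
    """Inverse of pi on phase-free Pauli words."""
    n = len(vec) // 2
    return ''.join(LETTER[(vec[i], vec[n + i])] for i in range(n))

words = [''.join(w) for w in product('IXYZ', repeat=2)]   # 16 two-qubit words
for w1, w2 in product(words, repeat=2):
    v = (pi(w1) + pi(w2)) % 2                 # addition in F_2^4
    m, ref = matrix(w1) @ matrix(w2), matrix(word(v))
    i, j = np.argwhere(np.abs(ref) > 1e-9)[0]
    s = m[i, j] / ref[i, j]                   # global phase, expected in {1,-1,i,-i}
    assert np.isclose(abs(s), 1) and np.allclose(m, s * ref)
print("pi(O.O') = pi(O) + pi(O') mod 2 for all two-qubit Pauli operators")
\end{verbatim}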
Finally, considering the $(2N-1)$-dimensional projective space over $\mathbb{F}_2$, $PG(2N-1,2)=\mathbb{P}(\mathbb{F}_2 ^{2N})$, one obtains a correspondence between non-trivial $N$-qubit observables, up to a phase $\{\pm 1,\pm i\}$, and points in the projective space $PG(2N-1,2)$. \begin{equation}\label{eq:map2} \underline{\pi}:\left\{\begin{array}{ccc} \mathcal{P}_N/C(\mathcal{P}_N) & \rightarrow & PG(2N-1,2)\\ \overline{\mathcal{O}}=s(Z^{\mu_1}.X^{\nu_1})(Z^{\mu_2}.X^{\nu_2})\dots(Z^{\mu_N}.X^{\nu_N}) & \mapsto& [\mu_1:\mu_2:\dots:\mu_N:\nu_1:\nu_2:\dots:\nu_N]. \end{array}\right. \end{equation} The first correspondence does not say anything about commutation relations in $\mathcal{P}_N$. To recover these commutation relations, one needs to add an extra structure. Let us denote by $\mathcal{O}=s(Z^{\mu_1}.X^{\nu_1})(Z^{\mu_2}.X^{\nu_2})\dots(Z^{\mu_N}.X^{\nu_N})$ and $\mathcal{O}'=s'(Z^{\mu_1'}.X^{\nu_1'})(Z^{\mu_2'}.X^{\nu_2'})\dots(Z^{\mu_N'}.X^{\nu_N'})$ two representatives of the classes $\overline{\mathcal{O}}$ and $\overline{\mathcal{O}'}$. The classes will commute iff $\mathcal{O}.\mathcal{O}'=\mathcal{O}'.\mathcal{O}$. But a straightforward calculation shows that \begin{equation}\mathcal{O}.\mathcal{O}'=ss'(-1)^{\sum_i \nu_i\mu_i'}(Z^{\mu_1+\mu_1'}.X^{\nu_1+\nu_1'})(Z^{\mu_2+\mu_2'}.X^{\nu_2+\nu_2'})\dots (Z^{\mu_N+\mu_N'}.X^{\nu_N+\nu_N'})\end{equation} and, \begin{equation}\mathcal{O}'.\mathcal{O}=ss'(-1)^{\sum_i \nu_i'\mu_i}(Z^{\mu_1+\mu_1'}.X^{\nu_1+\nu_1'})(Z^{\mu_2+\mu_2'}.X^{\nu_2+\nu_2'})\dots (Z^{\mu_N+\mu_N'}.X^{\nu_N+\nu_N'}).\end{equation} Therefore the classes $\overline{\mathcal{O}}$ and $\overline{\mathcal{O}'}$ commute iff $\sum_{i=1} ^N \mu_i\nu_{i}'+\mu_i'\nu_i=0$. Let us define on $PG(2N-1,2)$ the symplectic form: \begin{equation} \langle p,q\rangle=\sum_{i=1} ^N p_iq_{N+i}+p_{N+i}q_i, \end{equation} with $p=[p_1:\dots:p_{2N}]$ and $q=[q_1:\dots:q_{2N}]$. We can now state the definition of the symplectic polar space of rank $N$ and order $2$. \begin{definition} The space of totally isotropic subspaces\footnote{A totally isotropic subspace is a linear space of $PG(2N-1,2)$ on which the symplectic form vanishes identically.} of $PG(2N-1,2)$ for a nondegenerate symplectic form $\langle,\rangle$ is called the symplectic polar space of rank $N$ and order $2$ and is denoted by $\mathcal{W}(2N-1,2)$. \end{definition} From the previous discussion it is clear that points of $\mathcal{W}(2N-1,2)$ are in correspondence with non-trivial $N$-qubit Pauli operators and linear subspaces of $\mathcal{W}(2N-1,2)$ are sets of mutually commuting operators in $\mathcal{P}_N$ (i.e. contexts). For small values of $N$ the geometry of $\mathcal{W}(2N-1,2)$ has been studied in detail in \cite{SPPH, LHS}. I recall some of their distinguished geometric features and their relations with quantum contextuality. \subsection{The doily, $\mathcal{W}(3,2)$, and its Mermin-Peres grids} For $N=2$ the symplectic polar space $\mathcal{W}(3,2)$ comprises $15$ points (two-qubit non-trivial observables) and $15$ lines (contexts). The structure of $\mathcal{W}(3,2)$, with its points labelled by canonical ($s=+1$) representatives of $\mathcal{P}_2$, is illustrated in Figure \ref{fig:doily}. \begin{figure}[!h] \includegraphics[width=6cm]{doily.pdf} \caption{The symplectic polar space $\mathcal{W}(3,2)$, aka the doily. As a point-line geometry, the doily is a $15_3$-configuration: $15$ lines, $15$ points, $3$ points per line and $3$ lines through each point. 
Out of 245 342 non-isomorphic $15_3$-configurations, $\mathcal{W}(3,2)$ is the only one that is triangle free.}\label{fig:doily} \end{figure} Mermin-Peres magic squares live inside the doily as subgeometries \cite{SPPH}. More precisely they are geometric hyperplanes of the doily, i.e. subsets such that any line of the geometry is either fully contained in the subset, or intersects the subset at only one point. The Mermin-Peres magic square of Figure \ref{fig:mermin}, embedded in $\mathcal{W}(3,2)$, is reproduced in Figure \ref{fig:doilymermin}. \begin{figure}[!h] \includegraphics[width=6cm]{mermin2.pdf} \caption{A Mermin-Peres magic square sitting in $\mathcal{W}(3,2)$: 9 more grids can be obtained from this canonical labeling. The fact that $\mathcal{W}(3,2)$ is triangle-free implies that all grids furnish an operator-based proof of KS Theorem \cite{HS}.}\label{fig:doilymermin} \end{figure} An alternative way of defining the Mermin-Peres squares as subgeometries of $\mathcal{W}(3,2)$ is to consider hyperbolic quadrics. Consider a non-degenerate quadratic form defined by \begin{equation}\label{eq:q0} \mathcal{Q}_0(x)=x_1x_3+x_2x_4, \text{ with } x=[x_1:x_2:x_3:x_4]\in \mathcal{W}(3,2). \end{equation} Such a quadric is said to be hyperbolic, \cite{VL,LHS}, and the zero locus of $\mathcal{Q}_0$, to be denoted by $\mathcal{Q}_0^{+}(3,2)$, comprises all $9$ symmetric two-qubit observables, i.e. all operators with an even number of $Y$'s. In other words $\mathcal{Q}_0^{+}(3,2)$ is the Mermin-Peres square of Figure \ref{fig:mermin} and \ref{fig:doilymermin}. To obtain the remaining nine Mermin-Peres squares, one can look at the following hyperbolic quadrics: \begin{equation}\label{eq:qp} \mathcal{Q}_p^{+}(3,2)=\{x\in \mathcal{W}(3,2), \mathcal{Q}_p(x):= \mathcal{Q}_0(x)+\langle p,x\rangle=0, \text{ with } \mathcal{Q}_0(p)=0\}. \end{equation} In terms of operators, $\mathcal{Q}_p^{+}(3,2)$ accommodates those two-qubit observables that are either symmetric and commute with $\mathcal{O}_p$ or skew-symmetric and anti-commute with $\mathcal{O}_p$. \subsection{$\mathcal{W}(5,2)$ and its hyperbolic quadrics} For $N=3$ the symplectic polar space contains $63$ points (non-trival three-qubit observables), $315$ lines ($1$-dimensional linear contexts) and $135$ Fano plane ($2$-dimensional linear contexts). An example of a three-qubit Fano plane of $\mathcal{W}(5,2)$ is reproduced in Figure \ref{fig:fano}. Subgeometries of $\mathcal{W}(5,2)$ have been studied in \cite{LHS} and a full description of the hyperplanes of $\mathcal{W}(2N-1,2)$ has been carried out in \cite{VL}. Hyperbolic quadrics, for instance, form one class of hyperplanes in $\mathcal{W}(5,2)$ and their definition is a straightforward generalization\footnote{The generalization is in fact, also straightforward for the general $N$, see \cite{VL}.} of Eq. (\ref{eq:q0}) and Eq. (\ref{eq:qp}). The quadric corresponding to the symmetric three-qubit observables is \begin{equation} \mathcal{Q}_0^{+}(5,2)=\{x\in \mathcal{W}(5,2), \mathcal{Q}_0(x):=x_1x_4+x_2x_5+x_3x_6=0\}. \end{equation} $35$ additional quadrics can be defined as, \begin{equation} \mathcal{Q}_p^{+}(5,2)=\{x\in \mathcal{W}(5,2), \mathcal{Q}_p(x):=\mathcal{Q}_0(x)+\langle p,x\rangle=0, \text{ with } \mathcal{Q}_0(p)=0\}. \end{equation} A hyperbolic quadric in $\mathcal{W}(5,2)$ consists of $35$ points, $105$ lines and $15$ Fano planes. A combinatorial description of the hyperbolic quadrics is provided in \cite{SZ}. 
It is, for example, known that the $35$ points of $\mathcal{Q}_p^+(5,2)$ split into $15+20$, where the first set forms a doily inside $\mathcal{Q}_p^+(5,2)$ while the $20$ off-doily points make ten complementary pairs. Through each point of such a pair pass nine lines, and those nine lines intersect the doily in a grid (the same grid for the two points of a pair). One, therefore, has a partition of the $105$ lines of a hyperbolic quadric into $90=9\times 10$ off-doily lines (the $9$ lines from the $10$ pairs of points) plus $15$ lines of the doily. Those geometric properties will be useful in the next section to discuss the conditions that can be satisfied by a NCHV theory for the configuration given by $\mathcal{Q}_0 ^{+}(5,2)$. \begin{remark} Small contextual configurations can be found in $\mathcal{W}(5,2)$ including, as already mentioned, copies of Mermin-Peres squares but also Mermin's pentagrams (configurations featuring $10$ observables and $5$ contexts). It has been proved that there are $12 096$ distinguished Mermin's pentagrams in $\mathcal{W}(5,2)$ \cite{PSH,LS}. This number is remarkably equal to the order of the automorphism group of the smallest Split Cayley hexagon, a notable $63_3$ configuration that can be embedded into $\mathcal{W}(5,2)$ (see the conclusion and \cite{PSH,LSVP} for more details). \end{remark} \begin{figure}[!h] \includegraphics[width=6cm]{fano.pdf} \caption{An example of a Fano plane in $\mathcal{W}(5,2)$. This Fano plane is a negative context in the sense that the product of all operators gives $-I_{8}$. It contains seven one-dimensional linear contexts (the lines). The three lines meeting at $YYY$ are negative.}\label{fig:fano} \end{figure} \section{Measuring the contextuality of $\mathcal{W}(2N-1,2)$ with the IBM Quantum Experience}\label{sec:ibm} \subsection{Macroscopic state-independent inequalities} Let us consider a subgeometry $\mathcal{G}$ of $\mathcal{W}(2N-1,2)$, i.e. a set of points ($N$-qubit observables) and lines (contexts made of three observables). The inequalities, built from $\mathcal{G}$, that we tested on the IBM Quantum Experience are of the following form: \begin{equation}\label{eq:rio1} \chi_\mathcal{G}=\sum_{i=1} ^S \langle \mathcal{C}_i\rangle-\sum_{i=S+1} ^M \langle \mathcal{C}'_i\rangle \leq b_\mathcal{G}, \end{equation} where $\langle\mathcal{C}_i\rangle$ denotes the mean value of the product of three observables measured sequentially on the same positive context (i.e. a positive line of $\mathcal{G}$) while $\langle\mathcal{C}'_i\rangle$ denotes the mean value of the product of the three observables measured sequentially on a negative context (i.e. a negative line) of $\mathcal{G}$. The configuration is, therefore, made of $M$ contexts with $S$ positive ones and $M-S$ negative ones. The upper bound $b_\mathcal{G}$ takes different values if we assume that the results of the measurements can be explained by a NCHV theory or if we consider the prediction of Quantum Mechanics (QM). More precisely, we have, as shown in \cite{C}: \begin{equation}\label{eq:rio2} b_{\mathcal{G}}^{NCHV}= 2P-M\ \ \ \ b_\mathcal{G}^{QM}= M, \end{equation} where $P$ is the maximum number of predictions that can be satisfied by a NCHV theory. Clearly $P\geq S$, as it is always possible to assign $+1$ to all observables as predefined values, so that all positive constraints are satisfied. The Mermin-Peres square configurations, $\mathcal{Q}_{p}^{+}(3,2)$, can be used to provide simple examples of inequalities of the type given by Eq. (\ref{eq:rio1}). 
In this case one has $b_{\mathcal{Q}_p^{+}(3,2)}^{NCHV}=4$ and $b_{\mathcal{Q}_p^{+}(3,2)}^{QM}=6$. In \cite{DRLB} it is $\chi_{\mathcal{Q}_{p}^{+}(3,2)}$ for $p=XX$ that was tested on the IBM Quantum Experience, and the comparison that the authors make of their experimental results to a general mixture of results obtained from a NCHV theory is equivalent to comparing their measured value of $\chi^{\text{exp}}_{\mathcal{Q}_p^{+}(3,2)}$ with $b_{\mathcal{Q}_p^{+}(3,2)}^{NCHV}$. Ad\'an Cabello studied in \cite{C} the robustness of Eq. (\ref{eq:rio1}) in order to find inequalities that would outperform the ones produced by Mermin-Peres squares. He proved, by introducing the tolerated error per correlation, $\varepsilon=\dfrac{b_\mathcal{G} ^{QM}-b_\mathcal{G} ^{NCHV}}{M}$, that the most robust inequalities are given by considering all possible contexts made of three observables in $\mathcal{P}_N$. In this case one has $P=S$, and in our geometric language Cabello's result can be rephrased by saying that the best inequalities of type Eq. (\ref{eq:rio1}) to test quantum contextuality are $\chi_{\mathcal{W}(2N-1,2)}$. These are the inequalities that we tested on the IBM Quantum Experience. \subsection{Results of the experiments} The IBM Quantum Experience is an online platform launched by IBM in 2016 which gives access to NISQCs with $5$ to $16$ qubits \cite{ibmq}. A graphical interface allows the user to easily generate quantum circuits and run them on different backends\footnote{The different quantum computers available on the IBM Quantum Experience have names of type 'ibmq\_athens', 'ibmq\_vigo', 'ibmq\_santiago'. The machines differ from each other by their connectivity architecture and robustness of gates, see \cite{ibmq}.}. An open-source software development kit, Qiskit \cite{qiskit}, is also available to create and run programs on the machines of the IBM Quantum Experience. Due to the large number of measurements to perform, I used Qiskit to generate all possible measurements and send them to the IBM Quantum Experience. All the Qiskit codes and tables of the numerical results obtained are available at \url{https://quantcert.github.io/Testing_contextuality}. Since 2016 several researchers have been using the IBM Quantum Experience as an experimental platform to launch quantum computations, and the reliability of the machines has constantly increased since then \cite{A,GM,Lee,Li,SSBP,H,RBBP,DRLB}. \subsection{Measuring a context} We follow the strategy of \cite{DRLB}, save for a few variations explicitly indicated in the text. The first two constraints are the following: \begin{itemize} \item the measurement performed for each observable should be nondestructive in order to get a sequential measurement. This can be achieved by introducing an auxiliary qubit; \item the IBM Quantum Experience only allows measurements in the $Z$-basis. Thus rotations prior to the measurements in the $X$ and $Y$ bases have to be made. \end{itemize} Figure \ref{fig:XYZ} illustrates how to measure on the IBM Quantum Experience the observable $XYZ$ on an auxiliary qubit. The outcome measured on qubit $\#3$ is the product of the observed eigenvalues of the three observables. 
\begin{figure}[!h] \[ \Qcircuit @C=1em @R=.7em { \lstick{\ket{q_0}} &\gate{H} & \ctrl{3} & \gate{H} & \qw & \qw & \qw & \qw & \qw\\ \lstick{\ket{q_1}} &\qw & \qw & \gate{S^\dagger} & \gate{H} & \ctrl{2} & \gate{H} & \gate{S} &\qw\\ \lstick{\ket{q_2}} &\qw & \qw & \qw & \qw & \qw & \ctrl{1} & \qw &\qw \\ \lstick{\ket{q_3}=\ket{0}} &\qw & \targ & \qw & \qw & \targ & \targ & \qw& \meter } \] \caption{Quantum circuit measuring the observable $XYZ$ on the auxiliary qubit $\# 3$. The rotation (resp. $H$ and $S^\dagger T$) prior to the $CNOT$ gates corresponds to a change of basis (resp. $X$ and $Y$). In order to anticipate the sequential measurement that will follow, the counter-rotation is applied after the $CNOT$ gates.}\label{fig:XYZ} \end{figure} The sequential measurements of three observables on one context will be obtained by the concatenation of three circuits similar to Figure \ref{fig:XYZ}, like in \cite{DRLB}. However, contrary to the circuits generated in \cite{DRLB}, I will: \begin{itemize} \item not make any circuit optimization before sending the calculation to the IBM Quantum Experience. The main reason being that for the case $N=3$, I had to measure $315$ contexts and could not optimize each circuit one by one; \item add barriers between each observable measurement to force the IBM Quantum Experience to not optimize and simplify the calculation. Indeed once the calculation is sent to the IBM Quantum Experience, a {\em transpilation} step is performed to transform the initial circuit to a circuit which respects the connectivity of the actual quantum machine\footnote{The different quantum machines have different topology in terms of connectivity. This means that two-qubit gates, like $CNOT$, cannot be always directly implemented between two qubits as indicated in the program. The transpilation process translates the initial program to an equivalent circuit, which respects the machine connectivity \cite{ibmq}.}. This transpilation phase may sometimes simplify the computation too much. The addition of barriers forces the machine to perform the three sequential measurements one after the other. \end{itemize} Figure \ref{fig:contextmeasurement} shows the sequential measurement of a typical three-qubit context as it is sent to the IBM Quantum Experience by our program. \begin{figure}[!h] \begin{center} \[\resizebox{.9\linewidth}{!}{ \Qcircuit @C=1em @R=.7em {&\gate{H} & \ctrl{3} & \gate{H} & \qw & \qw & \qw & \qw&\qw\barrier{3} & \qw & \gate{H} & \ctrl{3} & \gate{H} &\qw\barrier{3} & \qw &\qw & \qw & \qw &\qw &\qw &\qw&\qw &\qw\barrier{3} &\qw \\ &\qw & \qw & \gate{S^\dagger} & \gate{H} & \ctrl{2} & \gate{H} & \gate{S} &\qw & \qw &\qw & \qw &\qw & \qw & \qw & \ctrl{2} & \qw & \qw &\qw &\qw &\qw& \qw &\qw &\qw \\ &\qw & \qw & \qw & \qw & \qw & \ctrl{1} & \qw &\qw &\qw&\qw & \qw &\qw & \qw & \qw & \qw &\qw & \gate{S^\dagger} & \gate{H} & \ctrl{1} & \gate{H} & \gate{S} &\qw &\qw\\ &\qw & \targ & \qw & \qw & \targ & \targ & \qw& \qw &\qw&\qw & \targ &\qw & \qw & \qw &\targ &\qw &\qw& \qw &\targ & \qw & \qw & \qw &\qw& \meter}} \] \end{center} \caption{Sequential measurement of $XYZ-XII-IZY$: The product of the three eigenvalues corresponding to the three measurements is obtained on the auxiliary qubit $\#4$.}\label{fig:contextmeasurement} \end{figure} Both new constrains make the calculation more fragile as it increases the noise by adding non necessary gates (like the two Hadamard gates on the first qubit of Figure \ref{fig:contextmeasurement}). 
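For concreteness, a minimal Qiskit sketch of such a sequential context measurement could look as follows. This is only an illustration of the procedure described above (the actual scripts are available at the repository cited in the previous subsection); the helper names, the choice of the \texttt{qasm\_simulator} backend and the use of \texttt{Aer}/\texttt{execute} (the Qiskit interface available at the time of the experiments) are mine.
\begin{verbatim}
from qiskit import QuantumCircuit, Aer, execute

def append_observable(qc, word, ancilla):
    """Indirect measurement of one Pauli word (e.g. 'XYZ') onto the ancilla:
    change of basis, CNOT towards the ancilla, inverse change of basis."""
    for qubit, letter in enumerate(word):
        if letter == 'X':
            qc.h(qubit); qc.cx(qubit, ancilla); qc.h(qubit)
        elif letter == 'Y':
            qc.sdg(qubit); qc.h(qubit); qc.cx(qubit, ancilla)
            qc.h(qubit); qc.s(qubit)
        elif letter == 'Z':
            qc.cx(qubit, ancilla)
        # letter == 'I': nothing to do

def context_circuit(context, n_qubits=3):
    """Sequential measurement of a context such as ('XYZ', 'XII', 'IZY');
    the ancilla accumulates the product of the three eigenvalues."""
    anc = n_qubits
    qc = QuantumCircuit(n_qubits + 1, 1)
    for word in context:
        append_observable(qc, word, anc)
        qc.barrier()          # keep the three blocks strictly sequential
    qc.measure(anc, 0)
    return qc

qc = context_circuit(('XYZ', 'XII', 'IZY'))
backend = Aer.get_backend('qasm_simulator')        # or an IBM Q backend
counts = execute(qc, backend, shots=8192,
                 optimization_level=0).result().get_counts()
mean_value = (counts.get('0', 0) - counts.get('1', 0)) / 8192   # estimate of <C_i>
\end{verbatim}
The barriers prevent the transpiler from cancelling the inverse rotations of consecutive blocks, at the price of the extra, and noisier, gates mentioned above.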
However, due to the robustness of $\chi_{\mathcal{W}(3,2)}$ and $\chi_{\mathcal{W}(5,2)}$, these constraints will not be an obstacle to test quantum contextuality. \subsection{Results for $\chi_{\mathcal{W}(3,2)}$} I first tested the inequality corresponding to the doily. Our program consists of two main parts. One script generates all points and lines\footnote{with signs given by the canonical labeling by two-qubit observables.} of $\mathcal{W}(3,2)$ following the prescriptions of Section \ref{sec:symplectic}, and the second part generates with Qiskit all circuits corresponding to each context to be measured; then the calculation is sent to the IBM Quantum Experience, which sends back the results of the different measurements. In order to obtain significant statistics, each measurement is performed $8192$ times, which is the maximum number of shots allowed by the IBM Quantum Experience. All scripts are available at \url{https://quantcert.github.io/Testing_contextuality}. The numerical outcomes are reproduced in Table \ref{tab:doily} for an experiment conducted on January 4, 2021, on the backend ibmq\_casablanca\footnote{We conducted our experiments between November 17, 2020, and January 5, 2021, on different backends: ibmq\_valencia, ibmq\_santiago, ibmq\_yorktown. We always obtained violation of the NCHV inequalities except with the backend ibmq\_rome. The backend ibmq\_casablanca, accessible with an ibm-q-research account, provided the best results.}. Each context $\mathcal{C}_i$ is considered as a dichotomic experiment with two outcomes $\{+1,-1\}$; the standard deviation is therefore calculated as $\sigma_{\mathcal{C}_i}=\sqrt{\dfrac{p(1-p)}{n_\text{exp}}}$, where $p$ is the probability of measuring $+1$ and $n_{\text{exp}}$ is the number of measurements (shots) performed on the context $\mathcal{C}_i$. Based on the numerical values of Table \ref{tab:doily}, one gets \begin{equation} \chi_{\mathcal{W}(3,2)} ^{\text{exp}}=12.27 \end{equation} with total standard deviation $\sigma_\chi=0.012$. Therefore, the classical bound $b^{NCHV}=9$ is violated by more than $270$ standard deviations, i.e. more than $10$ times the violation obtained for $\mathcal{G}=\mathcal{Q}_p^{+}(3,2)$ as given in \cite{DRLB}. \begin{table}[!h] \begin{tabular}{|c|c|c|c|c|} \hline Context $\mathcal{C}_i$ & $\#(+1)$ & $\#(-1)$ & $\langle \mathcal{C}_i\rangle$ & std \\ \hline $IX-XI-XX$ & $7337$ & $855$ & $0.7913$ & $0.0034$\\ $IX-YI-YX$ & $7283$ & $909$ & $0.7781$ & $0.0035$\\ $IX-ZI-ZX$ & $7213$ & $979$ & $0.7610$ & $0.0036$\\ $IY-XI-XY$ & $7511$ & $681$ & $0.8337$ & $0.0031$\\ $IY-YI-YY$ & $7465$ & $727$ & $0.8225$ & $0.0031$\\ $IY-ZI-ZY$ & $7275$ & $917$ & $0.7761$ & $0.0035$\\ $IZ-XI-XZ$ & $7644$ & $548$ & $0.8662$ & $0.0035$\\ $IZ-YI-YZ$ & $7713$ & $479$ & $0.8831$ & $0.0028$\\ $IZ-ZI-ZZ$ & $7822$ & $370$ & $0.9097$ & $0.0026$\\ $XX-YY-ZZ$ & $735$ & $7457$ & $-0.8206$ & $0.0023$\\ $XX-YZ-ZY$ & $7567$ & $625$ & $0.8474$ & $0.0032$\\ $XY-YX-ZZ$ & $7552$ & $640$ & $0.8437$ & $0.0029$\\ $XY-YZ-ZX$ & $968$ & $7224$ & $-0.7637$ & $0.0030$\\ $XZ-YX-ZY$ & $1054$ & $7138$ & $-0.7427$ & $0.0036$\\ $XZ-YY-ZX$ & $7499$ & $693$ & $0.8308$ & $0.0037$\\ \hline \end{tabular} \caption{Numerical values obtained to measure $\chi_{\mathcal{W}(3,2)}$ on January $4$, $2021$ on the IBM Quantum Experience, ibmq\_casablanca. 
Each measurement was repeated $8192$ times.}\label{tab:doily} \end{table} \subsection{Results for $\chi_{\mathcal{W}(5,2)}$} As anticipated in \cite{C}, the violation is even more pronounced for the three-qubit ($N=3$) symplectic polar space $\mathcal{W}(5,2)$. The calculation involves the measurement of $315$ contexts, which are generated and measured by two Python-Qiskit scripts, like those for the $\mathcal{W}(3,2)$ case. On January 4, 2021, I obtained on the ibmq\_casablanca backend the following value: \begin{equation} \chi_{\mathcal{W}(5,2)}^{\text{exp}}=236.57 \end{equation} with standard deviation $\sigma_{\chi_{\mathcal{W}(5,2)}^{\text{exp}}}=0.064$. Thus the NCHV bound, $b^{NCHV}=135$, is violated by more than $1500$ standard deviations. I also calculated $\chi_{\mathcal{Q}_0^{+}(5,2)}$ because, as explained in Section \ref{sec:symplectic}, hyperbolic quadrics are natural generalizations of Mermin-Peres grids for all $N$. To obtain the upper bound $b_{\mathcal{Q}_0^{+}(5,2)}^{NCHV}$, one needs to calculate the maximum number of predictions that can be achieved with a NCHV theory. Using the canonical labelling, a hyperbolic quadric contains either $78$ (resp. $27$) or $66$ (resp. $39$) positive contexts (resp. negative ones). It is thus clear that $P\geq 78$. Now the $105=90+15$ splitting of the lines of $\mathcal{Q}_0^{+}(5,2)$, where all $90$ off-doily lines can be partitioned into $10$ groups of nine intersecting the doily in its $10$ Mermin-Peres grids, shows that any NCHV theory that could satisfy the prediction of one of the $27$ negative lines would make it possible to have a NCHV theory satisfying more constraints for some of the $10$ grids. But this is not possible, and therefore one may conclude that for hyperbolic quadrics $P=78$ and thus $b_{\mathcal{Q}_0^{+}(5,2)}^{NCHV}=51$. On January 4, 2021, I obtained the following experimental value on the backend ibmq\_casablanca: \begin{equation} \chi_{\mathcal{Q}_0^{+}(5,2)}^{\text{exp}}=82.17, \end{equation} \noindent with standard deviation $\sigma=0.035$, corresponding to a violation of the NCHV bound by $890$ standard deviations. Details of our results for $\mathcal{W}(5,2)$ and $\mathcal{Q}_0^+(5,2)$ are available at \url{https://quantcert.github.io/Testing_contextuality}. \section{Conclusion}\label{sec:conclusion} In this note, I tested on the IBM Quantum Experience the inequalities of \cite{C} to detect quantum contextuality based on configurations of two-qubit and three-qubit observables. The measurements performed follow the experiments of \cite{DRLB}, where the famous Mermin-Peres square configuration was successfully tested. The results show that, despite the current imperfections of the IBM quantum machines, the bounds predicted by NCHV theories are strongly violated. Our first calculation, regarding a configuration of two-qubit observables, involves the measurement of $15$ different contexts, and the second experiment, in the three-qubit case, entails $315$ contexts. I also used these programs to test subgeometries in the three-qubit case. Interestingly, the easy access to online NISQCs opens up new paths for both research and scientific education. As I tried to emphasize in this note, the studies of quantum contextuality based on finite geometric configurations \cite{HS,SP,T,HOS,PS,LS} can now be translated into experiments on real quantum machines. In particular, it would be very exciting to see the symmetries of some configurations manifesting in the results of some quantum experiments. 
For instance in the three-qubit case, the Split Cayley hexagon of order $2$ is a configuration made of the $63$ observables of $\mathcal{W}(5,2)$ and only $63$ contexts which form a generalized hexagon \cite{LSVP}. The automorphism group of this configuration is $\text{SL}(7,2)$, the quotient of the Weyl group $W(E_7)$ by $\mathbb{Z}_2$, and the connection with the Mermin-pentagrams contextual configurations was studied from the geometrical perspective. Now, it is possible to check the contextuality of the Split Cayley hexagon by the same techniques as developed here. I believe this is not only scientifically interesting, but it also emphasizes the strong interdisciplinary dimension of quantum information, which is also very valuable from a training perspective. \section*{Acknowledgment} This work was supported by the French Investissements d'Avenir programme, project ISITE-BFC (contract ANR-15-IDEX-03). I acknowledge the use of the IBM Quantum Experience and the IBMQ-research program. The views expressed are those of the author and do not reflect the official policy or position of IBM or the IBM Quantum Experience team. One would like to thank the developers of the open-source framework Qiskit as well as Metod Saniga for his comments on an earlier version of the paper and Ad\'an Cabello for explaining us the ``Rio Negro'' inequalities during his stay in Besan\c{c}on for the IQUINS meeting in 2017.
\section{Introduction} Let $C$ be a \emph{convex body}, i.e. a convex and compact set of $\mathbb R^n$. Let $\mathcal K^n$ be the set of all convex bodies. Let $|C|$ be the \emph{$n$-dimensional volume} (or Lebesgue measure) of $C$. The classical Hermite-Hadamard inequality (proven independenty by Hermite 1881 and Hadamard 1893 in the 1-dimensional case), is a direct consequence of the Jensen's inequality (see \cite{J}). See \cite{DP} (and \cite{CalCar} or \cite{St}) and the references on it for other historical considerations and a comprehensive and complete view of this type of inequalities. It states that for any $C\in\mathcal K^n$ and $f:C\rightarrow[0,\infty)$ concave, then \begin{equation}\label{eq:HH_Jensen} \frac{1}{|C|}\int_Cf(x)dx\leq f\left(x_C\right), \end{equation} where $x_C=\frac{1}{|C|}\int_Cxdx$ is the \emph{centroid} of $C$. Very recently in \cite{GoMe}, it has been generalized in $n$-dimensions for $0$-symmetric sets. More precisely, if $C\in\mathcal K^n$ with $C=-C$, $f:C\rightarrow[0,\infty)$ is concave, and $\phi:[0,\infty)\rightarrow[0,\infty)$ is a convex function with $\phi(0)=0$, then \begin{equation}\label{eq:Gener_HH_0-symm} \frac{1}{|C|}\int_C\phi(f(x))dx\leq\frac{1}{2}\int_{-1}^1\phi(f(0)(1+t))dt. \end{equation} The mean value of a function measured in $C$ (the left-term in \eqref{eq:HH_Jensen}) has repeatedly appeared during the development of different topics of Analysis and Geometry (cf.~\cite{HLP}). Berwald \cite[Sect.~7]{Ber} studied monotonicity relations of $L_p$ means of concave functions over convex compact domains. He showed that for any $C\in\mathcal K^n$ and $f:C\rightarrow[0,\infty)$ concave, then \[ t\rightarrow\left(\frac{{n+t\choose n}}{|C|}\int_Cf(x)^t\right)^\frac{1}{t} \] is decreasing for $t>0$ (see \cite{GZ} for an extension to $t>-1$, see also \cite{ABG} and \cite[Sect.~7]{AAGJV} for a translation of it). It represents a reverse result to the Jensen's inequality \cite{J}, i.e. \[ t\rightarrow\left(\frac{1}{|C|}\int_Cf(x)^t\right)^\frac{1}{t}, \] which is increasing for $t>0$. Borell \cite{Bor} did a step further by showing some convexity relations in the same regard (Thms.~1 and 2). See also Milman and Pajor \cite[2.6]{MP2} and \cite[Sect.~5]{GNT}, or the Hardy-Littlewood Maximal Function (cf.~\cite{Me}). A natural question that arises from \eqref{eq:Gener_HH_0-symm} is to seek for possible extensions to the case of not necessarily symmetric $C$. We can find partial answers in the literature. For instance, Milman and Pajor \cite[Lem. 1.1]{MP} proved that if $f:\mathbb R^n\rightarrow[0,\infty)$ is an integrable \emph{log-concave function} (i.e. such that $\log f$ is concave) and $\mu:\mathbb R^n\rightarrow[0,\infty)$ is a probability measure, then \begin{equation}\label{eq:MP-Ineq} \int_{\mathbb R^n}f(x)d\mu(x)\leq f\left(\int_{\mathbb R^n}x\frac{f(x)}{\int_{\mathbb R^n}f(z)d\mu(z)}d\mu(x)\right). \end{equation} This inequality is independent of \eqref{eq:Gener_HH_0-symm}. However, certain situations can be covered by the first (resp. second) one and not by the second (resp. first), for instance, when $C$ is not symmetric and $\phi(f)$ is log-concave (resp. when $C$ is 0-symmetric but $\phi(f)$ is not log-concave). As an example of the first, let $C$ be non-symmetric and $\phi(t)=t^\alpha$, for some $\alpha\geq 1$, since then $\phi(f)=f^\alpha$ is always log-concave. As an example of the second, let $C$ be 0-symmetric and $\phi(t)=e^{t^2}$, since then $\phi(f)=e^{f^2}$ is in general not log-concave. 
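To make the last dichotomy explicit (this is a routine verification, added here only for illustration): if $\phi(t)=t^\alpha$ with $\alpha\geq1$, then
\[
\log\bigl(\phi(f(x))\bigr)=\alpha\log f(x)
\]
is concave whenever $f$ is concave and non-negative, so $\phi(f)=f^\alpha$ is always log-concave, regardless of any symmetry of $C$; on the other hand, for $\phi(t)=e^{t^2}$ one has
\[
\log\bigl(\phi(f(x))\bigr)=f(x)^2,
\]
which already fails to be concave for the affine function $f(x)=x$ on $C=[0,1]$, since $x\mapsto x^2$ is strictly convex.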
For a set $X\subset\mathbb R^n$, its \emph{convex hull}, $\mathrm{conv}(X)$, is the smallest convex set containing $X$. Let $B_n$ be the \emph{Euclidean unit ball}, and let $\kappa_n=|B_n|$. Let $\mathrm{Gr}(i,\mathbb R^n)$ be the set of all \emph{$i$-dimensional linear subspaces} contained in $\mathbb R^n$. Let $e_1,\dots,e_n$ be the \emph{canonical basis} of $\mathbb R^n$. A \emph{generalized truncated cone} is a convex body of the form $C=\mathrm{conv}(C_0\cup(x_0+\rho C_0))$, for some $C_0\in\mathcal K^n$, $C_0\subset H$ for some $H\in\mathrm{Gr}(n-1,\mathbb R^n)$, some $x_0\in \mathbb R^n$ and $\rho\geq 0$. We say that $C_0$ and $x_0+\rho C_0$ are the \emph{bases} of $C$. A \emph{generalized cone} is a generalized truncated cone with $\rho=0$ (i.e. one of its bases is a vertex). Our main contribution in this work is to investigate a generalization of \eqref{eq:Gener_HH_0-symm} which covers those missing cases described above. Moreover, it represents an inequality of the type \eqref{eq:MP-Ineq} for bounded domains. To do so, we make use of a \emph{new point}, $x_{C,f}$, which is associated to each $C\in\mathcal K^n$ and $f:C\rightarrow[0,\infty)$ concave. Since its definition is rather technical, we refer, for its definition, to Section \ref{sec:Main_result_Proofs} and in particular to Lemma \ref{lem:ExistenceOfx_C}. \begin{thm}\label{thm:General_Result} Let $C\in\mathcal K^n$, $f:C\rightarrow[0,\infty)$ be concave, $\phi:[0,\infty)\rightarrow[0,\infty)$ be convex, with $\phi(0)=0$. Then \[ \int_C\phi(f(x))dx \leq \max_{R}\int_R\phi(g(x))dx, \] where the maximum ranges over all generalized truncated cones $R$ with $|R|=|C|$, and $g$ is an affine function with $g(x_{R,g})=f(x_{C,f})$ and which is zero in one of the bases of $R$. Moreover, if $\phi$ is strictly convex, then equality holds if and only if $C$ is a certain generalized truncated cone, and $f$ is an affine function with value zero at some of the bases of $C$. \end{thm} Notice that the maximum above can be computed via Fubini's formula (once assuming after a rotation of $R$ and a change of variables that $R=\mathrm{conv}(\{(r_m (B_n\cap H))\cup(e_1+(r_m+m)(B_n\cap H))\})$, where $H=\mathrm{span}(\{e_2,\dots,e_n\})$) so that it equals \begin{equation}\label{eq:reduced_formula} \max_{m\in[-m_0,m_0]}\int_0^1\phi\left(\frac{f(x_{C,f})t}{t_m}\right)\kappa_{n-1}(r_m+mt)^{n-1}dt, \end{equation} where $m_0=(cn/\kappa_{n-1})^{1/(n-1)}$, $r_m$ is a solution to the equation $mnc=\kappa_{n-1}((r_m+m)^{n}-r_m^n)$, $t_m=(((r_m^n+(r_m+m)^n)/2)^{1/n}-r_m)/m$ and $c=|C|$ (see Remark \ref{rmk:concrete_computation} and Section \ref{sec:Higher_dim_case} for details). In some particular cases, we can find the truncated cone and the affine function that attains the maximum in Theorem \ref{thm:General_Result}. \begin{thm}\label{thm:2_dim_case} Let $C\in\mathcal K^2$, $f:C\rightarrow[0,\infty)$ be concave, and let $\alpha\geq 1$. Then \[ \frac{1}{|C|}\int_Cf(x)^{\alpha}dx \leq \frac{2}{(\alpha+1)(\alpha+2)}\left(\frac{\sqrt{2}}{\sqrt{2}-1}\right)^\alpha f(x_{C,f})^\alpha. \] Moreover, equality holds if and only if $C$ is a triangle and $f$ is an affine function which is equal to $0$ in one of the edges of $C$. \end{thm} Observe that Theorem \ref{thm:2_dim_case} applied to $\phi(t)=t$ already gives back a sharp inequality that \emph{does not} recover the classical Hermite-Hadamard inequality \eqref{eq:HH_Jensen} (in contrast to \cite[Thm. 1.2]{GoMe} that recovers it). For any $X\subset\mathbb R^n$, its \emph{affine hull} (resp. 
\emph{linear hull}), $\mathrm{aff}(X)$ (resp. $\mathrm{span}(X)$), is the smallest affine (resp. linear) subspace containing $X$. The \emph{dimension} of $X$, $\mathrm{dim}(X)$, is the dimension of $\mathrm{aff}(X)$. For any $C\in\mathcal K^n$ and $H\in\mathrm{Gr}(i,\mathbb R^n)$, we denote by $P_HC$ the \emph{orthogonal projection of $C$ onto $H$}. Let $X^\bot$ be the \emph{orthogonal subspace} to $X$. Moreover, if $C\in\mathcal K^n$ with $\mathrm{dim}(C)<n$, then $|C|$ denotes the volume of $C$ computed inside the ambient space $\mathrm{aff}(C)$. Using the same techniques as in \cite[Thm. 1.1]{GoMe}, Theorem \ref{thm:2_dim_case} gives us an upper bound of the volume of $K$, for general $K\in\mathcal K^n$, in terms of the volumes of sections and projections of $K$ with respect to orthogonal subspaces. \begin{thm}\label{thm:volume_sections} Let $K\in\mathcal K^n$ and $H\in\mathrm{Gr}(2,\mathbb R^n)$. Then \[ |K|\leq \frac{2}{n(n-1)}\left(\frac{\sqrt{2}}{\sqrt{2}-1}\right)^{n-2}|P_HK||K\cap(x_{P_HK,f}+H^\bot)|, \] where $f(x)=|K\cap(x+H^\bot)|^\frac{1}{n-2}$. Moreover, equality holds if and only if there exist a segment $S\subset H$, a point $x_0\in H\setminus\mathrm{aff}(S)$, and $Q\in\mathcal K^n$, $Q\subset H^\bot$, such that \[ K=\{(1-\lambda)a+\lambda x_0+g((1-\lambda)a+\lambda x_0)+\lambda Q:a\in S,\,\lambda\in[0,1]\}, \] for some linear function $g:H\rightarrow\mathbb R^n$. \end{thm} Spingarn \cite{S} and later Milman and Pajor \cite{MP} proved that replacing above $x_{P_HK,f}$ by $x_K$ allows one to remove the constant in the right-hand side term (cf. also \cite[Thm. 1.1]{GoMe} or \cite{Fr} for other results of this type). If we apply Theorem \ref{thm:General_Result} in the planar case to $\phi(t)=e^t-1$, we obtain an upper bound for log-concave functions (see also \eqref{eq:MP-Ineq}). We compute the exact constant for particular values of $f(x_{C,f})=f_0$, since the study of the underlying monotonicity seems to be intractable and requires numerical approximations. \begin{thm}\label{thm:Exp_2_dim_case} Let $C\in\mathcal K^2$ and $f:C\rightarrow[0,\infty)$ be concave. Then \[ \frac{1}{|C|}\int_Ce^{\frac{f(x)}{f(x_{C,f})}}dx \leq \sqrt{2}(\sqrt{2}-1)\left(\frac{e^{\frac{\sqrt{2}}{\sqrt{2}-1}}}{\sqrt{2}}-1\right). \] Moreover, equality holds if and only if $C$ is a triangle and $f$ is an affine function which becomes zero in one of the edges of $C$. \end{thm} The technique developed by Milman and Pajor in \cite{MP} cannot be applied if $\phi(f(x))$ is not a log-concave function. For instance, the next theorem is obtained as a result of applying Theorem \ref{thm:General_Result} in the planar case with $\phi(t)=e^{t^2}-1$. \begin{thm}\label{thm:PhiConvNonLogConcave} Let $C\in\mathcal K^2$, $f:C\rightarrow[0,\infty)$ be concave. Then \[ \begin{split} \frac{1}{|C|} & \int_Ce^{\frac{f(x)^2}{f(x_{C,f})^2}}dx \leq \frac{\sqrt{\pi}}{2}\mathrm{erfi}\left(\frac{\sqrt{2}}{\sqrt{2}-1}\right) \\ &+ \frac{1}{1-2\sqrt{2}}\left(\frac{e^{\left(\frac{\sqrt{2}}{\sqrt{2}-1}\right)^2}-1}{\left(\frac{\sqrt{2}}{\sqrt{2}-1}\right)^2}-\frac{\sqrt{\pi}}{2}\mathrm{erfi}\left(\frac{\sqrt{2}}{\sqrt{2}-1}\right)\right), \end{split} \] where $\mathrm{erfi}(x)=-i\mathrm{erf}(ix)=\frac{2}{\sqrt{\pi}}\int_0^xe^{t^2}dt$ is the imaginary error function (in Wolfram Language) and $\mathrm{erf}(x)=(2/\sqrt{\pi})\int_0^{x}e^{-t^2}dt$ is the usual error function. Moreover, equality holds if and only if $C$ is a triangle and $f$ is an affine function which has value $0$ in one of the edges of $C$. 
\end{thm} We briefly show other instances of Theorem \ref{thm:General_Result} in higher dimensions. However, and due to the technical difficulties to solve the underlying optimization problem, we have just considered here the corresponding Hermite-Hadamard inequality in the 3-dimensional case (see Section \ref{sec:Higher_dim_case} for further discussions on this). \begin{thm}\label{thm:HH-3-dim-case} Let $C\in\mathcal K^3$ and $f:C\rightarrow[0,\infty)$ be concave. Then \[ \frac{1}{|C|}\int_Cf(x)dx \leq \frac{3\cdot 2^{1/3}}{2^{1/3}-1}f(x_{C,f}). \] Moreover, equality holds if and only if $C$ is a generalized cone and $f$ is an affine function which becomes zero at the base of $C$. \end{thm} After deriving the solutions of Theorems \ref{thm:2_dim_case}, \ref{thm:Exp_2_dim_case}, \ref{thm:PhiConvNonLogConcave} and \ref{thm:HH-3-dim-case}, it is natural to conjecture that the maximum in the right-hand side in Theorem \ref{thm:General_Result} is attained when $C$ is a cone, and $f$ is an affine function becoming $0$ at its base, thus yielding the following \emph{general result}: \begin{conj} Let $C\in\mathcal K^n$, $f:C\rightarrow[0,\infty)$ be concave, and $\phi:[0,\infty)\rightarrow[0,\infty)$ be convex with $\phi(0)=0$. Then \[ \frac{1}{|C|}\int_C\phi(f(x))dx \leq n\int_0^1\phi\left(f(x_{C,f})\frac{2^{1/n}t}{2^{1/n}-1}\right)(1-t)^{n-1}dt. \] Moreover, if $\phi$ is strictly convex, equality holds if and only if $C$ is a generalized cone, and $f$ is an affine function which becomes zero at the base of $C$. \end{conj} The paper is organized as follows. Section \ref{sec:Main_result_Proofs} is devoted to the proof of Theorem \ref{thm:General_Result}. Besides, we show also some key lemmas necessary to do it. For instance, in Lemma \ref{lem:ExistenceOfx_C} we show the existence of the point $x_{C,f}$, which is somewhat crucial for Theorem \ref{thm:General_Result}. Afterwards in Section \ref{sec:PlanarCase}, we compute the maximum established in Theorem \ref{thm:General_Result} when $C$ is a planar set, for particular cases of $\phi(t)$ (see Theorems \ref{thm:2_dim_case}, \ref{thm:Exp_2_dim_case}, \ref{thm:PhiConvNonLogConcave}). As a consequence of Theorem \ref{thm:2_dim_case}, we show Theorem \ref{thm:volume_sections}. Finally in Section \ref{sec:Higher_dim_case} we explore a little bit what happens in dimension $3$ or greater in Theorem \ref{thm:General_Result}. \section{Proof of the main result}\label{sec:Main_result_Proofs} Let us start this section by recalling that the \emph{Schwarz symmetrization} of $K\in\mathcal K^n$ with respect to $\mathrm{span}(u)$, $u\in\mathbb R^n\setminus\{0\}$, is the set \[ \sigma_u(K)=\bigcup_{t\in\mathbb R}\left(tu+r_t(B^n_2\cap u^\bot)\right), \] where $r_t\geq 0$ is such that $|K\cap(tu+u^\bot)|=r_t^{n-1}\kappa_{n-1}$, whenever $K\cap(tu+u^\bot)\neq\emptyset$, and $0$ otherwise. It is well-known that $\sigma_u(K)\in\mathcal K^n$ and that $|\sigma_u(K)|=|K|$ (cf.~\cite[Section 9.3]{Gru} or \cite{Sch} for more details). For every $K\in\mathcal K^n$ and $x\in\mathbb R^n\setminus\{0\}$, the \emph{support function} of $K$ at $x$ is defined by $h(K,x)=\sup\{x_1y_1+\cdots+x_ny_n:y\in K\}$. Let $K,C\in\mathcal K^n$. The \emph{Minkowski addition} of $K$ and $C$ is defined as $K+C=\{x+y:x\in K,\,y\in C\}$. The \emph{Brunn-Minkowski inequality} asserts that \[ |(1-\lambda)K+\lambda C|^{\frac{1}{n}}\geq(1-\lambda)|K|^\frac{1}{n}+\lambda|C|^\frac{1}{n}, \] for every $\lambda\in[0,1]$. 
Moreover, equality holds if $K$ and $C$ are obtained one from each other by a homothety or if $K$ and $C$ are contained in parallel hyperplanes (see \cite{Ga} and the references therein for an insightful and complete study of this inequality). Our first result shows that if the Schwarz symmetrization of $C\in\mathcal K^n$ is a truncated cone, then $C$ is a generalized truncated cone. It will be needed for the equality cases of Theorem \ref{thm:General_Result}. \begin{lemma}\label{lem:ConeGenCone} Let $C\in\mathcal K^n$ be such that \[ \sigma_{e_1}(C)=\mathrm{conv}((B_n\cap H)\cup(e_1+\rho(B_n\cap H))), \] where $H=\mathrm{span}(\{e_2,\dots,e_n\})$ and $\rho\geq 0$. Then $C=\mathrm{conv}(C_0\cup(e_1+u+\rho C_0))$, for some $u\in H$ and $C_0\in\mathcal K^n$ with $C_0\subset H$, i.e. $C$ is a generalized truncated cone. \end{lemma} \begin{proof} Let $C'=\sigma_{e_1}(C)$, and let $M_t=C\cap\{(t,x_2,\dots,x_n)\in\mathbb R^n\}$ and $M'_t=C'\cap\{(t,x_2,\dots,x_n)\in\mathbb R^n\}$ for every $t\in[0,1]$. Then $|M_t|=|M'_t|$ for every $t\in[0,1]$. On the one hand, it is very simple to check that $(1-\lambda)M_0'+\lambda M_1'=M'_\lambda$ for every $\lambda\in[0,1]$. On the other hand, the convexity of $C$ ensures that $(1-\lambda)M_0+\lambda M_1\subset M_\lambda$. Using the Brunn-Minkowski inequality over subspaces parallel to $H$, then \[ \begin{split} |M'_\lambda|^{\frac{1}{n-1}} & = |M_\lambda|^{\frac{1}{n-1}} \\ & \geq (1-\lambda)|M_0|^\frac{1}{n-1}+\lambda|M_1|^\frac{1}{n-1} \\ & =(1-\lambda)|M'_0|^\frac{1}{n-1}+\lambda|M'_1|^\frac{1}{n-1} = |M'_\lambda|^\frac{1}{n-1}. \end{split} \] Therefore, we have equality in the Brunn-Minkowski inequality, and thus the sets $M_\lambda$ are homothetic for every $\lambda\in[0,1]$. Let $C_0:=M_0$ and, since $|M_1|=\rho^{n-1}|M_0|$, we have $M_1=e_1+u+\rho C_0$, for some $u\in H$. Finally, observing again that \[ \begin{split} (1-\lambda)M_0+\lambda M_1 & =(1-\lambda) C_0+\lambda(e_1+u+\rho C_0) \\ & =\lambda e_1+\lambda u + ((1-\lambda)+\lambda \rho)C_0 \subset M_\lambda, \end{split} \] and since $|\lambda e_1+\lambda u + ((1-\lambda)+\lambda \rho)C_0|=|M_\lambda|$, then $M_\lambda=\lambda e_1+\lambda u + ((1-\lambda)+\lambda \rho)C_0$ for every $\lambda\in[0,1]$, concluding the proof. \end{proof} The next lemma is a crucial step for the proof of Theorem \ref{thm:General_Result}. We associate here, to every set $C'$ rotationally symmetric with respect to a line, a truncated cone $R$ with the same symmetry, with very special properties on the distribution of mass of the set $R\setminus C'$. \begin{lemma}\label{lem:TrunConeEqualVolumes} Let $C'\in\mathcal K^n$ be rotationally symmetric w.r.t. $\mathrm{span}(\{e_1\})$. Then there exists a truncated cone $R$ with bases orthogonal to $\mathrm{span}(\{e_1\})$, rotationally symmetric w.r.t. $\mathrm{span}(\{e_1\})$, and such that \begin{enumerate} \item[i)] $|R|=|C'|$, \item[ii)] $h(R,-e_1)=h(C',-e_1)=t_0$, $h(R,e_1)=h(C',e_1)=t_1$ and \item[iii)] if $M_t'=\{(t,x_2,\dots,x_n)\in C'\}$, $M''_t=\{(t,x_2,\dots,x_n)\in R\}$, then $M_t'\subset M_t''$ if and only if $t\in[t_0,t_0^*]\cup[t_1^*,t_1]$ and $M_t''\subset M_t'$ if and only if $t\in[t_0^*,t_1^*]$, for some $t_0\leq t_0^*\leq t_1^*\leq t_1$ with \[ \begin{split} & |(R\setminus C')\cap\{(t,x_2,\dots,x_n)\in\mathbb R^n:t\in[t_0,t_0^*]\}|\\ & =|(R\setminus C')\cap\{(t,x_2,\dots,x_n)\in\mathbb R^n:t\in[t_1^*,t_1]\}|=\frac{|R\setminus C'|}{2}. 
\end{split} \] \end{enumerate} \end{lemma} \begin{proof} Let us define $v_{C'}(t)=\max\{v\geq 0:te_1+ve_2\in C'\}$, for every $t\in[t_0,t_1]$. Now, for every $m\in\mathbb R$, we define the truncated cone $R_m$ with bases orthogonal to $\mathrm{span}(e_1)$ and whose centers belong to this line. We choose its $v_{R_m}(t)=r_m+m(t-t_0)$, $t\in[t_0,t_1]$, where $r_m\geq 0$ is chosen such that $|R_m|=|C|$. It is clear that the slope $m$ ranges in $m\in[-m_0,m_0]$, where $m_0$ is given by the solution to \[ c=|C'|=\int_{t_0}^{t_1}\kappa_{n-1}(m_0(t-t_0))^{n-1}dt=\frac{\kappa_{n-1}(t_1-t_0)^n m_0^{n-1}}{n}, \] i.e. $m_0=(cn/(\kappa_{n-1}(t_1-t_0)^n))^{1/(n-1)}$. Notice that on the one hand, since $C'$ is convex then $v_{C'}$ is concave, and on the other hand $v_{R_m}$ is the graph of a line, for every $m$. Since $c=|R_m|$, it is clear that $v_{C'}$ and $v_{R_m}$ touch in at least one point for $t\in[t_0,t_1]$ (otherwise, $R_m$ or $C'$ would be contained in the other set, thus having that $|C'|<|R_m|$ or $|R_m|<|C'|$, contradicting the fact that $c=|R_m|$). Moreover, since $R_{m_0}$ is a cone with vertex at $t=t_0$, by continuity arguments there exists $t^{m_0}_*\in[t_0,t_1]$ such that $v_{R_{m_0}}(t)\leq v_{C'}(t)$ for $t\in[t_0,t^{m_0}_*]$ and $v_{R_{m_0}}(t)\geq v_{C'}(t)$ for $t\in[t^{m_0}_*,t_1]$ (see Figure \ref{fig:Fig1}). \begin{figure} \centering \includegraphics[width=10cm]{Fig1.png} \caption{The function $v_{C'}$ is the graph of $C'$ whereas $v_{R_{m_0}}$ is the one of the cone $R_{m_0}$.} \label{fig:Fig1} \end{figure} Analogously, there exists $t^{-m_0}_*\in[t_0,t_1]$ such that $v_{R_{-m_0}}(t)\geq v_{C'}(t)$ for $t\in[t_0,t^{-m_0}_*]$ and $v_{R_{-m_0}}(t)\leq v_{C'}(t)$ for $t\in[t^{-m_0}_*,t_1]$. Equivalently, when $m=m_0$, $M''_t\subset M'_t$ for $t\in[t_0,t^{m_0}_*]$, and $M'_t\subset M''_t$ otherwise, and when $m=-m_0$, $M'_t\subset M''_t$ for $t\in[t_0,t^{-m_0}_*]$, and $M''_t\subset M'_t$ for $t\in[t^{-m_0}_*,t_1]$. Once $m$ starts increasing from $-m_0$ towards $m_0$, while $|R_m|=c$, by continuity arguments there exist a maximal interval $[m_1,m_2]$ with $-m_0\leq m_1<m_2\leq m_0$, such that $v_{C'}$ and $v_{R_m}$ intersect in two points, $t^m_0,t^m_1$ with $t_0\leq t^m_0\leq t^m_1\leq t_1$, for every $m\in[m_1,m_2]$ (see Figure \ref{fig:Fig2}). \begin{figure} \centering \includegraphics[width=10cm]{Fig2.png} \caption{$v_{R_{m_1}}$ and $v_{R_{m_2}}$ are the truncated cones with largest and smallest slopes $m_1$ and $m_2$ such that $v_{R_{m_i}}$ and $v_{C'}$ intersect in two points in $[t_0,t_1]$. Notice that $A_0(m)$ (resp. $A_1(m)$) coincides with the area between $v_{R_m}$ and $v_{C'}$ in $t\in[t_0,t^m_0]$ (resp. $t\in[t^m_1,t_1]$).} \label{fig:Fig2} \end{figure} Moreover, since both $R_m$ and $t_0^m$ change continuously on $m$, $A_0(m)=|(R_m\setminus C')\cap\{(t,e_2,\dots,e_n):t\in[t_0,t_0^m]\}|$ is continuous, ranging from $A_0(-m_0)=|R_{-m_0}\setminus C'|$ to $A_0(m_0)=0$ (see Figure \ref{fig:Fig2}). Analogously, $A_1(m)=|(R_m\setminus C')\cap\{(t,e_2,\dots,e_n):t\in[t_1^m,t_1]\}|$ is continuous too, ranging from $A_1(-m_0)=0$ to $A_1(m_0)=|R_{m_0}\setminus C'|$. Since $v_{C'}(t)\leq v_{R_m}(t)$ if and only if $t\in[t_0,t_0^m]\cup[t_1^m,t_1]$ then \begin{equation}\label{eq:volume_split} \begin{split} |(R_m\setminus C')\cap\{(t,e_2,\dots,e_n):t\in[t_0,t_0^m]\}\cup (R_m\setminus C')\cap\{(t,e_2,\dots,e_n):t\in[t_1^m,t_1]\}| & \\ =|R_m\setminus C'|& \end{split} \end{equation} for every $m\in[m_1,m_2]$. 
By the Bolzano theorem, there exists $m_{**}\in[m_1,m_2]$ such that \[ \begin{split} & |(R_{m_{**}}\setminus C')\cap\{(t,e_2,\dots,e_n):t\in[t_0,t_0^{m_{**}}]\}| = A_0(m_{**})=A_1(m_{**}) \\ & =|(R_{m_{**}}\setminus C')\cap\{(t,e_2,\dots,e_n):t\in[t_1^{m_{**}},t_1]\}|, \end{split} \] and by \eqref{eq:volume_split} these two quantities are also equal to $|R_{m_{**}}\setminus C'|/2$. Choosing $R=R_{m_{**}}$ concludes the lemma. \end{proof} The \emph{graph} of $f:\mathbb R^n\rightarrow[0,\infty)$ is defined by $G(f)=\{(x,f(x)):x\in\mathbb R^n\}$. The \emph{epigraph} of a concave function $f:\mathbb R^n\rightarrow[0,\infty)$, $\mathrm{epi}(f)=\{(x,\mu)\in\mathbb R^n\times\mathbb R:\mu\leq f(x)\}$, is a convex set in $\mathbb R^{n+1}$. If $C\in\mathcal K^n$, let $\partial C$ be the \emph{boundary} of $C$. Below we provide a very simple observation, which was essentially proven in \cite{GoMe}, and which establishes an interesting property for affine functions bounding a concave function from above. \begin{lemma}\label{lem:AffineGoverF} Let $C\in\mathcal K^n$ and $f:C\rightarrow[0,\infty)$ be concave. If $x_0\in \mathrm{int}(C)$, then there exists an affine function $g:C\rightarrow[0,\infty)$ with $g(x)\geq f(x)$ for $x\in C$ and $g(x_0)=f(x_0)$. Moreover, there exists $L\in\mathrm{Gr}(n-1,\mathbb R^n)$ (which after a rotation of $C$ we may assume to coincide with $L=\mathrm{span}(\{e_2,\dots,e_n\})$) such that \[ (x_0,f(x_0))+L \subset G(g)\cap((x_0,f(x_0))+\mathrm{span}(\{e_1,\dots,e_n\})). \] \end{lemma} \begin{proof} Since $(x_0,f(x_0))\in\partial(\mathrm{epi}(f))$, there exists an affine function $g$ such that $H=\mathrm{G}(g)$ supports $\mathrm{epi}(f)$ at $(x_0,f(x_0))$. In particular, $f(x)\leq g(x)$ for $x\in C$ and $f(x_0)=g(x_0)$. Since $H$ and $(x_0,f(x_0))+\mathrm{span}(\{e_1,\dots,e_n\})$ are hyperplanes in $\mathbb R^{n+1}$, they intersect in a subspace of dimension at least $n-1$, i.e. there exists a subspace $L\in\mathrm{Gr}(n-1,\mathbb R^n)$ such that \[ (x_0,f(x_0))+L\times\{0\}\subset H\cap((x_0,f(x_0))+\mathrm{span}(\{e_1,\dots,e_n\})), \] as desired. Rotating $C$ appropriately, we can suppose that $L=\mathrm{span}(\{e_2,\dots,e_n\})$, concluding the proof. \end{proof} The lemma below is also a crucial step in the proof of Theorem \ref{thm:General_Result}. Given a set $C$ and a concave function $f$ with domain $C$, we show the existence of a point $x_{C,f}$ with very special properties with respect to the mass distribution of the truncated cone $R$ (given in Lemma \ref{lem:TrunConeEqualVolumes}) around $x_{C,f}$ associated to the Schwarz symmetrization of $C$ with respect to the line given in Lemma \ref{lem:AffineGoverF}. \begin{lemma}\label{lem:ExistenceOfx_C} Let $C\in\mathcal K^n$ and $f:C\rightarrow[0,\infty)$ be concave. Then there exists a point $x_{C,f}\in C$ such that (after rotating $C$ as in Lemma \ref{lem:AffineGoverF}) the point $P_{\mathrm{span}(e_1)}(x_{C,f})=(t_{C,f},0,\dots,0)$ fulfills \[ \int_{t_0}^{t_{C,f}}|M''_t|dt=\int^{t_1}_{t_{C,f}}|M''_t|dt=\frac{|R|}{2}, \] where $R$ is the truncated cone given in Lemma \ref{lem:TrunConeEqualVolumes} with respect to $C'=\sigma_{e_1}(C)$ and $t_0$, $t_1$ and $M''_t$ are defined as in Lemma \ref{lem:TrunConeEqualVolumes}. \end{lemma} \begin{proof} Let us fix a point $x_0\in C$. 
By Lemma \ref{lem:AffineGoverF}, there exists $L\in\mathrm{Gr}(n-1,\mathbb R^n)$ (after a rotation of $C$, assume that $L=\mathrm{span}(\{e_2,\dots,e_n\})$) such that $(x_0,f(x_0))+L\times\{0\} \subset H\cap (x_0,f(x_0))+\mathrm{span}(\{e_1,\dots,e_n\})$, where $H$ is a supporting hyperplane to $\mathrm{epi}(f)$ at $(x_0,f(x_0))$. Lemma \ref{lem:TrunConeEqualVolumes} gives us, for this choice of $L$, and for $C':=\sigma_{e_1}(C)$, a truncated cone $R$. Let $(t_R,0,\dots,0)\in R$ be the point such that \begin{equation}\label{eq:HalvingVolumeR} \int_{t_0}^{t_R}|M''_t|dt=\int_{t_R}^{t_1}|M''_t|dt=\frac{|R|}{2}. \end{equation} where we are adopting all the notation from Lemma \ref{lem:TrunConeEqualVolumes}. Let $s_*\in[t_0,t_1]$ be such that $P_{e_1}(x_0+s_*e_1)=(t_R,0,\dots,0)$. Let $K_t(f)=\{x\in C:f(x)\geq t\}$, for $t\in[0,\|f\|_\infty]$, be the level sets of $f$. We have now two different cases, either \begin{enumerate} \item[i)] $(x_0+s_*e_1+L)\cap K_{\|f\|_\infty}(f)\neq\emptyset$ or \item[ii)] $(x_0+s_*e_1+L)\cap K_{\|f\|_\infty}(f)=\emptyset$. \end{enumerate} In case of i), let $v\in L$ be such that $x_0+s_*e_1+v\in K_{\|f\|_\infty}(f)$. Then the hyperplane $(x_0+s_*e_1+v,f(x_0+s_*e_1+v))+\mathrm{span}(\{e_1,\dots,e_n\})$ supports $\mathrm{epi}(f)$ at $(x_0+s_*e_1+v,f(x_0+s_*e_1+v))$. Thus, according to Lemma \ref{lem:AffineGoverF}, we can choose $x_{C,f}:=x_0+s_*e_1+v$ with the subspace $L$, which together with \eqref{eq:HalvingVolumeR} concludes the lemma. In case of ii), notice that $(K_{t}(f))_t$ is a continuously decreasing family on $t\in[0,\|f\|_\infty)$, such that $K_{0}(f)=C$. Hence, there exists $t_*\in[0,\|f\|_\infty)$ for which $x_0+s_*e_1+L$ supports $K_{t_*}(f)$, namely, at $x_0+s_*e_1+v$, for some $v\in L$. Hence, we can find a supporting hyperplane $H$ to $\mathrm{epi}(f)$ at $(x_0+s_*e_1+v,f(x_0+s_*e_1+v))$, such that $(x_0+s_*e_1+v,f(x_0+s_*e_1+v))+L\times\{0\}\subset H\cap (x_0+s_*e_1+v,f(x_0+s_*e_1+v))+\mathrm{span}(\{e_1,\dots,e_n\})$. Thus, choosing $x_{C,f}:=x_0+s_*e_1+v$ with the subspace $L$, together with \eqref{eq:HalvingVolumeR} concludes the lemma. \end{proof} Notice that $x_{C,f}$ is in general not unique. Indeed, if $f$ is affine, then we find a whole hyperplane of directions for which any such point would fulfill Lemma \ref{lem:ExistenceOfx_C} as well. \begin{proof}[Proof of Theorem \ref{thm:General_Result}] We start applying Lemma \ref{lem:ExistenceOfx_C} (and thus also lemma \ref{lem:TrunConeEqualVolumes}) to $C$ and $f$, then apply Lemma \ref{lem:AffineGoverF} to $f$ and $x_{C,f}$, and consider the notation defined on them. Let $f_0=f(x_{C,f})=g(x_{C,f})$. Since $\phi$ is convex with $\phi(0)=0$, for any $x_2>x_1>0$ we have that \[ 0\leq\frac{\phi(x_1)-0}{x_1-0}\leq\frac{\phi(x_2)-\phi(x_1)}{x_2-x_1}, \] i.e. $\phi$ is non-decreasing. Using that $f(x)\leq g(x)$ for every $x\in C$ (see Lemma \ref{lem:AffineGoverF}), then \begin{equation}\label{eq:phi_Increasing} \int_C\phi(f(x))dx\leq\int_C\phi(g(x))dx. \end{equation} Since \[ (x_{C,f},f_0)+L\times\{0\} \subset H\cap((x_{C,f},f_0)+\mathrm{span}(\{e_1,\dots,e_n\})), \] where $H=\mathrm{aff}(G(g))$ (see Lemma \ref{lem:AffineGoverF}), then \begin{equation}\label{eq:GisEasy} g(t,e_2,\dots,e_n)=f_0+\delta(t-t_{C,f}) \end{equation} for every $t\in[t_0,t_1]$ and for some $\delta$ such that $f_0+\delta(t_i-t_{C,f})\geq 0$, $i=0,1$, i.e. \begin{equation}\label{eq:deltaInterval} \frac{f_0}{t_{C,f}-t_1}\leq\delta\leq\frac{f_0}{t_{C,f}-t_0}. 
\end{equation} Let $M_t=\{(t,x_2,\dots,x_n)\in C\}$, for every $t\in[t_0,t_1]$. Then \eqref{eq:GisEasy} and Fubini's formula imply that \[ \int_C\phi(g(x))dx = \int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M_t|dt. \] Let us furthermore observe that $g(t,x_2,\dots,x_n)$ has the value \eqref{eq:GisEasy} for every $(t,x_2,\dots,x_n)\in\mathbb R^n$, and thus it gets the same value both in $M_t$ and $M_t'=\{(t,x_2,\dots,x_n)\in C'\}$, where $C'=\sigma_{e_1}(C)$. Moreover, since $|M_t|=|M'_t|$ for every $t\in[t_0,t_1]$, we have that \[ \int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M_t|dt= \int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M'_t|dt. \] We now remember that since $\phi$ is convex, then $\phi(t)+\phi(t')\leq\phi(t-\delta)+\phi(t'+\gamma)$, for every $t-\delta\leq t\leq t'\leq t'+\gamma$, $t,t',\delta,\gamma\in\mathbb R$. Using the notation as in Lemma \ref{lem:TrunConeEqualVolumes}, let $M^*_t=M_t'\cap M_t''$ and $M^{**}_t=(M_t'\setminus M_t'')\cup(M_t''\setminus M_t')$. Since \[ \begin{split} &\int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M'_t|dt = \\ &\int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M^*_t|dt +\int_{t^*_0}^{t^*_1}\phi(f_0+\delta(t-t_{C,f}))|M^{**}_t|dt, \end{split} \] we just need to bound the last integral above. Notice that $\phi(f_0+\delta(t-t_{C,f}))\leq \phi(f_0+\delta(t_{\max}-t_{C,f}))$ for some $t_{\max}\in[t_0^*,t_1^*]$ and every $t\in[t_0^*,t_1^*]$. Analogously, we have that $\phi(f_0+\delta(t'-t_{C,f}))\geq \phi(f_0+\delta(t'_{\min}-t_{C,f}))$ for every $t'\in[t_0,t_0^*]$ and some $t'_{\min}\in[t_0,t_0^*]$, and $\phi(f_0+\delta(t''-t_{C,f}))\geq \phi(f_0+\delta(t''_{\min}-t_{C,f}))$, for every $t''\in[t_1^*,t_1]$ and some $t''_{\min}\in[t_1^*,t_1]$. Moreover, since $|C'|=|R|$, in particular $|C'\setminus R|=|R\setminus C'|$. Using that $t\rightarrow \phi(f_0+\delta(t-t_{C,f}))$ is convex too, then \[ \begin{split} & \int_{t_0^*}^{t_1^*}\phi(f_0+\delta(t-t_{C,f}))|M_t^{**}|dt \\ & \leq \phi(f_0+\delta(t_{\max}-t_{C,f}))\int_{t_0^*}^{t_1^*}|M_t^{**}|dt\\ & =2\phi(f_0+\delta(t_{\max}-t_{C,f}))\frac{|R\setminus C'|}{2}\\ & \leq (\phi(f_0+\delta(t'_{\min}-t_{C,f}))+\phi(f_0+\delta(t''_{\min}-t_{C,f}))) \frac{|R\setminus C'|}{2}\\ &=\phi(f_0+\delta(t'_{\min}-t_{C,f}))\int_{t_0}^{t_0^*}|M_t^{**}|dt +\phi(f_0+\delta(t''_{\min}-t_{C,f}))\int_{t_1^*}^{t_1}|M_t^{**}|dt\\ & \leq \int_{t_0}^{t_0^*}\phi(f_0+\delta(t'-t_{C,f}))|M_t^{**}|dt' +\int_{t_1^*}^{t_1}\phi(f_0+\delta(t''-t_{C,f}))|M_t^{**}|dt''. \end{split} \] Hence, we have proven that \[ \begin{split} &\int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M'_t|dt \leq \int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M^*_t|dt \\ & +\int_{t_0}^{t_0^*}\phi(f_0+\delta(t'-t_{C,f}))|M_t^{**}|dt' +\int_{t_1^*}^{t_1}\phi(f_0+\delta(t''-t_{C,f}))|M_t^{**}|dt''\\ & =\int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M''_t|dt. \end{split} \] In the next (and last) step, there is a dichotomy. Either $\delta\geq 0$ or $\delta\leq 0$. Hence, assume $\delta\geq 0$ (the other case can be proven analogously). Let $\delta_{MAX}=f_0/(t_{C,f}-t_0)$, thus having $\delta\in[0,\delta_{MAX}]$. Using Lemma \ref{lem:ExistenceOfx_C}, and since $\delta\rightarrow\phi(f_0+\delta(t-t_{C,f}))$ is a convex function too, we have that \[ \phi(f_0+\delta(t-t_{C,f}))+\phi(f_0+\delta(t'-t_{C,f})) \leq \phi(f_0+\delta_{MAX}(t-t_{C,f}))+\phi(f_0+\delta_{MAX}(t'-t_{C,f})) \] for every $t\in[t_0,t_{C,f}]$ and every $t'\in[t_{C,f},t_1]$. 
Thus \[ \phi(f_0+\delta(t-t_{C,f})) -\phi(f_0+\delta_{MAX}(t-t_{C,f}))\leq -\phi(f_0+\delta(t'-t_{C,f})) +\phi(f_0+\delta_{MAX}(t'-t_{C,f})) \] for every $t\in[t_0,t_{C,f}]$ and every $t'\in[t_{C,f},t_1]$. By continuity and compactness standard arguments, there exist $t_*\in[t_0,t_{C,f}]$ and $t_*'\in[t_{C,f},t_1]$ such that \[ \phi(f_0+\delta(t-t_{C,f})) -\phi(f_0+\delta_{MAX}(t-t_{C,f}))\leq \phi(f_0+\delta(t_*-t_{C,f})) -\phi(f_0+\delta_{MAX}(t_*-t_{C,f})) \] for every $t\in[t_0,t_{C,f}]$ as well as \[ -\phi(f_0+\delta(t_*'-t_{C,f})) +\phi(f_0+\delta_{MAX}(t_*'-t_{C,f}))\leq -\phi(f_0+\delta(t'-t_{C,f})) +\phi(f_0+\delta_{MAX}(t'-t_{C,f})) \] for every $t'\in[t_{C,f},t_1]$. Therefore \[ \begin{split} & \int_{t_0}^{t_{C,f}}(\phi(f_0+\delta(t-t_{C,f}))- \phi(f_0+\delta_{MAX}(t-t_{C,f})))|M_t''|dt\\ & \leq\int_{t_0}^{t_{C,f}}(\phi(f_0+\delta(t_*-t_{C,f}))- \phi(f_0+\delta_{MAX}(t_*-t_{C,f})))|M_t''|dt \\ & =(\phi(f_0+\delta(t_*-t_{C,f}))- \phi(f_0+\delta_{MAX}(t_*-t_{C,f})))\int_{t_0}^{t_{C,f}}|M_t''|dt\\ & =(\phi(f_0+\delta(t_*-t_{C,f}))- \phi(f_0+\delta_{MAX}(t_*-t_{C,f})))\frac{|R|}{2}\\ & \leq (-\phi(f_0+\delta(t_*'-t_{C,f})) +\phi(f_0+\delta_{MAX}(t_*'-t_{C,f})))\frac{|R|}{2}\\ & =(-\phi(f_0+\delta(t_*'-t_{C,f})) +\phi(f_0+\delta_{MAX}(t_*'-t_{C,f})))\int_{t_{C,f}}^{t_1}|M_{t'}''|dt'\\ & \leq \int_{t_{C,f}}^{t_1}(-\phi(f_0+\delta(t'-t_{C,f})) +\phi(f_0+\delta_{MAX}(t'-t_{C,f})))|M_{t'}''|dt', \end{split} \] i.e. \[ \begin{split} & \int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M''_t|dt\\ & =\int_{t_0}^{t_{C,f}}\phi(f_0+\delta(t-t_{C,f}))|M''_t|dt+ \int_{t_{C,f}}^{t_1}\phi(f_0+\delta(t'-t_{C,f}))|M''_{t'}|dt'\\ & \leq \int_{t_0}^{t_{C,f}}\phi(f_0+\delta_{MAX}(t-t_{C,f}))|M''_t|dt+ \int_{t_{C,f}}^{t_1}\phi(f_0+\delta_{MAX}(t'-t_{C,f}))|M''_{t'}|dt'\\ &=\int_{t_0}^{t_1}\phi(f_0+\delta_{MAX}(t-t_{C,f}))|M''_t|dt, \end{split} \] for every $\delta\in[0,\delta_{MAX}]$. As mentioned above, for every $\delta\in[\delta_{MIN},0]$, one can prove analogously that \[ \int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M''_t|dt \leq \int_{t_0}^{t_1}\phi(f_0+\delta_{MIN}(t-t_{C,f}))|M''_t|dt. \] In other words, \[ \begin{split} & \int_{t_0}^{t_1}\phi(f_0+\delta(t-t_{C,f}))|M''_t|dt\\ & \leq \max_{\delta_*\in\{\delta_{MIN},\delta_{MAX}\}}\int_{t_0}^{t_1}\phi(f_0+\delta_*(t-t_{C,f}))|M''_t|dt. \end{split} \] Therefore, we have shown that \[ \int_C\phi(f(x))dx\leq \max_{1,2}\int_R\phi(g_i(x))dx, \] where $g_1$ and $g_2$ are affine functions becoming zero at one of the bases of $R$ (corresponding with the slopes $\delta_{MAX}$ and $\delta_{MIN}$, respectively) and such that $g_i(x_{C,f})=f(x_{C,f})$, $i=1,2$. Therefore, the general upper bound for the term $\int_C\phi(f(x))dx$ is $\max_{R}\int_R\phi(g_1(x))dx$, where $R$ is a truncated cone with $|R|=|C|$ and where $g_1$ is a affine function with $g_1(x_{C,f})=f(x_{C,f})$ which becomes zero in a base of $R$ (see that now it is unnecessary to mention both $g_1$ and $g_2$, since $R$ covers positive and negative slopes). This concludes the proof. For the equality case, since $\phi$ is strictly convex, then $\phi$ is also strictly increasing. Indeed, if $\phi$ is strictly convex with $\phi(0)=0$, for any $x_2>x_1>0$ we have that \[ 0<\frac{\phi(x_1)-0}{x_1-0}<\frac{\phi(x_2)-\phi(x_1)}{x_2-x_1}, \] i.e. $\phi$ is strictly increasing. This, together with \eqref{eq:phi_Increasing}, shows that $f$ has to be an affine function, more particularly, $f=g$. Moreover, $\sigma_{e_1}(C)$ must be one of the truncated cones attaining the maximum above. 
Moreover, $f$ has to become zero in one of the bases of the truncated cone. Hence, by Lemma \ref{lem:ConeGenCone}, $C$ must be a generalized truncated cone, attaining also the maximum, such that $f$ becomes zero in one of the bases of $C$. \end{proof} Let us finish this section by noting how to derive \eqref{eq:reduced_formula} from the right-hand side of the inequality in Theorem \ref{thm:General_Result}, as well as all the values of the parameters. \begin{rmk}\label{rmk:concrete_computation} After a suitable change of variables, we can assume that $R$ is rotationally symmetric with respect to $\mathrm{span}(e_1)$, that $h(R,-e_1)=0$ and $h(R,e_1)=1$. Then we can express $R=\{(t,x_2,\dots,x_n):t\in[0,1],(x_2,\dots,x_n)\in v_R(t)B^{n-1}_2\}$, where $v_R(t)=\max\{s\geq 0:te_1+se_2\in R\}$. Since $R$ is a truncated cone, then we can parametrize $v_R(t)=r_m+mt$, for some real numbers $r_m$ and $m$ that can be computed. On the one hand, since $c=|C|=|R|$, then the maximum slope $m_0$ such that $m\in[-m_0,m_0]$ is given by the extreme case in which $R$ is a cone, and thus when \[ c=|R|=\int_0^1\kappa_{n-1}(m_0t)^{n-1}dt=\kappa_{n-1}m_0^{n-1}\frac1n. \] On the other hand, if $m\in[-m_0,m_0]$, $r_m$ is computed such that \[ c=|R|=\int_0^1\kappa_{n-1}(r_m+mt)^{n-1}dt=\kappa_{n-1}\left((r_m+m)^n-r_m^n)\right)\frac1{nm}. \] Moreover, assuming that $x_{R,g}=(t_m,0,\dots,0)$ and that $g$ is an affine function becoming $0$ at $t=0$ with $g(x_{R,g})=f(x_{C,f})=f_0$, where $t_m\in[0,1]$ is such that \[ \int_0^{t_m}\kappa_{n-1}(r_m+mt)^{n-1}dt=\int_{t_m}^1\kappa_{n-1}(r_m+mt)^{n-1}dt \] i.e. $(r_m+mt_m)^n-r_m^n=(r_m+m)^n-(r_m+mt_m)^n$. Finally, we conclude that \[ \int_R\phi(g(x))dx=\int_0^1\phi\left(\frac{f_0t}{t_m}\right)\kappa_{n-1}(r_m+mt)^{n-1}dt. \] \end{rmk} \section{Planar case}\label{sec:PlanarCase} Particularizing Theorem \ref{thm:General_Result} to the planar case and using Remark \ref{rmk:concrete_computation}, we easily get that $c=m_0$, $r_m=(c-m)/2$ and $t_m=\frac{-(c-m)+\sqrt{c^2+m^2}}{2m}$. Thus we can write the right-hand side of its inequality as \begin{equation}\label{eq:Part2dimCase} \max_{m\in[-m_0,m_0]}\int_0^1\phi\left(\frac{f_0t}{t_m}\right)2(r_m+mt)dt. \end{equation} where $f_0=f(x_{C,f})$. \begin{proof}[Proof of Theorem \ref{thm:2_dim_case}] Let $c=|C|$ and $f_0=f(x_{C,f})$. Using the observation in \eqref{eq:Part2dimCase}, then Theorem \ref{thm:General_Result} applied to $C$, $f$ and $\phi(t)=t^{\alpha}$ says that \[ \begin{split} \int_Cf(x)^\alpha dx & \leq 2\max_{m\in[-c,c]}\int_0^1\left(f_0\frac{t}{t_m}\right)^\alpha\left(\frac{c-m}{2}+mt\right)dt \\ & =2f_0^\alpha\max_{m\in[-c,c]}\left(\frac{c-m}{2(\alpha+1)}+\frac{m}{\alpha+2}\right)\frac{1}{t_m^\alpha}. \end{split} \] Notice that $t_m=(-(c-m)+\sqrt{c^2+m^2})/(2m)$, thus the maximum above rewrites as \[ \begin{split} cf_0^\alpha\max_{m\in[-c,c]} & \frac{1}{t_m^{\alpha+1}(t_m-1)}\left(\frac{2t_m^2-1}{2(\alpha+1)}+\frac{1-2t_m}{\alpha+2}\right)\\ & =cf_0^\alpha\max_{m\in[-c,c]}\varphi(t_m). \end{split} \] Since \[ \varphi'(t_m)=-\frac{\alpha(2(t_m-2)t_m+1)t_m(\alpha(t_m-1)+2t_m-1)}{2(\alpha+1)(\alpha+2)(t_m-1)^2}, \] thus $\varphi'(t_m)=0$ if and only if $t_m=1\pm 1/\sqrt{2}$ or $t_m=(a+1)/(a+2)$. Since $1/2<(\alpha+1)/(\alpha+2)$ and $\varphi'(1/2)=-2^{\alpha+1}\alpha^2/(\alpha^2+3\alpha+2)<0$ for every $\alpha\geq 1$, thus $\varphi(t_m)$ attains a local minimum at $t_m=(\alpha+1)/(\alpha+2)$. Thus the maximum of $\varphi(t_m)$ is attained either at $t_m=1-1/\sqrt{2}$ or $t_m=1/\sqrt{2}$, i.e. 
\[ cf_0^\alpha\max_{m\in[-c,c]}\varphi(t_m) = cf_0^\alpha\frac{2^{\frac{\alpha}{2}+1}}{\alpha+2}\max\{\frac{1}{(\sqrt{2}-1)^\alpha(\alpha+1)},1\}. \] Since $(\sqrt{2}-1)^\alpha(\alpha+1)$ is strictly decreasing in $\alpha\geq 1$, then the maximum above becomes $\max\{1/(2(\sqrt{2}-1)),1\}=1/(2(\sqrt{2}-1))$, and hence it is always attained at $t_m=1-1/\sqrt{2}$, i.e. at $m=-c$. Therefore \[ \int_Cf(x)^\alpha dx \leq 2cf_0^\alpha\frac{\sqrt{2}^\alpha}{(\sqrt{2}-1)^\alpha(\alpha+1)(\alpha+2)}, \] concluding the result. In the case of equality, notice that the maximum computed above is attained if and only if $m=-c$. Thus, equality holds if and only if $C$ is a (2-dimensional) cone, i.e. a triangle, and in view of the equality cases of Theorem \ref{thm:General_Result}, $f$ is an affine function. Moreover, $f$ has value zero at one of the edges of $C$ since $m=-c$ and we chose above $f(t)=f_0t/t_{-c}$. \end{proof} A direct application of Theorem \ref{thm:2_dim_case} is Theorem \ref{thm:volume_sections}, which now we are able to show. \begin{proof}[Proof of Theorem \ref{thm:volume_sections}] By Fubini's formula, we have that \[ |K|=\int_{P_HK}|K\cap(x+H^\bot)|dx. \] By the Brunn's Concavity Principle (see \cite[Prop.~1.2.1]{Giann}, see also \cite{Ga}) then \[ f:H\rightarrow[0,\infty)\quad\text{where}\quad f(x):=|K\cap(x+H^\bot)|^{\frac{1}{n-2}} \] is a concave function. After a suitable rigid motion, we assume that $H=\mathbb R^2\times\{0\}^{n-2}$. By Theorem \ref{thm:2_dim_case} then \[ \begin{split} \int_{P_HK}f(x)^{n-2}dx & \leq\frac{2}{(n-1)n}\left(\frac{\sqrt{2}}{\sqrt{2}-1}\right)^{n-2}|P_HK|f(x_{P_HK,f})^{n-2}\\ & =\frac{2}{(n-1)n}\left(\frac{\sqrt{2}}{\sqrt{2}-1}\right)^{n-2}|P_HK||K\cap (x_{P_HK,f}+H^\bot)|, \end{split} \] concluding the result. We omit the technical details to the characterization of the equality case. However, it should be done exactly as the equality case in the proof of Theorem 1.1 in \cite{GoMe}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:Exp_2_dim_case}] Let $c=|C|$ and $f_0=f(x_{C,f})$. Using the observation \eqref{eq:Part2dimCase} and applying Theorem \ref{thm:General_Result} to $C$, $f/f_0$ and $\phi(t)=e^t-1$, then \[ \begin{split} \int_Ce^{\frac{f(x)}{f_0}}dx & -|C| = \int_C(e^{\frac{f(x)}{f_0}}-1)dx \\ & \leq \max_{m\in[-c,c]} 2 \int_0^1(e^{\frac{t}{t_m}}-1)\left(\frac{c-m}{2}+mt\right)dt\\ & =\max_{m\in[-c,c]}2t_m\left(e^{\frac{1}{t_m}}\left(\frac{c-m}{2}+mt_m\left(\frac{1}{t_m}-1\right)\right)-\frac{c-m}{2}+mt_m\right)-|C|. \end{split} \] Since $t_m=(-c+m+\sqrt{c^2+m^2})/(2m)$ then $m=c(1-2t_m)/(2t_m(t_m-1))$, and thus \[ \begin{split} \max_{m\in[-c,c]}2t_m\left(e^{\frac{1}{t_m}}\left(\frac{c-m}{2}+mt_m\left(\frac{1}{t_m}-1\right)\right)-\frac{c-m}{2}+mt_m\right) & \\ =\max_{m\in[-c,c]}\frac{c}{t_m-1} \left(e^{\frac{1}{t_m}}\left(\frac{2t_m^2-1}{2}+(1-2t_m)\left(1-t_m\right)\right)\right. & \\ \left.-\frac{2t_m^2-1}{2}+(1-2t_m)t_m\right) & \\ =\max_{m\in[-c,c]}\frac{c}{t_m-1} \left(e^{\frac{1}{t_m}}\left(\frac{2t_m^2-1}{2}+(1-2t_m)\left(1-t_m\right)\right)\right. & \\ \left.-\frac{2t_m^2-1}{2}+(1-2t_m)t_m\right) =\max_{m\in[-c,c]}\varphi(t_m). & \end{split} \] Since $m\in[-c,c]$, then $t_m\in[1-1/\sqrt{2},1/\sqrt{2}]$. It is a tedious computation to check (possibly with use of a software like \emph{Mathematica}) that $\varphi(t_m)$ is strictly decreasing. Indeed \[ \varphi'(t_m)=c\frac{(2(t_m-2)t_m+1)(e^\frac{1}{t_m}(3(t_m-1)t_m+1)-3t_m^2)}{2(t_m-1)^2t_m^2}. 
\] The local extrema of $\varphi(t_m)$, given by $\varphi'(t_m)=0$, are $t_m=1-1/\sqrt{2}$ and the numerical approximations $t_m\approx 0.73487$ and $t_m\approx 1.707106$. Since $\varphi'(1/2)=3-e^2<0$ and $1/\sqrt{2}<0.73487$, we can conclude that $\varphi(t_m)$ is decreasing in $[1-1/\sqrt{2},1/\sqrt{2}]$. Thus its maximum is attained when $t_m=1-1/\sqrt{2}$, i.e. $m=-c$, hence providing the result. For the equality case, we are forced to have $m=-c$, i.e. $C$ is a cone (i.e. a triangle) and, due to the equality cases of Theorem \ref{thm:General_Result}, $f$ is an affine function. Moreover, $f$ has value zero at one of the edges of $C$ since $m=-c$ and we chose above $f(t)=f_0t/t_{-c}$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:PhiConvNonLogConcave}] Let $c=|C|$ and $f_0=f(x_{C,f})$. Using the observation \eqref{eq:Part2dimCase} and applying Theorem \ref{thm:General_Result} to $C$, $f/f_0$ and $\phi(t)=e^{t^2}-1$, then \[ \begin{split} \int_Ce^{\frac{f(x)^2}{f_0^2}}dx-|C|& =\int_C(e^{\frac{f(x)^2}{f_0^2}}-1)dx\\ & \leq \max_{m\in[-c,c]}\int_0^1(e^{\left(\frac{t}{t_m}\right)^2}-1)\left(\frac{c-m}{2}+mt\right)dt\\ & =\max_{m\in[-c,c]}\int_0^1e^{\left(\frac{t}{t_m}\right)^2}\left(\frac{c-m}{2}+mt\right)dt-|C|\\ & = 2\max_{m\in[-c,c]} \left(\frac{c-m}{2}\frac{\sqrt{\pi}}{2}\mathrm{erfi}\left(\frac{1}{t_m}\right)+(mt_m^2)\left(e^{\left(\frac{1}{t_m}\right)^2}-1\right)\right)-|C|. \end{split} \] Since $t_m=(-(c-m)+\sqrt{c^2+m^2})/(2m)$, we get that $m=c(1-2t_m)/(2t_m(t_m-1))$, and thus \[ \begin{split} 2 & \max_{m\in[-c,c]} \left(\frac{c-m}{2}\frac{\sqrt{\pi}}{2}\mathrm{erfi}\left(\frac{1}{t_m}\right)+(mt_m^2)\left(e^{\left(\frac{1}{t_m}\right)^2}-1\right)\right) \\ & =\max_{m\in[-c,c]}\frac{c}{2t_m(t_m-1)}\left(\frac{(2t_m^2-1)\sqrt{\pi}}{4}\mathrm{erfi}\left(\frac{1}{t_m}\right)+(1-2t_m)t_m^2\left(e^{(\frac{1}{t_m})^2}-1\right)\right) \\ & =\max_{m\in[-c,c]}\varphi(t_m). \end{split} \] Since $m\in[-c,c]$, then $t_m\in[1-1/\sqrt{2},1/\sqrt{2}]$, and thus \[ \begin{split} \varphi'(t_m) & = \frac{c}{8(t_m-1)^2t_m^3} \left(\sqrt{\pi}(-2(t_m-1)t_m-1)t_m\mathrm{erfi}\left(\frac1{t_m}\right)+4(2(t_m-2)t_m+1)t_m^3 \right.\\ & \left.-2e^{\frac{1}{t_m^2}}(t_m(2t_m(2t_m((t_m-2)t_m-1)+5)-5)+1)\right). \end{split} \] The local extremum of $\varphi(t_m)$, given by $\varphi'(t_m)=0$, is the numerical approximation $t_m\approx 0.722484$. Since $\varphi'(1/2)=-2(\sqrt{\pi} \mathrm{erfi}(2) + 1 + e^4)<0$, we conclude that $\varphi(t_m)$ is strictly decreasing in $t_m\in[1-1/\sqrt{2},1/\sqrt{2}]$, and thus its maximum is attained at $t_m=1-1/\sqrt{2}$, i.e. $m=-c$, which concludes the result. We now suppose there is equality in the inequality above. First of all, we are forced to have $m=-c$, i.e. $C$ must be a $2$-dimensional cone (i.e. a triangle) and, from the equality case of Theorem \ref{thm:General_Result}, $f$ is an affine function which has value zero at one of the edges of $C$, since we chose above $f(t)=f_0t/t_{-c}$. \end{proof} \section{Higher dimensional cases}\label{sec:Higher_dim_case} Solving the remaining 1-parameter optimization problem in Theorem \ref{thm:General_Result} is, in general, a hard and challenging task. Using Remark \ref{rmk:concrete_computation}, we clearly have $m_0=(cn/\kappa_{n-1})^{1/(n-1)}$. However, the problem arises when trying to compute $r_m$, since it is described as a solution to \[ c = \frac{\kappa_{n-1}}{mn}\left(nmr_{m}^{n-1}+\cdots+nm^{n-1}r_m+m^n\right). \] This is a rather general polynomial equation of degree $n-1$ in the variable $r_m$.
Moreover, $t_m$ is the only point in $[0,1]$ such that \[ (r_m+mt_m)^n-r_m^n=(r_m+m)^n-(r_m+mt_m)^n, \] and thus \[ t_m=\frac{1}{m}\left(\left(\frac{r_m^n+(r_m+m)^n}{2}\right)^{\frac{1}{n}}-r_m\right). \] Hence, using explicit values for $r_m$ and $t_m$ seems to be very technical for $n=4$ and intractable for $n\geq 5$. In the $3$-dimensional case, we would have that $r_m$ is the root \begin{equation}\label{eq:Value_r_m_dim3} r_m=\frac{-m+\sqrt{\frac{4c}{\pi}-\frac{m^2}{3}}}{2}. \end{equation} Moreover, $t_m$ would be given by the formula \begin{equation}\label{eq:Value_t_m_dim3} t_m=\frac{1}{m}\left(\left(\frac{r_m^3+(r_m+m)^3}{2}\right)^{\frac{1}{3}}-r_m\right). \end{equation} \begin{proof}[Proof of Theorem \ref{thm:HH-3-dim-case}] Applying Theorem \ref{thm:General_Result} to $C$, $f$ and $\phi(t)=t$ gives \[ \int_Cf(x)dx\leq \max_{m\in[-m_0,m_0]}\int_0^1\frac{f_0t}{t_m}\pi(r_m+mt)^2dt, \] where $f_0=f(x_{C,f})$, $c=|C|$, $m_0=\sqrt{3c/\pi}$, and $r_m$ and $t_m$ are given in \eqref{eq:Value_r_m_dim3} and \eqref{eq:Value_t_m_dim3}. Therefore \begin{equation}\label{eq:max_dim_3} \begin{split} \int_Cf(x)dx & \leq \pi f_0 \max_{m\in[-m_0,m_0]}\frac{1}{t_m}\int_0^1 t(r_m+mt)^2dt \\ & =\pi f_0 \max_{m\in[-m_0,m_0]} \frac{6r_m^2+8r_mm+3m^2}{t_m}. \end{split} \end{equation} Setting $m=\sqrt{3c/\pi}\,s$ with $s\in[-1,1]$, we have that \[ r_m=\frac{1}{2}\sqrt{\frac c\pi}\left(-\sqrt{3}s+\sqrt{4-s^2}\right)=\frac{1}{2}\sqrt{\frac c\pi}r_{m(s)}, \] and that \[ t_m=\sqrt{\frac c\pi}\left(\left(\frac1{16}r_{m(s)}^3+\frac12\left(\frac12r_{m(s)}+\sqrt{3}s\right)^3\right)^\frac13-\frac12r_{m(s)}\right) =\sqrt{\frac c\pi}t_{m(s)}. \] Substituting in \eqref{eq:max_dim_3} we obtain \[ \max_{s\in[-1,1]}\sqrt{3}\frac{c}{\pi}\frac{s\left(\frac32r_{m(s)}^2+4\sqrt{3}r_{m(s)}s+9s^2\right)}{t_{m(s)}}=\sqrt{3}\frac c\pi \max_{s\in[-1,1]} h(s). \] The function $h(s)$ is decreasing from $s=-1$ to $s\approx 0.52$, increasing from $s\approx 0.52$ to $s=1$, and satisfies $h(-1)>h(1)$ (see Figure \ref{fig:dim_3}), so its maximum over $[-1,1]$ is attained at $s=-1$. Since the expression of $h'(s)$ is rather technical and lengthy, we leave out the details. \begin{figure} \centering \includegraphics[width=6cm]{Function_dim_3.png} \caption{The function $h(s)$, $s\in[-1,1]$, which attains its maximum at $s=-1$.} \label{fig:dim_3} \end{figure} Therefore, the maximum in \eqref{eq:max_dim_3} is attained at $m=-\sqrt{3c/\pi}$, concluding the result. For the equality case, notice that the equality case of Theorem \ref{thm:General_Result}, together with the fact shown above that we must have $m=-\sqrt{3c/\pi}$, implies that $C$ is a generalized cone. Moreover, the function $f$ must be an affine function which becomes zero at the base of $C$ (and not at the vertex of $C$), since we chose above $f(t)=f_0t/t_{-m_0}$. This concludes the proof. \end{proof} \emph{Acknowledgements:} I would like to thank the referee for the useful corrections, which helped in improving the readability of the paper.
2,869,038,156,211
arxiv
\section{Introduction}\label{sec:intro} Stochastic gradient-based optimization is plagued by the presence of numerous hyperparameters. While these can often be set to rule-of-thumb constants or manually-designed schedules, it is also commonly believed that more information about the optimization landscape can enable alternative strategies for which manual tuning has less of an impact on the end result. For instance, curvature information in the form of Hessian matrices or Fisher information can be used to de-sensitize or completely remove the step size parameter~\citep{ypma1995historical,amari1998natural,martens2014new}, and the momentum coefficient can be set to reduce the local gradient variance~\citep{arnold2019reducing}. \begin{wrapfigure}[15]{r}{0.45\linewidth} \vspace{-1em} \centering \includegraphics[width=\linewidth]{plots/2d_trajectory.pdf} \caption{Stochastic gradient eventually goes into diffusion and does not converge. Our filtered gradients offer smooth convergence and complement adaptive step sizes.} \label{fig:rosenbrock2d} \end{wrapfigure} Based on these intuitions, we investigate the use of efficient curvature and variance estimates during training to construct a \emph{self-tuning} optimization framework. Under a Bayesian paradigm, we treat the true gradient as the unobserved state of a dynamical system and seek to automatically infer the true gradient conditioned on the history of parameter updates and stochastic gradient observations. Our method is enabled by evaluations of exact \emph{per-sample} gradients and Hessian-vector products. With recent improvements in automatic differentiation tooling~\citep[{\it e.g.}\@\xspace ,][]{jax2018github,agarwal2019auto,dangel2020backpack}, this matches the asymptotic time cost of minibatch gradient and Hessian-vector product evaluations. While our framework combines the desirable properties of both curvature-based updates and variance reduction---which we confirm in toy and synthetic scenarios---we do not observe significant improvements empirically in optimizing deep neural networks. Notably, our approach can be viewed as an explicit form of the implicit gradient transport of \citet{arnold2019reducing}, yet it does not achieve the same acceleration observed in practice. While we do not fully understand this behavior, we analyze the estimated quantities along the training trajectory and hypothesize that our method has a higher tendency to descend into high-variance, high-curvature regions, whereas standard stochastic gradient descent is repelled from such regions due to gradient variance. This potentially serves as a downside of our method in the deep learning setting. Regardless, the use of efficient variance estimation and the interpretation of gradient estimation as Bayesian filtering are useful constructs in the development of self-tuning stochastic optimization. \section{Bayesian Filtering for Stochastic Gradients}\label{sec:gradfilter} \begin{figure} \centering \input{plots/graph} \caption{Graphical model of the hidden Markov dynamics model. The main idea of our algorithm is that the dynamics parameters can be cheaply estimated on each minibatch, and smoothed across time using exact Kalman filter inference. These dynamics parameters are the gradient variance $\Sigma$, the directional curvature $B\delta$ and its variance $Q$.
We stabilize $\Sigma$ with an exponential moving average, which is effectively another, more elementary form of Kalman filtering.} \label{fig:graph} \end{figure} We consider stochastic optimization problems of the general form \begin{equation} \arg \min_{\theta \in \mathbb{R}^d} f(\theta),\quad f(\theta) = \mathbb{E}_{\xi}\left[\tilde{f} (\theta, \xi)\right] \end{equation} where we only have access to samples $\xi$. Stochastic gradient descent---the prototypical algorithm for this setting---iteratively updates $\theta_{t+1} = \theta_t - \alpha_t g_t$, where \begin{equation} \label{eq:minibatch_stochastic_gradient} g_t = \frac{1}{n} \sum_{i=1}^n \nabla_\theta \tilde{f}(\theta_t, \xi_t^{(i)}), \quad \xi_t^{(1)}, \dotsc, \xi_t^{(n)} \overset{\text{iid}}{\sim} p(\xi), \end{equation} and $\alpha_t$ is a scalar step size. We may use notational shorthands like $f_t = f(\theta_t)$, $\nabla f_t = \nabla f(\theta_t)$. SGD is hampered by the effects of gradient noise. It famously needs a decreasing step size schedule to converge; used with a constant step size, it goes into diffusion in a region around the optimum~\citep[see, {\it e.g.}\@\xspace ,][]{bottou2018optimization}. Gradient noise also makes stochastic optimization algorithms difficult to tune. In particular, unreliable directions are not amenable to step size adaptation. To stabilize update directions, we build a framework for estimating the true gradient $\nabla f$ based on Kalman filtering. This can also be viewed as a variance reduction method, but does not require the typical finite-sum structure assumption of {\it e.g.}\@\xspace \citet{schmidt2017minimizing,johnson2013accelerating}. \subsection{Dynamical System Model} We treat the true gradient $\nabla f_t$ as the latent state of a dynamical system. This dynamical system is comprised of an observation model $p(g_t \;|\; f_t )$ and a dynamics model $p(\nabla f_t \;|\; \nabla f_{t-1}, \delta_{t-1})$ where $\delta_{t-1} = \theta_{t} - \theta_{t-1}$ is the update direction. We will later choose $\delta_t$ to be depend on our variance-reduced gradient estimates, but the gradient inference framework itself is agnostic to the choice of $\delta_t$. \paragraph{The observation model} $p(g_t \;|\; \nabla f_t)$ describes how the gradient observations relate to the state of the dynamical system. In our case, it is relatively straight-forward, since $g_t$ is simply an unbiased stochastic estimate of $\nabla f_t$, but the exact distribution remains to be specified. We make the assumption that $g_t$ follows a Gaussian distribution, \begin{equation} \label{eq:observation_model} g_t \;|\; \nabla f_t \sim \mathcal{N}(\nabla f_t, \Sigma_t), \end{equation} with covariance $\Sigma_t$. Since $g_t$ is the mean of iid terms (Eq.~\ref{eq:minibatch_stochastic_gradient}), this assumption is supported by the central limit theorem when sufficiently large batch sizes are used. \paragraph{The dynamics model} $p(\nabla f_t \;|\; \nabla f_{t-1})$ describes how the gradient evolves between iterations. We base our dynamics model on a first order Taylor expansion of the gradient function \textbf{centered at $\mathit{\boldsymbol\theta_{\mathbf{t}}}$}, $\nabla f(\theta_{t-1}) \approx \nabla f(\theta_t) - \nabla^2 f(\theta_t) \delta_{t-1}$. We propose to approximate the gradient dynamics by computing a stochastic estimate of the Hessian-vector product, $B_t\delta_{t-1}$, where $\mathbb{E} [B_t] = \nabla^2 f(\theta_t)$. Again, we make a Gaussian noise assumption. 
This implies the dynamics model \begin{equation} \label{eq:dynamics_model} \nabla f_t \;|\; \nabla f_{t-1} \sim \mathcal{N}(\nabla f_{t-1} + B_t \delta_{t-1}, Q_{t}). \end{equation} where $Q_t$ is the covariance of $B_t\delta_{t-1}$, taking into account the stochasticity in $B_t$. A key insight is that the parameters $B_t\delta_{t-1}, Q_t, \Sigma_t$ of the model can all be ``observed'' directly using automatic differentiation of the loss on each minibatch of samples. We use the Hessian at $\theta_t$ so that the Hessian-vector product can be simultaneously computed with $g_t$ with just one extra call to automatic differentiation (or ``backward pass'') in each iteration (note this does not require constructing the full matrix $B_t$). The variances $Q_t$ and $\Sigma_t$ can also be empirically estimated with some memory overhead by using auto-vectorized automatic differentiation routines. We discuss implementation details later in Section~\ref{sec:practical_implementation}. \subsection{Filtering Framework for Gradient Inference} As Equations~\eqref{eq:observation_model} and~\eqref{eq:dynamics_model} define a linear-Gaussian dynamical system, exact inference on the true gradient conditioned on the history of gradient observations $p(\nabla f_t | g_{1:t}, \delta_{1:{t-1}})$ takes the form of the well-known Kalman filtering equations~\citep{kalman1960} \citep[review in][]{sarkka2013bayesian}: We define parameters $m_t^-$, $m_t$, $P_t^-$ and $P_t$ such that \begin{equation}\label{eq:grad_posterior} \begin{split} \nabla f_t \mid g_{1:t-1}, \delta_{1:{t-1}} &\sim \mathcal{N} ( m_t^-,\; P_t^- ) \\ \nabla f_t \mid g_{1:t}, \delta_{1:{t-1}} &\sim \mathcal{N} ( m_t,\; P_t ). \end{split} \end{equation} Starting from a prior belief $\nabla f_0 \sim \mathcal{N}(m_0, P_0)$, these parameters are updated iteratively: \begin{align} m_t^- &= m_{t-1} + B_t\delta_{t-1}, & P_t^- &= P_{t-1} + Q_{t-1} \label{eq:kalman_predict} \\ K_t &= P_t^- ( P_t^- + \Sigma_t )^{-1} \label{eq:kalman_gain} & & \\ m_t &= (I - K_t)m_t^- + K_t g_t, & P_t &= (I - K_t)P_t^-(I - K_t)^T + K_t \Sigma_t K_t^T \label{eq:kalman_correct} \end{align} Equation \eqref{eq:kalman_predict} is referred to as the \emph{prediction} step as it computes mean and covariance of the predictive distribution $p(\nabla f_t \vert g_{1:t-1})$. In our setting, it predicts the gradient $\nabla f_t$ based on our estimate of the previous gradient ($m_{t-1}$) and the Hessian-vector product approximating the change in gradient from the step $\theta_t = \theta_{t-1} + \delta_{t-1}$. Equation~\eqref{eq:kalman_correct} is the \emph{correction} step. Here, the local stochastic gradient evaluation $g_t$ is used to correct the prediction. Importantly, the \emph{Kalman gain}~\eqref{eq:kalman_gain} determines the blend between the prediction and the observations according to the uncertainty in each. The resulting algorithm gives an online estimation of the true gradients as the parameters $\theta_t$ are updated. We refer to this framework as \text{\textsc{Meka}}{}, loosely based on \emph{model-based Kalman-adjusted gradient estimation}. During optimization, we may use the posterior mean $m_t$ as a variance-reduced gradient estimator and take steps in the direction of $\delta_t = - \alpha_t m_t$. We note two key insights enabling \text{\textsc{Meka}}{}: First, all parameters of the filter are not set \emph{ad hoc}, but are directly evaluated or estimated using automatic differentiation. Secondly, the dynamics model makes explicit use of the Hessian to predict gradients. 
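To make these two steps concrete, the following is a minimal JAX sketch of a single filtering iteration using the scalar covariance summaries of Section~\ref{sec:practical_implementation}. It assumes a per-example scalar loss over flat (vectorized) parameters; the function and variable names, and the exponential-moving-average treatment of the gradient variance, are illustrative rather than taken from any released implementation.
\begin{verbatim}
import jax
import jax.numpy as jnp

def meka_filter_step(loss, params, batch, m_prev, P_prev, delta_prev,
                     sigma_ema, ema=0.999):
    # Per-example gradients via vmap; loss(params, example) is a scalar.
    grads = jax.vmap(jax.grad(loss), in_axes=(None, 0))(params, batch)
    n = grads.shape[0]
    g_t = grads.mean(axis=0)                    # minibatch gradient g_t
    # Scalar summary of Sigma_t: variance of the minibatch mean, EMA-smoothed.
    sigma_hat = grads.var(axis=0).mean() / n
    sigma_ema = ema * sigma_ema + (1.0 - ema) * sigma_hat

    # Per-example Hessian-vector products B_t delta_{t-1} (forward-over-reverse).
    def hvp(p, example):
        return jax.jvp(jax.grad(lambda q: loss(q, example)),
                       (p,), (delta_prev,))[1]
    hvps = jax.vmap(hvp, in_axes=(None, 0))(params, batch)
    B_delta = hvps.mean(axis=0)
    q_t = hvps.var(axis=0).mean() / n           # scalar summary of Q_t

    # Kalman prediction and correction with scalar covariances.
    m_pred = m_prev + B_delta
    P_pred = P_prev + q_t
    K = P_pred / (P_pred + sigma_ema)           # scalar Kalman gain
    m_t = (1.0 - K) * m_pred + K * g_t
    P_t = (1.0 - K) ** 2 * P_pred + K ** 2 * sigma_ema
    return m_t, P_t, sigma_ema
\end{verbatim}
The returned mean $m_t$ is the variance-reduced gradient estimate; the plain \text{\textsc{Meka}}{} update then sets $\delta_t=-\alpha_t m_t$.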
This prediction-correction scheme is a first-order update. In contrast to second-order methods, like quasi-Newton methods, \text{\textsc{Meka}}{} does not try to estimate the Hessian from gradients, but instead leverages a (noisy) projection with the actual Hessian to improve gradient estimates. This is both cheaper and more robust than second-order methods, because it does not involve solving a linear system. \subsection{\text{\textsc{Adam}}-style Update Directions} While $\text{\textsc{Meka}}$ produces variance-reduced gradient estimates, it does not help with ill-conditioned optimization problems, a case where full batch gradient descent can perform poorly. To alleviate this, we may instead take update directions motivated by the \text{\textsc{AdaGrad}}~\citep{duchi2011adaptive} line of optimizers. We follow \text{\textsc{Adam}}~\citep{kingma2014adam}, which proposes dividing the first moment of the gradient element-wise by the square root of the second moment, to arrive at \begin{equation} \delta_t = -\alpha_t \frac{m_t}{\sqrt{m_t^2 + \text{diag}(P_t)} + \varepsilon}, \end{equation} where the square is taken element-wise and $\varepsilon$ is included for numerical stability and simply set to $10^{-8}$. Whereas $\text{\textsc{Adam}}$ makes use of two exponential moving averages to estimate the first and second moments of $g_t$, we have estimates automatically inferred through the filtering framework. We refer to this variant as \textsc{AdaMeka}. \section{Uncertainty-informed Step Size Selection} We can adopt a similar Bayesian filtering framework for probabilistic step size adaptation. Our step size adaptation will be a simple enhancement to the quadratic rule, but takes into account uncertainty in the stochastic regime and is much more robust to stochastic observations. When the objective $f$ can be computed exactly, the standard quadratic rule is $\alpha_{\textnormal{quadratic}} := \frac{-\delta_t^T \nabla f_t}{\delta_t^T \nabla^2 f_{t-1} \delta_t}$, which is based on minimizing the local quadratic approximation $f(\theta_{t} + \alpha \delta_t) - f_t \approx \alpha\delta_t^T \nabla f_t + \frac{\alpha^2}{2} \delta_t^T \nabla^2 f_{t-1} \delta_t$. However, since we only have access to stochastic estimates of $\nabla f$ and $\nabla^2 f$, na\"ively taking this step size with high variance samples results in unpredictable behavior and can cause divergence during optimization. To compensate for the stochasticity and inaccuracy of a quadratic approximation, adaptive step size approaches often include a ``damping'' term~({\it e.g.}\@\xspace \citet{martens2010hfopt})---where a constant is added to the denominator---and an additional scaling factor on $\alpha_t$, both of which aim to avoid large steps but introduce more hyperparameters. As an alternative, we propose a scheme that uses the variance of the estimates to adapt the step size, only taking steps into regions where we are confident about minimizing the objective function. Once again leveraging the availability of $Q_t$ and $\Sigma_t$, our approach allows an automatic trade-off between minimizing a local quadratic approximation and the uncertainty over large step sizes, foregoing manual tuning methods such as damping. We adopt a similar linear-Gaussian dynamics model for tracking the true objective $f_t$, with the same assumptions as in Section~\ref{sec:gradfilter}; due to this similarity, we delegate the derivations to Appendix~\ref{app:func}. We again define the posterior distribution, \begin{equation} f_t \mid y_{1:t}, \delta_{1:t-1} \sim \mathcal{N}( u_t, s_t ).
\end{equation} where $u_t$ and $s_t$ are inferred using the Kalman update equations. Finally, setting $f_{t+1} = f(\theta_t + \alpha_t\delta_t)$ for some direction $\delta_t$, we have a predictive model of the change in function value as \begin{equation}\label{eq:df_dist} f_{t+1} - f_t \mid y_{1:t}, g_{1:t}, \delta_{1:t} \sim \mathcal{N}\bigg( \alpha_t \delta_t^Tm_t + \frac{\alpha_t^2}{2} \delta_t^TB_t\delta_t, 2s_t + \alpha_t^2 \delta_t^T P_t\delta_t + \frac{\alpha_t^4}{4} \delta_t^T Q_t\delta_t \bigg) \end{equation} Contrasting this with the simple quadratic approximation, the main difference is now we take into account the uncertainty in $f_t$, $\nabla f_t$, and $\nabla^2 f_t$. Each term makes different contributions to the variance as $\alpha_t$ increases, corresponding to different trade-offs between staying near where we are more certain about the function value and exploring regions we believe have a lower function value. Explicitly specifying this trade-off gives an \emph{acquisition function}. These decision rules are typically used in the context of Bayesian optimization~\citep{boreview}, but we adopt their use for step size selection. \subsection{Acquisition Functions for Step Size Selection} Computing the optimal step size in the context of a long but finite sequence of optimization steps is intractable in general, but many reasonable heuristics have been developed. These heuristics usually balance immediate progress against information gathering likely to be useful for later steps. One natural and hyperparameter-free heuristic is maximizing the \emph{probability of improvement} (PI)~\citep{kushner1964new}, \begin{equation}\label{eq:pi} \alpha_\textnormal{PI} := \argmax_\alpha \mathbb{P}\left( f_{t+1} - f_t \leq 0 \mid y_{1:t}, g_{1:t} \right) \end{equation} which is simply the cumulative distribution function of \eqref{eq:df_dist} evaluated at zero. \begin{figure} \centering \begin{subfigure}[b]{0.35\linewidth} \includegraphics[width=\linewidth]{plots/acquisition_fns.pdf} \caption{Positive curvature} \label{fig:negcurv_nolambda} \end{subfigure} \begin{subfigure}[b]{0.35\linewidth} \includegraphics[width=\linewidth]{plots/negative_curv2_lambda.pdf} \caption{Negative curvature} \label{fig:negcurv_lambda} \end{subfigure} \caption{Illustration of different acquisition functions for selecting a step size $\alpha$, based on the mean and variance of our local quadratic estimate of the loss surface.} \label{fig:acquisition_fns} \end{figure} Figure~\ref{fig:acquisition_fns} visualizes the different step sizes chosen by maximizing different acquisition functions. The heuristic of choosing the minimum of the quadratic approximation can be a poor decision when the uncertainty rises quickly. The optimum for PI interpolates between zero and the quadratic minimum in such a way that avoids regions of high uncertainty. Expected improvement~\citep{jones1998efficient} is another popular acquisition function; however, in tests we found it to not be as robust as PI and often results in step sizes that require additional scaling. Maximizing probability of improvement is equivalent to the following optimization problem \begin{equation}\label{eq:pi_loss} \alpha_{\text{PI}} = \argmin_\alpha \frac{-\alpha \delta_t^T m_t + \frac{\alpha^2}{2} \delta_t^TB_t\delta_t}{\sqrt{ 2s_t + \alpha^2\delta_t^TP_t\delta_t + \frac{\alpha^4}{4} \delta_t^TQ_t\delta_t }}. 
\end{equation} We numerically solve for $\alpha_{\text{PI}}$ using Newton's method, which itself is only a small overhead since we only optimize in one variable with fixed constants: no further evaluations of $f$ are required. We also note that there is exactly one optimum for $\alpha \in \mathbb{R}^+$. For optimization problems where negative curvature is a significant concern, we include a third-order correction term that ensures finite and positive step sizes~(details in Appendix~\ref{app:negcurv_lambda}). \section{A Practical Implementation} \label{sec:practical_implementation} While the above derivations have principled motivations and are free of hyperparameters, a practical implementation of \text{\textsc{Meka}}{} is not entirely straightforward. Below we discuss some technical aspects, simplifications and design choices that increase stability in practice, as well as recent software advances that simplify the computation of quantities of interest. \paragraph{Computing Per-Example Quantities for Estimating Variance} Recent extensions for automatic differentiation in the machine learning software stack~\citep{jax2018github,agarwal2019auto,dangel2020backpack} implement an automatic vectorization \texttt{map} function. Vectorizing over minibatch elements allows efficient computation of gradients and Hessian-vector products of neural network parameters with respect to each data sample independently. These advances allow efficient computation of the empirical variances of gradients and Hessian-vector products, and enable our filtering-based approach to gradient estimation. \paragraph{Stabilizing Filter Estimates} Instead of working with the full covariance matrices $\Sigma_t$ and $Q_t$, we approximate them as scalar objects $\sigma_tI$ and $q_tI$, with $\sigma_t,q_t\in\mathbb{R}_+$ by averaging over all dimensions. We have experimented with diagonal matrices, but found that the scalar form increases stability, generally performing better on our benchmarks. Furthermore, we use an exponential moving average for smoothing the estimated gradient variance $\sigma_t$ as well as the adaptive step sizes $\alpha_t$. The coefficients of these exponential moving average are kept at $0.999$ in our experiments and seem to be quite insensitive, with values in $\{0.9, 0.99, 0.999\}$ all performing near identically (see Appendix~\ref{app:beta_test}). \section{Related Work} Designing algorithms that can self-tune its own parameters is a central theme in optimization~\citep{eiben2011parameter,yang2013framework}; we focus on the stochastic setting, building on and merging ideas from several research directions. The Bayesian filtering framework itself has previously been applied to stochastic optimization. To the best of our knowledge, the idea goes back to \citet{bittner2004kalman} who used a filtering approach to devise an automatic stopping criterion for stochastic gradient methods. \citet{Patel_2016} proposed filtering-based optimization methods for large-scale linear regression problems. \citet{vuckovic2018kalman} and \citet{mahsereci2018probabilistic} used Kalman filters on general stochastic optimization problems with the goal of reducing the variance of gradient estimates. In contrast to our work, none of these existing approaches leverage evaluations of Hessian-vector products to give curvature-informed dynamics for the gradient. 
In terms of online variance reduction, \citet{gower2017tracking} have discussed the use of Hessian-vector products to correct the gradient estimate; however, they propose methods that approximate the Hessian whereas we compute exact Hessian-vector products by automatic differentiation. \citet{arnold2019transporting} recently proposed an implicit gradient transport formula analogous to our dynamics model, but they require a rather strong assumption that the Hessian is the same for all samples and parameter values. In contrast, we focus on explicitly transporting via the full Hessian. This allows us to stay within the filtering framework and automatically infer the gain parameter, whereas the implicit formulation of \citet{arnold2019transporting} requires the use of a manually-tuned averaging schedule. Step size selection under noisy observations is a difficult problem and has been tackled from multiple viewpoints. Methods include meta-learning approaches~\citep{almeida1999parameter,schraudolph1999local,plagianakos2001learning,yu2006fast,baydin2017online} and methods that assume the interpolation regime~\citep{vaswani2019painless,berrada2019training}. \citet{rolinek2018l4} proposed extending a linear approximation to adapt step sizes but introduces multiple hyperparameters to adjust for the presence of noise, whereas we extend a quadratic approximation and automatically infer parameters based on noise estimates. Taking into account observation noise, \citet{mahsereci2017probabilistic} proposed a probabilistic line search based on fitting a Gaussian process to the optimization landscape. However, inference in Gaussian processes is more costly than our filtering approach. \section{Convergence in the Noisy Quadratic Setting} \begin{wrapfigure}[17]{r}{0.4\linewidth} \vspace{-1em} \centering \includegraphics[width=\linewidth]{plots/noisy_quadratic.pdf} \caption{Filtered gradients converge with a fixed step size in the noisy quadratic regime, whereas \text{\textsc{SGD}}{} results in diffusion for the same step size.} \label{fig:noisy_quadratic} \end{wrapfigure} As a motivating example, consider a simple toy problem, where \begin{equation} \label{eq:quadratic_toy_problem} f(\theta, \xi) = \frac{1}{2} (\theta - \xi)^T H (\theta - \xi), \end{equation} i.e., a mixture of quadratic functions with identical Hessian but varying location determined by the ``data'' $\xi$. The full gradient is $\nabla f(\theta) = H(\theta - \mathbb{E}[\xi])$ and per-example gradients evaluate to $\nabla f(\theta, \xi) = H(\theta - \xi) = \nabla f(\theta) - H(\xi - \mathbb{E}[\xi])$. Hence, we have additive gradient noise with covariance $\Sigma = H\mathbf{Cov}[\xi] H^T$ independent of $\theta$. Moreover, since the Hessian $\nabla^2 f(\theta, \xi) = H$ is independent of $\xi$, we have that $B_t\delta_{t-1}\equiv \nabla f_t - \nabla f_{t-1}$. The covariance $Q_t$ is zero and the filter equations simplify to \begin{equation}\label{eq:quadratic_toy_problem_filter_equations} \begin{split} & K_t = P_{t-1} (P_{t-1}+ \Sigma)^{-1},\\ & m_t = (I - K_t) (m_{t-1} + B_t\delta_{t-1}) + K_t g_t,\\ & P_t = (I - K_t) P_{t-1}, \end{split} \end{equation} initialized with $m_0=g_0$, $P_0=\Sigma$. The filter covariance $P_t$ contracts in every step and, in fact, shrinks at a rate of $O(1/t)$, meaning that the filter will narrow in on the exact gradient. We show that this enables $O(1/t)$ convergence with a \emph{constant} step size.
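Indeed, a direct computation from \eqref{eq:quadratic_toy_problem_filter_equations} makes this rate explicit: with $P_0=\Sigma$ and $Q_t=0$, every iterate $P_t$ is a function of $\Sigma$ and therefore commutes with it, so \[ P_1=\big(I-\Sigma(2\Sigma)^{-1}\big)\Sigma=\tfrac{1}{2}\Sigma,\qquad P_2=\Big(I-\tfrac{1}{2}\Sigma\big(\tfrac{3}{2}\Sigma\big)^{-1}\Big)\tfrac{1}{2}\Sigma=\tfrac{1}{3}\Sigma,\qquad \text{and, by induction,}\quad P_t=\tfrac{1}{t+1}\Sigma. \]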
\begin{proposition} \label{proposition:quadratic_toy_problem_convergence} Assume a problem of the form \eqref{eq:quadratic_toy_problem} with $\mu I \preceq H \preceq LI$. If we update $\theta_{t+1} = \theta_t - \alpha m_t$ with $\alpha\leq 1/L$ and $m_t$ obtained via Eq.~\eqref{eq:quadratic_toy_problem_filter_equations}, then $\mathbb{E}[f(\theta_t) - f_\ast] \in O\left(1/t \right)$. \end{proposition} Figure~\ref{fig:noisy_quadratic} shows experimental results for such a noisy quadratic problem of dimension $d=20$ with a randomly-generated Hessian (with condition number $> 1000$) and $\xi\sim\mathcal{N}(0, I)$. Using \text{\textsc{SGD}}{} with a high learning rate simply results in diffusion, and setting the learning rate smaller results in slow convergence. Gradient descent (GD) converges nicely with the high learning rate, and using adaptive step sizes leads to a better convergence rate. The filtered gradients from \text{\textsc{Meka}}{} converge almost as well as gradient descent, and adaptive step sizes provide an improvement. On the other hand, \text{\textsc{SGD}}{} produces unreliable gradient directions and does not work well with adaptive step sizes. We note that the stochastic gradient has a full covariance matrix and does not match our modeling assumptions, as our model uses a diagonal covariance for efficiency. Even so, the training loss of \text{\textsc{Meka}}{} follows that of gradient descent very closely after just a few iterations. \section{Classification Experiments} Next we test and diagnose our approach on classification benchmarks, MNIST and CIFAR-10. We use JAX's~\citep{jax2018github} vectorized map functionality for efficient per-example gradients and Hessian-vector products. For MNIST, we test using a multi-layer perceptron (MLP); for CIFAR-10, a convolutional neural network (CNN) and a residual network (ResNet-32)~\citep{he2016deep,he2016identity}. One key distinction is that we replace the batch normalization layers with group normalization~\citep{wu2018group}, as batch-dependent transformations conflict with our assumption that the gradient samples are independent. We note that the empirical per-iteration cost of \text{\textsc{Meka}}{} is $1.0$--$1.6 \times$ that of \text{\textsc{SGD}}{} due to the computation of Hessian-vector products. Full experiment details are provided in Appendix~\ref{app:experiment_details}. A detailed comparison to tuned baseline optimizers is presented in Appendix~\ref{app:comparison_with_all}. \paragraph{Online Variance Reduction} \begin{wrapfigure}[15]{r}{0.4\linewidth} \vspace{-1em} \centering \includegraphics[width=\linewidth]{plots/cifar10_3c3d_l2_norm.pdf} \caption{\text{\textsc{Meka}}{}'s estimated gradients are closer to the true full-batch gradient in $L_2$ norm than stochastically observed gradients by a factor of around $5$.} \label{fig:l2_norm} \end{wrapfigure} We test whether the filtering procedure is correctly aligning the gradient estimate with the true gradient. For this, we use CIFAR-10 with a CNN and no data augmentation, so that the true full-batch gradient over the entire dataset can be computed. Figure~\ref{fig:l2_norm} shows the $L_2$ norm difference between the gradient estimators and the full-batch gradient $\nabla f_t$. \text{\textsc{Meka}}{}'s estimated gradients are closer to the true gradient by around a factor of 5 compared to the minibatch gradient sample.
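The adaptive step sizes used in the comparisons below are obtained by numerically minimizing the one-dimensional objective in \eqref{eq:pi_loss} with Newton's method. A minimal sketch, in which the five scalar arguments are the directional summaries appearing in \eqref{eq:df_dist} and the starting point and fixed iteration count are illustrative choices, could look as follows:
\begin{verbatim}
import jax
import jax.numpy as jnp

def pi_step_size(lin, curv, var_f, var_g, var_h, alpha0=1e-3, iters=20):
    # lin   = delta^T m_t        (estimated directional derivative)
    # curv  = delta^T B_t delta  (estimated directional curvature)
    # var_f = 2 s_t              (function-value uncertainty)
    # var_g = delta^T P_t delta  (gradient uncertainty along delta)
    # var_h = delta^T Q_t delta  (curvature uncertainty along delta)
    def objective(alpha):
        mean = -alpha * lin + 0.5 * alpha ** 2 * curv
        std = jnp.sqrt(var_f + alpha ** 2 * var_g + 0.25 * alpha ** 4 * var_h)
        return mean / std   # minimizing this maximizes P(f_{t+1} - f_t <= 0)

    d1 = jax.grad(objective)   # first and second derivatives of the objective
    d2 = jax.grad(d1)
    alpha = jnp.asarray(alpha0)
    for _ in range(iters):
        # Newton step towards the stationary point, kept on the positive axis.
        alpha = jnp.maximum(alpha - d1(alpha) / d2(alpha), 1e-12)
    return alpha
\end{verbatim}
Since the solve involves only these scalar summaries and no further evaluations of $f$, it adds negligible overhead to each iteration.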
\begin{figure} \centering \includegraphics[width=0.82\linewidth]{plots/adaptive_comparison.pdf} \caption{Adaptive step sizes based on probability of improvement work best without any additional scaling factor $c$ for modifying the update rule: $\theta_{t+1} = \theta_t + c \alpha_t \delta_t$. } \label{fig:adaptive_comparison} \end{figure} \paragraph{Adaptive Step Sizes are Appropriately Scaled} \label{sec:adaptive_scaling} Without uncertainty quantification, the quadratic minimum step size scheme tends to result in step sizes that are too large. As such, one may include a scaling factor such that the update is modified as $\theta_{t+1} = \theta_t - c \alpha_t \delta_t$. In contrast, we find that the adaptive step sizes based on probability of improvement (PI) are already correctly scaled in the sense that a $c$ different from $1.0$ will generally result in worse performance. Figure~\ref{fig:adaptive_comparison} shows a comparison of different values for $c$ for the quadratic and PI~\eqref{eq:pi} adaptive schemes. We plot expected improvement in Appendix~\ref{app:adaptive_comparison_full}, which performs poorly and requires non-unit scaling factors. \subsection{Adaptive Step Sizes Dive into High-curvature, High-variance Regions} \label{app:mekadive} \begin{figure} \centering \includegraphics[width=\linewidth]{plots/adaptdelay.pdf} \caption{The performance of \text{\textsc{Meka}}{} with adaptive step sizes on ResNet-32 can be explained by quantities captured during optimization. \text{\textsc{Meka}}{} reaches high-curvature high-variance local minima, as soon as adaptive step sizes are used.} \label{fig:adaptdelay} \end{figure} A core aspect of our filtering approach is the ability to estimate quantities of interest during optimization. We now use these to help understand the loss landscape of ResNet-32 on CIFAR-10. We find that one cause of the slow convergence of \text{\textsc{Meka}}{} with adaptive step sizes is an abundance of minima that are usually too high variance for standard \text{\textsc{SGD}}. Figure~\ref{fig:adaptdelay} shows estimates of the normalized curvature along the descent direction $\smash{\frac{\delta^TB_t\delta}{\delta^T\delta}}$ as well as the per-sample gradient variance, averaged over parameters. To understand the loss landscape along the trajectory of optimization, we use multiple runs of \text{\textsc{Meka}}{} with the same initialization. Each run takes a different fixed number of constant-size steps before switching to the adaptive step size scheme. It is clear that immediately after switching to adaptive step sizes, \text{\textsc{Meka}}{} falls into an increasingly high curvature region and remains there. The gradient variance also remains high. As our optimization procedure can handle relatively high variance and curvature, it proceeds to optimize within this sharp but potentially non-local minimum. On the other hand, it may be an advantage of fixed-step-size SGD that it skips over both high-variance and high-curvature minima. This failing of adaptive step sizes during the initial phase of training may be related to the ``short horizon bias''~\citep{wu2018understanding} of our one-step-ahead acquisition function. If so, compute budget can be used to approximate multi-step-ahead gains to help reduce this bias. Additionally, the ability to optimize within high-curvature high-variance regions could potentially be an advantage on problems with fewer local minima, yet this may not be the case for deep learning.
\section{Conclusion} We introduced an online gradient estimation framework for stochastic gradient-based optimization, which leverages Hessian-vector products and variance estimates to perform automatic online gradient estimation and step size selection. The result is a stochastic optimization algorithm that can self-tune many important parameters such as momentum and learning rate schedules, in an online fashion without checkpointing or expensive outer-loop optimization. While the required additional observables can be computed efficiently with recent advances in automatic differentiation tooling, they are of course not free, increasing computational cost and memory usage compared to \text{\textsc{SGD}}{}. What one gains in return is automation, so that it suffices to run the algorithm just once, without tedious tuning. Given the amount of human effort and computational resources currently invested into hyperparameter tuning, we believe our contributions are valuable steps towards fully-automated gradient-based optimization. \bibliographystyle{plainnat}
2,869,038,156,212
arxiv
\section{#1}\setcounter{equation}{0} \documentclass[fleqn,11pt]{article} \usepackage{amsmath,amsfonts,amssymb} \usepackage{color} \usepackage{geometry} \geometry{ hmargin=2.5cm, vmargin=2.5cm } \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{property}{Property} \newtheorem{consequence}{Consequence} \newtheorem{example}{Example} \newtheorem{remark}{Remark} \newtheorem{remarks}{Remarks} \newtheorem{definition}{Definition} \newtheorem{proposition}{Proposition} \newtheorem{corollary}[theorem]{Corollary} \renewcommand{\thetheorem}{\thesection.\arabic{theorem}} \renewcommand{\thelemma}{\thesection.\arabic{lemma}} \renewcommand{\theproperty}{\thesection.\arabic{property}} \renewcommand{\theconsequence}{\thesection.\arabic{consequence}} \renewcommand{\theexample}{\thesection.\arabic{example}} \renewcommand{\theremark}{\thesection.\arabic{remark}} \renewcommand{\thedefinition}{\thesection.\arabic{definition}} \renewcommand{\theproposition}{\thesection.\arabic{proposition}} \renewcommand{\thecorollary}{\thesection.\arabic{corollary}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \newcommand{\sectionnew}[1]{ \section{#1}\setcounter{equation}{0} \setcounter{theorem}{0} \setcounter{lemma}{0}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{T}}{\mathbb{T}} \newcommand{\epsilon}{\epsilon} \newcommand{\rm{div}}{\rm{div}} \newcommand{\rm{Tr}}{\rm{Tr}} \newcommand{{_{||}}}{{_{||}}} \newcommand{{_{\perp}}}{{_{\perp}}} \newcommand{{x_{g}}}{{x_{g}}} \newcommand{{v_g}}{{v_g}} \newcommand{{\rho_{_L}}}{{\rho_{_L}}} \newcommand{\bar{\alpha}}{\bar{\alpha}} \newcommand{\hat{\rho}}{\hat{\rho}} \newcommand{\cqfd} {% \mbox{}% \nolinebreak% \hfill% \rule{2mm}{2mm}% \newline \newline } \title{ Discrete velocity Boltzmann equations in the plane: stationary solutions.} \author{Leif ARKERYD and Anne NOURI\\ \\Mathematical Sciences, 41296 G\"oteborg, Sweden,\\ [email protected]\\ \hspace*{1.in}\\ Aix-Marseille University, CNRS, Centrale Marseille, I2M UMR 7373, 13453 Marseille, France,\\ [email protected]\\ www.i2m.univ-amu.fr/perso/anne.nouri/} \date{} \begin{document} \maketitle \hspace{1cm}\\ {\bf Abstract} \hspace{1cm}\\ The paper proves existence of stationary mild solutions for normal discrete velocity Boltzmann equations in the plane with no pair of colinear interacting velocities and given ingoing boundary values. An important restriction of all velocities pointing into the same half-space in a previous paper is removed in this paper. A key property is $L^1$ compactness of integrated collision frequency for a sequence of approximations. This is proven using the Kolmogorov-Riesz theorem, which here replaces the $L^1$ compactness of velocity averages in the continuous velocity case, not available when the velocities are discrete. \footnotetext[1] {2010 Mathematics Subject Classification; 60K35, 82C40, 82C99.} \\ \footnotetext[2] {Key words; stationary Boltzmann equation, discrete coplanar velocities, normal model.}. \section {Introduction.} \label{generalization} \setcounter{equation}{0} The Boltzmann equation is the fundamental mathematical model in the kinetic theory of gases. Replacing its continuum of velocities with a discrete set of velocities is a simplification, preserving the essential features of free flow and quadratic collision term. 
Besides this fundamental aspect, the discrete equations can approximate the Boltzmann equation with any given accuracy \cite {BPS}, \cite{FKW}, \cite{M}, and are thereby useful for approximations and numerics. In the quantum realm they can also be more directly connected to microscopic quasi/particle models. A discrete velocity model of a kinetic gas is a system of partial differential equations having the form, \begin{align*} &\frac{\partial f_i}{\partial t}(t,z)+v_i\cdot \nabla _zf_i(t,z)= Q_i(f,f)(t,z),\quad t>0,\quad z\in \Omega ,\quad 1\leq i\leq p, \end{align*} where $f_i(t,z)$, $1\leq i\leq p$, are phase space densities at time $t$, position $z$ and velocities $v_i$. The spatial domain is $\Omega $. The given discrete velocities are $v_i$, $1\leq i\leq p$. For $f= (f_i)_{1\leq i\leq p}$, the collision operator $Q= (Q_i)_{1\leq i\leq p}$ with gain part $Q^+$, loss part $Q^-$, and collision frequency $\nu$, is given by \begin{align*} &Q_i(f,f)= \sum_{j,l,m=1}^p \Gamma_{ij}^{lm} (f_lf_m-f_if_j)\\ &\hspace*{0.52in}= Q_i^+(f,f)-Q_i^-(f,f), \\ &Q^+_i(f,f)= \sum_{j,l,m=1}^p \Gamma_{ij}^{lm}f_lf_m, \quad Q^-_i(f,f)= f_i\nu_i(f), \quad \nu _i(f)= \sum_{j,l,m=1}^p \Gamma_{ij}^{lm} f_j,\quad i=1,..., p. \end{align*} The collision coefficients satisfy \begin{align}\label{Gamma} &\Gamma_{ij}^{lm}=\Gamma_{ji}^{lm}=\Gamma_{lm}^{ij}\geq 0. \end{align} If a collision coefficient $\Gamma_{ij}^{lm}$ is non-zero, then the conservation laws for momentum and energy, \begin{align}\label{conservations-velocities} &v_i+v_j=v_l+v_m,\quad |v_i|^2+|v_j|^2=|v_l|^2+|v_m|^2, \end{align} are satisfied. We call interacting velocities any couple of velocities $(v_i,v_j)$ such that for some $(l,m)\in \{ 1,\cdot \cdot \cdot ,p\} ^2 $, $\Gamma _{ij}^{lm}> 0$. The discrete velocity model (DVM) is called normal (see \cite{C}) if any solution of the equations \begin{eqnarray*} \Psi(v_i)+\Psi(v_j)=\Psi(v_l)+\Psi(v_m), \end{eqnarray*} where the indices $(i,j;l,m)$ take all possible values satisfying $\Gamma_{ij}^{lm}>0$, is given by \begin{align*} &\Psi(v)=a+b\cdot v+c|v|^2, \end{align*} for some constants $a,c\in \mathbb{R}$ and $b\in \mathbb{R} ^d$. We consider \begin{align} &\text{the generic case of normal coplanar velocity sets with}\nonumber \\ &\text{ no pair of colinear interacting velocities } (v_i, v_j).\label{generic-condition} \end{align} The case is generic. Indeed, consider a normal velocity set such that for some interacting velocities $(v_i,v_j)$, $v_i$ and $v_j$ are colinear. Then there exists an arbitrary small vector $v_0$ such that the velocity set $(v_i+v_0)_{1\leq i\leq p} $ is normal and with no colinear interacting velocities. The paper considers stationary solutions to normal coplanar discrete velocity models satisfying \eqref{generic-condition}, in a strictly convex bounded open subset $\Omega \subset \mathbb{R} ^2$, with $C^2$ boundary $\partial \Omega $ and given boundary inflow. Denote by $n(Z)$ the inward normal to $Z\in \partial \Omega $. Denote the $v_i$-ingoing (resp. $v_i$-outgoing) part of the boundary by \begin{align*} &\partial \Omega _i^+= \{ Z\in \partial \Omega \hspace*{0.02in}; \hspace*{0.02in}v_i\cdot n(Z)>0\} ,\quad \quad (\text{resp. }\partial \Omega _i^-= \{ Z\in \partial \Omega \hspace*{0.02in}; \hspace*{0.02in}v_i\cdot n(Z)<0\} ). \end{align*} Let \begin{align*} &s_i^+(z)= \inf \{ s>0\hspace*{0.02in}; z-sv_i\in \partial \Omega _i^+\} ,\quad s_i^-(z)= \inf \{ s>0\hspace*{0.02in}; z+sv_i\in \partial \Omega _i^-\} ,\quad z\in \Omega . 
\end{align*} Write \begin{align}\label{df-in-out-points} z_i^+(z)= z-s_i^+(z)v_i \quad (\text{resp. } z_i^-(z)= z+s_i^-(z)v_i) \end{align} for the ingoing (resp. outgoing) point on $\partial\Omega$ of the characteristics through $z$ in direction $v_i$.\\ The stationary boundary value problem \begin{align} &v_i\cdot \nabla f_i(z)= Q_i(f,f)(z),\quad z\in \Omega,\label{discrete-general-a}\\ & f_i(z)= f_{bi}(z),\quad z\in \partial \Omega _i^+,\quad \quad 1\leq i\leq p,\label{discrete-general-b} \end{align} is considered in $L^1$ in one of the following equivalent forms (\cite{DPL});\\ the exponential multiplier form, \begin{align}\label{exponential-form} f_{i}(z)&= f_{bi}(z_i^+(z))e^{-\int_{0}^{s_i^+(z)} \nu_i(f)(z_i^+(z)+sv_i)ds}\nonumber\\ &+\int_{0}^{s_i^+(z)} Q_i^+(f,f)(z_i^+(z)+sv_i)e^{-\int_s^{s_i^+(z)}\nu_i(f)(z_i^+(z)+rv_i)dr}ds ,\quad \text{a.a. }z\in \Omega ,\quad 1\leq i\leq p, \end{align} the mild form, \begin{align}\label{mild-form} &f_i(z)= f_{bi}(z_i^+(z))+\int_{0}^{s_i^+(z)} Q_i(f,f)(z_i^+(z)+sv_i)ds,\quad \text{a.a. }z\in \Omega ,\quad 1\leq i\leq p, \end{align} the renormalized form, \begin{align}\label{renormalized-form} v_i\cdot \nabla \ln(1+f_i)(z)= \frac{Q_i(f,f)}{1+f_i}(z),\hspace*{0.04in} z\in \Omega ,\quad \quad \quad f_i(z )= f_{bi}(z),\hspace*{0.04in} z\in \partial \Omega _i^+,\quad 1\leq i\leq p, \end{align} in the sense of distributions. Denote by $L^1_+(\Omega)$ the set of non-negative integrable functions on $\Omega$. For a distribution function $f= (f_i)_{1\leq i\leq p}$, define its entropy (resp. entropy dissipation) by \begin{align*} &\sum _{i= 1}^p\int _\Omega f_i\ln f_i(z)dz,\quad \Big( \text{resp.}\quad \sum_{i,j,l,m=1}^p \Gamma_{ij}^{lm} \int _\Omega (f_lf_m-f_if_j)\ln \frac{f_lf_m}{f_if_j}(z)dz\Big) . \end{align*} The main result of the paper is \begin{theorem}\label{main-result} \hspace*{1.in}\\ Consider a coplanar normal discrete velocity model and a non-negative ingoing boundary value $f_{b}$ with mass and entropy inflows bounded, \begin{align*} &\int _{\partial \Omega _i^+}v_i\cdot n(z)\hspace*{0.02in}f_{bi}(1+\ln f_{bi})(z)d\sigma (z)<+\infty,\quad 1\leq i\leq p. \end{align*} For the boundary value problem \eqref{discrete-general-a}-\eqref{discrete-general-b} satisfying \eqref{generic-condition}, there exists a stationary mild solution in $\big( L^1_+(\Omega )\big) ^p$ with finite mass and entropy-dissipation. \end{theorem} \hspace*{1.in}\\ Given $i\in \{ 1,\cdot \cdot \cdot , p\} $, if $\Gamma _{ij}^{lm}= 0$ for all $j$, $l$ and $m$, then $f_i$ equals its ingoing boundary value, and the rest of the system can be solved separately. Such $i$'s are not present in the following discussion.\\ Most mathematical results for stationary discrete velocity models of the Boltzmann equation have been obtained in one space dimension. An overview is given in \cite{IP}. Half-space problems \cite{NB1} and weak shock waves \cite{NB2} for discrete velocity models have also been studied. A discussion of normal discrete velocity models, i.e. conserving nothing but mass, momentum and energy, can be found in \cite{BVW}. In two dimensions, special classes of solutions to the Broadwell model are given in \cite{BT}, \cite{B} and \cite{Ily}. The Broadwell model, not included in the present results, is a four-velocity model, with $v_1+v_2=v_3+v_4=0$ and $v_1$, $v_3$ orthogonal. \cite{B} contains a detailed study of the stationary Broadwell equation in a rectangle with comparison to a Carleman-like system, and a discussion of (in)compressibility aspects. 
A main result in \cite{CIS} is the existence of continuous solutions to the two-dimensional stationary Broadwell model with continuous boundary data for a rectangle. The paper \cite{AN2} solves that problem in an $L^1$-setting. The proof uses in an essential way the constancy of the sums $f_1+f_2$ and $f_3+f_4$ along characteristics, which no longer holds in the present paper. For every normal model, there is a priori control of entropy dissipation, mass and entropy flows through the boundary. From there, the main difficulties are to prove that for a sequence of approximations, weak $L^1$ compactness holds and the limit of the collision operator equals the collision operator of the limit. In \cite{AN1}, weak $L^1$ compactness of a sequence of approximations was obtained with assumption \eqref{generic-condition} together with the assumption that all velocities $v_i$ point into the same half-plane. In this paper we keep assumption \eqref{generic-condition}, remove the second assumption and provide a new proof of weak $L^1$ compactness of approximations using \eqref{generic-condition}. Assumption \eqref{generic-condition} is also crucial for proving $L^1$ compactness of the integrated collision frequencies, which is important for the convergence procedure. Our paper also differs from \cite{AN1} in the limit procedure. The frame of the limit procedure in \cite{AN1} is the splitting into 'good' and 'bad' characteristics following the approach in our earlier stationary continuous velocity papers \cite{AN}-\cite{AN4}. Here we instead have recourse to the sub- and super-solutions used in the classical evolutionary frame for renormalized solutions to the Boltzmann equation \cite{DPL}. \\ \hspace*{1.in}\\ For the continuous velocity evolutionary Boltzmann equation \cite{DPL}, the compactness properties of the collision frequency use in an essential way the averaging lemma, which is not available for the discrete velocity Boltzmann model. In the present paper, the compactness properties are proven by the Kolmogorov-Riesz theorem. Also the argument used in the stationary paper \cite{AN4} in the continuous velocity case for obtaining control of entropy, hence weak $L^1$ compactness of a sequence of approximations from the control of entropy dissipation, does not work in the discrete velocity case because the number of velocities is finite.\\ \hspace*{1.in}\\ The proof starts in Section \ref{Section2} from bounded approximations. In Section \ref{Section3}, $L^1$ compactness properties of the approximations are proven. Section \ref{Section4} is devoted to the proof of Theorem \ref{main-result}. \section{Approximations.}\label{Section2} \label{approximations} \setcounter{equation}{0} \setcounter{theorem}{0} Write $\mathbb{N} ^*= \mathbb{N} \setminus \{ 0\} $, and denote by $a\wedge b$ the minimum of two real numbers $a$ and $b$. Let $\mu_\alpha$ be a smooth mollifier in $\mathbb{R}^2$ with support in the ball centered at the origin of radius $\alpha $. Outside the boundary, the function to be convolved with $\mu_\alpha$ is continued in the normal direction by its boundary value. Let $\tilde{\mu }_{k}$ be a smooth mollifier on $\partial \Omega $ in a ball of radius $\frac{1}{k}$. Denote by \begin{align*} &f^k_{bi}= \Big( f_{bi}(\cdot )\wedge \frac{k}{2}\Big) \ast \tilde{\mu }_k ,\quad 1\leq i\leq p, \quad k\in \mathbb{N} ^*. \end{align*} The following lemma introduces a first approximate boundary value problem with damping and convolutions. 
\begin{lemma}\label{first-approximations} \hspace*{1.in}\\ For any $\alpha >0$ and $k\in \mathbb{N} ^*$, there is a solution $F^{\alpha ,k}\in (L^1_+(\Omega))^p$ to \begin{align} &\alpha F^{\alpha ,k}_i+v_i\cdot \nabla F^{\alpha ,k}_i= \sum _{j,l,m=1}^p\Gamma _{ij}^{lm}\Big( \frac{F^{\alpha ,k}_l}{1+\frac{F^{\alpha ,k}_l}{k}}\frac{F^{\alpha ,k}_m\ast \mu_\alpha }{1+\frac{F^{\alpha ,k}_m\ast \mu_\alpha }{k}}-\frac{F^{\alpha ,k}_i}{1+\frac{F^{\alpha ,k}_i}{k}}\frac{F^{\alpha ,k}_j\ast \mu_\alpha }{1+\frac{F^{\alpha ,k}_j\ast \mu_\alpha }{k}}\Big)\hspace*{0.03in},\label{df-F-final-1}\\ &F^{\alpha ,k}_i(z)= f^k_{bi}(z),\quad z\in \partial \Omega _i^+,\quad \quad 1\leq i\leq p.\label{df-F-final-2} \end{align} \end{lemma} \underline{Proof of Lemma \ref{first-approximations}.}\\ For a proof of Lemma \ref{first-approximations} we refer to the second section in \cite{AN1}.\\ \hspace*{1.in}\\ Let $k\in \mathbb{N} ^*$ be given. Each component of $F^{\alpha ,k}$ is bounded by a multiple of $k^2$. Therefore $(F^{\alpha ,k})_{\alpha \in ]0,1[}$ is weakly compact in $(L^1(\Omega))^p$. For a subsequence, the convergence is strong in $(L^1(\Omega))^p$ as stated in the following lemma. \begin{lemma}\label{cv-F-alpha} \hspace*{1.in}\\ {There is a sequence $(\beta (q))_{q\in \mathbb{N}}$ tending to zero when $q\rightarrow +\infty $ and a function $F^k\in L^1$, such that $(F^{\beta (q),k})_{q\in \mathbb{N} }$ } strongly converges in $(L^1(\Omega))^p$ to $F^k$ when $q\rightarrow +\infty $. \end{lemma} \underline{Proof of Lemma \ref{cv-F-alpha}.}\\ For a proof of Lemma \ref{cv-F-alpha} we refer to Lemma 3.1 in \cite{AN1}.\\ \hspace*{1.in}\\ Denote by \begin{align}\label{df-approximate-gain} &Q^{+k}_i= \sum _{j,l,m=1}^p\Gamma _{ij}^{lm}\frac{F^k_l}{1+\frac{F^k_l}{k}}\frac{F^k_m}{1+\frac{F^k_m}{k}},\quad \nu _i^k= \sum _{j,l,m=1}^p\Gamma _{ij}^{lm}\frac{F^k_j}{(1+\frac{F_i^k}{k})(1+\frac{F_j^k}{k})},\nonumber \\ &Q_i^k= Q_i^{+k}-F_i^k\nu _i^k,\quad 1\leq i\leq p, \end{align} and by $\tilde{D}_k$ the entropy production term of the approximations, \begin{align}\label{df-entropy-production} &\tilde{D}_k= \sum _{i, j,l,m=1}^p\Gamma _{ij}^{lm}\Big( \frac{F^k_l}{1+\frac{F^k_l}{k}}\frac{F^k_m}{1+\frac{F^k_m}{k}}-\frac{F^k_i}{1+\frac{F^k_i}{k}}\frac{F^k_j}{1+\frac{F^k_j}{k}}\Big) \ln \frac{F^k_lF^k_m(1+\frac{F^k_i}{k})(1+\frac{F^k_j}{k})} {(1+\frac{F^k_l}{k})(1+\frac{F^k_m}{k})F^k_iF^k_j}\hspace*{0.04in}. \end{align} All along the paper, $c_b$ denotes constants that may vary from line to line but is independent of parameters tending to $+\infty $ or to zero. \begin{lemma}\label{existence-k-approximations} \hspace*{1.in}\\ $F^k$ is a non-negative solution to \begin{align} &v_i\cdot \nabla F_i^k=Q_i^{+k}-F_i^k\nu_i^k \hspace*{0.03in},\label{Fk-i}\\ &F^k_i(z)= f^k_{bi}(z),\quad z\in \partial \Omega _i^+,\quad 1\leq i\leq p.\label{bcFk-i} \end{align} Solutions $(F^k)_{k\in \mathbb{N}^*}$ to \eqref{Fk-i}-\eqref{bcFk-i} have mass and entropy dissipation bounded from above uniformly with respect to $k$. Moreover their outgoing flows at the boundary are controlled as follows, \begin{align}\label{outgoing-flows} &\sum_{i= 1}^p\int_{\partial \Omega _i^{-},F_i^k\leq k} \mid v_i\cdot n(Z)\mid F^k_i\ln F^k_i(Z)d\sigma (Z)+\ln \frac{k}{2}\int_{\partial \Omega _i^{-},F_i^k> k}\mid v_i\cdot n(Z)\mid {F^k_i}d\sigma (Z)\leq c_b. 
\end{align} \end{lemma} \underline{Proof of Lemma \ref{existence-k-approximations}.}\\ Passing to the limit when $q\rightarrow +\infty $ in \eqref{df-F-final-1}-\eqref{df-F-final-2}, written for $F^{\beta (q),k}$, implies that $F^k$ is a solution in $\big( L^1_+(\Omega )\big) ^p$ to \eqref{Fk-i}-\eqref{bcFk-i}. For a proof of the rest of Lemma \ref{existence-k-approximations}, we refer to Lemma 3.2 in \cite{AN1}.\\ \section{On compactness of sequences of approximations.}\label{Section3} \setcounter{equation}{0} \setcounter{theorem}{0} This section is devoted to proving $L^1$ compactness properties of the approximations. In Proposition \ref{weakL1compactness}, weak $L^1$ compactness of $(F^k)_{k\in \mathbb{N} ^*}$ is proven. Lemma \ref{lemma-df-Omega-epsilon} splits $\Omega $ into a set of $i$-characteristics with arbitrarily small measure and its complement, where both the approximations and their integrated collision frequencies are bounded. In Lemma \ref{compactness-integrated-collision-frequency}, the strong $L^1$ compactness of the integrated collision frequencies is proven. \begin{proposition}\label{weakL1compactness} \hspace*{1.in}\\ The sequence $(F^k)_{k\in \mathbb{N} ^*}$ of solutions to \eqref{Fk-i}-\eqref{bcFk-i} is weakly compact in $L^1$. \end{proposition} \underline{Proof of Proposition \ref{weakL1compactness}.}\hspace*{0.05in}\\ By Lemma \ref{existence-k-approximations}, $(F^k)_{k\in \mathbb{N} ^*}$ is uniformly bounded in $(L^1(\Omega)) ^p$. \\ \hspace*{1.in}\\ Given \eqref{outgoing-flows} and the following bound on $F^k$, \begin{align}\label{bdd-by-outgoingA} F^k_i(z)&\leq F^k_i(z+s_i^-(z)v_i)\hspace*{0.02in}\exp \Big( \Gamma \displaystyle \sum _{j\in J_i}\int _{-s_i^+(z)}^{s_i^-(z)}F_j(z+rv_i)dr\Big) ,\quad z\in \Omega ,\quad i\in \{ 1,\cdot \cdot \cdot ,p\} , \end{align} the weak $L^1$ compactness of $(F^k)_{k\in \mathbb{N} ^*}$ will follow from the uniform boundedness in $L^\infty (\partial \Omega _i^+)$ of \begin{align}\label{weak29novB} &\big( \int _{0}^{s_i^-(Z)}F_j(Z+rv_i)dr\big) _{j\in J_i, k\in \mathbb{N} }, \end{align} where $J_i$ denotes $\{ j\in \{ 1,\cdot \cdot \cdot , p\} ; (v_i,v_j)\text{ are interacting velocities}\} $. By \eqref{generic-condition}, there exists $\eta >0$ such that for all interacting velocities $(v_i,v_j)$, \begin{align}\label{orthogonal-assumption} &\lvert \sin (\widehat{v_i,v_j})\rvert >\eta . \end{align} Let $i\in \{ 1,\cdot \cdot \cdot ,p\} $ and $Z\in \partial \Omega _i^+$. Multiply the equation satisfied by $F_j^k$ by $\frac{v_i^\perp \cdot v_j}{\lvert v_i\rvert }$ and integrate it on one of the half domains defined by the segment $[Z,Z+s_i^-(Z)v_i] $. Summing over $j\in \{ 1,\cdot \cdot \cdot ,p\}$ implies that \begin{align}\label{weak29novA} &\sum _{j=1}^p\sin ^2(\widehat{v_i,v_j})\int _0^{s_i^-(Z)}F_j^k(Z+sv_i)ds\leq c_b,\quad Z\in \partial \Omega _i^+. \end{align} Together with \eqref{orthogonal-assumption}, this leads to the control of \eqref{weak29novB}. \cqfd \hspace*{1.in}\\ Recall the exponential multiplier form for the approximations $(F^k)_{k\in \mathbb{N} ^*}$, \begin{align}\label{exponential-form-approximations} F^k_{i}(z)&= f^k_{bi}(z_i^+(z))e^{-\int_{-s_i^+(z)}^0 \nu_i^k(z+sv_i)ds}\nonumber\\ &+\int_{-s_i^+(z)}^0 Q_i^{+k}(z+sv_i)e^{-\int_s^{0}\nu_i^k(F^k)(z+rv_i)dr}ds ,\quad \text{a.a. }z\in \Omega ,\quad 1\leq i\leq p, \end{align} with $\nu _i^k$ and $Q_i^{+k}$ defined in \eqref{df-approximate-gain}. 
An $i$-characteristics is a segment of points $[Z-s_i^+(Z)v_i,Z]$, where $Z\in \partial\Omega _i^-$. Denote by $\Gamma = \displaystyle \max _{i,j,l,m}\Gamma _{ij}^{lm}$. \begin{lemma}\label{lemma-df-Omega-epsilon} \hspace*{1.in}\\ For $i\in \{ 1, ..., p\} $, $k\in \mathbb{N} ^*$ and $\epsilon >0$, there is a subset $\Omega ^{k,\epsilon }_{i}$ of $i$-characteristics of $\Omega $ with measure smaller than $c_b\epsilon $, such that for any $z\in \Omega \setminus \Omega ^{k,\epsilon }_{i}$, \begin{align}\label{bounds-on-complementary-of-Omega_iEpsilon} &F^k_{i}(z)\leq \frac{1}{\epsilon ^2}\exp \big( \frac{p\Gamma }{\epsilon ^2}\big) ,\quad \int_{-s_i^+(z)}^{s_i^-(z)}\nu_i^k(z+sv_i)ds\leq \frac{p\Gamma }{\epsilon ^2}. \end{align} \end{lemma} \underline{Proof of Lemma \ref{lemma-df-Omega-epsilon}.}\\ By the strict convexity of $\Omega $, there are for every $i\in \{ 1,\cdot \cdot \cdot p\} $ two points of $\partial \Omega $, denoted by $\tilde{Z_i}$ and $\bar{Z_i}$ such that \begin{align*} v_i\cdot n(\tilde{Z_i})= v_i\cdot n(\bar{Z_i})= 0. \end{align*} Let $\tilde{l}_i$ (resp. $\bar{l}_i$) be the largest boundary arc included in $\partial \Omega _i^-$ with one end point $\tilde{Z}_i$ (resp. $\bar{Z}_i$) such that \begin{align}\label{lemma4-1b} -\epsilon \leq v_i\cdot n(Z)\leq 0,\quad Z\in \tilde{l}_i\cup \bar{l}_i. \end{align} Let $J_i$ be the subset of $\{ 1,\cdot \cdot \cdot , p\} $ such that \begin{align}\label{df-J-i} &\text{for some }(l,m)\in \{ 1, \cdot \cdot \cdot ,p \} ^2,\quad \Gamma _{ij}^{lm}>0, \quad j\in J_i. \end{align} It follows from the exponential form of $F^k_i$ that \begin{align}\label{bdd-by-outgoing} F^k_i(z)&\leq F^k_i(z+s_i^-(z)v_i)\hspace*{0.02in}\exp \Big( \Gamma \displaystyle \sum _{j\in J_i}\int _{-s_i^+(z)}^{s_i^-(z)}F_j(z+rv_i)dr\Big) ,\quad z\in \Omega . \end{align} The boundedness of the mass flow of $(F^k_i)_{k\in \mathbb{N} ^*}$ across $\partial \Omega _i^-$ is \begin{align}\label{lemma4-1a} &\int _{\partial \Omega _i^-}\mid v_i\cdot n(Z)\mid F^k_i(Z)d\sigma (Z)\leq c_b,\quad k\in \mathbb{N} ^*. \end{align} It follows from \eqref{lemma4-1b}-\eqref{lemma4-1a} that the measure of the set \begin{align*} \{ Z\in \partial \Omega _i^-\cap \tilde{l}_i^{\hspace*{0.02in}c}\cap \bar{l}_i^{\hspace*{0.02in}c}\quad ;\quad F^k_i(Z)>\frac{1}{\epsilon ^2}\} \end{align*} is smaller than $c_b\epsilon $. The boundedness of the mass of $(F^k_j)_{k\in \mathbb{N} ^*}$ can be written \begin{align*} &\int_\Omega F_j^k(z)dz=\int _{\partial \Omega _i^-}\mid v_i\cdot n(Z)\mid \Big( \int _{-s_i^+(Z)}^0F^k_j(Z+rv_i)dr\Big) d\sigma (Z)\leq c_b,\quad j\in J_i. \end{align*} Hence the measure of the set \begin{align*} &\{ Z\in \partial \Omega _i^-\cap \tilde{l}_i^{\hspace*{0.02in}c}\cap \bar{l}_i^{\hspace*{0.02in}c}\quad ;\quad \int _{-s_i^+ (Z)}^0F^k_j(Z+rv_i)dr>\frac{1}{\epsilon ^2}\} ,\quad j\in J_i, \end{align*} is smaller than $c_b\epsilon $. Consequently, the measure of the set of $Z\in \partial \Omega _i^-\cap \tilde{l}_i^{\hspace*{0.02in}c}\cap \bar{l}_i^{\hspace*{0.02in}c}$ outside of which \begin{align*} F^k_i(Z)\leq\frac{1}{\epsilon ^2} \quad \text{and} \quad \int _{-s_i^+(Z)}^0F^k_j(Z+rv_i)dr\leq\frac{1}{\epsilon ^2},\quad j\in J_i, \end{align*} is bounded by $c_b\epsilon$. 
Together with \eqref{bdd-by-outgoing}, this implies that the measure of the complement of the set of $Z\in \partial \Omega _i^-$, such that \begin{align*} &F_i^k(z)\leq \frac{1}{\epsilon ^2}\exp \big( \frac{p\Gamma }{\epsilon ^2}\big) \quad \text{and}\quad \int _{-s_i^+(z)}^{s_i^-(z)}\nu _i^k(z+rv_i)dr\leq \frac{p\Gamma }{\epsilon ^2} \end{align*} for $z=Z+sv_i$, $s\in [ -s_i^+(Z),0]$, is bounded by $c_b\epsilon$. With it $c_b\epsilon$ is a bound for the measure of the complement, denoted by $\Omega _{i}^{k,\epsilon }$, of the set of $i$-characteristics in $\Omega$ such that for all points $z$ on the $i$-characteristics, \eqref{bounds-on-complementary-of-Omega_iEpsilon} holds. \cqfd Given $i\in \{ 1, ..., p\} $ and $\epsilon >0$, let $\chi^{k,\epsilon }_{i}$ denote the characteristic function of the complement of $\Omega ^{k,\epsilon }_{i}$. The following lemma proves the compactness in $L^1(\Omega )$ of the $k$-sequence of integrated collision frequencies. \begin{lemma}\label{compactness-integrated-collision-frequency} \hspace*{1.in}\\ The sequences $\Big( \int _{-s_i^+(z)}^0 \nu _i^k(z+sv_i)ds\Big) _{k\in \mathbb{N} ^*}$, $1\leq i\leq p$, are strongly compact in $L^1(\Omega )$. \end{lemma} \underline{Proof of Lemma \ref{compactness-integrated-collision-frequency}.}\\ Take $\Gamma_{ij}^{lm}> 0$. By \eqref{generic-condition}, $v_i$ and $v_j$ span $\mathbb{R}^2$. Denote by $(a,b)$ the corresponding coordinate system, $(a^-, a^+)$ defined by \begin{align*} &a^-= \min \{ a\in \mathbb{R} ; (a,b)\in \Omega \hspace*{0.04in}\text{for some }b\} ,\quad a^+= \max \{ a\in \mathbb{R} ; (a,b)\in \Omega \hspace*{0.04in}\text{for some }b\} , \end{align*} and by $D$ the Jacobian of the change of variables $z\rightarrow (a,b)$. The uniform bound for the mass of $(F^k)_{k\in \mathbb{N} ^*}$ proven in Lemma \ref{existence-k-approximations}, implies that \begin{align*} &\Big( \int _\Omega \int _{-s_i^+(z)}^0 \nu _i^k(z+sv_i)dsdz\Big) _{k\in \mathbb{N} ^*} \end{align*} is bounded in $L^1$ uniformly with respect to $k$. Indeed, for some $(b^-(a),b^+(a))$, $a\in [a^-,a^+]$, \begin{align*} \int _\Omega \int _{-s_i^+(z)}^0 F_j^k(z+sv_i)dsdz&= D\int _{a^-}^{a^+}\int _{b^-(a)}^{b^+(a)}\int _{-s_i^+(bv_j)}^{a}F_j^k(bv_j+sv_i) ds\hspace*{0.02in}db\hspace*{0.02in}da\\ &\leq D\int _{a^-}^{a^+}\int _{b^-(a)}^{b^+(a)}\int _{-s_i^+(bv_j)}^{s_i^-(bv_j)}F_j^k(bv_j+sv_i) ds\hspace*{0.02in}db\hspace*{0.02in}da\\ &\leq c\int _\Omega F_j^k(z)dz,\hspace*{1.4in} j\in J_i. \end{align*} By the Kolmogorov-Riesz theorem (\cite{K}, \cite{R}), the compactness of $\Big( \int _{-s_i^+(z)}^0 \nu _i^k(z+sv_i)ds\Big) _{k\in \mathbb{N} ^*}$ will follow from its translational equi-continuity in $L^1(\Omega)$. Equicontinuity in the direction $v_i$, and in the direction $v_j$ with the mild form \eqref{mild-form} for $F_j^k$, come natural. Here the assumption \eqref{generic-condition} becomes crucial. The sequence \begin{align}\label{pf-lemma4-2-a} &\Big( \int _{-s_i^+(z)}^0 F_j^k(z+sv_i)ds\Big) _{k\in \mathbb{N} ^*},\quad j\in J_i, \end{align} is translationally equi-continuous in the $v_i$-direction. Indeed, $s_i^+(z+hv_i)= s_i^+(z)+h$ so that, denoting by $I(0,h)$ the interval with endpoints $0$ and $h$ and using the uniform bound on the mass of $(F_j^k)_{k\in \mathbb{N} ^*}$, \begin{align*} &\int _\Omega \mid \int _{-s_i^+(z+hv_i)}^0 F_j^k(z+hv_i+sv_i)ds-\int _{-s_i^+(z)}^0 F_j^k(z+sv_i)ds\mid dz\\ &= \int _\Omega \int _{s\hspace*{0.02in}\in I(0,h)}F_j^k(z+sv_i)ds\hspace*{0.02in}dz\\ &\leq c \mid h\mid . 
\end{align*} Let us prove the translational equi-continuity of \eqref{pf-lemma4-2-a} in the $v_j$-direction. By the weak $L^1$ compactness of $(F_j^k) _{k\in \mathbb{N} ^*}$, it is sufficient to prove the translational equi-continuity in the $v_j$-direction of $\big( \int _{s_i^+(z)}^0 \chi _j^{k,\epsilon }F_j^k(z+sv_i)ds\big) _{k\in \mathbb{N} ^*}$. Expressing $F_j^k(z+hv_j+sv_i)$ (resp. $F_j^k(z+sv_i)$) as integral along its $v_j$-characteristics, it holds that \begin{align*} &\mid \int _{-s_i^+(z+hv_j)}^0 \chi _j^{k,\epsilon }F_j^k(z+hv_j+sv_i)ds-\int _{-s_i^+(z)}^0 \chi _j^{k,\epsilon }F_j^k(z+sv_i)ds\mid \leq \mid A_{ij}^k(z,h)\mid +\mid B_{ij}^k(z,h)\mid , \end{align*} where \begin{align*} &A_{ij}^k(z,h)= \int _{-s_i^+(z+hv_j)}^0 \chi _j^{k,\epsilon }f_{bj}^k\big( z_j^+(z+hv_j+sv_i) \big) ds-\int _{-s_i^+(z)}^0 \chi _j^{k,\epsilon }f_{bj}^k\big( z_j^+(z+sv_i)\big) ds, \end{align*} and \begin{align*} B_{ij}^k(z,h)&= \int _{-s_i^+(z+hv_j)}^0 \int _{-s_j^+(z+hv_j+sv_i)}^0\chi _j^{k,\epsilon }Q_j^k(z+hv_j+sv_i+rv_j)drds\nonumber \\ &-\int _{-s_i^+(z)}^0 \int _{-s_j^+(z+sv_i)}^0\chi _j^{k,\epsilon }Q_j^k(z+sv_i+rv_j)drds, \end{align*} with $Q_i^k$ defined in \eqref{df-approximate-gain}. Denote by $(z_j^+(z_i^+(z)), z_j^+(z_i^+(z+hv_j))$ the boundary arc with end points $z_j^+(z_i^+(z))$ and $z_j^+(z_i^+(z+hv_j))$ and of length tending to zero with $h$. Performing the change of variables $s\rightarrow Z= z_j^+(z+hv_j+sv_i) $ (resp. $s\rightarrow Z= z_j^+(z+sv_i) $) in the first (resp. second) term of $A_{ij}^k(z,h)$, and using that the sequence $(f_{bi}^k)_{k\in \mathbb{N} ^*}$ is bounded by $f_{bi}$, it holds that \begin{align}\label{pf-lemma4-2-e} &\lim _{h\rightarrow 0}\int _\Omega \mid A_{ij}^k(z,h)\mid dz= 0, \end{align} uniformly with respect to $k$. Moreover, for some $\omega _h(z)\subset \Omega $ of measure or order $\mid h\mid $ uniformly with respect to $z\in \Omega $, \begin{align}\label{pf-lemma4-2-b} B_{ij}^k(z,h)&= \int _{\omega _h(z)} \chi _j^{k,\epsilon }Q_j^k(Z)dZ. \end{align} The sequence $(\chi _j^{k,\epsilon }Q_j^k)_{k\in \mathbb{N} ^*}$ is weakly compact in $L^1$. Indeed, \begin{align}\label{pf-lemma4-2-c} \chi _j^{k,\epsilon }Q_j^k&\leq \frac{1}{\ln \Lambda }\tilde{D}_k+\Gamma \Lambda \Big( \sum _{i\in J_j}F_i^k\Big) (\chi _j^{k,\epsilon }F_j^k)\nonumber \\ &\leq \frac{1}{\ln \Lambda }\tilde{D}_k+\frac{\Gamma \Lambda }{\epsilon ^2}\exp \big( \frac{p\Gamma }{\epsilon ^2}\big) \Big( \sum _{i\in J_j}F_i^k\Big) ,\quad \Lambda >1, \end{align} with $(\tilde{D}_k)_{k\in \mathbb{N} ^*}$ uniformly bounded in $L^1$ and $(F_i^k)_{k\in \mathbb{N} ^*}$ weakly compact in $L^1$. Hence, \begin{align}\label{pf-lemma4-2-g} &\lim _{h\rightarrow 0}\int _\Omega \mid B_{ij}^k(z,h)\mid dz= 0,\quad \text{uniformly with respect to }k. \end{align} \cqfd \section{The passage to the limit in the approximations.}\label{Section4} \setcounter{equation}{0} \setcounter{theorem}{0} Let $f$ be the weak $L^1$ limit of a subsequence of the solutions $(F^k)_{k\in \mathbb{N} ^*}$ to \eqref{Fk-i}-\eqref{bcFk-i}, still denoted by $(F^k)_{k\in \mathbb{N} ^*}$. 
For proving that $f$ is a mild solution of \eqref{discrete-general-a}-\eqref{discrete-general-b}, it is sufficient to prove that for any $\eta >0$ and $i\in \{ 1, \cdot \cdot \cdot , p\} $, there is a set $X_i^\eta $ of $i$-characteristics with complementary set of measure smaller than $c\eta $, such that \begin{align}\label{use-of-test-function} \int _\Omega \varphi \chi _i^\eta f_i(z)dz&= \int _\Omega \varphi \chi _i^\eta f_{bi}(z_i^+(z))dz\nonumber \\ &+\int _\Omega \int _{-s_i^+(z)}^0\big( \varphi \chi _i^\eta Q_i(f,f)+\chi _i^\eta f_i\hspace*{0.02in}v_i\cdot \nabla \varphi \big) (z+sv_i)ds\hspace*{0.02in}dz,\quad \varphi \in C^1(\bar{\Omega }) , \end{align} where $\chi _i^\eta $ denotes the characteristic function of $X_i^\eta $. Define the set $X_i^\eta $ as follows. For every $\epsilon >0$, pass to the limit when $k\rightarrow +\infty $ in \begin{align}\label{exp-form-chi-F-k} &\chi _i^{k,\epsilon }F_i^k (z)\leq \chi _i^{k,\epsilon }F_i^k (z_i^-(z))\hspace*{0.02in}\exp \Big( \int _{-s_i^+(z)}^{s_i^-(z)}\nu _i^k (z+sv_i)ds\Big) , \quad \text{a.a. }z\in \Omega , \quad k\in \mathbb{N} ^*, \end{align} and use the weak $L^1$ compactness of $(\chi _i^{k,\epsilon }F_i^k)_{k\in \mathbb{N} ^*}$, the weak $L^1$ compactness and the uniform boundedness in $L^\infty $ of $(\chi _i^{k,\epsilon }F_i^k(z_i^-(z)))_{k\in \mathbb{N} ^*}$, and the strong $L^1$ compactness of $(\int _{-s_i^+(z)}^{s_i^-(z)}\nu_i^k(z+sv_i)ds)_{k\in \mathbb{N} ^*}$. It implies that \begin{align*} &F_i^\epsilon (z)\leq F_i^\epsilon (z_i^-(z))\hspace*{0.02in}\exp \Big( \int _{-s_i^+(z)}^{s_i^-(z)}\nu _i(f)(z+sv_i)ds\Big) , \quad \text{a.a. }z\in \Omega , \quad \epsilon \in ]0,1[, \end{align*} where $F_i^\epsilon $ is the limit of a subsequence of $(\chi _i^{k,\epsilon }F_i^k)_{k\in \mathbb{N} ^*}$ and $\nu _i(f)= \sum _{j,l,m=1}^p\Gamma _{ij}^{lm}f_j$. By the monotonicity in $\epsilon $ of $(F^\epsilon )_{\epsilon \in ]0,1[}$ (resp. $\big( F^\epsilon (z_i^-(z))\big) _{\epsilon \in ]0,1[})$ and the uniform boundedness of their masses, it holds that \begin{align*} &f_i(z)\leq f_i(z_i^-(z))\hspace*{0.02in}\exp \Big( \int _{-s_i^+(z)}^{s_i^-(z)}\nu _i(f)(z+sv_i)ds\Big) , \quad \text{a.a. }z\in \Omega . \end{align*} From here the proof follows the lines of the proof of Lemma \ref{lemma-df-Omega-epsilon}, so that given $\eta >0$, there is a set $X_i^\eta $ of $i$-characteristics, with complementary set of measure smaller than $c\eta $, such that \begin{align}\label{bdd-with-chi-eta} &f_i(z)\leq \frac{1}{\eta }e^{\frac{p\Gamma }{\eta }}\quad \text{and}\quad \int _{-s_i^+(z)}^{s_i^-(z)}\nu _i(f)(z+sv_i)ds\leq \frac{p\Gamma }{\eta },\quad \text{a.a. }z\in X_\eta . \end{align} Denote by $C^1_+(\bar{\Omega })$ the subspace of non-negative functions of $C^1(\bar{\Omega })$. \hspace*{1.in}\\ \begin{lemma}\label{passage-limit-in-k} \hspace*{1.in}\\ $f$ is a subsolution of \eqref{discrete-general-a}-\eqref{discrete-general-b}, i.e. \begin{align}\label{f-subsolution} \int _\Omega \varphi \chi _i^\eta f_i(z)dz&\leq \int _\Omega \varphi f_{bi}(z_i^+(z))dz+\int _\Omega \int _{-s_i^+(z)}^0\chi _i^\eta f_i\hspace*{0.02in}v_i\cdot \nabla \varphi (z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &+\int _\Omega \int _{-s_i^+(z)}^0\varphi \hspace*{0.01in}Q_i(f,f) (z+sv_i)ds\hspace*{0.02in}dz,\quad 1\leq i\leq p,\quad \varphi \in C^1_+(\bar{\Omega }). \end{align} \end{lemma} \underline{Proof of Lemma \ref{passage-limit-in-k}.}\\ Let $i\in \{ 1,\cdot \cdot \cdot ,p\} $ and $\varphi \in C^1_+(\bar{\Omega })$ be given. 
Write the mild form of $\varphi \chi _i^\eta \chi _i^{k,\epsilon }F_i^k$ and integrate it on $\Omega $. It results \begin{align}\label{passage-limit-1} \int _\Omega \varphi \chi _i^\eta \chi _i^{k,\epsilon }F_i^k(z)dz&= \int _\Omega \varphi \chi _i^\eta \chi _i^{k,\epsilon }f_{bi}^k(z_i^+(z))dz+\int _\Omega \int _{-s_i^+(z)}^0\chi _i^\eta \chi _i^{k,\epsilon }F_i^k\hspace*{0.02in}v_i\cdot \nabla \varphi (z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &+\int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta \chi _i^{k,\epsilon }\big( Q_i^{+k}-F_i^k\nu _i^k\big) (z+sv_i)ds\hspace*{0.02in}dz. \end{align} By the weak $L^1$ compactness of $(F_i^k)_{k\in \mathbb{N} ^*}$ and the linearity with respect to $\chi _i^{k,\epsilon }F_i^k$ of the first line of \eqref{passage-limit-1}, its passage to the limit when $k\rightarrow +\infty $ is straightforward. Let us pass to the limit when $k\rightarrow +\infty $ in any term of the loss term of \eqref{passage-limit-1}, denoted by $\Gamma _{ij}^{lm}L^k$, where \begin{align}\label{passage-limit-2} &L^k:= \int _\Omega \chi _i^\eta \chi _i^{k,\epsilon }(z)\int _{-s_i^+(z)}^0\varphi \frac{F_i^k}{1+\frac{F_i^k}{k}}\frac{F_j^k}{1+\frac{F_j^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz,\quad j\in J_i, \end{align} and $J_i$ is defined in \eqref{df-J-i}. By integration by parts, $L_k$ equals \begin{align}\label{passage-limit-3} &\int _\Omega \int _{-s_i^+(z)}^0\chi _i^\eta \chi _i^{k,\epsilon }\big( \varphi (Q_i^{+k}-F_i^k\nu _i^k) +(v_i\cdot \nabla \varphi )F_i^k\big)(z+sv_i)\Big( \int _{s}^0\chi _i^{k,\epsilon }\frac{F_j^k}{(1+\frac{F_i^k}{k})(1+\frac{F_j^k}{k})}(z+rv_i)dr\Big) ds\hspace*{0.02in}dz\nonumber \\ &+\int _\Omega \chi _i^\eta \chi _i^{k,\epsilon }\varphi \frac{f_{bi}^k}{1+\frac{f_{bi}^k}{k}}(z_i^+(z))\int _{-s_i^+(z)}^0\frac{F_j^k}{1+\frac{F_j^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz. \end{align} Denote by $(a,b)$ the coordinate system in the $(v_i,v_j)$ basis, $(a^-, a^+)\in \mathbb{R} ^2$ and $(b^-(a), b^+(a))\in \mathbb{R} ^2$ for every $a\in ]a^-, a^+[ $, such that \begin{align}\label{decomposition-Omega} &\Omega = \{ av_i+bv_j;\quad a\in ]a^-, a^+[, \quad b\in ]b^-(a), b^+(a)[ \hspace*{0.02in}\} . \end{align} The first term in $L^k$ can be written as $\int _{a^-}^{a^+}l^k(a)da$ with $l^k$ defined as \begin{align}\label{passage-limit-5} &l^k(a)= \int _{b^- (a)}^{b^+(a)}\int _{-s_i(bv_j)}^a\chi _i^\eta \chi _i^{k,\epsilon }\big( \varphi (Q_i^{+k}-F_i^k\nu _i^k)+(v_i\cdot \nabla \varphi )F_i^k\big) (sv_i+bv_j)\nonumber \\ &\hspace*{1.8in}\big( \int _{s}^a\chi _i^{k,\epsilon }\frac{F_j^k}{(1+\frac{F_i^k}{k})(1+\frac{F_j^k}{k})}(rv_i+bv_j)dr\big) ds\hspace*{0.02in}db. \end{align} For each rational number $a$, the sequence of functions \begin{align*} &(b,s)\in [b^-(a),b^+(a)]\times [-s_i^+(bv_j),a]\rightarrow \chi _i^\eta \chi _i^{k,\epsilon }\big( \varphi (Q_i^{+k}-F_i^k\nu _i^k)+(v_i\cdot \nabla \varphi )F_i^k\big) (sv_i+bv_j) \end{align*} is weakly compact in $L^1$, whereas \begin{align*} &(b,s)\rightarrow \int _{s}^a\chi _i^{k,\epsilon }\frac{F_j^k}{(1+\frac{F_i^k}{k})(1+\frac{F_j^k}{k})}(rv_i+bv_j)dr \end{align*} is by Lemma \ref{compactness-integrated-collision-frequency} strongly compact in $L^1$, and by Lemma \ref{lemma-df-Omega-epsilon} uniformly bounded in $L^\infty $. The convergence follows for any rational number $a$. With a diagonal process, there is a subsequence of $(l^k)$, still denoted by $(l^k)$, converging for any rational $a$. 
Moreover, \begin{align}\label{passage-limit-6} &\lim _{h\rightarrow 0}\hspace*{0.02in}\big( l^k(a+h)-l^k(a)\big) = 0, \end{align} uniformly with respect to $k$ and $a$, by the weak $L^1$ compactness of \begin{align*} &\big( \chi _i^\eta \chi _i^{k,\epsilon }( \varphi (Q_i^{+k}-F_i^k\nu _i^k)+(v_i\cdot \nabla \varphi )F_i^k\big) _{k\in \mathbb{N} ^*}\quad \text{and}\quad (F_j^k)_{k\in \mathbb{N} ^*}. \end{align*} Thus $(l^k)$ is a uniform converging sequence on $[a^-, a^+]$. The second term in $L^k$ can be treated analogously, $(\chi _i^{k,\epsilon }f_{bi}^k)_{k\in \mathbb{N} ^*}$ being uniformly bounded in $L^\infty $. The convergence follows.\\ \hspace*{1.in}\\ In order to determine the limit of $L^k$ when $k\rightarrow +\infty $, remark that \begin{align*} &\chi _i^\eta \chi _i^{k,\epsilon }( \varphi (Q_i^{+k}-F_i^k\nu _i^k)+(v_i\cdot \nabla \varphi )F_i^k= v_i\cdot \nabla (\chi _i^\eta \chi _i^{k,\epsilon }\varphi \hspace*{0.01in}F_i^k), \end{align*} which weakly converges in $L^1$ to $v_i\cdot \nabla (\chi _i^\eta \varphi \hspace*{0.01in}F_i^\epsilon )$ when $k\rightarrow +\infty $. Hence \begin{align} \lim _{k\rightarrow +\infty }L^k&=\int _\Omega \int _{-s_i^+(z)}^0v_i\cdot \nabla (\chi _i^\eta \varphi \hspace*{0.01in}F_i^\epsilon )(z+sv_i)\Big( \int _{s}^0f_j(z+rv_i)dr\Big) ds\hspace*{0.02in}dz\nonumber \\ &+\int _\Omega \chi _i^\eta \varphi f_{bi}(z_i^+(z))\Big( \int _{-s_i^+(z)}^0f_j(z+sv_i)ds\Big) \hspace*{0.02in}dz.\nonumber \end{align} By a backwards integration by parts, \begin{align}\label{passage-limit-8} \lim _{k\rightarrow +\infty }L^k&= \int _\Omega \int _{-s_i^+(z)}^0\varphi \hspace*{0.01in}\chi _i^\eta F_i^\epsilon f_j(z+sv_i)ds\hspace*{0.02in}dz. \end{align} \hspace*{0.02in}\\ In order to prove \eqref{f-subsolution}, let us prove that each \begin{align}\label{passage-limit-9} &\Gamma _{ij}^{lm}\int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta \chi _i ^{k,\epsilon }\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz,\quad j\in J_i, \end{align} term from $Q_i^{+k}$ in \eqref{passage-limit-1} converges when $k\rightarrow +\infty $ to a limit smaller than \begin{align}\label{passage-limit-10} &\Gamma _{ij}^{lm}\int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta F_l^{\epsilon ^\prime }f_m(z+sv_i)ds\hspace*{0.02in}dz+\alpha (\epsilon ^\prime ),\hspace*{0.04in}\epsilon ^\prime \in ] 0,1[ ,\quad \text{with } \lim _{\epsilon ^\prime \rightarrow 0}\alpha (\epsilon ^\prime )= 0. \end{align} Take $\Gamma _{ij}^{lm}= 1$, $j\in J_i$, for simplicity. 
$(\mu _ {\frac{1}{n}})_{n\in \mathbb{N} ^*}$ being the sequence of mollifiers defined at the beginning of Section \ref{Section2} for $\alpha = \frac{1}{n}$, split \eqref{passage-limit-9} into \begin{align}\label{passage-limit-12} &\int _\Omega \int _{-s_i^+(z)}^0\varphi (\chi _i^\eta \ast \mu _{\frac{1}{n}})\chi _l^{k,\epsilon ^\prime }\chi _i^{k,\epsilon }\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &+\int _\Omega \int _{-s_i^+(z)}^0\varphi (\chi _i^\eta \ast \mu _{\frac{1}{n}})(1-\chi _l^{k,\epsilon ^\prime })\chi _i^{k,\epsilon }\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &\int _\Omega \int _{-s_i^+(z)}^0\varphi \big( \chi _i^\eta -(\chi _i^\eta \ast \mu _{\frac{1}{n}})\big) \chi _i^{k,\epsilon }\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &\hspace*{1.in}\nonumber \\ &\leq \int _\Omega \int _{-s_i^+(z)}^0\varphi (\chi _i^\eta \ast \mu _{\frac{1}{n}})\chi _l^{k,\epsilon ^\prime }\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &+\frac{c}{\ln \Lambda }+\frac{c\Lambda }{\epsilon ^2}e^{\frac{p\Gamma }{\epsilon ^2}}\sum _{j\in J_i}\Big( \int _{\Omega _l^{k,\epsilon ^\prime }}F_j^k(z)dz+\int _\Omega \varphi \mid \chi _i^\eta -(\chi _i^\eta \ast \mu _{\frac{1}{n}})\mid F_j^k(z)dz \Big) \nonumber \\ &\hspace*{1.in}\nonumber \\ &\leq \int _\Omega \int _{-s_i^+(z)}^0\varphi (\chi _i^\eta \ast \mu _{\frac{1}{n}})\chi _l^{k,\epsilon ^\prime }\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &\hspace*{1.in}\nonumber \\ &+\frac{c}{\ln \Lambda }+\frac{c\Lambda }{\epsilon ^2}e^{\frac{p\Gamma }{\epsilon ^2}}\Big( \Lambda ^\prime \epsilon ^\prime +\frac{1}{\ln \Lambda ^\prime }+\frac{1}{\ln \frac{k}{2}}+\tilde{\Lambda }\parallel \chi _i^\eta -(\chi _i^\eta \ast \mu _{\frac{1}{n}})\parallel _{L^1}+\frac{1}{\ln \tilde{\Lambda }}\Big) ,\quad \text{by }\eqref{weak-compactness-1},\nonumber \\ &\hspace*{3.in}\Lambda >1,\quad \Lambda ^\prime >1,\quad \tilde{\Lambda }>1,\quad \epsilon ^\prime >0. \end{align} Denote by $D$ the Jacobian of the change of variables $z\rightarrow (a,b)$. For some smooth function $A$, and any integrable function $g$, \begin{align*} \int _\Omega \int _{-s_i^+(z)}^0g(z+sv_i)dsdz&= D\int _{b^-}^{b^+}\int _{a^-(b)}^{a^+(b)}\int _{-s_i^+(bv_j)}^ag(sv_i+bv_j)ds\hspace*{0.02in}da\hspace*{0.02in}db\\ &= D\int _{b^-}^{b^+}\int _{-s_i^+(bv_j)}^{a^+(b)}(a^+(b)-\max \{ a^-(b), s\} )g(sv_i+bv_j)\hspace*{0.02in}ds\hspace*{0.02in}db\\ &= \int _{\Omega }A(\alpha ,\gamma )g(\alpha v_l+\gamma v_m)\hspace*{0.02in}d\alpha \hspace*{0.02in}d\gamma . \end{align*} Hence, \begin{align} &\lim _{k\rightarrow +\infty }\int \int _{-s_i^+(z)}^0\varphi (\chi _i^\eta \ast \mu _{\frac{1}{n}})\chi _l^{k,\epsilon ^\prime }\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &= \int _\Omega \int _{-s_i^+(z)}^0\varphi (\chi _i^\eta \ast \mu _{\frac{1}{n}})F_l^{\epsilon ^\prime }f_m(z+sv_i)ds\hspace*{0.02in}dz,\quad \epsilon ^\prime \in ] 0,1[ . \end{align} For $\tilde{\Lambda }$ large enough, pass to the limit when $k\rightarrow +\infty $ and $n\rightarrow +\infty $ in \eqref{passage-limit-12}. 
Up to subsequences, the weak $L^1$ limits $F_i^\epsilon $ and $F_i^{\epsilon ^\prime }$ of $(\chi _i^{k,\epsilon }F_i^k)_{k\in \mathbb{N} ^*}$ and $(\chi _i^{k,\epsilon ^\prime }F_i^k)_{k\in \mathbb{N} ^*}$ when $k\rightarrow +\infty $ satisfy \begin{align}\label{passage-limit-a} &\int _\Omega \varphi \chi _i^\eta F_i^\epsilon (z)dz\leq \int _\Omega \varphi \chi _i^\eta f_{bi}^k(z_i^+(z))dz+\int _\Omega \int _{-s_i^+(z)}^0\chi _i^\eta F_i^\epsilon \hspace*{0.02in}v_i\cdot \nabla \varphi (z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &+\int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta \big( Q_i^+(F^{\epsilon ^\prime },f)-F_i^\epsilon \nu _i(f)\big) (z+sv_i)ds\hspace*{0.02in}dz\\ &+\frac{c}{\ln \Lambda }+\frac{c\Lambda }{\epsilon ^2}e^{\frac{p\Gamma }{\epsilon ^2}}\big( \Lambda ^\prime \epsilon ^\prime +\frac{1}{\ln \Lambda ^\prime }\big) ,\quad (\epsilon ,\epsilon ^\prime )\in ]0,1[ ^2 ,\quad \Lambda >1, \quad \Lambda ^\prime >1.\nonumber \end{align} Choose $\Lambda $ large enough, $\epsilon $ small enough, $\Lambda ^\prime $ large enough, $\epsilon ^\prime $ small enough, in this order. The passage to the limit when $\epsilon \rightarrow 0$ and $\epsilon ^\prime \rightarrow 0$ in \eqref{passage-limit-a} results from the monotone convergence theorem, the family $(F^\epsilon )_{\epsilon \in ]0,1[ }$ being non decreasing, with mass uniformly bounded, together with the mass of $(\chi _i^\eta Q_i^+(F^{\epsilon ^\prime },f))_{\epsilon ^\prime \in ]0,1[ }$ and $(\chi _i^\eta F_i^{\epsilon ^\prime }\nu _i(f))_{\epsilon ^\prime \in ]0,1[ }$. Consequently, \eqref{f-subsolution} holds. \cqfd \begin{lemma}\label{proof-theorem-1-1} $f$ is a solution to \eqref{discrete-general-a}-\eqref{discrete-general-b}. \end{lemma} \underline{Proof of Lemma \ref{proof-theorem-1-1}.}\\ For proving Lemma \ref{proof-theorem-1-1}, it remains to prove that \begin{align}\label{passage-limit-18} \int _\Omega \varphi \chi _i^\eta f_i(z)dz&\geq \int _\Omega \varphi \chi _i^\eta f_{bi}(z_i^+(z))dz+\int _\Omega \int _{-s_i^+(z)}^0\chi _i^\eta f_i\hspace*{0.02in}v_i\cdot \nabla \varphi (z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &+\int _\Omega \int _{-s_i^+(z)}^0\varphi \hspace*{0.02in}\chi _i^\eta Q_i(f,f)(z+sv_i)ds\hspace*{0.02in}dz,\quad 1\leq i\leq p,\quad \varphi \in C^1_+(\bar{\Omega }). \end{align} For $\beta>0$, start from the equation for $\varphi \chi _i^\eta F_i^k$ written in renormalized form, \begin{align}\label{passage-limit-19} &\beta^{-1}\varphi \chi _i^\eta \ln (1+\beta F_i^k) (z)-\beta^{-1} \varphi \chi _i^\eta \ln (1+\beta f_{bi}^k)(z_i^+(z))\nonumber \\ &+\int _{-s_i^+(z)}^0\beta^{-1}\chi _i^\eta \ln (1+\beta F_i^k) \hspace*{0.02in}v_i\cdot \nabla \varphi (z+sv_i)ds= \int _{-s_i^+(z)}^0\frac{\varphi \hspace*{0.02in}\chi _i^\eta (Q_i^{+k}-F_i^k\nu _i^k)}{1+\beta F_i^k}(z+sv_i)ds. \end{align} It holds \begin{align*} &\beta ^{-1}\ln(1+\beta x)<x,\hspace*{0.04in} \beta \in ]0,1[ \quad \text{ and }\quad \lim_{\beta \rightarrow 0} \beta ^{-1}\ln(1+\beta x)= x,\quad x>0. \end{align*} Hence in weak $L^1$ the sequence $(\beta ^{-1}\ln \big( 1+\beta F^k_i\big) )_{k\in \mathbb{N} ^*}$ converges modulo subsequence to a function $F^\beta \leq f$ when $k\rightarrow +\infty $. The mass of the limit increases to the mass of $f$, when $\beta \rightarrow 0$. % This gives in the final limit $\beta \rightarrow 0$ for the l.h.s. 
of \eqref{passage-limit-19}, \begin{align}\label{passage-limit-19-a} &\varphi \chi _i^\eta f_i(z)- \varphi \chi _i^\eta f_{bi} (z_i^+(z)) - \int _{-s_i^+(z)}^0 \chi _i^\eta f_i \hspace*{0.02in}v_i\cdot \nabla \varphi (z+sv_i)ds. \end{align} Using analogous arguments as for the limit of the loss term in Lemma \ref{passage-limit-in-k}, it holds that \begin{align*} &\lim _{k\rightarrow +\infty }\Gamma _{ij}^{lm}\int _\Omega \int _{-s_i^+(z)}^0\frac{\varphi \chi _i^\eta F_i^kF_j^k}{1+\beta F_i^k}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &= \Gamma _{ij}^{lm}\int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta \big( \underset{k\rightarrow +\infty }{\text{weak}L^1\text{lim}}\frac{F_i^k}{1+\beta F_i^k}\big) f_j(z+sv_i)ds\hspace*{0.02in}dz,\quad j\in J_i. \end{align*} But \begin{align*} &\underset{k\rightarrow +\infty }{\text{weak}L^1\text{lim}}\frac{F_i^k}{1+\beta F_i^k}\leq \underset{k\rightarrow +\infty }{\text{weak}L^1\text{lim}}F_i^k, \end{align*} and \begin{align*} &\int _\Omega \underset{k\rightarrow +\infty }{\text{weak}L^1\text{lim}}\frac{F_i^k}{1+\beta F_i^k}(z)dz \quad \text{increases to }\int _\Omega \underset{k\rightarrow +\infty }{\text{weak}L^1\text{lim}}F_i^k(z)dz \end{align*} when $\beta \rightarrow 0$. Hence \begin{align}\label{passage-limit-} &\lim _{\beta \rightarrow 0}\lim _{k\rightarrow +\infty }\Gamma _{ij}^{lm}\int _\Omega \int _{-s_i^+(z)}^0\frac{\varphi \chi _i^\eta F_i^kF_j^k}{1+\beta F_i^k}(z+sv_i)ds\hspace*{0.02in}dz= \Gamma _{ij}^{lm}\int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta f_if_j(z+sv_i)ds\hspace*{0.02in}dz. \end{align} For the gain term and any $(l,m)\in \{ 1, \cdot \cdot \cdot , p\} ^2$ such that $\Gamma _{ij}^{lm}>0$ for some $j\in \{ 1,\cdot \cdot \cdot , p\} $, \begin{align}\label{limit-in-k-24} &\int _\Omega \int _{-s_i^+(z)}^0\frac{\varphi \chi _i^\eta }{1+\beta F_i^k}\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &\geq \int _\Omega \int _{-s_i^+(z)}^0\frac{\varphi \chi _i^\eta \chi _l^{k,\epsilon }}{1+\beta F_i^k}\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &= \int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta \chi _l^{k,\epsilon }\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &-\int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta \chi _l^{k,\epsilon }\frac{\beta F_i^k}{1+\beta F_i^k}\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &\geq \int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta \chi _l^{k,\epsilon }\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz\nonumber \\ &-c\Lambda \sum _{j\in J_i}\int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta \chi _l^{k,\epsilon }\frac{\beta (F_i^k)^2F_j^k}{1+\beta F_i^k}(z+sv_i)ds\hspace*{0.02in}dz-\frac{c}{\ln \Lambda }\quad \Lambda >1,\quad \epsilon \in ]0,1[. \end{align} It holds \begin{align}\label{limit-in-k-25} &\lim _{k\rightarrow +\infty }\int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta \chi _l^{k,\epsilon }\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.02in}dz= \int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta F_l^\epsilon f_m^k(z+sv_i)ds\hspace*{0.02in}dz. 
\end{align} Choose $\Lambda $ large enough and split the domain of integration of every $j\in J_i$ term in \eqref{limit-in-k-24} into \begin{align*} \{ F_i^k\leq \Lambda ^\prime \} &\cup \{ F_i^k>\Lambda ^\prime \quad \text{and}\quad F_i^kF_j^k> \tilde{\Lambda } \frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}\} \\ &\cup \{ F_i^k>\Lambda ^\prime \quad \text{and}\quad F_i^kF_j^k\leq \tilde{\Lambda } \frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}\} ,\quad \Lambda ^\prime >1,\quad \tilde{\Lambda } >1. \end{align*} It holds that \begin{align}\label{limit-in-k-24-a} &\int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta \chi _l^{k,\epsilon }\frac{\beta (F_i^k)^2F_j^k}{1+\beta F_i^k}(z+sv_i)ds\hspace*{0.01in}dz\leq c\Big( \beta (\Lambda ^\prime )^2+\frac{1}{\ln \tilde{\Lambda }}+\frac{\tilde{\Lambda }}{\epsilon ^2}e^{\frac{p\Gamma }{\epsilon ^2}}\int _{F_i^k>\Lambda ^\prime }F_m^k(z)dz\Big) ,\nonumber \\ &\hspace*{3.in} \beta \in ]0,1[ ,\quad \Lambda ^\prime >0,\quad \tilde{\Lambda }>1. \end{align} The last term in \eqref{limit-in-k-24-a} tends to zero when $\tilde{\Lambda }\rightarrow +\infty $, $\Lambda ^\prime \rightarrow +\infty $, $\beta \rightarrow 0$ in this order, uniformly with respect to $k$. Consequently, \begin{align*} &\lim _{\beta \rightarrow 0}\lim _{k\rightarrow +\infty }\int _\Omega \int _{-s_i^+(z)}^0\frac{\varphi \chi _i^\eta }{1+\beta F_i^k}\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.01in}dz\geq \int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta F_l^\epsilon f_m(z+sv_i)ds\hspace*{0.01in}dz. \end{align*} This holds for every $\epsilon >0$. Hence \begin{align}\label{limit-in-k-24-d} &\lim _{\beta \rightarrow 0}\lim _{k\rightarrow +\infty }\int _\Omega \int _{-s_i^+(z)}^0\frac{\varphi \chi _i^\eta }{1+\beta F_i^k}\frac{F_l^k}{1+\frac{F_l^k}{k}}\frac{F_m^k}{1+\frac{F_m^k}{k}}(z+sv_i)ds\hspace*{0.01in}dz\geq \int _\Omega \int _{-s_i^+(z)}^0\varphi \chi _i^\eta f_l f_m(z+sv_i)ds\hspace*{0.01in}dz. \end{align} And so, \eqref{passage-limit-18} holds. Together with \eqref{f-subsolution}, this proves \eqref{use-of-test-function}. \cqfd
\section{Introduction} ~ Nowadays there is an intensive experimental effort to search for new phenomena in many fields of particle physics. The corresponding signals are rare events, for which one needs to treat low statistics data and non-Gaussian errors. Handling such data requires special care, since the usual Gaussian procedures and techniques are no longer valid. On the other hand, the minimization of chi-square functions is normally simple, relatively fast and a very familiar way of fitting data, estimating parameters and their errors, as well as testing the goodness of fit. The usual $\chi^2$ approach for fitting a theoretical curve to experimental data in order to estimate parameter values is to minimize the function \begin{equation} \chi^2_G = \sum_{i=1}^{N} \frac{(f(x_{i},\vec{\alpha})-n_i)^2}{\sigma_i^2} \label{1x1} \end{equation} \noindent where $f(x_i,\vec{\alpha})$ is a known function (the model prediction) calculated at the point $x_i$ and $\vec{\alpha}$ is the vector of parameters one wants to obtain; $n_i$ is the measured experimental value associated with the bin located at $x_i$ and $N$ is the number of bins in the range of interest. This method has been widely used but presents some limitations, such as the assumption that each bin content obeys a Gaussian distribution of spread $\sigma_i$, or at least that the measurement errors are almost ``Gaussian''. If the contents of each bin obey a Poisson distribution, some authors \cite{Brandt} recommend that each bin contain a statistically significant number of entries $n_i$, such that $\sigma_i \approx \sqrt{n_i}$; this is where the asymptotic limit of a large number of measurements starts and the square root of the variance can be considered a good interval estimate. For smaller values, it has been suggested to use bins of variable size, such that each bin content exceeds the statistically significant number of entries $n_i$. The disadvantage of these suggestions is that one can lose important information about the structure of the studied distribution. There is no rule of thumb for the bin width or for the ideal number of bins into which the region of interest should be divided. There is also the suggestion that the ideal choice is to use bins of variable width with equal probability contents \cite{Eadie}.\par In a very interesting paper, Baker and Cousins \cite{Baker} call attention to and discuss topics such as point estimation, confidence interval estimation, goodness-of-fit testing and biased estimation when fitting curves to histograms using chi-square functions.\par They presented a $\chi^2_{BC}$ function for fitting histograms when the bin contents obey a Poisson distribution. They defined a Poisson likelihood chi-square, given by Eq.(\ref{1x2}) below, \begin{equation} \chi^2_{BC}=2 \sum_{i=1}^N\left[f(x_i,\vec\alpha)-n_i+n_i \log\left(n_i/f(x_i,\vec\alpha)\right)\right] \label{1x2} \end{equation} This function behaves asymptotically as a chi-square distribution and can therefore be used for estimation and goodness-of-fit testing.\\ The chi-square function presented in this paper also behaves asymptotically like the classical chi-square function, converges fast to the correct value, fluctuates much less than $\chi^2_G$, and works even for distributions with long tails where bin contents can be very low. 
It is easy to implement in any minimization program.\\ After this introduction, we demonstrate in section 2 how one can transform a non-Gaussian pdf into an approximately Gaussian pdf. Section 3 is devoted to obtaining the chi-square function for the approximate Gaussian pdf derived from a Poisson distribution and to discussing some of its characteristics. In section 4 we compare the results obtained using Eq.(\ref{1x1}), Eq.(\ref{1x2}) and the new chi-square function for different numbers of entries in the histograms. This comparison is made using Monte Carlo events generated according to distributions with known parameters, as suggested by \cite{Baker}, and in section 5 we present the conclusions. In the appendix we obtain the equivalent chi-square expression for a binomial distribution. \section{Obtaining an approximate Gaussian distribution} ~ The basic motivation is to transform, via a change of variable, a non-Gaussian probability density function (pdf) into an approximately Gaussian pdf, preserving the probability even when one has a small number of events \cite{Eadie,Box}. Let us consider a non-Gaussian pdf $p(x)$; one wants a transformation such that the probability is preserved, Eq.(\ref{2x1}), and such that the new pdf $q(z)$ is approximately Gaussian: \begin{equation} q(z)dz = p(x)dx \label{2x1} \end{equation} Then $q(z)$ can be written as \begin{equation} q(z) = p(x) \left\arrowvert\frac{dx}{dz}\right\arrowvert \label{2x2} \end{equation} When a pdf is unimodal and obeys some regularity conditions \cite{Box}, its logarithm is approximately quadratic, so that \begin{equation} \log(p(x)) \approx \log(p(\hat{x}))- \frac{1}{2}\left (-\frac{\partial^2 \log(p(x))}{\partial x^2 }\right )_{\hat x}(x-\hat{x})^2 \label{2x3} \end{equation} \noindent where $\hat{x}$ is the point associated with the maximum of $p(x)$, and one can define the quantity \begin{equation} J(\hat{x}) = \left (-\frac{\partial^2 \log(p(x))}{\partial x^2}\right )_{\hat{x}} \label{2x4} \end{equation} On the other hand, the logarithm of a Normal pdf $g(x)$, with mean $\hat\mu$ and standard deviation $\sigma$, is of the form \begin{equation} \log(g(x)) = const - \frac{1}{2 \sigma^2} (x-\hat{\mu})^2 \label{2x5} \end{equation} \noindent so that, given the location parameter $\hat\mu$, it is completely determined by its standard deviation $\sigma$. A comparison between Eq.(\ref{2x3}) and Eq.(\ref{2x5}) shows that the variance of the pdf $p(x)$ is approximately equal to $J^{-1}(\hat{x})$. Let us now suppose that $z(x)$ is a one-to-one transformation between $x$ and $z$; then, using the chain rule for derivatives, one gets the relation \begin{eqnarray} J(\hat{z}) & = & \left ( -\frac{\partial^2 \log(p(x))}{\partial x^2}\right )_{\hat x} \left\arrowvert \frac {dx}{dz}\right\arrowvert^2_{\hat z}\nonumber\\ & = & J(\hat x)\left\arrowvert \frac {dx}{dz}\right\arrowvert^2_{\hat z} \label{2x6} \end{eqnarray} Let us choose $z(x)$ such that \begin{equation} \left (\frac{dx}{dz}\right )_{\hat z} = J^{-1/2}(\hat{x}) \label{2x7} \end{equation} This choice is made so as to make $J(\hat z)$ independent of $\hat z$, the standard deviation equal to one, and the new distribution $q(z)$ approximately translation invariant along the $z$ axis. 
The metric satisfying the above conditions is thus obtained from the relation \begin{equation} \frac{dz}{dx} = J^{1/2}(x) \label{2x8} \end{equation} \begin{equation} z = \int^x J^{1/2}(t) dt \label{2x9} \end{equation} \section{The Chi-square Function} As an example, let us apply the above prescription to a Poisson pdf given by \begin{equation} p_P(x) = \frac{x^k e^{-x}}{\Gamma(k+1)} \label{2x10} \end{equation} \noindent which means that after observing $k$ events one has a pdf of the estimated mean parameter $x$. Now one wants to find a Gaussian-like pdf through a transformation of $x$. The location of the maximum of $p_P(x)$ is easily shown to be at $\hat x= k$, and the term associated with the second derivative is \begin{equation} -\frac{\partial^2 \log(p(x))}{\partial x^2} = \frac{k}{x^2} \label{2x11} \end{equation} Using the fact that $\hat{x} = k$, one obtains \begin{equation} J(\hat{x}) = \frac{1}{k} \label{2x12} \end{equation} \noindent and then \begin{equation} J^{1/2}(x) = \frac{1}{\sqrt{x}} \label{2x13} \end{equation} Using Eq.(\ref{2x7}), the one-to-one transformation is obtained as \begin{equation} \frac{dz}{dx} = \frac{1}{\sqrt{x}} \label{2x14} \end{equation} \begin{equation} z = \int^x \frac{1}{\sqrt{t}}dt \label{2x15} \end{equation} \begin{equation} z = 2 \sqrt{x} \label{2x16} \end{equation} \noindent and the inverse transformation is \begin{equation} x = \left (\frac{z}{2}\right )^2 \label{2x17} \end{equation} Inserting the above expression into Eq.(\ref{2x10}) and using Eq.(\ref{2x2}), one obtains the approximately Gaussian expression $q_P(z)$ associated with the Poisson pdf $p_P(x)$, \begin{equation} q_P(z) = \frac{\left(z/2\right)^{2k+1} e^{-\left (z/2\right )^2}}{\Gamma(k+1)} \label{2x18} \end{equation} It is not difficult to show that the above expression is normalized, that $J(\hat z)=1$, and that it is translation invariant by construction. The approximate Gaussian and exact Gaussian distributions, $q_P(z)$ and $g(z)$ respectively, are shown in Fig. 1 for different values of $k$ ($k = 0,1,2,5$ and $10$). The worst approximation occurs at $k=0$, but it improves very fast as $k$ increases. The pdf $q_P(z)$ has a maximum at $\hat{z} = \sqrt{4k+2}$, which corresponds to $x_{\hat z} = k+ 1/2$. It is interesting to note that $x_{\hat z}$ lies between $\hat x=k$ and the median of the Poisson pdf, $x_m \approx k + 2/3$, i.e., the maximum of $p_P(x)$ is not directly related to the maximum of $q_P(z)$ via Eq.(\ref{2x16}) and Eq.(\ref{2x17}). It is also not difficult to obtain an analytical expression for the interval with roughly $68.3\%$ confidence level: since $q_P(z)$ has a maximum at $\hat{z} = \sqrt{4k+2}$ and a standard deviation equal to unity, one gets \begin{equation} \left [z_{min},z_{max}\right ] =\left [ \sqrt{4k+2}-1,\sqrt{4k+2}+1\right ] \label{2x19} \end{equation} Taking the inverse transformation, one gets the corresponding interval associated with the original Poisson pdf, \begin{equation} \left[x_{min},x_{max}\right ]= \left[(z_{min}/2)^2,(z_{max}/2)^2\right ] \label{2x20} \end{equation} The intervals calculated according to Eq.(\ref{2x19}) and Eq.(\ref{2x20}) are shown in Table 1 for different values of $k$, together with their probability contents. One can see that these intervals overestimate the $68.27\%$ confidence level but converge to it as $k$ increases. Let us now suppose that one wants to fit a set of data in which the content of each bin obeys a Poisson distribution. 
Let us suppose that one wants to fit a set of data when the content of each bin obeys a Poisson distribution. If the content of each bin corresponds to a small number of events, one can no longer use $\sigma_i = \sqrt{n_i}$ as its standard deviation, since the errors are asymmetrical, and consequently one cannot use the least-square fit method, Eq.(\ref{1x1}), either, since it works only for Gaussian pdfs. If one insists, one could get large deviations for the estimated parameters, as is shown in the figures of the next section. After taking the above transformation, the content of each bin is approximately Gaussian. Using the likelihood ratio test theorem \cite{Eadie,Baker}, one gets the following expression in terms of the bin contents $n_i$ \begin{equation} \chi^2_P=-2 \sum_{i=1}^N log(\lambda_i) \end{equation} \noindent where \begin{equation} \lambda_i=\frac{\left({\sqrt{4f(x_i,\vec\alpha)+2}\over 2}\right)^{2n_i+1} e^{-\left({\sqrt{4f(x_i,\vec\alpha)+2}\over 2}\right)^2}} {\left({\sqrt{4n_i+2}\over 2}\right)^{2n_i+1} e^{-\left({\sqrt{4n_i+2}\over 2}\right)^2}} \end{equation} \noindent which gives \begin{equation} \chi^2_P=\sum_{i=1}^N\left[2(f(x_i,\vec\alpha)-n_i)+ (2n_i+1)log\left({2n_i+1\over2 f(x_i,\vec\alpha)+1}\right)\right] \label{2x21} \end{equation} \noindent which asymptotically behaves like a chi-square distribution. This expression is similar to Eq.(\ref{1x2}) obtained by \cite{Baker}. One can also derive an equivalent expression for a binomial distribution. This is shown in the appendix. \section{Comparing the different chi-square functions} ~ Let us now compare the results obtained by minimizing Eq.(\ref{1x1}), Eq.(\ref{1x2}) and Eq.(\ref{2x21}) when the bin contents obey a Poisson distribution. This comparison was made for three different curves: Gaussian, Breit-Wigner and Moyal, Eq.(\ref{3x1}), Eq.(\ref{3x2}) and Eq.(\ref{3x3}), respectively. \begin{equation} f_{G}(x) \propto e^{\displaystyle{-\frac{(x-\mu_G)^2}{2 \sigma_G^2}}} \label{3x1} \end{equation} \begin{equation} f_{BW}(x) \propto \frac{1} {(2 \sigma_{BW}^{2}+(x-\mu_{BW})^ 2) } \label{3x2} \end{equation} \begin{equation} f_{M}(x) \propto e^{\displaystyle{{-h-e^{-h}}}} \label{3x3} \end{equation} \noindent where $h=(x-\mu_{M})/\sigma_{M}$.\\ The first two are symmetrical curves with ``short'' and ``long'' tails, respectively, while the last one is an asymmetrical function with a ``short'' left tail and a ``long'' right tail. The parameters $\{ \mu_j \}$ are associated with the maximum value of the distribution and $\{ \sigma_j \}$ are related to the spread of the distribution, where $j=G,BW,M$. One generates random points with known $\bar \mu_j$ and $\bar \sigma_j$ according to each of the above distributions and fills histograms with 100 bins, see Table 2. The number of entries ranges from 20 to $10^4$ and, for each fixed number of entries, $10^4$ sets of points were generated. For each fixed number of entries we calculate the average value of the fitted parameters $\mu_j$ and $\sigma_j$, which are the estimators of $\bar \mu_j$ and $\bar \sigma_j$, respectively, using Eq.(\ref{1x1}), Eq.(\ref{1x2}) and Eq.(\ref{2x21}), and the MERLIN optimization package \cite{merlin}. The fit was done from the first bin to the last bin with content different from zero, although there could exist bins with content equal to zero in between, except for Eq.(\ref{1x1}), where the bins with content equal to zero were excluded in order to avoid singularities.
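To make the fitting procedure concrete, here is a minimal Python sketch of a fit that minimizes the chi-square function of Eq.(\ref{2x21}) for a Gaussian shape and a low-statistics histogram. It is only an illustration of the formula: the study described above used the MERLIN package, while this sketch assumes \texttt{numpy} and \texttt{scipy.optimize} as stand-ins and fits an overall amplitude together with the mean and the width.
\begin{verbatim}
# Sketch (numpy/scipy assumed in place of MERLIN): minimize the
# chi-square function of Eq.(2x21) for a Gaussian shape fitted to a
# low-statistics histogram.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=50)        # only 50 entries
n, edges = np.histogram(data, bins=100, range=(-5.0, 5.0))
xc = 0.5 * (edges[:-1] + edges[1:])                   # bin centres

def model(x, amp, mu, sigma):                         # f(x_i, alpha)
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def chi2_P(params):                                   # Eq. (2x21)
    amp, mu, sigma = params
    if amp <= 0.0 or sigma <= 0.0:                    # keep f(x_i) > 0
        return np.inf
    f = model(xc, amp, mu, sigma)
    return np.sum(2.0 * (f - n)
                  + (2.0 * n + 1.0) * np.log((2.0 * n + 1.0) / (2.0 * f + 1.0)))

res = minimize(chi2_P, x0=[5.0, 0.1, 1.5], method="Nelder-Mead")
print(res.x)  # fitted amplitude, mean and width
\end{verbatim}
The same structure applies to the Breit-Wigner and Moyal shapes by swapping the model function; the guard against non-positive parameters simply plays the role of elementary bounds in the optimization.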
One also calculates the expected mean errors, defined as $\Delta \sigma_j=\sqrt{\langle(\sigma_j-\bar\sigma_j)^2\rangle}$ and $\Delta \mu_j=\sqrt{\langle(\mu_j-\bar\mu_j)^2\rangle}$, of the fitted $\mu_j$ and $\sigma_j$ with respect to the ``true'' known $\bar \mu_j$ and $\bar \sigma_j$, fixing the number of entries. These results are summarized in Figs. 2-13 as functions of the number of entries. We can see clearly that, as the number of entries increases, the minimization of $\chi^2_P$ shows convergence to the ``true'' values that is faster than or equal to, and expected mean errors that are smaller than or equal to, those obtained with $\chi^2_{BC}$. Both of these functions are systematically much better than $\chi^2_G$ with respect to the convergence and the expected mean errors of the parameters. For small numbers of entries, we can also notice that the parameters $\sigma_j$ are systematically overestimated in all the cases shown, while $\mu_M$ for the Moyal case is underestimated. All the chi-square functions presented here converge systematically to the correct values as the number of entries increases. One can observe in all figures that the three methods coincide as the number of entries increases. We can clearly see the advantage of using the chi-square function introduced here: it converges equally well or faster and has equal or smaller expected mean errors than the other two methods. Besides, Eq.(\ref{2x21}) is also easy to implement in optimization programs. \section{Conclusions} This article presented an improvement over the usual chi-square minimization technique for fitting functions, in order to extend its applicability to low-statistics data, where the errors are asymmetrical. The new method is obtained through a change of variables such that one gets an approximately Gaussian pdf from the original pdf. The approximate Gaussian pdf associated with a Poisson pdf is obtained, and a chi-square function is adapted for this new Gaussian-like pdf. Monte Carlo generated events show an improvement in the low-statistics region, as expected, although all the results converge to the ``true'' values as the number of events increases. This method has shown a fast convergence to the correct parameter values, especially for the parameter associated with the curve spread, $\sigma_j$. The proposed chi-square function is consistent, since the fitted parameters converge to the true values of the parameters and their expected mean errors decrease as the number of observations increases. The results are simple and easy to use in standard optimization procedures. A similar result was obtained in the appendix for binomial distributions. \\ \textit{Acknowledgments:} The authors are very grateful for discussions with A. Ramalho and A. Vaidya. This work was supported in part by the following Brazilian agencies: CNPq, FINEP, FAPERJ, FJUB and CAPES. \newpage
\section{Introduction} In 1937, Henri Cartan (1904--2008), one of the founders of the Bourbaki group, introduced the concepts of filter and ultrafilter \cite{Cartan1, Cartan2}. These concepts were among the cornerstones of Bourbaki's exposition of General Topology \cite{Bourbaki}. For non-metrizable spaces, filter convergence is a good substitute for ordinary convergence of sequences; in particular, a Hausdorff space $X$ is compact if and only if every filter on $X$ has a cluster point. We refer to \cite[Section 16.1]{Kad2018} for a brief introduction to filters and compactness. Filters and ultrafilters (or the equivalent concepts of ideals and maximal ideals of subsets) are widely used in Topology, Model Theory, and Functional Analysis. Let us recall some definitions. A \emph{filter} $\mathfrak{F}$ on a set $\Omega \neq \emptyset$ is a non-empty collection of subsets of $\Omega$ satisfying the following axioms: \begin{enumerate} \item[(a)] $\emptyset \notin \mathfrak{F}$; \item[(b)] if $A,B \in \mathfrak{F}$ then $A \cap B \in \mathfrak{F}$; \item[(c)] for every $A \in \mathfrak{F}$, if $B \supset A$ then $B \in \mathfrak{F}$. \end{enumerate} The natural ordering on the set of filters on $\Omega$ is defined as follows: $\mathfrak{F}_1 \succ \mathfrak{F}_2$ if $\mathfrak{F}_1 \supset \mathfrak{F}_2$. Maximal elements in this order are called \emph{ultrafilters}. The existence of ultrafilters requires the Axiom of Choice, so in this paper we work in the Zermelo-Fraenkel system of set theory axioms with the Axiom of Choice (ZFC). For an ultrafilter $\mathfrak{U}$ on $\Omega$ the following is true: for every subset $A \subset \Omega$ that does not belong to $\mathfrak{U}$, the complement $\Omega \setminus A$ belongs to $\mathfrak{U}$. Actually, this property characterizes those filters which are ultrafilters. In this paper we are interested in filters on ${\mathbb N}$. Given a filter $\mathfrak{F}$ on ${\mathbb N}$, a sequence $x_n$, $n \in {\mathbb{N}}$, in a topological space $X$ is said to be $\mathfrak{F}$-\emph{convergent} to $x$ if for every neighborhood $U$ of $x$ the set $\{n \in {\mathbb{N}}\colon x_n \in U\}$ belongs to $\mathfrak{F}$. In particular, if one takes as $\mathfrak{F}$ the filter of those sets whose complements are finite (the \emph{Fr\'echet filter} $\mathfrak{F}_{Fr}$), then $\mathfrak{F}_{Fr}$-convergence coincides with the ordinary one. A filter $\mathfrak{F}$ on ${\mathbb N}$ is said to be \emph{free} if it dominates the Fr\'echet filter or, equivalently, if the intersection of all elements of $\mathfrak{F}$ is empty. In this case, every ordinary convergent sequence is automatically $\mathfrak{F}$-convergent. For a free ultrafilter $\mathfrak{U}$ on ${\mathbb N}$, every sequence $(x_n)$ in a compact space $X$ is $\mathfrak{U}$-convergent, which makes $\mathfrak{U}$-limits a powerful and widely used tool. In the sequel, when we say ``filter'' or ``ultrafilter'' we assume that it is free, even if we do not say this explicitly. We use the expressions ``collection'' and ``family'' with the same meaning as ``set''. In particular, if we say that $W = \{\mathfrak{U}_k\}_{k=1}^{n}$ is a collection of filters, we mean that all the $\mathfrak{U}_k$ are different. A non-negative finitely additive measure $\mu$ defined on the collection $ 2^{\mathbb N} $ of all subsets of $\mathbb N$ is said to be a \emph{statistical measure} if $\mu(\mathbb N) = 1$ and $\mu(\{k\}) = 0$ for all $k \in \mathbb N$. Evidently, a statistical measure cannot be countably additive.
Statistical measures were introduced in \cite{ChengLinLan2008, ChengLinShi2009, BaoCheng2013} and extensively studied in \cite{ChengHuaZhou2016}. The \emph{filter generated by a statistical measure} $\mu$ is the collection $\mathfrak{F}_\mu$ of those subsets $A \subset {\mathbb N}$ for which $\mu(A) = 1$. Conversely, an example of a statistical measure is the characteristic function $\mathds{1}_\mathfrak{U}$ of a free ultrafilter $\mathfrak{U}$ on $\mathbb N$: $\mathds{1}_\mathfrak{U}(A) = 1$ if $A \in \mathfrak{U}$, and $\mathds{1}_\mathfrak{U}(A) = 0$ if $A \in 2^{\mathbb N} \setminus \mathfrak{U}$. Consequently, every free ultrafilter on ${\mathbb N}$ is generated by a statistical measure. To give more examples, one can use the following straightforward observation that rephrases \cite[Theorem 4.4]{ChengHuaZhou2016}. \begin{remark} \label{rem1-count-inters} Let $\mu_n$, $n \in {\mathbb N}$, be a sequence of statistical measures and let $a_n$, $n \in {\mathbb N}$, be a sequence of positive reals with $\sum_{n \in {\mathbb N}} a_n = 1$; then $ \sum_{n \in {\mathbb N}} a_n \mu_n$ is a statistical measure. In particular, for a sequence $\mathfrak{U}_n$, $n \in {\mathbb N}$, of free ultrafilters on ${\mathbb N}$, the filter $$ \bigcap_{n \in {\mathbb N}} \mathfrak{U}_n=\{A\subset {\mathbb N}\colon A\in \mathfrak{U}_n\ \forall n\in {\mathbb N}\} $$ is generated by the statistical measure $\sum_{n \in {\mathbb N}} a_n \mathds{1}_{\mathfrak{U}_n}$. \end{remark} Let us also remark that if a statistical measure $\mu$ satisfies $\mu(A)\in \{0,1\}$ for every $A\subset {\mathbb N}$, then, clearly, $\mu=\mathds{1}_\mathfrak{U}$ for the ultrafilter $$\mathfrak{U} =\{A\subset {\mathbb N}\colon \mu(A)=1\}.$$ Besides this, not much more is known. There are nontrivial examples of statistical measures coming from the Hahn-Banach Theorem, the most prominent of them being the invariant means on countable commutative semigroups, in particular, the generalized Banach limit $\mathrm{Lim}$, see \cite[Section 5.5]{Kad2018}, especially Exercises 8-12 of Subsection 5.5.2. For some of them the corresponding filter cannot be represented as a countable intersection of ultrafilters. The corresponding examples can be extracted from results by Fremlin and Talagrand; see the references and a short description in Section \ref{seq-problems}. According to \cite[Theorem 5.2]{ChengHuaZhou2016}, the Fr\'echet filter is not generated by a statistical measure. In \cite{Kadets2016} the same is shown for the filter $\mathfrak{F}_{st}$ of all subsets $A \subset {\mathbb N}$ of natural density $1$. The filter $\mathfrak{F}_{st}$ generates the famous \emph{statistical convergence} for sequences, which, together with its various generalizations, is a very popular area of research. For instance, Zentralblatt Math. shows 469 documents published between 1981 and 2020 that have the words ``statistical convergence'' in their titles. The name ``statistical measure'' is motivated by statistical convergence. The people exploring statistical convergence mostly come to this kind of problem from mathematical analysis, measure theory and functional analysis. Our background and motivation are the same. What the authors of \cite{ChengHuaZhou2016} and \cite{Kadets2016} did not know at the time of the corresponding publications was that statistical measures (without this name) had been considered earlier by other people, whose motivation came from the foundations of mathematics: axiomatic set theory, model theory and descriptive set theory.
Both of the above-mentioned examples of filters that are not generated by a statistical measure, as well as many others, can be deduced using a descriptive set theory approach, which we briefly explain below. Let us identify, as usual, the collection $2^{{\mathbb N}}$ of all subsets of ${\mathbb N}$ with the Cartesian power $\{0, 1\}^{{\mathbb N}}$. Considering on $\{0, 1\}$ the discrete topology, one generates the standard product topology on $2^{{\mathbb N}}$. It is hidden in the simplified proof of a theorem of Solovay in \cite[Theorem 15.5]{ToWa2016}, without using the words ``statistical measure'', that a filter $\mathfrak{F}$ generated by a statistical measure, considered as a subset of the topological space $\{0, 1\}^{{\mathbb N}}$, cannot have the Baire property and so, in particular, is not a Borel subset of $\{0, 1\}^{{\mathbb N}}$. Since every ``explicitly defined'' filter (like $\mathfrak{F}_{Fr}$, $\mathfrak{F}_{st}$, or the Erd\"{o}s-Ulam filters and summable filters considered below) is a Borel subset, none of them is generated by a statistical measure. In order to attract more attention of ``mathematical analysis people'' to this kind of reasoning, we go into some detail and give more references in the last section of the paper. In our paper we address similar kinds of questions using an elementary, purely combinatorial approach. In Section~\ref{section:poorandconglomerated} we present a simple sufficient condition (called the conglomeration property) for a filter not to be generated by a statistical measure. Erd\"{o}s-Ulam filters and summable filters are conglomerated filters, which gives an elementary proof that they are not generated by a statistical measure. In particular, this simplifies considerably the demonstration of the main result of \cite{Kadets2016}. Apart from this, in Section~\ref{section:intersection} we present some reasoning about filters that are intersections of finite or countable families of ultrafilters. We demonstrate that, in contrast to finite intersections, a representation as an intersection of a countable family of ultrafilters is not unique, which makes the problem of determining the existence of such a representation more difficult. A minimal representation as an intersection of a countable family of ultrafilters, if it exists, is unique, but it is unclear when such a representation exists. We conclude the paper with a list of open questions and related remarks in Section~\ref{seq-problems}. Before we pass to the main part, let us recall some more common terminology about filters. For a given filter $\mathfrak{F}$ on ${\mathbb N}$ the corresponding \emph{ideal} of $\mathfrak{F}$, $\mathcal{I} = \mathcal{I}(\mathfrak{F})$, is the collection of the complements of the elements of $\mathfrak{F}$, that is, $$ \mathcal{I}(\mathfrak{F}) =\{{\mathbb N} \setminus A \colon A \in \mathfrak{F}\}. $$ From the definition of a filter, it follows that $\mathcal{I}(\mathfrak{F})$ satisfies the properties of an ideal of subsets: ${\mathbb N}\notin \mathcal{I}(\mathfrak{F})$, $\mathcal{I}(\mathfrak{F})$ is closed under finite unions, and if $B_1\in \mathcal{I}(\mathfrak{F})$ and $B_2\subset B_1$, then $B_2\in \mathcal{I}(\mathfrak{F})$.
The corresponding \emph{grill} $\mathfrak{G} = \mathfrak{G}(\mathfrak{F})$ of $\mathfrak{F}$ is the collection of those sets that do not belong to $\mathcal{I}(\mathfrak{F})$ or, equivalently, the collection of those sets that intersect all the elements of $\mathfrak{F}$: $$ \mathfrak{G}(\mathfrak{F})=2^{{\mathbb N}} \setminus \mathcal{I}(\mathfrak{F})=\left\{B\in 2^{\mathbb N}\colon B\cap A\neq\emptyset \ \forall A\in \mathfrak{F}\right\}. $$ It is immediate that $\mathfrak{F}\subset \mathfrak{G}(\mathfrak{F})$. Nowadays, grills are more often called ``co-ideals'' and denoted either $\mathcal{I}^+$ or $\mathfrak{F}^*$. Using the name ``grill'' we pay respect to Gustave Choquet, who introduced this concept axiomatically and proved \cite{Choquet} that every axiomatically defined grill corresponds to some filter. A couple of examples may be of help. \begin{enumerate} \item[(1)] If $\mathfrak{F}_{Fr}$ is the Fr\'{e}chet filter, then $\mathcal{I}(\mathfrak{F}_{Fr})$ is the collection of all finite subsets of ${\mathbb N}$ and $\mathfrak{G}(\mathfrak{F}_{Fr})$ is the collection of all infinite subsets of ${\mathbb N}$. \item[(2)] If $\mathfrak{F}_\mu$ is the filter generated by a statistical measure $\mu$, then $\mathcal{I}(\mathfrak{F}_\mu) =\{ A \subset {\mathbb N} \colon \mu(A) = 0\}$, and $\mathfrak{G}(\mathfrak{F}_\mu) =\{ A \subset {\mathbb N} \colon \mu(A) > 0\}$. \end{enumerate} For $A \in \mathfrak{G}(\mathfrak{F})$ the \emph{trace} $\mathfrak{F}|_A$ of $\mathfrak{F}$ on $A$ is the collection of all sets of the form $A \cap B$, $B \in \mathfrak{F}$. This collection of sets is a filter on $A$. A family $W$ of subsets of a set $\Omega$ is said to be \emph{centered} if the intersection of any finite collection of members of $W$ is not empty. A family $W$ is centered if and only if there is a filter $\mathfrak{F}$ on $\Omega$ containing $W$. A non-empty family $\mathcal{D}$ of subsets of $\Omega$ is called a \emph{filter basis} if $\emptyset \notin \mathcal{D}$ and for every $A,B\in \mathcal{D}$ there is $C\in \mathcal{D}$ such that $C\subset A\cap B$. Given a filter basis $\mathcal{D}$, the family $\mathfrak{F}$ of all sets $A\subset \Omega$ which contain at least one element of $\mathcal{D}$ as a subset is a filter, which is called the \emph{filter generated by the basis} $\mathcal{D}$. We write $\overline{n, m}$ to denote the set of integers $\{n, n+1, \ldots, m\}$. For a set $E$ we denote by $\# E$ the number of elements in $E$. \section{Poor filters and conglomerated filters}\label{section:poorandconglomerated} Two sets $A, B \subset {\mathbb N}$ are said to be \emph{almost disjoint} if $\# (A\cap B) < \infty$. For a given free filter $\mathfrak{F}$, the sets $A, B \subset {\mathbb N}$ are said to be $\mathfrak{F}$-\emph{almost disjoint} if $A\cap B \in \mathcal{I}(\mathfrak{F})$. Remark that almost disjointness implies $\mathfrak{F}$-almost disjointness, as $\mathfrak{F}$ contains the Fr\'{e}chet filter. Here is the first definition for filters that we would like to introduce. \begin{definition} \label{def-poor} A free filter $\mathfrak{F}$ on ${\mathbb N}$ is called \emph{poor} if every pairwise $\mathfrak{F}$-almost disjoint collection $\mathcal A = \{A_\gamma \}_{\gamma \in \Gamma} \subset \mathfrak{G}(\mathfrak{F})$ of subsets is at most countable. \end{definition} The following easy lemma was stated in \cite[Lemma 2.4]{Kadets2016} for almost disjoint sets.
The generalization to $\mathfrak{F}$-almost disjointness is straightforward, as the proof is copied from \cite[Lemma 2.4]{Kadets2016} almost word for word. \begin{lemma} \label{s2-lem-alm-disj} Let $\mu$ be a statistical measure. Then the corresponding filter $\mathfrak{F} = \mathfrak{F}_\mu$ is poor. \end{lemma} \begin{proof} Let $A_\gamma \subset {\mathbb N}$, $\gamma \in \Gamma$, be a collection of pairwise $\mathfrak{F}$-almost disjoint subsets such that $A_\gamma \in \mathfrak{G}(\mathfrak{F})$ for all $\gamma \in \Gamma$ (that is, $\mu(A_\gamma) > 0$). Remark that since $\mu(A) = 0$ for every $A \in \mathcal{I}(\mathfrak{F})$, the finite-additivity formula $\mu\left(\bigcup_{k=1}^n D_k \right ) = \sum_{k=1}^n \mu(D_k)$ remains true for every finite collection of pairwise $\mathfrak{F}$-almost disjoint subsets. Now, for every $n \in {\mathbb N}$ denote $\Gamma_n = \{\gamma \in \Gamma \colon \mu(A_\gamma) > \frac1n\}$. Then for every finite subset $E \subset \Gamma_n$ we have the following estimation for the number of elements of $E$: $$ \# E < n \sum_{\gamma \in E} \mu(A_\gamma) = n \mu\Bigl(\bigcup_{\gamma \in E}A_\gamma \Bigr) \le n \mu({\mathbb N}) = n. $$ Consequently, $\# \Gamma_n < n $. Since $\Gamma = \bigcup_{n \in {\mathbb N}}\Gamma_n $, $\Gamma$ is at most countable. \end{proof} Now we are ready to formulate the promised ``simple sufficient condition'' that enables one to demonstrate in an elementary way that several standard filters are not generated by a statistical measure. \begin{definition} \label{def-conglomerated} A free filter $\mathfrak{F}$ on ${\mathbb N}$ is said to be \emph{conglomerated} if there is a disjoint sequence of sets $D_n \in \mathcal{I}(\mathfrak{F})$, $n \in {\mathbb N}$, such that $\bigcup_{n \in M} D_n \in \mathfrak{G}(\mathfrak{F})$ for every infinite subset $M \subset {\mathbb N}$. \end{definition} \begin{theorem} \label{thm-suffic-cond} If $\mathfrak{F}$ is a conglomerated filter, then $\mathfrak{F}$ is not poor and so, in particular, it is not generated by a statistical measure. \end{theorem} \begin{proof} It is well known (see, for example, \cite[Page 77]{Sierp1958}) that ${\mathbb N}$ contains an uncountable family $\Gamma$ of pairwise almost disjoint infinite subsets (in fact, a family of continuum cardinality). Define for each $\gamma \in \Gamma$ $$ A_\gamma = \bigcup_{n \in \gamma} D_n. $$ Then the family $\{A_\gamma \}_{\gamma \in \Gamma}$ is uncountable, pairwise $\mathfrak{F}$-almost disjoint and $A_\gamma \in \mathfrak{G}(\mathfrak{F})$ for every $\gamma \in \Gamma$. \end{proof} Our next aim is to present some consequences of the previous theorem. The first immediate consequence deals with the Fr\'{e}chet filter. \begin{corollary} \label{cor1-lem-alm-disj} The Fr\'echet filter $\mathfrak{F}_{Fr}$ is conglomerated so, in particular, it is not generated by a statistical measure. \end{corollary} \begin{proof} Just take $D_n = \{n\}$. \end{proof} For a sequence $s=(s_k)$ of non-negative real numbers such that $\sum_{k=1}^{\infty} s_k=\infty$, the \emph{summable ideal} $\mathcal{I}^{s}$ is defined as the collection of those subsets $A\subset{\mathbb{N}}$ such that $\sum_{k\in A} s_k < \infty$. The corresponding filter $\mathfrak{F}^{s} = \{{\mathbb N} \setminus A \colon A \in \mathcal{I}^{s}\}$ is called the \emph{summable filter}. Then $\mathcal{I}(\mathfrak{F}^s) = \mathcal{I}^s$, and $\mathfrak{G}(\mathfrak{F}^{s}) = \{B \subset {\mathbb N} \colon \sum_{k\in B} s_k = \infty\}$.
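A couple of concrete examples may help to fix these notions; they are given only as an illustration. For $s = (1,1,1,\ldots)$ the summable ideal $\mathcal{I}^{s}$ is just the ideal of finite subsets of ${\mathbb N}$, so $\mathfrak{F}^{s} = \mathfrak{F}_{Fr}$. For $s_k = 1/k$ the set of squares $\{j^2 \colon j \in {\mathbb N}\}$ belongs to $\mathcal{I}^{s}$, since $\sum_{j} j^{-2} < \infty$, while the set of even numbers belongs to $\mathfrak{G}(\mathfrak{F}^{s})$, since $\sum_{k \,\mathrm{even}} 1/k = \infty$.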
\begin{theorem} \label{thm-summ-filt} For every sequence $s=(s_k)$ as above, the corresponding summable filter $\mathfrak{F}^{s}$ is conglomerated. \end{theorem} \begin{proof} Denote $d_1 = 0$. We know that $\sum_{k \in {\mathbb N}} s_k = \infty$, so there exists $d_2 \in {\mathbb N}$ such that $ \sum_{k=1}^{d_2} s_k \geq 1$. Obviously $\sum_{k=d_2+1}^{\infty} s_k = \infty$, so there is $d_3 \in {\mathbb N}$, $d_3 > d_2$, with $\sum_{k=d_2+1}^{d_3} s_k \geq 1$. Continuing this procedure, we obtain a sequence $d_1 < d_2 < d_3 < \ldots$ such that for all $n \in {\mathbb N}$ $$ \sum_{k=d_n+1}^{d_{n+1}} s_k \geq 1. $$ Denote $D_1 = \overline{d_1 + 1, d_2}$, $D_2 = \overline{d_2 + 1, d_3}$, and so on. These $D_n$ form a disjoint sequence of sets. All $D_n$ are finite, so $D_n \in \mathcal{I}(\mathfrak{F}^s)$. Finally, for every infinite subset $M \subset {\mathbb N}$ we have $$ \sum_{k \in \bigcup_{n \in M} D_n} s_k = \sum_{n \in M} \sum_{k=d_n+1}^{d_{n+1}} s_k = \infty, $$ so $\bigcup_{n \in M} D_n \in \mathfrak{G}(\mathfrak{F}^s)$. This means that for $\mathfrak{F} = \mathfrak{F}^{s}$ all the conditions of Definition \ref{def-conglomerated} are fulfilled. \end{proof} In the terminology of \cite{Just}, for a sequence $s=(s_k)$ of non-negative real numbers such that $\sum_{k=1}^{\infty} s_k=\infty$, the \emph{Erd\"{o}s-Ulam ideal} $\mathcal{EU}_{s}$ is the ideal of all those $A \subset {\mathbb N}$ such that $d_s (A) = 0$, where \begin{equation*} d_s(A) =\limsup_{k \to \infty}\frac{\sum_{i\in A\cap \overline{1,k}}s_{i}}{\sum_{i=1}^{k}s_{i}}. \end{equation*} In order to ensure that $\mathcal{EU}_{s}$ is not the same as the ideal of finite subsets of ${\mathbb N}$ one may, following \cite{Just}, add the condition \begin{equation*} \lim_{k \to \infty}\frac{s_{k}}{\sum_{i=1}^{k}s_{i}} = 0, \end{equation*} but, for our purposes, this additional restriction is superfluous. The corresponding filter $\mathcal{EU}^{s} = \{{\mathbb N} \setminus A \colon A \in \mathcal{EU}_{s}\}$ is called the \emph{Erd\"{o}s-Ulam filter}. Then, $\mathcal{I}(\mathcal{EU}^{s}) = \mathcal{EU}_{s}$, and $\mathfrak{G}(\mathcal{EU}^{s}) = \{B \subset {\mathbb N} \colon d_s(B) > 0\}$. \begin{theorem} \label{thm-eros-ulam} For every sequence $s=(s_k)$ as above, the corresponding Erd\"{o}s-Ulam filter $\mathcal{EU}^{s}$ is conglomerated. \end{theorem} \begin{proof} Denote $d_1 = 0$, $d_2 = 1$ and $D_1 = \{1\}$. Then, evidently, $$ \frac{\sum_{i\in D_1}s_{i}}{\sum_{i=1}^{1}s_{i}} = 1 > \frac{1}{2}. $$ Let us demonstrate that it is possible to construct recursively a sequence $d_1 < d_2 < d_3 < \ldots$ and corresponding sets $D_n = \overline{d_n + 1, d_{n+1}}$ in such a way that for all $n \in {\mathbb N}$ \begin{equation} \label{eq-EU-1} \frac{\sum_{i\in D_n}s_{i}}{\sum_{i=1}^{d_{n+1}}s_{i}} > \frac{1}{2}. \end{equation} Indeed, let $d_j$ be already constructed for $j = 1, \ldots, n$. Since $$ \lim_{k \to \infty}\frac{\sum_{i = d_n+1}^{k}s_{i}}{\sum_{i=1}^{k}s_{i}} = 1, $$ there is a $k > d_n$ such that $$ \frac{\sum_{i = d_n+1}^{k}s_{i}}{\sum_{i=1}^{k}s_{i}} > \frac{1}{2}. $$ It remains to take this particular $k$ as $d_{n+1}$. Now, when we have all the $d_n$ and the corresponding $D_n$, we see, as in the previous theorem, that the $D_n$ form a disjoint sequence of sets and, being finite, they are elements of the ideal $\mathcal{EU}_{s}$.
Finally, for every infinite subset $M = \{m_1, m_2, \ldots\} \subset {\mathbb N}$ we have that \begin{align*} d_s\left(\bigcup_{m \in M} D_m\right) &=\limsup_{k \to \infty}\frac{\sum\limits_{i\in \bigcup_{m \in M} D_m \cap \overline{1,k}}s_{i}}{\sum_{i=1}^{k}s_{i}} \\ &\ge \limsup_{k \to \infty}\frac{\sum\limits_{i\in \bigcup_{m \in M} D_m \cap \overline{1, d_{m_k+1}}}s_{i}}{\sum_{i=1}^{d_{m_k+1}}s_{i}} \\ & \ge \limsup_{k \to \infty}\frac{\sum_{i\in D_{m_{k}}}s_{i}}{\sum_{i=1}^{d_{m_k + 1} }s_{i}} \, \overset{\eqref{eq-EU-1}}\ge \, \frac{1}{2} \, > \, 0. \end{align*} So, $\bigcup_{m \in M} D_m \in \mathfrak{G}(\mathcal{EU}^{s})$. \end{proof} The filter $\mathfrak{F}_{st}$ generating the famous statistical convergence of sequences is just $\mathcal{EU}^{s}$ for $s = (1,1,1, \ldots)$. So the previous theorem implies (with a simple and clear proof) the main result of \cite{Kadets2016}, which in turn answered a question from \cite{ChengHuaZhou2016}: \begin{corollary} \label{cor-stat-filt} The filter $\mathfrak{F}_{st}$ is not generated by a statistical measure. \end{corollary} \section{Intersections of families of ultrafilters} \label{section:intersection} For a collection $W$ of subsets of $2^{\mathbb N}$ we denote by $\cap W$ the intersection of all members of that collection, that is, $$ \cap W=\left\{B\subset {\mathbb N} \colon B\in \mathcal{U}\ \forall \mathcal{U}\in W\right\}. $$ \begin{definition} \label{repesentation} A collection $W$ of free ultrafilters is said to be a \emph{representation} of the filter $\mathfrak{F}$ if $\cap W = \mathfrak{F}$. \end{definition} Let us start with two easy remarks. \begin{lemma} \label{lem-filters-incl} Let $\mathfrak{F}_1$, $\mathfrak{F}_2$ be free filters on ${\mathbb N}$ with $\mathfrak{F}_1 \subset \mathfrak{F}_2$. Then $\mathfrak{G}(\mathfrak{F}_2) \subset \mathfrak{G}(\mathfrak{F}_1)$ so, in particular, $\mathfrak{F}_2\subset \mathfrak{G}(\mathfrak{F}_1)$. \end{lemma} \begin{proof} As $\mathfrak{F}_1 \subset \mathfrak{F}_2$, $\mathcal{I}(\mathfrak{F}_1)\subset \mathcal{I}(\mathfrak{F}_2)$, so $\mathfrak{G}(\mathfrak{F}_2)=2^{\mathbb N}\setminus \mathcal{I}(\mathfrak{F}_2)$ is contained in $2^{\mathbb N}\setminus\mathcal{I}(\mathfrak{F}_1)=\mathfrak{G}(\mathfrak{F}_1)$. Finally, recall that $\mathfrak{F}_2\subset \mathfrak{G}(\mathfrak{F}_2)$, so $\mathfrak{F}_2\subset \mathfrak{G}(\mathfrak{F}_1)$. \end{proof} \begin{lemma} \label{lem-filters-excl} Let $\mathfrak{F}$ be a free filter on ${\mathbb N}$. Then, $A \in 2^{\mathbb N} \setminus \mathfrak{F}$ if and only if $({\mathbb N} \setminus A) \in \mathfrak{G}(\mathfrak{F})$. \end{lemma} \begin{proof} If $A\notin \mathfrak{F}$, then ${\mathbb N}\setminus A\notin \mathcal{I}(\mathfrak{F})$, so $({\mathbb N}\setminus A)\in \mathfrak{G}(\mathfrak{F})$. Conversely, if $({\mathbb N} \setminus A) \in \mathfrak{G}(\mathfrak{F})$, then ${\mathbb N}\setminus A\notin \mathcal{I}(\mathfrak{F})$, so $A\notin \mathfrak{F}$. \end{proof} The following easy remark complements the well-known fact that every filter $\mathfrak{F}$ is equal to the intersection of all ultrafilters that contain $\mathfrak{F}$. \begin{theorem} \label{thm-continuum} Let $\mathfrak{F}$ be a free filter on ${\mathbb N}$. Then, there exists a family $W$ of ultrafilters on ${\mathbb N}$ such that $\mathfrak{F} = \cap W$ and $W$ is of at most continuum cardinality.
\end{theorem} \begin{proof} By Lemma \ref{lem-filters-excl}, for every $A \in 2^{\mathbb N} \setminus \mathfrak{F}$ the family of sets $\{{\mathbb N} \setminus A\}\cup \mathfrak{F}$ is centered, so there is a filter that contains $\{{\mathbb N} \setminus A\}\cup \mathfrak{F}$ and, consequently, we may select an ultrafilter $\mathfrak{U}_A$ such that $(\{{\mathbb N} \setminus A\}\cup \mathfrak{F}) \subset \mathfrak{U}_A$. In other words, $\mathfrak{F} \subset \mathfrak{U}_A$, but $A \notin \mathfrak{U}_A$. Then $\mathfrak{F} = \bigcap\limits_{A \in (2^{\mathbb N} \setminus \mathfrak{F})} \mathfrak{U}_A$, so $W = \{\mathfrak{U}_A \colon A \in (2^{\mathbb N} \setminus \mathfrak{F})\}$ provides the required representation of $\mathfrak{F}$. \end{proof} The next lemma explains better the structure of the intersection of a family of filters. \begin{lemma} \label{lem-incl-union} Let $W = \{\mathfrak{F}_\gamma\}_{\gamma \in \Gamma}$ be a collection of free filters on ${\mathbb N}$. Then, given $B_\gamma \in \mathfrak{F}_\gamma$ for every $\gamma \in \Gamma$, the set $\bigcup_{\gamma \in \Gamma} B_\gamma$ is an element of $\cap W$. \end{lemma} \begin{proof} For every $j \in \Gamma$ we have that $\bigcup_{\gamma \in \Gamma} B_\gamma \supset B_j$ and $B_j \in \mathfrak{F}_j$, so by the filter axioms $\bigcup_{\gamma \in \Gamma} B_\gamma \in \mathfrak{F}_j$. \end{proof} Our next goal is to show that a representation of a given filter as a finite intersection of ultrafilters, if it exists, is unique. \begin{lemma} \label{lem-incl} Let $W = \{\mathfrak{U}_1, \mathfrak{U}_2,\ldots,\mathfrak{U}_n\}$ be a finite set of free ultrafilters on ${\mathbb N}$, and $\mathfrak{U}$ be a free ultrafilter such that $\cap W \subset \mathfrak{U}$. Then $\mathfrak{U} \in W$. \end{lemma} \begin{proof} We have to show that there exists $k \in \overline{1, n}$ such that $\mathfrak{U} = \mathfrak{U}_k$. Let us assume to the contrary that $\mathfrak{U} \neq \mathfrak{U}_k$ for every $k \in \overline{1, n}$. This means that for every $k \in \overline{1, n}$ there exists $B_k \in \mathfrak{U}_k$ such that $B_k \notin \mathfrak{U}$. As $\mathfrak{U}$ is an ultrafilter, $({\mathbb N} \setminus {B_k}) \in \mathfrak{U}$ for all $k \in \overline{1, n}$ and, consequently, $${\mathbb N} \setminus{\bigcup_{k=1}^{n} B_k} = \bigcap_{k=1}^{n} ({\mathbb N} \setminus {B_k}) \in \mathfrak{U}.$$ This means that $\bigcup_{k=1}^{n} B_k \notin \mathfrak{U}$. But according to Lemma \ref{lem-incl-union}, $\bigcup_{k=1}^{n} B_k \in \cap W \subset \mathfrak{U}$, which leads to a contradiction. \end{proof} \begin{theorem} \label{thm-incl} Let $W_1$ and $W_2$ be finite collections of ultrafilters on ${\mathbb N}$, such that $\cap W_1 = \cap W_2$. Then $W_1 = W_2$. \end{theorem} \begin{proof} For every $\mathfrak{U} \in W_1$ we have that $\mathfrak{U} \supset \cap W_1 = \cap W_2$. By Lemma \ref{lem-incl} this gives that $\mathfrak{U} \in W_2$. So $W_1 \subset W_2$. By the same argument $W_2 \subset W_1$. \end{proof} \begin{definition} \label{def-min coll} A collection $W$ of free ultrafilters consisting of at least two elements is said to be \emph{minimal} if for every $\mathfrak{U} \in W$ $$ \cap W \neq \cap (W \setminus \{\mathfrak{U}\}). $$ A free filter $\mathfrak{F}$ on ${\mathbb N}$ is said to be \emph{min-representable} if either it is an ultrafilter or it possesses a minimal representation $W$.
\end{definition} Theorem \ref{thm-incl} implies that every finite set of ultrafilters is minimal, so the intersection of a finite set of ultrafilters is min-representable. In the sequel, we are going to study what more can be said about minimal representations. First of all, we show that not every filter is min-representable. \begin{lemma} \label{lem-filt-notin-ultfilt} Let $\mathfrak{F}_0$ be a free filter and $\mathfrak{U}$ be a free ultrafilter on ${\mathbb N}$ such that $\mathfrak{F}_0 \not\subset \mathfrak{U}$. Denote $\mathfrak{F} = \mathfrak{F}_0 \cap \mathfrak{U}$. Then, for every $D \in \mathfrak{F}$ there are $A \in \mathfrak{U}$ and $B \in \mathfrak{F}_0$ such that $D = A \sqcup B$. Moreover, the trace $\mathfrak{F}|_A$ of $\mathfrak{F}$ on $A$ is the same as $\mathfrak{U}|_A$. \end{lemma} \begin{proof} Since $\mathfrak{F}_0 \not\subset \mathfrak{U}$, there is a $K \in (\mathfrak{F}_0 \setminus \mathfrak{U})$. Denote $B = K \cap D$. We know that both $K$ and $D$ are elements of $\mathfrak{F}_0$, so $B \in \mathfrak{F}_0$ as we need. Now, $K \notin \mathfrak{U}$, so by the ultrafilter criterion $({\mathbb N} \setminus K) \in \mathfrak{U}$. Consequently, $D \setminus B = D \setminus K = D \cap ({\mathbb N} \setminus K) \in \mathfrak{U}$, which means that $A := D \setminus B$ is what we need. For every $C \in \mathfrak{U}|_A$ we have that $C \sqcup B \in \mathfrak{F}_0 \cap \mathfrak{U} = \mathfrak{F}$, so $C = A \cap (C \sqcup B) \in \mathfrak{F}|_A$. This demonstrates that the filter $\mathfrak{F}|_A$ on $A$ majorizes the ultrafilter $\mathfrak{U}|_A$ on $A$, so $\mathfrak{F}|_A = \mathfrak{U}|_A$. \end{proof} \begin{theorem} \label{thm-restriction-ultraf} If a free filter $\mathfrak{F}$ possesses a minimal representation $W$, then, for every $\mathfrak{U} \in W$, there is $A \in \mathfrak{U}$ such that the trace $\mathfrak{F}|_A$ of $\mathfrak{F}$ on $A$ is the same as $\mathfrak{U}|_A$. \end{theorem} \begin{proof} Denote $\mathfrak{F}_0 = \cap (W \setminus \{\mathfrak{U}\})$. By minimality, $\mathfrak{F}_0 \not\subset \mathfrak{U}$ (otherwise we would have $\cap W = \mathfrak{F}_0 = \cap (W \setminus \{\mathfrak{U}\})$). Also, $\mathfrak{F} = \mathfrak{F}_0 \cap \mathfrak{U}$. Then, Lemma \ref{lem-filt-notin-ultfilt} applied for $D = {\mathbb N}$ provides us with $A \in \mathfrak{U}$ and $B \in \mathfrak{F}_0$ such that ${\mathbb N} = A \sqcup B$ and $\mathfrak{F}|_A = \mathfrak{U}|_A$. \end{proof} The last theorem motivates the following definition. \begin{definition} \label{def-extr-indec} A free filter $\mathfrak{F}$ on ${\mathbb N}$ is said to be \emph{extremely not min-representable} if for every $A \in \mathfrak{G}(\mathfrak{F})$ the trace $\mathfrak{F}|_A$ is not an ultrafilter. \end{definition} Remark that for an extremely not min-representable filter $\mathfrak{F}$ every representation $W$ of $\mathfrak{F}$ consisting of more than one element is ``extremely non-minimal'' in the following sense: for every $\mathfrak{U} \in W$ $$ \mathfrak{F} = \cap (W \setminus \{\mathfrak{U}\}). $$ \begin{theorem} \label{thm-extremely-not-min-repr} The Fr\'echet filter $\mathfrak{F}_{Fr}$, all Erd\"{o}s-Ulam filters $\mathcal{EU}^{s}$ and all summable filters $\mathfrak{F}^{s}$ are extremely not min-representable. \end{theorem} \begin{proof} We present the demonstration for $\mathfrak{F}_{Fr}$. The other two cases are also easy to manage. We have that $A \in \mathfrak{G}(\mathfrak{F}_{Fr})$ if and only if $A$ is infinite. Then, $\mathfrak{F}_{Fr}|_A$ consists of those $B \subset A$ such that $A \setminus B$ is finite.
So if we write $A$ as a union $A = B_1 \sqcup B_2$ of two infinite sets, then none of them belongs to $\mathfrak{F}_{Fr}|_A$. So, $\mathfrak{F}_{Fr}|_A$ is not an ultrafilter on $A$. \end{proof} \begin{theorem} \label{thm-uncountable-minimal} For every cardinality $\alpha$ smaller than the continuum, there exists a free filter with a minimal representation of exactly that cardinality. \end{theorem} \begin{proof} Let $\Gamma \subset 2^{\mathbb N}$ be a family of cardinality $\alpha$ consisting of pairwise almost disjoint infinite subsets. For each $A \in \Gamma$, pick an ultrafilter $\mathfrak{U}_A$ such that $A \in \mathfrak{U}_A$. Let us demonstrate that $W = \{\mathfrak{U}_A \colon A \in \Gamma\}$ is a minimal collection of ultrafilters (which, of course, is a representation for $\mathfrak{F} := \cap W$). Indeed, for every $B \in \Gamma$ and every $A \in (\Gamma \setminus \{B\})$, the almost disjointness implies that $(A \setminus B) \in \mathfrak{U}_A$. Denote $D = \bigcup_{A \in (\Gamma \setminus \{B\}) } (A \setminus B)$. By Lemma \ref{lem-incl-union}, $D \in \cap (W \setminus \{\mathfrak{U}_B\})$. On the other hand, $D \notin \mathfrak{U}_B$, which means that $D \notin \mathfrak{F}$. So, we have demonstrated that $\mathfrak{F} \neq \cap (W \setminus \{\mathfrak{U}_B\})$, which completes the proof. \end{proof} \begin{theorem} \label{thm-repr-for-minimal} Let $W = \{\mathfrak{U}_k\}_{k=1}^{n}$ be a finite or countable minimal collection of free ultrafilters, where $n \in ({\mathbb N} \cup \{\infty\})$, $n \ge 2$, is the number of elements in $W$, and let $\mathfrak{F} = \cap W$. Then \begin{enumerate} \item there exists a partition of ${\mathbb N}$ into a disjoint family of subsets $\{N_k\}_{k=1}^{n}$ such that $N_k \in \mathfrak{U}_k$ for all $k$. \item a set $A \subset {\mathbb N}$ is an element of $\mathfrak{F}$ if and only if there is a collection of sets $\{A_k\}_{k=1}^{n}$ such that $A_k \in \mathfrak{U}_k$, $A_k \subset N_k$, and $A = \bigsqcup_{k=1}^n A_k$. \end{enumerate} \end{theorem} \begin{proof} In order to ensure (1), we may construct the needed subsets $\{N_k\}_{k=1}^{n}$ recursively, using Lemma \ref{lem-filt-notin-ultfilt} at each step. Indeed, for each $k < n$ denote $\mathfrak{F}_k = \bigcap_{j=k+1}^n \mathfrak{U}_j$. Since ${\mathbb N} \in \mathfrak{F} = (\mathfrak{U}_1 \cap \mathfrak{F}_1)$ and, by minimality, $\mathfrak{U}_1 \not\supset \mathfrak{F}_1$, an application of Lemma \ref{lem-filt-notin-ultfilt} provides us with $N_1 \in \mathfrak{U}_1$ and $B_1 \in \mathfrak{F}_1$ such that ${\mathbb N} = N_1 \sqcup B_1$. Now $B_1 \in \mathfrak{F}_1 = (\mathfrak{U}_2 \cap \mathfrak{F}_2)$ and, by minimality, $\mathfrak{U}_2 \not\supset \mathfrak{F}_2$, so we obtain $N_2 \in \mathfrak{U}_2$ and $B_2 \in \mathfrak{F}_2$ such that $B_1 = N_2 \sqcup B_2$. Continuing this process, we either stop at the $n$-th step if $n < \infty$, or proceed up to infinity. In any case, we get a disjoint family of subsets $\{N_k\}_{k=1}^{n}$ such that $N_k \in \mathfrak{U}_k$ for all $k$. If $\bigsqcup_{k=1}^{n} N_k = {\mathbb N}$, we are done. Otherwise, it remains to replace $N_1$ by $N_1 \sqcup \left({\mathbb N} \setminus \bigsqcup_{k=1}^{n} N_k\right)$. In item (2), one direction of the statement is just Lemma \ref{lem-incl-union}. For the other direction, taking $A \in \mathfrak{F}$ it is sufficient to define the needed $A_k \in \mathfrak{U}_k$ by the formula $A_k = A \cap N_k$. \end{proof} The next corollary complements Lemma \ref{lem-incl-union} in the case of the intersection of a finite family of ultrafilters.
\begin{corollary} \label{coroll-repr-for-fin-int} Let $W = \{\mathfrak{U}_1, \mathfrak{U}_2,\ldots,\mathfrak{U}_n\}$, $n \ge 2$, be a finite collection of free ultrafilters, and let $\mathfrak{F} = \cap W$. Then, there exists a partition of ${\mathbb N}$ into a disjoint family of subsets $\{N_k\}_{k=1}^{n}$ such that $N_k \in \mathfrak{U}_k$ for every $k \in \overline{1, n}$, satisfying that a set $A \subset {\mathbb N}$ is an element of $\mathfrak{F}$ if and only if $A = \bigsqcup_{k=1}^n A_k$ for some elements $A_k \in \mathfrak{U}_k$ with $A_k \subset N_k$ for every $k \in \overline{1, n}$. \end{corollary} \begin{proof} Every finite collection of ultrafilters is minimal by Theorem \ref{thm-incl}, so Theorem \ref{thm-repr-for-minimal} is applicable. \end{proof} The descriptions given in Theorem \ref{thm-repr-for-minimal} for finite $n$ and for $n = \infty$ look very similar. Nevertheless, the infinite case loses some nice properties of the finite case, which is reflected in the following theorem. \begin{theorem}\label{thm_inf_int} Let $W = \{\mathfrak{U}_k\}_{k=1}^{\infty}$ be a countable minimal collection of free ultrafilters, and let $\mathfrak{F} = \cap W$. Then there exists a free ultrafilter $\mathfrak{U}_0$ such that $\mathfrak{U}_0 \supset \mathfrak{F}$ but $\mathfrak{U}_0 \notin W$. In particular, the representation of $\mathfrak{F}$ as a countable intersection of ultrafilters is not unique: $\mathfrak{F} = \bigcap_{k=1}^{\infty} \mathfrak{U}_k$ and at the same time $\mathfrak{F} = \bigcap_{k=0}^{\infty} \mathfrak{U}_k$. \end{theorem} \begin{proof} Take the sets $N_k$ from Theorem \ref{thm-repr-for-minimal} and consider the following family $G$ of sets: $G = \{A \subset {\mathbb N}\colon \exists j \in {\mathbb N} \ \forall k > j\ A \cap N_k \in \mathfrak{U}_k\}$. Evidently, $G \supset \mathfrak{F}$. Let us show that \begin{enumerate} \item[(i)] the family $G$ is a filter; \item[(ii)] $\mathfrak{U}_k \not\supset G$ $\forall k \in {\mathbb N}$. \end{enumerate} For item (i), let us check that $G$ satisfies the axioms of a filter. \begin{itemize} \item $\emptyset \notin G$, because $\emptyset \notin \mathfrak{U}_k$ for each $k \in {\mathbb N}$; \item let $A, B \in G$. We have to show that $A \cap B \in G$. As $A, B \in G$, there exists $j_1 \in {\mathbb N}$ such that $A \cap N_k \in \mathfrak{U}_k$ for all $k > j_1$, and there exists $j_2 \in {\mathbb N}$ such that $B \cap N_k \in \mathfrak{U}_k$ for all $k > j_2$. Denote $j:=\max\{j_1,j_2\}$. Then, for every $k > j$, $(A \cap B) \cap N_k = (A \cap N_k) \cap (B \cap N_k) \in \mathfrak{U}_k$, so $A \cap B \in G$; \item let $A \in G$, $D \subset {\mathbb N}$, $A \subset D$. Let us show that $D \in G$. We know that $A \in G$, which means that there exists $j_1 \in {\mathbb N}$ such that $A \cap N_k \in \mathfrak{U}_k$ for all $k > j_1$. As $D \cap N_k \supset A \cap N_k$, $A \cap N_k \in \mathfrak{U}_k$, and $\mathfrak{U}_k$ is a filter, we obtain that $D \cap N_k \in \mathfrak{U}_k$ for all $k > j_1$. That is, $D \in G$. We have shown that $G$ is a filter. \end{itemize} In order to prove statement (ii), it is enough to remark that for every $k \in {\mathbb N}$ the corresponding $A_k = \bigcup_{j=k+1}^\infty N_j$ belongs to $G$ but $A_k \notin \mathfrak{U}_k$, because it does not intersect the set $N_k \in \mathfrak{U}_k$. Let us take as the needed $\mathfrak{U}_0$ an arbitrary ultrafilter that majorizes $G$. Then $\mathfrak{U}_0 \supset G \supset \mathfrak{F}$, and $\mathfrak{U}_k \neq \mathfrak{U}_0$ for all $k \in {\mathbb N}$.
The latter is true because $\mathfrak{U}_k \not\supset G$ for any $k \in {\mathbb N}$ but $\mathfrak{U}_0 \supset G$. \end{proof} Although in the infinite case representations are not unique, the \emph{minimal} representation, if it exists, has to be unique; we will show this below in Theorem \ref{thm_min-unique}. \begin{definition} \label{def-inavoidable} Let $\mathfrak{F}$ be a free filter and $\mathfrak{U}$ be a free ultrafilter on ${\mathbb N}$. $\mathfrak{U}$ is said to be \emph{unavoidable} for $\mathfrak{F}$ if every representation $W$ of $\mathfrak{F}$ contains $\mathfrak{U}$ as an element. \end{definition} Lemma \ref{lem-filt-notin-ultfilt} implies that, if $\mathfrak{U}$ is an unavoidable ultrafilter for $\mathfrak{F}$, then there is $A \in \mathfrak{U}$ such that the trace $\mathfrak{F}|_A$ of $\mathfrak{F}$ on $A$ is the same as $\mathfrak{U}|_A$. The inverse implication is also true. \begin{lemma} \label{lem-inavoidable-inverse} Let $\mathfrak{F}$ be a free filter and $\mathfrak{U}$ be a free ultrafilter on ${\mathbb N}$. Assume that there is $A \in \mathfrak{U}$ such that the trace $\mathfrak{F}|_A$ of $\mathfrak{F}$ on $A$ is the same as $\mathfrak{U}|_A$. Then $\mathfrak{U}$ is unavoidable for $\mathfrak{F}$. \end{lemma} \begin{proof} Let $W$ be any representation of $\mathfrak{F}$ and let $A$ be as in the hypothesis. Then, ${\mathbb N} \setminus A \not\in \mathfrak{F}$ (otherwise $\emptyset \in \mathfrak{F}|_A$), so there is $\widetilde{\mathfrak{U}} \in W$ such that ${\mathbb N} \setminus A \not\in \widetilde{\mathfrak{U}}$. Since $\widetilde{\mathfrak{U}}$ is an ultrafilter, we obtain that $A \in \widetilde{\mathfrak{U}}$. Then, $\widetilde{\mathfrak{U}}|_A \supset \mathfrak{F}|_A = \mathfrak{U}|_A$, so $\mathfrak{U}|_A$ is a base for both $\mathfrak{U}$ and $\widetilde{\mathfrak{U}}$ at the same time, that is, $\widetilde{\mathfrak{U}} = \mathfrak{U}$. \end{proof} \begin{theorem}\label{thm_min-unique} \emph{(a)} If $W$ is a minimal collection of free ultrafilters and $\mathfrak{F} = \cap W$, then each $\mathfrak{U} \in W$ is unavoidable for $\mathfrak{F}$. Consequently, \emph{(b)} $\mathfrak{F}$ does not have any other minimal representation outside of $W$. \end{theorem} \begin{proof} Item (a) follows from Theorem \ref{thm-restriction-ultraf} and Lemma \ref{lem-inavoidable-inverse}. Statement (b) evidently follows from (a). \end{proof} \section{Remarks and open problems} \label{seq-problems} At present, the theory of filters generated by a single statistical measure is only making its first steps. The number of examples is limited; consequently, one may build many hypotheses which may later be destroyed by a clever example. Nevertheless, we find it natural to share with interested colleagues those, perhaps naive, questions that we are not able to answer at this stage. Lemma \ref{s2-lem-alm-disj} says that every filter generated by a statistical measure is poor. So, \begin{problem} \label{prob2} Is it true that every poor free filter is generated by a statistical measure? \end{problem} According to Theorem \ref{thm-suffic-cond}, every conglomerated filter is not poor. So, \begin{problem} \label{prob2+} Is it true that every free filter that is not poor is conglomerated? \end{problem} A formally weaker question can be the following: \begin{problem} \label{prob2++} Is it true that every non-conglomerated filter is generated by a statistical measure? \end{problem} Remark that the answers may depend on the continuum hypothesis, so the problems may also be stated as consistency questions.
We next collect some remarks and problems, which we divide into three subsections depending on whether they are related to Baire category, measurability, or shift invariance. \subsection{Remarks and problems related to Baire category} The analysis of the proof of \cite[Theorem 15.5]{ToWa2016} gives the following theorem: if a free filter $\mathfrak{F}$ (or, equivalently, the corresponding ideal $\mathcal{I}$) considered as a subspace of the topological space $2^{{\mathbb N}}$ is meager, then there is a family $\Gamma \subset \mathfrak{G}(\mathfrak{F})$ of continuum cardinality consisting of pairwise almost disjoint infinite subsets. Consequently, such an $\mathfrak{F}$ is not poor and cannot be generated by a statistical measure. This theorem is the main ingredient of the proof of the already mentioned fact that a filter generated by a statistical measure, considered as a subspace of $2^{{\mathbb N}}$, cannot have the property of Baire. Recall that in the product topology on $2^{{\mathbb N}}$ the standard base of neighborhoods of a set $A \subset {\mathbb N}$ consists of the neighborhoods $$ U_n(A) = \{B \subset {\mathbb N} \colon B \cap \overline{1, n} = A \cap \overline{1, n}\}. $$ The proof of \cite[Theorem 15.5]{ToWa2016} mentioned above proceeds as follows. For a meager ideal $\mathcal{I}$ one takes a sequence of nowhere dense subsets $V_n \subset 2^{{\mathbb N}}$ with $\bigcup_{n=1}^\infty V_n \supset \mathcal{I}$ and constructs recursively a tree $A_0$, $A_1$, $A_{0,0}$, $A_{0,1}$, $A_{1,0}$, $A_{1,1}$, $A_{0,0,0}$, etc., of finite subsets of ${\mathbb N}$ and a sequence $m_1 < m_2 < \ldots$ of naturals with the properties that $A_{t_1, t_2, \ldots, t_n} \subset \overline{1, m_n}$ for any multi-index $t = (t_1, t_2, \ldots, t_n ) \in \{0,1\}^n$; that for extensions $(t_1, t_2, \ldots, t_n, t_{n+1} ) \in \{0,1\}^{n+1}$ of $t$ the inclusions $$A_{t_1, t_2, \ldots, t_n} \subset A_{t_1, t_2, \ldots, t_n, t_{n+1}}\quad \textrm{and} \quad A_{t_1, t_2, \ldots, t_n, t_{n+1}} \setminus A_{t_1, t_2, \ldots, t_n} \subset \overline{m_n + 1, m_{n + 1}} $$ take place; and that $U_{m_n}(A_{t_1, t_2, \ldots, t_n}) \bigcap V_n = \emptyset$. The corresponding family $\Gamma \subset \mathfrak{G}(\mathfrak{F})$ of pairwise almost disjoint infinite subsets is made up of the infinite branches of this tree: for every sequence $(t_1, t_2, \ldots) \in \{0,1\}^{\mathbb N}$ one takes $\bigcup_{n=1}^\infty A_{t_1, t_2, \ldots, t_n}$ as an element of $\Gamma$. If this tree could be built with the additional property that $$\quad A_{t_1, t_2, \ldots, t_{n-1}, 0} \setminus A_{t_1, t_2, \ldots, t_{n-1}}= \emptyset \quad \textrm{and} \quad A_{t_1, t_2, \ldots, t_{n-1}, 1} \setminus A_{t_1, t_2, \ldots, t_{n-1}} = D_n, $$ where the $D_n$ do not depend on the choice of the $t_k$, then $\mathfrak{F}$ would be conglomerated. This leads to the following problem. \begin{problem} \label{prob02+} Let a free filter $\mathfrak{F} \subset 2^{{\mathbb N}}$ be meager. Does this imply that $\mathfrak{F}$ is conglomerated? \end{problem} \subsection{Remarks and results related to measurability} The natural probability measure $p(\{0\}) = p(\{1\}) = \frac12$ on $\{0, 1\}$ induces the standard product probability measure $\nu$ on $2^{{\mathbb N}}$. The $\sigma$-algebra $\Sigma$ of $\nu$-measurable subsets of $2^{{\mathbb N}}$ contains the Borel $\sigma$-algebra $\mathfrak{B}$ on $2^{\mathbb{N}}$. Denote by $\nu^*$ the corresponding outer measure.
If $\mathfrak{U}$ is a free ultrafilter, then, according to Sierpi\'nski \cite{Sierp1945}, see also \cite[Lemma 464Ca]{Fremlin}, $\nu^*(\mathfrak{U})=1$. Talagrand \cite{Talagrand}, see also \cite[Lemma 464Cb]{Fremlin}, demonstrated that $\nu^*(\mathfrak{F})=1$ for every filter that is a countable intersection of ultrafilters. As $A \mapsto \mathbb{N}\setminus A$ is a measure-preserving bijection of $2^{\mathbb{N}}$, we have that for such filters also $\nu^*(\mathcal{I}(\mathfrak{F})) = 1$, so the inclusion $2^{\mathbb{N}} \supset \mathfrak{F} \sqcup \mathcal{I}(\mathfrak{F})$ shows that a countable intersection of ultrafilters is not $\nu$-measurable. One may ask whether every $\mathfrak{F}$ generated by a statistical measure is not $\nu$-measurable. The answer is negative, by a surprisingly easy probabilistic argument \cite[Example 464Jb]{Fremlin}. Namely, the coordinate maps $\phi_n \colon 2^{{\mathbb N}} \to \{0, 1\}$, $\phi_n(A) = 1$ if $n \in A$ and $\phi_n(A) = 0$ if $n \notin A$, form an independent sequence of Bernoulli random variables on the probability space $2^{\mathbb{N}}$. Fix an ultrafilter $\mathfrak{U}$ on ${\mathbb N}$ and define the statistical measure $\mu_{\mathfrak{U}}$ by the formula $\mu_{\mathfrak{U}}(A) = \lim_{\mathfrak{U}}\frac{1}{n}\sum_{k=1}^n \phi_k(A)$. According to the Strong Law of Large Numbers, $\frac{1}{n}\sum_{k=1}^n \phi_k$ tends to $\frac{1}{2}$ with probability $1$, so $$ \nu\left(\left\{A \in 2^{{\mathbb N}}\colon \mu_{\mathfrak{U}}(A) = \frac{1}{2} \right\}\right) = 1. $$ Consequently, $\nu\left(\mathfrak{F}_{\mu_{\mathfrak{U}}}\right) = 0$, and $\mathfrak{F}_{\mu_{\mathfrak{U}}}$ is $\nu$-measurable. Combining this with Talagrand's result cited above, we obtain the following corollary. \begin{corollary} \label{cor-stat-not-cinter} There is a free filter $\mathfrak{F}$ generated by a statistical measure which cannot be represented as a countable intersection of ultrafilters. All filters of the form $\mathfrak{F}_{\mu_{\mathfrak{U}}}$ are such examples. \end{corollary} \subsection{Remarks and problems related to shift invariance}\label{ssec4.3} Recall that a \emph{generalized Banach limit} is a bounded linear functional ${\rm Lim}$ defined on the space $\ell_\infty$ of all bounded sequences of reals and having the following properties: \begin{itemize} \item[-] if $x =(x_1 ,x_2 ,\ldots,x_n ,\ldots)$ has a limit, then $ {\rm Lim} \,x = \lim_{n \to \infty} x_n$; \item[-] if all $x_k \ge 0$ then $ {\rm Lim} \,x \ge 0$; \item[-] if $y =(x_2 ,x_3 ,\ldots,x_{n+1} ,\ldots)$ then $ {\rm Lim} \,x = {\rm Lim} \,y$. \end{itemize} The existence of such a functional is usually deduced from the Hahn-Banach Theorem. It is known that ${\rm Lim}$ is not unique. For example, in \cite[Section 16.1.3, Exercise 11]{Kad2018} it is shown that for every free ultrafilter $\mathfrak{U}$ on ${\mathbb N}$ the functional that maps each $x =(x_1 ,x_2 ,\ldots,x_n ,\ldots) \in \ell_\infty$ to the $\mathfrak{U}$-limit of its arithmetic means $x_1, \frac{x_1 + x_2}{2}, \frac{x_1 + x_2 + x_3}{3}, \ldots$ is a generalized Banach limit. To each generalized Banach limit ${\rm Lim}$ corresponds the statistical measure $\mu_{{\rm Lim}}$ that sends each $A \subset {\mathbb N}$ to ${\rm Lim}(\mathds{1}_A)$, and the filter $\mathfrak{F}$ of those $A \subset {\mathbb N}$ for which ${\rm Lim}(\mathds{1}_A) = 1$.
The additional property of these filters is their shift-invariance: for every $A = \{n_1, n_2, \ldots\} \in \mathfrak{F}$ the corresponding shift $A + 1 = \{n_1+1, n_2+1, \ldots\}$ also lies in $\mathfrak{F}$. Corollary \ref{cor-stat-not-cinter} implies that some shift-invariant filters cannot be represented as a countable intersection of ultrafilters. On the other hand, given a free ultrafilter $\mathfrak{U}$ and an integer $n \in {\mathbb Z}$, we can define the shift $\mathfrak{U} + n$ as the filter whose base is $\{(A + n)\cap {\mathbb N} \colon A \in \mathfrak{U}\}$. Then, $\bigcap_{n \in {\mathbb Z}}(\mathfrak{U} + n)$ is a shift-invariant filter which has the form of the intersection of a countable family of ultrafilters. What is not quite clear to us is whether $\bigcap_{n \in {\mathbb Z}}(\mathfrak{U} + n)$ is generated by a shift-invariant statistical measure. This leads to the following question: \begin{problem} \label{prob4} May a free filter $\mathfrak{F}$ generated by a shift-invariant statistical measure be equal to the intersection of some countable family of free ultrafilters? \end{problem} Some properties of statistical measures finer than shift-invariance were discussed in \cite{Douwen}, where the word ``diffuse'' was used instead of ``statistical''. For filters generated by the corresponding measures, the respective variants of Problem \ref{prob4} make sense as well. The last questions concern the existence of minimal representations. \begin{problem} \label{prob6} Assume $\mathfrak{F}$ has a countable representation. Does this imply that $\mathfrak{F}$ has a minimal representation? \end{problem} \begin{problem} \label{prob7} Does there exist a countable collection $\{\mathfrak{U}_k\}_{k=1}^{\infty}$ of free ultrafilters such that $$ \bigcap_{k=n}^\infty \mathfrak{U}_k = \bigcap_{k=1}^\infty \mathfrak{U}_k $$ for every $n \in {\mathbb N}$? \end{problem} \noindent{\bfseries Acknowledgements:} The first author gratefully thanks Prof. Miguel Mart\'{\i}n for hospitality and fruitful discussions during his visit to the University of Granada in January-February 2020.
\section*{\textbf{#1}}} \newcommand{\mathbb{A}}{\mathbb{A}} \newcommand{\mathbb{B}}{\mathbb{B}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{D}}{\mathbb{D}} \newcommand{\mathbb{E}}{\mathbb{E}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\mathbb{G}}{\mathbb{G}} \newcommand{\mathbb{H}}{\mathbb{H}} \newcommand{\mathbb{I}}{\mathbb{I}} \newcommand{\mathbb{J}}{\mathbb{J}} \newcommand{\mathbb{K}}{\mathbb{K}} \newcommand{\mathbb{L}}{\mathbb{L}} \newcommand{\mathbb{M}}{\mathbb{M}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{O}}{\mathbb{O}} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{S}}{\mathbb{S}} \newcommand{\mathbb{T}}{\mathbb{T}} \newcommand{\mathbb{U}}{\mathbb{U}} \newcommand{\mathbb{V}}{\mathbb{V}} \newcommand{\mathbb{W}}{\mathbb{W}} \newcommand{\mathbb{X}}{\mathbb{X}} \newcommand{\mathbb{Y}}{\mathbb{Y}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathcal{D}}{\mathcal{D}} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\mathcal{F}}{\mathcal{F}} \newcommand{\mathcal{G}}{\mathcal{G}} \newcommand{\mathcal{H}}{\mathcal{H}} \newcommand{\mathcal{I}}{\mathcal{I}} \newcommand{\mathcal{J}}{\mathcal{J}} \newcommand{\mathcal{K}}{\mathcal{K}} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{\mathcal{N}}{\mathcal{N}} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\mathcal{P}}{\mathcal{P}} \newcommand{\mathcal{Q}}{\mathcal{Q}} \newcommand{\mathcal{R}}{\mathcal{R}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{T}}{\mathcal{T}} \newcommand{\mathcal{U}}{\mathcal{U}} \newcommand{\mathcal{V}}{\mathcal{V}} \newcommand{\mathcal{W}}{\mathcal{W}} \newcommand{\mathcal{X}}{\mathcal{X}} \newcommand{\mathcal{Y}}{\mathcal{Y}} \newcommand{\mathcal{Z}}{\mathcal{Z}} \newcommand{\mathfrak{G}}{\mathfrak{G}} \newcommand{\mathfrak{S}}{\mathfrak{S}} \newcommand{\mathfrak{p}}{\mathfrak{p}} \newcommand{\mathfrak{E}}{\mathfrak{E}} \renewcommand{\aa}{\alpha} \newcommand{\beta}{\beta} \newcommand{\lambda}{\lambda} \newcommand{\Lambda}{\Lambda} \newcommand{\partial}{\partial} \newcommand{\vspace{.4cm}}{\vspace{.4cm}} \newcommand{\mathbb{C} \mathbb{P}}{\mathbb{C} \mathbb{P}} \newcommand{\overline}{\overline} \newcommand{\epsilon}{\epsilon} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\text{sf}}{\text{sf}} \newcommand{\downarrow}{\downarrow} \newcommand{\uparrow}{\uparrow} \newcommand{\textnormal{hex}}{\textnormal{hex}} \newcommand{\textnormal{sign}}{\textnormal{sign}} \newcommand{\textnormal{triv}}{\textnormal{triv}} \newcommand{\overleftarrow}{\overleftarrow} \newcommand{\overrightarrow}{\overrightarrow} \newcommand{\textnormal{star}}{\textnormal{star}} \newcommand{\textnormal{CMG}}{\textnormal{CMG}} \newcommand{\text{opp}}{\text{opp}} \newcommand{\twoheadrightarrow}{\twoheadrightarrow} \newcommand{\hookrightarrow}{\hookrightarrow} \newcommand{\text{wt}}{\text{wt}} \newcommand{\textnormal{rowspan}}{\textnormal{rowspan}} \newcommand{\mathbf{P}}{\mathbf{P}} \newcommand{\mathbf{X}}{\mathbf{X}} \newcommand{\mathbf{Y}}{\mathbf{Y}} \DeclareMathOperator{\Dih}{\textnormal{Dih}} \DeclareMathOperator{\Gr}{\textnormal{Gr}} \DeclareMathOperator{\tGr}{\hat{\textnormal{Gr}}} \DeclareMathOperator{\GL}{\textnormal{GL}} \DeclareMathOperator{\SL}{\textnormal{SL}} \DeclareMathOperator{\Conf}{\textnormal{Conf}} \DeclareMathOperator{\Sym}{\textnormal{Sym}} 
\DeclareMathOperator{\Trop}{\textnormal{Trop}} \DeclareMathOperator{\Hom}{Hom} \DeclareMathOperator{\Ext}{Ext} \DeclareMathOperator{\End}{End} \DeclareMathOperator{\Ker}{Ker} \DeclareMathOperator{\CoKer}{CoKer} \DeclareMathOperator{\Spec}{Spec} \DeclareMathOperator{\Proj}{Proj} \DeclareMathOperator{\Tor}{Tor} \DeclareMathOperator{\Span}{\textnormal{span}} \DeclareMathOperator{\QAut}{\textnormal{QAut}} \DeclareMathOperator{\qaut}{\textnormal{qaut}} \DeclareMathOperator{\sign}{\textnormal{sign}} \DeclareMathOperator{\SignedPerms}{\textnormal{SignedPerms}} \DeclareMathOperator{\Aut}{\textnormal{Aut}} \title{Quasi-homomorphisms of cluster algebras} \author{Chris Fraser} \date{} \keywords{Cluster algebra, seed orbit, quasi-homomorphism, cluster modular group, tagged mapping class group.} \thanks{This work was supported by a graduate fellowship from the National Physical Science Consortium and NSF grant DMS-1361789. } \subjclass[2010]{13F60} \address{Department of Mathematics, University of Michigan, Ann Arbor, MI, 48109, USA} \email{[email protected]} \setcounter{tocdepth}{1} \begin{document} \setcounter{tocdepth}{1} \numberwithin{equation}{section} \begin{abstract} We introduce quasi-homomorphisms of cluster algebras, a flexible notion of a map between cluster algebras of the same type (but with different coefficients). The definition is given in terms of seed orbits, the smallest equivalence classes of seeds on which the mutation rules for non-normalized seeds are unambiguous. We present examples of quasi-homomorphisms involving familiar cluster algebras, such as cluster structures on Grassmannians, and those associated with marked surfaces with boundary. We explore the related notion of a quasi-automorphism, and compare the resulting group with other groups of symmetries of cluster structures. For cluster algebras from surfaces, we determine the subgroup of quasi-automorphisms inside the tagged mapping class group of the surface. \end{abstract} \maketitle \vspace{-.3in} \section*{Introduction} The general structural theory of cluster algebras has been well developed during the 15 years since their inception \cite{CAI}. Despite this, there does not seem to be a consensus on what the ``right'' notion of a homomorphism between cluster algebras should be-- several such notions have arisen in different mathematical settings, see e.g.~\cite{ADS, ASS, ChangZhua, ChangZhub, Reading, ReadingSurfaces}. From our perspective, the key difficulty in defining homomorphisms of cluster algebras is rooted in the fact that the construction of a cluster algebra involves three operations: the addition, the multiplication, and the auxiliary addition used in the normalization condition. Most preexisting notions of a ``cluster homomorphism'' are designed to respect all three of these operations, a rather restrictive requirement. We suggest instead that even in the ordinary (i.e., normalized) setting, it is fruitful to consider maps that only preserve the structures intrinsic to non-normalized cluster algebras, ignoring the auxiliary addition. This leads us to the concept of \emph{seed orbits} and to the mutation patterns these orbits form. The morphisms between such mutation patterns are the main object of our interest; we call them \emph{quasi-homomorphisms}. This paper is devoted to a systematic study of quasi-homomorphisms and related algebraic constructs. 
\begin{center} \rule{6cm}{0.4pt} \end{center} A cluster algebra is defined by specifying a distinguished set of generators (called \emph{cluster variables}) inside an ambient field of rational functions in $n$ variables. Starting from an \emph{initial cluster} of $n$ cluster variables, the remaining cluster variables are obtained by iterating algebraic steps called \emph{mutations}. Each mutation produces a new cluster from a current one by exchanging one cluster variable for a new one. The specific rules for computing the latter are encoded by two additional ingredients, an $n \times n$ \emph{exchange matrix}~$B$ and a \emph{coefficient tuple}~$\mathbf{p}$ consisting of elements of some fixed \emph{coefficient group}. The triple consisting of the cluster, the exchange matrix, and the coefficient tuple is called a \emph{seed}. When a cluster mutates, the ingredients~$B$ and~$\mathbf{p}$ also do: the new matrix $B'$ is given explicitly in terms of $B$, and the new tuple $\mathbf{p}'$ satisfies a constraint involving~$\mathbf{p}$ and~$B$. A collection of seeds related to each other by mutations in all possible directions forms a \emph{seed pattern}. In the most general cluster algebra setup -- that of \emph{non-normalized seed patterns}~\cite{CAIII,CATSII, CAI} -- the mutation recipe does not uniquely specify the new coefficient tuple~$\mathbf{p}'$ from~$\mathbf{p}$ and~$B$. This ambiguity propagates through iterated mutations, and consequently the set of cluster variables is not uniquely determined by the initial seed. The usual way to remove this ambiguity is to impose the additional assumption that the coefficient group is endowed with an additional operation of ``auxiliary addition'' (making it into a \emph{semifield}), and then require the corresponding \emph{normalization condition} to hold at every seed. This assumption is satisfied for the most important examples of cluster algebras arising in representation theory. In this paper, we make use of another way of removing the ambiguity by considering \emph{seed orbits}, the smallest equivalence classes of seeds on which the mutation rules are unambiguous. This gives rise to the concept of a mutation pattern of seed orbits. Such a pattern is determined uniquely by any one of its constituent seed orbits. The natural notion of a homomorphism between two mutation patterns of seed orbits brings us to the definition of a \emph{quasi-homomorphism}, a rational map (more precisely, a semifield homomorphism) that respects the seed orbit structure and commutes with mutations. Though the appropriate context for defining quasi-homomorphisms is that of non-normalized seed patterns, we see two ways in which quasi-homomorphisms are useful in the structural theory of ordinary (normalized) seed patterns. First, it is important to understand the relationships between cluster algebras with the same underlying pattern of exchange matrices but with different choices of coefficients. One celebrated result of this kind is the \emph{separation of additions formula} (\cite[Theorem $3.7$]{CAIV}). For a given mutation pattern of exchange matrices, this formula expresses the cluster variables in a cluster algebra with \emph{any} choice of coefficients in terms of those in a cluster algebra with a special choice of \emph{principal coefficients}. Our Proposition \ref{qhnormalizedprop} puts this formula in a wider context, in which every quasi-homomorphism between a pair of normalized seed patterns witnesses its own separation of additions. 
This idea can be used to construct a new cluster algebra starting from a known one. (More precisely, using a known cluster structure on an algebra~$R$, one can produce a cluster structure on another algebra~$R'$ by describing an appropriate map from~$R$ to~$R'$.) Second, the naturally defined concept of a \emph{quasi-automorphism} gives rise to the \emph{quasi-automorphism group} of a seed pattern. This group interpolates between previously defined groups that are either too sensitive to coefficients (these groups are too small) or don't refer to coefficients at all (these groups are too large). Most cluster algebras arising in applications have nontrivial coefficients, and these cluster algebras often afford nontrivial self-maps that are quasi-automorphisms of the cluster structure. The twist map on the Grassmannian~\cite{MarshScott} is one important example, cf.~Remark~\ref{twistremark}. In a forthcoming companion paper we construct a large group of quasi-automorphisms of the Grassmannian cluster algebras \cite{Scott} whose action on cluster variables has a simple description. Much of the abstract setup in this paper was developed with that application in mind. \medskip The paper is organized as follows. Section \ref{PreliminariesSecn} presents background on non-normalized seed patterns. This is mostly standard and taken from \cite{CATSII, CAIV}, but with emphasis on the notion of the \emph{ambient semifield}, cf.~Definition~\ref{ambientsemifielddefn}. The section ends with a motivating example: a pair of seed patterns which will illustrate the various notions in subsequent sections. A reader familiar with cluster algebras can skim this section and head directly to Example \ref{GrandBandsExchangeGraphs}. Sections \ref{SeedOrbitsSecn} and \ref{QHSecn} are the conceptual core of the paper. We define seed orbits as the smallest equivalence classes of non-normalized seeds on which the mutation rule is unambiguous. In Proposition \ref{equivalentseedscondition} we give a more explicit characterization of seed orbits as orbits with respect to a rescaling action on seeds. Section \ref{QHSecn} introduces quasi-homomorphisms of seed patterns and their basic properties. We end this section by describing the key differences between quasi-homomorphisms and some preexisting notions, specifically \emph{rooted cluster morphisms} \cite{ADS} and \emph{coefficient specializations} \cite{CAII, Reading, ReadingSurfaces}. In Section \ref{NormalizedQHs} we discuss quasi-homomorphisms between normalized seed patterns. For seed patterns of geometric type, we relate quasi-homomorphisms to linear combinations of the rows of an extended exchange matrix, making connections to the separation of additions formula and to gradings on cluster algebras. Section \ref{NervesSecn} introduces the easiest way of specifying a quasi-homomorphism in practice, by checking that a given semifield map sends cluster variables to rescaled cluster variables on a \emph{nerve}. In Section \ref{QASecn} we define the \emph{quasi-automorphism group} of a seed pattern and compare it with the cluster modular group \cite{FG} and the group of cluster automorphisms \cite{ASS}. Sections \ref{SurfacesSecn} and \ref{SurfacesProofsSecn} focus on cluster algebras associated with bordered marked surfaces~\cite{CATSI, CATSII}. The main result is Theorem \ref{whichgworkwithL} describing the quasi-automorphism group of such a cluster algebra as a subgroup of the tagged mapping class group (excluding a few exceptional surfaces). 
In particular, it establishes that regardless of the choice of coefficients in such a cluster algebra, the quasi-automorphism group is always a finite index subgroup of the cluster modular group. The concept of a nerve introduced in Section \ref{NervesSecn} is new and includes as a special case the star neighborhood of a vertex. Star neighborhoods show up in the algebraic Hartogs' principle argument used to establish that a given cluster algebra is contained in another algebra \cite[Proposition 3.6]{tensors}. In Appendix \ref{StarfishSecn} we extend this argument from a star neighborhood to an arbitrary nerve. Section \ref{GrassmannianandBands} illustrates the techniques in Section \ref{NormalizedQHs}. We generalize Example \ref{GrandBandsExchangeGraphs} by describing a quasi-isomorphism between the Grassmannian cluster algebras \cite{Scott} and polynomial rings arising as coordinate rings of \emph{band matrices}. \section*{Acknowledgements} I would like to thank Ian Le, Greg Muller, and Gregg Musiker for helpful conversations. I especially thank Sergey Fomin for many conversations and suggestions. This work was supported by a graduate fellowship from the National Physical Science Consortium and NSF grant DMS-1361789. While in the midst of carrying out this work, I learned that Thomas Lam and David Speyer had independently obtained results similar to Corollary \ref{geomtypeseedhom} and Remark \ref{separationofadditionsrmk}. \section{Preliminaries on seed patterns}\label{PreliminariesSecn} A (non-normalized) cluster algebra is constructed from a set of data called a \emph{non-normalized seed pattern}. We define this data now while fixing standard notation. For a number $x$ we let $[x]_+ := \max (x,0)$. We let $\textnormal{sign}(x)$ equal either $-1$, $0$ or $1$ according to whether $x$ is negative, zero, or positive. We denote $\{1,\dots,n\}$ by $[1,n]$. The setup begins with a choice of \emph{ambient field of rational functions} $\mathcal{F}$ with coefficients in a \emph{coefficient group} $\mathbf{P}$. The coefficient group is an abelian multiplicative group without torsion. The ambient field is a field of rational functions in $n$ variables with coefficients in $\mathbf{P}$: it is the set of expressions that can be made out of $n$ elements $x_1,\dots,x_n$ and the elements of $\mathbf{P}$, using the standard arithmetic operations $+,-,\times$ and $\div$, under the usual notion of equivalence of such rational expressions. The integer $n$ is called the \emph{rank}. \begin{defn}[Non-normalized seed, \cite{CAI,CATSII}]\label{nnseeddefn} Let $\mathbf{P}$ and $\mathcal{F}$ be as above. A \emph{non-normalized seed} in $\mathcal{F}$ is a triple $\Sigma = (B,\mathbf{p},\mathbf{x})$, consisting of the following three ingredients: \begin{itemize} \item a skew-symmetrizable $n \times n$ matrix $B = (b_{ij})$, \item a \emph{coefficient tuple} $\mathbf{p} = (p_1^{\pm},\dots,p_n^{\pm})$ consisting of $2n$ elements in $\mathbf{P}$, \item a \emph{cluster} $\mathbf{x} = (x_1,\dots,x_n)$ in $\mathcal{F}$, whose elements (called \emph{cluster variables}) are algebraically independent and freely generate $\mathcal{F}$ over $\mathbb{Q} \mathbf{P}$. \end{itemize} \end{defn} The more restrictive notion of \emph{normalized seed} is given in Definition \ref{normalizedpatterndefn}. Normalized seeds are much more studied in the literature, where they are usually simply called \emph{seeds}. Thus, we persistently use the adjective \emph{non-normalized} in our setting, although this is a little clumsy. 
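For a minimal illustration of Definition \ref{nnseeddefn} (the elements $u$ and $v$ below are chosen only for this illustration), take $n = 2$, let $\mathbf{P}$ be the free abelian multiplicative group on two generators $u,v$, and let $\mathcal{F}$ be the corresponding ambient field of rational functions in $x_1,x_2$. Then \[ \Sigma = \left( \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},\ (p_1^+,p_1^-,p_2^+,p_2^-) = (u,1,1,v),\ (x_1,x_2) \right) \] is a non-normalized seed in $\mathcal{F}$: the matrix is skew-symmetric (hence skew-symmetrizable), the coefficient tuple consists of $2n = 4$ elements of $\mathbf{P}$, and $x_1,x_2$ freely generate $\mathcal{F}$ over $\mathbb{Q} \mathbf{P}$. 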
\begin{defn} A \emph{labeled $n$-regular tree}, $\mathbb{T}_n$, is an $n$-regular tree with edges labeled by integers so that the set of labels emanating from each vertex is $[1,n]$. We write $t \xrightarrow{k} t'$ to indicate that vertices $t,t'$ are joined by an edge with label $k$. An isomorphism $\mathbb{T}_n \to \overline{\mathbb{T}}_n$ of labeled trees $\mathbb{T}_n$ and $\overline{\mathbb{T}}_n$ sends vertices to vertices and edges to edges, preserving incidences of edges and the edge labels. Such an isomorphism is uniquely determined by its value at a single vertex $t \in \mathbb{T}_n$. \end{defn} \begin{defn}[Non-normalized seed pattern, \cite{CATSII,CAI}]\label{nnseedpatterndefn} Let $\mathbf{P}$ and $\mathcal{F}$ be as above. A collection of non-normalized seeds in~$\mathcal{F}$, with one seed~$\Sigma(t)= (B(t),\mathbf{p}(t),\mathbf{x}(t))$ for each~$t \in \mathbb{T}_n$, is called a \emph{non-normalized seed pattern} if for each edge~$t \xrightarrow{k} t'$, the seeds $\Sigma(t)$ and $\Sigma(t')$ are related by a \emph{mutation in direction~$k$}: \begin{itemize} \item The matrices $B(t)$ and $B(t')$ are related by a \emph{matrix mutation} \begin{equation}\label{brecurrence} b_{ij}(t') = \begin{cases} -b_{ij}(t) & \text{ if } i = k \text{ or } j = k \\ b_{ij}(t)+ \textnormal{sign}(b_{ik}(t))[b_{ik}(t)b_{kj}(t)]_+ & \text{ otherwise,} \end{cases} \end{equation} \item the coefficient tuples $\mathbf{p}(t)$ and $\mathbf{p}(t')$ are related by \begin{equation}\label{nnyrecurrenceatk} p^\pm_k(t') = p^\mp_k(t) \text{ and } \end{equation} \begin{equation}\label{nnyrecurrence} \frac{p^+_j(t')}{p^-_j(t')} = \begin{cases} \frac{p^+_j(t)}{p^-_j(t)}\, p^{+}_k(t)^{b_{kj}(t)} & \text{ if } b_{kj}(t) \geq 0 \\ \frac{p^+_j(t)}{p^-_j(t)}\, p^{-}_k(t)^{b_{kj}(t)} & \text{ if } b_{kj}(t) \leq 0 \end{cases} \end{equation} when $j \neq k$, \item and the clusters $\mathbf{x}(t)$ and $\mathbf{x}(t')$ are related by \begin{equation}\label{nnxrecurrenceatj} x_j(t') = x_j(t) \text{ for $j \neq k$, and } \end{equation} \begin{equation}\label{nnxrecurrence} x_k(t)x_k(t') = p^+_k(t) \prod_j x_j(t)^{[b_{jk}(t)]_+} + p^-_k(t) \prod_j x_j(t)^{[-b_{jk}(t)]_+}, \end{equation} the latter of which is called an \emph{exchange relation}. \end{itemize} \end{defn} The rules \eqref{brecurrence} through \eqref{nnxrecurrence} are ambiguous, meaning $\Sigma(t')$ is not determined uniquely from $\Sigma(t)$. Indeed, since \eqref{nnyrecurrence} only mentions the ratio $\frac{p^+_j(t')}{p^-_j(t')}$, for each $j \neq k$ one can rescale both of $p^+_j(t')$ and $p^-_j(t')$ by a common element of $\mathbf{P}$ while preserving \eqref{nnyrecurrence}. We write~$\Sigma \overset{\mu_k}{\leftrightsquigarrow} \Sigma'$ to indicate that two seeds~$\Sigma$ and~$\Sigma'$ are related by a mutation in direction~$k$; this condition is symmetric in~$\Sigma$ and~$\Sigma'$. Thinking of \eqref{nnxrecurrence} as a recipe for computing $x_k(t')$ from $\Sigma(t)$, we crucially observe that the computation is \emph{subtraction-free}: the only operations needed are $+,\times$ and $\div$ in $\mathcal{F}$. This motivates the following definition: \begin{defn}[Ambient semifield]\label{ambientsemifielddefn} Let $\mathcal{E}$ be a non-normalized seed pattern, and $\mathbf{x}(t)$ one of its clusters. The \emph{ambient semifield}, $\mathcal{F}_{>0} = \mathcal{F}_{>0}(\mathcal{E}) \subset \mathcal{F}$, is the subset of all elements which can be given as a \emph{subtraction-free} rational expression in the elements of $\mathbf{x}(t)$, with coefficients in $\mathbf{P}$. 
Thus, it is the set of rational functions which can be built out of~$x_1(t),\dots,x_n(t)$ and the elements of $\mathbf{P}$ using the operations $+,\times$ and $\div$ in $\mathcal{F}$. \end{defn} Since \eqref{nnxrecurrence} is subtraction-free, $\mathcal{F}_{>0}$ is independent of the choice of $t$ (it only depends on $\mathcal{E}$), and every cluster variable for $\mathcal{E}$ lies in $\mathcal{F}_{>0}$. Recall that a \emph{semifield} is an abelian multiplicative group with an additional binary operation (called the auxiliary addition) that is commutative and associative, and over which the multiplication distributes. The ambient semifield is a semifield with respect to the multiplication and addition operations in $\mathcal{F}$, justifying its name. Homomorphisms between semifields are defined in the obvious way. The ambient semifield has the following universality property. \begin{lem}[{\cite[Definition $2.1$]{CAIV}}]\label{universalsemifieldlemma} Let $\mathcal{E}$ be a non-normalized seed pattern with coefficient group $\mathbf{P}$ and ambient semifield $\mathcal{F}_{>0}$. Fix a cluster $\mathbf{x}(t)$ in $\mathcal{E}$. Let $\mathcal{S}$ be \emph{any} semifield. Then given a multiplicative group homomorphism $\mathbf{P} \to \mathcal{S}$, and a function $\mathbf{x}(t) \to \mathcal{S}$, there exists a unique semifield homomorphism $\mathcal{F}_{>0} \to \mathcal{S}$ agreeing with the given maps on $\mathbf{P} \cup \mathbf{x}(t)$. \end{lem} The following elements of $\mathcal{F}_{>0}$ will play a prominent role in Section \ref{SeedOrbitsSecn}. \begin{defn}[Hatted variables]\label{yhatsdefn} Let $\mathcal{E}$ be a non-normalized seed pattern. Let $\hat{\mathbf{y}}(t) = (\hat{y}_1(t),\dots,\hat{y}_n(t))$ denote the $n$-tuple of \emph{hatted variables} \begin{equation}\label{hattedvarsdefn} \hat{y}_j(t) = \frac{p^+_j(t)}{p^-_j(t)} \prod_{i} x_i(t)^{b_{ij}(t)}, \end{equation} obtained by taking the ratio of the two terms on the right hand side of \eqref{nnxrecurrence}. \end{defn} The hatted variables in adjacent seeds determine each other as follows: \begin{prop}[{\cite[Proposition $2.9$]{CATSII}}] \label{nnyhatsmutate} Let $\mathcal{E} = (B(t),\mathbf{p}(t),\mathbf{x}(t))$ be a non-normalized seed pattern with hatted variables $\hat{\mathbf{y}}(t)$. For each edge $t \xrightarrow{k} t'$, the $n$-tuples $\hat{\mathbf{y}}(t)$ and $\hat{\mathbf{y}}(t')$ satisfy \begin{equation}\label{yrecurrencewplus} \hat{y}_{j}(t') = \begin{cases} \hat{y}_j(t)^{-1} & \text{ if } j = k \\ \hat{y}_j(t) \hat{y}_k(t)^{[b_{kj}(t)]_+}(\hat{y}_k(t) + 1)^{-b_{kj}(t)} & \text{ if } j \neq k. \end{cases} \end{equation} \end{prop} The propagation rule \eqref{yrecurrencewplus} takes place in $\mathcal{F}_{>0}$, and only depends on the $B$ matrix. \medskip The preceding discussion is what we will need for Section \ref{SeedOrbitsSecn}. We briefly recall a few more definitions which will be useful in presenting our examples. First, the \emph{exchange graph} $\mathbf{E}$ associated with a seed pattern $\mathcal{E}$ is the graph whose vertices are the \emph{unlabeled seeds} in $\mathcal{E}$, and whose edges correspond to mutations between these seeds. More precisely, permuting the indices $[1,n]$ in a non-normalized seed commutes with the mutation rules \eqref{brecurrence} through \eqref{nnxrecurrence}. The exchange graph is the $n$-regular graph obtained by identifying vertices $t_1,t_2 \in \mathbb{T}_n$ if the seeds $\Sigma(t_1)$ and $\Sigma(t_2)$ are permutations of each other. 
The \emph{star neighborhood} $\textnormal{star}(t)$ of a vertex $t \in \mathbf{E}$ is the set of $n$ edges adjacent to it. Rather than being indexed by $[1,n]$, the data in an unlabeled seed~$\Sigma(t)$ for $t \in \mathbf{E}$ is indexed by the $n$ seeds adjacent to~$\Sigma(t)$, i.e. by the elements of $\textnormal{star}(t)$. Second, in the concrete examples in this paper, we have chosen a distinguished finite set of elements called \emph{frozen variables}, and the coefficient group $\mathbf{P}$ is the free abelian multiplicative group of Laurent monomials in these frozen variables. The \emph{cluster algebra}~$\mathcal{A}$ associated with the seed pattern~$\mathcal{E}$ is the~$\mathbb{Z}$-algebra generated by the frozen variables and all of the cluster variables arising in the seeds of~$\mathcal{E}$. \begin{example}\label{GrandBandsExchangeGraphs} We now introduce a pair of affine algebraic varieties $\mathbf{X}$ and $\mathbf{Y}$ and a pair of seed patterns in their respective fields of rational functions. The cluster algebras associated with these seed patterns are the coordinate rings $\mathbb{C}[\mathbf{X}]$ and $\mathbb{C}[\mathbf{Y}]$. Both cluster algebras are of finite Dynkin type $A_2$. Let $\mathbf{X} = \tGr(3,5)$ be the affine cone over the Grassmann manifold of $3$-dimensional planes in $\mathbb{C}^5$. The points in $\mathbf{X}$ are the decomposable tensors $\{x \wedge y \wedge z \colon x,y,z \in \mathbb{C}^5\} \subset \Lambda^3(\mathbb{C}^5)$. Its coordinate ring is generated by the \emph{Pl\"ucker coordinates} $\Delta_{ijk}$ for $1 \leq i < j < k \leq 5$, extracting the coefficient of $e_i \wedge e_j \wedge e_k$ in $x \wedge y \wedge z$, where $e_1,\dots,e_5$ is the standard basis for $\mathbb{C}^5$. Representing a given $x \in \tGr(3,5)$ by a $3 \times 5$ matrix, $\Delta_{ijk}(x)$ is the maximal minor of this matrix in columns $i,j,$ and $k$. There is a well known cluster structure on $\mathbb{C}[\mathbf{X}]$ \cite{CAI,CAII}. It is a special case of a cluster structure for arbitrary Grassmannians constructed by Scott \cite{Scott}. The frozen variables are the Pl\"ucker coordinates consisting of cyclically consecutive columns \begin{equation} \Delta_{123}, \Delta_{234}, \Delta_{345}, \Delta_{145}, \Delta_{125} \label{FrozenXs}. \\ \end{equation} There are five cluster variables, listed in \eqref{ClusterXs} with cyclically adjacent pairs of cluster variables forming clusters \begin{equation} \Delta_{245}, \Delta_{235}, \Delta_{135}, \Delta_{134}, \Delta_{124}. \label{ClusterXs} \end{equation} The clusters and exchange relations are given in Figure \ref{GrThreeFiveFig}. All of the other data in the seed pattern can be determined from these. For example, focusing on the seed whose cluster is $(x_1,x_2) = (\Delta_{235}, \Delta_{245})$, from the first and fifth exchange relations in Figure \ref{GrThreeFiveFig} follows \begin{align} (p_1^+,p^-_1,p_2^+,p_2^-) &= (\Delta_{125}\Delta_{234},\Delta_{123},\Delta_{145},\Delta_{345}\Delta_{125}) \\ (\hat{y}_1, \hat{y}_2) &=(\displaystyle{\frac{\Delta_{125}\Delta_{234}}{\Delta_{123}\Delta_{245}}, \frac{\Delta_{145}\Delta_{235}}{\Delta_{345}\Delta_{125}}}). \end{align} The exchange relations are written so that mutating is moving clockwise in the exchange graph. If a mutation moves counterclockwise, one should swap the order of the two terms in the exchange relation. 
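As a quick illustration of how the exchange matrix is encoded in this data, comparing $(\hat{y}_1, \hat{y}_2)$ above with \eqref{hattedvarsdefn} forces $b_{21} = -1$ and $b_{12} = 1$: the factor $\Delta_{245}^{-1}$ in $\hat{y}_1 = \frac{p^+_1}{p^-_1}\, x_2^{b_{21}}$ gives $b_{21} = -1$, while the factor $\Delta_{235}$ in $\hat{y}_2 = \frac{p^+_2}{p^-_2}\, x_1^{b_{12}}$ gives $b_{12} = 1$, so that $B = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ at this seed. With this $B$, the exchange relation \eqref{nnxrecurrence} in direction $1$ reads $\Delta_{235}\, x_1' = \Delta_{125}\Delta_{234} + \Delta_{123}\Delta_{245}$, recovering $x_1' = \Delta_{124}$ via the fifth relation in Figure \ref{GrThreeFiveFig}. 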
\begin{figure}[ht] \begin{tabular}{ll} \begin{tikzpicture}[scale = .5] \def \rsize{3.25}; \foreach \s in {1,...,5} { \node at ({72*(-\s)+162}:\rsize) {$\bullet$}; \draw ({72*(-\s)+162}:\rsize) -- ({72*(-\s+1)+162}:\rsize); } \node at (90: 1.2*\rsize) {$ (\Delta_{235},\Delta_{245})$}; \node at (25 :1.5*\rsize) {$(\Delta_{135},\Delta_{235})$}; \node at (-52: 1.3*\rsize) {$(\Delta_{134}, \Delta_{135})$}; \node at (-140: 1.5*\rsize) {$(\Delta_{124}, \Delta_{134})$}; \node at (-210: 1.4*\rsize) {$(\Delta_{245}, \Delta_{124})$}; \end{tikzpicture} & { $\begin{aligned} \Delta_{245} \Delta_{135} &= \Delta_{145} \Delta_{235}+\Delta_{125}\Delta_{345} \\[.1cm] \Delta_{235} \Delta_{134} &= \Delta_{234} \Delta_{135}+\Delta_{123}\Delta_{345} \\[.1cm] \Delta_{135} \Delta_{124} &= \Delta_{125} \Delta_{134}+\Delta_{123}\Delta_{145} \\[.1cm] \Delta_{134} \Delta_{245} &= \Delta_{345} \Delta_{124}+\Delta_{145}\Delta_{234} \\[.1cm] \Delta_{124} \Delta_{235} &= \Delta_{123} \Delta_{245}+\Delta_{125}\Delta_{234} \end{aligned} $ } \end{tabular} \caption{The exchange graph for $\mathbb{C}[\mathbf{X}]$. The vertices are clusters and edges between vertices are mutations. Each mutation exchanges two cluster variables via an exchange relation listed in the table at top right. The extra data in each seed can be inferred from these exchange relations. \label{GrThreeFiveFig}} \end{figure} Second, let $\mathbf{Y} \cong \mathbb{C}^9$ be the affine space of \emph{band matrices} of the form \begin{equation} y = \begin{pmatrix} y_{1,1} &y_{1,2} & y_{1,3}& 0& 0 \\ 0 & y_{2,2} & y_{2,3} & y_{2,4}& 0 \\ 0 &0 & y_{3,3} & y_{3,4} & y_{3,5} \end{pmatrix}.\end{equation} Its coordinate ring $\mathbb{C}[\mathbf{Y}]$ contains the \emph{minors} $Y_{I,J}$. Evaluating $Y_{I,J}$ on $y \in \mathbf{Y}$ returns the minor of $y$ occupying rows $I$ and columns $J$, e.g. $Y_{i,j}(y) = y_{i,j}$ and $Y_{12,23}(y) = y_{1,2}y_{2,3}-y_{1,3}y_{2,2}$. Some of these minors factor, e.g. $Y_{12,13} = Y_{1,1}Y_{2,3}$. Figure \ref{BandThreeFiveFig} shows a seed pattern whose cluster algebra is $\mathbb{C}[\mathbf{Y}]$. The frozen variables are the following minors \begin{equation} Y_{1,1}, Y_{2,2}, Y_{3,3}, Y_{1,3},Y_{2,4},Y_{3,5}, Y_{123,234} \label{FrozenYs}. \\ \end{equation} The cluster variables are listed in \eqref{ClusterYs}, with cyclically adjacent pairs forming clusters \begin{equation} Y_{1,2}, Y_{12,23},Y_{2,3}, Y_{23,34}, Y_{3,4}\label{ClusterYs}. \end{equation} \begin{figure}[ht] \begin{center} \begin{tabular}{ll} \begin{tikzpicture}[scale = .5] \def \rsize{3.4}; \foreach \s in {1,...,5} { \node at ({72*(-\s)+162}:\rsize) {$\bullet$}; \draw ({72*(-\s)+162}:\rsize) -- ({72*(-\s+1)+162}:\rsize); } \node at (95: 1.2*\rsize) {$(Y_{12,23}, Y_{1,2})$}; \node at (30: 1.35*\rsize) {$(Y_{2,3}, Y_{12,23})$}; \node at (-60: 1.2*\rsize) {$(Y_{23,34}, Y_{2,3})$}; \node at (-140: 1.6*\rsize) {$(Y_{3,4}, Y_{23,34})$}; \node at (-205: 1.4*\rsize) {$(Y_{1,2}, Y_{3,4})$}; \end{tikzpicture} & {$ \begin{aligned} Y_{1,2}Y_{2,3} &= Y_{12,23}+Y_{2,2}Y_{1,3} \\[.1cm] Y_{12,23}Y_{23,34} &= Y_{123,234} Y_{2,3}+Y_{2,2}Y_{3,3}Y_{1,3}Y_{2,4} \\[.1cm] Y_{2,3}Y_{3,4} &= Y_{23,34}+Y_{3,3}Y_{2,4} \\[.1cm] Y_{23,34}Y_{1,2} &= Y_{2,2}Y_{1,3} Y_{3,4}+Y_{123,234} \\[.1cm] Y_{3,4}Y_{12,23} &= Y_{3,3}Y_{2,4} Y_{1,2}+Y_{123,234} \end{aligned} $ } \end{tabular} \caption{The exchange graph and exchange relations for $\mathbb{C}[\mathbf{Y}]$, mirroring Figure \ref{GrThreeFiveFig}. 
\label{BandThreeFiveFig}} \end{center} \end{figure} \end{example} \section{Seed orbits}\label{SeedOrbitsSecn} We introduce seed orbits by first describing them as equivalence classes under a certain equivalence relation on seeds. Proposition \ref{equivalentseedscondition} gives another characterization as orbits under an explicit rescaling action. \begin{defn} Let $\vec{k} = (k_1,\dots,k_\ell)$ be a sequence of elements of $[1,n]$. Choosing a base point $t_0 \in \mathbb{T}_n$, such a sequence determines a walk $t_0 \xrightarrow {k_1} t_1 \xrightarrow{k_2} \cdots \xrightarrow{k_\ell} t_\ell $ in $\mathbb{T}_n$. We say that~$\vec{k}$ is \emph{contractible} if this walk starts and ends at the same vertex of $\mathbb{T}_n$, i.e. $t_\ell = t_0$. Given non-normalized seeds $\Sigma$ and $\Sigma^*$, we write $\Sigma \sim \Sigma^*$ if there is a contractible sequence of mutations from $\Sigma$ to $\Sigma^*$, i.e. a contractible sequence $\vec{k}$ and non-normalized seeds $\Sigma_1,\dots,\Sigma_{\ell-1}$ such that \begin{equation}\label{equivalentseedsequence} \Sigma = \Sigma_0 \overset{\mu_{k_1}}{\leftrightsquigarrow} \Sigma_1 \overset{\mu_{k_2}}{\leftrightsquigarrow} \Sigma_2 \cdots \Sigma_{\ell-1} \overset{\mu_{k_\ell}}{\leftrightsquigarrow} \Sigma_{\ell} = \Sigma^*. \end{equation} \end{defn} Clearly, $\sim$ is an equivalence relation on non-normalized seeds. Furthermore, it removes the ambiguity present in mutation of non-normalized seeds: \begin{lem}\label{equivalenceclassesmutate} The mutation rule $\overset{\mu_k}{\leftrightsquigarrow}$ becomes unambiguous and involutive once it is thought of as a rule on equivalence classes of seeds under $\sim$. That is, fixing a $\sim$-equivalence class $\mathfrak{S}$ and a direction $k \in [n]$, the set of seeds \begin{equation} \{ \Sigma' \colon \Sigma' \overset{\mu_k}{\leftrightsquigarrow} \Sigma \text{ for some } \Sigma \in \mathfrak{S} \} \end{equation} is again a $\sim$-equivalence class of seeds. \end{lem} We now characterize $\sim$-equivalence classes explicitly. We say two elements $z,x \in \mathcal{F}$ are \emph{proportional}, written $z \asymp x$, if $\frac{z}{x} \in \mathbf{P}$. We emphasize that $\mathbf{P}$ does not include constants, e.g. $-1,2 \notin \mathbf{P}$, and thus $x$ is not proportional to $-x, 2x$, etc. \begin{prop}[Seed orbits]\label{equivalentseedscondition} Let $\Sigma = (B,\mathbf{p},\mathbf{x}), \Sigma^* = (B^*,\mathbf{p}^*,\mathbf{x}^*)$ be non-normalized seeds in $\mathcal{F}$, of rank $n \geq 2$, with $\mathbf{x} = (x_i),\mathbf{p} = (p^\pm_i), \mathbf{x}^* = (x^*_i),\mathbf{p}^* = ((p^*)^\pm_i)$. Then the following are equivalent: \begin{enumerate} \item $\Sigma \sim \Sigma^*$. \item $B = B^*, \hat{\mathbf{y}}(\Sigma) = \hat{\mathbf{y}}(\Sigma^*)$, and $x_i \asymp x^*_i$ for all $i$. \item $B = B^*$, and there exist scalars $c_1,\dots,c_n,d_1,\dots,d_n \in \mathbf{P}$, such that \begin{align} x^*_j &= \frac{x_j}{c_j} \label{rescalingxs} \\ (p^*)^\pm_j &= \frac{p^{\pm}_j}{d_j} \prod c_i^{[\pm b_{ij}]_+} \label{rescalingps}. \end{align} \end{enumerate} \end{prop} Equations \eqref{rescalingxs} and \eqref{rescalingps} define a \emph{rescaling action} of $\mathbf{P}^n \times \mathbf{P}^n$ on non-normalized seeds, denoted by $(\vec{c},\vec{d}) \cdot \Sigma$ where $(\vec{c},\vec{d}) \in \mathbf{P}^n \times \mathbf{P}^n$ and $\Sigma$ is a non-normalized seed. 
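As a concrete check of how the rescaling action interacts with the hatted variables of Definition \ref{yhatsdefn}, consider a seed of rank $2$ with $B = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, and rescale only the first cluster variable, i.e. take $\vec{c} = (c,1)$ and $\vec{d} = (1,1)$ for some $c \in \mathbf{P}$. Then \eqref{rescalingxs} and \eqref{rescalingps} give $x^*_1 = x_1/c$, $x^*_2 = x_2$, $(p^*)^{\pm}_1 = p^{\pm}_1$, $(p^*)^+_2 = c\, p^+_2$ and $(p^*)^-_2 = p^-_2$, so that \[ \hat{y}_2(\Sigma^*) = \frac{c\, p^+_2}{p^-_2} \left( \frac{x_1}{c} \right) = \frac{p^+_2}{p^-_2}\, x_1 = \hat{y}_2(\Sigma), \] and likewise $\hat{y}_1(\Sigma^*) = \hat{y}_1(\Sigma)$, in agreement with condition (2) of Proposition \ref{equivalentseedscondition}. 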
Proposition \ref{equivalentseedscondition} says that a $\sim$-equivalence class of non-normalized seeds is precisely a $\mathbf{P}^n \times \mathbf{P}^n$ orbit under this action; we henceforth refer to these equivalence classes as \emph{seed orbits}. \begin{proof} Conditions (2) and (3) are a re-translation of each other by immediate calculation. We show (1) implies (2). Defining a seed orbit by $(2)$, this implication follows from the fact that seed orbits are ``closed under mutation.'' More precisely, if $\Sigma$ and $\Sigma^\dagger = (\vec{c},\vec{d}) \cdot \Sigma$ are in the same seed orbit and $\Sigma'$ and $(\Sigma^\dagger)'$ are two seeds satisfying $\Sigma \overset{\mu_{k}}{\leftrightsquigarrow} \Sigma'$ and $\Sigma^\dagger \overset{\mu_{k}}{\leftrightsquigarrow} (\Sigma^\dagger)'$, then $\Sigma'$ and $(\Sigma^\dagger)'$ are in the same seed orbit. By \eqref{brecurrence} and Proposition \ref{nnyhatsmutate}, we know that $B' = (B^\dagger)'$ and $\hat{\mathbf{y}}' = (\hat{\mathbf{y}}^\dagger)'$, so the claim will follow if we check $(x^\dagger)'_j \asymp x'_j$ for all $j$. This is obvious when $j \neq k$ from \eqref{nnxrecurrenceatj}. When $j= k$, \eqref{nnxrecurrence} for the mutation $\Sigma^\dagger \overset{\mu_{k}}{\leftrightsquigarrow} (\Sigma^\dagger)'$ says that \begin{align}\label{nnxrecurrenceredone} (x^\dagger)'_k &= (x_k^\dagger)^{-1}((p^\dagger)^+_k \prod (x^\dagger_j)^{[b_{jk}]_+} + (p^\dagger)^-_k \prod (x^\dagger_j)^{[-b_{jk}]_+}) \\ &=(x_k^\dagger)^{-1}(p^\dagger)^-_k(1+\hat{y}_k(\Sigma^\dagger)) \prod (x^\dagger_j)^{[-b_{jk}]_+} \\ &= \frac{c_k} {d_k} (x_k)^{-1} p^-_k(1+\hat{y}_k(\Sigma)) \prod x_j^{[-b_{jk}]_+} \\ &= \frac{c_k} {d_k} x_k', \label{nnxrecurrenceredoneend} \end{align} as desired. Returning to the implication $(1) \Rightarrow (2)$, from the symmetry of $\overset{\mu_k}{\leftrightsquigarrow}$ it follows that $\Sigma$ is related to itself along any contractible sequence $\vec{k}$. Since seed orbits are closed under mutation, any seed $\Sigma^*$ related to $\Sigma$ by a contractible sequence of mutations is therefore in the same seed orbit as $\Sigma$. Now we show $(3)$ implies $(1)$. Let $\hat{c}_j(a) \in \mathbf{P}^n \times \mathbf{P}^n$ denote the vector with $c_j = a$ and all other entries equal to $1$, and define similarly $\hat{d}_j(a)$. Clearly, it suffices to show that $\Sigma \sim \hat{c}_j(a) \cdot \Sigma$ and $\Sigma \sim \hat{d}_j(a) \cdot \Sigma$, since rescalings of this type generate $\mathbf{P}^n \times \mathbf{P}^n$. Seeds of the form $\hat{d}_j(a) \cdot \Sigma$ are equivalent to $\Sigma$, as follows by mutating twice in any direction $k \neq j$. For seeds of the form $\hat{c}_j(a) \cdot \Sigma$, let $\Sigma'$ be any seed satisfying $\Sigma \overset{\mu_{j}}{\leftrightsquigarrow} \Sigma'$: \begin{align} \Sigma &\sim \hat{d}_j(a^{-1}) \cdot \Sigma \label{commutecalci} \\ &\overset{\mu_{j}}{\leftrightsquigarrow} (\hat{c}_j(a^{-1})\hat{d}_j(a^{-1})) \cdot \Sigma' \label{commutecalcii}\\ &\sim (\hat{c}_j(a^{-1}) ) \cdot \Sigma' \label{commutecalciii} \\ &\overset{\mu_{j}}{\leftrightsquigarrow} \hat{c}_j(a) \cdot \Sigma, \label{commutecalciv} \end{align} where \eqref{commutecalcii} and \eqref{commutecalciv} follow from the calculation in \eqref{nnxrecurrenceredoneend}, and \eqref{commutecalci} and \eqref{commutecalciii} are admissible since we already know rescaling by $\hat{d}_j(a)$ preserves equivalence of seeds. 
Since \eqref{commutecalci} through \eqref{commutecalciv} amounts to mutating in direction $j$ twice on seed equivalence classes, it follows that $\Sigma \sim \hat{c}_j(a) \cdot \Sigma$ as desired. \end{proof} \section{Quasi-homomorphisms}\label{QHSecn} We will now give the definition of a quasi-homomorphism from a seed pattern $\mathcal{E}$ to another seed pattern $\overline{\mathcal{E}}$. We retain the notation of Section \ref{PreliminariesSecn} for all the data in $\mathcal{E}$, and we use bars to denote the analogous quantities in the second pattern $\overline{\mathcal{E}}$. Thus $\overline{\mathcal{E}}$ has coefficient group $\overline{\mathbf{P}}$, ambient field $\overline{\mathcal{F}}$, seeds $\overline{\Sigma}(\overline{t}) = (\overline{B}(\overline{t}),\overline{\mathbf{p}}(\overline{t}),\overline{\mathbf{x}}(\overline{t}))$, hatted variables $\hat{\overline{y}}_j(\overline{t})$, and so on. It is built on a second copy of the $n$-regular tree, $\overline{\mathbb{T}}_n$. The motivating observation is the following: since the mutation rules \eqref{nnyrecurrence} through \eqref{nnxrecurrence} are certain algebraic relations in $\mathcal{F}_{>0}$, they are preserved by a homomorphism of semifields. \begin{defn}[Quasi-homomorphism]\label{qhdefn} Let $\mathcal{E}$ and $\overline{\mathcal{E}}$ be non-normalized seed patterns. Let~$\Psi \colon \mathcal{F}_{>0} \to \overline{\mathcal{F}}_{>0}$ be a semifield homomorphism satisfying~$\Psi(\mathbf{P}) \subset \overline{\mathbf{P}}$ (in this case we say~$\Psi$ \emph{preserves coefficients}). We say~$\Psi$ is a \emph{quasi-homomorphism from $\mathcal{E}$ to $\overline{\mathcal{E}}$} if it maps each seed in $\mathcal{E}$ to a seed that is $\sim$-equivalent to a seed in $\overline{\mathcal{E}}$, in a way that is compatible with mutation. More precisely, let $t \mapsto \overline{t}$ be an isomorphism of the labeled trees $\mathbb{T}_n$~and~$\overline{\mathbb{T}}_n$. Then $\Psi$ is a quasi-homomorphism if and only if \begin{equation}\label{qhsimdefn} \Psi(\Sigma(t)) \sim \overline{\Sigma}(\overline{t}) \end{equation} for all $t \in \mathbb{T}_n$, where $\Psi(\Sigma(t)) = (B(t),\Psi(\mathbf{p}(t)),\Psi(\mathbf{x}(t)))$ is the triple obtained by evaluating $\Psi$ on $\Sigma(t)$. \end{defn} As motivation for this definition, we imagine a situation where $\mathcal{E}$ is well understood combinatorially, and we would like to understand another seed pattern $\overline{\mathcal{E}}$ by comparing it with $\mathcal{E}$. The requirement \eqref{qhsimdefn} says that the seeds $\Psi(\Sigma(t))$ mutate ``in parallel'' with the seeds in $\overline{\mathcal{E}}$, in the sense that corresponding seeds only differ by the rescalings \eqref{rescalingxs} and \eqref{rescalingps}. The following Propositions \ref{basicqhpropsi} and \ref{basicqhpropsii} show two ways in which quasi-homomorphisms are well-behaved. Both of their proofs follow immediately from the observation that applying a semifield homomorphism commutes with mutation. \begin{prop}\label{basicqhpropsi} Let $\Psi \colon \mathcal{F}_{>0} \to \overline{\mathcal{F}}_{>0}$ be a semifield homomorphism satisfying \eqref{qhsimdefn} for \emph{some} $t \in \mathbb{T}_n$. Then $\Psi$ is a quasi-homomorphism. \end{prop} That is, rather than checking that \eqref{qhsimdefn} holds at \emph{every} $t \in \mathbb{T}_n$, it suffices to check this at a single $t \in \mathbb{T}_n$. \begin{prop}\label{basicqhpropsii} Let $\Psi$ be a quasi-homomorphism from $\mathcal{E}$ to $\overline{\mathcal{E}}$. 
Let $\Sigma$ be a seed in $\mathcal{E}$, and let $\Sigma^*$ be a non-normalized seed satisfying $\Sigma \sim \Sigma^*$. Then $\Psi(\Sigma) \sim \Psi(\Sigma^*)$. \end{prop} Proposition \ref{basicqhpropsii} says that a quasi-homomorphism preserves $\sim$-equivalence of seeds. Thus, if $\mathfrak{S}(t)$ denotes the seed orbit of $\Sigma(t)$ and ditto for $\overline{\mathfrak{S}}(\overline{t})$ and $\overline{\Sigma}(\overline{t})$, then $\Psi$ maps $\mathfrak{S}(t)$ inside~$\overline{\mathfrak{S}}(\overline{t})$ for all $t$. A quasi-homomorphism is therefore a natural notion of homomorphism between the respective seed orbit patterns $(t, \mathfrak{S}(t))$ and $(\overline{t},\overline{\mathfrak{S}}(\overline{t}))$. Now we describe a quasi-homomorphism between the pair of seed patterns in Example~\ref{GrandBandsExchangeGraphs}. \begin{example}\label{GrandBandsQH} Given $Y \in \mathbf{Y}$, let $Y[1],Y[2],Y[3] \in \mathbb{C}^5$ denote its rows. There is a surjective map of varieties $F \colon \mathbf{Y} \to \mathbf{X}$ sending $Y \overset{F}{\mapsto} Y[1] \wedge Y[2] \wedge Y[3]$. It determines a map on cluster algebras $F^* \colon \mathbb{C}[\mathbf{X}] \to \mathbb{C}[\mathbf{Y}]$ sending $\Delta_{ijk} \mapsto Y_{123,ijk}$. Figure \ref{NNBandThreeFiveFig} shows the non-normalized seed pattern that arises from applying $F^*$ to Figure \ref{GrThreeFiveFig} and factoring the cluster variables inside $\mathbb{C}[\mathbf{Y}]$. The seeds in Figure \ref{NNBandThreeFiveFig} are in the same seed orbit as the corresponding seeds in Figure \ref{BandThreeFiveFig}, and thus $F^*$ is a quasi-homomorphism from $\mathbb{C}[\mathbf{X}]$ to $\mathbb{C}[\mathbf{Y}]$. \begin{figure}[ht] \begin{tikzpicture}[scale = .5] \def \rsize{4}; \foreach \s in {1,...,5} { \node at ({72*(-\s)+162}:\rsize) {$\bullet$}; \draw ({72*(-\s)+162}:\rsize) -- ({72*(-\s+1)+162}:\rsize); } \node at (90: 1.2*\rsize) {$ (Y_{3,5} Y_{12,23},Y_{2,4}Y_{3,5} Y_{1,2})$}; \node at (17 :1.9*\rsize) {$(Y_{1,1} Y_{3,5} Y_{2,3},Y_{3,5}Y_{12,23})$}; \node at (-25: 1.9*\rsize) {$(Y_{1,1} Y_{23,34},Y_{1,1} Y_{3,5} Y_{2,3})$}; \node at (-155: 1.9*\rsize) {$(Y_{1,1}Y_{2,2}Y_{3,4}, Y_{1,1} Y_{23,34})$}; \node at (164: 2.0*\rsize) {$(Y_{2,4}Y_{3,5} Y_{1,2},Y_{1,1}Y_{2,2}Y_{3,4} )$}; \end{tikzpicture} \[ \begin{aligned} Y_{2,4}Y_{3,5}Y_{1,2} \cdot Y_{1,1} Y_{3,5} Y_{2,3} &= Y_{1,1}Y_{3,5}^2Y_{2,4}(Y_{12,23}+Y_{2,2}Y_{1,3}) \\[.1cm] Y_{3,5}Y_{12,23} \cdot Y_{1,1} Y_{23,34} &= Y_{1,1}Y_{3,5} (Y_{123,234} Y_{2,3}+Y_{2,2}Y_{3,3}Y_{1,3}Y_{2,4}) \\[.1cm] Y_{1,1} Y_{3,5} Y_{2,3} \cdot Y_{1,1}Y_{2,2}Y_{3,4} &= Y_{1,1}^2 Y_{2,2}Y_{3,5} (Y_{23,34}+Y_{3,3}Y_{2,4}) \\[.1cm] Y_{1,1} Y_{23,34} \cdot Y_{2,4}Y_{3,5}Y_{1,2} &= Y_{1,1}Y_{2,4}Y_{3,5} (Y_{2,2}Y_{1,3} Y_{3,4}+Y_{123,234}) \\[.1cm] Y_{1,1}Y_{2,2}Y_{3,4} \cdot Y_{3,5}Y_{12,23} &= Y_{1,1}Y_{2,2}Y_{3,5}(Y_{3,3}Y_{2,4} Y_{1,2}+Y_{123,234}) \end{aligned} \] \caption{ The non-normalized seed pattern obtained by applying $F^*$ to the seed pattern in Figure \ref{GrThreeFiveFig}. The clusters agree with the clusters in Figure \ref{BandThreeFiveFig} up to the frozen variables listed in \eqref{FrozenYs}. Cancelling the common frozen variable factors from both sides of the exchange relations yields the exchange relations in Figure \ref{BandThreeFiveFig}. It follows that the $\hat{y}$ values are the same in both figures. 
\label{NNBandThreeFiveFig} } \end{figure} \end{example} \begin{defn}\label{proportionality} Two quasi-homomorphisms $\Psi_1,\Psi_2$ from $\mathcal{E}$ to $\overline{\mathcal{E}}$ are called \emph{proportional} if $\Psi_1(\Sigma) \sim \Psi_2(\Sigma)$ for all seeds $\Sigma$ in $\mathcal{E}$. We say a quasi-homomorphism $\Psi$ from $\mathcal{E}$ to $\overline{\mathcal{E}}$ is a \emph{quasi-isomorphism} if there is a quasi-homomorphism $\Phi$ from $\overline{\mathcal{E}}$ to $\mathcal{E}$ such that $\Phi \circ \Psi$ is proportional to the identity map on $\mathcal{F}_{>0}$. We say that $\Psi$ and $\Phi$ are \emph{quasi-inverses} of one another. \end{defn} Once we have a quasi-isomorphism between two seed patterns, we think of them as being essentially ``the same.'' Up to coefficients, the maps in both directions allow us to write the cluster variables in one seed pattern in terms of the cluster variables in the other one. \begin{rmk}\label{QHcategory} The set of seed patterns with quasi-homomorphisms as morphisms is a category. Proportionality is an equivalence relation on the morphisms in this category, and this equivalence relation respects composition of quasi-homomorphisms. This yields a quotient category whose objects are seed patterns and whose morphisms are proportionality classes of quasi-homomorphisms. A morphism in this quotient category is an isomorphism if and only if one (hence any) of its constituent quasi-homomorphisms is a quasi-isomorphism. \end{rmk} The following lemma provides a simple method for checking that a candidate map is a quasi-inverse of a given quasi-homomorphism. \begin{lem}\label{constructQIs} Let $\Psi$ be a quasi-homomorphism from $\mathcal{E}$ to $\overline{\mathcal{E}}$, and $\Phi \colon \overline{\mathcal{F}}_{>0} \to \mathcal{F}_{>0}$ a semifield map that preserves coefficients and for which $\Phi \circ \Psi(x) \asymp x$ for all cluster variables $x$ in $\mathcal{E}$. Then $\Psi$ and $\Phi$ are quasi-inverse quasi-isomorphisms. \end{lem} Lemma \ref{constructQIs} follows from the more general Proposition \ref{checkqhonnerves} below. In fact, it will suffice to merely check that $\Phi \circ \Psi(x) \asymp x$ for all $x$ lying on a \emph{nerve} (cf.~Definition \ref{NerveDefn}). \begin{example}\label{GrandBandsQI} Using Lemma \ref{constructQIs} we describe a quasi-inverse $G^*$ for the quasi-homomorphism $F^*$ from Example \ref{GrandBandsQH}. Let $G \colon \mathbf{X} \to \mathbf{Y}$ be the morphism sending $X \in \mathbf{X}$ to the band matrix $$G(X) = \begin{pmatrix} \Delta_{145}(X) & \Delta_{245}(X) & \Delta_{345}(X) & 0 & 0 \\ 0& \Delta_{125}(X) & \Delta_{135}(X) & \Delta_{145}(X) & 0 \\ 0& 0 & \Delta_{123}(X) & \Delta_{124}(X) & \Delta_{125}(X) \\ \end{pmatrix} \in \mathbf{Y}$$ all of whose entries are Pl\"ucker coordinates of $X$. The coordinate ring map $G^* \colon \mathbb{C}[\mathbf{Y}] \to \mathbb{C}[\mathbf{X}]$ sends $Y_{i,j}$ to the Pl\"ucker coordinate in the $(i,j)$ entry of $G(X)$, e.g. $G^*(Y_{1,2}) = \Delta_{245}$. The matrix $G(X)$ has an interesting property: all of its minors are monomials in the Pl\"ucker coordinates of $X$. In particular, its maximal minors agree with those of $X$, up to a multiplicative factor: \begin{equation}\label{FGcomp} \Delta_{ijk}(G(X)) = \Delta_{145}(X) \Delta_{125}(X) \Delta_{ijk}(X). \end{equation} Thus, $G^* \circ F^*(\Delta_{ijk}) = \Delta_{145}\Delta_{125}\Delta_{ijk} \asymp \Delta_{ijk}$ for each cluster variable $\Delta_{ijk}$. 
Since $G^*$ preserves coefficients (the only nontrivial check is $G^*(Y_{123,234}) = \Delta_{125}\Delta_{145}\Delta_{234}$), from Lemma \ref{constructQIs} it follows that $G^*$ is a quasi-inverse of $F^*$. \end{example} \begin{rmk}\label{largersubring} A quasi-homomorphism is defined as a map on ambient semifields since these maps transparently preserve the mutation rules \eqref{brecurrence}--\eqref{nnxrecurrence}. This should be suitable for most purposes, since one is mostly interested in evaluating a quasi-homomorphism on cluster variables or coefficients. However, the cluster algebra~$\mathcal{A}$ is the more familiar algebraic object associated to a seed pattern. If one wants to think of a quasi-homomorphism $\Psi$ as an algebra map of cluster algebras $\mathcal{A} \to \overline{\mathcal{A}}$, one will sometimes need to first localize at the frozen variables in $\overline{\mathcal{A}}$. \end{rmk} We close this section by explaining the differences between quasi-homomorphisms and preexisting notions of homomorphisms between cluster algebras. Specifically, we consider the notion of a \emph{rooted cluster morphism} in the category of cluster algebras described by Assem, Dupont, and Schiffler \cite{ADS}, and also that of a \emph{coefficient specialization} defined by Fomin and Zelevinsky \cite{CAIV} and studied by Reading \cite{Reading,ReadingSurfaces}. The key difference between these notions and quasi-homomorphisms is that a quasi-homomorphism allows for cluster variables to be \emph{rescaled} by an element of $\overline{\mathbf{P}}$. This extra flexibility provides more freedom in constructing new cluster algebras from old ones (cf.~Section \ref{NormalizedQHs}) or in finding nice self-maps of cluster algebras giving rise to elements of the cluster modular group (cf.~Section~\ref{QASecn}). In a little more detail, a coefficient specialization is a map whose underlying map on coefficients can be any group homomorphism $\mathbf{P} \to \overline{\mathbf{P}}$, but that must send each cluster variable to a cluster variable. Thus, each coefficient specialization is a quasi-homomorphism, but a very special one since cluster variables are not allowed to be rescaled by elements of $\overline{\mathbf{P}}$. Rooted cluster morphisms require choosing a pair of initial seeds in $\mathcal{E}$ and $\overline{\mathcal{E}}$ (this is the sense in which the morphism is rooted). Between this pair of seeds, a morphism is an algebra map that sends each cluster variable to either a cluster variable or an integer, and sends each frozen variable to either a frozen variable, a cluster variable, or an integer. Hence, while quasi-homomorphisms are more flexible in allowing for cluster variables to be rescaled and for frozen variables to be sent to \emph{monomials} in the frozen variables, they are also less flexible as they do not allow for unfreezing frozen variables or specializing variables to integers. It probably would not be hard to combine these two notions. One more technicality: to streamline the discussion, we have formulated Definition \ref{qhdefn} so that quasi-homomorphisms preserve exchange matrices, whereas rooted cluster morphisms allow for $B \mapsto -B$. To make Definition \ref{qhdefn} more consonant with these preexisting notions, one could modify \eqref{qhsimdefn} to say that either $\Psi(\Sigma(t)) \sim \overline{\Sigma}(\overline{t})$ or $\Psi(\Sigma(t)) \sim \overline{\Sigma}(\overline{t})^{\text{opp}}$ (see the definition of \emph{opposite seed} in Section \ref{NervesSecn} below) without any significant changes. 
\section{Normalized seed patterns}\label{NormalizedQHs} In this section, we recall the definition of normalized seed patterns and apply the results of Section \ref{SeedOrbitsSecn} in the case that $\mathcal{E}$ and $\overline{\mathcal{E}}$ are normalized. \begin{defn}[Normalized seed pattern]\label{normalizedpatterndefn} A seed pattern $\mathcal{E}$ as in Definition \ref{nnseedpatterndefn} is called \emph{normalized} if the coefficient group $\mathbf{P}$ is a semifield, and each coefficient tuple $\mathbf{p}(t)$ satisfies \begin{equation}\label{normalized} p^+_j(t) \oplus p^-_j(t) = 1 \text{ for all $j$}, \end{equation} where $\oplus$ is the addition in $\mathbf{P}$. \end{defn} The advantage of this normalization condition is that it makes the mutation rule \eqref{nnyrecurrence}, and therefore mutation of normalized seeds, unambiguous. Indeed, \eqref{nnyrecurrence} specifies the ratio $y_j(t') = \frac{p^+_j(t')}{p^-_j(t')}$ in terms of $B(t)$ and $\mathbf{p}(t)$, and there is a unique choice of $p^\pm_j(t') \in \mathbf{P}$ with this ratio and satisfying the normalization condition, namely the pair \begin{equation}\label{ysandps} p^+_j(t') = \frac{y_j(t')}{1 \oplus y_j(t')} \text{ and } p^-_j(t') = \frac{1}{1 \oplus y_j(t')}. \end{equation} Furthermore, mutating twice in a given direction is the identity. At the same time, the disadvantage is that computing a cluster algebra now involves three operations, the two operations present in $\mathcal{F}_{>0}$ along with $\oplus$ in $\mathbf{P}$. The definition of quasi-homomorphism prioritizes these first two operations. Proposition \ref{qhnormalizedprop} says that in the case of a quasi-homomorphism between normalized seed patterns, there is a ``separation of additions'' phenomenon, separating the addition in $\overline{\mathcal{F}}_{>0}$ from the one in $\overline{\mathbf{P}}$. \medskip Before stating Proposition \ref{qhnormalizedprop}, we say a little more about normalized seed patterns and~\emph{$Y$-patterns}. In a normalized seed pattern, the tuple of ratios $(y_1(t),\dots,y_n(t))$ determines the coefficient tuple by \eqref{ysandps}. Accordingly, for normalized seed patterns one keeps track of $y_j(t)$ rather than $p^\pm_j(t)$. Rewriting \eqref{nnyrecurrenceatk} and \eqref{nnyrecurrence} in terms of $y_j(t)$ determines a \emph{$Y$-pattern recurrence in the semifield $\mathbf{P}$}: \begin{equation}\label{yrecurrence} y_{j}(t') = \begin{cases} y_j(t)^{-1} & \text{ if } j = k \\ y_j(t) y_k(t)^{[b_{kj}(t)]_+}(y_k(t)\oplus 1)^{-b_{kj}(t)} & \text{ if } j \neq k. \end{cases} \end{equation} A collection of quantities $(B(t),\mathbf{y}(t))_{t \in \mathbb{T}_n}$ satisfying \eqref{brecurrence} and \eqref{yrecurrence}, with the $\mathbf{y}(t)$ lying in some semifield $\mathcal{S}$, is called a \emph{$Y$-pattern in the semifield $\mathcal{S}$}. Notice that the concept of semifield is now playing two different roles, either as the ambient semifield $\mathcal{F}_{>0}$ in which the exchange relation calculations take place, or as the coefficient semifield $\mathbf{P}$ used to remove the ambiguity in mutation of seeds. The surprising connection between these two roles is Proposition \ref{nnyhatsmutate}, which we now recognize as saying that the $(B(t),\mathbf{\hat{y}}(t))$ form a~$Y$-pattern in the ambient semifield $\mathcal{F}_{>0}$. The most important example of a coefficient semifield arising in applications is the \emph{tropical semifield}. 
\begin{defn} A \emph{tropical semifield} is a free abelian multiplicative group in some generators $u_1,\dots,u_m$, with auxiliary addition $\oplus$ given by $$\prod_{j} u_j^{a_j} \oplus \prod_{j} u_j^{b_j} = \prod_{j} u_j^{\min (a_j,b_j)}.$$ \end{defn} A normalized seed pattern over a tropical semifield is said to be \emph{of geometric type}. When this is the case, denoting the frozen variables by $x_{n+1},\dots,x_{n+m}$, the data of $(B(t),\mathbf{y}(t))$ is entirely described by an $(n+m) \times n$ matrix $\tilde{B}(t) = (b_{ij}(t))$, called the \emph{extended exchange matrix}. Its top $n \times n$ submatrix is $B(t)$ and is called the \emph{principal part}. Its bottom $m$ rows are called \emph{coefficient rows}. They are specified by the equality $\displaystyle y_j(t) = \prod_{i=1}^m x_{n+i}^{b_{n+i,j}(t)}$. The mutation rule \eqref{yrecurrence} translates into the rule \eqref{brecurrence} on $\tilde{B}$. \medskip The seed pattern in Figure \ref{GrThreeFiveFig} is of geometric type over the tropical semifield in the frozen variables in \eqref{FrozenXs}. The same holds for the seed pattern in Figure \ref{BandThreeFiveFig} over the frozen variables in \eqref{FrozenYs}. On the other hand, the seed pattern in Figure \ref{NNBandThreeFiveFig} is not normalized, e.g. the first exchange relation there satisfies $p^+_2 \oplus p^-_2 = Y_{1,1}Y_{2,4}Y_{3,5}$. \medskip Now we state Proposition \ref{qhnormalizedprop} describing quasi-homomorphisms between normalized seed patterns $\mathcal{E}$ and $\overline{\mathcal{E}}$. It arose during the process of writing a forthcoming book on cluster algebras \cite{CABook}, in proving one direction of the finite type classification (namely, that a cluster algebra with a quiver whose principal part is an orientation of a Dynkin diagram necessarily has only finitely many seeds). We will state it as a recipe for \emph{constructing} a normalized seed pattern from a given one, since we envision this being useful in applications. \begin{prop}\label{qhnormalizedprop} Let $\mathcal{E}$ be a non-normalized seed pattern, with the usual notation. Let $(x_i) = (x_i(t_0))$ be a fixed initial cluster in $\mathcal{E}$. Let $\overline{\mathbf{P}}$ be a semifield, and $\overline{\mathcal{F}}_{>0}$ the semifield of subtraction-free rational expressions in algebraically independent elements $\overline{x}_1,\dots,\overline{x}_n$ with coefficients in $\overline{\mathbf{P}}$. Let $\Psi \colon \mathcal{F}_{>0} \to \overline{\mathcal{F}}_{>0}$ be a semifield map satisfying $\Psi(x_i) \asymp \overline{x}_i$, and let $c \colon \mathcal{F}_{>0} \to \overline{\mathbf{P}}$ be the composition of semifield maps $\mathcal{F}_{>0} \xrightarrow{\Psi} \overline{\mathcal{F}}_{>0} \xrightarrow{\overline{x}_i \mapsto 1} \overline{\mathbf{P}}$ where the second map in this composition specializes all $\overline{x}_i$ to $1$ and is the identity on $\overline{\mathbf{P}}$. Then there is a normalized seed pattern $\overline{\mathcal{E}}$ in $\overline{\mathcal{F}}_{>0}$ with seeds $(\overline{B}(\overline{t}),\overline{\mathbf{p}}(\overline{t}), \overline{\mathbf{x}}(\overline{t}))$ satisfying \begin{align} \overline{B}(\overline{t}) &= B(t) \label{seedhomtopatternhomeqsi} \\ \overline{x}_i(\overline{t}) &= \frac{\Psi(x_i(t)) }{c(x_i(t))} \label{seedhomtopatternhomeqsiv}\\ \hat{\overline{y}}_i(\overline{t}) & = \Psi(\hat{y}_i(t)) \label{seedhomtopatternhomeqsiii}\\ \overline{y}_i(\overline{t}) &= c(\hat{y}_i(t)) \label{seedhomtopatternhomeqsii}. 
\end{align} Clearly, $\Psi$ is a quasi-homomorphism from $\mathcal{E}$ to $\overline{\mathcal{E}}$. \end{prop} \begin{proof} Formulas \eqref{seedhomtopatternhomeqsiv} through \eqref{seedhomtopatternhomeqsii} follow by applying \cite[Proposition $3.4$]{CATSII} to $\Psi(\mathcal{E})$, renormalizing by the scalars $c(x_i)$, and massaging the formulas given there. Alternatively, it is also straightforward to check that the right hand sides of \eqref{seedhomtopatternhomeqsiv} through \eqref{seedhomtopatternhomeqsii} satisfy the required recurrences directly (this is carried out in \cite{CABook}). \end{proof} \begin{example} Beginning with the seed pattern in Figure \ref{GrThreeFiveFig}, one can construct the normalized seed pattern in Figure \ref{BandThreeFiveFig} by first applying the semifield map $F^*$ -- obtaining the non-normalized seed pattern in Figure \ref{NNBandThreeFiveFig} -- and then normalizing by a semifield map~$c \colon \mathcal{F}_{>0} \to \overline{\mathbf{P}}$. This map $c$ agrees with $F^*$ on frozen variables and sends a cluster variable $x$ to the frozen variable monomial dividing $F^*(x)$, e.g. $c(\Delta_{235}) = Y_{3,5}$ and $c(\Delta_{245}) = Y_{2,4}Y_{3,5}$. \end{example} When both $\mathcal{E}$ and $\overline{\mathcal{E}}$ are of geometric type, constructing a quasi-homomorphism that sends one seed into (the seed orbit of) another seed is a matter of linear algebra: \begin{cor}\label{geomtypeseedhom} Let $\Sigma = (\tilde{B},\{x_1,\dots,x_n\})$ and $\overline{\Sigma} = (\overline{\tilde{B}},\{\overline{x}_1,\dots,\overline{x}_n \})$ be seeds of geometric type, with frozen variables $x_{n+1},\dots,x_{n+m}$ and $\overline{x}_{n+1},\dots,\overline{x}_{n+\overline{m}}$ respectively. Let $\mathcal{E}$ and $\overline{\mathcal{E}}$ be the respective seed patterns. Let $\Psi$ be a quasi-homomorphism from $\mathcal{E}$ to $\overline{\mathcal{E}}$ such that $\Psi(\Sigma) \sim \overline{\Sigma}$. It determines a monomial map from the $x_i$ to the $\overline{x}_i$. Let $M_\Psi$ denote the matrix of exponents of this monomial map, thus $M_\Psi$ is an $(n+\overline{m}) \times (n+m)$ matrix satisfying $\displaystyle \Psi(x_k) = \prod_{i=1}^{n+{\overline{m}}}\overline{x}_i^{(M_\Psi)_{ik}}$. Then the extended exchange matrices $\tilde{B},\overline{\tilde{B}}$ are related by \begin{equation}\label{newbtilde} \overline{\tilde{B}}= M_\Psi \tilde{B}. \end{equation} In particular, such a quasi-homomorphism $\Psi$ exists if and only if the principal parts of $\tilde{B},\overline{\tilde{B}}$ agree, and the (integer) row span of $\tilde{B}$ contains the (integer) row span of $\overline{\tilde{B}}$. \end{cor} \begin{proof} Indeed, the $(i,j)$ entry of the left hand side of \eqref{newbtilde} encodes the exponent of $\overline{x}_i$ in $\hat{\overline{y}}_j$, while the $(i,j)$ entry of the right hand side encodes the exponent of $\overline{x}_i$ in $\Psi(\hat{y}_j)$. So \eqref{newbtilde} now follows from \eqref{seedhomtopatternhomeqsiii}. The final statement follows by studying \eqref{newbtilde}: the ``interesting'' rows of $M_\Psi$ are its bottom $\overline{m}$ rows. Each of these rows determines a particular linear combination of the rows of $\tilde{B}$, and these linear combinations can be prescribed arbitrarily by prescribing the exponent of $\overline{x}_{n+i}$ in $\Psi(x_j)$ for $1 \leq i \leq \overline{m}$, $1 \leq j \leq n+m$ using Lemma \ref{universalsemifieldlemma}. 
\end{proof} \begin{rmk}[Exchange graphs and separation of additions]\label{separationofadditionsrmk} The formulas \eqref{seedhomtopatternhomeqsi},\eqref{seedhomtopatternhomeqsiv} and \eqref{seedhomtopatternhomeqsii} show that if there is a quasi-homomorphism from $\mathcal{E}$ to $\overline{\mathcal{E}}$, then the exchange graph of $\mathcal{E}$ covers that of $\overline{\mathcal{E}}$. In particular, by Corollary \ref{geomtypeseedhom}, if the rows of $\tilde{B}$ span $\mathbb{Z}^n$, then the exchange graph for the corresponding cluster algebra $\mathcal{A}(\tilde{B})$ covers the exchange graph of every other cluster algebra $\overline{\mathcal{A}}$ with the same underlying exchange matrix. This is a natural generalization of the \emph{separation of additions formula} \cite[Theorem $3.7$]{CAIV} from the case of a quiver with principal coefficients to \emph{any} $\tilde{B}$-matrix whose rows span~$\mathbb{Z}^n$. Namely, let~$\Sigma_0 = (B_0,\mathbf{y},\mathbf{x})$ and~$\overline{\Sigma}_0 = (B_0,\overline{\mathbf{y}},\overline{\mathbf{x}})$ be a pair of normalized seeds with the same exchange matrix, and suppose~$\Sigma_0$ has principal coefficients, i.e.~$y_i = x_{n+i}$. There is a natural choice of map~$\Psi$ mapping~$\Sigma_0$ to~$\overline{\Sigma}_0$ as in Proposition \ref{qhnormalizedprop}, defined by~$\Psi(x_i) = \overline{x}_i$ and~$\Psi(x_{n+i}) = \overline{y}_i$. For this choice of~$\Psi$, formula~\eqref{seedhomtopatternhomeqsiv} becomes separation of additions: the numerator of~\cite[Theorem $3.7$]{CAIV} (evaluating the~``$X$ polynomial'' in~$\overline{\mathcal{F}}$) is obtained by applying the semifield homomorphism~$\Psi$, while the denominator (specializing the cluster variables to~$1$ and evaluating the~$X$ polynomial in~$\overline{\mathbf{P}}$) is obtained by applying the semifield map~$c$. \end{rmk} \begin{rmk}[Proportionality and gradings]\label{propandgradings} Let $\mathcal{E}$ be a seed pattern of geometric type. We recall briefly the concept of a \emph{$\mathbb{Z}^r$-grading on $\mathcal{E}$}, cf.~\cite{Grabowski, GrabowskiLaunois}. Choosing an initial seed $(\tilde{B},\{x_i\})$ in $\mathcal{E}$, such a choice of grading is determined by an $r \times (n+m)$ grading matrix $G$ satisfying $G \tilde{B} = 0$. The $i^{\text{th}}$ column of $G$ determines the grading of $x_i$ as a vector in $\mathbb{Z}^r$, for $1 \leq i \leq n+m$. The condition $G \tilde{B} = 0$ guarantees that every exchange relation \eqref{nnxrecurrence} is homogeneous with respect to this $\mathbb{Z}^r$-grading; this in turn defines the multi-grading of each adjacent cluster variable and thereby each adjacent grading matrix. It can be seen that these adjacent grading matrices again satisfy the left kernel condition, so that the grading propagates to a $\mathbb{Z}^r$-grading on the entire cluster algebra in which the cluster variables and coefficients are homogeneous. Now suppose we are given two seed patterns $\mathcal{E}$ and $\overline{\mathcal{E}}$ of geometric type with notation as in Corollary \ref{geomtypeseedhom}. Let $\Psi_1$ and $\Psi_2$ be a pair of proportional quasi-homomorphisms from $\mathcal{E}$ to~$\overline{\mathcal{E}}$. We obtain as in \eqref{newbtilde} matrices $M_{\Psi_1}$ and $M_{\Psi_2}$ such that $M_{\Psi_1} \tilde{B} = M_{\Psi_2} \tilde{B} = \overline{\tilde{B}}$, which implies that $M_{\Psi_1}-M_{\Psi_2}$ defines a $\mathbb{Z}^{\overline{m}}$-grading $G$ on $\mathcal{E}$ (the first $n$ rows of $M_{\Psi_1}-M_{\Psi_2}$ define the trivial grading).
Conversely, fixing a quasi-homomorphism $\Psi_1$ with matrix $M_{\Psi_1}$, any choice of $\mathbb{Z}^{\overline{m}}$-grading matrix $G$ on $\tilde{B}$ provides a quasi-homomorphism $\Psi_2$, proportional to~$\Psi_1$, whose matrix is $M_{\Psi_2} = M_{\Psi_1} + G$. \end{rmk} \begin{rmk}\label{QRowSpans} For simplicity, we stated Corollary \ref{geomtypeseedhom} in terms of $\mathbb{Z}$ row spans, but a similar statement holds for $\mathbb{Q}$ row spans. To do this, one enlarges the tropical semifield $\overline{\mathbf{P}}$ to the \emph{Puiseux tropical semifield} consisting of Puiseux monomials with rational exponents in the frozen variables. This is unpleasant from the perspective of cluster algebras as coordinate rings, but is perfectly fine if one is only interested in writing algebraic formulas for cluster variables, etc. This is foreshadowed in the work of Sherman and Zelevinsky \cite[Section 6]{Sherman}, which discusses the coefficient-free rank 2 cluster algebra $\mathcal{A}(b,c)$ with exchange matrix $\begin{pmatrix} 0 & b \\ -c & 0 \end{pmatrix} $. The authors write the cluster variables in any cluster algebra with this $B$ matrix in terms of the cluster variables for $\mathcal{A}(b,c)$. Their formulas involve Puiseux monomials in the frozen variables. \end{rmk} \section{Nerves}\label{NervesSecn} By Proposition \ref{basicqhpropsi}, to check that a given semifield map $\Psi$ is a quasi-homomorphism from $\mathcal{E}$ to $\overline{\mathcal{E}}$, it suffices to check that $\Psi(\Sigma(t)) \sim \overline{\Sigma}(\overline{t})$ for \emph{some} pair of seeds $\Sigma(t)$ in $\mathcal{E}$ and $\overline{\Sigma}(\overline{t})$ in $\overline{\mathcal{E}}$. By Proposition \ref{equivalentseedscondition}, this means checking that $B(t) = \overline{B}(\overline{t})$ and $\Psi(\hat{\mathbf{y}}(t)) = \hat{\overline{\mathbf{y}}}(\overline{t})$, and furthermore that $\Psi(x_j(t)) \asymp \overline{x}_j(\overline{t})$ holds for all $j$. We envision applications where checking the proportionality condition on cluster variables is easy and can be done in \emph{many} seeds $t$, but checking the equality of exchange matrices or $\hat{\mathbf{y}}$'s is inconvenient. The goal of this section is to give a criterion that guarantees $\Psi$ is a quasi-homomorphism by only checking these proportionality conditions. The relevant concept is that of a \emph{nerve} for a seed pattern. \begin{defn}\label{NerveDefn} Let $\mathcal{E}$ be a seed pattern. A \emph{nerve}~$\mathcal{N}$ for~$\mathbb{T}_n$ is a connected subgraph of~$\mathbb{T}_n$ such that every edge label~$k \in [1,n]$ arises at least once in $\mathcal{N}$. \end{defn} The basic example of a nerve is the star neighborhood of a vertex. We believe that there are many theorems of the form, ``if a property holds on a nerve, then it holds on the entire seed pattern.'' We give an example of such a theorem in the appendix, generalizing the ``Starfish Lemma'' \cite[Proposition 3.6]{tensors} from a star neighborhood to a nerve. Before stating the result of this section, we need to address the (mostly unimportant) difference between a seed and its opposite seed. We say a seed is \emph{indecomposable} if the underlying graph described by its exchange matrix (the vertex set is $[1,n]$ and vertices $i,j$ are joined by an edge if $b_{i,j} \neq 0$) is connected.
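For instance, a seed whose exchange matrix is $\left(\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right)$ is indecomposable, whereas a seed whose exchange matrix is the $n \times n$ zero matrix with $n \geq 2$ is not, since the associated graph has no edges.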
For a seed $\Sigma = (B,\mathbf{p},\mathbf{x})$, the \emph{opposite seed}~$\Sigma^{\text{opp}} = (B^{\text{opp}},\mathbf{p}^{\text{opp}}, \mathbf{x}^{\text{opp}})$ is the seed defined by $B^{\text{opp}} = -B$, $(p^{\text{opp}})^\pm_j = p^\mp_j$, and $x^{\text{opp}}_i = x_i$. It satisfies~$\hat{y}^{\text{opp}}_j = \frac{1}{\hat{y}_j}$. The operations of restricting to an indecomposable component and replacing a seed by its opposite seed both commute with mutation. \begin{prop}\label{checkqhonnerves} Let $\mathcal{E}$ and $\overline{\mathcal{E}}$ be non-normalized seed patterns, with respective ambient semifields $\mathcal{F}_{>0}$ and $\overline{\mathcal{F}}_{>0}$. Suppose the seeds in $\overline{\mathcal{E}}$ are indecomposable. Let $\mathcal{N}$ be a nerve for $\mathbb{T}_n$, and let $\Psi \colon \mathcal{F}_{>0} \to \overline{\mathcal{F}}_{>0}$ be a semifield homomorphism that preserves coefficients and satisfies $\Psi(x_j(t)) \asymp \overline{x}_j(\overline{t})$ for every vertex $t \in \mathcal{N}$ and every label $j \in [1,n]$. Then $\Psi$ is a quasi-homomorphism from $\mathcal{E} $ to $\overline{\mathcal{E}}$ or from $\mathcal{E}$ to $\overline{\mathcal{E}}^{\text{opp}}$. \end{prop} In particular, applying the proposition when $\mathcal{N}$ is the star neighborhood of a vertex $t$: to check that $\Psi$ is a quasi-homomorphism, it suffices to check that $\Psi(x_j(t)) \asymp \overline{x}_j(\overline{t})$ for all $j \in [1,n]$, as well as checking $\Psi(x_j(t')) \asymp \overline{x}_j(\overline{t'})$ for each adjacent edge $t \xrightarrow{j} t'$. Lemma \ref{constructQIs} now follows. \begin{proof} Choose a vertex $t \in \mathcal{N}$. By hypothesis, for all $j$, $\Psi(x_j(t)) = c_j(t) \overline{x}_j(\overline{t})$ for some $c_j(t) \in \overline{\mathbf{P}}$, so we are left checking that $B(t) = \overline{B}(\overline{t})$ and $\Psi(\hat{\mathbf{y}}(t)) = \hat{\overline{\mathbf{y}}}(\overline{t})$. Suppose $t \xrightarrow{k} t'$ is an edge in $\mathcal{N}$; then there is a scalar $c_k(t')$ such that $\Psi(x_k(t' )) = c_k(t') \overline{x}_k(\overline{t'})$. The exchange relation defining $\overline{x}_k(\overline{t'})$ in $\overline{\mathcal{E}}$ is \begin{equation}\label{barredexchange} \overline{x}_{k}(\overline{t})\overline{x}_{k}(\overline{t'}) = \overline{p}^+_k(\overline{t}) \prod \overline{x}_{j}(\overline{t})^{[\overline{b}_{jk}(\overline{t})]_+} + \overline{p}^-_k(\overline{t})\prod \overline{x}_{j}(\overline{t})^{[-\overline{b}_{jk}(\overline{t})]_+}. \end{equation} On the other hand, applying $\Psi$ to the relation defining $x_k(t')$ in $\mathcal{E}$ and rearranging yields \begin{equation}\label{applyPsitoexchange} \overline{x}_k(\overline{t})\overline{x}_{k}(\overline{t'}) = \frac{1}{c_k(t')c_k(t)} ( \Psi( p^+_k(t)) \prod \Psi(x_j(t))^{[b_{jk}(t)]_+} +\Psi(p^-_k(t)) \prod (\Psi(x_j(t)))^{[-b_{jk}(t)]_+} ). \end{equation} Abbreviating the two terms on the right hand side of \eqref{barredexchange} as $X+Y$, and the two terms in \eqref{applyPsitoexchange} as $Z+W$, we see by algebraic independence in the seed at $\overline{t}$ that either $X = Z, Y = W$, or $X = W, Y = Z$. Refer to these as Case $1$ or Case $2$ respectively. By inspection, we see that $\hat{\overline{y}}_k(\overline{t})$ is the ratio $\frac{X}{Y}$, while $\Psi(\hat{y}_k(t))$ is $\frac{Z}{W} $. Thus in Case $1$ we deduce that $\Psi(\hat{y}_k(t)) = \hat{\overline{y}}_k(\overline{t})$ and the matrices $B(t)$ and $\overline{B}(\overline{t})$ have the same $k^{\text{th}}$ column. In Case $2$ we deduce the same thing once we replace $\overline{\mathcal{E}}$ by $\overline{\mathcal{E}}^{\text{opp}}$.
Now apply Lemma \ref{nerveYsyslem}. \end{proof} \begin{lem}\label{nerveYsyslem} Let $\mathcal{Y} = \{\mathbf{y}(t),B(t)\}$ and $\overline{\mathcal{Y}} = \{\overline{\mathbf{y}}(t),\overline{B}(t)\}$ be two $Y$-patterns whose matrices $B(t)$ are indecomposable. Let $\mathcal{N}$ be a nerve for $\mathbb{T}_n$. Suppose for every vertex $t \in \mathcal{N}$ and label $k$ such that the edge $t \xrightarrow{k} t'$ is in $\mathcal{N}$, one of the following holds \begin{align} y_k(t) = \overline{y}_k(t) &\text{ and } b_{jk}(t) = \overline{b}_{jk}(t) \text{ for all $j \in [1,n]$, or} \label{epartsagree}\\ y^{\text{opp}}_k(t) = \overline{y}_k(t) &\text{ and } b^{\text{opp}}_{jk}(t) = \overline{b}_{jk}(t) \text{ for all $j \in [1,n]$}, \label{epartsagreeii} \end{align} then $\mathcal{Y} = \overline{\mathcal{Y}}$ or $\mathcal{Y}^{\text{opp}} = \overline{\mathcal{Y}}$ accordingly. \end{lem} Roughly, there are two issues here: first the question of whether $Y$-patterns can be checked on a nerve (they can), and second whether we are dealing with $\mathcal{Y}$ or $\mathcal{Y}^{\text{opp}}$ (this relies on indecomposability). \begin{proof} In any $Y$-pattern, for a given $(k,t)$ pair (not necessarily in $\mathcal{N}$), we will refer to $y_k(t)$ and the $k^{\text{th}}$ column of $B(t)$ as the \emph{$k$-part of the seed at $t$}. The equations in \eqref{epartsagree} say that $\mathcal{Y},\overline{\mathcal{Y}}$ have either the same $k$-parts, or opposite $k$-parts, for any edge $t \xrightarrow{k} t' \in \mathcal{N}$. Pick a vertex $t_0 \in \mathcal{N}$, and an edge $k \in \mathcal{N}$ incident to $t_0$. If necessary, replace $\mathcal{Y}$ by $\mathcal{Y}^{\text{opp}}$ so that the given $Y$-patterns have the same $k$-part at $t_0$. We seek to prove $\mathcal{Y},\overline{\mathcal{Y}}$ have the same $j$-part at $t_0$, for all $j \in [n]$. Let $t_0 \xrightarrow{k} t_1 \in \mathcal{N}$ be an edge in the nerve incident to $t_0$. The mutation rules \eqref{brecurrence}, \eqref{yrecurrence} are involutive and have the property that for any $j$, the $j$-part of the seed at $t_1$ depends only on the $j$-part and $k$-part of the seed at $t_0$. Since the given $Y$-patterns agree at $k$, we see that their $j$-parts agree at $t_1$ if and only if they agree at $t_0$. Repeatedly apply this observation, mutating in all possible directions in the nerve, while preserving the fact that the $j$-parts at $t \in \mathcal{N}$ coincide if and only if they coincide at $t_0$. Since the nerve is connected and every edge label shows up at least once in $\mathcal{N}$, we conclude that for all $j$, the $j$-parts at $\mathcal{Y}$ and $\overline{\mathcal{Y}}$ are either the same or opposite. The connectedness hypothesis assures they are all in fact the same. \end{proof} \section{Quasi-automorphisms and the cluster modular group}\label{QASecn} A \emph{quasi-automorphism} is a quasi-isomorphism from a given seed pattern $\mathcal{E}$ to itself, cf.~Definition~\ref{proportionality}. One can think of a quasi-automorphism as a choice of a map describing an automorphism of the pattern of seed orbits associated to $\mathcal{E}$. We will use quasi-automorphisms to define a variant of a group of automorphisms of $\mathcal{E}$, generalizing the group of cluster automorphisms defined for seed patterns with trivial coefficients in \cite{ASS} while retaining many of the properties of cluster automorphisms (e.g. Proposition \ref{basicqhpropsi}, Corollary \ref{geomtypeseedhom} and Proposition \ref{checkqhonnerves}). 
The following example illustrates that the notion of quasi-automorphism is more general than the ``naive'' notion of a semifield automorphism preserving the seed orbit pattern. \begin{example}\label{QAsarenotautos} A quasi-automorphism does not have to be an automorphism of semifields. Consider the composition $G^* \circ F^*$ from Example \ref{GrandBandsQI}, which is a quasi-automorphism of~$\mathbb{C}[\mathbf{X}]$ proportional to the identity map. It rescales each Pl\"ucker variable by a product of frozens: $G^* \circ F^* (\Delta_S) = \Delta_{145}\Delta_{125}\Delta_S$. The ambient semifield of $\mathbb{C}[\mathbf{X}]$ has a grading for which every Pl\"ucker variable is degree one, and every homogeneous element in the image of $G^* \circ F^*$ has degree a multiple of $3$. Thus $G^* \circ F^*$ cannot be surjective. \end{example} \begin{defn}\label{QAutDefn} The \emph{quasi-automorphism group} $\QAut_0(\mathcal{E})$ is the set of proportionality classes of quasi-automorphisms of $\mathcal{E}$. This is the automorphism group of $\mathcal{E}$ in the quotient category discussed in Remark \ref{QHcategory}. \end{defn} \begin{rmk}\label{CMGandCoeffs} Let us call a quasi-automorphism \emph{trivial} if it is proportional to the identity map. The set of trivial quasi-automorphisms is a monoid (but not usually a group) under composition; the composition $G^* \circ F^*$ from Example \ref{GrandBandsQI} bears witness to this. One way to construct quasi-automorphisms proportional to a given~$\Psi$ is to form compositions $\epsilon_1 \circ \Psi \circ \epsilon_2$ with $\epsilon_1$ and $\epsilon_2$ trivial. It is tempting to try and define $\QAut_0(\mathcal{E})$ purely in terms of these trivial quasi-automorphisms, without mentioning proportionality. However, the relation~$\equiv$ defined by $\Psi_1 \equiv \Psi_2$ if $\Psi_2 = \epsilon_1 \circ \Psi_1 \circ \epsilon_2$ for some trivial $\epsilon_1,\epsilon_2$ is neither symmetric nor transitive, so one cannot form a quotient category using this relation. \end{rmk} We write~$\QAut_0(\mathcal{E}) = \QAut_0(\tilde{B})$ when $\mathcal{E}$ is of geometric type and specified by an initial matrix $\tilde{B}$. By Remark \ref{propandgradings}, two quasi-automorphisms are proportional to each other if and only if their ratio defines a $\mathbb{Z}^m$-grading on $\mathcal{E}$ (taking exponents of elements of $\mathbf{P}$ to obtain elements of $\mathbb{Z}^m$). Fixing a particular quasi-automorphism $\Psi$, the number of degrees of freedom in specifying another quasi-automorphism proportional to $\Psi$ is therefore the corank of~$\tilde{B}$. \begin{lem}\label{symmetryofspans} Let $\mathcal{E}$ be a seed pattern of geometric type and $\Psi$ a quasi-homomorphism from $\mathcal{E}$ to itself. Then $\Psi$ is a quasi-automorphism. \end{lem} Thus when $\mathcal{E}$ is of geometric type, every quasi-homomorphism $\Psi$ from $\mathcal{E}$ to itself determines an element of $\QAut_0(\mathcal{E})$, i.e. any such $\Psi$ has a quasi-inverse. \begin{proof} By \cite[Lemma 3.2]{CAIII}, if two $\tilde{B}$-matrices $\tilde{B}(t_0)$ and $\tilde{B}(\overline{t}_0)$ are in the same mutation class, they are related by a pair of unimodular integer matrices: $ \tilde{B}(\overline{t}_0) = M \tilde{B}(t_0) N$, for~$M \in \GL_{m+n}(\mathbb{Z})$ and~$N \in \GL_n(\mathbb{Z})$.
By Corollary \ref{geomtypeseedhom}, for a pair of vertices $t_0,\overline{t_0} \in \mathbb{T}_n$, there is a quasi-homomorphism $\Psi$ sending the seed orbit at $t_0$ to the seed orbit at $\overline{t}_0$ if and only if the principal parts of $\tilde{B}(t_0)$ and $\tilde{B}(\overline{t}_0)$ agree, and the row span of $\tilde{B}(t_0)$ contains the row span of $\tilde{B}(\overline{t}_0)$. By the unimodularity of mutation, this criterion is preserved under swapping the roles of $t_0$ and $\overline{t}_0$ -- if the row span of $\tilde{B}(t_0)$ contains the row span of $\tilde{B}(\overline{t}_0)$ then in fact the two row spans are equal submodules of $\mathbb{Z}^n$. \end{proof} \begin{rmk}\label{twistremark} Marsh and Scott \cite{MarshScott} described a version of the twist for the Grassmannian cluster algebras. One can show that it is a quasi-automorphism using \cite[Corollary 8.6]{MarshScott}. \end{rmk} We will now recall the definitions of some preexisting groups of automorphisms associated to a seed pattern $\mathcal{E}$. Namely: \begin{itemize} \item the cluster modular group $\textnormal{CMG}(\mathcal{E})$ of Fock and Goncharov \cite{FG}, and \item the group $\Aut(\mathcal{E})$ of automorphisms in the category of (rooted) cluster algebras defined by Assem, Dupont and Schiffler \cite{ADS}. \end{itemize} We first present these definitions and then discuss a particular example where all the groups are computed and compared to each other and to the quasi-automorphism group. \begin{defn}[Cluster modular group {\cite[Definition $2.14$]{FG}}]\label{CMGDefn} Let $\mathcal{E}$ be a seed pattern with exchange graph~$\mathbf{E}$. The \emph{cluster modular group} $\textnormal{CMG}(\mathcal{E})$ is the group of graph automorphisms $g \in \Aut(\mathbf{E})$ that preserve the exchange matrices. More precisely, recall that the unlabeled seed at vertex $t \in \mathbf{E}$ is indexed not by $[1,n]$ but by the elements of $\textnormal{star}(t)$. Then an element of the cluster modular group is a graph automorphism $g \in \text{Aut}(\mathbf{E})$ satisfying $B(t)_{t',t''} = B(g(t))_{g(t'),g(t'')}$ for all $t \in \mathbf{E}$ and $t',t'' \in \textnormal{star}(t)$. Such a graph automorphism can be determined by choosing a pair of vertices $t_0,\overline{t_0} \in \mathbf{E}$ and an identification of $\textnormal{star}(t_0)$ with $\textnormal{star}(\overline{t}_0)$ under which $B(t_0) = B(\overline{t_0})$. \end{defn} \begin{rmk} Because Definition \ref{CMGDefn} is in terms of automorphisms of the exchange graph, the cluster modular group appears to depend on the entire seed pattern $\mathcal{E}$, and not just the underlying exchange matrices in $\mathcal{E}$. However, it is widely believed that the exchange graph -- and therefore the cluster modular group -- is in fact independent of the choice of coefficients (i.e., it only depends on the exchange matrices, and therefore can be prescribed by giving a single such matrix). This has been proven for skew-symmetric exchange matrices \cite{IKLP}. \end{rmk} The quasi-automorphism group is a subgroup of the cluster modular group. Indeed, each quasi-automorphism $\Psi$ determines a cluster modular group element $g$ via~$\Psi(\Sigma(t)) \sim \Sigma(g(t))$, and proportional quasi-automorphisms determine the same $g$. Since $\Psi$ preserves exchange matrices and evaluating $\Psi$ commutes with permuting the cluster variables in a seed, the element $g$ produced this way is indeed an element of the cluster modular group.
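For instance, the quasi-automorphism $\Psi$ constructed for the annulus in Example \ref{annulusclustereg} below satisfies $\Psi(\Sigma(T)) \sim \Sigma(\rho^2(T))$ for every triangulation $T$, so the cluster modular group element it determines is $\rho^2$.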
One can also consider automorphisms in the category of cluster algebras defined in \cite{ADS}. We reproduce a version of the definition for the sake of convenience. \begin{defn}\label{strongautomorphism} Let $\mathcal{E}$ be a seed pattern. We say two seeds $\Sigma_1$ and $\Sigma_2$ in $\mathcal{E}$ are similar if $\Sigma_2$ coincides with $\Sigma_1$ after first permuting the frozen variables, and then permuting the indices $[1,n]$ appropriately. Suppose the exchange matrices in $\mathcal{E}$ are indecomposable. Let $\mathcal{A}$ be its cluster algebra. A $\mathbb{Z}$-algebra map $f \colon \mathcal{A} \to \mathcal{A}$ is an \emph{automorphism} of $\mathcal{E}$ if for every (equivalently, for any) seed $\Sigma$ in $\mathcal{E}$, $f(\Sigma)$ or $f(\Sigma)^{\text{opp}}$ is similar to a seed in $\mathcal{E}$. We denote the group of automorphisms of $\mathcal{E}$ by $\Aut(\mathcal{E})$. \end{defn} The elements of $\Aut(\mathcal{E})$ are similar to \emph{strong isomorphisms} from \cite{CAII} but slightly more general since one is allowed to permute the frozen variables. We say $f$ as in Definition \ref{strongautomorphism} is a \emph{direct automorphism} or \emph{inverse automorphism} according to whether $f(\Sigma)$ or $f(\Sigma)^{\text{opp}}$ is a seed in $\mathcal{E}$. Let $\Aut^+(\mathcal{E}) \subset \Aut(\mathcal{E})$ denote the subgroup of direct automorphisms. By similar reasoning to~\cite[Theorem 2.11]{ASS}, this subgroup has index two in $\Aut(\mathcal{E})$ if each seed $\Sigma$ in $\mathcal{E}$ is mutation-equivalent to $\Sigma^{\text{opp}}$; otherwise $\Aut^+(\mathcal{E}) = \Aut(\mathcal{E})$. An important special case of Definition \ref{strongautomorphism} is when $\mathcal{E}$ has trivial coefficients in which case the group $\Aut(\mathcal{E})$ is the group of \emph{cluster automorphisms} \cite{ASS}. When $\mathcal{E}$ has trivial coefficients, we have $\Aut^+(\mathcal{E}) = \textnormal{CMG}(\mathcal{E})$. Furthermore, a direct cluster automorphism is the same as a quasi-automorphism in this case. \medskip We can summarize the containments between the preceding groups as \begin{equation}\label{containmentseq} \Aut^+(\mathcal{E}) \subset \QAut_0(\mathcal{E}) \subset \textnormal{CMG}(\mathcal{E}) \overset{?}{=} \Aut^+(\mathcal{E}_\textnormal{triv}). \end{equation} where $\mathcal{E}$ is a seed pattern and $\mathcal{E}_\textnormal{triv}$ is the seed pattern obtained from $\mathcal{E}$ by trivializing its coefficients. The equality~$\textnormal{CMG}(\mathcal{E}) \overset{?}{=} \Aut^+(\mathcal{E}_\textnormal{triv})$ depends on the belief that $\textnormal{CMG}(\mathcal{E}) = \textnormal{CMG}(\mathcal{E}_\textnormal{triv})$, cf.~Remark~\ref{CMGandCoeffs}. The group $\Aut(\mathcal{E}_\textnormal{triv})$ contains all of the groups in \eqref{containmentseq}, and the group $\Aut(\mathcal{E})$ doesn't sit nicely with the rest of the containments when $\Aut^+(\mathcal{E}) \subsetneq \Aut(\mathcal{E})$. We next illustrate the differences between the groups in \eqref{containmentseq} using a particular cluster algebra associated with a bordered marked surface. Basic notions and references concerning this class of cluster algebras are given in Section \ref{SurfacesSecn}. \begin{example}\label{annulusclustereg} Let $(\mathbf{S,M})$ be an annulus with two marked points on each boundary component cf.~Figure~\ref{annulusfig}. We have colored the marked points either black or white to aid in describing the automorphism groups below. 
\begin{figure}[ht] \begin{center} \begin{tikzpicture}[scale = .8] \def \rsizeo{.3}; \def \rsizet{1.2}; \draw [fill= lightgray] (0,0) circle [radius = \rsizeo]; \draw (\rsizeo,0) arc [radius = \rsizeo , start angle = 0, end angle = 360]; \draw (\rsizet,0) arc [radius = \rsizet , start angle = 0, end angle = 360]; \draw [thick, dashed] (0,-\rsizeo)--(0,-\rsizet); \node at (0,-.7^\rsizet) {$\vee$}; \node at (0,1.3*\rsizet) {$v_1$}; \node at (0,-1.3*\rsizet) {$v_4$}; \node at (60:2*\rsizeo) {$v_2$}; \node at (-40:2.2*\rsizeo) {$v_3$}; \draw [fill= white] (0,-\rsizet) circle [radius = .07]; \draw [fill= white] (0,-\rsizeo) circle [radius = .07]; \draw [fill= black] (0,\rsizet) circle [radius = .07]; \draw [fill= black] (0,\rsizeo) circle [radius = .07]; \begin{scope}[yshift = -.6cm, xshift = 3cm] \node at (0,.6) {$\wedge$}; \node at (2,.6) {$\wedge$}; \draw [thick,dashed] (0,0)--(0,1.2); \draw [thick,dashed] (2,0)--(2,1.2); \draw [thick] (0,0)--(2,0); \draw [thick] (0,1.2)--(2,1.2); \draw [fill= white] (0,0) circle [radius = .07]; \draw [fill= white] (0,1.2) circle [radius = .07]; \draw [fill= white] (2,0) circle [radius = .07]; \draw [fill= white] (2,1.2) circle [radius = .07]; \draw [fill= black] (1,0) circle [radius = .07]; \draw [fill= black] (1,1.2) circle [radius = .07]; \node at (0,1.6) {$v_4$}; \node at (2,1.6) {$v_4$}; \node at (1,1.6) {$v_1$}; \node at (0,-.4) {$v_3$}; \node at (2,-.4) {$v_3$}; \node at (1,-.4) {$v_2$}; \end{scope} \end{tikzpicture} \caption{An annulus with two marked points on each boundary component. At right, we show a ``flat form'' of this annulus obtained by cutting along the dashed line. \label{annulusfig}} \end{center} \end{figure} The cluster modular group $\textnormal{CMG}(\mathbf{S,M})$ for a cluster algebra associated with this annulus coincides with the \emph{mapping class group} of the annulus (see Proposition~\ref{ASSconjecture} below). This group has the following explicit description: let $\rho$ be the (isotopy class of) the homeomorphism of $\mathbf{S}$ that rotates the inner boundary of the annulus clockwise by a half-turn. Let $\tau$ be the clockwise half-turn of the outer boundary. Let $\sigma$ be the homeomorphism represented by a $180$ degree turn of the flat form of the annulus; it swaps the inner and outer boundary components. Then the elements $\rho,\tau,$ and $\sigma$ generate the cluster modular group. The group has a presentation $\textnormal{CMG}(\mathbf{S,M}) = \langle \rho,\tau,\sigma \colon (\rho \tau)^2 = \sigma^2 = 1, \rho \tau = \tau \rho, \sigma \rho = \tau \sigma \rangle$ with respect to these generators. It is a central extension $1 \mapsto \mathbb{Z} / 2 \mathbb{Z} \mapsto \textnormal{CMG} \mapsto \Dih_\infty \mapsto 1$ of the infinite dihedral group $\Dih_\infty = \langle r,s \colon s^2 = (sr)^2 = 1 \rangle $ by~$\mathbb{Z} / 2 \mathbb{Z} = \langle \rho \tau \rangle$, using the map $\sigma \mapsto s, \rho \mapsto r, \tau \mapsto r^{-1}$. 
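To see the last claim directly from the presentation, note first that $\rho\tau$ is central: using $\sigma^2 = 1$ and $\sigma\rho = \tau\sigma$, one has $\sigma\rho\sigma = \tau$ and $\sigma\tau\sigma = \rho$, so $\sigma(\rho\tau)\sigma^{-1} = (\sigma\rho\sigma)(\sigma\tau\sigma) = \tau\rho = \rho\tau$. In the quotient by $\langle \rho\tau \rangle$ we have $\tau = \rho^{-1}$, and the images $s$ and $r$ of $\sigma$ and $\rho$ satisfy $s^2 = 1$ and $(sr)^2 = 1$, since $\sigma\rho\sigma\rho = \tau\sigma\sigma\rho = \tau\rho$; these are exactly the defining relations of $\Dih_\infty$.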
\begin{figure} \begin{tikzpicture}[scale = 1] \def \rsizeo{.3}; \def \rsizet{1.5}; \node at (0,2.2) {$T$}; \draw [fill= lightgray] (0,0) circle [radius = \rsizeo]; \draw (\rsizeo,0) arc [radius = \rsizeo , start angle = 0, end angle = 360]; \draw (\rsizet,0) arc [radius = \rsizet , start angle = 0, end angle = 360]; \draw [red] (\rsizeo,0)--(10:\rsizet); \draw [red] (23:\rsizeo)--(15:\rsizet); \draw [thick] (0,\rsizeo)--(0,\rsizet); \draw [thick] (0,-\rsizeo)--(0,-\rsizet); \draw [thick] (-90: \rsizeo) to [out = -40, in = 270] (0:.6*\rsizet) to [out = 90, in = -40] (90: \rsizet); \draw [thick] (90: \rsizeo) to [out = 150, in = 90] (180:.6*\rsizet) to [out = 270, in = 150] (-90: \rsizet); \node at (100:.8*\rsizet) {$a$}; \node at (-80:.75*\rsizet) {$b$}; \node at (50:.85*\rsizet) {$c$}; \node at (-155:.85*\rsizet) {$d$}; \node at (0:.83*\rsizet) {$L$} ; \draw [fill= white] (0,-\rsizet) circle [radius = .07]; \draw [fill= white] (0,-\rsizeo) circle [radius = .07]; \draw [fill= black] (0,\rsizet) circle [radius = .07]; \draw [fill= black] (0,\rsizeo) circle [radius = .07]; \begin{scope}[xshift = 4cm, yshift = -1.3cm] \node at (0,3.5) {$\tilde{B}(T)$}; \node at (0,0) {$x_a$}; \node at (0,2.5) {$x_b$}; \node at (-1.5,1.25) {$x_d$}; \node at (1.5,1.25) {$x_c$}; \node at (0,1.25) {$\boxed{x_L} $}; \draw [thick, ->] (.2,.2)--(1.3,1.05); \draw [thick, ->] (-.2,.2)--(-1.3,1.05); \draw [thick, ->] (.2,2.3)--(1.3,1.45); \draw [thick, ->] (-.2,2.3)--(-1.3,1.45); \draw [thick, ->] (.5,1.16)--(1.11,1.16); \draw [thick, ->] (.5,1.34)--(1.11,1.34); \end{scope} \begin{scope}[xshift = 8cm, yshift = -1.3cm] \node at (0,3.5) {$\tilde{B}(\rho(T))$}; \node at (0,0) {$x_c$}; \node at (0,2.5) {$x_d$}; \node at (-1.5,1.25) {$x_f$}; \node at (1.5,1.25) {$x_e$}; \node at (0,1.25) {$\boxed{x_L} $}; \draw [thick, ->] (.2,.2)--(1.3,1.05); \draw [thick, ->] (-.2,.2)--(-1.3,1.05); \draw [thick, ->] (.2,2.3)--(1.3,1.45); \draw [thick, ->] (-.2,2.3)--(-1.3,1.45); \draw [thick, ->] (-.15,.85)--(-.15,.4); \draw [thick, ->] (.15,.85)--(.15,.4); \end{scope} \begin{scope}[xshift = 12cm, yshift = -1.3cm] \node at (0,3.5) {$\tilde{B}(\rho^2(T))$}; \node at (0,0) {$x_e$}; \node at (0,2.5) {$x_f$}; \node at (-1.5,1.25) {$x_h$}; \node at (1.5,1.25) {$x_g$}; \node at (0,1.25) {$\boxed{x_L} $}; \draw [thick, ->] (.2,.2)--(1.3,1.05); \draw [thick, ->] (-.2,.2)--(-1.3,1.05); \draw [thick, ->] (.2,2.3)--(1.3,1.45); \draw [thick, ->] (-.2,2.3)--(-1.3,1.45); \draw [thick, ->] (-.15,.85)--(-.15,.4); \draw [thick, ->] (.15,.85)--(.15,.4); \draw [thick, ->] (-.15,1.65)--(-.15,2.1); \draw [thick, ->] (.15,1.65)--(.15,2.1); \draw [thick, ->] (-1.11,1.16)--(-.5,1.16); \draw [thick, ->] (-1.11,1.34)--(-.5,1.34); \end{scope} \end{tikzpicture} \caption{A lamination $L$ consisting of two copies of the same curve on the annulus, determining a single frozen variable $x_L$. We have also drawn a triangulation $T$ of this annulus by the arcs $a,b,c,d$. The quivers $\tilde{B}(T),\tilde{B}(\rho(T))$, and $\tilde{B}(\rho^2(T))$ are shown at right, where the extra arcs are~$e = \rho^2(a), f = \rho^2(b), g = \rho^2(c), h = \rho^2(d)$. The values of $\hat{y}$ in each quiver are read off as the Laurent monomial ``incoming variables divided by outgoing variables.'' \label{annulusclusterfig}} \end{figure} Figure~\ref{annulusclusterfig} gives a choice of lamination~$L$ and triangulation $T$, as well as the quivers $\tilde{B}(T), \tilde{B}(\rho(T))$ and $\tilde{B}(\rho^2(T))$. 
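For instance, reading Figure \ref{annulusclusterfig} with this convention: in $\tilde{B}(T)$ the vertices $a$ and $b$ each have two outgoing arrows (to $c$ and $d$) and no incoming arrows, so $\hat{y}_a = \hat{y}_b = (x_cx_d)^{-1}$, while $\hat{y}_c = x_ax_bx_L^2$ and $\hat{y}_d = x_ax_b$. In $\tilde{B}(\rho(T))$ one finds instead $\hat{y}_c = x_L^2(x_ex_f)^{-1}$ and $\hat{y}_d = (x_ex_f)^{-1}$, which are no longer equal.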
Let $\mathcal{A}$ be the corresponding cluster algebra with frozen variable $x_L$, $\mathcal{E}$ its seed pattern, and $\mathcal{F}_{>0}$ its ambient semifield. The cluster modular group element $\sigma \rho \tau$ permutes the arcs in $T$. It induces an automorphism of the quiver $\tilde{B}(T)$, and therefore an element of $\Aut(\mathcal{E})$. The quivers $\tilde{B}(T)$ and $\tilde{B}(\rho^2(T))$ are neither isomorphic nor opposite, so there is no strong automorphism sending the seed at $T$ to the seed at $\rho^2(T)$. Likewise, there is no strong automorphism between $T$ and $\rho^{\pm 4}( T), \rho^{\pm 6} (T), \rho^{\pm 8} ( T)$, and so on. However, there \emph{is} a quasi-automorphism relating these seeds, which we describe now. It is the semifield map $\Psi \colon \mathcal{F}_{>0} \to \mathcal{F}_{>0}$ defined by $\Psi(x_L) = x_L$ as well as $\Psi(x_\gamma) = x_L^{-1} \cdot x_{\rho^2(\gamma)}$ for $\gamma = a,b,c,d$. It sends each $\hat{y}$ for $\Sigma(T)$ to the corresponding $\hat{y}$ for $\Sigma(\rho^2 \cdot T)$, defining a quasi-automorphism of $\mathcal{E}$ whose map on seed orbits is $\rho^2$. It has a simple global description on cluster variables which can be checked inductively by performing appropriate mutations away from $T$. Namely, for each arc $\gamma$ let $\iota(\gamma,L)$ denote the number of times $\gamma$ crosses the two curves in $L$. For example, $\iota(a,L) = 0$ and $\iota(c,L) = 1$. Then \begin{equation}\label{powerofqforgamma} \Psi(x_\gamma) = x_L^{\iota(\gamma,L) - \iota(\rho^2(\gamma),L)} \cdot x_{\rho^2( \gamma)} \end{equation} for all arcs $\gamma$ in the annulus. The power of $x_L$ on the right hand side of \eqref{powerofqforgamma} is always equal to~$0,1$, or $- 1$. It is also simple to describe quasi-automorphisms realizing $\sigma$ and $\rho \tau$. Perhaps surprisingly, the seeds at $T$ and $\rho (T)$ are \emph{not} related by a quasi-automorphism. Indeed, the values of $\hat{y}$ are equal at the top and bottom vertices of $\tilde{B}(T)$ in Figure \ref{annulusclusterfig}, but they are not equal in $\tilde{B}(\rho ( T))$. The same holds for $T$ and $\tau ( T)$. Putting all of this together, there is only one nontrivial strong automorphism of $\mathcal{E}$, namely the element $\sigma \rho \tau$. On the other hand, the quasi-automorphism group is infinite, generated by $\rho^2 , \sigma$ and $\rho \tau$. It is a direct product $\Dih_\infty \times \mathbb{Z} / 2 \mathbb{Z}$. It is an index two subgroup of $\textnormal{CMG}(\mathbf{S,M})$, namely the kernel of the map $\textnormal{CMG} \twoheadrightarrow \mathbb{Z} / 2 \mathbb{Z}$ that computes the parity of the number of black marked points sent to a white marked point. \end{example} Example \ref{annulusclustereg} suggests that although the cluster modular group $\textnormal{CMG}(\mathcal{E})$ may be strictly larger than the quasi-automorphism group $\QAut_0(\mathcal{E})$, the gap between these groups is not so large. Indeed, Section \ref{SurfacesSecn} establishes that for seed patterns associated with surfaces, $\QAut_0(\mathcal{E})$ is always a finite index subgroup of the cluster modular group. \section{Quasi-automorphisms of cluster algebras from surfaces}\label{SurfacesSecn} In this section we place Example~\ref{annulusclustereg} in context via results valid for any cluster algebra associated to a marked bordered surface as in \cite{ModuliSpaces, CATSI,CATSII, GSV2003}. We describe quasi-automorphisms of these cluster algebras in terms of the tagged mapping class group of the marked surface. 
We follow the setup and notation in \cite{CATSII}. Let $\mathcal{A}(\mathbf{S,M,L})$ denote the cluster algebra of geometric type determined by the triple $(\mathbf{S,M,L})$. Here $\mathbf{S}$ is an oriented bordered surface with a nonempty set~$\mathbf{M}$ of \emph{marked points}. The marked points reside either in the interior of $\mathbf{S}$ (we call these \emph{punctures}) or in $\partial \mathbf{S}$ (we call these \emph{cilia}). The set of punctures is $\overline{\mathbf{M}}$. We disallow a few possibilities for $(\mathbf{S,M})$, namely a sphere with three or fewer punctures, an $n$-gon when $n<4$, and a once-punctured monogon. The choice of coefficients is specified by a \emph{multi-lamination} $\mathbf{L} = (L_1,\dots,L_m)$, an $m$-tuple of (integral unbounded measured) laminations on $\mathbf{(S,M)}$. Each lamination $L_i$ consists of a finite number of \emph{curves} in $\mathbf{(S,M)}$. The cluster variables in $\mathcal{A}(\mathbf{S,M})$ are indexed by \emph{tagged arcs}~$\gamma$, the set of which we denote by~$\mathbf{A}^\bowtie(\mathbf{S,M})$. The seeds in~$\mathcal{A}(\mathbf{S,M,L})$ are indexed by \emph{tagged triangulations}~$T$ of~$(\mathbf{S,M})$. The extended exchange matrix~$\tilde{B}(T)$ for a seed has the \emph{signed adjacency matrix}~$B(T)$ as its principal part, and has the \emph{shear coordinate vector}~$\vec{b}(T,L_i)$ of the lamination~$L_i$ with respect to~$T$ as its~$i^{\text{th}}$ row of coefficients. The exchange graph of the resulting cluster algebra is independent of the choice of coefficients \cite[Corollary 6.2]{CATSII}. We let $\textnormal{CMG}(\mathbf{S,M})$ denote the corresponding cluster modular group. It is closely related to the following geometrically defined group. \begin{defn}[Tagged mapping class group \cite{ASS}]\label{markedmcgdefn} Let $(\mathbf{S,M})$ be a bordered marked surface that is not a closed surface with exactly one puncture. A \emph{tagged mapping class} for $(\mathbf{S,M})$ is a pair $g = (f,\psi)$, where \begin{itemize} \item $f$ is an element of the \emph{mapping class group} of $(\mathbf{S,M})$ -- i.e. $f$ is an orientation-preserving homeomorphism of $\mathbf{S}$ mapping $\mathbf{M}$ to itself setwise, considered up to isotopies of $\mathbf{S}$ that fix $\mathbf{M}$ pointwise, and \item $\psi \colon \overline{\mathbf{M}} \to \{\pm 1\}$ is a function from the set of punctures to $\{\pm 1\}$. \end{itemize} When $(\mathbf{S,M})$ is a closed surface with one puncture $p$, we make the same definition but impose $\psi(p) = 1$ since tagged versions of arcs are not in the cluster algebra. The tagged mapping classes comprise the \emph{tagged mapping class group}, denoted $\mathcal{M} \mathcal{G}_\bowtie(\mathbf{S,M})$. \end{defn} We understand $\mathcal{M} \mathcal{G}_\bowtie(\mathbf{S,M})$ by its action on tagged arcs $\gamma \in \mathbf{A}^\bowtie(\mathbf{S,M})$. A tagged mapping class $g = (f,\psi)$ acts on $\gamma$ by first applying the homeomorphism $f$ to $\gamma$, and then changing the tag of any end of $\gamma$ incident to a puncture $p$ for which $\psi(p) = -1$. The resulting action on tagged triangulations preserves the signed adjacency matrices, and embeds $\mathcal{M} \mathcal{G}_\bowtie(\mathbf{S,M})$ as a subgroup of $\textnormal{CMG}(\mathbf{S,M})$, cf.~\cite{ASS}.
In fact the following is true: \begin{prop}[Bridgeland-Smith]\label{ASSconjecture} The tagged mapping class group $\mathcal{M} \mathcal{G} _\bowtie (\mathbf{S,M})$ coincides with the cluster modular group $\textnormal{CMG}(\mathbf{S,M})$, unless $(\mathbf{S,M})$ is a sphere with four punctures, a once-punctured square, or a digon with one or two punctures. \end{prop} Thus barring these exceptional cases, two tagged triangulations of $(\mathbf{S,M})$ have isomorphic quivers precisely when they are related by an element of the tagged mapping class group (see \cite[Proposition 8.5]{BridgelandSmith} and the subsequent discussion; see also \cite[Conjecture 1]{ASS}). In the exceptional cases listed in Proposition \ref{ASSconjecture}, the tagged mapping class group is a proper finite index subgroup of the cluster modular group. Motivated by Proposition \ref{ASSconjecture}, we set out to describe, for various choices of coefficients~$\mathbf{L}$, the quasi-automorphism group $\QAut_0(\mathbf{S,M,L})$ from Definition \ref{QAutDefn} as a subgroup of the tagged mapping class group. The main ingredient in our answer is a black-white coloring similar to the one in the examples from Section \ref{QASecn}. \begin{defn}\label{colorCs} The \emph{even components} of $(\mathbf{S,M})$ are the punctures $C \in \overline{\mathbf{M}}$ as well as the boundary components $C \subset \partial \mathbf{S}$ having an even number of cilia. We let $r$ denote the number of even components, and label the even components $C_1,\dots,C_r$. For each even boundary component $C \subset \partial \mathbf{S}$, we color the cilia on $C$ black or white so that the colors alternate, i.e. adjacent cilia have opposite colors. \end{defn} Using the black-white coloring in Definition \ref{colorCs}, each tagged mapping class $g = (f,\psi)$ determines an $r \times r$ signed permutation matrix $\pi_g$ whose entries are indexed by the even components. The $(i,j)$ entry of $\pi_g$ is $0$ unless $f(C_i) = C_j$. If $C_j \subset \partial \mathbf{S}$ is a boundary component and $f(C_i) = C_j$, then the $(i,j)$ entry is $+1$ if $f$ sends black cilia on $C_i$ to black cilia on $C_j$, and is $-1$ if $f$ sends black cilia on $C_i$ to white cilia on $C_j$. When $C_i$ and $C_j$ are punctures, the sign of the $(i,j)$ entry is the sign $\psi(C_j)$. Not all signed permutation matrices will arise in this way since $f$ can only permute components that have the same number of cilia. \begin{defn}\label{signconvention} Let $L$ be a lamination. For each curve $\aa$ in $L$, Figure \ref{colorsofendsfig} shows how to assign a sign to an end of $\aa$ that either lands on an even boundary component or spirals around a puncture. At a puncture, the sign is chosen according to whether~$\aa$ spirals counterclockwise or clockwise into the puncture. At an even boundary component~$C$, the sign is chosen according to whether the nearest neighboring cilium in the clockwise direction along~$C$ is black or white. An end on an odd component has zero sign. The \emph{pairing}~$p( L;C) $ of a lamination~$L$ with the even component~$C$ is the sum of all the signs associated to~$L$ at~$C$, i.e. the sum over all curves $\aa$ in $L$ of the signs of those ends of $\aa$ that land on or spiral into~$C$. We let~$\vec{p}(L) = (p(L;C_i))_{i = 1,\dots,r} \in \mathbb{Z}^r$ denote the vector of pairings of~$L$ with the even components. \end{defn} Example~\ref{workedsignsexample} works out these signs for the annulus from Example~\ref{annulusclustereg}.
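As an illustration of the signed permutation matrices: for the annulus of Example \ref{annulusclustereg}, both boundary components are even. Listing the inner boundary first and using the coloring of Figure \ref{annulusfig}, the half-turn $\rho$ maps the inner boundary to itself sending its black cilium to its white cilium and fixes the outer boundary, so $\pi_\rho = \left(\begin{smallmatrix} -1 & 0 \\ 0 & 1 \end{smallmatrix}\right)$; similarly $\pi_\tau = \left(\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix}\right)$, while $\sigma$ interchanges the two boundary components preserving colors, so $\pi_\sigma = \left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right)$.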
\begin{figure}[ht] \begin{center} \begin{tikzpicture} \def \rsizeo{.6}; \def \rsizet{1.5}; \draw [fill= lightgray] (0,0) circle [radius = \rsizeo]; \draw [fill = black] (0:\rsizeo) circle [radius = .07]; \draw [fill = white] (90:\rsizeo) circle [radius = .07]; \draw [fill = black] (180:\rsizeo) circle [radius = .07]; \draw [fill = white] (-90:\rsizeo) circle [radius = .07]; \draw [thick] (20:\rsizeo) to [out = 0, in = -90] (1.8*\rsizeo, 1.3*\rsizeo); \draw [thick, dashed] (1.8*\rsizeo,1.5*\rsizeo)--(1.8*\rsizeo,2*\rsizeo); \node at (2.2*\rsizeo,.2) {+1}; \begin{scope}[xshift = 3cm] \draw [fill= lightgray] (0,0) circle [radius = \rsizeo]; \draw [fill = black] (0:\rsizeo) circle [radius = .07]; \draw [fill = white] (90:\rsizeo) circle [radius = .07]; \draw [fill = black] (180:\rsizeo) circle [radius = .07]; \draw [fill = white] (-90:\rsizeo) circle [radius = .07]; \draw [thick] (-35:\rsizeo) to [out = -10, in = -90] (1.8*\rsizeo, 1.3*\rsizeo); \draw [thick, dashed] (1.8*\rsizeo,1.5*\rsizeo)--(1.8*\rsizeo,2*\rsizeo); \node at (2.3*\rsizeo,.2) {-1}; \end{scope} \begin{scope}[xshift = 9cm] \node at (0,0) {$\bullet$}; \draw [thick] (0:1.2*\rsizeo) to [out = -90, in = 0] (-90:\rsizeo) to [out = 180, in = -90] (180:\rsizeo) to [out = 90, in = 180] (90:\rsizeo) to [out = 0, in = 90] (0:.8*\rsizeo) to [out = -90, in = 0] (-90:.7*\rsizeo) to [out = 180, in = -90] (180:.7*\rsizeo); \draw [thick, dashed] (0:1.2*\rsizeo)--(45:1.4*\rsizeo)--(55:1.6*\rsizeo)--(60:1.9*\rsizeo); \draw [thick, dashed] (180:.7*\rsizeo) to [out = 90, in = 180] (90:.7*\rsizeo) to [out = 0, in = 90] (0:.55*\rsizeo); \node at (2.2*\rsizeo,.2) {-1}; \end{scope} \begin{scope}[xshift = 6cm] \node at (0,0) {$\bullet$}; \draw [dashed,thick] (140:1.4*\rsizeo)--(120:2*\rsizeo); \draw [thick] (145:1.4*\rsizeo) to [out =-90 , in = 90](180:1.2*\rsizeo) to [out = -90, in = 180] (-90:\rsizeo) to [out = 0, in = -90] (0:\rsizeo) to [out = 90, in = 0] (90:\rsizeo) to [out = 180, in = 90] (180:.9*\rsizeo) to [out = -90, in = 180] (-90:.75*\rsizeo) to [out = 0, in = -90] (0:.7*\rsizeo) to [out = 90, in = 0] (90:.6*\rsizeo) to [out = 180, in = 90] (180:.5*\rsizeo); \draw [dashed, thick] (180:.5*\rsizeo) to [out = -90, in = 180] (-90:.5*\rsizeo) to [out = 0, in = -90](0:.4*\rsizeo); \node at (1.6*\rsizeo,.2) {+1}; \end{scope} \end{tikzpicture} \caption{The conventions for assigning signs to each end of a curve that lands on an even boundary component (in this case, a boundary component with $4$ cilia) or spirals around a puncture. The pairing $p(L;C)$ is obtained by adding up all of these signs. \label{colorsofendsfig}} \end{center} \end{figure} In addition to acting on tagged arcs, $\mathcal{M} \mathcal{G}_\bowtie(\mathbf{S,M})$ also acts on laminations $L$. A tagged mapping class $g = (f,\psi)$ acts by first applying the homeomorphism $f$ to $L$, and then changing the direction of spiral at each puncture $p$ for which $\psi(p) = -1$. This action preserves shear coordinates in the sense that $\vec{b}(T,L) = \vec{b}(g(T),g(L))$ for a triangulation $T$ and lamination $L$. It is easy to see that $g$ acts on the vector of pairings by the matrix $\pi_g$, i.e. $\vec{p}(g (L)) = \pi_g \cdot \vec{p}(L)$ for a lamination $L$. The next theorem is the main result of this section, describing $\QAut_0(\mathbf{S,M,L})$ inside the tagged mapping class group in very concrete terms. \begin{thm}\label{whichgworkwithL} Suppose $(\mathbf{S,M})$ is not one of the four exceptional surfaces in~Proposition \ref{ASSconjecture}. Let $\mathbf{L}$ be a multi-lamination.
Let $V_\mathbf{L} = \Span(\{\vec{p}(L) \colon L \in \mathbf{L}\}) \subset \mathbb{Z}^r$ be the submodule spanned by the vectors of pairings associated to the laminations $L$ in $\mathbf{L}$. Then \begin{equation}\label{whichgworkeq} \QAut_0(\mathbf{S,M,L}) = \{g \in \mathcal{M} \mathcal{G}_\bowtie(\mathbf{S,M}) \colon \pi_g(V_\mathbf{L}) = V_\mathbf{L}\}. \end{equation} \end{thm} We prove Theorem \ref{whichgworkwithL} in Section \ref{SurfacesProofsSecn}. The subgroup of $\mathcal{M} \mathcal{G}_\bowtie(\mathbf{S,M})$ described in \eqref{whichgworkeq} only depends on the endpoint behavior of laminations -- it doesn't mention the topology of the surface, or how much curves wrap around the holes and handles of the surface. The map $g \mapsto \pi_g$ is a group homomorphism from $\mathcal{M} \mathcal{G}_\bowtie(\mathbf{S,M})$ to the group of signed permutation matrices. The subgroup in \eqref{whichgworkeq} is the inverse image of the subgroup of signed permutation matrices preserving $V_\mathbf{L}$, and therefore always has finite index in $\mathcal{M} \mathcal{G}_\bowtie(\mathbf{S,M})$. \begin{cor}\label{QAutBigcapCor} Let $g$ be a tagged mapping class. If $\pi_g$ is plus or minus the identity matrix, then $g \in \QAut_0(\mathbf{S,M,L})$ for \emph{any choice} of multi-lamination $\mathbf{L}$. Otherwise, $g \notin \QAut_0(\mathbf{S,M,L})$ for \emph{some choice} of~$\mathbf{L}$. \end{cor} \begin{rmk} The tagged mapping classes in Corollary \ref{QAutBigcapCor} are those that fix all even components setwise, and furthermore either preserve the black-white coloring of ends of curves, or simultaneously swap all colors. This group is generated by the following four types of elements (see~\cite{FarbMargalit} for generators of the mapping class group): Dehn twists about simple closed curves, homeomorphisms that permute odd components, fractional Dehn twists rotating the cilia on a given boundary component by two units, and the \emph{tagged rotation}. This last element is the one that simultaneously changes tags at all punctures and rotates all boundary components by one unit. It was studied in \cite{BrustleQiu}, where it was shown to coincide with the shift functor of a $2$-Calabi-Yau cluster category associated with the surface. \end{rmk} \begin{proof} If $\pi_g = \pm 1$, then $\pi_g$ clearly preserves $V_\mathbf{L}$ regardless of the choice of $\mathbf{L}$, and by Theorem \ref{whichgworkwithL} $g$ is in $\QAut_0(\mathbf{S,M,L})$ for any $\mathbf{L}$. If $\pi_g \neq \pm 1$, think of $\pi_g$ as a signed permutation $\sigma$ of $\{\pm 1,\dots,\pm r\}$ in the usual way. If there is any index $i \in [1,r]$ such that $\sigma(i) \neq \pm i$, then let $L$ be a lamination consisting of a curve with two black ends on $C_i$, satisfying $p(L;C_i) = 2$. If there is no such index $i$, we can choose a pair of indices $i , j \in [1,r]$ such that $\sigma(i) = -i$ but $\sigma(j) = j$. In this case we let $L$ be a lamination consisting of a single curve connecting the even components $C_i$ and $C_j$ that is black at both ends. In both cases, we see that $\pi_g (\vec{p}(L))$ is not in the span of $\vec{p}(L)$, so taking $\mathbf{L} = (L)$, Theorem \ref{whichgworkwithL} gives $g \notin \QAut_0(\mathbf{S,M,L})$. \end{proof} \begin{example}\label{workedsignsexample} We order the boundary components in Figure \ref{annulusclusterfig} so that the inner boundary is first. The vector of pairings for the lamination $L$ in Figure \ref{annulusclusterfig} is $\vec{p}(L) = (-2,-2)$.
Then $\rho$ and $\tau$ act on the vector of pairings by changing the sign of the first or second component respectively, and $\sigma$ acts by permuting the first and second components. The description of $\QAut_0(\mathbf{S,M,L})$ in Example \ref{annulusclustereg} matches the one in Theorem \ref{whichgworkwithL}. The subgroup of elements described in Corollary \ref{QAutBigcapCor} is a direct product $\mathbb{Z} \times \mathbb{Z} / 2 \mathbb{Z}$ generated by $\rho^2$ and $\rho \tau$. It has index $4$ in the cluster modular group. \end{example} \begin{rmk} Theorem \ref{whichgworkwithL} can be modified in the case that $(\mathbf{S,M})$ is one of the exceptional surfaces in Proposition \ref{ASSconjecture}. Namely, the right hand side of \eqref{whichgworkeq} merely describes the subgroup of $\QAut_0(\mathbf{S,M,L})$ consisting of tagged mapping classes (that is, ignoring the exotic symmetries). For particular choices of coefficients, the ``extra'' elements of the cluster modular group might also be inside $\QAut_0$. \end{rmk} \section{Proofs for Section \ref{SurfacesSecn}}\label{SurfacesProofsSecn} The key result of this section is Proposition \ref{QHsofSurfaces}, which describes quasi-homomorphisms of cluster algebras from surfaces. Theorem \ref{whichgworkwithL} follows from it as a special case. Let $\mathbf{B}(\mathbf{S,M})$ denote the set of \emph{boundary segments} connecting adjacent cilia in $\partial \mathbf{S}$. There is an especially natural choice of multi-lamination $\mathbf{L}_{\text{boundary}} = (L_\beta)_{\beta \in \mathbf{B(S,M)}}$ with one frozen variable for each boundary segment (see e.g. \cite[Remark 15.8]{CATSII}). Lemma \ref{sccshearcoord} expresses the shear coordinates of certain laminations in terms of the extended exchange matrix determined by $\mathbf{L}_{\text{boundary}}$. It is patterned after \cite[Lemma $14.3$]{CATSII}. \begin{lem}\label{sccshearcoord} Let $T$ be a tagged triangulation of $(\mathbf{S,M})$, and let $L$ be a lamination none of whose curves has an end that spirals at a puncture. Given an arc $\gamma$, the \emph{transverse measure} of $\gamma$ in $L$ is the minimal number of intersections of $\gamma$ with the curves in $L$. We denote it by $l(\gamma,L)$. For a boundary segment $\beta \in \mathbf{B}(\mathbf{S,M})$, we similarly let $l(\beta,L)$ denote the number of ends of the curves in $L$ on $\beta$. We let $\vec{l}(T,L) = (l(\star,L))_{\star \in T \cup \mathbf{B}(\mathbf{S,M})}$ be the row vector containing all of these transverse measures. Then \begin{equation}\label{sccshearcoordeq} -2 \vec{b}(T,L) = \vec{l}(T,L) \tilde{B}(T,\mathbf{L}_\text{boundary}). \end{equation} \end{lem} \begin{proof} We check that the $\gamma_0$ components of the left and right hand sides of \eqref{sccshearcoordeq} are equal, where $\gamma_0 \in T$. Let $\gamma_1,\dots,\gamma_4$ be the quadrilateral containing $\gamma_0$ (numbered in clockwise order). Some of the $\gamma_i$ may be boundary segments. Each time a curve $\aa$ in $L$ shears across the quadrilateral in an `$S$' crossing, it contributes $+1$ to the $\gamma_0$ component of $\vec{b}(T,L)$, hence $-2$ to the left hand side, while the two sides of the quadrilateral that it crosses contribute $(-1)+(-1) = -2$ to the $\gamma_0$ component of the right hand side. The other types of crossings are handled similarly. \end{proof} This argument is like \cite[Lemma 14.13]{CATSII} but simpler because we are not dealing with spirals at punctures, for which $l(\gamma,L) = \infty$. Comparing \eqref{sccshearcoordeq} with \eqref{newbtilde}, we see that if $\mathbf{L}$ is any multi-lamination none of whose curves spiral at punctures, then there is a quasi-homomorphism from $\mathcal{A}(\mathbf{S,M},\mathbf{L}_\text{boundary})$ to $\mathcal{A}(\mathbf{S,M,L})$.
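Indeed, \eqref{sccshearcoordeq} expresses each coefficient row $\vec{b}(T,L_i)$ of $\tilde{B}(T,\mathbf{L})$ as the combination $-\tfrac{1}{2}\,\vec{l}(T,L_i)\,\tilde{B}(T,\mathbf{L}_\text{boundary})$ of the rows of $\tilde{B}(T,\mathbf{L}_\text{boundary})$, so the row span criterion of Corollary \ref{geomtypeseedhom} is satisfied, at least in the rational form of Remark \ref{QRowSpans}.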
A version of Lemma \ref{sccshearcoord} allowing for spirals at punctures would involve the extended exchange matrix $\tilde{B}(\overline{T},\overline{L})$ on the \emph{fully opened surface} $(\overline{\mathbf{S,M}})$, where $\overline{L}$ and $\overline{T}$ are lifts of $L$ and $T$ to the opened surface (see \cite[Sections 9,14]{CATSII} for details). The corresponding version of \eqref{sccshearcoordeq} determines a quasi-homomorphism from $\mathcal{A}(\overline{\mathbf{S,M}},\overline{\mathbf{L}})$ to $\mathcal{A}(\mathbf{S,M,L})$. \medskip Our next step is to describe the row span of the signed adjacency matrices $B(T)$. This description strengthens \cite[Theorem 14.3]{CATSI}, which states that the corank of $B(T)$ is the number of even components. The description requires associating a sign to the ends of arcs $\gamma \in \mathbf{A}^\bowtie(\mathbf{S,M})$, in a fashion similar to what was done for the ends of curves in Definition \ref{signconvention}. Namely, if $\gamma$ has an endpoint on a boundary component $C \subset \partial \mathbf{S}$, the endpoint gets sign $+1$ or $-1$ according to whether it is a black or white cilium. If the endpoint is on a puncture $C \in \overline{\mathbf{M}}$, the sign is $+1$ or $-1$ according to whether the end is plain or tagged. An endpoint on an odd component gets sign $0$. The pairing $p(\gamma;C)$ of $\gamma$ with $C$ is the sum of the signs of the ends of $\gamma$ that reside on $C$, and the vector of pairings is $\vec{p}(\gamma) = (p(\gamma; C_i))_{i = 1, \dots, r}$. This pairing satisfies $p(\gamma;C) = p(L_\gamma;C)$ where $L_\gamma$ is the \emph{elementary lamination} determined by $\gamma$ (cf.\ \cite[Definition 17.2]{CATSII}, see also \cite[Section 5]{ReadingSurfaces}). \begin{lem}\label{kernelbasisatClem} For a tagged triangulation $T$, let $\mathbb{Q}(T) = \mathbb{Q}(\gamma \colon \gamma \in T)$ be the $\mathbb{Q}$-vector space of row vectors with entries indexed by $\gamma \in T$. Let $\mathbb{Q}(T)^* = \mathbb{Q}(\gamma^* \colon \gamma \in T)$ be the dual space of column vectors with entries indexed by the dual basis $\{\gamma^* \colon \gamma \in T \} $. For each even component $C$, consider the column vector \begin{equation}\label{kernelbasisatC} R_C = \sum_{\gamma \in T }p(\gamma;C)\gamma^* \in \mathbb{Q}(T)^*. \end{equation} Then a vector $\vec{a} \in \mathbb{Q}(T)$ is in the row span of $B(T)$ if and only if the dot product $\vec{a} \cdot R_C $ vanishes for all $C$. \end{lem} Said differently, the map $\gamma \mapsto p(\gamma;C_i)$ for $C_i$ an even component determines a $\mathbb{Z}$-grading on the coefficient-free cluster algebra $\mathcal{A}(\mathbf{S,M})$. These gradings form a \emph{standard $\mathbb{Z}^r$-grading} as $i$ varies over $1,\dots,r$ (a standard grading is one that spans the kernel of the $B$-matrices, see \cite{Grabowski}). We will not rely on gradings in what follows. Lemma \ref{kernelbasisatClem} is proved at the end of this section. \vspace{.3cm} For a vector $\vec{a} \in \mathbb{Q}(T)$, the \emph{residue of $\vec{a}$ around $C$ in $T$} is the dot product $\vec{a} \cdot R_C = \sum_{\gamma \in T }p(\gamma;C)\, a_\gamma$. Writing $\vec{a}$ as the shear coordinate vector of a lamination $L$, the residue has the following simple description. \begin{lem}\label{annularnbhdlemma} Let $L$ be a lamination and $C$ an even component. Then the residue of $\vec{b}(T,L)$ around $C$ is the pairing $p(L;C)$ from Definition~\ref{signconvention}. \end{lem} \begin{proof} The residue is computed in terms of the shear coordinates of arcs adjacent to $C$.
To compute these shear coordinates, rather than considering the entire surface, we can focus on the set of triangles having at least one vertex on $C$. Perhaps after lifting to a finite cover of $\mathbf{S}$ (in order to remove topology near $C$ that is irrelevant to computing the residue), this union of triangles will either be a triangulated annulus (when $C \subset \partial \mathbf{S}$ is a boundary component) or a once-punctured $n$-gon for some $n$ (when $C$ is a puncture). We call this set of triangles the \emph{annular neighborhood} of $C$. Even when $L$ consists of a single curve, the intersection of $L$ with this annular neighborhood might consist of several curves. By the linearity of shear coordinates and residues, it suffices to consider the case that $L$ consists of a single curve in the annular neighborhood. When $C$ is a puncture, its annular neighborhood is a punctured disc with a triangulation all of whose arcs are radii joining the puncture to the boundary of the disc. By inspection, a curve $L$ contributes nonzero residue at $C$ if and only if it spirals at $C$, and the value of this residue is $+1$ or $-1$ according to whether it spirals counterclockwise or clockwise, as claimed. When $C$ is an even boundary component, we compute the residue of $\vec{b}(T,L)$ using the right hand side of \eqref{sccshearcoordeq}. We split up this right hand side into two terms by splitting up $\vec{l}(T,L)$ as a concatenation of $(l(\gamma,L))_{\gamma \in T}$ and $(l(\beta,L))_{\beta \in \mathbf{B}(\mathbf{S,M})}$, and performing the matrix multiplication with $\tilde{B}$ in block form. The first term in this expression has zero residue around $C$ since it is a linear combination of the rows of $B(T)$. What's left over is a sum \begin{equation}\label{residueinannulus} -\frac{1}{2}\sum_{\gamma \in T, \beta \in \mathbf{B(S,M)} } p(\gamma;C)l(\beta,L) B_{\beta,\gamma}. \end{equation} We claim the sum above evaluates to $p(L; C)$. For the sum to be nonzero, $L$ must have an end on some boundary segment $\beta = [v_{i-1},v_{i}]$, with $v_{i-1},v_i$ in clockwise order. This segment $\beta$ is contained in a unique triangle in $T$. Call the other two sides in this triangle $\gamma_{i-1}$ and $\gamma_i$, whose endpoints on $C$ are $v_{i-1}$ and $v_i$ respectively. There are cases according to whether either of these sides is a boundary segment. If neither is, then $p(\gamma_{i-1}; C) B_{\beta,\gamma_{i-1}} = p(\gamma_{i}; C)B_{\beta,\gamma_{i}} $ is $\pm 1$ according to whether $v_i$ is white or black. The total contribution to \eqref{residueinannulus} is $ p(L;C)$. In the degenerate case that $\gamma_i$ is a boundary segment, it does not contribute to \eqref{residueinannulus}, but $p(\gamma_{i-1}; C) = 2$, which compensates; the remaining cases are handled similarly. \end{proof} \begin{prop}\label{QHsofSurfaces} Suppose $(\mathbf{S,M})$ is not among the four listed exceptions in Proposition~\ref{ASSconjecture}. Let $\mathbf{L},\mathbf{L'}$ be multi-laminations on $(\mathbf{S,M})$ and recall the submodules $V_\mathbf{L}$ and $V_\mathbf{L'}$ from Theorem~\ref{whichgworkwithL}. Let $g \in \mathcal{M} \mathcal{G}_\bowtie(\mathbf{S,M})$ be a tagged mapping class and $\pi_g$ the corresponding signed permutation matrix.
The following are equivalent: \begin{itemize} \item there is a quasi-homomorphism $\Psi$ from $\mathcal{A}(\mathbf{S,M,L})$ to $\mathcal{A}(\mathbf{S,M,L'})$ whose map on tagged triangulations is $T \mapsto g (T)$ (that is, $\Psi(\Sigma(T)) \sim \overline{\Sigma}(g(T))$), \item $V_{\mathbf{L'}} \subset \pi_g(V_{\mathbf{L}})$. \end{itemize} \end{prop} \begin{proof}[Proof of Proposition \ref{QHsofSurfaces}] A quasi-homomorphism $\Psi$ from $\mathcal{A}(\mathbf{S,M,L})$ to $\mathcal{A}(\mathbf{S,M,L'})$ is determined by a pair of tagged triangulations $T$ and $T'$, such that $\tilde{B}(T,\mathbf{L})$ and $\tilde{B}(T',\mathbf{L'})$ are related as in \eqref{newbtilde}. Since the principal parts of these matrices agree, by Proposition~\ref{ASSconjecture} there is a tagged mapping class $g$ such that $g(T) = T'$. Furthermore, for each lamination $L' \in \mathbf{L'}$, the vector $\vec{b}(T',L')$ must be in $\Span(\{\vec{b}(T,L) \colon L \in \mathbf{L}\})$. Since $\vec{b}(T',L') = \vec{b}(T,g^{-1} (L'))$, by Lemma \ref{kernelbasisatClem}, it is equivalent to find a linear combination of $\vec{b}(T,g^{-1}(L'))$ and $\{\vec{b}(T,L) \colon L \in \mathbf{L}\}$ that has zero residue around every even component. Proposition~\ref{QHsofSurfaces} follows now from Lemma~\ref{annularnbhdlemma} and the fact that $g$ acts on a vector of pairings by the matrix $\pi_g$. \end{proof} \begin{proof}[Proof of Lemma \ref{kernelbasisatClem}] Restating the Lemma, we seek to show that the $R_C$ form a basis for the dual space to the row span. We begin by verifying that each of these vectors pairs to zero with the row span. First, we check this when $C$ is a puncture. We begin with the case that all of the arcs in $T$ are untagged at $C$. We need to check that $ \sum_{\gamma \in T} p(\gamma;C) B(T)_{\gamma',\gamma}$ vanishes for each $\gamma' \in T$. Indeed, letting $L $ be the lamination consisting of a tiny simple closed curve contractible to $C$, the shear coordinate vector $\vec{b}(T,L)$ is clearly $0$. Now we apply \eqref{sccshearcoordeq} for this choice of $L$: the $\gamma'^{\text{th}}$ component of \eqref{sccshearcoordeq} says $ 0 = \sum_{\gamma \in T}p(\gamma ; C) B(T)_{\gamma,\gamma'}$ as desired, using the fact that $l(\gamma,L) = 0$ if $\gamma$ is a boundary segment. The argument when all arcs are tagged at $C$ is identical. If $C$ is incident to exactly two arcs, namely the plain and tagged version of the same arc, then the claim for $R_C$ follows from \cite[Definition $9.6$]{CATSI} (or a calculation in a once-punctured digon). Second, we check this when $C \subset \partial \mathbf{S}$ is a boundary component. Number the cilia on $C$ by $v_1, \dots ,v_{2m}$. For each $i \in [1,2m]$, let $L_i$ be a tiny lamination contractible to $v_i$ -- its two endpoints are on the two boundary segments adjacent to $v_i$. Again, $\vec{b}(T,L_i)$ is clearly $0$ and in particular $\sum_{\text{$v_i$ black}} \vec{b}(T,L_i) = \sum_{\text{$v_i$ white}} \vec{b}(T,L_i)$. Summing over the corresponding right hand sides of \eqref{sccshearcoordeq}, again performing the matrix multiplication in \eqref{sccshearcoordeq} in block form as in the argument for Lemma \ref{annularnbhdlemma}, the terms corresponding to boundary segments are present in both the sum over black $v_i$ and the sum over white $v_i$.
Canceling these common terms, we get the equality $\sum_{\text{$v_i$ black, $\gamma \in T$ }} l(\gamma,L_i) B(T)_{\gamma,\gamma'}= \sum_{\text{$v_i$ white, $\gamma \in T$}} l(\gamma,L_i) B(T)_{\gamma,\gamma'}$ for all $\gamma'$, which says $\sum_{\gamma \in T} p(\gamma;C) B(T)_{\gamma,\gamma'} = 0$ for all $\gamma'$ as desired. Thus all of the $R_C$ pair to zero with the row span of $B(T)$. We will now show that they are linearly independent, which completes the proof since they have the expected size by \cite[Theorem $14.3$]{CATSI}. Consider a linear relation of the form \begin{equation}\label{relationofrelations} \sum a_C R_C = 0. \end{equation} We define scalars $a_v$ for all marked points $v \in \mathbf{M}$ as follows: if $v$ is a puncture $C$, then $a_v = a_C$. If $v$ is a cilium residing on an even component $C$, then $a_v = \pm a_C$, with $\pm$ sign consistent with the black-white coloring on $C$. If $v$ is a cilium on an odd component, we set $a_v = 0$. Now consider any vertices $v_1,v_2$ forming an edge in the triangulation $T$. We claim \begin{equation}\label{balancedvertices} a_{v_1}+a_{v_2} = 0. \end{equation} Indeed, if $v_1,v_2$ are the endpoints of an arc $\gamma \in T$, the $\gamma^{\text{th}}$ component of the relation \eqref{relationofrelations} is $a_{v_1}+a_{v_2}$ by construction, and \eqref{balancedvertices} holds. If they are the endpoints of a boundary segment, then \eqref{balancedvertices} clearly holds. However, in any given triangle in $T$ with vertices $v_1,v_2,v_3$, the only way for \eqref{balancedvertices} to hold for all $3$ of the pairs $(v_1,v_2),(v_1,v_3),(v_2,v_3)$ is if $a_{v_1} =a_{v_2} =a_{v_3} = 0$. Varying the vertex and triangle containing it, this establishes that $a_v =0$ for all $v \in \mathbf{M}$, and hence all $a_C = 0$, as desired. \end{proof} \section{Appendix: The starfish lemma on a nerve}\label{StarfishSecn} We give the appropriate generalization of the Starfish Lemma \cite[Proposition 3.6]{tensors} from a star neighborhood to a nerve. Our proof follows the proof of the Starfish Lemma in \cite{CABook}, with appropriate modifications. Let $R$ be a domain. We say two elements $r,r' \in R$ are \emph{coprime} if they are not contained in the same prime ideal of height $1$. When $R$ is a unique factorization domain, every pair of non-associate irreducible elements is coprime. \begin{prop}\label{starfishprop} Let $\mathcal{N}$ be a nerve in $\mathbb{T}_n$. Let $\mathcal{R}$ be a $\mathbb{C}$-algebra and a Noetherian normal domain. Let $\mathcal{E}$ be a seed pattern of geometric type, satisfying the following: \begin{itemize} \item all frozen variables are in $\mathcal{R}$; \item for each vertex $t \in \mathcal{N}$, the cluster $\mathbf{x}(t) \subset \mathcal{R}$, and the cluster variables $x \in \mathbf{x}(t)$ are pairwise coprime elements of $\mathcal{R}$; \item for each edge $t \xrightarrow{k} t'$ in $\mathcal{N}$, the cluster variables $x_k(t)$ and $x_k(t')$ are coprime. \end{itemize} Then the cluster algebra $\mathcal{A}$ defined by $\mathcal{E}$ satisfies $\mathcal{A} \subset \mathcal{R}$. \end{prop} The proof relies on the following two lemmas, the first of which is a standard fact from commutative algebra. For a prime ideal $P$, let $R_P = R[(R \setminus P)^{-1}]$ denote the localization of $R$ at $P$. \begin{lem}[{\cite[Theorem 11.5]{Matsumura}}]\label{heightoneprimes} For a normal Noetherian domain $R$, the natural inclusion $R \subset \cap_{\textnormal{ht $P = 1$}} R_P$ (intersection over height one primes) is an equality.
\end{lem} \begin{lem}\label{goodclusterforP} With hypotheses as in Proposition \ref{starfishprop}, let $P$ be a height one prime ideal in $R$. Then at least one of the products \begin{equation}\label{hartogseqn} \prod_{x \in \mathbf{x}(t), t \in \mathcal{N}} x \end{equation} is not in $P$. \end{lem} \begin{proof} By the coprimeness in each cluster $t \in \mathcal{N}$, at most one of the cluster variables $x$ in a product \eqref{hartogseqn} satisfies $x \in P$. We will show that for at least one $t$, \emph{none} of the cluster variables is in $P$, establishing our claim since $P$ is prime. Pick any vertex $t_0 \in \mathcal{N}$, and suppose the cluster variable $x_i \in P$. Given an edge $t_0 \xrightarrow{j} t_0' \subset \mathcal{N}$ where $j \neq i$, the cluster variable $x_j(t_0') \notin P$ by the coprimality assumption in the cluster at $t_0'$. Repeatedly applying this assumption while mutating along the nerve, using the connectedness hypothesis and the fact that every edge label shows up in the nerve, we finally arrive at a vertex $t \in \mathcal{N}$ such that the edge $t \xrightarrow{i} t' \subset \mathcal{N}$, and all of the extended cluster variables $x_j \in \tilde{\mathbf{x}}(t)$ with $j \neq i$ are not in $P$. By the coprimeness assumption along edge $i$, we see $x_i(t') \notin P$, and the cluster at $t'$ is one where the product \eqref{hartogseqn} is not in $P$. \end{proof} \begin{proof}[Proof of Proposition \ref{starfishprop}] We need to prove each cluster variable $z$ is in $R$. By Lemma \ref{heightoneprimes}, it suffices to show $z \in R_P$ for any height one prime $P$. Indeed, by Lemma \ref{goodclusterforP} there is a cluster $t \in \mathcal{N}$ such that $\prod_{x \in \mathbf{x}(t)} x \notin P$. By the Laurent Phenomenon, $z$ is a Laurent polynomial in the elements of $\mathbf{x}(t)$, with coefficients in $\mathbb{C}[x_{n+1},\dots,x_{n+m}]$. In particular, $z \in R_P$, as desired. \end{proof} \section{Appendix: Grassmannians and Band Matrices}\label{GrassmannianandBands} As an illustration of Proposition \ref{qhnormalizedprop}, we extend the constructions in Example \ref{GrandBandsQH} and Example \ref{GrandBandsQI} from the case $(k,n) = (2,5)$ to general $(k,n)$. Let \begin{equation}\label{bfxdefn} \mathbf{X} = \tGr(n-k,n) \end{equation} be the affine cone over the Grassmannian of $(n-k)$-dimensional subspaces of $\mathbb{C}^n$. Its points are the decomposable tensors in $\Lambda^{n-k}{\mathbb{C}^n}$. Let \begin{equation}\label{bfydefn} \mathbf{Y} \cong \mathbb{C}^{(n-k)(k+1)} \end{equation} be the affine space of $(n-k) \times n$ \emph{band matrices of width $k+1$}, i.e. the set of matrices $Y$ whose entries $y_{i,j}$ are zero unless $i \leq j \leq i+k$. We will describe a quasi-isomorphism of the coordinate rings $\mathbb{C}[\mathbf{X}]$ and $\mathbb{C}[\mathbf{Y}]$, and in particular a cluster structure on $\mathbb{C}[\mathbf{Y}]$ which appears to be new. The coordinate ring $\mathbb{C}[\mathbf{X}]$ is the ring generated by the Pl\"ucker coordinates $\Delta_S$ as $S$ ranges over $(n-k)$-subsets of $n$. It has a cluster structure of geometric type cf.~\cite{Scott} in which the frozen variables are those Pl\"ucker coordinates consisting of cyclically consecutive columns. The non-frozen Pl\"ucker coordinates are all cluster variables. 
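To fix ideas, here is the picture in the case $(k,n)=(2,5)$ of Examples \ref{GrandBandsQH} and \ref{GrandBandsQI}, so that $\mathbf{X}=\tGr(3,5)$: the Pl\"ucker coordinates are indexed by the ten $3$-element subsets of $\{1,\dots,5\}$, the five coordinates on cyclically consecutive columns
\[
\Delta_{123},\ \Delta_{234},\ \Delta_{345},\ \Delta_{145},\ \Delta_{125}
\]
are frozen, and the remaining five coordinates
\[
\Delta_{124},\ \Delta_{134},\ \Delta_{135},\ \Delta_{235},\ \Delta_{245}
\]
are cluster variables (this recovers the familiar type $A_2$ cluster structure).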
We will introduce a useful sign convention: if $S$ is any set of $(n-k)$ natural numbers, we let $\Delta_S$ denote the Pl\"ucker coordinate obtained by first reducing all the elements of $S$ to their least positive residues modulo $n$, sorting these residues, and then taking the corresponding Pl\"ucker coordinate. If there are fewer than $(n-k)$ distinct elements in $S$ modulo $n$, then $\Delta_S$ is identically zero. The coordinate ring $\mathbb{C}[\mathbf{Y}]$ contains minors $Y_{I,J}$ for subsets $I \subset [1,n-k]$ and $J \subset [1,n]$ of the same size, denoting row and column indices respectively. It is a polynomial ring in the coordinate functions $Y_{i,j}$, $1 \leq i \leq j \leq i+k \leq n$. The following elements will serve as frozen variables in $\mathbb{C}[\mathbf{Y}]$: \begin{align} Y_{1,1},Y_{2,2},\dots,Y_{n-k,n-k} \text{ and } Y_{1,k+1},Y_{2,k+2},\dots,Y_{n-k,n}\text{, and} \\ Y_{[1,n-k],[2,n-k+1]},Y_{[1,n-k],[3,n-k+2]},\dots,Y_{[1,n-k],[k,n-1]}; \end{align} we let $\overline{\mathbf{P}}$ denote the corresponding tropical semifield in these frozen variables. Just as in Example \ref{GrandBandsQH}, there is a morphism of varieties $F \colon \mathbf{Y} \to \mathbf{X}$ sending $Y \in \mathbf{Y}$ to the decomposable tensor $Y[1] \wedge \cdots \wedge Y[n-k] \in \mathbf{X}$, where the $Y[i]$ are the rows of $Y$. The map on coordinate rings is \begin{equation}\label{Fstardefn} F^* \colon \mathbb{C}[\mathbf{X}] \to \mathbb{C}[\mathbf{Y}], \text{ defined by $F^*(\Delta_S) = Y_{[1,n-k],S}$.} \end{equation} Letting $\Delta_{S}$ be any non-frozen Pl\"ucker coordinate, one sees that \begin{equation}\label{conPluckers} F^*(\Delta_S) = c(S) \cdot Y_{I(S),J(S)} \end{equation} where $c(S) \in \overline{\mathbf{P}}$ and $Y_{I(S),J(S)}$ is a non-frozen irreducible row-solid minor in $\mathbb{C}[\mathbf{Y}]$. The map $\Delta_S \mapsto Y_{I(S),J(S)}$ is a bijection between the non-frozen Pl\"ucker coordinates in $\mathbb{C}[\mathbf{X}]$ and the non-frozen irreducible row-solid minors in $\mathbb{C}[\mathbf{Y}]$. Just as in Example \ref{GrandBandsQI}, there is a morphism of varieties $G \colon \mathbf{X} \to \mathbf{Y}$ sending $X \in \mathbf{X}$ to the band matrix whose $(i,j)$ entry is a certain Pl\"ucker coordinate evaluated on $X$: \begin{equation}\label{gijdefn} G(X)_{i,j} = \Delta_{[i+k+1,n+i-1] \cup j }(X). \end{equation} Since the Pl\"ucker coordinate on the right hand side of \eqref{gijdefn} is nonzero only when $i \leq j \leq i+k$, $G(X)$ is indeed a point in $\mathbf{Y}$. The map on coordinate rings is \begin{equation}\label{Gstardefn} G^*(Y_{i,j}) = \Delta_{[i+k+1,n+i-1] \cup j }. \end{equation} \begin{thm}\label{GrandBandsThm} Let $\mathbf{X}$ and $\mathbf{Y}$ be the varieties in \eqref{bfxdefn} and \eqref{bfydefn}. Let $\overline{\mathcal{F}}_{>0}$ be the semifield of subtraction-free expressions in the $Y_{i,j}$. Abusing notation, let $F^* \colon \mathcal{F}_{>0} \to \overline{\mathcal{F}}_{>0}$ be the semifield map determined by the map $F^*$ from \eqref{Fstardefn}, and let $G^* \colon \overline{\mathcal{F}}_{>0} \to \mathcal{F}_{>0}$ be the map determined by \eqref{Gstardefn}. Then $\mathbb{C}[\mathbf{Y}]$ is a cluster algebra of geometric type, and the maps $F^*$ and $G^*$ are quasi-inverses. All of the irreducible row-solid minors in $\mathbb{C}[\mathbf{Y}]$ are cluster or frozen variables.
\end{thm} \begin{proof} The proof relies on the following well-known properties of the cluster structure on $\mathbb{C}[\mathbf{X}]$: \begin{enumerate} \item There exist clusters in $\mathbb{C}[\mathbf{X}]$ consisting entirely of Pl\"ucker coordinates (called \emph{Pl\"ucker clusters}). Every non-frozen Pl\"ucker coordinate shows up in at least one of these clusters. \item The set of Pl\"ucker clusters are connected to one another by mutations whose exchange relations are \emph{short Pl\"ucker relations}, of the form $\Delta_{S \cup ik}\Delta_{Sj\ell} = \Delta_{Sij}\Delta_{Sk\ell}+ \Delta_{Sjk}\Delta_{Si\ell}$, for $i < j < k < l$ and $Sij$ denotes the union $S \cup \{i,j\}$). \item For certain Pl\"ucker clusters, every neighboring cluster is again a Pl\"ucker cluster. \end{enumerate} The first two of these facts are consequences of the technology of plabic graphs and square moves, the third fact follows by considering a particular explicit Pl\"ucker seed for the Grassmannian (one whose quiver is a ``grid quiver''). We can now state Theorem \ref{GrandBandsQH} more carefully. Let $\Sigma$ be any Pl\"ucker cluster in $\mathbb{C}[\mathbf{X}]$. For each Pl\"ucker coordinate $x_i \in \Sigma$, let $\overline{x}_i$ be the irreducible row-solid minor in $\mathbb{C}[\mathbf{Y}]$ related to $x_i$ as in \eqref{conPluckers}. We will see below that $\{\overline{x}_i \colon x_i \in \Sigma \}$ are algebraically independent generators for $\overline{\mathcal{F}}_{>0}$ over $\overline{\mathbf{P}}$. Assuming this has been proved, applying the construction in Proposition \ref{qhnormalizedprop}, we obtain a semifield map $c_\Sigma \colon \mathcal{F}_{>0} \to \overline{\mathbf{P}}$ and a normalized seed $F^*(\Sigma)$ whose cluster variables are $\overline{x}_i = \frac{F^*(x_i)}{c_\Sigma(x_i)}$ as in \eqref{seedhomtopatternhomeqsiv}. The semifield map $c_\Sigma$ satisfies $c_\Sigma(\Delta_S) = c(S)$ for all $S \in \Sigma$, where $c(S)$ is defined by \eqref{conPluckers}. The \emph{key claim} is that in fact $c_\Sigma(\Delta_S) = c(S)$ holds for all $\Delta_S$, and therefore the semifield map $c_\Sigma$ does not depend on $\Sigma$. From this key claim, it follows that the seeds $F^*(\Sigma)$ are all related to each other by mutation using Proposition \ref{qhnormalizedprop}. Furthermore, every non-frozen irreducible row-solid minor is a cluster variable in $\overline{\mathcal{E}}$ by \eqref{seedhomtopatternhomeqsiv}. Since this includes all of the $Y_{i,j}$, this shows that the $F^*(\Sigma)$ are indeed seeds -- each seed has the expected size necessary to form a transcendence basis for $\mathbb{C}(\mathbf{Y})$, the seeds are all related to each other by mutation, and their union clearly contains a generating set for the field of fractions. The cluster algebra for the resulting seed pattern clearly contains $\mathbb{C}[\mathbf{Y}]$. The opposite containment follows from the Algebraic Hartogs' argument on a starfish cf.~Section~\ref{StarfishSecn}, using Fact (3) above. Thus it remains to check the key claim that $c_\Sigma(\Delta_S) = c(S)$ for all non-frozen $S$. By Fact (2), it suffices to check that $c$ preserves the short Pl\"ucker relations, i.e. \begin{equation}\label{checkcsmatch} c(Sik)c(Sj\ell) = c(S ij)c(S k\ell) \oplus c(S jk)c(S i\ell). \end{equation} Verifying \eqref{checkcsmatch} is a direct piecewise check: the exponent of $Y_{a,a}$ in the left hand side of \eqref{checkcsmatch} is $0,1,2$ according to whether neither, one of, or both of $Sik$ and $Sj\ell$ contain the interval $[1,a]$. 
Performing a similar computation for $Sij$ and $Sk \ell$, as well as $Sjk$ and $Si \ell$, and taking the minimum of their respective answers, gives the exponent of $Y_{a,a}$ in the right hand side, and we claim these left and right hand exponents are always equal. This can be done by a case analysis: let $E$ be the largest number such that $[1,E] \subset Sijk\ell$. If $E <i$, then both sides return a $2$ if $a \leq E$, and $0$ otherwise. If $i \leq E < j$, then both sides return a $2$ if $a < i$, return a $1$ if $i \leq a \leq E$, and return a $0$ otherwise. If $j \leq E $ then both sides return a $2$ if $a < i$, return a $1$ if $i \leq a < j$, and return a $0$ otherwise. A similar calculation checks that the exponents of $Y_{a,a+k}$ match up in both sides of \eqref{checkcsmatch}. Finally, we check that $G^*$ is a quasi-inverse to $F^*$. By Lemma \ref{constructQIs}, we only need to see that $G^*$ preserves coefficients and that $G^* \circ F^*$ is proportional to the identity. It suffices to check that $G^* \circ F^* (\Delta_S) \asymp \Delta_S$ for every $\Delta_S$. This follows from the determinantal identity Lemma \ref{flattoband} below, applied to $G^* \circ F^* (\Delta_S) = G^* (Y_{[1,n-k],S})$. \end{proof} \begin{lem}\label{flattoband} Let $I = [a,a+s-1]$ be some consecutive subset of $[n-k]$, and $J $ a subset of $[a,a+s-1+k]$ of size $s$. Then, for $X \in \mathbf{X}$, \begin{equation}\label{thirdminoreq} Y_{I,J}(G(X)) = \left(\prod_{i=a}^{a+s-2} \Delta_{[i+k+1,n+i] }(X) \right) \cdot \Delta_{ [a+k+s,n+a-1] \cup J}(X). \end{equation} \end{lem} Notice that the first product on the right hand side of \eqref{thirdminoreq} is a monomial in the frozen Pl\"ucker coordinates. \begin{proof} Proceed by induction on $s$. It's clear when $s=1$. For $s >1$, we will need a Pl\"ucker relation \begin{equation}\label{LongPlucker} \Delta_{[a+k+1,n+a]}(X) \Delta_{[a+k+s,n+a-1] \cup J}(X) = \sum_{\ell=1}^s (-1)^{\ell+1}\Delta_{[a+1+k,n+a-1] \cup j_\ell}(X) \Delta_{[a+k+s,n+a] \cup (J-j_\ell)}(X), \end{equation} see e.g. \cite[Section 9.1, Exercise 1]{YoungTableaux}. Let $J = \{j_1,\dots,j_s\}$ with $j_1 < j_2 < \dots < j_s$. Assuming \eqref{thirdminoreq} holds for smaller values of $s$, we expand along the first row to see \begin{align*} Y_{I,J}(G(X)) &= \sum_{\ell = 1}^s (-1)^{\ell+1} Y_{a,j_\ell}(G(X)) Y_{(I-a),(J-j_\ell)}(G(X)) \\ &= \sum_{\ell = 1}^s (-1)^{\ell+1} \Delta_{[a+k+1,n+a-1] \cup j_\ell}(X) \left(\prod_{i=a+1}^{a+s-2} \Delta_{[i+k+1,n+i]}(X)\right) \cdot \Delta_{[a+k+s,n+a] \cup (J-j_\ell)}(X) \\ &= \left(\prod_{i=a+1}^{a+s-2} \Delta_{[i+k+1,n+i]}(X)\right) \sum_{\ell=1}^s (-1)^{\ell+1}\Delta_{[a+1+k,n+a-1] \cup j_\ell}(X) \Delta_{[a+k+s,n+a] \cup (J-j_\ell)}(X), \end{align*} and the result follows using \eqref{LongPlucker}. \end{proof} \begin{rmk} In the case $k=2$, our construction is the ``motivating example'' considered by Yang and Zelevinsky \cite{YZ}. They establish that the homogeneous coordinate ring of a certain $\SL_{n+1}$-double Bruhat cell is a Dynkin type $A_n$ cluster algebra with principal coefficients. The elements of this double Bruhat cell are $(n+1) \times (n+1)$ band matrices of width $3$. Their example follows from ours by setting certain frozen variables equal to $1$, and setting $Y_{1,1} $ and $Y_{n-k,n}$ equal to $0$ (this latter operation is permissible because both of these frozen variables are isolated vertices in the quivers for $\mathbb{C}[\mathbf{Y}]$).
We also remark that it is already known that the Grassmannian cluster algebras are quasi-isomorphic to a polynomial ring, by a fairly uninteresting quasi-isomorphism. Indeed, we can realize the affine space of $(n-k) \times k$ matrices as the closed subvariety of $\tGr(n-k,n)$ defined by specializing the frozen variable $\Delta_{[1,n-k]}$ to $1$, and this specialization is a quasi-isomorphism. The resulting cluster structure on the polynomial ring is unrelated to the one we have given in this section. \end{rmk}
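We end with an explicit instance of the factorization \eqref{conPluckers}, again in the case $(k,n)=(2,5)$. For $S=\{1,2,4\}$, the band structure of $Y$ gives
\[
F^*(\Delta_{124}) \;=\; Y_{[1,3],\{1,2,4\}} \;=\; \det\begin{pmatrix} Y_{1,1} & Y_{1,2} & 0\\ 0 & Y_{2,2} & Y_{2,4}\\ 0 & 0 & Y_{3,4} \end{pmatrix} \;=\; Y_{1,1}Y_{2,2}\cdot Y_{3,4},
\]
so that $c(S)=Y_{1,1}Y_{2,2}\in\overline{\mathbf{P}}$ is a monomial in the frozen variables and $Y_{I(S),J(S)}=Y_{3,4}$ is the corresponding non-frozen irreducible row-solid minor.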
2,869,038,156,216
arxiv
\section{Introduction } Let $\{\xi_i, \ i\ge 1,\}$ be a sequence of random variables and $S_n=\sum_{i=1}^n \xi_i, \ n\ge 1, S_0=0.$ The asymptotic theory of partial sums $S_n$ and related partial sum-processes $S_n(t)=S_{[nt]}, \ t\ge 0$ is well developed and documented in many monographs starting with the classical book \cite{Gnedenko}. Let us note that in the summation theory for sequences we perform the summation over increasing (with $n\to \infty$) intervals of integers $[1, n]$, although there is a possibility to consider sums over some increasing sets $A_n\subset A_{n+1}\subset {\bf Z}$. Passing from sequences to random fields we have much more flexibility, even in formulation of a problem, and one can say that the asymptotic theory of summation of values of random fields is developed less comparing with the same theory for sequences. To explain our goals and for the simplicity of writing and understanding, we consider a stationary random field (r.f.) $Y=\{Y_{k_1, k_2}, \ (k_1, k_2)\in {\bf Z}^2,\}$ indexed by two-dimensional indices. We can perform the summation of the values of the r.f. over some sequence of increasing sets $A_n\subset A_{n+1}\subset {\bf Z}^2$, or even over some sets, indexed by multi-indices, and investigate $\sum_{(k_1, k_2)\in A_n}Y_{k_1, k_2}$. Rectangles (sets, indexed by two-dimensional indices) present one of the most simple sets for such summation, and we can consider \begin{equation}\label{sum} S_{n_1, n_2}=S_{n_1, n_2}(Y):=\sum_{k_1=1}^{n_1}\sum_{k_2=1}^{n_2} Y_{k_1, k_2} \ {\rm and} \ S_{n_1, n_2}(t_1, t_2; Y)=S_{[n_1t_1], [n_2t_2]}(Y), \quad t_1\ge 0, \ \ t_2 \ge 0. \end{equation} To avoid the centering, let us assume that, if a r.f. $Y$ has the first moment finite, then $EY_{0, 0}=0$. One can expect that under some mild conditions there exist some unboundedly growing constants $A_{n_1, n_2}$ such that finite dimensional distributions (f.d.d.) of $A_{n_1, n_2}^{-1}S_{n_1, n_2}(t_1, t_2)$ converges, as $(n_1, n_2)\to \infty,$ to f.d.d. of some non-trivial (not identically equal to zero) r.f. $V(t, s).$ Here and in what follows $(n_1, n_2)\to \infty$ means that $\min(n_1, n_2)\to \infty.$ It is not difficult to see that the situation described above for r.f. is analogous to the situation for sequences, considered in Lamperti theorems and giving rise to self-similar processes, see \cite{Lamperti}. We refer a reader to the recent paper \cite{DavPau1} and references therein, where generalizations of Lamperti theorems for r.f. are considered. In our context the following result - Corollary 1 from \cite{DavPau1} - is important. We shall formulate as Proposition only particular case (in \cite{DavPau1} ${\mathbb R}^m$-valued random fields on ${\bf Z}^d$ are considered). \begin{prop} \label{prop0} Suppose that $\eta=\{\eta_{i_1, i_2}, (i_1, i_2)\in {\bf Z}^2\}$ is a real-valued stationary random field. If there exists a function $f(n_1, n_2)\to \infty$ as $(n_1, n_2)\to \infty$ and $b\in {\mathbb R}$ such that \begin{equation}\label{coreq} \left \{\frac{S_{[n_1t_1], [n_2t_2]}(\eta -b)}{f(n_1, n_2)}, \ (t_1, t_2) \in {\mathbb R}_+^2\right \}\stackrel{f.d.d.}{\longrightarrow} \{V(t_1, t_2), \ (t_1, t_2) \in {\mathbb R}_+^2 \}, \end{equation} where $V$ is a non-degenerate continuous in probability real-valued random field, then $f(a_1, a_2)= a_1^{H_1}a_2^{H_2}L(a_1, a_2)$ with some $H_i>0, i=1, 2,$ and some coordinate-wise slowly varying function $L,$ and $V$ is $(H_1, H_2)$-multivariate self-similar random field. 
\end{prop} We do not provide here definitions of a coordinate-wise slowly varying function and $(H_1, H_2)$-multivariate self-similar random field, referring a reader to \cite{DavPau1}, since for our purposes these notions are unimportant. For us the following fact is important: in the limit theorem, formulated in Proposition \ref{prop0}, the limit random field does not depend on the way how $(n_1, n_2)$ tends to infinity. Examples of stationary random fields, for which Proposition \ref{prop0} can be applied were given in \cite{Paul20} (see the discussion on directional memory for random fields and scaling transition at the end of this paper). It was shown that if a linear field \begin{equation}\label{field2} \eta_{k_1, k_2}=\sum_{i,j=0}^\infty c_{i_1,i_2}\varepsilon_{k_1-i_1, k_2-i_2}, \ (k_1, k_2)\in {\bf Z}^2, \end{equation} where innovations $\varepsilon_{k_1, k_2}, \ (k_1, k_2)\in {\bf Z}^2,$ are i.i.d. random variables with mean zero and finite variance and the filter $\{c_{i_1, i_2}, \ i_1\ge 0, i_2\ge 0\}$ is of the form $c_{i_1, i_2}=a_{i_1}b_{i_2}$ with some sequences $\{a_i\}, \{b_i\}$, then for such linear random field Proposition \ref{prop0} can be applied. On the other hand, there are random fields for which the limit random field for sums $S_{[n_1t_1], [n_2t_2]}(\xi -b)$ depends on the way how $(n_1, n_2)$ tends to infinity. In the recent papers \cite{Puplinskaite1}, \cite{Puplinskaite2}, and \cite{Pilipauskaite} the so-called phenomenon of the scale transition for random fields in the case $d=2$ was described. This phenomenon can be explained as follows. We use the notation introduced in (\ref{sum}). The sides $n_1$ and $n_2$ of rectangles in (\ref{sum}) are allowed to grow to infinity arbitrary, now let us suppose that these lengths are connected by the relation $n_1=n, \ n_2=n^\gamma, \ \gamma>0$. Let us consider the r.f. \begin{equation}\label{sumgamma} Z_{n, \gamma}(t_1, t_2)=S_{n, n^{\gamma}}(t_1, t_2), \quad t_1\ge 0, \quad t_2\ge 0. \end{equation} Let us assume that, for any $\gamma>0$, there exists a nontrivial random field $V_\gamma (t_1, t_2)$ and a normalization $A_n (\gamma) \to \infty$ such that f.d.d. of $A_n^{-1} (\gamma)Z_{n, \gamma}(t_1, t_2)$ converges weakly to f.d.d. of $V_\gamma$. In \cite{Puplinskaite2} the following definition is given. \begin{definition}\label{sctran} A random field $Y$ exhibits scaling transition if there exists $\gamma_0>0$ such that the limit process $V_\gamma$ is the same, let say $V_+$, for all $\gamma >\gamma_0$ and another, not obtained by simple scaling, $V_-$, for $\gamma <\gamma_0$. The r.f. $V_{\gamma_0}$ is called well-balanced scaling limit of $Y$ and r.f. $V_+$ and $V_-$ are called unbalanced scaling limits of $Y$. \end{definition} In \cite{Puplinskaite2} Gaussian random fields are investigated and it is shown that if the spectral density of a Gaussian random field $Y$ is \begin{equation}\label{specden} f(x_1, x_2)=\frac{g(x_1, x_2)}{(|x_1|^2+c|x_2|^{2H_2/H_1})^{H_1/2}}, \quad (x_1, x_2)\in [-\pi, \pi]^2, \end{equation} where $0<H_1\le H_2<\infty, \ H_1H_2<H_1+H_2, \ c>0$ and $g$ is bounded and continuous at origin and $g(0, 0)=1,$ then the random field $Y$ exhibits the scaling transition at $\gamma_0=H_1/H_2$ and the expressions of unbalanced and well-balanced scaling limits of $Y$ are described. For the precise formulation see Theorem 3.1 in \cite{Puplinskaite2}. In \cite{Puplinskaite1} the scaling transition phenomenon is demonstrated for aggregated nearest neighbor random coefficients autoregressive r.f. 
with finite and infinite variance. In \cite{Pilipauskaite} the same phenomenon is shown for non-linear r.f., obtained by taking Appell polynomials of linear r.f. with filter coefficients decaying at possibly different rates in the horizontal and the vertical directions. All these examples are quite complicated and it is difficult to understand what causes the scaling transition phenomenon. In all three above-mentioned papers the fields exhibiting the scaling transition have one main feature, long-range dependence, and one might think that this feature of a r.f. is responsible for the phenomenon of scaling transition. That this is not the case is shown by the example given above: if the filter of the linear field (\ref{field2}) is of the form $c_{i_1, i_2}=a_{i_1}b_{i_2}$ with some sequences $\{a_i\}, \{b_i\}$, then the linear r.f. (\ref{field2}) does not exhibit the scaling transition, although it can have long or short range dependence and all sorts of directional memory (positive, zero or negative in any direction). Also in \cite{Puplinskaite2} it is shown that a Gaussian r.f. with long-range dependence and with spectral density of the form \begin{equation}\label{specden1} f(x_1, x_2)=\frac{g(x_1, x_2)}{|x_1|^{2d_1}|x_2|^{2d_2}}, \end{equation} where $0<d_1, d_2 <1/2$ and $g$ has the same properties as in (\ref{specden}), does not exhibit the scaling transition. Thus, long-range dependence, although important in this problem, is not a necessary condition for the scaling transition. The main goal of the present paper is to provide the simplest possible examples of r.f. exhibiting the scaling transition in order to better understand the mechanism behind the scaling transition. We shall consider only linear r.f. (\ref{field2}) and their sums of the form (\ref{sum}), thus in our considerations there are only two factors which can cause the scaling transition, namely the filter $\{c_{i_1, i_2}\}$, defining the dependence structure of the r.f. $X$, and the way in which $(n_1, n_2)$ tends to infinity. Let us note that even in the case of linear random fields with finite variance at present we have no complete answer to the question which properties of the filter lead to a Lamperti-type limit theorem, as in Proposition \ref{prop0}, and which lead to the scaling transition phenomenon. Here it is worth mentioning that in the papers \cite{Puplinskaite1} and \cite{Pilipauskaite} the starting point is a linear r.f., but in \cite{Puplinskaite1} a linear r.f. with random filter coefficients (an autoregressive r.f. with random coefficients) is considered and an aggregation procedure is applied. Then the limit r.f. is investigated for the scaling transition, while in \cite{Pilipauskaite} Appell polynomials of linear r.f. are considered. Therefore, in these examples there are more factors which can be responsible for the scaling transition. Also we demonstrate that our examples can be easily generalized to the case of r.f. indexed by indices in ${\bf Z}^d, \ d>2.$ At the final stage of the preparation of the paper we became aware of the publication \cite{Surg}, where scaling transition is investigated for linear random fields on ${\bf Z}^3$. Taking into account the results of that paper and our simple examples, one can say that as $d$ increases the picture of the scaling transition becomes more complicated, even for those simple examples of linear random fields which we consider. \section{Preliminaries} Although in the paper we shall investigate mainly the cases of linear r.f.
defined on $\mathbb{Z} ^2$ or on $\mathbb{Z} ^3$, in this section we shall derive some formulae and explain the main idea in the case of a general linear field defined on $\mathbb{Z} ^d$. In what follows letters in bold stand for vectors in ${\mathbb R}^d$, and inequalities or equalities between them are component-wise. Functions $\vee$, $\sv{\cdot}$, and products of vectors are understood component-wise: ${\bf n}\vee{\bf m}=(n_1\vee m_1,\dots,n_d\vee m_d)$, $\sv{{\bf u}}=(\sv{u_1},\dots,\sv{u_d})$, ${\bf n}{\bf t}=(n_1t_1,\dots,n_dt_d)$. Also we shall use the following notation $\sum_{{\bf k}={\bf 0}}^{\sv{{\bf n} {\bf t}}}:=\sum_{k_1=0}^{\sv{n_1 t_1}} \dots \sum_{k_d=0}^{\sv{n_d t_d}}$, where ${\bf 0}:=(0,0,\dots,0)$ and the dimension of this vector will be clear from the context. We consider a real valued linear r.f. defined on $\mathbb{Z} ^d$ and the corresponding partial sum r.f. \begin{equation}\label{field} X_{\bf k}=\sum_{{\bf i}\in\mathbb{Z} _+^d}c_{{\bf i}}\xi_{{\bf k}-{\bf i}}, \quad S_{{\bf n}}({\bf t})=\sum_{{\bf k}={\bf 0}}^{\sv{{\bf n} {\bf t}}} X_{{\bf k}}. \end{equation} Here $\xi_{{\bf i}}, \ {\bf i} \in \mathbb{Z} ^d$ are i.i.d. random variables with the characteristic function (ch.f.) $\exp(-|t|^\alpha), \ 0<\alpha \le 2$. We are interested in the limit behavior of the appropriately normalized r.f. $A_{{\bf n}}^{-1}S_{{\bf n}}({\bf t})$, as ${\bf n} \to \infty$, which means $\min(n_i,i=1,\dots,d)\rightarrow\infty$. Firstly, we shall rewrite the expression of $S_{{\bf n}}({\bf t})$ in the following way: \begin{align*} S_{{\bf n}}({\bf t}) &=\sum_{{\bf 0}\leq {\bf k}\leq {\bf n}{\bf t}} X_{{\bf k}}=\sum_{{\bf 0}\leq {\bf k}\leq {\bf n}{\bf t}} \sum_{{\bf i}\geq {\bf 0}}c_{{\bf i}}\xi_{{\bf k}-{\bf i}}\\ &=\sum_{{\bf k}\in\mathbb{Z} ^d} \sum_{{\bf i}\in\mathbb{Z} ^d}c_{{\bf i}}\xi_{{\bf k}-{\bf i}}\ind{[{\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}]}\ind{[{\bf i}\geq{\bf 0}]}=\sum_{{\bf k}\in\mathbb{Z} ^d} \sum_{{\bf l}\in\mathbb{Z} ^d}c_{{\bf k}-{\bf l}}\xi_{{\bf l}}\ind{[{\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}]}\ind{[-{\bf l}\geq-{\bf k}]}\\ &=\sum_{{\bf k}\in\mathbb{Z} ^d} \sum_{{\bf l}\in\mathbb{Z} ^d}c_{{\bf k}-{\bf l}}\xi_{{\bf l}}\ind{[-{\bf l}\leq{\bf k}-{\bf l}\leq{\bf n}{\bf t}-{\bf l}]}\ind{[{\bf k}-{\bf l}\geq {\bf 0}]}=\sum_{{\bf k}\in\mathbb{Z} ^d} \sum_{{\bf l}\in\mathbb{Z} ^d}c_{{\bf k}}\xi_{{\bf l}}\ind{[-{\bf l}\leq{\bf k}\leq{\bf n}{\bf t}-{\bf l}]}\ind{[{\bf k}\geq {\bf 0}]}\\ &=\sum_{{\bf k}\in\mathbb{Z} ^d}\sum_{{\bf l}\in\mathbb{Z} ^d}c_{{\bf k}}\xi_{{\bf l}}\ind{[(-{\bf l})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}-{\bf l}]}=\sum_{{\bf l}\in\mathbb{Z} ^d}\xi_{{\bf l}} \sum_{{\bf k}\in\mathbb{Z} ^d}c_{{\bf k}}\ind{[(-{\bf l})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}-{\bf l}]}. \end{align*} Taking $m$ points ${\bf t}^{(j)}, \ j=1, \dots, m$, we can write the ch.f. of $\sum_{j=1}^{m}x_jS_{\bf n}({\bf t}^{(j)})$: \begin{align*} \mathbb{E} \exp\left({\rm i}\sum_{j=1}^{m}x_jS_{\bf n}({\bf t}^{(j)})\right)&= \mathbb{E} \exp\left( {\rm i}\sum_{{\bf l}\in\mathbb{Z} ^d}\xi_{{\bf l}}\sum_{j=1}^{m}x_j \sum_{{\bf k}\in\mathbb{Z} ^d} c_{{\bf k}}\ind{[(-{\bf l})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}^{(j)}-{\bf l}]}\right)\\ &= \exp\left(-\sum_{{\bf l}\in\mathbb{Z} ^d}\abs{ \sum_{j=1}^{m}x_j \sum_{{\bf k}\in\mathbb{Z} ^d} c_{{\bf k}}\ind{[(-{\bf l})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}^{(j)}-{\bf l}]}}^\alpha\right).
\end{align*} We see that we must investigate the quantity \begin{equation*} J_{{\bf n}}=J_{{\bf n}}(x_1,\dots,x_m,{\bf t}^{(1)},\dots,{\bf t}^{(m)})=\sum_{{\bf l}\in\mathbb{Z} ^d}\abs{ \sum_{j=1}^{m}x_j \sum_{{\bf k}\in\mathbb{Z} ^d} c_{{\bf k}}\ind{[(-{\bf l})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}^{(j)}-{\bf l}]}}^\alpha. \end{equation*} We write this quantity as an integral: \begin{align}\label{generalJn} J_{{\bf n}}&=\int_{\mathbb{R} ^d}\abs{ \sum_{j=1}^{m}x_j \sum_{{\bf k}\in\mathbb{Z} ^d} c_{{\bf k}}\ind{[(-\sv{{\bf u}})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}^{(j)}-\sv{{\bf u}}]}}^\alpha {\rm d} {\bf u}\\ \nonumber &=\int_{\mathbb{R} ^d}\abs{ \sum_{j=1}^{m}x_j \sum_{{\bf k}\in\mathbb{Z} ^d} c_{{\bf k}}\ind{[(-\sv{{\bf n}{\bf u}})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}^{(j)}-\sv{{\bf n}{\bf u}}]}}^\alpha {\rm d} {\bf n}{\bf u}\\ \nonumber &=\left(\prod_{i=1}^{d}n_i\right)\int_{\mathbb{R} ^d}\abs{ \sum_{j=1}^{m}x_j \sum_{{\bf k}\in\mathbb{Z} ^d} c_{{\bf k}}\ind{[(-\sv{{\bf n}{\bf u}})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}^{(j)}-\sv{{\bf n}{\bf u}}]}}^\alpha {\rm d} {\bf u}\\ \nonumber &=\left(\prod_{i=1}^{d}n_i\right)\int_{\mathbb{R} ^d}h_{{\bf n}}\left({\bf u}\right){\rm d} {\bf u}. \end{align} Here \begin{equation}\label{generalhn} h_{{\bf n}}\left({\bf u}\right) = \abs{\sum_{j=1}^{m}x_j f_{{\bf n}}\left({\bf u},{\bf t}^{(j)}\right) }^\alpha , \end{equation} and \begin{equation}\label{generalfbn} f_{{\bf n}}\left({\bf u},{\bf t}^{(j)}\right)=\sum_{{\bf k}\in\mathbb{Z} ^d} c_{{\bf k}}\ind{[(-\sv{{\bf n}{\bf u}})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}^{(j)}-\sv{{\bf n}{\bf u}}]}. \end{equation} These three formulae (\ref{generalJn})-(\ref{generalfbn}) will be the starting point in the investigation of the particular cases $d=2$ or $d=3$ with specific filter coefficients $c_{{\bf i}}$. The main step will be to prove the point-wise convergence of the appropriately normalized function $h_{{\bf n}}$ to some function $h$ and to show that $h_{{\bf n}}$ is bounded from above by an integrable function; then an application of the Lebesgue dominated convergence theorem will yield the convergence of the normalized quantity $J_{{\bf n}}$. Since in the examples which we shall consider in order to get the scaling transition the main idea is to take filters with non-zero coefficients on the axes (or other lines), we shall provide the expression of the function (\ref{generalfbn}) in this case. Suppose that we have $d$ sequences $a_q(i), \ i\in\mathbb{N} , \ q=1,\dots,d$, and let us define \begin{equation}\label{genci} c_{{\bf i}}=\sum_{q=1}^{d} a_q(i_q)\ind{[i_q\geq0\text{ and }i_l=0,l\neq q]}, \end{equation} i.e., \begin{equation*} c_{{\bf i}}=\begin{cases} a_q(i_q), \text{ if for some}\ q: i_q>0\text{ and } i_l=0,l\neq q,\\ \sum_{q=1}^{d}a_q(0), \text{ if } i_1=\dots =i_d =0,\\ 0,\text{ elsewhere}.
\end{cases} \end{equation*} Using the notation \begin{equation*} A(a,b)=\{ k\in\mathbb{Z} : (-a)\vee 0 \leq k \leq b-a \}, \end{equation*} for this filter we can write the function (\ref{generalfbn}) as follows: \begin{align*} f_{{\bf n}}\left({\bf u},{\bf t}^{(j)}\right)&=\sum_{{\bf k}\in\mathbb{Z} ^d} \sum_{q=1}^{d} a_q(k_q)\ind{[k_q\geq0\text{ and }k_l=0,l\neq q]}\ind{[(-\sv{{\bf n}{\bf u}})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}^{(j)}-\sv{{\bf n}{\bf u}}]}\\ &=\sum_{q=1}^{d} \sum_{{\bf k}\in\mathbb{Z} ^d} a_q(k_q) \ind{[k_q\geq0\text{ and }k_l=0,l\neq q]} \prod_{i=1}^{d} \ind{A(\sv{n_iu_i},n_it_i^{(j)})}(k_i)\\ &=\sum_{q=1}^{d} \sum_{k_q=0}^{\infty} a_q(k_q) \ind{A(\sv{n_qu_q},n_qt_q^{(j)})}(k_q) \prod_{i=1,\dots,q-1,q+1,\dots,d} \ind{A(\sv{n_iu_i},n_it_i^{(j)})}(0)\\ &=\sum_{q=1}^{d} \sum_{k_q\in\mathbb{Z} } a_q(k_q)\ind{[(-\sv{n_q u_q})\vee 0\leq k_q\leq n_q t_q^{(j)}-\sv{n_q u_q}]} \prod_{i=1,\dots,q-1,q+1,\dots,d} \ind{A(\sv{n_iu_i},n_it_i^{(j)})}(0). \end{align*} Denoting \begin{equation*} U_q(n_q,u_q,t_q^{(j)})=\sum_{k_q\in\mathbb{Z} } a_q(k_q)\ind{[(-\sv{n_q u_q})\vee 0\leq k_q\leq n_q t_q^{(j)}-\sv{n_q u_q}]}, \end{equation*} we can rewrite the last formula as follows: \begin{equation}\label{fn2} f_{{\bf n}}\left({\bf u},{\bf t}^{(j)}\right)=\sum_{q=1}^{d}U_q(n_q,u_q,t_q^{(j)})\prod_{i=1,\dots,q-1,q+1,\dots,d} \ind{A(\sv{n_iu_i},n_it_i^{(j)})}(0). \end{equation} Concerning the indicator functions we have the following simple relations: \begin{equation}\label{indfunc} \ind{A(\sv{n_iu_i},n_it_i^{(j)})}(0)= \ind{[(-\sv{n_i u_i})\leq 0\leq n_i t_i^{(j)}-\sv{n_i u_i}]}\rightarrow \ind{[-u_i\le 0\le t_i^{(j)}- u_i]}=\ind{[0,t_i^{(j)}]}(u_i), \end{equation} and \begin{equation}\label{indfunc1} \ind{A(\sv{n_iu_i},n_it_i^{(j)})}(0) \leq \ind{[-n_i u_i \leq 0 \leq n_i t_i^{(j)}-{n_i u_i}+n_i]} =\ind{[- u_i \leq 0 \leq t_i^{(j)}-{ u_i}+1]} =\ind{[ 0, t_i^{(j)}+1]}(u_i). \end{equation} Thus, we see that the main step in the investigation of (\ref{fn2}) is to find the asymptotics of the sums $U_q(n_q,u_q,t_q^{(j)})$, assuming some regular behavior of the sequences $a_q(i)$ and some relations between the growth of $n_q, \ q=1, \dots , d.$ We shall not continue the investigation of the general case $d\ge 2$, since in this way we would get a quite complicated picture; the particular cases $d=2$ and $d=3$ will be more informative and visual. We end this section by providing a result on the asymptotic behavior of the sums present in (\ref{fn2}). Namely, we assume that we have a sequence $a_i=(1+i)^{-\gamma}, \ i\ge 1$, and $a_0$ will be defined separately. Here it is appropriate to note that it is possible to consider the more general case $a_i\sim (1+i)^{-\gamma}L(i),$ where $L$ is a slowly varying function, but since we consider rather specific filters, such generality is unimportant. Let \begin{equation}\label{Utgamma} U_{t,\gamma}(i,n)= \sum_{k=\left( -i \right)\vee 0 }^{\sv{nt}-i}a_{k}, \end{equation} and our goal is to find the point-wise convergence (for a fixed $t$) \begin{equation}\label{bound} \frac{U_{t,\gamma}(\sv{nu},n)}{z_{\gamma,n}}\rightarrow H_{\gamma}(u,t) \end{equation} with some sequence $z_{\gamma,n}$. In order to apply the Lebesgue dominated convergence theorem, we must get the bound \begin{equation}\label{bound1} \frac{U_{t,\gamma}(\sv{nu},n)}{z_{\gamma,n}}\leq G_{\gamma}(u,t), \end{equation} where $G_{\gamma}(u,t)$ is a function satisfying, for a fixed $t$, \begin{equation}\label{integrableBound} \int_{-\infty}^{\infty} \abs{G_{\gamma}(u,t)}^\alpha {\rm d} u<\infty.
\end{equation} Let us define \begin{equation}\label{sekaNormavimui} z_{n,\gamma}=\begin{cases} 1, \text{ if } \gamma>1 \text{ and }\sum_{j=0}^{\infty}a_j\neq 0,\\ n^{1-\gamma}, \text{ if } 1<\gamma<1+1/\alpha \text{ and }\sum_{j=0}^{\infty}a_j= 0,\\ {\ln n}, \text{ if } \gamma=1,\\ n^{1-\gamma}, \text{ if } 1/\alpha<\gamma<1.\\ \end{cases} \end{equation} Also we define the function \begin{equation}\label{Hdef} H_{\gamma}(u,t)=\begin{cases} \sum_{k=0}^{\infty}a_k\ind{[0,t)}(u)\text{, if }\gamma>1\text{ and }\sum_{j=0}^{\infty}a_j\neq 0,\\ \left ((t-u)_+^{1-\gamma}-(-u)_+^{1-\gamma}\right ) (1-\gamma)^{-1} \text{, if } 1<\gamma<1+{1}/{\alpha} \text{ and }\sum_{j=0}^{\infty}a_j= 0,\\ \ind{[0,t)}(u)\text{, if }\gamma=1,\\ \left ((t-u)_+^{1-\gamma}-(-u)_+^{1-\gamma}\right ) (1-\gamma)^{-1} \text{, if }1/\alpha<\gamma<1. \end{cases} \end{equation} Here and in what follows we use the notation $(\cdot)_+=\max(0,\cdot)$. \begin{prop}\label{prop1} For a sequence $ \{ a_i, \ i\ge 0,\}$ defined above, we have the relations (\ref{bound}) - (\ref{integrableBound}) with the functions $z_{n,\gamma}$ and $H_{\gamma}(u,t),$ defined in (\ref{sekaNormavimui}) and (\ref{Hdef}), respectively. Expression of the function $G_{\gamma}(u,t)$ is given in (\ref{gfunct}), (\ref{Ggammadef}), and (\ref{gamma1est}). \end{prop} {\it Proof of Proposition \ref{prop1}}. We start with the case $\gamma>1$ and $\sum_{k=0}^{ \infty} a_{k}\neq 0$. It is easy to see that for a fixed value of $u$ and a.e. \begin{equation*} U_{t,\gamma}(\sv{nu},n)=\sum_{k=\left( -\sv{nu} \right)\vee 0 }^{\sv{nt}-\sv{nu}}a_{k} \rightarrow \ind{[0,t)}(u)\sum_{k=0}^{ \infty} a_{k} =:H_{\gamma}(u,t), \end{equation*} we added a.e. since for $u=t$ this limit is $a_0$. Thus, we have (\ref{bound}) with $z_{\gamma,n}=1$. For $u\in(-1,t+1)$ we have \begin{equation*} \sum_{k=\left( -\sv{nu} \right)\vee 0 }^{\sv{nt}-\sv{nu}}a_{k} \leq \ind{(-1,t+1)}(u)\left (|a_0|+ \sum_{k=1}^{ \infty} a_{k}\right ), \end{equation*} while for $u\leq -1$ we get \begin{equation*} \sum_{k=\left( -\sv{nu} \right)\vee 0 }^{\sv{nt}-\sv{nu}}a_{k}= \ind{(-\infty,-1]}(u)\int_{-\sv{nu} }^{\sv{nt}-\sv{nu}+1}(1+{\sv{v}})^{-\gamma}{\rm d} v\leq \ind{(-\infty,-1]}(u)\int_{-{nu} }^{{nt}-{nu}+2n}v^{-\gamma}{\rm d} v \end{equation*} \begin{equation*} \leq \ind{(-\infty,-1])}(u)\frac{\left( -{u} \right)^{1-\gamma}-\left( {t}-{u}+2 \right)^{1-\gamma}}{\gamma-1}. \end{equation*} Denoting \begin{equation}\label{gfunct} G_{\gamma}(u,t)=\begin{cases} \ind{(-1,t+1)}(u)\left (|a_0|+ \sum_{k=1}^{ \infty} a_{k}\right ),\text{ if }u\in(-1, \infty),\\ \ind{(-\infty,-1]}(u){\left( \left( -{u} \right)^{1-\gamma}-\left( {t}-{u}+2 \right)^{1-\gamma} \right)}/{\left( \gamma-1 \right)},\text{ if }u\leq -1, \end{cases} \end{equation} we get the function, satisfying \eqref{bound1} and \eqref{integrableBound}. In the case $\gamma<1$ we have, for $u\leq t +1$ (otherwise the sum is $0$), \begin{equation*} \sum_{k=(-\sv{nu})\vee 0}^{\sv{nt}-\sv{nu}}a_{k}= \ind{(-\infty,\frac{\sv{nt}}{n}+\frac{1}{n})}(u)\int_{(-\sv{nu})_+}^{\sv{nt}-\sv{nu}+1}a_{\sv{v }} {\rm d} v= n^{1-\gamma}\int_{-\infty}^{\infty}\kappa_{\gamma}(v; u,t) {\rm d} v, \end{equation*} where \begin{equation*} \kappa_{\gamma}(v; u,t)=\ind{(-\infty,\frac{\sv{nt}}{n}+\frac{1}{n})}(u) \ind{\left( \frac{(-\sv{nu})_+}{n}, \frac{\sv{nt}-\sv{nu}+1}{n} \right)}(v) n^{\gamma}a_{\sv{nv }}. \end{equation*} For $u\le t+1$ and a fixed value of $v$, we have, as $n\rightarrow\infty$, a.e. 
\begin{equation*} \kappa_{\gamma}(v; u,t)\rightarrow \ind{(-\infty,t)}(u)\ind{\left( (-u)_+, (t-u) \right)}(v)v^{-\gamma}. \end{equation*} It is easy to see that $\abs{\kappa_{\gamma}(v; u,t)}\leq \max (1, |a_0|)\ind{(-\infty,t)}(u)\ind{\left( (-u)_+, (t-u+2 ) \right)}(v)v^{-\gamma}.$ This majorizing function is integrable, therefore, with $z_{\gamma,n}=n^{1-\gamma}$, \begin{equation*} \frac{U_{t,\gamma}(\sv{nu},n)}{z_{\gamma,n}}=n^{\gamma-1}\sum_{k=(-\sv{nu})_+}^{\sv{nt}-\sv{nu}}a_{k}=\int_{-\infty}^{\infty} \kappa_{\gamma}(v; u,t) {\rm d} v\rightarrow \ind{(-\infty,t)}(u) \int_{(-u)_+}^{t-u} v^{-\gamma} {\rm d} v \end{equation*} \begin{equation*} =\frac{1}{1-\gamma}\left( ((t-u)_+)^{1-\gamma}-((-u)_+)^{1-\gamma} \right)=H_{\gamma}(u,t). \end{equation*} Since \begin{equation*} \frac{U_{t,\gamma}(\sv{nu},n)}{z_{\gamma,n}}=\int_{-\infty}^{\infty} \kappa_{\gamma}(v; u,t) {\rm d} v \leq \max (1, |a_0|)\ind{(-\infty,t+1)}(u) \int_{(-u)_+}^{ t-u+2 } v^{-\gamma}{\rm d} v, \end{equation*} we denote \begin{equation}\label{Ggammadef} G_{\gamma}(u,t):=\max (1, |a_0|)\frac{\ind{(-\infty,t+1)}(u)}{1-\gamma}\left( (t-u+2)^{1-\gamma}-((-u)_+)^{1-\gamma} \right). \end{equation} It is easy to note that \begin{equation*} \int_{-\infty}^{\infty} \abs{G_{\gamma}(u,t)}^{\alpha}{\rm d} u<\infty, \end{equation*} thus we have \eqref{bound1} and \eqref{integrableBound}. The case $\gamma=1$. Assuming $u\le t+1$ (otherwise the sum is zero), we separate one term: \begin{equation}\label{sepsum} \sum_{k=(-\sv{nu})\vee 0}^{\sv{nt}-\sv{nu}}a_{k}= a_{(-\sv{nu})\vee 0} + \sum_{k= (1-\sv{nu})\vee 1 }^{\sv{nt}-\sv{nu}}a_{k}. \end{equation} For $-2\leq u\le t+1$, we can estimate $a_{(-\sv{nu})\vee 0}\leq C$, while for $u<-2$ we have $a_{(-\sv{nu})\vee 0}\leq (-nu)^{-1} \leq (-u)^{-1}$. Therefore, we have \begin{equation}\label{sepsum1} (\ln n)^{-1}a_{(-\sv{nu})\vee 0}\rightarrow 0 \quad \text{ and }\quad (\ln n)^{-1}a_{(-\sv{nu})\vee 0} \leq R(u) \end{equation} with \begin{equation*} R(u)=\begin{cases} C, \text{ if }-2\leq u\le t+1,\\ (-u)^{-1}, \text{ if } u<-2. \end{cases} \end{equation*} For the separated sum in (\ref{sepsum}), using the change of variables, we can write \begin{equation*} \sum_{k=(1-\sv{nu})\vee 1}^{\sv{nt}-\sv{nu}}a_{k}= \ind{(-\infty,\frac{\sv{nt}}{n})}(u)\int_{(1-\sv{nu})\vee 1}^{\sv{nt}-\sv{nu}+1}a_{\sv{v}}{\rm d} v =\ln n \int_{-\infty}^{\infty}{\bar \kappa}_{1}(v;u,t){\rm d} v, \end{equation*} where \begin{equation*} {\bar \kappa}_{1}(v;u,t)=\ind{(-\infty,\frac{\sv{nt}}{n})}(u)\ind{\left( \frac{\ln\left( (1-\sv{nu})\vee 1 \right)}{\ln n}, \frac{\ln\left( \sv{nt}-\sv{nu}+1 \right)}{\ln n}\right)}(v) a_{\sv{\exp(v\ln n)}}\exp(v\ln n). \end{equation*} We have the point-wise convergence ${\bar \kappa}_{1}(v;u,t)\rightarrow \ind{(0,t)}(u)\ind{\left( 0, 1\right)}(v)$, and, since $a_{\sv{\exp(v\ln n)}}\exp(v\ln n)\leq 1$, we have the following bound for the integrand \begin{equation*}\label{gammaEQ1kappaBound} {\bar \kappa}_{1}(v;u,t) \leq \ind{\left( \frac{\ln\left( (1-\sv{nu})\vee 1 \right)}{\ln n}, \frac{\ln\left( \sv{nt}-\sv{nu}+1 \right)}{\ln n}\right)}(v)\leq \ind{ \left( 0, 1+ \ln\left( {t}-{u}+2\right) \right) }(v). \end{equation*} Therefore, by the dominated convergence theorem \begin{equation}\label{sepsum2} \frac{1}{\ln n}\sum_{k=(1-\sv{nu})\vee 1}^{\sv{nt}-\sv{nu}}a_{k}= \int_{-\infty}^{\infty} {\bar \kappa}_{1}(v;u,t) {\rm d} v\rightarrow \ind{(0,t)}(u)\int_{0}^{1} 1 {\rm d} v= \ind{(0,t)}(u)=H_{1}(u,t). 
\end{equation} Applying (\ref{sepsum}) we can write \begin{eqnarray*} Q_n(u,t) &:=& \frac{1}{\ln n}\sum_{k=(1-\sv{nu})\vee 1}^{\sv{nt}-\sv{nu}}a_{k}\leq \int_{-\infty}^{\infty} \ind{\left( \frac{\ln\left( (1-\sv{nu})\vee 1 \right)}{\ln n}, \frac{\ln\left( \sv{nt}-\sv{nu}+1 \right)}{\ln n}\right)}(v) {\rm d} v \\ &=& \frac{\ln\left( \sv{nt}-\sv{nu}+1 \right)}{\ln n}- \frac{\ln\left( (1-\sv{nu})\vee 1 \right)}{\ln n}\\ &\leq& \frac{\ln\left( {nt}-{nu}+n \right)- \ln\left( (1-\sv{nu})\vee 1 \right)}{\ln n}. \end{eqnarray*} For $-2\leq u\le t+1$ we can estimate \begin{equation*} Q_n(u,t) \leq \frac{\ln\left( {nt}-{nu}+2n \right) }{\ln n}\leq 1+\ln\left( {t}-{u}+2 \right), \end{equation*} while, for $u<-2$, we estimate as follows (using $\ln n\ge 1$ for $n\ge 3$): \begin{eqnarray*} Q_n(u,t) &\leq& \frac{\ln\left( {nt}-{nu}+n \right)- \ln\left( 1-\sv{nu} \right)}{\ln n} \leq \frac{ \ln\left( {t}-{u}+1 \right)- \ln\left( -{u} \right)}{\ln n}\\ &\leq& \ln\left( \frac{{t}-{u}+1 }{-{u}}\right)= \ln\left( 1+\frac{{t}+1 }{-{u}}\right). \end{eqnarray*} It is easy to see that the bounding function decays as $(-u)^{-1}$ as $u\rightarrow-\infty$. Let us denote \begin{equation}\label{gamma1est} G_{1}(u,t)=\left(1+\ln\left( {t}-{u}+2 \right)+R(u)\right)\ind{[-2, t+1]}(u)+ \left (\ln\left( 1+\frac{{t}+1 }{-{u}}\right)+R(u)\right )\ind{(-\infty, -2)}(u). \end{equation} Since we consider the case $\gamma=1$ and require $\gamma>1/\alpha$, this means that $\alpha>1$; therefore the function $|G_{1}(u,t)|^\alpha$ is integrable. Thus, (\ref{sepsum1}) and (\ref{sepsum2}) give us (\ref{bound}) with $z_{\gamma,n}=\ln n$, and we have (\ref{bound1}) and (\ref{integrableBound}) with the function given in (\ref{gamma1est}). In the case $1<\gamma<1+{1}/{\alpha}$ and $\sum_{j=0}^{\infty}a_j= 0$ the proof goes along the same lines as in the case $\gamma<1$, only we use the equality $\sum_{j=0}^{n}a_j=-\sum_{j=n+1}^{\infty}a_j$. Therefore, we omit the details. \vspace{3mm} \hfill \mbox{$\Box$}\\[2mm] \section{The case $d=2$} In the previous section we derived formulae (\ref{generalJn})-(\ref{generalfbn}), which are the starting point for investigating the scaling transition in the particular cases $d=2$ and $d=3$. Taking specific filter coefficients $c_{{\bf i}}$, we shall demonstrate which structure and properties of these coefficients lead to the scaling transition. We recall that we consider the r.f. and the corresponding partial sum r.f. \begin{equation}\label{fieldd2} X_{\bf k}=\sum_{{\bf i}\in\mathbb{Z} _+^2}c_{{\bf i}}\xi_{{\bf k}-{\bf i}}, \ {\bf k}=(k_1, k_2)\in \mathbb{Z} ^2 \ \text {and} \ S_{{\bf n}}({\bf t})=\sum_{{\bf k}={\bf 0}}^{\sv{{\bf n} {\bf t}}} X_{{\bf k}}. \end{equation} \subsection{Example 1} Let us take the following two sequences: $a_j(0)=0, a_j(i)=(1+i)^{-\gamma_j}, \ i\ge 1, \ \gamma_j>1/\alpha$, \ j=1, 2. Consider the linear r.f. (\ref{fieldd2}) with the following filter: \begin{equation*} c_{i_1,i_2}= \begin{cases} a_1(i_1), \text{ if } i_1\geq 0, i_2=0,\\ a_2(i_2), \text{ if } i_2\geq 0, i_1 =0,\\ 0, \text{ elsewhere}. \end{cases} \end{equation*} This can be written as \begin{equation}\label{cij} c_{i_1,i_2}=a_1(i_1)\ind{i_1\geq 0}\ind{i_2=0}+a_2(i_2)\ind{i_2\geq 0}\ind{i_1=0}, \end{equation} i.e., we have the same filter (\ref{genci}) (with $d=2$) considered in the previous section.
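Written out explicitly, the field in this example is a superposition of two one-directional moving averages along the coordinate axes,
\begin{equation*}
X_{k_1,k_2}=\sum_{i=1}^{\infty}(1+i)^{-\gamma_1}\,\xi_{k_1-i,k_2}+\sum_{i=1}^{\infty}(1+i)^{-\gamma_2}\,\xi_{k_1,k_2-i},
\end{equation*}
so that the dependence of $X_{k_1,k_2}$ on the innovations is concentrated along the two coordinate directions.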
From (\ref{generalJn})-(\ref{generalfbn}) and (\ref{fn2}) we have that \begin{equation}\label{chfuncd2} \mathbb{E} \exp\left({\rm i}\, A_{n_1, n_2}^{-1}\sum_{j=1}^{m}x_jS_{\bf n}({\bf t}^{(j)})\right)= \exp\left( -A_{n_1,n_2}^{-\alpha}J_{n_1,n_2}\right), \end{equation} where \begin{equation}\label{Jbnd2} J_{n_1,n_2}=n_1n_2\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{n_1,n_2}^{(1)}(u_1,u_2){{\rm d} u}_1{{\rm d} u}_2 \end{equation} and \begin{equation}\label{fn1n2} f_{n_1,n_2}^{(1)}(u_1,u_2)=\abs{\sum_{l=1}^{m}x_l\left( \ind{\{0\leq n_2u_2\leq \sv{n_2t_2^{(l)}}\}} U_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1) + \ind{\{0\leq n_1u_1\leq \sv{n_1t_1^{(l)}}\}} U_{t_2^{(l)},\gamma_2}(\sv{n_2u_2},n_2) \right) }^\alpha. \end{equation} We recall (see (\ref{Utgamma})) that \begin{equation}\label{stgama} U_{t_j^{(l)},\gamma_j}(i,n_j)= \sum_{k=\left( -i \right)\vee 0 }^{\sv{n_jt_j^{(l)}}-i}a_{j}(k), \ j=1, 2. \end{equation} Now we can apply Proposition \ref{prop1} and in a standard way (applying the Lebesgue dominated convergence theorem) we can get the convergence of the ch.f. in (\ref{chfuncd2}). Since in the expression of the function $f_{n_1,n_2}^{(1)}(u_1,u_2)$ there are two terms, whose normalization can be performed by the sequences $z_{\gamma_1,n_1}$ and $z_{\gamma_2,n_2}$, see (\ref{sekaNormavimui}), it is clear that it will be impossible to get a joint normalization for $f_{n_1,n_2}^{(1)}(u_1,u_2)$ in the general case of the parameters $\gamma_1, \gamma_2$ and of the way $(n_1,n_2) \to \infty$. But in the case where both normalizing sequences $z_{\gamma_1,n_1}$ and $z_{\gamma_2,n_2}$ are equal to $1$, namely, if $\gamma_i>\max (1, 1/\alpha), \ i=1, 2$, then we get the following result. Let us denote by $M_\alpha$ the symmetric $\alpha$-stable random measure on $\mathbb{R} ^2$ with Lebesgue control measure and let $\stackrel{f.d.d.}{\longrightarrow}$ stand for the convergence of f.d.d. \begin{prop}\label{prop2} Suppose that we have a sum (\ref{fieldd2}) of values of a linear r.f. with the filter (\ref{cij}). If $\gamma_i>\max (1, 1/\alpha), \ i=1, 2$, then, as $(n_1,n_2) \to \infty$, \begin{equation}\label{lim1} (n_1n_2)^{-1/\alpha}S_{{\bf n}}({\bf t})\stackrel{f.d.d.}{\longrightarrow} \sum_{k=1}^\infty (a_1(k)+a_2(k)) \int_0^{t_1}\int_0^{t_2} M_\alpha ({\rm d} u_1{\rm d} u_2). \end{equation} \end{prop} This proposition means that a linear r.f. $\{X_{k_1, k_2}, (k_1, k_2)\in \mathbb{Z} ^2 \}$ with such a filter does not exhibit the scaling transition and the limit in (\ref{lim1}) does not depend on the way in which $(n_1, n_2)$ tends to infinity, i.e., a Lamperti-type result as in Proposition \ref{prop0} holds. In the terminology proposed in \cite{Paul20}, this r.f. has zero memory in both directions. Now let us look at the case where the limit in the relation (\ref{lim1}) depends on the way in which $(n_1, n_2)$ tends to infinity, i.e., the case where we have the scaling transition. Let $n_1=n, n_2=n^\tau$ and let us consider the case $1/\alpha <\gamma_2\le \gamma_1<1$. Then $z_{\gamma_1,n_1}=n^{1-\gamma_1}, z_{\gamma_2,n_2}=n^{\tau(1-\gamma_2)}$, and we define $$ M_n=M_n(\gamma_1, \gamma_2):=\max\{z_{\gamma_1,n_1},z_{\gamma_2,n_2}\}=\max\{n^{1-\gamma_1} ,n^{\tau (1-\gamma_2)}\}=\begin{cases} n^{1-\gamma_1}, \text{ if } \tau \leq \tau_0,\\ n^{\tau (1-\gamma_2)}, \text{ if } \tau > \tau_0, \end{cases} $$ where $0<\tau_0=(1-\gamma_1)/(1-\gamma_2)\le 1$.
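To illustrate the last formula with concrete (purely illustrative) values of the parameters, take $\gamma_1=0.9$ and $\gamma_2=0.6$ (admissible whenever $1/\alpha<0.6$, i.e. $\alpha>5/3$). Then $z_{\gamma_1,n_1}=n^{0.1}$, $z_{\gamma_2,n_2}=n^{0.4\tau}$ and $\tau_0=0.1/0.4=1/4$: for $\tau<1/4$ the normalization is dominated by the horizontal term, $M_n=n^{0.1}$, while for $\tau>1/4$ it is dominated by the vertical one, $M_n=n^{0.4\tau}$.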
Finally, let us define $$K_{\gamma_1}(\tau)=\lim_{n\rightarrow\infty}\frac{z_{\gamma_1,n_1}}{M_n}=\begin{cases} 1, \text{ if } \tau \leq \tau_0,\\ 0, \text{ if } \tau > \tau_0, \end{cases} \quad K_{\gamma_2}(\tau)=\lim_{n\rightarrow\infty}\frac{z_{\gamma_2,n_2}}{M_n}=\begin{cases} 0, \text{ if } \tau < \tau_0,\\ 1, \text{ if } \tau \geq \tau_0. \end{cases}$$ \begin{prop}\label{prop3} Suppose that we have a sum (\ref{fieldd2}) of values of a linear r.f. with the filter (\ref{cij}). If $1/\alpha <\gamma_2\le \gamma_1<1 $ and $n_1=n, n_2=n^\tau$, then, as $n \to \infty$, \begin{equation}\label{lim2} (n^{1+\tau})^{-1/\alpha}M_n^{-1}S_{n,n^\tau}(t_1,t_2)\stackrel{f.d.d.}{\longrightarrow} V(\tau, t_1, t_2), \end{equation} where \begin{equation}\label{lim2a} V(\tau, t_1, t_2)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left( \ind{\{0\leq u_2\leq t_2\}}K_{\gamma_1}(\tau)H_{\gamma_1}(u_1,t_1)+ \ind{\{0\leq u_1\leq t_1\}}K_{\gamma_2}(\tau)H_{\gamma_2}(u_2,t_2) \right)M_\alpha({\rm d} u_1,{\rm d} u_2). \end{equation} \end{prop} From (\ref{lim2a}) we see that the linear r.f. $\{X_{k_1, k_2}\}$ in this case exhibits the scaling transition. Namely, in the interval $0<\tau<\tau_0$ we have $K_{\gamma_1}(\tau)=1 , \ K_{\gamma_2}(\tau)=0$, therefore from (\ref{lim2a}) we get the unbalanced scaling limit $V_-(t_1, t_2)$, independent of $\tau$; similarly, in the interval $\tau_0<\tau<\infty$ we get the unbalanced scaling limit $V_+(t_1, t_2)$. For $\tau=\tau_0$ we have $K_{\gamma_1}(\tau_0)= K_{\gamma_2}(\tau_0)=1$ and we get the well-balanced scaling limit $V(\tau_0, t_1, t_2)$. If $1/\alpha<\gamma_1=\gamma_2=\gamma<1$ then we have $\tau_0=1$ as the point of scaling transition, and taking $n_1=n_2$ we have the well-balanced scaling limit $V(1, t_1, t_2)$. Also from this proposition one can see that there will be no scaling transition if only one of the $\gamma_i, \ i=1, 2,$ does not satisfy the condition of Proposition \ref{prop2}, say $\gamma_1> \max (1, 1/\alpha)$ and $1/\alpha<\gamma_2<1$. Then it is easy to see that $z_{\gamma_1,n_1}=1$ and the normalization $z_{\gamma_2,n_2}$ for the second term in (\ref{fn1n2}) grows to infinity and thus prevails. In this case the normalizing constant for $S_{n_1,n_2}(t_1, t_2)$ is $A_{n_1, n_2}=(n_1n_2)^{1/\alpha}z_{\gamma_2,n_2}$, and the limit process, independent of $\tau$, will be obtained from (\ref{lim2a}) by putting $K_{\gamma_1}(\tau)\equiv 0 , \ K_{\gamma_2}(\tau)\equiv 1$. In terms of directional memory one can say that the r.f. has zero memory in the horizontal direction and positive memory in the vertical direction. \medskip Note that the linear r.f. from this example shows a stronger effect than the scaling transition as defined in Definition \ref{sctran}. Let us consider the scaling transition point $\tau_0$ and the well-balanced scaling limit process $V(\tau_0, t_1, t_2)$. Let us take $n_1=n, n_2=cn^{\tau_0}$ with $0<c<\infty$; then we get $z_{\gamma_1,n_1}=n^{1-\gamma_1}, z_{\gamma_2,n_2}=n^{\tau_0(1-\gamma_2)}c^{1-\gamma_2}$ and taking $M_n=n^{1-\gamma_1}$ we shall get \begin{equation}\label{lim3} n^{-(1+\tau_0)/\alpha-(1-\gamma_1)}c^{-1/\alpha}S_{n,cn^{\tau_0}}(t_1, t_2)\stackrel{f.d.d.}{\longrightarrow} U(c, t_1, t_2) \end{equation} where \begin{equation}\label{lim3a} U(c, t_1, t_2)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left( \ind{\{0\leq u_2\leq t_2\}} H_{\gamma_1}(u_1,t_1)+ \ind{\{0\leq u_1\leq t_1\}}c^{1-\gamma_2}H_{\gamma_2}(u_2,t_2) \right)M_\alpha({\rm d} u_1,{\rm d} u_2).
\end{equation} Taking $M_n=n^{1-\gamma_1}\max (1, c^{1-\gamma_2})$ we can get as a limit the process $V(\tau_0, t_1, t_2)$, but now with both functions $K_{\gamma_i}$ depending on $c$. This simple example gives us one more reason to reconsider Definition \ref{sctran}. Let us take in this example $1/\alpha < \gamma_1<1$ and $\gamma_2=1$; then we get $z_{\gamma_1,n_1}=n_1^{1-\gamma_1}, z_{\gamma_2,n_2}=\ln n_2 $. If we assume, as earlier, $n_1=n, n_2=n^\tau$, then $z_{\gamma_2,n_2}=\tau \ln n$ and, for any $0<\tau<\infty$, for sufficiently large $n$ we get $M_n(\gamma_1, \gamma_2):=\max\{z_{\gamma_1,n_1},z_{\gamma_2,n_2}\}=n^{1-\gamma_1}$. This would mean that there is no scale transition, but it is easy to see that such a conclusion is due to our assumption about the relation between the growth of $n_2$ as a function of $n$ (we recall that $n_1=n$), namely, that $n_2=n^\tau$. If we assume a different relation, taking $n_2=\exp (n^\tau)$, then $z_{\gamma_2,n_2}=n^\tau$ and we easily get the point of scale transition $\tau_0=1-\gamma_1$. At the point of scale transition we have the same effect which we described above and which can be called a second-order scale transition. If we assume $n_2=\exp (cn^{\tau_0})$, then it is easy to see that we shall get a limit distribution dependent on the new parameter $c$. These considerations lead to the following generalization of Definition \ref{sctran}. We consider a stationary r.f. $Y=\{Y_{k_1, k_2}, \ (k_1, k_2)\in {\bf Z}^2\}$ and sums defined in (\ref{sum}), only now we assume that $n_1=n, n_2=f(n, \tau),$ where $\tau\in (a, b)\subset \mathbb{R} $ is a real parameter from an interval $(a, b)$, which can be finite or infinite, and $f: \mathbb{Z} _+ \times (a, b) \to \mathbb{Z} _+$. We suppose that, for each fixed $\tau$, the function $f(\cdot, \tau)$ grows monotonically to infinity as $n\to \infty$. We denote \begin{equation}\label{sumtau} Z_{n, f, \tau}(t_1,t_2)=S_{n, f(n, \tau)}(t_1,t_2), \quad t_1\ge 0, \quad t_2\ge 0. \end{equation} We assume that, for any $\tau \in (a, b)$, there exists a nontrivial random field $V_{\tau, f} (t_1,t_2)$ and a normalization $A_n (\tau, f) \to \infty$ such that the f.d.d. of $A_n^{-1}(\tau, f) Z_{n, f, \tau}(t_1,t_2)$ converge weakly to the f.d.d. of $V_{\tau, f}(t_1,t_2)$. \begin{definition}\label{gensctran} We say that a random field $Y$ exhibits scaling transition if there exist a function $f(n, \tau)$, a point $\tau_0\in (a, b)$, and some $\delta>0$ such that the limit process $V_{\tau, f}$ is the same, say $V_+$, for all $\tau \in (\tau_0, \tau_0+\delta)$ and another one, not obtained from $V_+$ by simple scaling, say $V_-$, for $\tau \in (\tau_0-\delta, \tau_0)$. The r.f. $V_{\tau_0, f}$ is called the well-balanced scaling limit of $Y$ at the point $\tau_0$. \end{definition} This definition not only extends the relation between $n_1$ and $n_2$ from power functions to a more general class of functions, but also presupposes the possibility of more than one scale transition point; such a possibility will be realized in Example 3. \subsection{Example 2} We shall make a very small change in Example 1 in order to show that long-range dependence is not the main factor causing the scaling transition. In the filter (\ref{cij}) we redefine only $c_{0,0}:=a_1(0)+a_2(0)$, and $c_{0,0}$ is chosen in such a way that the following condition \begin{equation}\label{sumZero} \sum_{i_1=0}^{\infty}\sum_{i_2=0}^{\infty}c_{i_1,i_2}=0 \end{equation} is satisfied.
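In the concrete parametrization used below, $a_j(i)=(1+i)^{-\gamma_j}$, $i\ge 1$, with $\gamma_j>\max (1, 1/\alpha)$, one admissible choice (given here only for illustration) is
$$
a_j(0)=-\sum_{i=1}^{\infty}(1+i)^{-\gamma_j}=-\bigl(\zeta(\gamma_j)-1\bigr), \qquad j=1, 2,
$$
which is finite since $\gamma_j>1$; then $c_{0,0}=a_1(0)+a_2(0)=2-\zeta(\gamma_1)-\zeta(\gamma_2)$ and both (\ref{sumZero}) and $\sum_{i=0}^{\infty}a_j(i)=0$, $j=1, 2$, hold. In general any constants with the property described next work equally well.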
Quantities $a_1(0), a_2(0)$ we choose in such a way, that $\sum_{i=0}^{\infty}a_j(i)=0, \ j=1, 2.$ Now the filter can be written as $c_{i_1,i_2}=a_1(i_1)\ind{i_1\geq 0}\ind{i_2=0}+a_2(i_2)\ind{i_2\geq 0}\ind{i_1=0}$, and, as in Example 1, we have formulae (\ref{chfuncd2})-(\ref{stgama}) Now we assume $a_j(i)=(1+i)^{-\gamma_j}, \ i\ge 1$,\ $\gamma_j>\max (1, 1/\alpha), \ j=1, 2,$ and, using the same notation (\ref{stgama}), we investigate the quantity (\ref{Jbnd2}). We recall that $a_0+b_0$ is negative and condition (\ref{sumZero}) holds. The integral in the expression of $J_{n_1, n_2}$ can be divided into three parts: \begin{equation*} \frac{J_{n_1, n_2}}{n_1n_2}= \int_{0}^{\infty}\int_{0}^{\infty}f_{n_1, n_2}^{(1)}(u_1,u_2){\rm d} u_1{\rm d} u_2 + \int_{0}^{\infty}\left( \int_{-\infty}^{0}f_{n_1, n_2}^{(1)}(u_1,u_2){\rm d} u_1 \right){\rm d} u_2+ \int_{-\infty}^{0}\left( \int_{0}^{\infty}f_{n_1, n_2}^{(1)}(u_1,u_2){\rm d} u_1 \right){\rm d} u_2 \end{equation*} \begin{equation*} =J_{n_1, n_2}^{(1)}+J_{n_1, n_2}^{(2)}+J_{n_1, n_2}^{(3)}. \end{equation*} It is easy to see that \begin{equation*} J_{n_1,n_2}^{(2)}=\int_{0}^{\infty}\left( \int_{-\infty}^{0} \abs{\sum_{l=1}^{d}x_l\left( \ind{\{0\leq n_2u_2\leq \sv{n_2t_2^{(l)}}\}} U_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1) \right) }^\alpha {\rm d} u_1 \right) {\rm d} u_2. \end{equation*} and \begin{equation*} J_{n_1,n_2}^{(3)}= \int_{-\infty}^{0} \left( \int_{0}^{\infty} \abs{\sum_{l=1}^{d}x_l\left( \ind{\{0\leq n_1u_1\leq \sv{n_1t_1^{(l)}}\}}U_{t_2^{(l)},\gamma_2}(\sv{n_2u_2},n_2) \right) }^\alpha {\rm d} u_1 \right) {\rm d} u_2. \end{equation*} Since for sums, formed separately from sequences $\{a_1(k)\}$ or $\{a_2(k)\}$ we have condition $\sum_{i=0}^{\infty}a_j(i)=0, \ j=1, 2$, therefore from Proposition \ref{prop1} we get \begin{equation*} \frac{J_{n_1, n_2}^{(2)}}{n_1^{1-\gamma_1}}\rightarrow \int_{0}^{\infty} \left( \int_{-\infty}^{0} \abs{\sum_{l=1}^{m}x_l \ind{\{0\leq u_2\leq t_2^{(l)}\}} H_{\gamma_1}(u_1,t_1^{(l)}) }^\alpha {\rm d} u_1 \right) {\rm d} u_2, \end{equation*} and \begin{equation*} \frac{J_{n_1, n_2}^{(3)}}{n_2^{1-\gamma_2}}\rightarrow \int_{-\infty}^{0}\left( \int_{0}^{\infty} \abs{\sum_{l=1}^{m}x_l \ind{\{0\leq u_1\leq t_1^{(l)}\}} H_{\gamma_2}(u_2,t_2^{(l)}) }^\alpha {\rm d} u_1 \right) {\rm d} u_2. \end{equation*} It remains to consider the term $J_{n_1, n_2}^{(1)}$. Since $U_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1)=U_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1)\ind{\{0\leq \sv{n_1u_1}\leq \sv{n_1t_1^{(l)}}\}}$ and similar equality can be written for $U_{t_2^{(l)},\gamma_2}(\sv{n_2u_2},n_2) $, we have \begin{eqnarray}\label{jnm3} J_{n_1, n_2}^{(1)} &=& \int_{\mathbb{R} _+^2}\abs{\sum_{l=1}^{m}x_l\left( \ind{\{0\leq n_2u_2\leq \sv{n_2t_2^{(l)}}\}} U_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1)+\ind{\{0\leq n_1u_1\leq \sv{n_1t_1^{(l)}}\}}U_{t_2^{(l)},\gamma_2}(\sv{n_2u_2},n_2) \right) }^\alpha{\rm d} u_1{\rm d} u_2 \\ \nonumber &=& \int_{\mathbb{R} _+^2} \abs{\sum_{l=1}^{m}x_l\ind{\{0\leq n_2u_2\leq \sv{n_2t_2^{(l)}}\}}\ind{\{0\leq n_1u_1\leq \sv{n_1t_1^{(l)}}\}}\left( U_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1) + U_{t_2^{(l)},\gamma_2}(\sv{n_2u_2},n_2) \right) }^\alpha{\rm d} u_1{\rm d} u_2. 
\end{eqnarray} Denoting \begin{equation*} U'_{t_j^{(l)},\gamma_j}(i,n_j):= \sum_{k=\sv{n_jt_j^{(l)}}-i+1}^{\infty}a_j(k),\ j=1, 2, \end{equation*} and taking into account (\ref{sumZero}), we have \begin{equation*} U_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1) + U_{t_2^{(l)},\gamma_2}(\sv{n_2u_2},n_2)=- \left( U'_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1) + U'_{t_2^{(l)},\gamma_2}(\sv{n_2u_2},n_2) \right). \end{equation*} Substituting this equality into (\ref{jnm3}) we get \begin{equation}\label{jnm4} J_{n_1, n_2}^{(1)}= \int_{0}^{\infty}\int_{0}^{\infty} \abs{\sum_{l=1}^{m}x_l\ind{\{0\leq n_2u_2\leq \sv{n_2t_2^{(l)}}\}}\ind{\{0\leq n_1u_1\leq \sv{n_1t_1^{(l)}}\}}\left( U'_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1) + U'_{t_2^{(l)},\gamma_2}(\sv{n_2u_2},n_2) \right) }^\alpha{\rm d} u_1{\rm d} u_2. \end{equation} Now, as in Proposition \ref{prop1}, we can prove that \begin{equation}\label{st1} \frac{U'_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1)}{n_1^{1-\gamma_1}} \to \ind{\{0\leq u_1\leq t_1^{(l)}\}}\frac{(t_1^{(l)}-u_1)^{1-\gamma_1}}{\gamma_1-1}, \end{equation} \begin{equation}\label{st2} \frac{U'_{t_2^{(l)},\gamma_2}(\sv{n_2u_2},n_2)}{n_2^{1-\gamma_2}} \to \ind{\{0\leq u_2\leq t_2^{(l)}\}}\frac{(t_2^{(l)}-u_2)^{1-\gamma_2}}{\gamma_2-1}, \end{equation} as $n_1, n_2\to \infty$. Thus, we have the same situation as in Example 1 with sequences $z_{\gamma_1,n_1}=n_1^{1-\gamma_1}, z_{\gamma_2,n_2}=n_2^{1-\gamma_2}$, only now both sequences tend to zero, since $\gamma_i>1$. Taking $n_1=n, n_2=n^\tau$ and $$ M_n=M_n(\gamma_1, \gamma_2):=\max\{z_{\gamma_1,n_1},z_{\gamma_2,n_2}\}=\begin{cases} n^{\tau (1-\gamma_2)}, \text{ if } \tau \leq \tau_0,\\ n^{1-\gamma_1}, \text{ if } \tau > \tau_0, \end{cases} $$ where $\tau_0=(\gamma_1-1)/(\gamma_2-1)>0$, we easily get that there is a scale transition at the point $\tau_0$. It is possible to formulate the analog of Proposition \ref{prop3} with limit distributions for the appropriately normalized sum $S_{n,n^\tau}(t_1,t_2)$ (to this aim we need to find majorizing functions as in (\ref{bound1}) and to use the Lebesgue dominated convergence theorem), but we shall not do this, since the main task for this example was to show that the scaling transition need not be caused by long-range dependence, which was present in Example 1. In this example with the scale transition we have condition (\ref{sumZero}), which indicates negative dependence (this term was introduced in \cite{Lahiri}). Unfortunately, in both examples in which we face the scale transition it is not possible to use the notion of directional memory, which was introduced in \cite{Paul20}, since in both examples the normalizing constants are of the form $$ A_{n_1, n_2}=(n_1n_2)^{1/\alpha}\max (n_1^{1-\gamma_1}, n_2^{1-\gamma_2}), $$ and cannot be written in the form $$ n_1^{1/\alpha+\delta_1}n_2^{1/\alpha+\delta_2}. $$ But in the first example we can speak about general (not directional) positive memory, since the additional factor to $(n_1n_2)^{1/\alpha}$ is of the form $\max (n_1^{\delta_1}, n_2^{\delta_2})$ with both $\delta_i$ positive, while in the second example these exponents $\delta_i$ are both negative, therefore it is reasonable to speak about general negative memory. It is possible to say that in the first example the linear processes generated separately by the filters $\{a_1(i)\}$ and $\{a_2(i)\}$ have positive memory, while in the second example, with conditions $\sum_{i=0}^\infty a_j(i)= 0, \ j=1, 2$, both linear processes generated by these filters have negative memory.
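As a numerical illustration of this example (the values are chosen only for this purpose), take $\alpha=3/2$, $\gamma_1=1.2$, $\gamma_2=1.5$, so that $\gamma_j>\max(1, 1/\alpha)$ and $\tau_0=(\gamma_1-1)/(\gamma_2-1)=0.4$. Then
$$
z_{\gamma_1,n}=n^{-0.2},\qquad z_{\gamma_2,n^\tau}=n^{-0.5\tau},\qquad M_n=\max\{n^{-0.2},\ n^{-0.5\tau}\}=\begin{cases} n^{-0.5\tau}, \text{ if } \tau \leq 0.4,\\ n^{-0.2}, \text{ if } \tau > 0.4, \end{cases}
$$
and, although $M_n\to 0$, the full normalization $A_{n,n^\tau}=(n^{1+\tau})^{1/\alpha}M_n$ still grows to infinity, since its exponent $\tfrac{2}{3}(1+\tau)-\min(0.5\tau,\,0.2)$ is positive for every $\tau>0$.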
\subsection{Example 3} Now we shall take an example of a linear r.f. (\ref{fieldd2}) with the following filter \begin{equation*} c_{i,j}= \begin{cases} a_i, \text{ if } i\geq 1, j=0,\\ c_i, \text{ if } i=j\geq 1,\\ 0, \text{ otherwise}, \end{cases} \end{equation*} where $a_0=c_0=0, $ $a_i=(1+i)^{-\gamma_1}$, \ $c_i=(1+i)^{-\gamma_2}$, $1/\alpha<\gamma_2<\gamma_1<1$. It is convenient to write this filter as follows: \begin{equation}\label{cij2} c_{i,j}=a_{i}\ind{i\geq 1}\ind{j=0}+c_{j}\ind{i=j\geq 1}, \end{equation} i.e., again we have coefficients of the filter on two lines, as in the previous examples, but now only one of these lines is a coordinate axis. As in Example 1, we assume $n_1=n, n_2=n^\tau$. Now we get two points where the scaling transition occurs. The first point $\tau_0=(1-\gamma_1)/(1-\gamma_2)$ is the same as in Example 1, and we get an additional point $\tau_1=1$. Let us denote \begin{equation}\label{An2} A_n(\tau)=A_{n, n^\tau}:=\begin{cases} n^{(1+\tau)/\alpha+1-\gamma_1} , \text{ if } \tau\in \left( 0, \tau_0 \right),\\ n^{(1+\tau)/\alpha+\tau(1-\gamma_2)} , \text{ if } \tau\in \left[\tau_0 ,1 \right],\\ n^{(1+\tau)/\alpha+1-\gamma_2} , \text{ if } \tau\in \left( 1,\infty \right), \end{cases} \end{equation} and $a(u_1, u_2)=\max (0,-u_1,-u_2), \ b(u_1, u_2; t_1, t_2)=\min (t_1-u_1,t_2-u_2).$ The function $H_{\gamma}(u,t)$ was defined in Proposition \ref{prop1}; we recall here that for $\gamma<1$ \begin{equation*} H_{\gamma}(u,t)=\frac{1}{1-\gamma}\left (((t-u)_+)^{1-\gamma}-((-u)_+)^{1-\gamma} \right ). \end{equation*} \begin{prop}\label{prop5} Suppose that we have a sum (\ref{fieldd2}) of values of a linear r.f. with the filter (\ref{cij2}). If $1/\alpha <\gamma_2\le \gamma_1<1 $ and $n_1=n, n_2=n^\tau$, then, as $n \to \infty$, \begin{equation}\label{lim4} A_n^{-1}(\tau)S_{n,n^\tau}(t_1,t_2)\stackrel{f.d.d.}{\longrightarrow} V_1(\tau, t_1, t_2) \end{equation} where \begin{equation}\label{lim4a} V_1(\tau, t_1, t_2)=\begin{cases} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}H_{\gamma_1}(u_1,t_1)\ind{\{0\leq u_2 \leq t_2\}} M({\rm d} u_1 {\rm d} u_2) , \text{ if } \tau\in \left( 0, \tau_0 \right),\\ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\ind{\{0\leq u_1 \leq t_1\}}H_{\gamma_2}(u_2,t_2)M({\rm d} u_1 {\rm d} u_2) , \text{ if } \tau\in \left(\tau_0 ,1 \right),\\ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H_{\gamma_2}(u_1,t_1)\ind{\{0\leq u_2 \leq t_2\}} M({\rm d} u_1 {\rm d} u_2), \text{ if } \tau\in \left( 1,\infty \right), \end{cases} \end{equation} and \begin{equation}\label{lim4b} V_1(\tau_0, t_1, t_2)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left (\ind{\{0\leq u_2 \leq t_2\}}H_{\gamma_1}(u_1,t_1)+ \ind{\{0\leq u_1 \leq t_1\}}H_{\gamma_2}(u_2,t_2)\right ) M({\rm d} u_1 {\rm d} u_2), \end{equation} \begin{equation}\label{lim4c} V_1(1, t_1, t_2)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\ind{a(u_1, u_2) <b(u_1, u_2; t_1, t_2)}\frac{(b(u_1, u_2; t_1, t_2))^{1-\gamma_2}-(a(u_1, u_2))^{1-\gamma_2}}{1-\gamma_2} M({\rm d} u_1 {\rm d} u_2). \end{equation} \end{prop} The sum $Z_{{\bf t}^{(l)},\gamma_2}(i,j,n_1,n_2)$ (see (\ref{ztsl}) below) is more complicated compared with $U_{t_j^{(l)},\gamma_j}(i,n_j)$ (see (\ref{stgama})), and it alone gives us one point of transition. Namely, we can consider the r.f. (\ref{fieldd2}) with the following simple filter \begin{equation}\label{cijlygus} c_{i,j}=c_{i}, \ \ {\rm if} \ i=j\ge 1, \ \ {\rm and} \ \ c_{i,j}=0, \ \ {\rm elsewhere}. \end{equation} Analysis of this random field gives us the scaling transition point $\tau=1$.
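To visualise (\ref{An2}), take for illustration $\alpha=3/2$, $\gamma_1=0.9$, $\gamma_2=0.8$ (so that $1/\alpha<\gamma_2<\gamma_1<1$ and $\tau_0=1/2$); the exponent of $A_n(\tau)$ then equals
$$
\frac{2}{3}(1+\tau)+\begin{cases} 0.1, \text{ if } \tau\in \left( 0, \tfrac12 \right),\\ 0.2\tau, \text{ if } \tau\in \left[ \tfrac12, 1 \right],\\ 0.2, \text{ if } \tau\in \left( 1, \infty \right), \end{cases}
$$
a continuous piecewise linear function of $\tau$ whose slope changes exactly at the two points $\tau_0=1/2$ and $\tau_1=1$, i.e., at the points where the limit r.f. in Proposition \ref{prop5} changes.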
{\it Sketch of the proof of Proposition \ref{prop5}}. Taking into account formulae (\ref{chfuncd2})-(\ref{fn1n2}) and considering the ch.f. of the vector $\left( S_{\bf n}({\bf t}^{(1)}),\dots, S_{\bf n}({\bf t}^{(m)}) \right)$, we must investigate the asymptotic of $f_{n_1,n_2}^{(3)}(u_1,u_2)$. Similarly to Example 1, taking into account (\ref{cij2}), we can get the following expression of this quantity \begin{equation} \label{CijnmIsraiska} f_{n_1,n_2}^{(3)}(u_1,u_2) =\abs{ \sum_{l=1}^{m}x_l\left(\ind{\{0\leq n_2u_2\leq \sv{n_2t_2^{(l)}}\}} U_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1)+Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}, \sv{n_2u_2},n_1,n_2) \right)}^\alpha, \end{equation} where $U_{t_j^{(l)},\gamma_j}(i,n)$ is defined in (\ref{stgama}) and \begin{equation}\label{ztsl} Z_{{\bf t}^{(l)},\gamma_2}(i,j,n_1,n_2):=\sum_{k=\max\left( 0,-i,-j \right)}^{\min\left( \sv{n_1t_1^{(l)}}-i,\sv{n_2t_2^{(l)}}-j \right)}c_{k}. \end{equation} Comparing the expression (\ref{CijnmIsraiska}) with the corresponding expression of $f_{n_1,n_2}(u_1,u_2)$ in Example 1 we see that the first term in (\ref{CijnmIsraiska}) is the same (corresponding to the filter on the horizontal axis), and for this term we can apply Proposition \ref{prop1}. This will give us the following relation (independent of $\tau$, since $U_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1)$ depends only on $n_1$): \begin{equation*} \frac{U_{t_1^{(l)},\gamma_1}(\sv{nu_1},n_1)}{z_{\gamma_1,n_1}}=\rightarrow \ind{(-\infty,t_l)}(u_1)\int_{(-u_1)_+}^{t_l-u_1}v^{-\gamma_1} {\rm d} v =H_{\gamma_1}(u_1,t_l), \end{equation*} with $z_{\gamma_1,n_1}=n_1^{1-\gamma_1}.$ The second term (corresponding to the filter on the diagonal) in (\ref{CijnmIsraiska}) is different, and Proposition \ref{prop1} directly cannot be applied. Therefore, we investigate the quantity \begin{equation}\label{jnm2} J_{n_1,n_2}=n_1n_2\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{n_1,n_2}^{(3)}(u_1,u_2){{\rm d} u}_1{{\rm d} u}_2=J^{(1)}_{n_1,n_2}+J^{(2)}_{n_1,n_2}+J^{(3)}_{n_1,n_2}+J^{(4)}_{n_1,n_2} \end{equation} with $n_1=n, n_2=n^\tau$ (but sometimes we shall leave $n_1, n_2$ instead of $n, n^\tau$), dividing the integral over $\mathbb{R} ^2$ into four integrals over regions $\{u\ge 0, v\ge 0 \}, \{u< 0, v\ge 0 \}, \{u< 0, v< 0 \}, \{u\ge 0, v< 0 \}$. In investigation of these integrals we use without special mentioning the following steps: we prove the point-wise convergence of functions in the expression of integrals and then use the Lebesgue dominated convergence theorem. (i) We start with the integral $$ J^{(1)}_{n,n^\tau}=n^{1+\tau}\int_{0}^{\infty}\int_{0}^{\infty}f_{n,n^\tau}^{(3)}(u_1,u_2){{\rm d} u}_1{{\rm d} u}_2 $$ and consider the case $0<\tau<1$.Changing the sum in (\ref{ztsl}) by integral after some transformations we can get $$ Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}, \sv{n_2u_2}, n_1, n_2)= n_2^{1-\gamma_2}\ind{\{\sv{n_1u_1}\leq \sv{n_1t_1^{(l)}}\}}\ind{\{\sv{n_2u_2}\leq \sv{n_2t_2^{(l)}}\}} \int_{0}^{\infty}\kappa_{n_1,n_2}(y){\rm d} y, $$ where $$ \kappa_{n_1,n_2}(y)=\ind{\left( 0, \min\left( (\sv{n_1t_1^{(l)}}-\sv{n_1u_1}+1)/{n_2},(\sv{n_2t_2^{(l)}}-\sv{n_2u_2}+1)/{n_2} \right) \right)}(y) n_2^{\gamma_2}c_{\sv{n_2y}}. 
$$ Since for a fixed $y>0$ we have $\kappa_{n,m}(y)\rightarrow \ind{\left( 0, s_l-u_2 \right) }(y) y^{-\gamma_2}$, and this function can be bounded by integrable function $\abs{\kappa_{n,m}(y)}\leq \ind{\left( 0, s_l-u_2+2 \right) }(y) y^{-\gamma_2},$ we get $$ \int_{0}^{\infty}\kappa_{n_1,n_2}(y){\rm d} y\rightarrow \int_{0}^{t_2^{(l)}-u_2} y^{-\gamma_2} {\rm d} y=\frac{(t_2^{(l)}-u_2)^{1-\gamma_2}}{1-\gamma_2}. $$ Then, for $u_1\ge0, u_2\ge 0, $ we can get $$ \frac{Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}, \sv{n_2u_2}, n_1, n_2)}{n_2^{1-\gamma_2}}\rightarrow \ind{\{u_1 <t_1^{(l)}\}} \ind{\{u_2 <t_2^{(l)}\}} \frac{(t_2^{(l)}-u_2)^{1-\gamma_2}}{1-\gamma_2}=\ind{\{u_1 <t_1^{(l)}\}} H_{\gamma_2}(u_2,t_2^{(l)}), $$ since, for $u_2\ge 0,$ we have $(-u_2)_+=0$. Now we return to the function from (\ref{CijnmIsraiska}) $$ f_{n_1,n_2}^{(3)}(u_1,u_2) = \Big | \sum_{l=1}^{m}x_l\Big (\ind{\{0\leq n_2u_2\leq \sv{n_2t_2^{(l)}}\}} n_1^{1-\gamma_1}\frac{U_{t_1^{(l)},\gamma_1}(\sv{n_1u_1},n_1)}{n_1^{1-\gamma_1}} $$ $$ + n_2^{1-\gamma_2}\frac{Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1t_1^{(l)}}, \sv{n_2t_2^{(l)}},n_1,n_2)}{n_2^{1-\gamma_2}} \Big )\Big |^\alpha, $$ Thus we have the same situation as in Example 1. Denoting $M_n:=\max(n^{1-\gamma_1},m^{1-\gamma_2})=\max(n^{1-\gamma_1},n^{\tau(1-\gamma_2)})$ we get the same functions $K_i(\tau)$: $$ \lim_{n\to \infty}\frac{n^{1-\gamma_1}}{M_n}= K_1(\tau):=\begin{cases} 0,\text{ if }1>\tau >\tau_0,\\ 1,\text{ if }0<\tau \leq \tau_0, \end{cases} $$ $$ \lim_{n\to \infty}\frac{m^{1-\gamma_2}}{M_n}= K_2(\tau):=\begin{cases} 1,\text{ if }\tau \geq \tau_0,\\ 0,\text{ if }\tau < \tau_0. \end{cases} $$ Then we easily get $$ \frac{f_{n_1,n_2}^{(3)}(u_1,u_2)}{M_n^\alpha}\rightarrow\Big | \sum_{l=1}^{m}x_l\Big (\ind{\{0\leq u_2 \leq t_2^{(l)}\}} K_1(\tau) H_{\gamma_1}(u_1,t_1^{(l)}) +\ind{\{u_1 <t_1^{(l)}\}}K_2(\tau) H_{\gamma_2}(u_2,t_2^{(l)})\Big )\Big |^\alpha $$ and \begin{equation}\label{JNM1} \frac{J^{(1)}_{n,n^\tau}}{n^{1+\tau}M_n^\alpha}\rightarrow \int_{0}^{\infty}\int_{0}^{\infty}\Big | \sum_{l=1}^{m}x_l\Big (\ind{\{0\leq u_2 \leq t_2^{(l)}\}} K_1(\tau) H_{\gamma_1}(u_1,t_1^{(l)})+\ind{\{u_1 <t_1^{(l)}\}}K_2(\tau) H_{\gamma_2}(u_2,t_2^{(l)})\Big )\Big |^\alpha {{\rm d} u}_1{{\rm d} u}_2. \end{equation} In the case $\tau>1$ (this means $n_2/n_1\to \infty$) we similarly can get (we recall that $u_1\ge0, u_2\ge 0 $) $$ \frac{Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}, \sv{n_2u_2}, n_1, n_2)}{n_1^{1-\gamma_2}}\rightarrow \ind{\{u_1 <t_1^{(l)}\}} \ind{\{u_2 <t_2^{(l)}\}} \frac{(t_1^{(l)}-u_1)^{1-\gamma_2}}{1-\gamma_2}=\ind{\{u_2 <t_2^{(l)}\}} H_{\gamma_2}(u_1,t_1^{(l)}). $$ Since for $n_1=n, n_2=n^\tau$ norming constants for the two terms in (\ref{CijnmIsraiska}) are $n^{1-\gamma_1}$ and $n^{1-\gamma_2}$, respectively, and $n^{1-\gamma_1}<n^{1-\gamma_2}$, we get \begin{equation}\label{JNM1A} \frac{J^{(1)}_{n,n^\tau}}{n^{1+\tau+(1-\gamma_2)\alpha}}\rightarrow \int_{0}^{\infty}\int_{0}^{\infty}\Big | \sum_{l=1}^{m}x_l\Big (\ind{\{0\leq u_2 \leq t_2^{(l)}\}} H_{\gamma_2}(u_1,t_1^{(l)}) \Big )\Big |^\alpha {{\rm d} u}_1{{\rm d} u}_2. 
\end{equation} (ii) Now we investigate $$ J^{(2)}_{n,n^\tau}=n^{1+\tau}\int_{-\infty}^{0}\int_{0}^{\infty}f_{n,n^\tau}^{(3)}(u_1,u_2){{\rm d} u}_2{{\rm d} u}_1 $$ and consider the case $0<\tau<1$.Changing the sum in (\ref{ztsl}) by integral after some transformations we can get $$ Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}, \sv{n_2u_2}, n_1, n_2)\leq \ind{\brc{-\sv{n_1u_1}\leq \sv{n_2t_2^{(l)}}-\sv{n_2u_2} }} \ind{\brc{\sv{n_2u_2}\leq \sv{n_2t_2^{(l)}} }} \int_{0}^{ {n_2t_2^{(l)}}-{n_2u_2} +2n_2} y^{-\gamma_2} {\rm d} y $$ $$ =\ind{\brc{-\sv{n_1u_1}\leq \sv{n_2t_2^{(l)}}-\sv{n_2u_2} }} \ind{\brc{\sv{n_2t_2}\leq \sv{n_2t_2^{(l)}} }} \frac{\skl{ {n_2t_2^{(l)}}-{n_2u_2} +2n_2}^{1-\gamma_2}}{1-\gamma_2}, $$ hence, using simple inequalities between indicator functions we get $$ \frac{Z_{t_1^{(l)},t_2^{(l)},\gamma_2}(\sv{n_1u_1},\sv{n_2t_2}, n_1, n_2)}{n_2^{1-\gamma_2}}\leq \ind{\brc{u_1\geq \frac{n_2}{n_1}(t_2-t_2^{(l)}-1) }} \ind{\brc{t_2\leq u_2^{(l)}+1 }} \frac{\skl{ {t_2^{(l)}}-{u_2} +2}^{1-\gamma_2} }{1-\gamma_2}. $$ Since we consider the set $\{(u_1,u_2): u_1<0,\ u_2\geq 0\}$, we have $\ind{\brc{u_1\geq \frac{n_2}{n_1}(u_2-t_2^{(l)}-1) }}\rightarrow \ind{\brc{u_1\geq 0 }}= 0,$ therefore, $$ \frac{Z_{t_1^{(l)},t_2^{(l)},\gamma_2}(\sv{n_1u_1},\sv{n_2t_2}, n_1, n_2)}{n_2^{1-\gamma_2}}\rightarrow 0. $$ Using the same sequence $M_n$ and the function $K_1,$ we get $$ \frac{f_{n,n^\tau}^{(3)}(u_1,u_2)}{M_n^\alpha}\rightarrow \Big | \sum_{l=1}^{m}x_l \ind{\{0\leq u_2 \leq t_2^{(l)}\}} K_1(\tau) H_{\gamma_1}(u_1,t_1^{(l)})\Big |^\alpha, $$ and, finally, \begin{equation}\label{JNM2} \frac{J^{(2)}_{n,n^\tau}}{n^{1+\tau}M_n^\alpha}\rightarrow \int_{-\infty}^{0}\int_{0}^{\infty}\Big | \sum_{l=1}^{m}x_l \ind{\{0\leq u_2 \leq t_2^{(l)}\}} K_1(\tau) H_{\gamma_1}(u_1,t_1^{(l)})\Big |^\alpha {{\rm d} u}_2{{\rm d} u}_1. \end{equation} In the case $\tau>1$ we get $$ Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}, \sv{n_2u_2}, n_1, n_2)=n_1^{1-\gamma_2}\ind{\{\sv{n_2u_2}\leq \sv{n_2t_2^{(l)}}\}}\int_{0}^{\infty}\kappa_{n_1,n_2}(y){\rm d} y, $$ where $$ \kappa_{n_1,n_2}(y)=\ind{\left( -\sv{n_1u_1}/n_1, \min\left( (\sv{n_1t_1^{(l)}}-\sv{n_1u_1}+1)/{n_1}, (\sv{n_2t_2^{(l)}}-\sv{n_2u_2}+1)/{n_1} \right) \right)}(y) n_1^{\gamma_2}c_{\sv{n_2y}}. $$ Using the relation $\kappa_{n_1,n_2}(y) \rightarrow \ind{\left( -u_1, t_1^{(l)}-u_1 \right) }(y) y^{-\gamma_2},$ in the same way as in (i) we obtain $$ \frac{Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}, \sv{n_2u_2}, n_1, n_2)}{n_1^{1-\gamma_2}}\rightarrow \ind{\{u_2 <t_2^{(l)}\}}H_{\gamma_2}(u_1,t_1^{(l)}). $$ Again, due to $n^{1-\gamma_1}<n^{1-\gamma_2}$, the second term in (\ref{CijnmIsraiska}) is prevailing, therefore we get \begin{equation}\label{JNM2A} \frac{J^{(2)}_{n,n^\tau}}{n^{1+\tau+(1-\gamma_2)\alpha}}\rightarrow \int_{-\infty}^{0}\int_{0}^{\infty}\Big | \sum_{l=1}^{m}x_l \ind{\{0\leq u_2 \leq t_2^{(l)}\}} H_{\gamma_2}(u_1,t_1^{(l)})\Big |^\alpha {{\rm d} u}_2{{\rm d} u}_1. \end{equation} (iii) We investigate the third integral $$ J^{(3)}_{n,n^\tau}=n^{1+\tau}\int_{-\infty}^{0}\int_{-\infty}^0 f_{n,n^\tau}^{(3)}(u_1,u_2){{\rm d} u}_2{{\rm d} u}_1 $$ Since our goal is to prove that after the appropriate normalization this integral tends to zero, we use the rough estimate $$ J^{(3)}_{n,n^\tau}\leq n^{1+\tau}d^\alpha\sum_{l=1}^{d}\abs{x_l}^\alpha\int_{-\infty}^{0}\int_{-\infty}^0\abs{Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}, \sv{n_2u_2}, n_1, n_2)}^\alpha {{\rm d} u}_2{{\rm d} u}_1. $$ Let us assume that $\tau<1,$ i.e., $n_2/n_1 \to 0$. 
By change of variables we have $$ \int_{-\infty}^{0}\int_{-\infty}^0\abs{Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}, \sv{n_2u_2}, n_1, n_2)}^\alpha {{\rm d} u}_2{{\rm d} u}_1 $$ $$ =\int_{-\infty}^{0}\int_{-\infty}^{-\sv{n_2u_2}/n_1}\abs{Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}+\sv{n_2u_2}, \sv{n_2u_2}, n_1, n_2)}^\alpha {{\rm d} u}_1{{\rm d} u}_2. $$ Once more, changing the sum into integral, we get $$ Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}+\sv{n_2u_2}, \sv{n_2u_2}, n_1, n_2) $$ $$ \leq \ind{\brc{\sv{n_1u_1}\leq \sv{n_1t_1^{(l)}}}} \ind{\brc{\sv{n_1u_1}\geq -\sv{n_2t_2^{(l)}}}} n_2^{1-\gamma_2} \frac{\left( t_2^{(l)}-u_2+1 \right)^{1-\gamma_2}-\left( -u_2 \right)^{1-\gamma_2}}{1-\gamma_2}. $$ This gives us the following estimate $$ \frac{J^{(3)}_{n_1,n_2}}{n_1n_2^{1+\alpha(1-\gamma_2)}}\leq \int_{-\infty}^{0}|H_{\gamma_2}(u_2,t_2^{(l)}+1)|\int_{-\infty}^{\infty} \ind{(-\infty,-\sv{n_2u_2}/n_1)}(u_1)\ind{\brc{\sv{n_1u_1}\leq \sv{n_1t_1^{(l)}}}} \ind{\brc{\sv{n_1u_1}\geq -\sv{n_2t_2^{(l)}}}}{{\rm d} u}_1{{\rm d} u}_2. $$ Considering these three indicator functions we get $$ \ind{(-\infty,-\sv{n_2u_2}/n_1)}(u_1)\ind{\brc{\sv{n_1u_1}\leq \sv{n_1t_1^{(l)}}}} \ind{\brc{\sv{n_1u_1}\geq -\sv{n_2t_2^{(l)}}}}\leq \ind{\left[-\frac{n_2}{n_1}t_2^{(l)} , \min(t_1^{(l)}+1,-\sv{n_2u_2}/n_1 )\right]}(u_1), $$ therefore, $$ \int_{-\infty}^{\infty} \ind{(-\infty,-\sv{n_2u_2}/n_1)}(u_1)\ind{\brc{\sv{n_1u_1}\leq \sv{n_1t_1^{(l)}}}} \ind{\brc{\sv{n_1u_1}\geq -\sv{n_2t_2^{(l)}}}}{{\rm d} u}_1\leq \min(t_1^{(l)}+1,-\frac{\sv{n_2}}{n_1u_2} )+\frac{n_2}{n_1}t_2^{(l)}. $$ For a fixed $u_2$ the right-hand side of the last inequality tends to zero, therefore, we get \begin{equation}\label{JNM3} \frac{J^{(3)}_{n_1,n_2}}{n_1n_2^{1+(1-\gamma_2)\alpha}}\rightarrow 0. \end{equation} In the case $\tau>1,$ i.e., $n_1/n_2 \to 0$, we get \begin{equation}\label{JNM3A} \frac{J^{(3)}_{n_1,n_2}}{n_1^{1+(1-\gamma_2)\alpha}n_2}\rightarrow 0. \end{equation} (iv) It remains to investigate the last integral $$ J^{(4)}_{n_1,n_2}=n_1n_2\int_{0}^{\infty}\int_{-\infty}^0 f_{n_1,n_2}^{(3)}(u_1,u_2){{\rm d} u}_2{{\rm d} u}_1 $$ $$ =n_1n_2\int_{0}^{\infty}\int_{-\infty}^0 \abs{Z_{{\bf t}^{(l)},\gamma_2}(\sv{n_1u_1}, \sv{n_2u_2}, n_1, n_2)}^\alpha {{\rm d} u}_2{{\rm d} u}_1. $$ Let us note that $$ Z_{t_1^{(l)}, t_2^{(l)}, \gamma_2}(\sv{n_1u_1}, \sv{n_2u_2}, n_1, n_2)=Z_{t_2^{(l)}, t_1^{(l)}, \gamma_2}(\sv{n_2u_2}, \sv{n_1u_1}, n_2, n_1), $$ therefore the investigation of this quantity in the case $\tau<1$ will be the same as in the case (ii) and $\tau>1$. We shall get \begin{equation}\label{JNM4} \frac{J^{(4)}_{n_1,n_2}}{n_1n_2^{1+(1-\gamma_2)\alpha}}\rightarrow \int_{-\infty}^{0}\int_{0}^{\infty}\Big | \sum_{l=1}^{m}x_l \ind{\{0\leq u_2 \leq t_2^{(l)}\}} H_{\gamma_2}(u_1,t_1^{(l)})\Big |^\alpha {{\rm d} u}_2{{\rm d} u}_1. \end{equation} In the case $\tau>1$ investigation is similar to (ii) and $\tau<1$, and we get \begin{equation}\label{JNM4A} \frac{J^{(4)}_{n_1,n_2}}{n_1^{1+(1-\gamma_2)\alpha}n_2}\rightarrow 0. \end{equation} Collecting formulae (\ref{JNM1}) - (\ref{JNM4A}) we get (\ref{lim4a}). Due to the fact that $K_1(\tau_0)=K_2(\tau_0)=1$ we get (\ref{lim4b}). It is more difficult to verify (\ref{lim4c}), to this aim we must simply go through all the proof of (\ref{lim4a}), assuming that $n_1=n_2=n$ and $A_n=n^{(2/\alpha)+1-\gamma_2}$. Let us take the case (i), the integral $J^{(1)}_{n,n}$. 
In all four integrals the first term in the expression of the function $f_{n_1,n_2}^{(3)}(u_1,u_2)$ from (\ref{CijnmIsraiska}) is independent of $n_2$, therefore we need to consider only the second term, namely, $Z_{t_1^{(l)}, t_2^{(l)}, \gamma_2}$. In the case $n_1=n_2=n$ it is easy to see that $\kappa_{n,n}(y)\rightarrow \ind{\left( 0, b(u_1, u_2; t_1^{(l)}, t_2^{(l)}) \right) }(y) y^{-\gamma_2}$, therefore, for $u_1\ge 0, u_2\ge 0,$ we get $$ \frac{Z_{{\bf t}^{(l)},\gamma_2}(\sv{nu_1}, \sv{nu_2}, n, n)}{n^{1-\gamma_2}}\rightarrow \ind{\{u_1 <t_1^{(l)}\}} \ind{\{u_2 <t_2^{(l)}\}} \frac{b(u_1, u_2; t_1^{(l)}, t_2^{(l)})^{1-\gamma_2}}{1-\gamma_2}. $$ Since $\gamma_2<\gamma_1$, the first term after normalization tends to zero; also we have $a(u_1, u_2)=0,$ therefore we get the same expression as in (\ref{lim4c}) in the case $u_1\ge 0, u_2\ge 0.$ The remaining three integrals in (\ref{jnm2}) can be treated similarly. \vspace{3mm} \hfill \mbox{$\Box$}\\[2mm] \section{The case $d=3$} In the above mentioned papers \cite{Puplinskaite2}, \cite{Puplinskaite1}, and \cite{Pilipauskaite} only r.f.s defined on $\mathbb{Z} ^2$ were considered. It was mentioned that the generalization of the results of these papers to the case $\mathbb{Z} ^d$ with $d\ge 3$ is a difficult task, and the first step in this direction was the paper \cite{Surg}, where the scaling transition of linear random fields on $\mathbb{Z} ^3$ was considered. But first we must discuss what we mean by a scaling transition on $\mathbb{Z} ^3$, or more generally, on $\mathbb{Z} ^d, \ d\ge 3$. In Section 2 (the case $d=2$), considering sums (\ref{sum}), we have two possibilities. The first one is the convergence (in the sense of f.d.d.) of appropriately normed sums to some limit process as $(n_1, n_2)\to \infty$, where the limit process is independent of the way the indices $n_1,n_2$ grow. The second one is the situation when the limit for the sums (\ref{sum}) depends on the way $(n_1, n_2)$ grows. In this case there was a quite natural way to define this dependence, using a relation $n_2=f(n_1)$ between $n_1$ and $n_2$, see Definitions 2 and 6. But the straightforward generalization of Definition 2, considering sums (\ref{field}) in the case $d=3$ and assuming $n_1=n^{q_1}, n_2=n^{q_2}, n_3=n^{q_3}$ (such a case is considered in \cite{Surg}), is too narrow, since it presents only one possible way to define the path in $\mathbb{Z} _+^3$. Probably for this reason there is no strict definition of the scaling transition in \cite{Surg}, and the author wrote "...we do not attempt to provide a formal definition of scaling transition for RFs in dimensions $d\ge 3$ since further studies are needed to fully understand it". Our examples in this section show that in dimension 3 we have many more possibilities (compared with the case $d=2$) to define paths along which $(n_1, n_2, n_3)$ grows to infinity; moreover, with growing dimension the complexity grows very rapidly. Therefore, we propose to define the scaling transition, independently of the dimension $d$, as the case where the limits for the sums (\ref{field}) depend on the way ${\bf n} \to \infty$, i.e., there exist at least two paths in $\mathbb{Z} _+^d$ such that the limit r.f.'s for these paths are different and cannot be obtained one from another by simple scaling.
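One possible way to phrase this requirement formally (we record it only as a guide for the examples below) is the following: there exist two paths ${\bf n}'(n)\to \infty$ and ${\bf n}''(n)\to \infty$ in $\mathbb{Z} _+^d$ and normalizations $A'_n\to\infty$, $A''_n\to\infty$ such that
$$
(A'_n)^{-1}S_{{\bf n}'(n)}({\bf t})\stackrel{f.d.d.}{\longrightarrow} V'({\bf t}), \qquad (A''_n)^{-1}S_{{\bf n}''(n)}({\bf t})\stackrel{f.d.d.}{\longrightarrow} V''({\bf t}),
$$
where $V''$ cannot be written as $a\, V'(b_1t_1,\dots,b_dt_d)$ for any constants $a>0,\ b_1>0,\dots, b_d>0$.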
With such definition we should have a simple dichotomy in limit theorems for sums of values of random fields (\ref{field}): or the limit theorem of Lamperti type, as formulated in Corollary 1 in \cite{DavPau1} (see Proposition 1 in the case $d=2$), holds, either there is the scaling transition. At first we shall show that it is easy to generalize examples of r.f. on $\mathbb{Z} ^2$ with simple structure of filters considered in the previous subsections, to higher dimensions. We consider the case $d=3$, although generalization to higher dimensions in some examples does not present any principal difficulties. For the notation of three-dimensional multi-indices we use bold letters, for example ${\bf i} =(i_1, i_2, i_3), {\bf n} =(n_1, n_2, n_3)$. We consider linear r.f. \begin{equation}\label{field3d} X_{{\bf k}}=\sum_{{\bf i} \in \mathbb{Z} ^3_+} c_{{\bf i}}\xi_{{\bf k}-{\bf i}}, \ {\bf k} \in {\bf Z}^3, \end{equation} where $\mathbb{Z} ^3_+=\{{\bf i} \in \mathbb{Z} ^3: \ {\bf i}\ge {{\bf 0}} \}$, and investigate the asymptotic behavior of the process \begin{equation}\label{sum13d} S_{{\bf n}}({\bf t})=\sum_{{\bf k}={\bf 0}}^{\sv{{\bf n} {\bf t}}}X_{{\bf k}}. \end{equation} \subsection{Example 4} We take three sequences of positive numbers $a_j(i)=(1+i)^{-\gamma_j}, \ i\ge 1, \gamma_j>1/\alpha, \ a_j(0)=0, \ j=1, 2,3,$ and the following filter \begin{equation}\label{ci3d} c_{{\bf i}}= \begin{cases} a_{1}(i_1), \text{ if } i_1\geq 1, i_2=i_3=0,\\ a_{2}(i_2), \text{ if } i_2\geq 1, i_1=i_3=0,\\ a_{3}(i_3), \text{ if } i_3\geq 1, i_2=i_1=0,\\ 0, \text{ elsewhere}. \end{cases} \end{equation} Since this filter is exactly the same as considered in the section "Preliminaries" and in Example 1 (in the case $d=2$), we can start with formula (\ref{fn2}) and to write formulae, similar to (\ref{Jbnd2}) and (\ref{fn1n2}): \begin{equation}\label{Jbn} J_{{\bf n}}=n_1n_2n_3 \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{{\bf n}}^{(4)}({\bf u}){\rm d} u_1{\rm d} u_2{\rm d} u_3, \end{equation} where \begin{equation}\label{fbn} f_{{\bf n}}^{(4)}({\bf u})=\abs{\sum_{l=1}^{d}x_l\left( \sum_{p=1}^3 B_p({\bf n}, {\bf u}, l) U_{t^{(l)}_p,\gamma_p}(\sv{n_pu_p},n_p) \right) }^\alpha, \end{equation} \begin{equation}\label{Bpbn} B_p({\bf n}, {\bf u}, l)=\ind{\{0\leq \sv{n_ru_r} \leq \sv{n_rt^{(l)}_r}, \ 0\leq \sv{n_su_s} \leq \sv{n_st^{(l)}_s}, r\ne s, r, s \in Q_p\}}, \quad Q_p=\{1, 2, 3\}\setminus p, \end{equation} and \begin{equation}\label{Sngamma} U_{t^{(l)}_p,\gamma_p}(i_p,n_p)=\sum_{k=\left( -i_p \right)\vee 0 }^{\sv{n_pt^{(l)}_p}-i_p}a_{k}^{(p)}. \end{equation} For quantities $U_{t^{(l)}_p,\gamma_p}(i_p,n_p)$ we apply Proposition \ref{prop1} with normalization by quantities $z_{\gamma_p,n_p}^{(p)}, \ p=1, 2, 3,$ and the further analysis depends on our assumptions on exponents $\gamma_p$ and relation between coordinates of ${\bf n}$. For example, assuming that all $\gamma_p>1$ we shall get that there is no scaling transition and we have the analogue of Proposition \ref{prop2}. Let us consider the case which will give us the scaling transition. We assume $1/\alpha<\gamma_1<\gamma_2<\gamma_3<1$ and we take $n_1=n$ and , for some positive real numbers $\tau$ and $\sigma$, we set $n_2=n^\tau, n_3=n^\sigma$. Then the normalization quantities $z_{\gamma_p,n_p}^{(p)}, \ p=1, 2, 3,$ become functions of $n$: $$ z_{\gamma_1,n}^{(1)}=n^{1-\gamma_1}, z_{\gamma_2,n}^{(2)}=n^{\tau(1-\gamma_2)}, \ z_{\gamma_3,n}^{(3)}=n^{\sigma(1-\gamma_3)}. 
$$ Applying Proposition \ref{prop1} we get, as $n\to \infty$, \begin{equation}\label{lim5} \frac{U_{t^{(l)}_p,\gamma_p}(\sv{n_pu_p},n_p)}{z_{\gamma_p,n}^{(p)}} \to H_{\gamma_p}(u_p,t^{(l)}_p). \end{equation} We define $$ M_{n}(\tau, \sigma):=\max\{z_{\gamma_p,n}^{(p)}, \ p=1, 2, 3 \}=\max\{n^{1-\gamma_1}, n^{\tau (1-\gamma_2)}, n^{\sigma(1-\gamma_3)}\} $$ and \begin{equation}\label{Kp} K_p(\tau,\sigma)=\lim_{n\rightarrow\infty}\frac{z_{\gamma_p,n}^{(p)}}{M_{n}(\tau, \sigma)}. \end{equation} Let us denote $$ \tau_0=\frac{ 1-\gamma_1}{ 1-\gamma_2}, \ \sigma_0=\frac{ 1-\gamma_1}{ 1-\gamma_3}, \ a=\frac{ 1-\gamma_2}{ 1-\gamma_3}. $$ \begin{figure}[!ht] \begin{subfigure}[b]{0.33\linewidth} \centering \input{fig1a.tikz} \caption{}\label{fig1a} \end{subfigure} \begin{subfigure}[b]{0.33\linewidth} \centering \input{fig1b.tikz} \caption{}\label{fig1b} \end{subfigure} \begin{subfigure}[b]{0.33\linewidth} \centering \input{fig1c.tikz} \caption{}\label{fig1c} \end{subfigure} \caption{} \end{figure} We have $ \sigma_0>\tau_0>1, \ \ a>1,$ and we define the following sets (see Figure \ref{fig1a}) in the first quadrant of the plane $(\tau, \sigma)$: \begin{eqnarray}\label{sets1} A_1 &=&\left \{(\tau, \sigma)\in (0, \infty)^2: \tau<\tau_0, \ \sigma<\sigma_0 \right \}, \\ \nonumber A_2 &=&\left \{(\tau, \sigma)\in (0, \infty)^2: \tau>\tau_0, \ \sigma<a\tau \right \}, \\ \nonumber A_3 &=&\left \{(\tau, \sigma)\in (0, \infty)^2: \sigma>\sigma_0, \ \sigma>a\tau \right \}. \end{eqnarray} It is easy to see that the function $K_p$ is equal to $1$ on the set $A_p,$ while on the borders between two sets two corresponding functions are equal to one, for example, on the interval $\{ \tau=\tau_0, \ \sigma<\sigma_0\}$ we have $K_1(\tau,\sigma)=K_2(\tau,\sigma)=1$. At the point $(\tau_0, \sigma_0)$ all three functions $K_p$ are equal to $1$. Let us denote by ${\tilde S}_{n}({\bf t})$ the sum $S_{{\bf n}}({\bf t})$ with ${\bf n}=(n, n^\tau, n^\sigma)$ and similarly ${\tilde f}_{n}^{(4)}({\bf u})$ and ${\tilde J}_{n}$. We can rewrite (\ref{fbn}) as \begin{equation}\label{fn1} \frac{{\tilde f}_{n}^{(4)}({\bf u})}{M_{n}^\alpha(\tau, \sigma)}=\abs{\sum_{l=1}^{d}x_l\left( \sum_{p=1}^3B_p({\bf n}, {\bf u}, l)\frac{z_{\gamma_p,n}^{(p)}}{M_{n}(\tau, \sigma)}\frac{ U_{t^{(l)}_p,\gamma_p}(\sv{n_pu_p},n_p)}{z_{\gamma_p,n}^{(p)}} \right) }^\alpha. \end{equation} Having point-wise convergence (\ref{lim5}), in order to apply the Lebesgue dominated convergence theorem we must find majorizing function, but since this is done exactly in the same way as in Example 1, we skip this step. Thus we get \begin{equation}\label{Jn1} \frac{{\tilde J}_{n}({\bf u})}{n^{1+\tau+\sigma}M_{n}^\alpha(\tau, \sigma)} \to \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f({\bf u}, \tau, \sigma){\rm d} u_1{\rm d} u_2{\rm d} u_3, \end{equation} where $$ f({\bf u}, \tau, \sigma)=\abs{\sum_{l=1}^{d}x_l\left( \sum_{p=1}^3 B_p({\bf u}, l)K_p(\tau,\sigma)H_{\gamma_p}(u_p,t^{(l)}_p) \right) }^\alpha, $$ $$ B_p({\bf u}, l)=\ind{\{0\leq u_r \leq t^{(l)}_r, \ 0\leq u_s \leq t^{(l)}_s, r\ne s, r, s \in Q_p\}}. $$ \begin{prop}\label{prop6} Suppose that we have the sum ${\tilde S}_{n}({\bf t})$ of a linear r.f. with the filter (\ref{ci3d}). 
If $1/\alpha<\gamma_1<\gamma_2<\gamma_3<1$, then, as $n \to \infty$, \begin{equation}\label{lim6} (n^{1+\tau+\sigma})^{-1/\alpha}M_{n}^{-1}{\tilde S}_{n}({\bf t})\stackrel{f.d.d.}{\longrightarrow} W({\bf t}, \tau, \sigma) \end{equation} where \begin{equation}\label{lim7} W({\bf t}, \tau, \sigma)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left(\sum_{p=1}^3 B_p({\bf u}, {\bf t})K_p(\tau,\sigma)H_{\gamma_p}(u_p,t_p) \right)M({\rm d} u_1,{\rm d} u_2, {\rm d} u_3) \end{equation} and $$ B_p({\bf u}, {\bf t})=\ind{\{0\leq u_r \leq t_r, \ 0\leq u_s \leq t_s, r\ne s, r, s \in Q_p\}}. $$ \end{prop} Due to the scale transition we have three types of limit processes: on each set $A_i$ in the expression (\ref{lim7}) only $K_i(\tau, \sigma)=1$, while two other functions $K_j, j\in Q_i$ are equal to zero; on boundaries between any two of these sets corresponding two functions are equal to $1$; finally, all three functions are equal at the point $(\tau_0, \sigma_0)$: $K_i(\tau_0, \sigma_0)=1, \ i=1, 2, 3$. We see that in the case $d=3$ it is difficult to use the terms of well-balanced and unbalanced scaling limits introduced in Definition \ref{sctran}, therefore in \cite{Surg} there was proposed three terms for limit r.f.: {\it well-balanced, partially unbalanced}, and {\it completely unbalanced}. In our context, using this terminology, we have well-balanced limit at the point $(\tau_0, \sigma_0)$, partially unbalanced limits on boundaries between any two of sets $A_i, \ i=1, 2, 3$, and completely unbalanced limits on sets $A_i, \ i=1, 2, 3$. In this example we took all parameters $\gamma_i <1, \ i=1, 2, 3,$ which means that we consider long-range dependence along all three axes. We still have the scale transition effect if along one axis we have short-range dependence. Let us consider the case $1/\alpha<\gamma_1<\gamma_2<1<\gamma_3$, then the normalization quantities $z_{\gamma_p,n}^{(p)}, \ p=1, 2,$ remain the same, while $z_{\gamma_3,n}^{(3)}\equiv 1$. Therefore, we get $M_{n}(\tau, \sigma)=\max\{n^{1-\gamma_1}, n^{\tau (1-\gamma_2)}\}$, and defining the sets \begin{eqnarray}\label{set3} A_4 &=&\left \{(\tau, \sigma)\in (0, \infty)^2: \tau<\tau_0, \right \}, \\ \nonumber A_5 &=&\left \{(\tau, \sigma)\in (0, \infty)^2: \tau>\tau_0, \right \}, \\ \nonumber \end{eqnarray} we get $K_3(\tau,\sigma)\equiv 0, K_1(\tau,\sigma)=1$ on the set $A_4$ and $K_2(\tau,\sigma)=1$ on the set $A_5$. On the half-line (the border between $A_4$ and $A_5$) $\tau=\tau_0, \sigma>0$ both functions $K_1$ and $K_2$ equal to $1$. Similarly, in the case $1/\alpha<\gamma_1<\gamma_3<1<\gamma_2$ we should get the sets \begin{eqnarray}\label{set4} A_6 &=&\left \{(\tau, \sigma)\in (0, \infty)^2: \sigma<\sigma_0, \right \}, \\ \nonumber A_7 &=&\left \{(\tau, \sigma)\in (0, \infty)^2: \sigma>\sigma_0, \right \}, \\ \nonumber \end{eqnarray} and $K_2(\tau,\sigma)\equiv 0, K_1(\tau,\sigma)=1$ on the set $A_6$ and $K_3(\tau,\sigma)=1$ on the set $A_7$. \bigskip \subsection{Comparison of Example 4 with results from \cite{Surg}} We can compare our Example 4 with the results from \cite{Surg}, and to this aim we formulate main results from \cite{Surg}, using the notation, most close to our notation. In \cite{Surg} a linear r.f. 
$$ X_{\bf k}=\sum_{{\bf i}\in\mathbb{Z} ^3}c_{{\bf k}-{\bf i}}\varepsilon_{{\bf i}}, \ {\bf k} \in \mathbb{Z} ^3, $$ with a filter \begin{equation}\label{DSfilter} c_{{\bf i}}=\frac{g({\bf i})}{(\sum_{j=1}^3a_j|i_j|_+^{\gamma_j/\nu})^\nu}, \ {\bf i} \in \mathbb{Z} ^3, \end{equation} is considered. Here $\varepsilon_{{\bf i}}, \ {\bf i} \in \mathbb{Z} ^3$ are i.i.d. random variables with mean zero and unit variance (this would correspond to our case $\alpha=2$), $|a|_+=\max (|a|, 1)$, $g({\bf i}), \ {\bf i} \in \mathbb{Z} ^3,$ is bounded function and $\lim_{|{\bf i}|\to \infty} g({\bf i}):=g_\infty \in (0. \infty)$, $a_j>0, j=1, 2, 3, \ \nu>0,$ and parameters $\gamma_j$ satisfy the following condition $$ 1<Q:=\sum_{j=1}^3\frac{1}{\gamma_j}<2. $$ This condition guarantees that $$ \sum_{{\bf i}\in\mathbb{Z} ^3}|c_{{\bf i}}|^2<\infty \quad {\rm and} \quad \sum_{{\bf i}\in\mathbb{Z} ^3}|c_{{\bf i}}|=\infty, $$ i.e., r.f. is a stationary with finite variance and with long-range dependence. Let us note that coefficients of the filter along axes decay at the same rates as in our Example 4, namely, if we denote ${\bf i}^{(j)}$ vector ${\bf i}$ with $i_k=0$ for $k\ne j$, then it is easy to see that $c_{{\bf i}^{(j)}}=O(|i_j|_+^{-\gamma_j})$. If it would be possible to take the function $g$ equal to zero at all points of $\mathbb{Z} ^3$ except axes, then our Example 4 would follow from the results in \cite{Surg}. But this is not possible due to the requirement that the limits of $g({\bf t})$, as $|{\bf t}|\to \infty$ must be the same and positive. As a matter of fact, in the proofs in \cite{Surg} it is assumed without loss of generality that $g({\bf t})\equiv 1$. The values of the r.f. $X_{\bf k}$ are summed over rectangles as in (\ref{field}) with $n_i=n^{q_i}, \ i=1, 2, 3,$ but essential parameters are the ratios between $q_i$, therefore, in order to get the same notation as our one, we can set $q_1=1, q_2=\tau, q_3=\sigma$. With this notation the balance conditions in \cite{Surg} are expressed by means of $\tau_0=\gamma_1/\gamma_2$ and $\sigma_0=\gamma_1/\gamma_3$, and in Fig 1 in \cite{Surg} there is given the partition of the quadrant $\{\tau>0, \sigma>0 \}$ by means of balance conditions. This picture has the same structure as our Fig 1, if we take all sets $A_1 - A_7$. But this is the only similarity between results in \cite{Surg} and our Example 4. The regions of parameters $\gamma_i$, the values of $\tau_0$ and $\sigma_0$, and limit r.f.'s in \cite{Surg} and Example 4 are different. This can be explained by the fact that in \cite{Surg} the filter of a r.f. under consideration is "three-dimensional" (in the sense that coefficients are non-zero over all $\mathbb{Z} ^3$) while in our examples filters are "one-dimensional" (non-zero only on axes or some lines). Such simple filters allow to understand better the mechanism of scaling transition and motivated more general definitions of scale transition comparing with original definition given in \cite{Puplinskaite2}. \subsection{Example 5} In Example 2 (the case $d=2$) we had shown, that long-range dependence connected with the requirement that both exponents $\gamma_i<1, i=1, 2,$ is not necessary condition for the scaling transition, and the scale transition can be observed in the case of negative dependence. The same can be shown in the case $d=3$ and to do this one needs to make small change in the filter (\ref{ci3d}). 
Namely, we take the same three sequences of positive numbers $a_j(i), \ i\ge 1, \ j=1, 2, 3,$ but now we assume $\max (1, 1/\alpha)<\gamma_1<\gamma_2<\gamma_3<1+1/\alpha$ and we define $c_{(0,0,0)}=a_1(0)+a_2(0)+a_3(0)$ with some numbers $a_j(0)$ chosen in such a way that the following conditions \begin{equation}\label{sum3dZero} \sum_{{\bf i}\ge {\bf 0}}c_{{\bf i}}=0 \end{equation} and $\sum_{i=0}^\infty a_j(i)=0, \ j=1, 2, 3,$ are satisfied. It is possible to show that in this case we get a picture very similar to that of Example 4, but since the proofs are very similar to those used in Examples 2 and 4, we shall give only the final result. Now the point $(\tau_1, \sigma_1),$ which determines the scaling transition, will be $$ \tau_1=\frac{ \gamma_1-1}{ \gamma_2-1}, \ \sigma_1=\frac{\gamma_1-1}{\gamma_3-1}, \ a_1=\frac{ \gamma_2-1}{\gamma_3-1} $$ and these quantities satisfy $0< \sigma_1<\tau_1<1, \ \ a_1<1.$ As in Example 4 we get three sets (see Figure \ref{fig1b}) in the first quadrant of the plane $(\tau, \sigma)$, in which we have different limit fields: \begin{eqnarray}\label{sets2} {\tilde A}_1 &=&\left \{(\tau, \sigma)\in (0, \infty)^2: \tau>\tau_1, \ \sigma>\sigma_1 \right \}, \\ \nonumber {\tilde A}_2 &=&\left \{(\tau, \sigma)\in (0, \infty)^2: \tau<\tau_1, \ \sigma>a_1\tau \right \}, \\ \nonumber {\tilde A}_3 &=&\left \{(\tau, \sigma)\in (0, \infty)^2: \sigma<\sigma_1, \ \sigma<a_1\tau \right \}. \end{eqnarray} Thus, we get a picture of the scale transition which is in a sense inverse to the picture of Example 4, and the structure of the limit r.f. in this case is very similar to (\ref{lim7}). \subsection{Example 6} Considering the case $d=2$ (Examples 1-3) we saw that filters with coefficients either on the axes or on the diagonal give us one point of the scaling transition, while combining the diagonal with an axis we got two points of the scaling transition. Therefore, in the case $d=3$, in order to get a more complicated picture of the scaling transition, it is natural to consider the linear r.f. (\ref{field3d}) with the following filter \begin{equation}\label{ci3ddiag} c_{{\bf i}}= \begin{cases} a_{1}(i_1), \text{ if } i_1\geq 1, i_2=i_3=0,\\ a_{2}(i_2), \text{ if } i_2\geq 1, i_1=i_3=0,\\ a_{3}(i_3), \text{ if } i_3\geq 1, i_2=i_1=0,\\ a_{4}(i), \text{ if } i_3=i_2=i_1=i\ge 1, \\ 0, \text{ elsewhere}, \end{cases} \end{equation} where the sequences $a_{p}(i), p=1, 2, 3,$ are as in Example 4 and the fourth sequence is $a_{4}(0)=0, \ a_{4}(i)=(1+i)^{-\gamma_4}, \ 1/\alpha<\gamma_4 <1$. Although it was possible to compare Example 4 with the results for the linear r.f. with the filter (\ref{DSfilter}), since the coefficients of that filter have different decay rates along the axes, its coefficients on the diagonal decay at the same rate as the coefficients on the axis with the slowest rate of decay, and therefore Example 6 cannot be compared with the results from \cite{Surg}. For the moment we do not relate the parameter $\gamma_4$ to the parameters $\gamma_i, \ i=1, 2, 3.$ Since the filter of this example is obtained by combining the filter from Example 4 with coefficients added on the diagonal, as in Example 3 (in the case $d=2$) it is easy to write the ch.f. for $A_{{\bf n}}^{-1}\sum_{l=1}^{d}x_lS_{{\bf n}}({\bf t}^{(l)})$ and to get formulae analogous to (\ref{Jbn}), (\ref{fbn}).
Thus we need to investigate the quantity $I_{{\bf n}}:=A_{{\bf n}}^{-\alpha}J_{{\bf n}},$ where \begin{equation}\label{Jbn2} J_{{\bf n}}=n_1n_2n_3 \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{{\bf n}}^{(4)}({\bf u}){\rm d} u_1{\rm d} u_2{\rm d} u_3, \end{equation} and \begin{equation}\label{fbn2} f_{{\bf n}}^{(5)}({\bf u})=\abs{\sum_{l=1}^{d}x_l\left( \sum_{p=1}^3 B_p({\bf n}, {\bf u}, l) U_{t^{(l)}_p,\gamma_p}(\sv{n_pu_p},n_p)+\prod_{j=1}^3\ind{\{0\leq \sv{n_ju_j}\leq \sv{n_j t^{(l)}_j}\}}Z_{{\bf t}^{(l)}, \gamma_4}(\sv{{\bf n}{\bf u}}, {\bf n}) \right) }^\alpha, \end{equation} \begin{equation}\label{zdiag3} Z_{{\bf t}^{(l)}, \gamma_4}({\bf i}, {\bf n}):=\sum_{k=\max\left( 0,-i_1,-i_2, -i_3 \right)}^{\min\left( \sv{n_jt^{(l)}_j}-i_j, j=1, 2, 3, \right)}a_{4}(k). \end{equation} The expression of $f_{{\bf n}}^{(4)}({\bf u})$ is similar to that given in (\ref{fbn}) (differs by one additional term), quantities, present in (\ref{fbn2}) are defined in (\ref{Bpbn}), (\ref{Sngamma}). As in examples above, we set $n_1=n, \ n_2=n^\tau,\ n_3=n^\sigma$ and $1/\alpha<\gamma_1<\gamma_2<\gamma_3<1$. Now we must find the right normalization for four terms present in (\ref{fbn2}), but for three terms with $U_{t^{(l)}_p,\gamma_p}(\sv{n_pu_p},n_p)$ normalization is obtained in Example 4 and is given by quantities $z_{\gamma_p,n}^{(p)}$ (see (\ref{lim5})) and by sets $A_i, i=1, 2, 3,$ (see (\ref{sets1})). Namely, from Example 4 we have that for $(\tau, \sigma)\in A_p$ the normalization for the first three terms in (\ref{fbn2}) is $z_{\gamma_p,n}^{(p)}=n^{s_p}, \ p=1, 2, 3,$ where \begin{equation}\label{normalization1} s_1=1-\gamma_1, \ s_2=\tau(1-\gamma_2), \ s_3=\sigma(1-\gamma_3). \end{equation} Now let us consider normalization for the fourth term in (\ref{fbn2}). It is easy to see that the normalization for $Z_{{\bf t}^{(l)}, \gamma_4}({\bf i}, {\bf n})$ depends on the relations between parameters $\tau$ and $\sigma$, since for the investigation of the growth of this sum we must compare quantities $n_i, \ i=1, 2, 3.$ We shall skip the procedure of this comparison and provide the final result. As in Example 4, the growth of the sum in (\ref{zdiag3}) is different on three sets of the possible values of $(\tau, \sigma)$, and the division of the area of possible values of parameters $(\tau, \sigma) \in (0, \infty)^2$ into three sets (see Figure \ref{fig1c}), is by the point $(\tau_1, \sigma_1)$, with $\tau_1=\sigma_1=1$: \begin{eqnarray}\label{setslyg} A_{4, 1} &=& \{(\tau, \sigma)\in (0, \infty)^2: \tau>1, \ \sigma>1 \}, \\ \nonumber A_{4, 2} &=& \{(\tau, \sigma)\in (0, \infty)^2: \sigma>\tau, \ \tau<1 \}, \\ \nonumber A_{4, 3} &=& \{(\tau, \sigma)\in (0, \infty)^2: \sigma<\tau, \ \sigma<1 \}. \end{eqnarray} Normalization for (the growth of) $Z_{{\bf t}^{(l)}, \gamma_4}({\bf i}, {\bf n})$, according to our assumption, is a function of $n$, and on the set $A_{4, i}$, is $z_{\gamma_4,n}^{(4, i)}=n^{s_{4, i}}, i=1, 2, 3,$ where \begin{equation}\label{normalization2} s_{4, 1}=1-\gamma_4, s_{4, 2}=\tau(1-\gamma_4), s_{4, 3}=\sigma(1-\gamma_4). \end{equation} It is clear that, in order to find normalization for the function (\ref{fbn2}), we must find $M(n; \tau, \sigma)=\max \{z_{\gamma_p,n}^{(p)}, \ p=1, 2, 3, z_{\gamma_4,n}^{(4, i)}, \ i=1, 2, 3,\}$. For this aim we consider the intersections of sets $A_p$ and $A_{4, j}, $ with all possible combinations of $1\le p, j\le 3$. 
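To see concretely what has to be compared, take for illustration $\alpha=3/2$ and $\gamma_1=0.75$, $\gamma_2=0.8$, $\gamma_3=0.85$, $\gamma_4=0.9$ (these values are used only in this illustration). Then $\tau_0=1.25$, $\sigma_0=5/3$, and at the sample point $(\tau, \sigma)=(1.2, 1.1)$ we have $(\tau, \sigma)\in A_1\cap A_{4, 1}$, so the two competing exponents are
$$
s_1=1-\gamma_1=0.25 \qquad \text{and} \qquad s_{4, 1}=1-\gamma_4=0.1,
$$
hence at this point $M(n; \tau, \sigma)=n^{0.25}$, i.e., the normalization coming from the axes prevails.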
Since the point $(\tau_0, \sigma_0)$ is defined by the parameters $\gamma_i$ (and we fixed the relation $1/\alpha<\gamma_1<\gamma_2<\gamma_3<1$) and $(\tau_1, \sigma_1)$ is independent of all $\gamma_i$, we have seven sets (two intersections $A_2\cap A_{4, 2}, \ A_3\cap A_{4, 3}$ are empty, since $\tau_0>\tau_1=1, \ \sigma_0>\sigma_1=1$): $$ B_1=A_2\cap A_{4, 1}, \ B_2=A_3\cap A_{4, 1}, \ B_3=A_3\cap A_{4, 2}, \ B_4=A_1\cap A_{4, 2}, $$ $$ B_5=A_1\cap A_{4, 1}, \ B_6=A_1\cap A_{4, 3}, \ B_7=A_2\cap A_{4, 3}. $$ Now in each set $B_i$ we must compare only two exponents: if $ B_i=A_p\cap A_{4, j}$, we must compare $s_p$ and $ s_{4, j}$. So far the choice of the parameter $\gamma_4$ was arbitrary in the interval $(1/\alpha, 1)$, in which all three $\gamma_i, i=1, 2, 3,$ are located. It turns out that the position of $\gamma_4$ within this interval is important. Let us consider the case $1/\alpha<\gamma_1<\gamma_2<\gamma_3<\gamma_4<1$, i.e., the sequence on the diagonal is decreasing most rapidly. Then easy calculations show that in this case in all sets $B_i$ the prevailing (i.e., bigger) exponents are $s_j$: in the sets $B_4, B_5, B_6$ the prevailing one is $s_1$, in $B_1, B_7$ it is $s_2$, and in $B_2$ and $B_3$ the exponent $s_3$ is bigger than $s_{4, 1}$ and $s_{4, 2}$, respectively. Since $A_1=B_4\cup B_5\cup B_6, \ A_2=B_1 \cup B_7$ and $A_3=B_2\cup B_3$, we see that the scaling transition in this case, where the sequence on the diagonal, compared with the sequences on the axes, is decreasing most rapidly, is completely determined by the sets $A_i, \ i=1, 2, 3,$ and the filter coefficients on the axes. Somewhat surprisingly for us, the same picture of the scaling transition, i.e., the one defined by the sets $A_i$ and exponents $s_i$, $i=1, 2, 3,$ is obtained also in the two cases $1/\alpha<\gamma_1<\gamma_2<\gamma_4<\gamma_3<1$ and $1/\alpha<\gamma_1<\gamma_4<\gamma_2<\gamma_3<1$. This is obtained in the same way, by comparing the exponents of the normalizing constants on each of the sets $B_i, \ 1\le i \le 7$. Let us consider the last case, where the sequence on the diagonal is decreasing most slowly, i.e., $1/\alpha <\gamma_4<\gamma_1<\gamma_2<\gamma_3<1$. This case gives us the picture we are looking for: the scaling transition is defined by all sets $A_i$ and $A_{4, i}, \ i=1, 2, 3,$ and even some sets $B_j$ are divided by new lines. We provide the complete analysis of this case. We recall that we had introduced the notations $\tau_0=(1-\gamma_1)/(1-\gamma_2), \sigma_0=(1-\gamma_1)/(1-\gamma_3), \ \tau_1=\sigma_1=1;$ additionally we denote $\tau_2=(1-\gamma_4)/(1-\gamma_2), \ \tau_3=(1-\gamma_1)/(1-\gamma_4), \ \sigma_2=(1-\gamma_4)/(1-\gamma_3), \ \sigma_3=(1-\gamma_1)/(1-\gamma_4)=\tau_3. $ We shall use, without special mention, the inequalities $$ 1-\frac{1}{\alpha}>1-\gamma_4>1-\gamma_1>1-\gamma_2>1-\gamma_3>0. $$ Let us consider the set $B_1.$ We must compare $\tau(1-\gamma_2)$ and $1-\gamma_4$. In the set $B_1$ we have $\tau>\tau_0>1$ and, since $1-\gamma_4>1-\gamma_1$, for $\tau_0<\tau<\tau_2$ we get that $1-\gamma_4>\tau(1-\gamma_2)$, while if $\tau>\tau_2$, then $1-\gamma_4<\tau(1-\gamma_2)$. Thus, we get that the set $B_1$ is divided into two parts by the vertical line going through the point $(\tau_2, 0)$. In the set $B_2$ we must compare $\sigma(1-\gamma_3)$ and $1-\gamma_4$. In this set we have $\sigma>\sigma_0>1$ and, again using $1-\gamma_4>1-\gamma_1$, we get that $1-\gamma_4>\sigma(1-\gamma_3)$ for $\sigma_0<\sigma<\sigma_2$, while if $\sigma>\sigma_2$, then $1-\gamma_4<\sigma(1-\gamma_3)$.
We get that the set $B_2$ is divided into two parts by the horizontal line going through the point $(0, \sigma_2).$ In the set $B_3$ we must compare $\sigma(1-\gamma_3)$ and $\tau(1-\gamma_4)$. In this set we have $\sigma>\tau(1-\gamma_2)/(1-\gamma_3)$ and, since $1-\gamma_4>1-\gamma_2$, we get that $\tau(1-\gamma_4)>\sigma(1-\gamma_3)$ for $\tau(1-\gamma_2)/(1-\gamma_3)<\sigma<\sigma_2\tau$, while if $\sigma>\sigma_2\tau$, then $\tau(1-\gamma_4)<\sigma(1-\gamma_3)$. We get that the set $B_3$ is divided into two parts by the line $\sigma=\sigma_2 \tau.$ In the set $B_4$ we must compare $1-\gamma_1$ and $\tau(1-\gamma_4)$. It is easy to see that if $\tau_3<\tau<1,$ then $\tau(1-\gamma_4)>1-\gamma_1$, while for $\tau<\tau_3$ the opposite inequality holds; this means that the set $B_4$ is divided into two parts by the vertical line going through the point $(\tau_3, 0)$. Also, from our assumption it easily follows that in the set $B_5$ we have $1-\gamma_4>1-\gamma_1$. In the set $B_6$ we must compare $\sigma(1-\gamma_4)$ and $1-\gamma_1$. It is not difficult to verify that this set is divided into two parts by the horizontal line going through the point $(0, \sigma_3),$ namely, if $\sigma_3<\sigma<1$ then $\sigma(1-\gamma_4)>1-\gamma_1$, while if $\sigma<\sigma_3$ then $\sigma(1-\gamma_4)<1-\gamma_1$. Finally, in the set $B_7$, comparing $\sigma(1-\gamma_4)$ and $\tau(1-\gamma_2)$, we get that this set is divided into two parts by the line $\sigma=\tau\tau_2^{-1}$, and if $\tau\tau_2^{-1}<\sigma<1$ then $\sigma(1-\gamma_4)>\tau(1-\gamma_2)$, while if $\sigma<\tau\tau_2^{-1}$ then $\sigma(1-\gamma_4)<\tau(1-\gamma_2)$. \begin{figure}[!ht] \centering \input{fig24.tikz} \caption{}\label{fig4} \end{figure} Collecting all these facts we have the following picture (see Figure \ref{fig4}). Let us denote the points $A=(\tau_3, \sigma_3),\ B=(\tau_3, \sigma_0),\ C=(\tau_1, \sigma_2),\ D=(\tau_2, \sigma_2),\ E=(\tau_2, \sigma_0),\ F=(\tau_0, \sigma_3),\ G=(\tau_0, \sigma_0),\ H=(0, \sigma_0),\ K=(\tau_0, 0), \ O=(0, 0)$; then inside the set $ABCDEF$ the sets $A_{4, j}$ and exponents $ s_{4, j}$ are dominating: $ s_{4, 1}$ is dominating in $GCDE$, $ s_{4, 2}$ is dominating in $ABCG$, and $ s_{4, 3}$ is dominating in $AGEF$. Outside of the set $ABCDEF$ the sets $A_{ j}$ and exponents $ s_{ j}$ are dominating: above the broken line $HBCD$ the exponent $s_3$ is the biggest, to the right of the broken line $DEFK$ the exponent $s_2$ is dominating, and in the set $HBAFKO$ the exponent $s_1$ is the biggest. After this analysis we have the normalizing sequence $A_n=n^{(1+\tau+\sigma)/\alpha}M(n; \tau, \sigma)$. Then the final step, finding the limit distributions, is carried out in the same way as in Example 4, introducing a normalization for each term in (\ref{fbn2}) and functions of the type (\ref{Kp}). We shall get that inside each set in Figure \ref{fig4} only one function $K_p$ will be equal to $1$, on the lines which serve as borders we shall have two functions equal to one, and at the points $A, B, C, D, E, F, G$ we shall have three functions equal to $1$, since at each of these points three sets with different prevailing exponents meet. \subsection{Example 7} In the case $d=3$ we consider sums (\ref{sum13d}) indexed with three-dimensional indices ${\bf n} =(n_1, n_2, n_3)$. Clearly, we have the same first possibility as in the case $d=2$, namely, to consider limit theorems assuming only ${\bf n} \to \infty$ (we recall that this means that $\min (n_i, \ i=1, 2, 3) \to \infty$), but for the second possibility we have more freedom.
In Examples 4-6 we assumed that $n_2$ and $n_3$ are functions of $n_1$, more precisely, $n_2=n_1^\tau, n_3=n_1^\sigma$. But in the relation $\min (n_i, \ i=1, 2, 3) \to \infty$ we may assume that $(n_1, n_2) \to \infty$ and $n_3 = f(n_1, n_2)$ with some function $f: \mathbb{Z} _+^2 \to \mathbb{Z} _+$ such that $f(n_1, n_2) \to \infty$, as $(n_1, n_2) \to \infty$. One can hope that for some linear fields and some functions $f$ we can have the scaling transition. The most natural functions to begin with are $[(n_1n_2)^\tau], \ [n_1^\tau n_2^\sigma], \ [n_1^\tau]+[n_2^\sigma]$, where $\tau$ and $\sigma$ are positive numbers. We shall take the simple function $f(n_1, n_2)=[n_1^\tau n_2^\sigma]$ and we shall look for a filter of a linear field to get the scaling transition. Since we know that in the case where the filter coefficients are expressed as a product of two sequences ($c_{i, j}=a_ib_j$ in the case $d=2$) there is no scaling transition, we can try the following filter. We choose three sequences $a_i(j)=(1+j)^{-\gamma_i}, \ j\ge 1, a_i(0)=0, \ i=1, 2,3,$ and define \begin{equation}\label{ci3dfakt} c_{{\bf i}}= \begin{cases} a_1(i_1)a_2(i_2), \text{ if } i_3=0,\\ a_3(i_3), \text{ if } i_1=i_2= 0, i_3\ge 1,\\ 0, \text{ elsewhere}. \end{cases} \end{equation} This can be written as \begin{equation*} c_{{\bf i}}=a_1(i_1)a_2(i_2)\ind{i_3=0}+a_3(i_3)\ind{i_1=i_2=0}. \end{equation*} Again we can start from formulae (\ref{generalJn})-(\ref{generalfbn}). Taking into account the expression of the filter (\ref{ci3dfakt}), after some transformations we can obtain \begin{align}\label{fbn3d fakt} f_{{\bf n}}^{(6)}\left({\bf u},{\bf t}^{(j)}\right) &=\sum_{{\bf k}\in\mathbb{Z} ^d} c_{{\bf k}}\ind{[(-\sv{{\bf n}{\bf u}})\vee {\bf 0}\leq{\bf k}\leq{\bf n}{\bf t}^{(j)}-\sv{{\bf n}{\bf u}}]}\\ \nonumber &= \sum_{i_1, i_2=0}^\infty a_1(i_1)a_2(i_2)\ind{[(-\sv{n_q u_q})\vee 0\leq i_q\leq n_q t_q^{(j)}-\sv{n_q u_q}, q=1, 2]}\ind{[-\sv{n_3 u_3} \leq 0 \leq n_3 t_3^{(j)}-\sv{n_3 u_3}]}\\ \nonumber &+\sum_{i_3=1}^\infty a_3(i_3)\ind{[(-\sv{n_q u_q})\leq 0\leq n_q t_q^{(j)}-\sv{n_q u_q}, q=1, 2]}\ind{[-n_3 u_3\vee 0 \leq 0 \leq n_3 t_3^{(j)}-{n_3 u_3}+n_3]} \end{align} Since the first double sum can be written as a product of two sums and the second sum is the same as considered in previous examples, using the notation (\ref{Sngamma}), we can write \begin{align}\label{fbnex8} f_{{\bf n}}^{(6)}\left({\bf u},{\bf t}^{(j)}\right) &=U_{t_1^{(j)},\gamma_1}(\sv{n_1u_1},n_1)U_{t_2^{(j)},\gamma_2}(\sv{n_2u_2},n_2)\ind{[0\leq\sv{n_3 u_3} \leq n_3 t_3^{(j)}]}\\ \nonumber &+ U_{t_3^{(j)},\gamma_3}(\sv{n_3u_3},n_3)\ind{[0\leq\sv{n_i u_i} \leq n_i t_i^{(j)}, \ i=1, 2]} \end{align} From Proposition \ref{prop1} we know that the right normalization for the first term is $z_{\gamma_1,n_1}z_{\gamma_2,n_2}$, while $z_{\gamma_3,n_3}$ is the normalization sequence for the second term. Therefore, we must investigate the quantity \begin{equation}\label{Mbn} M_{{\bf n}}=M_{{\bf n}}(\gamma_1, \gamma_2, \gamma_3)= \max (z_{\gamma_1,n_1}z_{\gamma_2,n_2}, \ z_{\gamma_3,n_3}), \end{equation} and this quantity depends on the parameters $\gamma_i, i=1, 2, 3,$ and on the relation between the coordinates of the vector ${\bf n}.$ We assume that $1/\alpha<\gamma_3<\gamma_1<\gamma_2<1$ and $n_3=\sv{n_1^\tau n_2^\sigma}, $ where $\tau, \sigma>0.$ Thus we must investigate the quantity \begin{equation}\label{Mn1n2} M_{n_1, n_2}:=\max \left (n_1^{1-\gamma_1}n_2^{1-\gamma_2}, n_1^{\tau(1-\gamma_3)}n_2^{\sigma(1-\gamma_3)}\right ). 
\end{equation} Let us define \begin{equation}\label{tau0sigma0} \tau_0=\frac{1-\gamma_1}{1-\gamma_3}, \ \sigma_0=\frac{1-\gamma_2}{1-\gamma_3}, \ 0<\sigma_0<\tau_0<1, \end{equation} and four sets ${\bar A}_j, 1\le j\le 4$ in the first quadrant of the $(\tau, \sigma)$ plane: $$ {\bar A}_1=\{\tau\ge\tau_0, \ \sigma\ge\sigma_0 \}, \quad {\bar A}_2=\{0<\tau\le\tau_0, 0<\sigma\le\sigma_0 \}, $$ $$ {\bar A}_3=\{0<\tau<\tau_0, \ \sigma>\sigma_0 \}, \quad {\bar A}_4=\{\tau>\tau_0, \ 0<\sigma<\sigma_0 \}. $$ It is easy to see that in the sets ${\bar A}_1$ and ${\bar A}_2$ one of the two terms in (\ref{Mn1n2}) is prevailing and we have \begin{equation}\label{Mn1n21} M_{n_1, n_2}= \begin{cases} n_1^{\tau(1-\gamma_3)}n_2^{\sigma(1-\gamma_3)}, \text{if} \ (\tau, \sigma)\in {\bar A}_1, \\ n_1^{1-\gamma_1}n_2^{1-\gamma_2}, \text{if} \ (\tau, \sigma)\in {\bar A}_2. \end{cases} \end{equation} In the sets ${\bar A}_3$ and ${\bar A}_4$ the situation is different and the maximum in (\ref{Mn1n2}) depends on the way in which $(n_1, n_2)\to \infty$. We assume that $n_1=n, \ n_2=\sv{n^\rho}, n_3=\sv{n^{\tau+\rho\sigma}}$ and let us consider $(\tau, \sigma)\in {\bar A}_3$. Then we must find the quantity \begin{equation}\label{MnA3} M_{n}:=\max \left (n^{1-\gamma_1+\rho(1-\gamma_2)}, n^{(\tau+\rho\sigma)(1-\gamma_3)}\right ). \end{equation} If we denote \begin{equation}\label{rho0} \rho_0=\frac{\tau_0-\tau}{\sigma-\sigma_0}, \end{equation} which is the value of $\rho$ for which the two exponents in (\ref{MnA3}) coincide (equating $1-\gamma_1+\rho(1-\gamma_2)$ and $(\tau+\rho\sigma)(1-\gamma_3)$ and dividing by $1-\gamma_3$ gives $\tau_0+\rho\sigma_0=\tau+\rho\sigma$), then it is easy to get that for $(\tau, \sigma)\in {\bar A}_3$ \begin{equation}\label{MnA31} M_{n}= \begin{cases} n^{1-\gamma_1+\rho(1-\gamma_2)}, \text{if} \ 0<\rho<\rho_0, \\ n^{(\tau+\rho\sigma)(1-\gamma_3)}, \text{if} \ \rho>\rho_0. \end{cases} \end{equation} Considering $(\tau, \sigma)\in {\bar A}_4$, under the same assumption $n_1=n, \ n_2=\sv{n^\rho}, n_3=\sv{n^{\tau+\rho\sigma}}$, and using the same notation (\ref{rho0}) (note that in both sets ${\bar A}_3$ and ${\bar A}_4$ $0<\rho_0<\infty$) we get \begin{equation}\label{MnA4} M_{n}= \begin{cases} n^{(\tau+\rho\sigma)(1-\gamma_3)}, \text{if} \ 0<\rho<\rho_0, \\ n^{1-\gamma_1+\rho(1-\gamma_2)}, \text{if} \ \rho>\rho_0. \end{cases} \end{equation} Thus, we obtain quite complicated behavior of the quantity (\ref{Mbn}). Assuming that $n_3=n_1^\tau n_2^\sigma, $ we get four sets of parameters $\tau, \sigma$, and in the sets ${\bar A}_1$ and ${\bar A}_2$ the growth of the coordinates $n_1$ and $n_2$ can be arbitrary and the quantity (\ref{Mbn}), which in this case becomes $M_{n_1, n_2}$, is given in (\ref{Mn1n21}). In the sets ${\bar A}_3$ and ${\bar A}_4$ the growth of the coordinates $n_1$ and $n_2$ cannot be arbitrary and we must assume $n_1=n, \ n_2=n^\rho, n_3=n^{\tau+\rho\sigma}$, introducing a new parameter $\rho$. Then in each of the sets ${\bar A}_3$ and ${\bar A}_4$ we get a ``boundary'' value $\rho_0$, the quantity (\ref{Mbn}) becomes $M_{n}$, and it is given in (\ref{MnA31}) and (\ref{MnA4}). Having the expressions of the quantity (\ref{Mbn}) and remembering that $A_{\bf n}=(n_1n_2n_3)^{1/\alpha}M_{\bf n}$ (see (\ref{generalJn})) we can write down the normalization sequence $A_{\bf n}$. Here it is appropriate to note that the value $\rho_0=\rho_0(\tau, \sigma)$ is a function of $\tau, \sigma$ and its behavior on the sets ${\bar A}_3$ and ${\bar A}_4$ has the following properties. 
Let us take ${\bar A}_3$; then $\lim_{\tau\to \tau_0}\rho_0(\tau, \sigma)=0 $ for any fixed $\sigma$ and $\lim_{\sigma\to \sigma_0}\rho_0(\tau, \sigma)=\infty $ for any fixed $\tau$; similar relations can be written for ${\bar A}_4.$ To complete the analysis of this example it would be necessary to find the limit distribution, which is quite complicated, but it is obtained in a standard way; therefore, we only mention the main step in finding the limit distribution. We can rewrite (\ref{fbnex8}) as follows \begin{align}\label{fbnex8a} M_{\bf n}^{-1}f_{{\bf n}}\left({\bf u},{\bf t}^{(j)}\right) &=\frac{z_{\gamma_1,n_1}z_{\gamma_2,n_2}}{M_{\bf n}}\frac{U_{t_1^{(j)},\gamma_1}(\sv{n_1u_1},n_1)U_{t_2^{(j)},\gamma_2}(\sv{n_2u_2},n_2)}{z_{\gamma_1,n_1}z_{\gamma_2,n_2}}\ind{[0\leq\sv{n_3 u_3} \leq n_3 t_3^{(j)}]}\\ \nonumber &+\frac{z_{\gamma_3,n_3}}{M_{\bf n}}\frac{U_{t_3^{(j)},\gamma_3}(\sv{n_3u_3},n_3)}{z_{\gamma_3,n_3}}\ind{[0\leq\sv{n_i u_i} \leq n_i t_i^{(j)}, \ i=1, 2]} \end{align} Having this expression we apply Proposition \ref{prop1}, find the limits of the ratios $$ \frac{z_{\gamma_1,n_1}z_{\gamma_2,n_2}}{M_{\bf n}}, \quad \frac{z_{\gamma_3,n_3}}{M_{\bf n}} $$ in the sets ${\bar A}_i, 1\le i\le 4$, using the expression of $M_{\bf n}$ and our assumptions about the relations between the coordinates of ${\bf n}$. At the end of this example let us note that we have considered only one possible variant, $1/\alpha<\gamma_3<\gamma_1<\gamma_2<1$. Clearly, it is possible to consider different locations of the parameters $\gamma_i$, and some variants will exhibit the scaling transition, while others will not. For example, changing only the location of $\gamma_3$ with respect to $\gamma_i, \ i=1, 2,$ we shall change only the location of the point $(\tau_0, \sigma_0)$, namely, in the cases $1/\alpha<\gamma_1<\gamma_3<\gamma_2<1$ and $1/\alpha<\gamma_1<\gamma_2<\gamma_3<1$ we get $0<\sigma_0<1<\tau_0$ and $1<\sigma_0<\tau_0$, respectively. In the case $\gamma_i>1, i=1, 2, \ 1/\alpha<\gamma_3<1$ and $\sum_{i=0}^\infty a_j(i)\ne 0, \ j=1, 2,$ we have $M_{\bf n}=z_{\gamma_3,n_3}, \ A_{\bf n}=(n_1n_2)^{1/\alpha}n_3^{1/\alpha+1-\gamma_3}$ and there is no scaling transition. Using the terminology of \cite{Paul20}, one can say that in this case the r.f. under consideration has zero memory in the directions of the first two axes (the horizontal plane) and positive memory in the vertical direction. \section{Dependence structure of r.f. in the above examples} The dependence structure of a linear r.f. is completely determined by the filter of the r.f. under consideration. Since the filters in the above-presented examples are quite specific, the dependence structure in these examples is also specific, and it can give some additional insight into the phenomenon of the scaling transition. It is easy to see that in Example 1 $X_{k,l}$ and $X_{m,n}$ are independent if $k-m>0, l-n>0$ or $k-m<0, \ l-n<0$; in other cases these two values of the r.f. are dependent. Therefore, calculating covariances (in the case $\alpha=2$) or spectral covariances ($\alpha <2$; see \cite{PaulDam2}, where this quantity and other measures of dependence for r.f. are calculated) of this r.f. we shall get that a big part of these quantities will be zero. Let us take $\alpha=2$ in Proposition \ref{prop3} and let us consider the covariances $\rho (n, m):=\mathbb{E} X_{0,0}X_{n, m}, (n, m)\in \mathbb{Z} ^2$. 
It is easy to calculate that \begin{eqnarray}\label{cov1} \rho(n, 0) & \sim & C|n|^{1-2\gamma_1}, \ \text {as}\ |n| \to \infty, \ \rho(0, m)\sim C|m|^{1-2\gamma_2}, \ \text {as}\ |m| \to \infty, \\ \nonumber \rho (n, m)& = & a_1(|n|)a_2(m) \ \ \text {if} \ n<0, m>0, \\ \nonumber \rho (n, m)& =& a_1(n)a_2(|m|) \ \ \text {if} \ n>0, m<0, \end{eqnarray} and $\rho (n, m)=\rho (-n, -m)=0$ if $n>0, m> 0$. Here and in what follows, $C$ stands for constants, not necessarily the same at different places, which may depend on the parameters $\gamma_i, \ i=1, 2$. Since in this example $1/2<\gamma_i<1$, we have the following relations \begin{equation}\label{cov2} \sum_{(n, m) \in \mathbb{Z} ^2}\rho (n, m)=\infty, \ \sum_{ m \in \mathbb{Z} } \rho(0, m)=\infty, \ \sum_{n \in \mathbb{Z} } \rho(n, 0)=\infty. \end{equation} Usually, for general stationary random fields with mean zero and finite variance, using the same notation $\rho (n, m)$ for covariances, the relation $\sum_{(n, m) \in \mathbb{Z} ^2}|\rho (n, m)|=\infty$ is taken as the definition of long-range dependence for the random field under consideration, while the relation $\sum_{(n, m) \in \mathbb{Z} ^2}|\rho (n, m)|<\infty$ serves as the definition of short-range dependence. Recalling the definition of directional memory for r.f. in \cite{Paul20}, it is possible to define long-range and short-range directional dependencies. We say that a stationary r.f. $\{X_{k, l}, \ (k, l)\in \mathbb{Z} ^2 \}$ is long-range or short-range dependent in the horizontal direction, if for each fixed $m\in \mathbb{Z} $, the series $\sum_{n \in \mathbb{Z} }|\rho (n, m)|$ is divergent or convergent, respectively. Similarly we define directional dependence in the vertical direction. More generally, we can define both sorts of dependence for any direction, defined by means of rational numbers. Let $q=k/l$ be a fixed rational number (positive or negative) which defines a direction by means of the line $y=qx, x\in \mathbb{R} .$ For fixed $a, b\in \mathbb{Z} $, let us denote by ${\mathscr L}(q,a, b)$ the set $\{(lm+b, km+a): m\in \mathbb{Z} \}\subset \mathbb{Z} ^2$ (the integers $a, b$ are needed to include horizontal and vertical lines). \begin{definition} We say that a mean zero stationary r.f. $\{X_{k, l}, \ (k, l)\in \mathbb{Z} ^2 \}$ with finite variance is long-range or short-range dependent in direction, defined by a rational number $q$, if for any fixed $a, b\in \mathbb{Z} $ the series $$ \sum_{(n, m)\in {\mathscr L}(q, a, b)}|\rho (n, m)| $$ is divergent or convergent, respectively. We say that this r.f. is long-range or short-range dependent if the series $$ \sum_{(n, m)\in \mathbb{Z} ^2}|\rho (n, m)| $$ is divergent or convergent, respectively. \end{definition} Let us note that in \cite{Pilipauskaite} (see Remark 6.1 therein) vertical and horizontal long-range dependence was defined. Also, it is necessary to note that sometimes in the definition of short-range dependence it is additionally required that the sum of covariances is not zero. If the sum of covariances is zero, then we say that we have negative dependence. In our definition negative dependence is part of short-range dependence. Using these definitions we can say that (\ref{cov2}) means that the random field from Proposition \ref{prop3} is long-range dependent and is long-range dependent in both the vertical and the horizontal directions, but is short-range dependent in any other direction. Here it is necessary to note that it is easy to produce examples of a linear r.f. 
which has long-range dependence, but is short-range dependent in one or even in both directions along the axes; one such example is the r.f. from Example 3. Since in the case $\alpha=2$ the variance of a sum $S_{n, m}$ is expressed via covariances, analyzing the normalization constant $A_{n, m}$ it is easy to see that its growth depends on the way in which $n, m$ tend to infinity. The dependence structure of the r.f. in Examples 2 and 3 is more complicated. Let us take the r.f. from Example 3, which has two points of scaling transition. Using the expression of the filter (\ref{cij2}) we have $$ X_{k, l}=\sum_{i=1}^\infty a_i \varepsilon_{k-i, l} + \sum_{j=1}^\infty c_j \varepsilon_{k-j, l-j}. $$ Therefore, it is easy to see that the random variable $$ X_{0, 0}=\sum_{i=1}^\infty a_i \varepsilon_{-i, 0} + \sum_{j=1}^\infty c_j \varepsilon_{-j, -j} $$ is independent of a variable $X_{k, l}$ only if $k>l>0$ or $k<l<0$. For other combinations of the indices $k, l$ it is not difficult to get the following relations for the covariances: $$ \rho(0, l)=a_{|l|}c_{|l|}=(1+|l|)^{-\gamma_1-\gamma_2}, \ \rho(k, 0)=\sum_{i=1}^\infty a_i a_{|k|+i}\sim C|k|^{1-2\gamma_1}, \ \rho(k, k)=\sum_{i=1}^\infty c_i c_{|k|+i}\sim C|k|^{1-2\gamma_2}. $$ Since $\gamma_1 +\gamma_2>1$ and $-1<1-2\gamma_i <0, \ i=1, 2,$ we have \begin{equation}\label{covex3} \sum_{(n, m) \in \mathbb{Z} ^2}\rho (n, m)=\infty, \ \sum_{ k \in \mathbb{Z} } \rho(k, 0)=\infty, \ \sum_{ k \in \mathbb{Z} } \rho(k, k)=\infty, \ \sum_{l \in \mathbb{Z} } \rho( 0, l)<\infty. \end{equation} These relations show that we have the example of a r.f. which we had mentioned above: it is long-range dependent, but in the vertical direction it is short-range dependent. In this example we have two directions - horizontal and diagonal - with long-range dependence. Analysis of the normalizing constants $A_{n, m}$ shows more complicated behavior compared with $A_{n, m}$ in Example 1. The situation with Example 2 is quite different. Since, requiring the condition (\ref{sumZero}), we assume that $\sum_{i_1=0}^{\infty}\sum_{i_2=0}^{\infty}|c_{i_1,i_2}|<\infty$, it is not difficult to see that such a linear random field is short-range dependent and, therefore, is short-range dependent in any direction. But it is easy to note that in this example we have the following relation: \begin{equation}\label{covzero} \sum_{(n, m) \in \mathbb{Z} ^2}\rho (n, m)=0. \end{equation} Moreover, this relation is valid not only for our Example 2, but for a general linear random field satisfying the condition (\ref{sumZero}). Namely, if a linear random field with an absolutely summable filter $\{c_{i, j}, \ (i, j)\in \mathbb{Z} _+^2\}$ satisfies (\ref{sumZero}), then for such a random field relation (\ref{covzero}) holds. The proof of this statement becomes very simple if we take a filter defined on all of $\mathbb{Z} ^2$; then the proof follows from the equalities $$ \sum_{(n, m) \in \mathbb{Z} ^2}\rho (n, m)=\sum_{(n, m) \in \mathbb{Z} ^2}\sum_{(i, j) \in \mathbb{Z} ^2}c_{i, j}c_{i+n, j+m}= $$ $$ \sum_{(i, j) \in \mathbb{Z} ^2}c_{i, j}^2 +\sum_{(i, j) \in \mathbb{Z} ^2}c_{i, j} \sum_{(n, m) \in \mathbb{Z} ^2, (n, m)\ne (0, 0)}c_{i+n, j+m}=\left (\sum_{(i, j) \in \mathbb{Z} ^2}c_{i, j} \right )^2. $$ Returning to the dependence structure in Example 2, it is interesting to note that, despite the relation (\ref{covzero}), the sum of covariances over any line going through the origin is positive. 
For example, denoting $A=\sum_{i=1}^\infty a_i, B=\sum_{i=1}^\infty b_i$ and recalling that $c_{0, 0}=a_0+b_0=-(A+B)$, we easily get \begin{eqnarray*} \sum_{n \in \mathbb{Z} }\rho (n, 0) & = & c_{0, 0}^2+2c_{0, 0}A+\sum_{i=1}^\infty a_i^2+2\sum_{i=1}^\infty a_i\sum_{n=1}^\infty a_{i+n}\\ & = & c_{0, 0}^2+2c_{0, 0}A+(\sum_{i=1}^\infty a_i)^2=(c_{0, 0}+A)^2>0. \end{eqnarray*} In a similar way we can prove that $\sum_{m \in \mathbb{Z} }\rho (0, m)=(c_{0, 0}+B)^2>0$ and that the sums $\sum_{n \in \mathbb{Z} }\rho (n, -n)$ and $\sum_{n \in \mathbb{Z} }\rho (-n, n)$ are also positive. That all these sums are positive can be explained by the fact that in all these sums there is a big positive term $\rho (0, 0)=\sum_{(i, j) \in \mathbb{Z} ^2_+}c_{i, j}^2$. Most probably (but we did not verify this) the same fact holds not only for our Example 2, but also in the case of a general filter with $c_{i, j}\ge 0$ for all $(i, j)\ne (0, 0)$ and $c_{0, 0}=-\sum_{(i, j) \ne (0, 0)}c_{i, j}$. Now let us look at the dependence structure in the same examples in the case $\alpha<2$. Note that the notions of long- and short-range dependence in the literature were used mainly for stationary r.f. with finite variance. This is due to the fact that our knowledge about dependence for stable r.f. is quite limited. For a long time there were only some results concerning dependence for stable r.f.; see \cite{Samorod}, chapter 8.7, where the codifference was calculated for some Takenaka r.f. defined on $\mathbb{R} ^d$. But the dependence was measured in the following way: at first the r.f. was projected onto a line, and then the codifference, as a measure of dependence, was calculated for the obtained stable process on the line. Even for linear stable r.f. the usual codifference (i.e., the dependence between values of a r.f. at two points) was not investigated (the reasons for that are explained in \cite{PaulDam2}). It turned out that the so-called $\alpha$-spectral covariance, introduced in \cite{PaulDam2}, can serve as a measure of dependence and is a quite successful substitute for the usual covariance in defining long-range and short-range dependence. For the definition of the $\alpha$-spectral covariance we refer to \cite{PaulDam2}; here we recall only that for a linear r.f. (\ref{field}) the $\alpha$-spectral covariance is given by the formula \begin{equation}\label{alphaspcor} \rho_{\alpha}(n,m):=\rho_{\alpha}(X_{0,0}, X_{n,m})=\sum_{i=0}^\infty\sum_{j=0}^\infty c_{i, j}^{\langle \alpha/2\rangle}c_{i+n, j+m}^{\langle \alpha/2\rangle}, \ n>0, m>0, \end{equation} where $ x^{\langle a\rangle}=\abs{x}^a{\rm sign}(x)$. Therefore, we suggest classifying random fields (\ref{field}) in the same way as we classified r.f. with finite variance. Namely, we say that a r.f. (\ref{field}) is long-range or short-range dependent (with respect to the $\alpha$-spectral covariance) in the direction defined by a rational number $q$ if for any fixed $a, b\in \mathbb{Z} $ the series $$ \sum_{(n, m)\in {\mathscr L}(q, a, b)}|\rho_{\alpha}(n, m)| $$ is divergent or convergent, respectively. We say that this r.f. is long-range or short-range dependent (with respect to the $\alpha$-spectral covariance) if the series $$ \sum_{(n, m)\in \mathbb{Z} ^2}|\rho_{\alpha}(n, m)| $$ is divergent or convergent, respectively. It is easy to see that this classification can be applied to r.f. indexed by $\mathbb{Z} ^d, \ d\ge 2,$ and to any stationary r.f. for which we can define the $\alpha$-spectral covariance. 
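For orientation, note that for $\alpha=2$ (and unit-variance innovations, as implicitly assumed in the covariance computations above) the quantity (\ref{alphaspcor}) reduces to the ordinary covariance of the linear field, since $x^{\langle 1\rangle}=x$:
$$
\rho_{2}(n,m)=\sum_{i=0}^\infty\sum_{j=0}^\infty c_{i, j}\,c_{i+n, j+m}=\rho(n, m),
$$
so the classification above is a direct extension of the finite-variance one.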
Having the expression (\ref{alphaspcor}), it is not difficult to verify that the dependence structure in the examples of Section 3 (except Example 2, since we do not know whether some analogue of (\ref{covzero}) holds for the $\alpha$-spectral covariance) in the case $\alpha<2$ remains the same as in the case $\alpha=2$. For example, taking the filter (\ref{cij}) from Example 1 and substituting this expression into (\ref{alphaspcor}), it is not difficult to get the following relations $$ \rho_{\alpha}(0,m)=\sum_{i=1}^\infty a_2^{\alpha/2}(i) a_2^{\alpha/2}(m+i)\sim Cm^{1-2\beta_2}, \ \rho_{\alpha}(n,0)=\sum_{i=1}^\infty a_1^{\alpha/2}(i) a_1^{\alpha/2}(n+i) \sim Cn^{1-2\beta_1}, $$ where $\beta_i:=\alpha \gamma_i/2$. Since $1/2<\beta_i<\alpha/2<1$, we have $$ \sum_{(n, m) \in \mathbb{Z} ^2}\rho_{\alpha}(n, m)=\infty, \ \sum_{ m \in \mathbb{Z} }\rho_{\alpha}(0, m)=\infty, \ \sum_{n \in \mathbb{Z} }\rho_{\alpha}(n, 0)=\infty, $$ that is, exactly the same relations as in (\ref{cov2}), only with $\rho_{\alpha}(n, m)$ instead of $\rho (n, m)$. Up to now we have considered the dependence structure in examples of r.f. in the case $d=2$. Long-range and short-range dependence of stationary r.f. in the case $d=3$ can be defined in the same way as in the case $d=2$, but passing to the case $d=3$ we have more possibilities - we can sum covariances over points on lines or planes. For example, considering Example 4 and denoting $\rho (n_1, n_2, n_3):=\mathbb{E} X_{0,0, 0}X_{n_1, n_2, n_3}, (n_1, n_2, n_3)\in \mathbb{Z} ^3$ and taking $1/2<\gamma_1<\gamma_2<\gamma_3<1$, it is easy to see that the non-zero covariances are on the axes: $$ \rho (n_1, 0, 0)\sim C|n_1|^{1-2\gamma_1}, \quad \rho (0, n_2, 0)\sim C|n_2|^{1-2\gamma_2}, \quad \rho (0, 0, n_3)\sim C|n_3|^{1-2\gamma_3}. $$ Thus, this r.f. is long-range dependent along each axis, but is short-range dependent along any line going through the origin and not coinciding with any of the axes. Also it is easy to see that $$ \sum_{(n_1, n_2) \in \mathbb{Z} ^2}\rho (n_1, n_2, 0)=\infty, \quad \sum_{(n_1, n_3) \in \mathbb{Z} ^2}\rho (n_1, 0, n_3)=\infty, \quad \sum_{(n_2, n_3) \in \mathbb{Z} ^2}\rho (0, n_2, n_3)=\infty, $$ but if we take the sum of covariances over a plane not containing any axis, we get a finite value. In Example 5 (the case $\alpha=2$) we have a r.f. with negative and short-range dependence (condition (\ref{sum3dZero})); therefore, there will be short-range dependence in any direction or plane. But there are specific relations for the sums of covariances over the coordinate axes or coordinate planes. Let us denote $$ V_1=\sum_{n_1\in \mathbb{Z} }\rho (n_1, 0, 0), \quad V_2=\sum_{n_2\in \mathbb{Z} }\rho (0, n_2, 0), \quad V_3=\sum_{n_3\in \mathbb{Z} }\rho (0, 0, n_3), $$ $$ U_{3}=\sum_{(n_1, n_2)\in \mathbb{Z} ^2}\rho (n_1, n_2, 0), \quad U_{2}=\sum_{(n_1, n_3)\in \mathbb{Z} ^2}\rho (n_1, 0, n_3), \quad U_{1}=\sum_{(n_2, n_3)\in \mathbb{Z} ^2}\rho (0, n_2, n_3), $$ $$ A_j=\sum_{i=1}^\infty a_j (i), \quad B_j=\sum_{i=1}^\infty a_j^2 (i), \ j=1, 2, 3, \quad \rho_0=\rho (0, 0, 0), \ {\bar c}_0=c_{(0, 0, 0)}. $$ We have the following result. 
\begin{prop}\label{prop7} In Example 5 in the case of finite variance we have the following relations: \begin{equation}\label{covzero0} \sum_{(n_1, n_2, n_3)\in \mathbb{Z} ^3}\rho(n_1, n_2, n_3)=0, \end{equation} \begin{equation}\label{covzero1} V_k=\left (\sum_{j=1, j\ne k}^3 A_j \right )^2 + \left (\sum_{j=1, j\ne k}^3 B_j \right )>0, \ k=1, 2, 3, \end{equation} \begin{equation}\label{covzero2} U_k= A_k^2 +B_k>0, \end{equation} i.e., all sums of covariances over the coordinate axes or coordinate planes are positive and only the sum of covariances over $\mathbb{Z} ^3$ is equal to zero. \end{prop} {\it Proof of Proposition \ref{prop7}}. We prove (\ref{covzero1}) for $k=1$, since the other two relations can be proved in the same way. We have $$ \rho (n_1, 0, 0)={\bar c}_0c_{(n_1, 0, 0)}+\sum_{i=1}^\infty a_1(i)c_{(i+n_1, 0, 0)} $$ and $$ V_1=\sum_{n_1\in \mathbb{Z} }\rho (n_1, 0, 0)=\rho_0+ \left (\sum_{n_1=-\infty}^{-1} +\sum_{n_1=1}^\infty \right )\rho (n_1, 0, 0). $$ From the definition of the covariance we have $\rho_0={\bar c}_0^2+B_1+B_2+B_3$ and easy calculations give us $$ \left (\sum_{n_1=-\infty}^{-1} +\sum_{n_1=1}^\infty \right )\rho (n_1, 0, 0)=2\left ({\bar c}_0 A_1+\sum_{i=1}^\infty\sum_{m=1}^\infty a_1(i) a_1(i+m)\right ). $$ Therefore, we have \begin{eqnarray*} V_1 &=& {\bar c}_0^2+2{\bar c}_0 A_1+\sum_{i=1}^\infty a_1^2(i)+2\sum_{i=1}^\infty\sum_{m=1}^\infty a_1(i) a_1(i+m)+B_2+B_3\\ &=& ({\bar c}_0+A_1)^2+B_2+B_3. \end{eqnarray*} Taking into account the relation ${\bar c}_0=-(A_1+A_2+A_3)$ we get (\ref{covzero1}) with $k=1.$ Now we prove (\ref{covzero2}) with $k=3$ (the proof for $k=1, 2$ is similar). Let us note that $\rho (n_1, n_2, 0)=0$ if $n_1n_2>0$ and $\rho (n_1, n_2, 0)=a_1(|n_1|)a_2(|n_2|)$ if $n_1n_2<0$. Therefore, we can write $$ U_3=V_1+V_2-\rho_0 +\left (\sum_{n_1=-\infty}^{-1}\sum_{n_2=1}^\infty +\sum_{n_1=1}^\infty\sum_{n_2=-\infty}^{-1} \right )\rho (n_1, n_2, 0). $$ It is easy to see that $$ \left (\sum_{n_1=-\infty}^{-1}\sum_{n_2=1}^\infty +\sum_{n_1=1}^\infty\sum_{n_2=-\infty}^{-1} \right )\rho (n_1, n_2, 0)=2A_1A_2, $$ therefore, using the expressions (\ref{covzero1}) for $V_1, V_2$ and the relation $\rho_0={\bar c}_0^2+B_1+B_2+B_3$ we get $$ U_3=(A_2+A_3)^2 + (A_1+A_3)^2+B_1+B_2+2B_3-{\bar c}_0^2-(B_1+B_2+B_3)+2A_1A_2. $$ From this relation we easily get (\ref{covzero2}) with $k=3$. Although we had proved (\ref{covzero0}) earlier, writing the identity $$ \sum_{(n_1, n_2, n_3)\in \mathbb{Z} ^3}\rho(n_1, n_2, n_3)=U_1+U_2+U_3-V_1-V_2-V_3+\rho_0 $$ and using (\ref{covzero1}) and (\ref{covzero2}), we can verify (\ref{covzero0}). \vspace{3mm} \hfill \mbox{$\Box$}\\[2mm]
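Let us also write out this last verification explicitly. Substituting (\ref{covzero1}), (\ref{covzero2}), $\rho_0={\bar c}_0^2+B_1+B_2+B_3$, and ${\bar c}_0=-(A_1+A_2+A_3)$ into the identity above, and using $\sum_{k=1}^3\big(\sum_{j\ne k} A_j\big)^2=\big(\sum_{k=1}^3 A_k\big)^2+\sum_{k=1}^3 A_k^2$, we obtain
$$
\sum_{k=1}^3 U_k-\sum_{k=1}^3 V_k+\rho_0=\sum_{k=1}^3 \left (A_k^2+B_k\right )-\left (\Big(\sum_{k=1}^3 A_k\Big)^2+\sum_{k=1}^3 A_k^2+2\sum_{k=1}^3 B_k\right )+\Big(\sum_{k=1}^3 A_k\Big)^2+\sum_{k=1}^3 B_k=0,
$$
which is (\ref{covzero0}).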
\section{INTRODUCTION} \label{sec:intro} Effective interference management and spatial multiplexing of data in multiuser wireless systems is greatly dependent upon the accuracy of channel state information (CSI) at the transmitter. The use of feedback from receivers in multiuser wireless networks has now become a well-established technique to provide CSI at the transmitter \cite{Love08}. A number of analyses of imperfect feedback scenarios motivated by practical considerations are available, such as partial CSI feedback \cite{Madhow01}, limited-rate feedback \cite{Jindal06}, noisy feedback \cite{Milstein05}, and delayed feedback \cite{WCNC07}. However, the problem considered in this paper is significantly different. Since the performance advantage of closed-loop transmission schemes over their open-loop counterparts is completely determined by the quality of the CSI, this opens the door to deliberate misreporting of CSI by malicious users as a novel form of physical layer attack. Jamming and eavesdropping are the traditional categories of physical layer attacks in the literature, and have been widely studied for multi-antenna systems \cite{Mukherjee09}. To the author's best knowledge, this is the first work to investigate physical layer attacks on MIMO systems based on malicious feedback of CSI. In particular, we examine malicious or \emph{poisoned} feedback attacks on the downlink of a multi-antenna network that is multicasting a common message to multiple receivers. The message being transmitted has no intrinsic value for the attacker; the malicious user is only interested in compromising the Quality-of-Service (QoS) provided to the legitimate receivers. In network security parlance, malicious behavior by authenticated users from within the network is referred to as a `Byzantine attack', and such attacks have usually been studied at the network and transport layers \cite{byzantine}. The remainder of this paper is organized as follows. The multicast network model and the adversarial user's capabilities are described in Sec.~\ref{sec:model}. The various forms of malicious feedback based on the corresponding objectives of the transmitter are listed in Sec.~\ref{sec:poison}. Numerical results that depict the impact of poisoned feedback are shown in Sec.~\ref{sec:simul}, and conclusions are drawn in Sec.~\ref{sec:concl}. \section{MATHEMATICAL MODEL}\label{sec:model} The network under consideration is comprised of an $N_t$-antenna transmitter multicasting to $\tilde K$ legitimate receivers and a single malicious user, all equipped with a single antenna\footnote{It is straightforward to extend the principle of poisoned feedback to the case where each receiver is also equipped with an antenna array, for which multicasting strategies have been proposed in \cite{Boche08,Tomecki09}.} each, such that $\tilde K+1=K$ is the total number of receiving nodes. In the general multicast scenario, a common scalar information symbol $z$ of unit power is transmitted to all $K$ receivers. This necessitates the use of a common $N_t\times 1$ transmit beamformer $\mathbf{u}$ with power constraint $||\mathbf{u}||_2^2 \leq P$. Compared to the broadcast scenario of independent information per receiver, the multicast beamforming problem was shown to be NP-hard \cite{Sidiropoulos06}. This led to the development of a number of approximate solutions based on techniques such as semidefinite programming, for instance \cite{Sidiropoulos06}-\cite{Boche08}. 
The $N_t\times 1$ transmitted signal is \begin{equation} {\mathbf{x}} = \mathbf{u}z. \end{equation} The received signals in a flat fading scenario are \begin{equation} {{y}}_k = {\mathbf{h}}_k {\mathbf{u}}z +{{n}}_k, \quad k = 1, \ldots ,K, \end{equation} where ${\mathbf{h}}_k$ is the $1 \times N_t$ channel state vector for user $k$, and ${{n}}_k$ is additive white Gaussian noise with variance $\sigma _k^2$. Due to the absence of inter-user interference, the signal-to-noise ratio (SNR) is the primary figure of merit: \begin{equation} \operatorname{SNR} _k = \frac{{\mathbf{h}}_k {\mathbf{uu}}^H{\mathbf{h}}_k^H} {{\sigma _k^2 }}. \end{equation} We focus on the following potential transmitter objectives in a multicast scenario: \begin{enumerate} \item Minimization of the transmit power subject to a minimum SNR threshold per receiver. \item Maximization of the average received SNR for all receivers. \item Maximization of the minimum user SNR (max-min) under the total power constraint $P$. \item Maximization of the minimum information rate under the total power constraint $P$. \end{enumerate} Objectives 3 and 4 are equivalent for the case of a single multicast group as in this work. To achieve any of the above system objectives, the transmitter requires global channel state information of all $K$ receivers ${\mathbf{H}} = \left[ {\begin{array}{*{20}c} {{\mathbf{h}}_1^H } & \ldots & {{\mathbf{h}}_{\tilde K}^H } & {{\mathbf{h}}_a^H } \\ \end{array} } \right],$ where the subscript $a$ denotes the malicious adversary. On the other hand, the malicious user seeks to degrade the system performance objectives to the best of its ability by manipulating the CSI it feeds back. We assume that all $\tilde K$ legitimate receivers truthfully transmit their CSI to the transmitter over an error-free public feedback link. Moreover, this global CSI is also known to the malicious user via eavesdropping. The transmitter is assumed to be unaware of the presence of the malicious user and seeks to service all active receivers, i.e., user selection is not considered. The formulation of the resultant poisoned feedback ${{\mathbf{h}}_a }$ from the malicious user is described in the next section. \section{POISONED FEEDBACK}\label{sec:poison} \subsection{Transmit Power Minimization} In this scenario, the transmitter seeks to minimize the transmit power required to satisfy a pre-determined minimum SNR target $\gamma$ for each receiver. On the other hand, the malicious user seeks to maximize the resource consumption at the transmitter. Towards this end, a crude attack would be to demand a very high QoS threshold relative to the legitimate receivers. However, such anomalous attacks are easy to identify, and at the very least would result in the malicious user being dropped from the set of scheduled receivers. Therefore, we consider a more subtle attacker, who seeks to feed back the worst possible channel state information so as to maximize the power consumption at the transmitter. 
The malicious user has the following relaxed optimization problem: \begin{equation} \begin{gathered} \mathop {\max }\limits_{{\mathbf{h}}_a } \mathop {\min }\limits_{\mathbf{u}} \operatorname{trace} \left( {{\mathbf{uu}}^H } \right) \hfill \\ s.t.\operatorname{trace} \left( {{\mathbf{uu}}^H {\mathbf{h}}_k^H {\mathbf{h}}_k } \right) \geqslant \gamma,{\text{ }}k = 1, \ldots ,K \hfill \\ \hspace{0.6in} \operatorname{trace} \left( {{\mathbf{uu}}^H } \right) \leq P \hfill\\ \|\mathbf{h}_a\|_2^2 \geq \beta, \end{gathered} \end{equation} where an additional norm constraint has been placed on $\mathbf{h}_a$ by the attacker to avoid anomalous feedback values. Define ${\mathbf{D}} \triangleq {\mathbf{h}}_a^H {\mathbf{h}}_a$, ${\mathbf{U}} \triangleq {\mathbf{uu}}^H$, and ${\mathbf{G}}_k \triangleq {\mathbf{h}}_k^H {\mathbf{h}}_k.$ Introducing an auxiliary variable $t$, we have the following SDP relaxation for the attacker: \begin{equation} \begin{gathered} \mathop {\min }\limits_{\mathbf{D}} - t \hfill \\ s.t.{\text{ }}\operatorname{trace} \left( {\mathbf{U}} \right) \geqslant t \hfill \\ {\text{trace}}\left( {{\mathbf{UG}_k}} \right) \geqslant \gamma \hfill \\ {\text{trace}}\left( {\mathbf{D}} \right) \geqslant \beta \hfill \\ \end{gathered} \label{eq:SDP} \end{equation} Due to the relaxation of the rank-1 constraint on the transmit covariance, a randomization step is often required after the optimization in (\ref{eq:SDP}). This implies that the attacker may not be able to compute the same beamformer as the transmitter. \subsection{Maximization of Average Received SNR} Under this transmitter objective, the attacker adopts the following: \[ \begin{gathered} \mathop {\min }\limits_{{\mathbf{h}}_a } \mathop {\max }\limits_{\mathbf{u}} \frac{{{\mathbf{u}}^H{\mathbf{HH}}^H {\mathbf{u}} }} {{\sigma _k^2 }} \hfill \\ s.t.{\text{ }}\left\| {\mathbf{u}} \right\|_2^2 = P \hfill \\ \end{gathered} \] For the transmitter's maximization problem, a closed-form solution exists for the optimal beamformer $\mathbf{u}$, namely the principal eigenvector of ${\mathbf{HH}}^H$ \cite{Liu04}. Intuitively, what the attacker should do here is to choose $\mathbf{h}_a$ to be very large and orthogonal to all of the other legitimate channel vectors. The transmit beamformer would then approach $\mathbf{h}_a^H$, and all of the other users would see their allocated power approach zero. \subsection{Maximization of Minimum SNR} An alternative attack is to minimize the maximized minimum SNR among the legitimate receivers. \begin{equation} \begin{gathered} \mathop {\min }\limits_{{\mathbf{h}}_a } \mathop {\max }\limits_{\mathbf{u}} \mathop {\min }\limits_{k} \operatorname{trace} \left( {{\mathbf{uu}}^H {\mathbf{h}}_k^H {\mathbf{h}}_k } \right) {\text{for }}k = 1, \ldots ,K,\hfill \\ s.t. \operatorname{trace} \left( {{\mathbf{uu}}^H } \right) \leq P\\ \|\mathbf{h}_a\|_2^2 \geq \beta. \end{gathered} \end{equation} Broadly speaking, from the transmitter's perspective the optimal beamformer can be expressed as a linear combination of the users' channel state vectors: \[ {\mathbf{u}}^H = \sum\limits_{k = 1}^K {\alpha _k {\mathbf{h}}_k }, \] where the complex coefficients $\alpha _k$ can be obtained using a sequential quadratic program \cite{Liu04}. However, instead of posing the above problem as another SQP or SDP, which are known to be computationally intensive \cite{Utschick07}, we assume the attacker employs an iterative algorithm that alternately optimizes $\mathbf{h}_a$ for a fixed $\mathbf{u}$ and vice versa; a schematic sketch of this alternating procedure is given below. 
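To make the structure of this attack concrete, the following minimal sketch (Python/numpy, with purely illustrative dimensions, step sizes, and helper names) mimics the alternating loop. The transmitter's max-min beamformer is replaced here by a simple projected-subgradient surrogate and the attacker's line search by a random perturbation search, so the snippet is only a schematic stand-in for the procedure detailed next, not the algorithm of \cite{Utschick07}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Nt, K_legit = 5, 4            # illustrative sizes (not the values used in Sec. 4)
P, beta = 10.0, float(Nt)     # power budget and norm constraint on the reported channel
H_legit = (rng.standard_normal((K_legit, Nt))
           + 1j * rng.standard_normal((K_legit, Nt))) / np.sqrt(2)

def maxmin_beamformer(H, P, n_iter=300, step=0.05):
    # Surrogate for the transmitter's max-min beamformer: projected subgradient
    # ascent on the worst user's SNR (NOT the SNR-increasing update of [Utschick07]).
    u = H.conj().sum(axis=0)
    u *= np.sqrt(P) / np.linalg.norm(u)
    for _ in range(n_iter):
        k = int(np.argmin(np.abs(H @ u) ** 2))   # current worst user
        u = u + step * H[k].conj() * (H[k] @ u)  # push power toward that user
        u *= np.sqrt(P) / np.linalg.norm(u)      # project back onto the power constraint
    return u

def worst_legit_snr(h_a):
    # Minimum SNR of the legitimate users after the transmitter reacts to the reported h_a.
    u = maxmin_beamformer(np.vstack([H_legit, h_a[None, :]]), P)
    return float(np.min(np.abs(H_legit @ u) ** 2))   # unit noise power

# Outer loop (attacker): perturb the reported channel h_a so as to lower the
# resulting minimum SNR; a random search stands in for the line search.
h_a = rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)
h_a *= np.sqrt(beta) / np.linalg.norm(h_a)
best = worst_legit_snr(h_a)
for _ in range(40):
    cand = h_a + 0.3 * (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt))
    cand *= np.sqrt(beta) / np.linalg.norm(cand)     # keep ||h_a||_2^2 = beta
    val = worst_legit_snr(cand)
    if val < best:
        best, h_a = val, cand
print("min legitimate-user SNR under poisoned feedback:", best)
\end{verbatim}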
The inner optimization for the transmit beamformer can be carried out based on the iterative SNR-increasing update algorithm in \cite[Sec. VI]{Utschick07}. The attacker initializes the algorithm with an arbitrary channel vector and obtains the corresponding $\mathbf{u}$ for this initial global CSI matrix $\mathbf{H}$. After this step, the new candidate for $\mathbf{h}_a$ is obtained using a line search of appropriate step size in order to find the worst-case feedback in terms of the minimum SNR. These iterations continue until a pre-determined stopping criterion is met. \section{NUMERICAL RESULTS}\label{sec:simul} The following simulation results are compiled using 1000 Monte Carlo trials per point. The channel vectors for all links are composed of independent Gaussian random variables with zero mean and unit variance. The background noise power is assumed to be the same for all $K$ receiving nodes, including the malicious user: $\sigma_k^2=1$. All SNR and rate results shown here correspond to the $\tilde K$ legitimate receivers only, since the attacker has no value for the transmitted information as stated previously. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Fig1} \caption{Transmit power fraction versus number of receivers $K$, $P$=20dB, $N_t=5$ antennas.} \label{fig_txpwr} \end{figure} Fig.~\ref{fig_txpwr} displays the contrast between the total transmit power required to meet a modest SNR target of $\gamma=5$dB per receiver when all receivers report their CSI accurately, and when a single malicious user is present. It is evident that the attacker is able to waste a significant portion of the transmitter's power. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Fig2} \caption{Maximum average SNR versus transmit power $P$, $N_t=5$ antennas.} \label{fig_maxavgSNR} \end{figure} Fig.~\ref{fig_maxavgSNR} exhibits the performance loss in terms of maximum average received SNR in dB of the legitimate users due to poisoned feedback, with $\tilde K=5$ receivers. The attacker is able to starve the other receivers of allocated power on the downlink, and reduces overall QoS levels by up to 3dB even for large transmit powers. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Fig3} \caption{Minimum information rate versus number of receivers $K$, $P$=20dB, $N_t=4$ antennas.} \label{fig_rate} \end{figure} Fig.~\ref{fig_rate} shows the maximized minimum information rates for the closed-loop systems with completely accurate and poisoned feedback, and the open-loop multicast downlink with isotropic transmission \cite{Love08}, respectively. The maximized minimum information rate is defined as \[ \mathop {\max }\limits_{\mathbf{u}} \mathop {\min }\limits_{1 \leqslant k \leqslant \tilde K} \log _2 \left( {1 + \operatorname{SNR} _k } \right). \] The interesting observation here is that the presence of just a single malicious user drives the system performance significantly below that achievable without any feedback whatsoever. \section{CONCLUSION}\label{sec:concl} This paper presented a preliminary investigation of the vulnerability of feedback-based downlink systems to malicious CSI reporting. It is observed that deliberate feedback of the worst possible CSI can lead to a closed-loop system performance that is considerably worse than that achieved by open-loop multicasting without CSI feedback. Therefore, smart detection and repudiation techniques to validate feedback of CSI at the physical layer are necessary, as highlighted by the numerical results. 
Numerous avenues exist for future work, such as the closed-loop broadcast scenario with independent information per receiver.
\section{Introduction \label{sec:intro}} Star formation within galactic disks was proceeding much faster in the first half of the history of the Universe: the cosmic star-formation rate density declined by a factor of approximately ten since $z=1$ (Madau et al. 1998, Hopkins \& Beacom 2006). The mean star-formation rate with respect to the total stellar mass also decreases with decreasing redshift: star-forming galaxies form a `main sequence' in the star-formation/stellar-mass space (e.g., Brinchmann et al. 2004; Daddi et al. 2007; Elbaz et al. 2007; Noeske et al. 2007a,b; Salim et al. 2007; Whitaker et al. 2012). In addition, galactic starbursts (e.g., ultraluminous infrared galaxies (ULIRGs) or smm-galaxies) represent outliers from the main sequence. The slope and offset of the $\dot{M}_*$--$M_*$ powerlaw main sequence relation change with redshift (Speagle et al. 2014). The most prominent change is an increasing offset with increasing redshift, that is, the specific star-formation rate ($\dot{M}_*/M_*$) increases significantly with increasing $z$ (by a factor of $\sim 6$ for galaxies with masses of $M_* \sim 3 \times 10^{10}$~M$_{\odot}$; Pannella et al. 2015). The slope and scatter of this correlation, and the evolution of its normalization with cosmic time, contain crucial and still poorly known information on galaxy evolution (e.g., Karim et al. 2011; Rodighiero et al. 2011; Wuyts et al. 2011; Sargent et al. 2012). Two factors can be invoked to explain the higher star-formation efficiency: (i) higher gas fractions (see, e.g., Combes et al. 2013) and (ii) the dynamical trigger of interactions, whose frequency increases with redshift (e.g., Conselice et al. 2009, Kartaltepe et al. 2012). In this work, we take a closer look at the gas content or fraction and the associated star-formation rate in main sequence and starburst galaxies at $z=0$ and $z \sim 1$--$2$. We look preferentially at local starburst galaxies, ULIRGs, and high-redshift starbursting smm-galaxies. Genzel et al. (2010) and Tacconi et al. (2012) showed that star-forming galaxies at $z=1$--$2$ have higher gas fractions ($\sim 33$\,\%) and higher star-formation efficiencies with respect to the molecular gas ($SFE=SFR/M_{\rm H_2} \sim 1/0.7$~Gyr$^{-1}$) than local spiral galaxies ($\sim 10$\,\% and $SFE \sim 1/2.0$~Gyr$^{-1}$; e.g., Bigiel et al. 2008, Leroy et al. 2008). Local ULIRGs and high-redshift smm-galaxies have the highest star-formation efficiencies (e.g., Pope et al. 2013). The uncertainties of the determined star-formation rates are typically $\sim 50$\,\% (e.g., Leroy et al. 2008). Since molecular hydrogen at temperatures below $100$~K is not directly detectable, one has to rely on CO, HCN, or HCO$^+$ observations to derive H$_2$ gas masses. Unfortunately, the associated conversion factors have high uncertainties (approx. a factor of two, e.g., Bolatto et al. 2013). Our understanding of the gas content and thus the star-formation efficiency is limited by these uncertainties. A complementary way to determine galactic gas masses is the direct modeling of molecular emission. Narayanan \& Krumholz (2014) combined numerical simulations of disc galaxies and galaxy mergers with molecular line radiative transfer calculations to develop a model for the physical parameters that drive variations in CO spectral line energy distributions (SLEDs) in galaxies in terms of the star-formation-rate density. 
Their model was able to reproduce the SLEDs of galaxies over a dynamic range of approximately 200 in star-formation-rate surface density. However, the CO high-$J$ transitions ($J > 8$) of ULIRGs are difficult to reproduce within the model (Fig.~2 of Kamenetzky et al. 2016). Bournaud et al. (2015) modeled the intensity of CO emission lines, based on hydrodynamic simulations of spirals, mergers, and high-redshift galaxies with very high resolutions ($3$~pc and $10^3$~M$_{\odot}$) and detailed models for the phase-space structure of the interstellar gas, including shock heating, stellar feedback processes, and galactic winds. The simulations were analyzed with a large velocity gradient (LVG) model to compute the local emission in various molecular lines in each resolution element, the radiation transfer, opacity effects, and the intensity emerging from the galaxies, in order to generate synthetic spectra for various CO transitions. This model reproduced the known properties of CO spectra and CO-to-H$_2$ conversion factors in nearby spirals and starbursting major mergers. Alternatively, galactic gas disks can be modeled analytically, assuming axisymmetry. Krumholz and Thompson (2007) provided a simple model for understanding how Kennicutt-Schmidt laws, which relate the star-formation rate to the mass or surface density of gas as inferred from some particular line, depend on the line chosen to define the correlation. They assume a probability distribution for the mass fraction of gas at a given density and calculate the molecular emission with an escape probability formalism. The model gas clouds have constant temperature, Mach number, and optical depth. Their results showed that for a turbulent medium, the luminosity per unit volume in a given line, provided that this line can be excited at temperatures lower than the mean temperature in a galaxy's molecular clouds, increased faster than linearly with the density for molecules with critical densities larger than the median gas density. The star-formation rate also rose superlinearly with the gas density, and the combination of these two effects produced a close to linear correlation between star-formation rate and line luminosity. Kazandjian et al. (2015) investigated the effect of mechanical heating on atomic fine-structure and molecular lines and on their ratios. They tried to use those ratios as a diagnostic to constrain the amount of mechanical heating in an object and also to study its significance for estimating the H$_2$ mass. Equilibrium photodissociation region (PDR) models were used to compute the thermal and chemical balance for the clouds. The equilibria were solved for numerically using the optimized version of the Leiden PDR-XDR code. Large velocity-gradient calculations were done as post-processing on the output of the PDR models using RADEX (van der Tak et al. 2007). They showed that high-$J$ CO line ratios and ratios involving HCN are very sensitive to mechanical heating. In this work, we investigate a different analytical approach, where we extend the model of galactic clumpy gas disks presented in Vollmer \& Leroy (2011). The model has a large-scale and a small-scale part. The large-scale part gives the surface density, turbulent velocity, disk height, and gas viscosity. The small-scale part begins at densities where gas clouds become self-gravitating. The non-self-gravitating and self-gravitating clouds obey different scaling relations, which are set by observations. For clouds of a certain density, the area filling factor is calculated. 
The gas and dust temperatures are calculated from the heating and cooling equilibrium. Dense clouds are heated by turbulent, mechanical, and cosmic-ray heating. For all model clouds, the size, density, temperature, and velocity dispersion are known. The molecular abundances of individual gas clouds are determined by a detailed chemical network involving the cloud lifetime, density, and temperature. Molecular line emission is calculated with an escape probability formalism. The model is applied to samples of local spiral galaxies, ULIRGs, high-z star-forming galaxies, and smm-galaxies. The model simultaneously calculates the total gas mass, H$_2$ mass, the gas velocity dispersion, H{\sc i} mass, IR luminosity, IR SED, CO SLED, HCN(1--0), and HCO$^+$(1--0) emission of a galaxy given its size, integrated star-formation rate, stellar mass radial profile, rotation curve, and Toomre $Q$ parameter. In addition, the temperature, density, velocity dispersion, and molecular abundance of a gas cloud at a given density can be retrieved. This article presents a sophisticated model and justifies the physics behind it, shows the results we obtain for different choices of input parameters, and compares with observations. The physical processes included in the model are described in detail in Sect.~\ref{sec:model}. The steps involved in the calculations and in determining the uncertainties are described respectively in Sects.~\ref{sec:method} and \ref{sec:uncertain}. Not all spirals are the same, so rather than define a `typical' member of each of the four classes (local spirals, local ULIRGs, submillimeter (smm) galaxies, and high-z star-forming galaxies), we use true samples of real objects for which we think we can estimate the appropriate values of the input parameters. The samples and their origin are described in Sect.~\ref{sec:samples} and the results of the calculations for the samples are shown in detail in Sect.~\ref{sec:results}. Hence, the reader interested in how well the model reproduces the observations can go directly to Sect.~\ref{sec:samples} (short) or even Sect.~\ref{sec:results}. Sect.~\ref{sec:variation} evaluates the influence of the choice of the chemical network, the Toomre $Q$ parameter, and the length scale parameter $\delta$ in terms of the effect on line and continuum emission. The importance of each of the heating and cooling processes is described in Sect.~\ref{sec:discussion} and that of the assumed gas properties (mass, velocity dispersion, and free-fall time) in Sect.~\ref{sec:physpar}. Finally, we give our conclusion in Sect.~\ref{sec:conclusions}. 
\section{The analytical model \label{sec:model}} \begin{table*} \begin{center} \caption{Model Parameters.\label{tab:parameters}} \begin{tabular}{lll} \hline\hline Parameter & Unit & Explanation \\ \hline $G=5 \times 10^{-15}$ & pc$^{3}$yr$^{-1}$M$_{\odot} ^{-1}$ & gravitation constant \\ $\kappa$ & yr$^{-1}$ & \it epicyclic frequency \\ $\bf Q$ & & \bf Toomre parameter \\ $R$ & pc &galactocentric radius \\ $H$ & pc & thickness of the gas disk\\ $H_{*}$ & pc & thickness of the stellar disk \\ $l_{\rm cl}$ & pc & cloud size \\ $v_{\rm rot}$ & pc\,yr$^{-1}$ & \it rotation velocity \\ $\Omega=v_{\rm rot}/R$ & yr$^{-1}$ & \it angular velocity \\ $\Phi_{\rm V}$ & & volume-filling factor \\ $\Phi_{\rm A}=\Phi_{\rm V}\,H/l_{\rm cl}$ & & area-filling factor \\ $\rho$ & M$_{\odot}$pc$^{-3}$ & disk midplane gas density\\ $\rho_{\rm CNM}$ & M$_{\odot}$pc$^{-3}$ & cool neutral medium density \\ $\rho_{\rm cl}=\rho/\Phi_{\rm V}$ & M$_{\odot}$pc$^{-3}$ & cloud density \\ $\dot{\rho}_{*}$ & M$_{\odot}$pc$^{-3}$yr$^{-1}$ & star-formation rate \\ $\Sigma$ & M$_{\odot}$pc$^{-2}$ & gas surface density \\ $\Sigma_{*}$ & M$_{\odot}$pc$^{-2}$ & \it stellar surface density \\ $\dot{\Sigma}_{*}$ & M$_{\odot}$pc$^{-2}$yr$^{-1}$ & \it star-formation rate \\ $\xi=4.6 \times 10^{-8}$ & pc$^2$yr$^{-2}$ & constant relating SN energy input to SF \\ $\bf \dot{M}$ & M$_{\odot}$yr$^{-1}$ & \bf disk mass accretion rate (radial, within the disk) \\ $v_{\rm turb}$ & pc\,yr$^{-1}$ & gas turbulent velocity dispersion \\ $v_{\rm rad}$ & pc\,yr$^{-1}$ & gas radial velocity \\ $v_{\rm disp}^{*}$ & pc\,yr$^{-1}$ & \it stellar vertical velocity dispersion \\ $c_{\rm s}$ & pc\,yr$^{-1}$ & sound speed \\ $\cal{M}$ & & Mach number \\ $\nu$ & pc$^{2}$yr$^{-1}$ & viscosity \\ $f_{\rm mol}=\Sigma_{\rm H_{2}}/(\Sigma_{\rm HI}+\Sigma_{\rm H_{2}})$ & & molecular fraction \\ $\alpha$ & yr\,M$_{\odot}$pc$^{-3}$ & constant of molecule formation timescale \\ $l_{\rm driv}$ & pc & turbulent driving length scale \\ $\bf \delta=5$ & & {\bf scaling between the driving length scale and the size of the} \\ & & {\bf largest self-gravitating structures} \\ $SFE=\dot{\Sigma}_{*}/\Sigma$ & yr$^{-1}$ & star-formation efficiency \\ $t_{\rm ff}^{l}$ & yr & cloud free fall timescale at size $l$ \\ $t_{\rm turb}^{l}$ & yr & cloud turbulent timescale at size $l$ (turbulent crossing time) \\ $t_{\rm mol}^{l}$ & yr & cloud molecule formation timescale at size $l$ \\ $T_{\rm g}$ & K & gas temperature \\ $T_{\rm d}$ & K & dust temperature \\ \hline \end{tabular} \begin{tablenotes} \item {\bf boldface}: free parameters; {\it italic}: parameters determined from observations. \end{tablenotes} \end{center} \end{table*} Compared to the model described in Sect.~2 of Vollmer \& Leroy (2011) that is based on Vollmer \& Beckert (2003; VB03), the present model does not include a break radius, where the star-formation timescale changes from the free fall timescale to the molecular formation timescale. In addition, we included in this more advanced model (i) the determination of the dense gas fraction, (ii) ISM scaling relations, (iii) the determination of dust and gas temperatures, (iv) a chemical network for the determination of molecular abundances, (v) a formalism for the photodissociation of molecules by the interstellar radiation field, and (vi) the determination of dust and molecular line emission. The model considers the warm, cool neutral, and molecular phases of the ISM as a single, turbulent gas. 
We assume this gas to be in vertical hydrostatic equilibrium with the midplane pressure balancing the weight of the gas and stellar disk. The gas is assumed to be clumpy, so that the local density is enhanced relative to the average density of the disk. Using this local density, we calculate two timescales relevant to star formation: the free-fall timescale of an individual clump and the characteristic timescale for H$_2$ to form on grains. The free-fall timescale is taken as the governing timescale for star formation. The star-formation rate is used to calculate the rate of energy injection by supernovae. This rate is related to the turbulent velocity dispersion and the driving scale of turbulence. These quantities, in turn, provide estimates of the clumpiness of gas in the disk (i.e., the contrast between local and average density) and the rate at which viscosity moves matter inward. The model relies on several empirical calibrations: e.g., the relationship between star-formation rate and energy injected into the ISM by supernovae, the H$_2$ formation timescale (and its dependence on metallicity), and the turbulent dimension of the ISM (used to relate the driving length scale to the characteristic cloud size modulo a free parameter $\delta,$ which is constrained by observations). As far as possible, these are drawn from observations of the Milky Way. The model only contains two free parameters. First, there is an unknown scaling factor relating the driving length of turbulence to the size of gravitationally bound clumps, which we call $\delta$. Second, the mass accretion rate, $\dot{M}$, which is related to the driving length and turbulent velocity, is a free parameter. From a detailed comparison of local spiral galaxies from the THINGS survey, Vollmer \& Leroy (2011) found $\delta=5 \pm 3$. For simplicity, we assume $\delta=5$ in this work. Moreover, the Toomre $Q$ parameter of the gas is set to the observed values for the local spirals ($2$--$8$, e.g., Leroy et al. 2008), and to $Q=1.5$ for the ULIRGs, high-z star-forming and submillimeter galaxies. In the remainder of this section, we discuss our assumptions in slightly more detail, justify them via comparison to observation and theory, and note the physics that we neglect. \subsection{The interstellar medium \label{sec:ISM}} Following, for example, Mac Low \& Klessen (2004), the warm, cool neutral, and molecular phases of the ISM are viewed as a single entity. Locally, the exact phase of the gas depends on the local pressure, metallicity, stellar radiation field, stellar winds, and shocks. Here, we view these factors as secondary, making a few simplifying assumptions. The equilibrium between the different phases of the ISM and the equilibrium between turbulence and star formation depend on three local timescales: the turbulent crossing time $t_{\rm turb}^{l}$, the molecule formation timescale $t_{\rm mol}^{l}$, and the local free-fall timescale $t_{\rm ff}^{l}$ of a cloud. In addition, photodissociation of molecules is taken into account. \subsubsection{The fraction of dense gas \label{sec:gasfrac}} To calculate the mass fraction between two gas densities, we use the density probability distribution function of Padoan et al. 
(1997) for overdensities $x$: \begin{equation} p(x){\rm d}x=\frac{1}{x \sqrt{2\pi \sigma^2}}{\rm exp}\big(-\frac{({\rm ln}\,x+\sigma^2/2)^2}{2 \sigma^2}\big) {\rm d}x \end{equation} where the standard deviation, $\sigma$, is given by \begin{equation} \sigma^2 \simeq {\rm ln}\big(1+({\cal{M}}/2)^2\big) \end{equation} and ${\cal{M}}=v_{\rm turb}/c_{\rm s}$ is the Mach number with the sound speed $c_{\rm s}$. The mass fraction of gas with overdensities exceeding $x$ is then \begin{equation} \frac{\Delta M}{M} = \frac{1}{2} \big(1+{\rm erf}(\frac{\sigma^2-2{\rm ln}(x)}{2^{\frac{3}{2}}\sigma})\big) \ . \end{equation} The overdensity for a given density $\rho_1$ is calculated with respect to the midplane gas density $x=\rho_1/\rho$. For the calculation of the molecular line emission we divide the ISM into two density bins: (i) densities $\rho_1$ equal to or higher than that of the self-gravitating clouds (see Sect.~\ref{sec:clumpiness}): $\rho_1 \geq \rho/\Phi_{\rm V}$ and (ii) densities $\rho_2$ between that of the cool neutral medium and that of the self-gravitating clouds: $\rho_{\rm CNM} \leq \rho_2 \leq \rho/\Phi_{\rm V}$, where $\Phi_{\rm V}$ is the volume-filling factor of the largest self-gravitating structures in the disk, computed via a procedure described in Sect.~\ref{sec:clumpiness}. Following Wolfire et al. (2003), we set the minimum density of the cool neutral medium to \begin{equation} n_{\rm CNM}=\frac{31 \dot{\Sigma_*}/(10^{-8}~{\rm M}_{\odot}{\rm pc}^{-2}{\rm yr}^{-1})}{\big(1+3.1(2.2 \times 10^7~{\rm yr\,M_{\odot}pc^{-3}}/\alpha)^{0.365}\big)}\ {\rm cm^{-3}}. \label{eq:cnmdens} \end{equation} With respect to Eq.~35 of Wolfire et al. (2003), we set the normalized FUV radiation field $G_0'=\dot{\Sigma_*}/(10^{-8}~{\rm M}_{\odot}{\rm pc}^{-2}{\rm yr}^{-1})$ and the normalized dust abundances and gas metallicities $Z'_{\rm d}=Z'_{\rm g}= Z/Z_{\odot}=2.2 \times 10^7~{\rm yr\,M_{\odot}pc^{-3}}/\alpha$ (Eq.~\ref{eq:zzodot}), where $\alpha$ is the constant of the molecule-formation timescale (Eq.~\ref{eq:molform}). Moreover, we set the normalized total ionization rate by cosmic rays and EUV/X-rays $\zeta_{\rm t}'=1$. If the CNM density exceeds the midplane density, we set $\rho_{\rm CNM}=\rho$. The mass fraction of the self-gravitating clouds with respect to the diffuse clouds is a major unknown. Using the lognormal pdf of Padoan et al. (1997) neglects self-gravitation, which can change the shape of the pdf significantly (e.g., Schneider et al. 2015). Based on the findings of the latter authors, we adopt the following recipe: \begin{itemize} \item density bin (ii): \begin{equation} \frac{\Delta M}{M}(R) = y\,\frac{\Delta M}{M}(x_{\rm sg}(R))\ , \end{equation} \item density bin (i): \begin{equation} \frac{\Delta M}{M} = \frac{\Delta M}{M}(x_{\rm CNM}(R)) - y\,\frac{\Delta M}{M}(x_{\rm sg}(R))\ , \end{equation} where $x_{\rm CNM}$ and $x_{\rm sg}$ are the overdensities of the cool neutral medium and self-gravitating clouds. \end{itemize} The normalization factor is \begin{equation} y=0.3\, R_0/\big(\int_0^{R_0} \frac{\Delta M}{M}(x_{\rm CNM}(R)) {\rm d}R\big)\ . \end{equation} Within the density bin, the mass fraction of clouds of overdensity between $x_1$ and $x_2$ is calculated as the difference between the mass fractions. For the determination of the Mach number, we adopt the temperature of the cool neutral medium ($\sim 100$~K) to calculate the sound speed. For the self-gravitating clouds, we adopt the temperature of the molecular cloud ($10$--$30$~K). 
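As an illustration of how these mass fractions can be evaluated in practice, the following minimal numerical sketch (Python; the Mach number and overdensities are purely illustrative values, not those of the galaxy samples studied later) implements the lognormal pdf width and the mass-fraction formula given above:
\begin{verbatim}
import math

def sigma2(mach):
    # Variance of ln(x) for the lognormal overdensity pdf (Padoan et al. 1997).
    return math.log(1.0 + (mach / 2.0) ** 2)

def mass_fraction_above(x, mach):
    # Mass fraction of gas with overdensity exceeding x.
    s2 = sigma2(mach)
    return 0.5 * (1.0 + math.erf((s2 - 2.0 * math.log(x))
                                 / (2.0 ** 1.5 * math.sqrt(s2))))

def mass_fraction_between(x1, x2, mach):
    # Mass fraction of clouds with overdensities between x1 and x2 (x1 < x2).
    return mass_fraction_above(x1, mach) - mass_fraction_above(x2, mach)

# Illustrative values only: Mach ~ 20, CNM overdensity ~ 3,
# self-gravitating overdensity ~ 100.
print(mass_fraction_above(100.0, 20.0))          # gas at or above the self-gravitating density
print(mass_fraction_between(3.0, 100.0, 20.0))   # diffuse (CNM-like) clouds
\end{verbatim}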
This prescription conserves mass, that is, $\sum_{i=1}^N \big( \frac{\Delta M}{M} \big)_i = 1$. \subsubsection{ISM scaling relations \label{sec:scaling}} We assume different scaling relations for the two density regimes: (i) for non-selfgravitating clouds, we adopt the scaling relations found for galactic H{\sc i} by Quiroga (1983): $\rho_{\rm cl} \propto l^{-2}$, $v_{\rm turb,cl} \propto l^{1/3}$, and thus $v_{\rm turb,cl} = v_{\rm turb}(\rho_{\rm cl}/\rho)^{-1/6}$, where $v_{\rm turb}$ and $\rho$ are the turbulent velocity and the density of the disk, respectively. Since the minimum density considered in this work is $100$~cm$^{-2}$, the maximum turbulent velocity of diffuse clouds is $\sim v_{\rm turb}/2 \sim 5$~km\,s$^{-1}$. (ii) For self-gravitating clouds, we adopt the scaling relations of Lombardi et al. (2010): $\rho_{\rm cl} \propto l^{-1.4}$, $v_{\rm turb,cl} \propto l^{1/2}$. As described in Sect.~\ref{sec:clumpiness}, the scale of the largest self-gravitating clouds $l_{\rm cl}$ is smaller than the turbulent driving length scale $l_{\rm driv}$ by a factor $\delta=l_{\rm driv}/l_{\rm cl}$. We assume that the turbulent velocity dispersion of the largest self-gravitating clouds of density $\rho_{\rm sg}$ is $v_{\rm turb,cl} = v_{\rm turb}/\sqrt{\delta}$, where $v_{\rm turb}$ is the velocity dispersion of the disk. For the assumed value of $\delta=5$ (see Sect.~\ref{sec:clumpiness}), this yields a velocity dispersion of the largest self-gravitating clouds of $v_{\rm turb,cl} \sim 5$~km\,s$^{-1}$, which is consistent with observations (e.g., Solomon et al. 1987). We thus obtain $v_{\rm turb,cl} = v_{\rm turb}/\sqrt{\delta}(\rho_{\rm cl}/\rho_{\rm sg})^{-1/3}$ for clouds with $\rho_{\rm cl} > \rho_{\rm sg}$. Alternatively, we assume $\rho_{\rm cl} \propto l^{-1}$ and $v_{\rm turb,cl} = v_{\rm turb}/\sqrt{\delta}(\rho_{\rm cl}/\rho_{\rm sg})^{-1/2} \propto l^{\frac{1}{2}}$ (Solomon et al. 1987). It turned out that the different scaling relations for self-gravitating clouds result in consistent molecular line luminosities within $\sim 10$\,\%, except for the HCN emission of local spirals where the difference is $\sim 20$\,\%. The models including the Solomon et al. (1987) scaling always reproduce observations slightly better. Therefore, the following results are based on the Solomon et al. (1987) scaling. \subsubsection{Gas and dust temperatures \label{sec:gdtemp}} Neufeld \& Kaufman (1993) and Neufeld et al. (1995) considered the radiative cooling of fully shielded molecular astrophysical gas over a wide range of temperatures ($10~{\rm K} \leq T_{\rm g} \leq 2500$~K) and H$_2$ densities ($10^3$~cm$^{-3} \leq n({\rm H}_2) \leq 10^{10}$ cm$^{-3}$). Their model for the radiative cooling of molecular gas includes a detailed treatment of the interstellar chemistry that determines the abundances of important coolant molecules, and a detailed treatment of the excitation of the species H$_2$, CO, H$_2$O, HCl, O$_2$, C, O, and their isotopic variants where important. For simplicity, we only take the main cooling agents, CO, H$_2$, and H$_2$O, into account. We assume CO and H$_2$O abundances of $x_{\rm CO}=10^{-4} (Z/Z_{\odot})$ and $x_{\rm H_2O}=10^{-6} (Z/Z_{\odot})$. According to Fig.~2 of Neufeld et al. (1995), we may underestimate, in this way, the cooling rates by approximately a factor of 2. However, for densities $n({\rm H}_2) > 10^{5}$~cm$^{-3}$ and low temperatures ($T \sim 20$~K), the discrepancy increases up to a factor $3$--$4$. Neufeld \& Kaufman (1993) and Neufeld et al. 
(1995) defined the molecular cooling rate as $\Lambda_{\rm g}=L n(H_2)n(M)$, where $n({\rm H}_2)$ and $n({\rm M})$ are the H$_2$ and coolant particle densities. The rate coefficient $L$ depends on $n({\rm H}_2$), the gas temperature $T$, and \begin{equation} \tilde{N}({\rm M})=\frac{g n({\rm M})}{|{\rm d}v_{\rm turb}/{\rm d}l|}\ , \end{equation} where $g=1$ is a dimensionless geometrical factor and ${\rm d}v_{\rm turb}/{\rm d}l$ is the turbulent velocity gradient. Neufeld \& Kaufman (1993) and Neufeld et al. (1995) provided an analytical expression for $L$ (Eq.~5 of Neufeld \& Kaufman 1993) as a function of a set of parameters which depend on $n({\rm H}_2$), $T$, and $\tilde{N}({\rm M})$. Since for each model cloud, $n({\rm H}_2$), $T$, and $\tilde{N}({\rm M})$ are known, we calculated the molecular line cooling $\Lambda_{\rm g}$ by interpolating the tabulated values of this parameter set. To investigate the differences between our gas cooling and that proposed by Goldsmith (2001), we calculated these quantities for light and massive self-gravitating molecular clouds of different sizes, densities, column densities, and velocity dispersions: (i) light clouds with $M_{\rm cl}=10^4$~M$_{\odot}$: $l_{\rm cl}=\zeta^{-1} 10$~pc, $n_{\rm cl}=\zeta^3 \, 380$~cm$^{-3}$, $N_{\rm cl}=\zeta^2 \, 10^{22}$~cm$^{-2}$, and $v_{\rm turb}^{\rm cl}= \zeta^{\frac{1}{2}} 5.4$~km\,s$^{-1}$ (plus signs in Fig.~\ref{fig:goldsmith}), and (ii) massive clouds with $M_{\rm cl}=6 \times 10^4$~M$_{\odot}$: $l_{\rm cl}=\zeta^{-1} 10$~pc, $n_{\rm cl}=\zeta^3 \, 2350$~cm$^{-3}$, $N_{\rm cl}=\zeta^2 \, 7 \times 10^{22}$~cm$^{-2}$, and $v_{\rm turb}^{\rm cl}= \zeta^{\frac{1}{2}} 13.6$~km\,s$^{-1}$ (triangles in Fig.~\ref{fig:goldsmith}) with $1 \leq \zeta \leq 10$. Our simplified cooling prescription is in good agreement with that of Goldsmith (2001) ($\sim 0.2$~dex) for the less massive clouds (Fig.~\ref{fig:goldsmith}). For the more massive clouds, our cooling prescription gives values up to a factor $4$ ($0.6$~dex) higher than those of Goldsmith (2001) for the highest cloud densities. Overall, the ratio between our cooling and that of Goldsmith (2001) is approximately $0.3$~dex. Since the dependence of cooling on temperature is approximately $\Gamma \propto T^{2.5-3.0}$ (Goldsmith 2001), the corresponding uncertainty on the gas temperature is $0.24$~dex (a factor of $1.7$) at most and $0.12$~dex (a factor of $1.3$) overall. \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{goldsmith.ps}} \caption{Ratio between the cooling by CO, H$_2$, and H$_2$O (Neufeld \& Kaufman 1993; Neufeld et al. 1995) and the cooling function proposed by Goldsmith (2001) for light (plus signs) and massive (triangles) self-gravitating molecular clouds. The symbol sizes are proportional to the scaling factor $\zeta$ (see text). \label{fig:goldsmith}} \end{figure} We thus conclude that our cooling prescription and that of Goldsmith (2001) are the same within a factor of $2$. We prefer to use the reduced Neufeld line cooling instead of the cooling function proposed by Goldsmith (2001), because it takes into account the cloud column density and velocity dispersion. For the calculation of the thermal balance within molecular clouds, one needs to consider processes affecting the gas and the dust in addition to the radiative gas cooling discussed above. 
We assume gas heating via turbulence and cosmic rays: \begin{equation} \Gamma_{\rm g} = \Gamma_{\rm turb} + \Gamma_{\rm CR} = \frac{1}{3} \rho \frac{(v_{\rm turb}^{\rm cl})^3}{r_{\rm cl}} + \eta \rho \dot{\Sigma_*}\ . \end{equation} Photoelectric heating by UV photons within photodissociation regions is neglected because the local FUV field plays a minor role for the CO luminosity of a giant molecular cloud (Wolfire et al. 1993; see Sect.~\ref{sec:pdr}). The factor of $\frac{1}{3}$ in the expression for the turbulent heating is somewhat lower than the factor of $0.42$ advocated by Mac Low (1999). Following Nelson \& Langer (1997), the constant $\eta$ is chosen such that for $\dot{\Sigma_*}=10^{-8}~{\rm M}_{\odot}{\rm pc}^{-2}{\rm yr}^{-1}$, $\Gamma_{\rm CR}=6.4 \times 10^{-28} (n({\rm H_2})/{\rm cm^{-3}})~{\rm ergs\,cm^{-3}s^{-1}}$. Furthermore, $\eta$ includes the attenuation factor $(\Sigma/(0.9~{\rm M_{\odot}pc^{-2}}))^{0.021}\exp(-\Sigma/(9 \times 10^4~{\rm M_{\odot}pc^{-2}}))$ described by Padovani \& Galli (2013) and a factor accounting for the CR advection by a galactic wind ($1$ for local spiral and high-z star-forming galaxies; $1$ for non-self-gravitating clouds and $140$ for the self-gravitating clouds in ULIRGs; $140$ for smm-galaxies; see Sect.~\ref{sec:winds}). The dust is heated by the interstellar UV and optical radiation field: \begin{equation} \Gamma_{\rm d} = n_{\rm d} \sigma_{\rm d} F\ , \end{equation} where $n_{\rm d}$ is the density of dust grains and $\sigma_{\rm d}$ the absorption cross section of a grain. Following Goldsmith (2001), we set $n_{\rm d} \sigma_{\rm d}=7.4 \times 10^{-22} (n({\rm H_2})/{\rm cm^{-3}})$~cm$^{-1}$. The ratio between the interstellar UV/optical and total radiation field is assumed to be \begin{equation} \frac{F}{F_0}=k \times \big( \frac{\dot{\Sigma_*}}{10^{-8}~{\rm M}_{\odot}{\rm pc}^{-2}{\rm yr}^{-1}} + \frac{\Sigma_*}{40~{\rm M_{\odot}pc^{-2}}} \big)\ , \end{equation} where $F_0=5.3 \times 10^{-3}$~ergs\,cm$^{-2}$s$^{-1}$ (Goldsmith 2001). We assume that the UV radiation is emitted by young massive stars whose surface density is proportional to the star-formation rate per unit area $\dot{\Sigma_*}$. The optical light stems from the majority of disk stars (Mathis et al. 1983, Draine 2011) whose surface density is $\Sigma_*$. The normalizations of $\dot{\Sigma_*}$ and $\Sigma_*$ are set by observations of the ISRF at the solar radius: $6.7$\,\% of the total stellar light is emitted in the UV (Mathis et al. 1983, Draine 2011). This implies that the local Galactic star-formation rate is $\dot{\Sigma_*}=6.7 \times 10^{-10}~{\rm M}_{\odot}{\rm pc}^{-2}{\rm yr}^{-1}$, which is (i) approximately a factor of two lower than the value given by Kennicutt \& Evans (2012), (ii) consistent with the local star-formation rate at $\sim 0.75 \times R_{25}$ in the sample of nearby spiral galaxies of Leroy et al. (2008), and (iii) a reasonable value for a gas disk at $R=8$~kpc with $v_{\rm rot}=200$~km\,s$^{-1}$, $Q \sim 3$, and $\dot{M}=0.2$~M$_{\odot}$yr$^{-1}$. Furthermore, we allow for an additional factor $k$, which plays the role of $U_{\rm min}$ in the Draine \& Li (2007) models. We set $k=1$ for all galaxies except the local spirals where $k=2$. This additional factor is (i) needed to reproduce the observed infrared spectral energy distributions and (ii) consistent with the distribution found for nearby spiral galaxies by Dale et al. (2012). In the presence of dust and gas, the interstellar radiation field is attenuated.
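Before describing this attenuation, the two gas heating terms can be illustrated numerically. The following Python sketch (in cgs units; the cloud parameters are arbitrary examples, and the attenuation and wind-advection factors folded into $\eta$ are omitted) evaluates $\Gamma_{\rm turb}$ and $\Gamma_{\rm CR}$:
\begin{verbatim}
# Sketch of the gas heating terms in cgs units (turbulence + cosmic rays).
# The cloud density, radius, and velocity dispersion below are illustrative;
# the CR term is normalized as in the text (Gamma_CR = 6.4e-28 n(H2) for the
# Galactic SFR surface density of 1e-8 Msun/pc^2/yr) and scales linearly
# with the SFR surface density; attenuation/advection corrections are omitted.
M_H2 = 2.0 * 1.6726e-24          # g, mass of an H2 molecule (helium neglected)
PC = 3.086e18                    # cm per parsec

def gamma_turb(n_h2, v_turb_kms, r_cl_pc):
    """Turbulent heating rate (1/3) rho v^3 / r in ergs cm^-3 s^-1."""
    rho = n_h2 * M_H2
    v = v_turb_kms * 1.0e5       # cm/s
    return rho * v ** 3 / (3.0 * r_cl_pc * PC)

def gamma_cr(n_h2, sfr_surface_density):
    """Cosmic-ray heating for an SFR surface density in Msun pc^-2 yr^-1."""
    return 6.4e-28 * n_h2 * (sfr_surface_density / 1.0e-8)

print(gamma_turb(1.0e3, 5.0, 5.0), gamma_cr(1.0e3, 1.0e-8))
\end{verbatim}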
For this attenuation we adopted the mean extinction of a sphere of constant density \begin{equation} \label{eq:attenuationfactor} I/I(0)= 3\,(\tau^{-1}-2\,\tau^{-2}+2\,\tau^{-3})-6\,\exp(-\tau)\,\tau^{-3}\ , \end{equation} with $\tau=(Z/Z_{\odot})\,\Sigma/(15~{\rm M_{\odot}pc^{-2}})$. For high optical depths, Eq.~\ref{eq:attenuationfactor} becomes $I/I(0) \sim 3 \tau_{\rm V}^{-1}$, that is, the transmitted fraction of the radiation field drops to very low values. However, when the molecular clouds become optically thick in the near-infrared (at $\tau_{\rm V} \sim 10$), radiative transfer effects become important; a significant infrared radiation field builds up which heats the dust deep inside the molecular clouds. To take this additional heating term into account, we set $I/I(0)=0.246$ if $3\,(\tau^{-1}-2\,\tau^{-2}+2\,\tau^{-3})-6\,\exp(-\tau)\,\tau^{-3} < 0.246$ (see Appendix~\ref{sec:atten}). The model dust temperature in the absence of collisional dust heating for the local Galactic interstellar radiation field, including the attenuation factor $I/I(0)$, is $T_{\rm dust}=18.4$~K. This temperature lies between the equilibrium temperature of silicate ($16.4$~K) and graphite ($22.3$~K) for the local Galactic interstellar radiation field (Draine 2011). The expression for the radiative heating of dust grains yields \begin{equation} \label{eq:dustheating} \Gamma_{\rm d} = 3.9 \times 10^{-24} (\frac{F}{F_0}) (\frac{I}{I(0)}) (n({\rm H_2})/{\rm cm^{-3}})~{\rm ergs\,cm^{-3}s^{-1}}\ . \end{equation} We assume a dust mass absorption coefficient of the form \begin{equation} \label{eq:kappa} \kappa(\lambda)=\kappa_0\,(\lambda_0/\lambda)^{\beta}\ , \end{equation} with $\lambda_0=250~\mu$m, $\kappa_0=0.48~{\rm m}^2{\rm kg}^{-1}$ (Dale et al. 2012), and a gas-to-dust ratio of $GDR=M_{\rm gas}/M_{\rm dust}=\frac{Z}{Z_{\odot}} \times 100$ (including helium; R{\'e}my-Ruyer et al. 2014). Our gas-to-dust ratio is a factor of $1.5$ lower than the solar gas-to-dust ratio (Sofia \& Meyer 2001a, b). We set the slope $\beta=1.5$ for the local spiral (Dale et al. 2012) and high-z star-forming galaxies, and $\beta=2.0$ for the ULIRGs (Klaas et al. 2001) and smm-galaxies. Adapting the dust cooling rate of Goldsmith (2001) yields \begin{equation} \label{eq:dustcooling} \Lambda_{\rm d}=7.5 \times 10^{-31} (Z/Z_{\odot})\,(T_{\rm d}/{\rm K})^{5.5} \big(n({\rm H_2})/{\rm cm^{-3}} \big)~{\rm ergs\,cm^{-3}s^{-1}}\ . \end{equation} Following Goldsmith (2001), the collisional energy transfer between gas and dust is \begin{equation} \Lambda_{\rm gd}= 2 \times 10^{-33} \big(\frac{n({\rm H_2})}{{\rm cm^{-3}}} \big)^2 (\frac{\Delta T}{\rm K}) \sqrt{\frac{T_{\rm g}}{10~{\rm K}}}~{\rm ergs\,cm^{-3}s^{-1}}\ , \end{equation} where $\Delta T=T_{\rm g}-T_{\rm d}$. To determine the thermal balance of gas and dust, coupled together by the gas-dust collisions, we solve the following equations simultaneously: \begin{equation} \Gamma_{\rm g} - \Lambda_{\rm g} - \Lambda_{\rm gd} = 0 \end{equation} and \begin{equation} \label{eq:dusttemp} \Gamma_{\rm d} - \Lambda_{\rm d} + \Lambda_{\rm gd} = 0\ . \end{equation} \subsubsection{CO, HCN, and HCO$^+$ abundances from chemical network \label{sec:network}} Chemical modeling is carried out using the {\tt Nautilus} gas-grain code presented in detail in Hersant et al. (2009), Semenov et al. (2010), and Ruaud et al. (2015). This code computes the abundances of chemical species (atoms and molecules) as a function of time by solving the rate equations for a network of reactions.
For gas-phase reactions, we use the kida.uva.2014 network (Wakelam et al. 2015\footnote{the network is available online on the KIDA website {\rm http://kida.obs.u-bordeaux1.fr}}) comprising $489$ species and $6992$ reactions. For grain surface reactions, we use the desorption, diffusion, activation barrier energies along with a set of grain surface reactions, all from the KIDA database. Both, thermal and non-thermal desorption processes are taken into account, the latter consisting mainly of CR-induced desorption following the formalism presented by Hasegawa et al. (1993). The model parameters are time, density, gas temperature, grain temperature, UV flux, cosmic ray ionization rate, and the elemental abundances of the elements C, O, and N (C/H$=1.7 \times 10^{-4}$, O/H$=2.4 \times 10^{-4}$, N/H$=6.2 \times 10^{-5}$). Grids of models were obtained by varying the cloud lifetime (20 log spaced steps between $10^3$ and $10^8$~yr), the cloud density (20 log spaced steps between $10^3$ and $10^8$~cm$^{-3}$), and cloud gas temperatures (20 log spaced steps between $10$ and $300$~K for local spirals and $10$ and $800$~K for ULIRGs). For each type of cloud, the CO, HCN, and HCO+ abundances were interpolated on the grid given the lifetime, density, and temperature of the cloud. The ISM chemistry also depends on the dust temperature and the cosmic ray ionization rate, which were assumed to be constant for each galaxy sample: \begin{itemize} \item local spiral galaxies: $T_{\rm dust}=15$~K, $\zeta_{\rm CR}=3 \times 10^{-17}$~s$^{-1}$, \item ULIRGs, smm, high-z star-forming galaxies: $T_{\rm dust}=30$~K, $\zeta_{\rm CR}=1.3 \times 10^{-15}$~s$^{-1}$, \end{itemize} Testing of the influence of these parameters on the molecular line emission showed that the dust temperature plays a minor role. For the choice of the CR ionization rates, we refer to Sect.~\ref{sec:winds}. To take the gas metallicity into account, all abundances are multiplied by $(Z/Z_{\odot})$. \subsection{Supernova-driven turbulence \label{sec:SNturbulence}} First, we assume that the gas is turbulent, so that the turbulent velocity is the relevant one throughout the disk (making the exact temperature of the gas largely irrelevant; for simplicity, we assume a constant sound speed of $c_{\rm s}=6$~km\,s$^{-1}$ for the warm neutral medium). We assume that this turbulence is driven by SNe and that they input their energy in turbulent eddies that have a characteristic length scale, $l_{\rm driv}$, and a characteristic velocity, $v_{\rm turb}$. This driving length scale may be the characteristic length scale of a SN bubble, but it does not have to be so. It may be set by the interaction of multiple SN bubbles or of a SN with the surrounding ISM. We note that based on simulations, the assumption of a single driving scale may be a simplification (Joung \& Mac Low 2006). The VB03 model does not address the spatial inhomogeneity of the turbulent driving nor the mechanics of turbulent driving and dissipation. It is assumed that the energy input rate into the ISM due to SNe, $\dot{E}_{\rm SN}$, is cascaded to smaller scales without losses by turbulence. At scales smaller than the size of the largest self-gravitating clouds, the energy is dissipated via radiation from cloud contraction and star formation. We refer to Mac Low \& Klessen (2004) for a review of these topics. We limit our analytical model to the first energy sink, which is the scale where the clouds become self-gravitating. 
We can connect the energy input into the ISM by SNe directly to the star-formation rate. With the assumption of a constant initial mass function (IMF) independent of environment one can write \begin{equation} \label{eq:energyflux} \frac{\dot{E}_{\rm SN}}{\Delta A}=\xi\,\dot{\Sigma}_{*} = \xi\,\dot{\rho}_{*} l_{\rm driv}=\Sigma \nu \frac{v_{\rm turb}^{2}}{l_{\rm driv}^{2}}\ , \end{equation} where $\Delta A$ is the unit surface element of the disk and the CO disk thickness is assumed to be $l_{\rm driv}$. The gas disk viscosity is defined as $\nu=l_{\rm driv} v_{\rm turb}$ (VB03 and Sect.~\ref{sec:accretiondisk}). Following Vollmer \& Beckert (2001), the turbulent energy dissipation rate is $\Delta E/(\Delta A \Delta t)=\rho \nu v_{\rm turb}^2/l_{\rm driv}=\rho v_{\rm turb}^3$. The turbulent dissipation timescale is \begin{equation} \Delta t=\frac{\Sigma v_{\rm turb}^2}{\Delta E/(\Delta A \Delta t)}=\frac{\rho H v_{\rm turb}^2}{\Delta E/(\Delta A \Delta t)}= \frac{H}{v_{\rm turb}} \sim \Omega^{-1}\ . \end{equation} This result is in agreement with numerical simulations of turbulence that show a decay of turbulence on an approximately crossing timescale (e.g., Stone et al. 1998; Mac Low et al. 1998). The factor of proportionality $\xi$ relates the local SN energy input to the local star-formation rate and is assumed to be independent of local conditions. $\xi$ is normalized using Galactic observations by integrating over the Galactic disk and results in $\xi=4.6 \times 10^{-8}$~(pc/yr)$^{2}$ (see VB03). The adopted energy that is injected into the ISM is $E^{\rm kin}_{\rm SN}=10^{50}$~ergs based on numerical studies by Thornton et al. (1998). The final two parts of Eq.~\ref{eq:energyflux} assume that stars form over a characteristic scale equal to the driving length and equate energy output from SNe with the energy transported by turbulence (see VB03). In the outer galactic disk, where the star-formation activity is very low, turbulence can be maintained by the energy gained via accretion within the gravitational potential of the galaxy (Vollmer \& Beckert 2002). In this case, the energy injection rate is \begin{equation} \frac{\dot{E}_{\rm acc}}{\Delta A}=\frac{\dot{M}}{2\pi} \Omega^2 \ . \end{equation} This energy source represents an addition to the model described in Vollmer \& Leroy (2011). The total energy injection rate is \begin{equation} \frac{\dot{E}_{\rm tot}}{\Delta A}=\frac{\dot{E}_{\rm SN}}{\Delta A}+\frac{\dot{E}_{\rm acc}}{\Delta A}\ . \end{equation} \subsection{Star formation in molecular clouds \label{sec:SFRGMC}} Second, we assume that stars form out of gravitationally bound clouds. We take the local gravitational free-fall time, given by \begin{equation} t^{\rm l}_{\rm ff}=\sqrt{\frac{3\pi}{32G\rho_{\rm cl}}}\ , \label{eq:localff} \end{equation} \noindent to be the relevant timescale for star formation. Here, $G$ is the gravitational constant and $\rho_{\rm cl}$ the density of a single cloud. Cloud collapse, and thus star formation can only proceed if enough molecules form during the cloud collapse to allow the gas to continue cooling\footnote{However, based on theoretical arguments and numerical simulations Krumholz et al. (2011, 2012) and Glover \& Clark (2012a,b) argue that C$^+$ cooling is sufficiently strong for gas to from stars as long as it is sufficiently shielded from the interstellar radiation field.}. 
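As a simple illustration of Eq.~\ref{eq:localff}, the local free-fall time for representative cloud densities can be evaluated as follows (Python sketch; helium is neglected in the conversion from H$_2$ number density to mass density, and the densities are arbitrary examples):
\begin{verbatim}
import math

G_CGS = 6.674e-8                 # cm^3 g^-1 s^-2
M_H2 = 2.0 * 1.6726e-24          # g per H2 molecule (helium neglected)
YR = 3.156e7                     # s per year

def free_fall_time_yr(n_h2):
    """Local free-fall time of a cloud of H2 number density n_h2 [cm^-3]."""
    rho = n_h2 * M_H2
    return math.sqrt(3.0 * math.pi / (32.0 * G_CGS * rho)) / YR

# A 100 cm^-3 cloud collapses in roughly 4 Myr, a 10^4 cm^-3 clump in ~0.4 Myr.
print(free_fall_time_yr(100.0), free_fall_time_yr(1.0e4))
\end{verbatim}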
Vollmer \& Leroy (2011) assumed that, where the timescale for H$_2$ formation is long (compared to the free-fall time), the relevant timescale for star formation is the H$_2$ formation timescale. Since we aim at reproducing the ISM properties within the optical disks, we decided not to include this complication in our present model. \subsection{Molecular fraction \label{sec:molfrac1}} We follow two approaches to calculate the fraction of molecular hydrogen: (i) based on the lifetimes of the molecular clouds and (ii) based on photodissociation of molecules by the interstellar radiation field. In Sect.~\ref{sec:molfrac}, we show that both formalisms lead to consistent results for the molecular fraction. \subsubsection{Molecular fraction based on the lifetimes of the molecular clouds} This approach assumes that molecular clouds are relatively short-lived, appearing and disappearing over approximately a free-fall time (equivalently, by our construction, a turbulent crossing time); otherwise they might reach chemical equilibrium even when the H$_2$ formation time is long compared to the free-fall time. Accordingly, we estimate the molecular ratio in the disk from the ratio of a cloud lifetime (i.e., the crossing or free-fall time) to the H$_2$ formation time scale: \begin{equation} R_{\rm mol}=\frac{\Sigma_{\rm H_{2}}}{\Sigma_{\rm HI}}=t_{\rm turb}^{l}/t_{\rm mol}^{l}\ . \end{equation} The molecular fraction is \begin{equation} f_{\rm mol}=\frac{\Sigma_{\rm H_{2}}}{\Sigma_{\rm HI}+\Sigma_{\rm H_{2}}}=\frac{R_{\rm mol}}{1+R_{\rm mol}}\ . \end{equation} We take the characteristic time to form H$_2$ out of H to be approximately \begin{equation} \label{eq:molform} t_{\rm mol}^{l}=\alpha / \rho_{\rm cl}\ , \end{equation} where $\alpha$ is a coefficient that depends on the gas phase metallicity and temperature (Draine \& Bertoldi 1996) and $\rho_{\rm cl}$ is the density of a single cloud. The coefficient of the molecular formation timescale $\alpha_0$ is assumed to be metallicity dependent (Tielens \& Hollenbach 1985). Because we admit external gas accretion, the metallicity of the star-forming ISM mainly depends on the ratio of accretion to star formation rate $a$. Small $a<1$ lead to a metallicity derived from a closed box model, whereas in the case of $a>1,$ the metallicity is equal to the true stellar yield $y_{\rm true}$ (K\"{o}ppen \& Edmunds 1999). For gas fractions higher than 0.04, the difference between the two solutions is less than a factor of two. Moreover, Dalcanton (2007) showed that the effective yield $y_{\rm eff}=Z_{\rm gas}/\ln(1/f_{\rm gas})$, where $Z_{\rm gas}$ is the gas metallicity and $f_{\rm gas}$ the gas fraction, for disk galaxies with a rotation velocity higher than $100$~km\,s$^{-1}$ is approximately constant, that is, for these galaxies, a closed box model can be applied. We thus feel confident to estimate the gas phase metallicity based on a closed box model using the gas fraction: \begin{equation} \label{eq:alphacb} \alpha=\alpha_0 \times \big( \ln(\frac{\Sigma_{*}+\Sigma}{\Sigma})\big)^{-1}\ , \end{equation} where $\Sigma_{*}$ is the stellar surface density and $\alpha_0=3.6 \times 10^{7}~{\rm yr\,M_{\odot}pc^{-3}}$. Adopting a stellar and gas surface density of $\Sigma_{*}=40$~M$_{\odot}$pc$^{-2}$ and $\Sigma_{\rm gas}=10\,$~M$_{\odot}$pc$^{-2}$ at the solar radius of the Galaxy yields $\alpha_{\odot}=2.2 \times 10^{7}$~yr\,M$_{\odot}$pc$^{-3}$, which corresponds to the value used by Hollenbach \& Tielens (1997). 
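The closed-box estimate of $\alpha$ and the resulting molecular fraction can be evaluated as follows (Python sketch; the example surface densities are the solar-neighbourhood values quoted above, while the cloud density and turbulent timescale are arbitrary examples):
\begin{verbatim}
import math

ALPHA_0 = 3.6e7   # yr Msun pc^-3, molecule-formation constant

def alpha_coefficient(sigma_star, sigma_gas):
    """Closed-box estimate of the molecule-formation coefficient alpha
    (surface densities in Msun pc^-2)."""
    return ALPHA_0 / math.log((sigma_star + sigma_gas) / sigma_gas)

def molecular_fraction(t_turb_yr, rho_cl, sigma_star, sigma_gas):
    """f_mol from R_mol = t_turb / t_mol, with t_mol = alpha / rho_cl
    (rho_cl in Msun pc^-3)."""
    t_mol = alpha_coefficient(sigma_star, sigma_gas) / rho_cl
    r_mol = t_turb_yr / t_mol
    return r_mol / (1.0 + r_mol)

# Solar-neighbourhood check: alpha ~ 2.2e7 yr Msun pc^-3 for Sigma_* = 40 and
# Sigma_gas = 10 Msun pc^-2 (the values quoted in the text).
print(alpha_coefficient(40.0, 10.0))
print(molecular_fraction(1.0e7, 10.0, 40.0, 10.0))
\end{verbatim}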
Within this framework, the metallicity is \begin{equation} \label{eq:zzodot} Z/Z_{\odot}=(2.2 \times 10^7~{\rm yr\,M_{\odot}pc^{-3}})/\alpha\ . \end{equation} It turned out that the gas phase metallicities of the ULIRG and high-z star-forming galaxy samples are underestimated by up to a factor $10$ with our simple closed-box model (see Sect.~\ref{sec:metals}). To remedy the situation, we adopted the following heuristic recipe for all galaxies: \begin{equation} \label{eq:metulirgs} \alpha=\frac{3.6 \times 10^{7} \times \big( \ln(\frac{\Sigma_{*}+\Sigma}{\Sigma})\big)^{-1}}{{\rm max}\big( (2 \times 10^9~{\rm yr}\,\frac{\dot{M_*}}{M_{\rm gas}})^{\frac{1}{3}},1.0\big)}\ {\rm yr\,M_{\odot}pc^{-3}}\ . \end{equation} A possible explanation for this recipe is accretion of pre-enriched gas onto or into the galactic disks, in which case the closed-box model underestimates the metallicity. Less gas depletion in starburst galaxies might also play a role. Since we expect starburst galaxies to host galactic winds (see, e.g., Veilleux et al. 2005), the ejection of metals due to these outflows (leaky box model) is assumed to be much less significant than the addition of metals from accretion. \subsubsection{Molecular fraction based on photodissociation \label{sec:dissociation}} For the determination of the H$_2$ column density of a gas cloud, we take into account (i) photo-dissociation of H$_2$ molecules and (ii) the influence of the finite cloud lifetime on the H$_2$ formation. For the photo-dissociation of H$_2$ molecules, we follow the approach of Krumholz et al. (2008, 2009). These authors solved the idealized problem of determining the location of the atomic-to-molecular transition in a uniform spherical gas cloud bathed in a uniform isotropic dissociating radiation field. It is assumed that the transition from atomic to molecular gas occurs in an infinitely thin shell. The cloud has a constant inner molecular and outer atomic gas density. The inner molecular core and the outer atomic shell are assumed to be in thermal pressure equilibrium. The atomic gas density is taken to be the density of the cool neutral medium (Eq.~\ref{eq:cnmdens}). The H$_2$ to H{\sc i} ratio is \begin{equation} R_{\rm H_2} \simeq \big(1+(s/11)^3\big(\frac{125+s}{96+s}\big)^3\big)^{\frac{1}{3}}-1\ , \end{equation} with $s=(\Sigma_{\rm cl}/(1~{\rm M}_{\odot}\,{\rm pc^{-2}}))(Z/Z_{\odot})/(4\,\tau_{\rm H{\sc i}})$. The H{\sc i} optical depth is \begin{equation} \tau_{\rm H{\sc I}}=\frac{\chi}{4} \frac{2.5+\chi}{2.5+\chi {\rm e}}\ , \end{equation} with the dimensionless radiation field strength $\chi$, which we set to $\chi=3.1\,(\dot{\Sigma_*}/(10^{-8}~{\rm M}_{\odot}{\rm pc}^{-2}{\rm yr}^{-1}))/(n_{\rm cl}/(100~{\rm cm}^{-3}))$. Here, we assume a constant ratio between the inner molecular and outer atomic gas density, which is of the order of $10$. For $\tau_{\rm H{\sc I}}=\frac{1}{4}$ and solar metallicity, the transition between a molecular- and atomic-dominated cloud occurs at $\Sigma_{\rm cl} \simeq 20$~M$_{\odot}$pc$^{-2}$. The H$_2$ fraction of the cloud is $f_{\rm H_2}=R_{\rm H_2}/(1+R_{\rm H_2})$. This treatment ensures a proper separation of H{\sc i} and H$_2$ in spiral galaxies, that is, clouds of low density ($\sim 100$~cm$^{-3}$) and low column density ($\sim 10^{21}$~cm$^{-2}$) are fully atomic, whereas clouds of high density ($\geq 1000$~cm$^{-3}$) and high column density ($\geq 10^{22}$~cm$^{-2}$), that is, GMCs, are fully molecular.
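The photodissociation-based H$_2$ fraction can be evaluated directly from the expressions above. The following Python sketch (illustrative cloud parameters only) shows that a GMC-like cloud comes out essentially fully molecular while a diffuse, low-column-density cloud remains mostly atomic:
\begin{verbatim}
import math

def f_h2_dissociation(sigma_cl, z_over_zsun, sfr_surf, n_cl):
    """H2 fraction of a cloud from the prescription above; sigma_cl in
    Msun pc^-2, sfr_surf in Msun pc^-2 yr^-1, n_cl in cm^-3."""
    chi = 3.1 * (sfr_surf / 1.0e-8) / (n_cl / 100.0)
    tau_hi = chi / 4.0 * (2.5 + chi) / (2.5 + chi * math.e)
    s = sigma_cl * z_over_zsun / (4.0 * tau_hi)
    r_h2 = (1.0 + (s / 11.0) ** 3
            * ((125.0 + s) / (96.0 + s)) ** 3) ** (1.0 / 3.0) - 1.0
    return r_h2 / (1.0 + r_h2)

# GMC-like cloud (200 Msun/pc^2, 1000 cm^-3) versus diffuse cloud
# (10 Msun/pc^2, 100 cm^-3), both at solar metallicity and the Galactic SFR.
print(f_h2_dissociation(200.0, 1.0, 1.0e-8, 1000.0))
print(f_h2_dissociation(10.0, 1.0, 1.0e-8, 100.0))
\end{verbatim}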
In starburst regions (e.g., in ULIRGs), where gas densities and surface densities are much higher, this treatment has no effect, since the gas will be fully molecular. In a second step, we take into account the molecular fraction due to the finite lifetime of the gas cloud $f_{\rm mol}^{\rm life}=(t_{\rm ff}^{\rm cl}/t_{\rm mol}^{\rm cl})/(1+t_{\rm ff}^{\rm cl}/t_{\rm mol}^{\rm cl})$. The total molecular fraction of a cloud is $f_{\rm mol}=f_{\rm mol}^{\rm life} \times f_{\rm mol}^{\rm diss}$, where $f_{\rm mol}^{\rm diss}=f_{\rm H_2}$ is the fraction set by photodissociation. The molecular fraction due to the finite lifetime $f_{\rm mol}^{\rm life}$ has the highest influence on $f_{\rm mol}$ at large galactic radii. We now go from the H$_2$ mass fraction to the CO mass fraction. In an externally irradiated gas cloud, a significant H$_2$ mass may lie outside the CO region, that is, it is CO-dark in the outer regions of the cloud where the gas phase carbon resides in C or C$^+$. In this region, H$_2$ self-shields or is shielded by dust from UV photodissociation, whereas CO is photodissociated. Following Wolfire et al. (2010), the dark gas mass fraction for a cloud of constant density is \begin{equation} \label{eq:fdg} f_{\rm DG}=\frac{M_{\rm H_2}-M_{\rm CO}}{M_{\rm H_2}}=1-\big(1 - \frac{2 \Delta A_{\rm V, DG}}{A_{\rm V}}\big)^3 ,\end{equation} with \begin{equation} \begin{split} \Delta A_{\rm V,DG}=&0.53-0.045\,{\rm ln}\big(\frac{\dot{\Sigma_*}/(10^{-8}~{\rm M}_{\odot}{\rm pc}^{-2}{\rm yr}^{-1})}{n_{\rm cl}}\big)-\\ &0.097\,{\rm ln}\big(\frac{Z}{Z_{\odot}}\big) \end{split} ,\end{equation} and $A_{\rm V}=2\,(Z/Z_{\odot})N_{\rm cl}/(1.9 \times 10^{21}~{\rm cm}^{-2})$ where $N_{\rm cl}$ is the H$_2$ column density. The CO mass fraction is then $f_{\rm CO}=f_{\rm H_2} \, \big(1 - \frac{2 \Delta A_{\rm V, DG}}{A_{\rm V}}\big)^3$. Since the attenuation of the UV radiation field leading to Eq.~\ref{eq:fdg} is mainly caused by dust, we expect HCN to survive wherever the ISRF is attenuated enough to permit a high CO abundance. Thus, the HCN abundance should approximately follow the CO abundance, unless there is a strong X-ray/cosmic ray flux that is not attenuated by dust. In the absence of a proper theoretical model for the HCN dissociation, we thus assume the same dissociation rate for HCN as for CO. \subsection{Vertical disk structure \label{sec:vertical}} In the model, the disk scale height is determined unambiguously by the assumption of hydrostatic equilibrium and the turbulent pressure (Elmegreen 1989): \begin{equation} p_{\rm turb}=\rho v_{\rm turb}^{2} = \frac{\pi}{2} G \Sigma ( \Sigma + \Sigma_{*} \frac{v_{\rm turb}}{v_{\rm disp}^{*}})~, \label{eq:pressure} \end{equation} \noindent where $\rho$ is the average density, $v_{\rm turb}$ the gas turbulent velocity in the disk, $v_{\rm disp}^{*}$ the stellar vertical velocity dispersion, and $\Sigma$ and $\Sigma_{*}$ the gas and stellar surface densities. The stellar velocity dispersion is calculated by $v_{\rm disp}^{*}=\sqrt{2 \pi G \Sigma_{*} H_{*}}$, where the stellar vertical scale height is taken to be $H_{*}=l_{*}/7.3$ with $l_{*}$ being the stellar radial scale length (Kregel et al. 2002). We neglect thermal, cosmic ray, and magnetic pressure. \subsection{Treatment as an accretion disk \label{sec:accretiondisk}} The turbulent motion of clouds is expected to redistribute angular momentum in the gas disk, like an effective viscosity would do. This allows accretion of gas towards the center and makes it possible to treat the disk as an accretion disk (e.g., Pringle 1981).
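Returning to the CO-dark gas correction (Eq.~\ref{eq:fdg}), the CO mass fraction of a single cloud can be sketched as follows (Python; the cloud parameters are arbitrary examples, and fully dark clouds are clipped to $f_{\rm CO}=0$):
\begin{verbatim}
import math

def f_co(f_h2, n_cl, N_cl, z_over_zsun, sfr_surf):
    """CO mass fraction of a cloud: f_H2 reduced by the dark-gas factor of
    Wolfire et al. (2010); N_cl is the H2 column density in cm^-2."""
    d_av = 0.53 - 0.045 * math.log((sfr_surf / 1.0e-8) / n_cl) \
           - 0.097 * math.log(z_over_zsun)
    a_v = 2.0 * z_over_zsun * N_cl / 1.9e21
    core = max(1.0 - 2.0 * d_av / a_v, 0.0)     # clip fully CO-dark clouds
    return f_h2 * core ** 3

# Illustrative GMC: n = 1000 cm^-3, N = 1e22 cm^-2, solar metallicity,
# Galactic SFR surface density; a few tens of percent of the H2 is CO-dark.
print(f_co(1.0, 1000.0, 1.0e22, 1.0, 1.0e-8))
\end{verbatim}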
This gaseous turbulent accretion disk rotates in a given gravitational potential $\Phi$ with an angular velocity $\Omega=\sqrt{R^{-1}\frac{{\rm d}\Phi}{{\rm d}R}}$, where $R$ is the disk radius. The disk has an effective turbulent viscosity that is responsible for mass accretion and outward angular momentum transport. In this case, the turbulent velocity is driven by SN explosions, which stir the disk and lead to viscous transport of angular momentum. In addition, star formation removes gas from the viscous evolution. Following Lin \& Pringle 1987, the evolution of the gas surface density is given by \begin{equation} \frac{\partial \Sigma}{\partial t}=-\frac{1}{R}\frac{\partial}{\partial R}\left( \frac{(\partial/\partial R)[\nu \Sigma R^3 ({\rm d}\Omega/{\rm }dR)]}{({\rm d}/{\rm d}R)(R^2 \Omega)}\right) -\dot{\Sigma}_{*}+\dot{\Sigma}_{\rm ext}\ , \label{eq:linpringle} \end{equation} where $\nu$ is the gas disk viscosity, $\Omega$ the angular velocity, and $\dot{\Sigma}_{\rm ext}$ is the external mass accretion rate. In contrast to Lin \& Pringle (1987), we assume a continuous and non-zero external gas mass accretion rate. Forbes et al. (2014) presented an analytical approach based on Eq.~\ref{eq:linpringle}. They showed that galaxies tend to be in a slowly evolving equilibrium state wherein new accretion is balanced by star formation, galactic winds, and radial transport of gas through the disc by gravitational instability-driven torques. For a stationary gas disk in such an equilibrium, where star formation is balanced by external accretion, the local mass and momentum conservation together with $\dot{\Sigma}_{*}=\dot{\Sigma}_{\rm ext}$ yield: \begin{equation} \nu \Sigma=\frac{\dot{M}}{2\pi}\ , \label{eq:transport} \end{equation} where $\dot{M}$ is the mass-accretion rate within the disk. In the absence of external mass accretion, the gas disk can be assumed to be stationary as long as the star-formation timescale $t_*$ exceeds the viscous timescale $t_{\nu}=R^2/\nu$. For $\dot{\Sigma}_{\rm ext} < \dot{\Sigma}_{*}$ and $t_* < t_{\nu}$ , Eq.~\ref{eq:transport} is not valid. In this case the gas disk is rapidly turned into stars within the gas consumption time ($2$~Gyr, Evans 2008). Since most spiral galaxies still have a significant amount of gas, we think that spiral galaxies are generally not in this state. Solving the time dependent Eq.~\ref{eq:linpringle} is beyond the scope of this work and we apply Eq.~\ref{eq:transport}. The viscosity is related to the driving length scale and characteristic velocity of the SN-driven turbulence by $\nu=v_{\rm turb}l_{\rm driv}$ (VB03). Because the lifetime of a collapsing and star-forming cloud ($t_{\rm ff}^{l} < t_{\rm turb}^{l}$) is smaller than the turnover time of the large-scale eddy ($l_{\rm driv}/v_{\rm turb}$), the turbulent and clumpy ISM can be treated as one entity for the viscosity description. \subsection{Clumpiness \label{sec:clumpiness}} A critical factor in the model is the relationship between the density of individual clouds, $\rho_{\rm cl}$, and the average density of the disk, $\rho$. It is the density of individual clouds that is relevant to the timescale for star formation. In this model, the two are related by the volume filling factor, $\Phi_{\rm V,}$ so that $\rho_{\rm cl}=\Phi_{\rm V}^{-1}\rho$. Here, $\rho_{\rm cl}$ refers to the density of the largest self-gravitating structures in the disk, so that for these structures, the turbulent crossing time and gravitational free-fall time are equal. 
The scale of such a cloud, $l_{\rm cl}$, is smaller than the driving length scale, $l_{\rm driv}$, by a factor $\delta$, which we do not know {\em a priori}. Following Vollmer \& Leroy (2011), we set $\delta=5$. Shear, due to differential galactic rotation, could stabilize clouds, modifying the timescale for collapse. However, this effect is mainly important when the ratio of the cloud to disk surface density is lower than the ratio of cloud to disk velocity dispersion, which is not the case over most of the disk in a typical spiral. Typical GMC surface densities are $\sim 200$~M$_{\odot}$pc$^{-2}$ (Solomon et al. 1987), whereas disk surface densities only exceed $100$~M$_{\odot}$pc$^{-2}$ in the very center of spiral galaxies (Leroy et al. 2008). We can calculate the turbulent timescale for the cloud, $t_{\rm turb}^l$, for a fractal ISM: \begin{equation} t_{\rm turb}^{l}=\delta^{-\frac{2}{3}-\frac{3-D}{3}} \,l_{\rm driv}/v_{\rm turb}\ , \end{equation} where $D$ is the fractal dimension (see, e.g., Frisch 1995) of the ISM. We assume $D=2$ for a compressible, self-gravitating fluid, which is close to the findings of Elmegreen \& Falgarone (1996). Once $\delta$ and thus $t_{\rm turb}^l$ are specified, we can solve for the density of the corresponding scale by setting $t_{\rm ff}^l = t_{\rm turb}^l$. The volume filling factor is then defined by comparing $\rho_{\rm cl}$ and $\rho$. Once the volume filling factor is known (from $\delta$ or $l_{\rm cl}$), we can calculate the local star-formation rate, $\dot{\rho_*}$, via \begin{equation}\label{eq:starform} \dot{\rho}_{*} = \Phi_{\rm V} \frac{\rho}{t_{\rm ff}^{\rm l}}\ , \end{equation} \noindent where $t_{\rm ff}^l$ is the local free-fall timescale determined by setting $t_{\rm ff}^l = t_{\rm turb}^l$; it corresponds to the contraction timescale $t_{\rm c}=\sqrt{\pi/(G \rho_{\rm cl})}$ (Ostriker et al. 1999) of clouds of constant density in Virial equilibrium. Since, in our model, the lifetime of a cloud is the free-fall time, as suggested by Ballesteros-Paredes \& Hartmann (2007), this implies that during the cloud lifetime, approximately $\dot{\rho}_{*,{\rm cl}}\,t_{\rm ff}^l/\rho_{\rm cl}=\Phi_{\rm V} \sim 1$\,\% of the cloud mass turns into stars. \noindent The vertically integrated star-formation rate in the inner disk where $t_{\rm sf}^{\rm l}=t_{\rm ff}^{\rm l}=t_{\rm turb}^{\rm l}=\delta^{-1} t_{\rm turb}$ is \begin{equation}\label{eq:starformm1} \dot{\Sigma}_{*} = \Phi_{\rm V} \frac{\rho}{t_{\rm ff}^{\rm l}} l_{\rm driv} = \delta \Phi_{\rm V} \rho v_{\rm turb}\ , \end{equation} \noindent that is, it is the mass flux density of the turbulent ISM into the regions of star formation. \subsection{Thermal dust emission \label{sec:dustemission}} The dust temperature $T_{\rm d}$ of a gas cloud of given density and size illuminated by a local mean radiation field is calculated by solving Eq.~\ref{eq:dusttemp}. With the dust mass absorption coefficient of Eq.~\ref{eq:kappa}, the dust optical depth is \begin{equation} \tau(\lambda)= \kappa(\lambda)\,\Sigma_{\rm cl} (GDR)^{-1}\ , \end{equation} where $\Sigma_{\rm cl}$ is the cloud surface density in g/cm$^2$.
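The per-cloud quantities entering the thermal dust emission can be evaluated from the expressions above. The following Python sketch (illustrative inputs only) solves Eq.~\ref{eq:dusttemp} in the simplified case where the gas--dust collisional term is neglected and evaluates the term $(1-{\rm e}^{-\tau})B(\lambda,T_{\rm d})$ for a single cloud:
\begin{verbatim}
import math

H_P, C_L, K_B = 6.626e-27, 2.998e10, 1.381e-16     # cgs constants

def dust_temperature(f_over_f0, i_over_i0, z_over_zsun=1.0):
    """T_d from Gamma_d = Lambda_d when gas-dust collisions are neglected."""
    return (3.9e-24 * f_over_f0 * i_over_i0
            / (7.5e-31 * z_over_zsun)) ** (1.0 / 5.5)

def kappa_cgs(lam_cm, beta=1.5):
    """kappa(lambda) with kappa_0 = 0.48 m^2/kg at 250 micron, in cm^2/g."""
    return 4.8 * (250.0e-4 / lam_cm) ** beta

def cloud_emission(lam_cm, sigma_cl, t_d, gdr=100.0):
    """(1 - exp(-tau)) B(lambda, T_d) for one cloud; sigma_cl in g/cm^2,
    gdr is the (solar-metallicity) gas-to-dust mass ratio."""
    tau = kappa_cgs(lam_cm) * sigma_cl / gdr
    x = H_P * C_L / (lam_cm * K_B * t_d)
    planck = 2.0 * H_P * C_L ** 2 / lam_cm ** 5 / (math.exp(x) - 1.0)
    return (1.0 - math.exp(-tau)) * planck

# Example: local-ISRF-like radiation field (F/F0 ~ 2), moderate attenuation,
# and a cloud with Sigma_cl ~ 0.02 g/cm^2 (~100 Msun/pc^2) at 250 micron.
t_d = dust_temperature(2.0, 0.8)
print(t_d, cloud_emission(250.0e-4, 0.02, t_d))
\end{verbatim}
For local-ISRF-like inputs this simplified balance yields $T_{\rm d} \approx 18$~K, close to the value quoted in Sect.~\ref{sec:gdtemp}.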
The infrared emission at a given wavelength at a given galactic radius $R$ is calculated in the following way: \begin{equation} \label{eq:idust} I_{\rm dust}(\lambda)=\sum_{\rm i=1}^{\rm N} \big(\Phi_{\rm A} \big)_i\, (f_{\rm mass})_i\, \big(1-\exp(-\tau(\lambda))\big)_i B(\lambda,T_{\rm d})_i\ , \end{equation} where $B(\lambda,T_{\rm d})$ is the Planck function and $\Phi_{\rm A}=1.5\,(\Delta M/M)\,(\Sigma/\Sigma_{\rm cl})$ the area filling factor. The factor $1.5$ takes into account that the mean cloud surface density is $1.5$ times lower than the surface density in the cloud center $\Sigma_{\rm cl}=\rho_{\rm cl}l_{\rm cl}$. The integration of the Eq.~\ref{eq:idust} yields the total infrared emission at a given galactic radius $R$: \begin{equation} I_{\rm TIR}(R)=\int_{10\,\mu{\rm m}}^{1\,{\rm mm}} I_{\rm dust}(\lambda) {\rm d}\lambda\ . \label{eq:idust1} \end{equation} At a given wavelength, $\lambda, $ the effective background temperature of the thermal dust emission $T_{\rm eff\ dust}$ is determined by $I_{\rm dust}(\lambda)=B(\lambda,T_{\rm eff\ dust})$. We do not subtract the cosmic infrared background, because it is always much smaller than the dust emission within the region of interest. The total infrared luminosity is given by \begin{equation} L_{\rm TIR}=2\pi \int_0^{\rm R_0} I_{\rm TIR}(R)\,R\,{\rm d}R\ . \label{eq:idust2} \end{equation} To reproduce the observed total infrared luminosity, it is essential to take into account the diffuse warm neutral medium, which is not taken into account for the molecular line emission, because the gas is in atomic form. We do so by explicitly calculating the dust infrared emission based on the proper dust temperature (Eq.~\ref{eq:dusttemp}), density of $\rho/2$, area filling factor $\Phi^{\rm WNM}_{\rm A}=(1-\Phi_{\rm A}^{\rm CNM})$, and gas mass fraction of $(\Delta M/M)_{\rm WNM}=\big(1-(\Delta M/M)_{\rm CNM+mol}\big)$. \subsection{Molecular line emission \label{sec:lineemission}} A molecular line source is usually observed by chopping the telescope's beam between on- and off-source positions and measuring the difference in antenna temperatures. In general, the difference in brightness temperatures is \begin{equation} \Delta T^*_{\rm A}=\big(1-{\rm e}^{-\tau}\big)\frac{h\nu}{k}\big(\frac{1}{{\rm e}^{h\nu/kT_{\rm ex}}-1}- \frac{1}{{\rm e}^{h\nu/kT_{\rm bg}}-1}\big)\ , \end{equation} where $\tau$ is the optical depth of the line, $\nu$ the frequency of the observations, $h$ and $k$ the Planck and Boltzmann constants, and $T_{\rm ex}$ and $T_{\rm bg}$ the excitation and background brightness temperatures, respectively. Considering only a single collider (H$_2$) for simplicity, the excitation temperature is \begin{equation} \frac{1}{T_{\rm ex}}=\big(\frac{1}{T_{\rm g}}+(\frac{A_{ul}}{n q_{ul}}\frac{T_{\rm bg}}{T_*})\frac{1}{T_{\rm bg}}\big)/(1+\frac{A_{ul}}{nq_{ul}}\frac{T_{\rm bg}}{T_*})\ , \end{equation} where $T_*=h \nu_{ul}/k$, $n$ is the gas density, $nq_{ul}$ the collisional de-excitation rate, and $A_{ul}$ the Einstein coefficients of the transition $ul$. The background brightness temperature $T_{\rm bg}$ is the sum of the effective emission temperatures of the galaxy's dust $T_{\rm eff\ dust}$ and the cosmic background at the galaxy redshift $T_{\rm CMB}$ (see Eq.~17 of da Cunha et al. 2013). For optically thin transitions, the ratio of the radiative and collisional rates is just the ratio of the density to the critical density for the transition \begin{equation} n_{\rm crit}=\frac{A_{ul}}{q_{ul}}\ . 
\end{equation} We use $T_{\rm CMB}=2.73$~K for the local spiral galaxies and ULIRGs, $T_{\rm CMB}=6.0$~K for the high-z star-forming galaxies, and $T_{\rm CMB}=8.19$~K for the submillimeter galaxies. This corresponds to a cosmic microwave background of $T_{\rm CMB}=2.73\,(1+z)$~K (see, e.g., Carilli \& Walter 2013) and mean redshifts of $\langle z \rangle =0,\ 1.2,\ 2$, respectively. For optically thick transitions, the upper-level population can be enhanced due to absorption of line photons, leading to excitation temperatures higher than those expected simply due to H$_2$ collisions, since the line photons emitted upon spontaneous decay cannot easily escape the cloud. This so-called radiative trapping of the line photons builds up the radiation field at the frequency of the line, leading to enhanced excitation of the upper state via photon absorption. The escape probability formalism can be used to treat this optically thick situation (see, e.g., Scoville 2013). This formalism is applicable to situations in which systematic velocity gradients are large compared to the small-scale thermal motions. The line photons from one region of the cloud are then incoherent with other regions due to the Doppler shift; they can then only interact with molecules in the local region near where they were emitted. In the photon trapping regime, the spontaneous decay rates ($A_{ul}$) and thus the critical density ($n_{\rm crit}$) used in analyzing the equilibrium molecular excitation are reduced by a factor $\beta$ equal to the effective probability for escape of line photons from the emission region (Scoville \& Solomon 1974; Goldreich \& Kwan 1974). For a spherical cloud of uniform density, Draine (2011) gives \begin{equation} \beta=\frac{1}{1+0.5 \tau}\ . \end{equation} The critical density in the optically thick case is then \begin{equation} n_{\rm crit}=\beta\,\frac{A_{ul}}{q_{ul}}\ . \end{equation} For our analytic analysis, we follow Scoville et al. (2015) and use the sum of the collision rate coefficients out of the upper level $J$ to any other rotational level (both below and above $J$) since all of these transitions couple the level to the gas kinetic temperature. For the determination of the optical depth of a molecular emission line, we follow Draine (2011). The line-center optical depth, from cloud center to edge, for a transition from level $J+1$ to level $J$ is \begin{equation} \tau_{(J+1),J}=n_J r_{\rm cl} \big(1-\frac{n_{(J+1)}}{n_J}\frac{g_J}{g_{(J+1)}}\big) \frac{\lambda^3}{8 \pi^{\frac{3}{2}} v_{\rm turb}^{\rm cl}}\frac{g_{(J+1)}}{g_J}A_{(J+1),J}\ , \end{equation} where $r_{\rm cl}=l_{\rm cl}/2$ is the cloud radius, $\lambda$ the wavelength of the observations, and $g_J=2J+1$ the transition weights. Following Draine (2011), we adopt the following expression for the CO line optical depth: \begin{equation} \begin{split} \tau_{(J+1),J}=&281\,n_3R_{19}\frac{Z}{Z_{\odot}}\big(\frac{n({\rm CO})/n_{\rm H}}{7\times 10^{-5}}\big)\big(\frac{n({\rm CO},J)}{n({\rm CO})}\big)\\ &\big(\frac{2~{\rm km\,s^{-1}}}{v_{\rm turb}^{\rm cl}}\big)\big(1-\frac{n_{(J+1)}}{n_J}\frac{g_J}{g_{(J+1)}}\big)\ , \end{split} \end{equation} where $n_3=n/(10^3~{\rm cm^{-3}})$ and $R_{19}=r_{\rm cl}/(10^{19}~{\rm cm})$. 
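The two-level excitation with radiative trapping and the resulting brightness-temperature contrast can be sketched as follows (Python; the CO(1--0) Einstein coefficient and the effective collision rate are approximate, order-of-magnitude values, not the tabulated rates used in the model):
\begin{verbatim}
import math

K_B, H_P = 1.381e-16, 6.626e-27     # cgs constants

def excitation_temperature(t_gas, t_bg, n_h2, a_ul, q_ul, t_star, beta=1.0):
    """Two-level excitation temperature; radiative trapping is included by
    reducing A_ul by the escape probability beta."""
    y = beta * a_ul / (n_h2 * q_ul) * t_bg / t_star
    return (1.0 + y) / (1.0 / t_gas + y / t_bg)

def delta_t_antenna(tau, nu_hz, t_ex, t_bg):
    """Brightness-temperature contrast of the line against the background."""
    t_star = H_P * nu_hz / K_B
    occ = lambda t: 1.0 / (math.exp(t_star / t) - 1.0)
    return (1.0 - math.exp(-tau)) * t_star * (occ(t_ex) - occ(t_bg))

# Illustrative CO(1-0) values: A_10 ~ 7.2e-8 s^-1, an effective collision rate
# of ~3e-11 cm^3 s^-1, T* = 5.53 K, a cloud of 500 cm^-3 at T_g = 20 K.
tau = 5.0
beta = 1.0 / (1.0 + 0.5 * tau)
t_ex = excitation_temperature(20.0, 2.73, 500.0, 7.2e-8, 3.0e-11, 5.53, beta)
print(t_ex, delta_t_antenna(tau, 115.27e9, t_ex, 2.73))
\end{verbatim}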
The fraction of molecules of species $X$ in a given rotational level is \begin{equation} \begin{split} \frac{n({\rm X},J)}{n({\rm X})}=&\frac{(2J+1){\rm e}^{-B_0J(J+1)/kT_{\rm ex}}}{\sum_J (2J+1){\rm e}^{-B_0J(J+1)/kT_{\rm ex}}}\simeq \\ &\simeq \frac{(2J+1){\rm e}^{-B_0J(J+1)/kT_{\rm ex}}}{\big(1+(kT_{\rm ex}/B_0)^2\big)^{\frac{1}{2}}}\ , \end{split} \end{equation} where $B_0$ is the rotation constant of a molecule of species $X$. In summary, we consider two-level molecular systems in which the level populations are determined by a balance of collisions with H$_2$, spontaneous decay and line photon absorption, and stimulated emission with $\tau > 1$. Our final expression for the CO line optical depth reads \begin{equation} \begin{split} &\tau_{(J+1),J}^{\rm CO}=393\,n_3R_{19}\big(\frac{2~{\rm km\,s^{-1}}}{v_{\rm turb}^{\rm cl}}\big)\frac{Z}{Z_{\odot}}\\ &\big(1-{\rm e}^{-B_0(J+1)(J+2)/kT_{\rm ex}}\big)\big(\frac{(2J+1){\rm e}^{-B_0J(J+1)/kT_{\rm ex}}}{(1+(\frac{kT_{\rm ex}}{B_0})^2)^{\frac{1}{2}}}\big)\\ &\big(\frac{2(J+1)+1}{3(2J+1)}\big) \frac{A_{(J+1),J}^{\rm CO}}{A_{1,0}^{\rm CO}} \big(\frac{5.53~{\rm K}}{B_0(J+1)(J+2)/k}\big)^3\ , \end{split} \end{equation} where the rotation temperature is $B_0/k=2.77$~K. We use a normalization, which is different from that of Draine (2011), because we assume the canonical $x({\rm CO})=10^{-4}$. In a second step, the CO abundances are calculated using the chemical network (see Sect.~\ref{sec:network}). For HCN, we neglected the hyperfine structure for simplicity. In this simplified treatment, we can write \begin{equation} \begin{split} &\tau_{(J+1),J}^{\rm HCN}=87\,n_3R_{19}\big(\frac{2~{\rm km\,s^{-1}}}{v_{\rm turb}^{\rm cl}}\big)\frac{Z}{Z_{\odot}}\\ &\big(1-{\rm e}^{-B_0(J+1)(J+2)/kT_{\rm ex}}\big)\big(\frac{(2J+1){\rm e}^{-B_0J(J+1)/kT_{\rm ex}}}{(1+(\frac{kT_{\rm ex}}{B_0})^2)^{\frac{1}{2}}}\big)\\ &\big(\frac{2(J+1)+1}{3(2J+1)}\big)\frac{A_{(J+1),J}^{\rm HCN}}{A_{1,0}^{\rm HCN}} \big(\frac{4.25~{\rm K}}{B_0(J+1)(J+2)/k}\big)^3\ , \end{split} \end{equation} where we assumed an HCN abundance of $x({\rm HCN})=2 \times 10^{-8}$. In a second step, the HCN abundances are calculated using the chemical network (see Sect.~\ref{sec:network}). The rotation constant ($B_0/k=2.13$~K) and Einstein coefficients are those of HCN. In the present work, we only investigate the HCN(1--0) transition. The rotation constants, Einstein coefficients, and collision rates were taken from the Leiden Atomic and Molecular Database (LAMDA; Sch\"{o}ier et al. 2005). The CO collision rates were provided by Yang et al. (2010). The HCN collision rates were taken from the He--HCN rate coefficients calculated by Dumouchel et al. (2010), scaled by a factor of $1.36$ to go to HCN--H$_2$ (see Green \& Thaddeus 1976). The HCO$^+$ emission was calculated in the same way. We verified that the model brightness temperature is consistent (within $10$-$20$\,\%) with the brightness temperature calculated by RADEX (van der Tak et al. 2007) for densities, gas kinetic temperatures, column densities, and linewidths typical for giant molecular clouds. \subsection{HCN infrared pumping \label{sec:irpumping}} HCN has a large dipole moment and therefore a high critical density; it does not trace dense gas, however, if there is another excitation mechanism that is faster than the H$_2$ collisions and independent of gas density. One such excitation path is through a vibrationally excited state, to which molecules can be pumped by infrared radiation (Carroll \& Goldsmith 1981).
The first vibrationally excited state of HCN is its bending state ($v_2=1$) $1024$~K above the ground with an emitting wavelength of $\lambda=14~\mu$m (Sakamoto et al. 2010). Following Sakamoto et al. (2012), we define an equivalent gas density \begin{equation} n_{\rm equiv}=\exp(-T_0/T_{\rm vib})\,A_{\rm vib}/\gamma_{J,J-1}\ , \end{equation} where $T_0=1024$~K corresponds to the energy gap between the two vibrational levels $v=0$ and $1$, $A_{\rm vib}=3.7$~s$^{-1}$ is the Einstein coefficient for the vibrational transition, and $\gamma_{J,J-1}$ is the collisional rate coefficient. As already stated in Sect.~\ref{sec:lineemission}, we follow Scoville et al. (2015) and use the sum of the collision rate coefficients out of the upper level $J$ to any other rotational level (both below and above $J$) since all of these transitions couple the level to the gas kinetic temperature. $T_{\rm vib}$ is the equivalent blackbody temperature at $\lambda=14~\mu$m of the local and global background radiation. HCN IR-pumping is implemented in the model by replacing the cloud density $n_{\rm cl}$ by $n_{\rm equiv}$ if $n_{\rm equiv} > n_{\rm cl}$ in the HCN emission calculations (Sect.~\ref{sec:lineemission}). HCO$^+$ has a similar vibrational bending state at $\lambda=12~\mu$m. The associated excitation through radiative pumping is less than half as significant as that provided by the HCN molecule (Imanishi et al. 2016). For this work, we did not take IR-pumping of HCO$^+$ into account. \subsection{Photon-dominated regions \label{sec:pdr}} Photodissociation regions (PDRs) are regions of a gas cloud where the physics and chemistry is dominated by penetrating FUV photons. The structure of PDRs can be described by a plane-parallel, semi-infinite slab illuminated by an intense FUV field $G_0$, measured in units of the Habing (1968) interstellar radiation field ($=1.6 \times 10^{-3}$~ergs\,cm$^{-2}$s$^{-1}$). At the surface of the cloud, an atomic surface layer is created by the incoming FUV photons. The transition from atomic to molecular hydrogen occurs at the depth approximately $A_{\rm V} \sim 2$. As the FUV photons are attenuated by the dust, the phase of carbon shifts from C$^+$ to C and CO at $A_{\rm V} \sim 4$. Deep inside the cloud ($A_{\rm V} > 10$), HCN and HCO$^+$ are formed. Wolfire et al. (1993) found that the integrated CO(1--0) luminosity of giant molecular clouds increases by only $\sim 10$\,\% between $G_0=1$ and $G_0=100$ models. The luminosities are similar because the higher incident FUV field forces clumps to become optically thick in the CO(1--0) transition deeper into the cloud where dust extinction lowers the radiation field to a value near $G_0 \sim 1$. This results in similar gas temperatures in the optically thick clumps for both models. Thus, the local FUV radiation field plays a minor role as long as $G_0 > 1$. Loenen et al. (2008) subdivided PDRs into two types according to the cycle of star formation; UV-dominated high-density ($n \ge 10^{5}$~cm$^{-3}$) PDRs from deeply embedded young stars, and lower-density ($n=10^{4.5}$~cm$^{-3}$) PDRs that are dominated by mechanical feedback from supernova shocks. Due to the short duty cycle of the evolutionary stage involving young stars compared to the second stage involving supernovae, most of the luminous infrared galaxies of their sample are observed to be in the later stage of their evolution. 
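As an aside to Sect.~\ref{sec:irpumping}, the equivalent-density criterion for infrared pumping can be evaluated as follows (Python sketch; the summed collisional rate coefficient used here is only an order-of-magnitude placeholder):
\begin{verbatim}
import math

def n_equivalent(t_vib_k, gamma_coll=2.0e-11, t0=1024.0, a_vib=3.7):
    """Equivalent density for HCN infrared pumping; gamma_coll is the summed
    collisional rate coefficient in cm^3 s^-1 (placeholder value)."""
    return math.exp(-t0 / t_vib_k) * a_vib / gamma_coll

# Pumping matters where n_equiv exceeds the actual cloud density; the
# equivalent density rises steeply with the 14-micron brightness temperature.
for t_vib in (50.0, 100.0, 200.0):
    print(t_vib, n_equivalent(t_vib))
\end{verbatim}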
Since we can reproduce the multi-transition CO, HCN(1--0), and HCO$^+$ luminosities without the inclusion of PDRs (Sect.~\ref{sec:results}), we suggest that the conclusion of Loenen et al. (2008) is also valid for the ULIRGs, smm, and high-z star-forming galaxies. Within our model framework, PDRs are taken into account through the dissociation of molecules (Sect.~\ref{sec:dissociation}), but not through their molecular line emission. \subsection{Galactic winds \label{sec:winds}} Galactic winds are driven by multiple supernova explosions during a starburst or by AGN activity. The following statements are based on the review by Veilleux et al. (2005). The minimum star-formation rate that creates a galactic wind is $\dot{M}_* \sim 5$~M$_{\odot}$yr$^{-1}$ or $\dot{\Sigma}_* \sim 10^{-3}$~M$_{\odot}$kpc$^{-2}$yr$^{-1}$. Galactic winds remove mass and, if they are magnetized, angular momentum from the galactic gas disk. Mass outflow rates range between $0.1$~M$_{\odot}$yr$^{-1}$ and $10$~M$_{\odot}$yr$^{-1}$, with a trend for the rate to increase with increasing star-formation rate. The mass-to-light conversion factors are highly uncertain for galactic winds. The ratio between the mass outflow and star-formation rate varies between $0.01$ and $10$. By writing $\nu \Sigma=\dot{M}/(2\,\pi)$ (Eq.~\ref{eq:diskeq}), we ignored galactic external accretion and outflows (wind). Including galactic winds, Eq.~\ref{eq:linpringle} becomes \begin{equation} \frac{\partial \Sigma}{\partial t}=-\frac{1}{R}\frac{\partial}{\partial R}\left( \frac{(\partial/\partial R)[\nu \Sigma R^3 ({\rm d}\Omega/{\rm }dR)]}{({\rm d}/{\rm d}R)(R^2 \Omega)}\right) -\dot{\Sigma}_{*}-\dot{\Sigma}_{\rm wind}+\dot{\Sigma}_{\rm ext}\ . \label{eq:linpringle1} \end{equation} Within the framework of our equilibrium model, we thus assumed $\dot{\Sigma}_{\rm ext}=\dot{\Sigma}_{*}+\dot{\Sigma}_{\rm wind}$. This might not be true for the compact starbursts as the ULIRGs. In this case, the constant $\dot{M}/(2\,\pi)$ links the turbulent dissipation timescale $t_{\rm turb}$ to the star-formation rate per unit surface $\dot{\Sigma}_*$ (combining the mass, momentum, and energy conservation equations of Eq.~\ref{eq:diskeq}): \begin{equation} \dot{M}=2 \pi \xi \dot{\Sigma}_* t_{\rm turb}^2\ . \end{equation} Galactic winds also remove cosmic ray particles from the galactic disk. Following Suchkov et al. (1993; see also Papadopoulos 2010), the CR energy densities in starburst galaxies driving a galactic wind scale with respect to that of the Galaxy as \begin{equation} \frac{U_{\rm CR}}{U_{\rm CR, Gal}} \sim \frac{\dot{\Sigma}_*}{\dot{\Sigma}_{\rm *, Gal}} \times \frac{v_{\rm diff}}{v_{\rm wind}}\ , \end{equation} where $v_{\rm diff}$ is the diffusion velocity at which CRs escape from quiescent disks such as the Milky Way while $v_{\rm wind}$ is the velocity of a star-formation-induced wind at which CRs are advected out of the star-forming regions. The diffusion velocity is set by the Alfven velocity $v_{\rm A}=B/\sqrt{4\pi \rho}$, where $B$ is the magnetic field. Here, we assumed that the SN explosion rate is proportional to the star-formation rate. For the chemical network (Sect.~\ref{sec:network}), we assumed $\frac{U_{\rm CR}}{U_{\rm CR, Gal}}=40$ for the ULIRGs, smm, and high-z star-forming galaxies. Since our model yields $\langle \frac{\dot{\Sigma}_{\rm *}}{\dot{\Sigma}_{\rm *, local\ spirals}} \rangle \sim 6000$ for the ULIRGs and smm-galaxies, the mean ratio is $\langle \frac{v_{\rm diff}}{v_{\rm wind}} \rangle \sim 7 \times 10^{-3}$. 
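The quoted numbers combine as follows (a simple arithmetic check using the values assumed in the text):
\begin{verbatim}
# Cosmic-ray scaling check (values taken from the text).
u_cr_ratio = 40.0      # assumed U_CR / U_CR,Gal for ULIRGs and smm-galaxies
sfr_ratio = 6000.0     # model mean SFR surface density relative to local spirals
v_diff = 10.0          # km/s, assumed CR diffusion (Alfven) velocity

ratio = u_cr_ratio / sfr_ratio          # v_diff / v_wind ~ 7e-3
print(ratio, v_diff / ratio)            # implied wind velocity ~ 1500 km/s
\end{verbatim}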
Assuming $v_{\rm diff}=10$~km\,s$^{-1}$ gives a mean wind velocity of $v_{\rm wind} \sim 1500$~km\,s$^{-1}$. This represents approximately four times the escape velocity, two times the observed mean velocity of a molecular wind (Feruglio et al. 2015, Sakamoto et al. 2015), and is close to terminal wind velocities (e.g., Veilleux et al. 2005). For the ULIRG model, we decided to apply the full CR heating $\frac{U_{\rm CR}}{U_{\rm CR, Gal}} = \frac{\dot{\Sigma}_*}{\dot{\Sigma}_{\rm *, Gal}}$ to the non-self-gravitating clouds. This modification did not change the CO flux luminosities, but increased the HCN(1--0) luminosities by a factor of $\sim 1.5$ putting them closer to the observed luminosities. This treatment implies that (i) the initially spherical wind escapes mostly vertically from the star-forming regions without touching many of the non-self-gravitating clouds (``champagne effect''), (ii) the SN wind that blows in the direction of the galactic disk is absorbed by the non-self-gravitating clouds, and (iii) the solid angle around the star-forming region occupied by non-self-gravitating clouds is substantial (i.e., a wind opening angle $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 90^{\odot}$). Further testing of this hypothesis on local starburst galaxies is needed to clarify the situation. In the present model, the assumed CR ionization rate of the high-z star-forming galaxies is consistent with an absence of a galactic wind. In a future project, we will investigate the influence of a lower CR energy density due to a wind on the chemical network and thus the molecular line emission of these galaxies. \section{Method \label{sec:method}} The outline of our model/method is presented in Fig.~\ref{fig:bild1}. \begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics{fig4bernd1.ps}} \resizebox{\hsize}{!}{\includegraphics{bild1.ps}} \caption{Schematic outline of our clumpy star-forming galactic disk model. \label{fig:bild1}} \end{figure*} The VB03 model yields the following system of equations to describe a turbulent clumpy galactic accretion disk: \begin{equation} \label{eq:diskeq} \begin{gathered} \nu = v_{\rm turb} l_{\rm driv}\ ,\\ \nu \Sigma = \frac{\dot{M}}{2\pi}\ ,\\ \Sigma = \rho\,H\ ,\\ p_{\rm turb}=\rho v_{\rm turb}^{2} = \frac{\pi}{2} G \Sigma ( \Sigma + \Sigma_{*} \frac{v_{\rm turb}}{v_{\rm disp}^{*}})~,\\ Q = \frac{v_{\rm turb} \Omega}{\pi G \Sigma}\ ,\\ \Sigma \nu \frac{v_{\rm turb}^{2}}{l_{\rm driv}^{2}} = \xi\,\dot{\Sigma}_{*} + \frac{\dot{M}}{2\pi} \Omega^2\ ,\\ \dot{\Sigma}_{*} = \Phi_{\rm V} \frac{\rho}{t_{\rm SF}^{\rm l}} l_{\rm driv}\ ,\\ t_{\rm SF}^{\rm l}=\sqrt{\frac{3\pi}{32G\rho_{\rm cl}}}=t_{\rm turb}^{\rm l}\ . \end{gathered} \end{equation} The meaning of the variables is given in Table~\ref{tab:parameters}. For the global comparison between the observed and the model radial profiles, we solve the set of equations given above numerically. The free parameters of the analytical model are the Toomre parameter $Q$ and the disk mass accretion rate $\dot{M}$. For the local spirals, we set $Q$ to values derived in Vollmer \& Leroy (2011) (see Table~\ref{tab:gleroy}). For the ULIRGs, high-z star-forming galaxies, and submillimeter galaxies, we assume a constant $Q=1.5$ (see Tables~\ref{tab:gulirg}, \ref{tab:gphibbs}, \ref{tab:gbzk}). The mass accretion rate $\dot{M}$ is determined by the total star-formation rate of the galactic disks (see Eqs.~\ref{eq:diskeq}). 
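At a single radius, Eqs.~\ref{eq:diskeq} can be reduced to a one-dimensional root find in $v_{\rm turb}$. The following Python sketch (not the numerical scheme used for the radial profiles; it assumes $D=2$, includes the accretion energy term, and takes $\dot{M}$, $Q$, and illustrative Milky-Way-like inputs as given) shows this reduction:
\begin{verbatim}
import math
from scipy.optimize import brentq

G = 4.498e-15          # pc^3 Msun^-1 yr^-2
XI = 4.6e-8            # (pc/yr)^2
KMS = 1.0227e-6        # pc/yr per km/s

def disk_solution(omega, q, sigma_star, vdisp_star, mdot, delta=5.0):
    """Solve the turbulent-disk equations at one radius for v_turb
    (Msun, pc, yr units); a reduced, single-radius sketch of Eqs. above."""
    def residual(v):
        sigma = v * omega / (math.pi * G * q)                   # Toomre Q
        rho = (math.pi * G * sigma
               * (sigma + sigma_star * v / vdisp_star) / (2.0 * v ** 2))
        l_driv = mdot / (2.0 * math.pi * sigma * v)             # nu Sigma = Mdot/2pi
        t_ff = l_driv / (delta * v)                             # = t_turb^l for D = 2
        rho_cl = 3.0 * math.pi / (32.0 * G * t_ff ** 2)
        phi_v = rho / rho_cl
        sfr_from_clouds = delta * phi_v * rho * v
        sfr_from_energy = (sigma * v ** 3 / l_driv
                           - mdot / (2.0 * math.pi) * omega ** 2) / XI
        return sfr_from_clouds - sfr_from_energy
    v = brentq(residual, 2.0 * KMS, 100.0 * KMS)
    sigma = v * omega / (math.pi * G * q)
    return {"v_turb_kms": v / KMS, "sigma_gas": sigma,
            "l_driv_pc": mdot / (2.0 * math.pi * sigma * v)}

# Illustrative Milky-Way-like inputs: Omega at R = 8 kpc for v_rot = 200 km/s,
# Q = 3, Sigma_* = 40 Msun/pc^2, stellar dispersion ~19 km/s, Mdot = 0.2 Msun/yr.
print(disk_solution(200.0 * KMS / 8000.0, 3.0, 40.0, 19.0 * KMS, 0.2))
\end{verbatim}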
For the calculation of the infrared dust emission and molecular line emission from the galactic disk, we divide the ISM into two density bins (see Sect.~\ref{sec:scaling}): (i) non-self-gravitating clouds with densities $\rho_2$ equal to or higher than that of the cool neutral medium: ${\rm max}(n_{\rm CNM},100~{\rm cm^{-3}}) \leq \rho_2 \leq \rho/\Phi_{\rm V}$ and (ii) self-gravitating clouds with densities $\rho_1 \geq \rho/\Phi_{\rm V}$. The calculation of the molecular emission is done in the following steps:
\begin{enumerate}
\item Calculation of the temperatures of the dust and cool neutral medium $T_{\rm CNM}$ and the self-gravitating clouds $T_{\rm cl}$ according to Sect.~\ref{sec:gdtemp}. Even if the gas is not molecular at densities of $100$~cm$^{-3}$, we use the CO, H$_2$, H$_2$O cooling function for model consistency, because we want to apply the same code to local spirals and ULIRGs. For the lowest-density clouds in local galaxies, this yields temperatures between $80$ and $150$~K, well in the range of observed CNM temperatures (Kulkarni \& Heiles 1987; Dickey \& Lockman 1990). In addition, in these clouds, almost all H$_2$ and CO is dissociated (Sect.~\ref{sec:dissociation}) and the molecular line emission is very weak (Sect.~\ref{sec:lineemission}) due to their low gas density. Therefore, the high uncertainty of the calculated temperature of these clouds does not affect the total molecular line emission of the galactic disk.
\item We assume different scaling relations for the two density regimes: (i) non-self-gravitating clouds and (ii) self-gravitating clouds (Sect.~\ref{sec:scaling}).
\item Within the two density regimes, the properties (size, column density, turbulent velocity, temperature) of clouds with over-densities $2^N$ with $N=1,2,...$ are calculated according to the scaling relations. The mass fraction $\frac{\Delta M}{M}$ of each density bin is calculated via the pdfs described in Sect.~\ref{sec:gasfrac}. This procedure ensures mass conservation, that is, $\sum_{i=1}^N \big( \frac{\Delta M}{M} \big)_i =1$.
\item The fraction of molecular mass $f_{\rm H_2}$ and the mass fraction emitting in the molecular line $f_{\rm CO}$ are determined according to Sect.~\ref{sec:dissociation}. In the absence of a detailed theory of HCN dissociation, we use the same dark gas fraction for HCN as for CO.
\item The molecular line emission ($T^*_{\rm A}$) is calculated according to Sect.~\ref{sec:lineemission} using the density, size, and temperature of the clouds in each density bin.
\item The total dust infrared emission at a given galactic radius $R$ is determined by Eq.~\ref{eq:idust}, the TIR luminosity is calculated via Eqs.~\ref{eq:idust1} and \ref{eq:idust2}, and the total CO or HCN molecular line emissions are calculated in the following way:
\begin{equation}
I_{\rm mol}(R)=\sum_{\rm i=1}^{\rm N} \big(\Phi_{\rm A} \big)_i\, (f_{\rm CO})_i\, (\Delta T^*_{\rm A})_i\, 2.35\,(v_{\rm turb,cl})_i\,,
\end{equation}
where the factor $2.35$ links the turbulent velocity to the linewidth. The cosmic and local dust infrared backgrounds are taken into account. The total flux density is calculated by integrating the radial profile
\begin{equation}
F=2\,\pi \int_{0}^{R_0} I(R)\,R\,{\rm d}R\ .
\end{equation}
The molecular fraction is
\begin{equation}
\frac{\Sigma_{\rm H_2}}{\Sigma}=\sum_{\rm i=1}^{\rm N} \big( \frac{\Delta M}{M} \big)_i\,(f_{\rm H_2})_i\ .
\end{equation}
The total molecular gas mass is
\begin{equation}
M_{\rm H_2}=2\,\pi \int_{0}^{R_0} \Sigma_{\rm H_2}\,R\,{\rm d}R\ .
\end{equation}
\end{enumerate}
The H{\sc i} surface density is $\Sigma_{\rm HI}=\Sigma-\Sigma_{\rm H_2}$ and the total H{\sc i} mass is
\begin{equation}
M_{\rm HI}=2\,\pi \int_{0}^{R_0} \Sigma_{\rm HI}\,R\,{\rm d}R\ .
\end{equation}
The upper limit of the galactic radius is $R_0=4.5 \times R_*$ for the local spiral galaxies and high-z star-forming galaxies (van der Kruit \& Searle 1982) and $R_0=3.4 \times l_*$ for the ULIRGs and submillimeter galaxies, where $l_*$ is the exponential scalelength of the stellar disk. The latter normalization was chosen to reproduce the spatial extents of ULIRGs (Downes \& Solomon 1998).
\section{Uncertainties \label{sec:uncertain}}
All the different steps of our model calculations have associated uncertainties. In the following, we discuss the main uncertainties for each step.
\begin{enumerate}
\item Analytical model (Sect.~\ref{sec:model}, Eq.~\ref{eq:diskeq}): the constant $\xi$ that links the star-formation rate to the energy injection rate is calibrated to the Galaxy. We do not know if this calibration also holds for the densest starburst regions; for example, in ULIRGs. In the presence of supernovae breaking out of the disk and giving rise to a galactic wind, one might expect a lower value of $\xi$. Since the total and molecular gas masses are only weakly dependent on $\xi$, we expect an uncertainty of the order of a few tens of percent on the determination of the gas mass associated with the uncertainty of $\xi$. For our calculations, we regard $\xi$ as constant. The mass accretion rate $\dot{M}$ is determined by the total star-formation rate, which is typically uncertain to a factor of $1.5$ (Leroy et al. 2008, Genzel et al. 2015). Due to an uncertain AGN contribution to the IR luminosity, the uncertainty is expected to be higher for ULIRGs. Since $\dot{M}_* \propto \dot{M}^{0.4\ {\rm to}\ 0.6}$ (Vollmer \& Leroy 2011), we estimate the uncertainty associated with the determination of $\dot{M}$ from the integrated star-formation rate to be of the order of a few tens of percent. The uncertainty on the derived total gas mass is of the same order. Another uncertainty comes from the unknown velocity dispersion of the disk stars or the thickness of the galactic stellar disk. The applied relation $l_*/H_*=7.3$ for local spiral galaxies (Kregel et al. 2002) has a dispersion of $\sim 30$\,\%. We estimate the uncertainty on the gas pressure and thus the gas surface density and total gas mass to be small for local spirals. However, it is expected to be higher for ULIRGs, high-z star-forming galaxies, and submillimeter galaxies, which might deviate from this relation. The mass and momentum conservation equation $\dot{M}/(2\,\pi)=\nu \Sigma$ represents another source of uncertainty. This equation implies that the mass loss through star formation is balanced by radial or external gas accretion. In most cases, radial accretion within the disk is negligible. In the absence of this external accretion, we expect that the gas surface density in the central part of the galaxy decreases because of star formation. We take this phenomenon into account by increasing $Q$ towards the galaxy center in some of the local spiral galaxies.
For ULIRGs, high-z star-forming galaxies, and submillimeter galaxies, we assume a constant $Q$. This might be justified by the large gas masses and star-formation rates of these galaxies. Vollmer \& Leroy (2011) have shown that the system of equations including the mass and momentum conservation equation describes the radial H$_2$, star formation, turbulent velocity, and molecular fraction profiles of local spiral galaxies in a satisfactory way. For the metallicity, we use a simple closed-box model. As long as the star-formation timescale is smaller than the gas accretion timescale, this assumption is justified. In ULIRGs and submillimeter galaxies, this might not be the case, leading to an overestimation of the metallicity and thus the molecular abundances $X$. This will increase the molecular line emission $\propto X^{0.4}$ (Scoville \& Solomon 1974). We are thus confident that the model equations describe the physics of a star-forming galactic disk within a factor of $1.5$-$2$.
\item Density probability distribution functions (pdfs) (Sect.~\ref{sec:gasfrac}): the role of gas self-gravitation is not included in the pdf of Padoan et al. (1997). In the presence of self-gravitation, the shape of the density pdf is altered at high densities, that is, the mass fraction of high-density gas is higher in a pdf including self-gravitation than in a pdf without self-gravitation (see, e.g., Schneider et al. 2015). We estimate the uncertainty of the mass calculation due to different pdfs to be a factor of $2$.
\item Gas and dust temperature calculations (Sect.~\ref{sec:gdtemp}): the main uncertainty comes here from the gas cooling function, which is assumed to be dominated by CO, H$_2$, and H$_2$O line cooling. Based on Neufeld et al. (1995), this leads to a possible underestimation of the total cooling by up to a factor of $3$ to $4$. On the other hand, the comparison with the cooling function proposed by Goldreich (2001) shows an overall discrepancy of a factor of $2$. We believe that, on average, our cooling function is uncertain to a factor of $\sim 2$. The resulting gas temperature is uncertain by a factor of $\sim 1.3$.
\item Molecular line emission (Sect.~\ref{sec:lineemission}): the sources of uncertainty are manifold: (i) we only calculate a single transition at a time and not the full system of transitions of a given molecule, (ii) we assume spherical geometry for all clouds, whereas a sheet- or filament-like geometry is not excluded for non-self-gravitating clouds; this directly affects the area-filling factor $\Phi_{\rm A}$, (iii) we assume that the area-filling factor is small enough that the line emission of a smaller cloud is not absorbed by a larger cloud; this effect might only play a role in ULIRGs, (iv) we assume abundances that are proportional to the closed-box metallicity; we therefore neglect gas depletion and chemistry effects. We had to modify our simple closed-box model for starburst galaxies to take these effects into account (Eq.~\ref{eq:metulirgs}). Of all these sources of uncertainty, we estimate (iv) to be the most important, leading to an uncertainty in the molecular line emission of a factor of $2$ for CO and of a few for HCN.
\item H$_2$ and CO dissociation (Sect.~\ref{sec:dissociation}): we treat the photo-dissociated region in molecular gas clouds in a crude way. It is assumed that the outer H{\sc i} envelope has the density of the CNM, which need not be the case.
The dark gas fraction (H$_2$ without CO) also depends on the density of the cloud envelope or the density profile of the cloud. We estimate the uncertainty of the H$_2$ and CO-emitting mass calculation to be approximately a factor of $2$.
\item Scaling relations (Sect.~\ref{sec:scaling}): whereas the scaling relations for self-gravitating GMCs are relatively robust, those for diffuse (non-self-gravitating) clouds are less well established. This will lead to higher uncertainties for galaxies with very dense gas disks, especially for ULIRGs, where the molecular line emission from the diffuse clouds dominates the total emission. We estimate the uncertainty on the calculation of the mass fraction at a given density due to the adopted scaling relations to be a factor of $2$ for the ULIRGs and $30$-$50$\,\% for the high-z star-forming and submillimeter galaxies.
\end{enumerate}
In summary, the most important uncertainties are due to the analytical model, the choice of the pdf, and the adopted molecular abundances. All uncertainties are of the order of a factor of $2$. In addition, the uncertain scaling relations might add a further factor of uncertainty to the molecular line emission calculation of ULIRGs.
\section{Galaxy samples \label{sec:samples}}
The input parameters of our model are the exponential scale-length of the stellar disk $l_*$, the stellar mass $M_*$, the rotation velocity $v_{\rm rot}$, and the star-formation rate $\dot{M}_*$. We apply our model to four galaxy samples for which these parameters are determined observationally in a uniform way: local spiral galaxies, local ultraluminous infrared galaxies (ULIRGs), high-z star-forming galaxies, and submillimeter galaxies. Following Boissier et al. (2003), for all galaxies, we assume a rotation curve of the form
\begin{equation}
v_{\rm rot}(R)=v_{\rm flat}\big(1-\exp(-\frac{R}{l_{\rm flat}})\big)\ ,
\end{equation}
where $v_{\rm flat}$ and $l_{\rm flat}$ represent the velocity at which the rotation curve is flat and the length scale over which it approaches this velocity, respectively. Moreover, we assumed $\delta=5$ for all galaxies. The stellar surface density is assumed to be of the form
\begin{equation}
\Sigma_*=\Sigma_{*,0} \exp(-\frac{R}{l_*})\ ,
\end{equation}
with $M_*=2\,\pi \int_0^{R_0} \Sigma_* R\,{\rm d}R$.
\subsection{Local spiral galaxies}
The sample of local spiral galaxies (Table~\ref{tab:gleroy}) is taken from Leroy et al. (2008). For this work, we did not consider dwarf galaxies, which will be the subject of a subsequent article. The spiral galaxies have total stellar masses in excess of $10^{10}$~M$_{\odot}$. The gas masses are derived from IRAM 30m CO(2--1) HERACLES (Leroy et al. 2009) and VLA H{\sc i} THINGS (Walter et al. 2008) data. The star-formation rates were derived from Spitzer MIR and GALEX UV data (Leroy et al. 2008). The total infrared luminosities are taken from Dale et al. (2012). Following Vollmer \& Leroy (2011), the Toomre parameter of NGC~628, NGC~3198, NGC~5194, and NGC~7331 was set to $Q(R)=Q+3\,\exp(-2\,R/l_*)$, and that of NGC~3351 to $Q(R)=Q-4\,\exp\big(-(2\,R/l_*)^2\big)$. The Toomre parameter $Q$ was assumed to be constant for all other galaxies (see Table~\ref{tab:gleroy}).
\subsection{Local ULIRGs}
The ULIRG sample (Table~\ref{tab:gulirg}) is taken from Downes \& Solomon (1998). These authors derived the spatial extent, rotation velocity, gas mass, and dynamical mass $M_{\rm dyn}$ for local ULIRGs from PdB interferometric CO-line observations.
The total infrared luminosities are taken from Graci\'{a}-Carpio et al. (2008). We adopted the star-formation rates based on FIR data from Graci\'{a}-Carpio et al. (2008). We calculated the stellar mass as $M_*=M_{\rm dyn}-M_{\rm gas}$ and assumed that the stellar scale length is approximately equal to the observed CO scale length.
\subsection{High-z star-forming galaxies}
The high-z star-forming sample (Table~\ref{tab:gphibbs}) is taken from PHIBSS (Tacconi et al. 2013), the IRAM PdB high-z blue sequence CO(3--2) survey of the molecular gas properties in massive, main-sequence star-forming galaxies at $z=1$-$1.5$. For our purpose, we only took the disk galaxies from PHIBSS. The stellar masses given by Tacconi et al. (2013) were derived from SED fitting, assuming a Chabrier IMF. Following Genzel et al. (2010), we calculated the star-formation rate from the total infrared luminosity using $\dot{M_*}({\rm M}_{\odot}{\rm yr}^{-1})=10^{-10} L_{\rm TIR}({\rm L}_{\odot})$. Their star-formation rates are based on the sum of the observed UV- and IR-luminosities, or an extinction-corrected H$\alpha$ luminosity. Their half-light radii were derived from S\'ersic fits to the HST ACS and/or WFC3 CANDELS data (Grogin et al. 2011). To estimate the characteristic circular velocities, Tacconi et al. (2013) took the isotropic virial estimate $v_{\rm circ}=\sqrt{3/(8\,\ln 2)}\Delta v_{\rm FWHM}$, where $\Delta v_{\rm FWHM}$ is the CO(3--2) linewidth for unresolved galaxies without a velocity gradient, and $v_{\rm circ}=1.3\,\big(\Delta v_{\rm blue-red}/(2 \sin(i))\big)$ if a velocity gradient ($\Delta v_{\rm blue-red}$) indicative of rotation is detected in a galaxy with an inclination $i$. Since the inclination angle is difficult to determine in these high-z star-forming disk galaxies, we adopted the following strategy for the determination of the rotation velocity: if $v_{\rm circ} < \sqrt{(M_{\rm gas}+M_*)\,G/(2\,l_*)}$, the assumed rotation velocity is $v_{\rm rot}=\sqrt{(M_{\rm gas}+M_*)\,G/(2\,l_*)}$; otherwise $v_{\rm rot}=v_{\rm circ}$. In this way, the rotation velocity of $15$ out of $45$ galaxies was increased by more than $50$\,\%.
\subsection{Submillimeter galaxies}
The smm-galaxy sample (Table~\ref{tab:gbzk}) was drawn from Genzel et al. (2010). The total infrared luminosities are based on the $850$~$\mu$m flux densities (Genzel et al. 2010). We calculated the star-formation rate from the total infrared luminosity using $\dot{M_*}({\rm M}_{\odot}{\rm yr}^{-1})=1.7 \times 10^{-10} L_{\rm TIR}({\rm L}_{\odot})$. Stellar masses are from the SED fits in Erb et al. (2006) and F\"{o}rster Schreiber et al. (2009), and rotation velocities and half-light radii are from the data in the same references, with the methods discussed in F\"{o}rster Schreiber et al. (2009). When the stellar mass was not available, it was set to $10^{11}$~M$_{\odot}$. The CO-line observations were made in the CO(2--1), CO(3--2), and CO(4--3) lines.
\section{Results \label{sec:results}}
The aim of the present work is to directly compare infrared and molecular line luminosities. Before doing so, the distributions of the galaxy metallicities, dust SEDs, TIR luminosities, and dust temperatures are presented.
\subsection{Metallicity \label{sec:metals}}
As described in Sect.~\ref{sec:model}, we use a simple closed-box model for the metallicity (Eq.~\ref{eq:zzodot}). The oxygen abundance is then calculated by 12+log(O/H)=$\log(Z/Z_{\odot})+8.7$, with 12+log(O/H)=8.7 being the solar oxygen abundance (Asplund et al. 2005).
For the global mean metallicity, the average is weighted by the optical emission from a disk where a uniform distribution of stars is mixed homogeneously with dust (``mixed'' model of McLeod et al. 1993):
\begin{equation}
\langle Z \rangle=\frac{\int_0^{R_{0}} Z\,\dot{\Sigma}_*\,\frac{1-\exp(-\tau)}{\tau}\,{\rm d}R}{\int_0^{R_{0}} \dot{\Sigma}_*\,\frac{1-\exp(-\tau)}{\tau}\,{\rm d}R}\ ,
\end{equation}
with $\tau=\Sigma/(7.5~{\rm M}_{\odot}{\rm pc}^{-2})$. The mean oxygen abundances resulting from our modeling, $\langle$12+log(O/H)$\rangle$, are presented in Fig.~\ref{fig:metallicities}. The oxygen abundances of our local spiral galaxy sample lie in the range $8.7 \leq $12+log(O/H)$\leq 9.0$. This is close to the findings of Moustakas \& Kennicutt (2006), who studied $14$ nearby disk galaxies with integrated spectrophotometry and observations of more than $250$ individual H{\sc ii} regions; their oxygen abundances based on the McGaugh (1991) calibration also fall in the range $8.6 \leq $12+log(O/H)$\leq 9.0$, with most of the galaxies having an oxygen abundance of $8.8$-$9.0$. The oxygen abundances of our ULIRG sample fall in the range $8.6 \leq $12+log(O/H)$\leq 9.4$. The mean oxygen abundance of the sample is 12+log(O/H)$=9.0 \pm 0.3$. Compared to the oxygen abundances determined by Rupke et al. (2008) and Kilerci Eser et al. (2014), with lower and upper limits of $8.4$ and $9.0$, respectively, four out of nine ULIRGs show model oxygen abundances in excess of $9.0$. The model metallicities are thus up to a factor of $2.5$ higher than the observed metallicities derived from optical emission line diagnostics. The oxygen abundances or metallicities of the smm-galaxy sample are in the range $8.7 \leq $12+log(O/H)$\leq 9.4$, that is, the metallicities are mostly supersolar. Three out of ten galaxies show metallicities in excess of 12+log(O/H)$=9.4$. We decided not to arbitrarily modify these clearly overestimated model metallicities. The mean oxygen abundance of the smm-galaxy sample is 12+log(O/H)$=9.3 \pm 0.4$. Swinbank et al. (2004) found slightly subsolar metallicities in their sample of 30 smm-galaxies at a median redshift of $z \sim 2.4$. Tecza et al. (2004) found a supersolar oxygen abundance of 12+log(O/H)=9.0 for SMM J14011+0252 at z=2.57. Nagao et al. (2012) found a solar metallicity of the submillimeter galaxy LESS J033229.4--275619 at z=4.76. The model metallicities of our smm-galaxy sample are thus a factor of $2$-$4$ higher than the observed metallicities. The oxygen abundances of our high-z star-forming galaxy sample lie in the range $8.0 \leq $12+log(O/H)$\leq 9.0$. The mean oxygen abundance of the sample is 12+log(O/H)$=8.6 \pm 0.2$, thus close to the solar metallicity. This is consistent with the results of Shapley et al. (2004), who found solar and possibly supersolar metallicities in high-z star-forming galaxies. It is also consistent with the mean metallicity of 12+log(O/H)$=8.7 \pm 0.2$ of the sample of 50 galaxies at $z \sim 1.2$ in the MASSIV survey (Queyrel et al. 2012). We conclude that the integrated model metallicities of the local spiral and high-z star-forming galaxies are in good agreement with observations. On the other hand, half of the galaxies in the ULIRG and smm-galaxy samples have model metallicities that are $2$-$4$ times higher than the observed metallicities. This leads to a potential overestimation of the molecular line emission ($\propto X^{0.4}$; Scoville \& Solomon 1974) by a factor of $1.3$-$1.7$.
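For illustration, the emission-weighted average above can be evaluated with a few lines of Python; the radial profiles of $Z$, $\dot{\Sigma}_*$, and $\Sigma$ used in this sketch are purely illustrative placeholders for the model output.
\begin{verbatim}
# Minimal sketch: emission-weighted mean metallicity ("mixed" model weighting).
# The radial profiles below are illustrative placeholders, not model output.
import numpy as np

R      = np.linspace(0.05, 4.5, 200)       # radius in units of the stellar scale length
Sigma  = 80.0 * np.exp(-R)                 # total gas surface density [Msun/pc^2] (assumed)
sfr    = 1.0e-2 * np.exp(-1.5 * R)         # SFR surface density [arbitrary units] (assumed)
Z      = 2.0 * np.exp(-0.3 * R)            # metallicity in solar units (assumed, closed-box-like)

tau    = Sigma / 7.5                       # tau = Sigma / (7.5 Msun/pc^2)
weight = sfr * (1.0 - np.exp(-tau)) / tau  # "mixed" star/dust weighting (McLeod et al. 1993)

Z_mean = np.trapz(Z * weight, R) / np.trapz(weight, R)
print("<Z>           = %.2f Zsun" % Z_mean)
print("<12+log(O/H)> = %.2f" % (np.log10(Z_mean) + 8.7))
\end{verbatim}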
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{metallicities.ps}} \caption{Distribution of galaxy metallicities. Black solid line: local spiral galaxies. Blue dotted line: ULIRGs. Green dashed line: smm-galaxies. Red dash-dotted line: high-z star-forming galaxies. The vertical black line corresponds to the solar metallicity of 12+log(O/H)=8.7. \label{fig:metallicities}} \end{figure}
\subsection{Dust SED, TIR luminosity, and dust temperature \label{sec:sedtir}}
For the direct comparison between the model and observed dust IR spectral energy distributions (SEDs), we extracted all available photometric data points for our galaxy samples from the CDS VizieR database\footnote{\tt http://vizier.u-strasbg.fr/viz-bin/VizieR}. Since the flux densities in the different catalogs are determined within different apertures, we only take the highest flux densities for a given wavelength range around a central wavelength $\lambda_0$ ($0.75 \leq \lambda/\lambda_0 \leq 1.25$). In this way, only the outer envelope of the flux density distribution is selected. We did not attempt to remove flux densities below this outer envelope, which most probably correspond to apertures that do not include the whole object or to erroneous measurements. Since our dust model does not include stochastically heated small grains and PAHs, the observed IR flux densities for $\lambda \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 50~\mu$m cannot be reproduced by the model. Flux densities at wavelengths $\ge 70~\mu$m are available for all local spirals/ULIRGs, 9 out of 10 smm-galaxies, and 30 out of 44 high-z star-forming galaxies. The model dust IR SEDs of the local spiral galaxies reproduce the observed SEDs very well (Fig.~\ref{fig:IRspectra_spirals}). Only the flux densities at $\lambda > 200~\mu$m of NGC~4736, NGC~4535, NGC~6946, and NGC~3627 are somewhat overestimated by the model. The models of the ULIRGs reproduce the existing observations very well (Fig.~\ref{fig:IRspectra_ulirgs}). The comparison between the model and observed SED is difficult for Arp~220, because the observed SED contains the whole system, whereas the model SEDs are made separately for the Disk, Western, and Eastern nuclei. For the comparison, we assumed that $30$\,\%, $20$\,\%, and $30$\,\% of the observed total flux densities are emitted by the Disk, Western, and Eastern nuclei, respectively. The models of the smm-galaxies reproduce the existing observations in a satisfactory way (Fig.~\ref{fig:IRspectra_smm}). Only for SMM~J123549+6215 are the infrared flux densities underestimated by approximately a factor of $2$. As for the local spiral galaxy sample, the model dust IR SEDs of the high-z star-forming galaxies reproduce the observed SEDs well (Fig.~\ref{fig:IRspectra_phibss1}) at almost all wavelengths, especially those of the $z \sim 1.5$ (EGS) sample. The model SEDs of the $z \sim 2.5$ sample underestimate the observed SEDs by up to a factor of $2$ for BzK~4171 and BzK~16000 and overestimate the observed SEDs by approximately a factor of $2$ for BzK~17999. The comparison of the model total IR luminosities (from $10~\mu$m to $1000~\mu$m) to the observed total IR luminosities is presented in Fig.~\ref{fig:tirlum}. The model reproduces the total IR luminosities within a factor of $2$. Only for two smm-galaxies and two high-z star-forming galaxies does the model underestimate the total IR luminosities by more than a factor of $\sim 2.5$.
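For illustration, the outer-envelope selection of the catalog photometry described at the beginning of this section can be implemented as follows; the wavelength--flux pairs in this sketch are placeholders for the VizieR entries.
\begin{verbatim}
# Minimal sketch: keep only the outer envelope of heterogeneous catalog photometry,
# i.e., the highest flux density within a relative wavelength window around each point.
import numpy as np

# toy (wavelength [micron], flux density [Jy]) pairs standing in for VizieR entries
lam  = np.array([60., 65., 70., 100., 110., 160., 250., 350., 500.])
flux = np.array([0.8, 1.2, 1.0, 2.5, 1.9, 3.0, 2.0, 1.1, 0.4])

keep = []
for i, (l0, f0) in enumerate(zip(lam, flux)):
    window = (lam >= 0.75 * l0) & (lam <= 1.25 * l0)   # 0.75 <= lam/lam0 <= 1.25
    if f0 >= flux[window].max():                       # keep only the locally highest flux
        keep.append(i)

print("kept points:", list(zip(lam[keep], flux[keep])))
\end{verbatim}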
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{tirlum.ps}} \caption{Model total infrared luminosity as a function of the observed TIR luminosity of the galaxies. Black plus symbols represent local spiral galaxies, blue boxes represent ULIRGs, green triangles represent smm-galaxies, and red crosses represent high-z star-forming galaxies. \label{fig:tirlum}} \end{figure}
We fitted modified Planck functions with $\beta=1.5$ (see Sect.~\ref{sec:dustemission}) to the model dust IR SEDs to derive dust temperatures. The modified Planck functions are shown as red dashed lines in Figs.~\ref{fig:IRspectra_spirals} to \ref{fig:IRspectra_phibss4}. The resulting distributions of dust temperatures for the different galaxy samples are presented in Fig.~\ref{fig:tdust}. The dust temperatures of the local spiral galaxies range between $19$ and $24$~K. This is in excellent agreement with Dale et al. (2012; Fig.~10), which is not surprising given the good fit to the observed IR dust SEDs. The dust temperatures of the ULIRGs range between $39$ and $72$~K with a mean of $50 \pm 11$~K, in reasonable agreement with the results of Symeonidis et al. (2013), who found that the majority of (U)LIRGs at all redshifts have mean dust temperatures between $25$ and $45$~K using IRAS- and Herschel-selected samples, and with Hwang et al. (2012), who found a dust temperature range between $35$ and $43$~K based on Herschel IR SEDs. Our dust temperatures are somewhat lower than the temperature distribution ($61 \pm 9$~K) found by Klaas et al. (2001) for local ULIRGs. The dust temperatures of the smm-galaxies range between $31$ and $64$~K. The smm-galaxy dust temperatures cover approximately the same range as the ULIRG dust temperatures, but their mean dust temperature ($43 \pm 10$~K) is somewhat smaller than that of the ULIRG sample ($50 \pm 11$~K). This is in good agreement with the dust temperatures ($30$-$45$~K) of smm-galaxies with IR luminosities $< 10^{13}$~L$_{\odot}$ of Hwang et al. (2010; Fig.~3). The dust temperatures of the high-z star-forming galaxies range between $27$ and $48$~K with a mean of $33 \pm 4$~K. This is in good agreement with (i) the mean temperature of the stacked $z \sim 1$ sample ($32 \pm 2$~K) of Magdis et al. (2012; Table~2) and (ii) the mean temperature of $30$~K found for $z \sim 1$ sources by Magnelli et al. (2014; Eq.~4) based on Herschel observations.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{tdust.ps}} \caption{Model distribution of dust temperatures. The black solid line represents local spiral galaxies, the blue dotted line represents ULIRGs, the green dashed line represents smm-galaxies, and the red dash-dotted line represents high-z star-forming galaxies. \label{fig:tdust}} \end{figure}
We conclude that our model reproduces the dust IR SEDs of all galaxy samples. The IR flux densities of one smm-galaxy and two high-z star-forming galaxies at $z \sim 2.5$ are underestimated by a factor of $\sim 2$. The derived dust temperatures of all galaxy samples are consistent with MIR and FIR observations.
\subsection{Molecular fraction and H{\sc i} mass \label{sec:molfrac}}
In the present model, the molecular fraction is calculated for each cloud of scale $l_{\rm cl}$. It is determined by the photodissociation of H$_2$ molecules or the finite lifetime of a cloud (see Sect.~\ref{sec:method}).
As a first step, we compare this molecular fraction to the molecular fraction defined by Vollmer \& Leroy (2011) as $f_{\rm mol}=t_{\rm ff}/t_{\rm mol}/(1+t_{\rm ff}/t_{\rm mol})$, where the free-fall and molecule formation times are those of self-gravitating clouds at a galactic radius $R$ (Fig.~\ref{fig:ffmmooll}).
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{ffmmooll1.ps}} \caption{Radial profiles of the molecular fraction $f_{\rm mol}$ for the local spiral galaxies. The solid line shows the present model and the dashed line shows $f_{\rm mol}=t_{\rm ff}/t_{\rm mol}/(1+t_{\rm ff}/t_{\rm mol})$, where the free-fall and molecule-formation times are those of self-gravitating clouds (Vollmer \& Leroy 2011). The profile for each galaxy is shown in a different color. The observed relation $R_{\rm mol}=10.6 \exp(-R/0.21R_{25})$ (Leroy et al. 2008) is shown as a thick dashed line. \label{fig:ffmmooll}} \end{figure}
The mean deviation between the two molecular fractions is approximately $30$\,\%, with a maximum deviation of approximately $50$\,\% at large galactic radii. At small galactic radii, the molecular fraction of the present model is $\sim 30$\,\% smaller, whereas at large galactic radii, it is up to $\sim 50$\,\% higher than that of Vollmer \& Leroy (2011). We thus conclude that both prescriptions are comparable. This is quite surprising, because the Vollmer \& Leroy (2011) prescription is only based on the properties of the self-gravitating clouds. We interpret this result as evidence for the dominant role of self-gravitating clouds for the formation of molecular hydrogen in local spiral galaxies. Since the model yields the molecular fraction of the ISM, we can calculate the H{\sc i} surface density radial profile and the H{\sc i} mass. The comparison between the observed and model H{\sc i} masses is presented in Fig.~\ref{fig:HImasses}. The model reproduces the observed H{\sc i} masses within a factor of $2$. In particular, the H{\sc i} masses of the galaxies with $M_{\rm HI} > 10^{10}$~M$_{\odot}$ are underestimated by a factor of $\sim 2$. This is intrinsic to the model and cannot be compensated by a modification of the free model parameters ($Q$ and $\delta$).
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{HImasses.ps}} \caption{Model H{\sc i} mass as a function of the observed H{\sc i} mass for the sample of local spiral galaxies from Walter et al. (2008). The solid line corresponds to equality, the dotted lines to factors of $1/2$ and $2$. \label{fig:HImasses}} \end{figure}
The associated radial H{\sc i} surface density profiles together with the observed H{\sc i} profiles from Leroy et al. (2008) are presented in the upper and lower panels of Fig.~\ref{fig:HIprofiles}. Whereas the model profiles are all monotonically declining, most of the observed profiles are constant in the range $1 \leq R/l_* \leq 4$. In addition, most of the observed profiles show a depression in the central part of the galactic disks, whereas the models often show a maximum toward the galaxy center. The latter difference can be explained by (i) an underestimation of the model molecular fraction or (ii) the ionization of atomic hydrogen in the inner part of the galactic disks, that is, the inclusion of a warm ionized medium into the model.
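For illustration, the comparison prescription of Vollmer \& Leroy (2011) and the resulting H{\sc i} mass integral can be written as the short Python sketch below; the timescale and surface density profiles are illustrative placeholders, not model output.
\begin{verbatim}
# Minimal sketch: molecular fraction f_mol = (t_ff/t_mol) / (1 + t_ff/t_mol)
# (Vollmer & Leroy 2011 prescription) and the resulting HI mass.
import numpy as np

R      = np.linspace(0.05, 4.5, 200)      # radius in units of the stellar scale length l*
l_star = 3.0e3                            # stellar scale length [pc] (assumed)
Sigma  = 60.0 * np.exp(-R)                # total gas surface density [Msun/pc^2] (assumed)
t_ff   = 5.0e6 * np.exp(0.5 * R)          # free-fall time of self-gravitating clouds [yr] (assumed)
t_mol  = 2.0e6 * np.exp(1.5 * R)          # H2 formation timescale [yr] (assumed)

ratio  = t_ff / t_mol
f_mol  = ratio / (1.0 + ratio)            # molecular fraction
Sig_HI = (1.0 - f_mol) * Sigma            # atomic gas surface density

R_pc   = R * l_star
M_HI   = 2.0 * np.pi * np.trapz(Sig_HI * R_pc, R_pc)   # M_HI = 2 pi int Sigma_HI R dR
print("M_HI = %.2e Msun" % M_HI)
\end{verbatim}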
In order to investigate the effect of ionization by the UV radiation of massive stars in the galactic disk, we use the ionization-recombination equilibrium to calculate the number surface density of ionized gas following Maloney (1993):
\begin{equation}
N_{\rm ion}=7.7 \times 10^{18} \frac{(\varphi/10^4)}{(n/10^{-2})}~{\rm cm}^{-2}\ ,
\end{equation}
where $\varphi$ is the flux of ionizing photons and $n$ the gas number density in cm$^{-3}$. The surface density of ionized gas is then $\Sigma_{\rm ion}=1.36 \times m_{\rm p} \times N_{\rm ion}\ .$ Newborn massive stars are preferentially located in high-density regions, which they illuminate. As long as the area filling factor of the high-density gas surrounding these stars is not too large, the UV photons can escape the H{\sc ii} regions and ionize the warm neutral medium. This scenario is supported by H$\alpha$ observations of Thilker et al. (2002) and Oey et al. (2007), who found a fraction of diffuse H$\alpha$ emission of $0.5$-$0.6$ for local spiral galaxies. Based on Galactic H{\sc i} observations (Dickey \& Lockman 1990), we used a constant gas number density of $n=5$~cm$^{-3}$ for the warm neutral medium, which corresponds to the observed midplane density of $0.6$~cm$^{-3}$ (Dickey \& Lockman 1990) and a volume-filling factor of $0.12$. For the relation between the ionizing photon flux and the star-formation rate, we use
\begin{equation}
\varphi = 2.3 \times 10^{-7} \big( \dot{\Sigma}_*/(1~{\rm M}_{\odot}{\rm pc}^{-2}{\rm yr}^{-1}) \big)~{\rm photons}\,{\rm cm}^{-2}{\rm s}^{-1}\ ,
\end{equation}
based on the star-formation rate calibration of Kennicutt (1998). We then calculated the H{\sc i} surface density as
\begin{equation}
\Sigma_{\rm HI}=(1-f_{\rm mol}) \times \Sigma - \Sigma_{\rm ion} \ .
\end{equation}
The resulting H{\sc i} surface density is presented in the middle panel of Fig.~\ref{fig:HIprofiles}. Like the observed H{\sc i} surface density, the model H{\sc i} surface density is approximately constant at $\Sigma_{\rm HI} \sim 10$~M$_{\odot}$pc$^{-2}$ between $R=l_*$ and $R=4 \times l_*$.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{HIprofiles.ps}} \resizebox{\hsize}{!}{\includegraphics{HIprofiles_ion.ps}} \resizebox{\hsize}{!}{\includegraphics{THINGS_HIprofiles.ps}} \caption{Upper panel: model H{\sc i} radial profiles of the local spiral galaxies. The radial distance is normalized by the stellar scale length $l_*$. The middle panel shows the model H{\sc i} including UV ionization with a constant H{\sc i} volume density, while the lower panel shows THINGS radial H{\sc i} profiles from Leroy et al. (2008), where we have assumed $R_{25}=4.5 \times l_*$. The atomic gas surface density saturates at $\sim 10$~M$_{\odot}$pc$^{-2}$ (dotted line). \label{fig:HIprofiles}} \end{figure}
We conclude that the inclusion of the warm ionized medium in the model leads to radial H{\sc i} surface density profiles that compare well with observations. However, for the galaxies with the highest H{\sc i} masses (NGC~2841, NGC~3198, NGC~3521, and NGC~5055), the model H{\sc i} surface density profiles and total H{\sc i} masses are underestimated by approximately a factor of two.
\subsection{CO(1--0) and HCN(1--0) radial profiles for local spiral galaxies \label{sec:profiles}}
The CO and HCN emission can be spatially resolved in local spiral galaxies only (at $D=10$~Mpc, the beam sizes are $\sim 20''$ or $\sim 1$~kpc). Whereas CO maps are frequently found in the literature (e.g., Wong \& Blitz 2002, Leroy et al.
2008), HCN maps are rare (e.g., Chen et al. 2015, Bigiel et al. 2016). The H$_2$ and dense gas surface densities are usually calculated with a CO-H$_2$ conversion factor of $\alpha_{\rm CO}=4.36$~M$_{\odot}$pc$^{-2}$/(K\,km\,s$^{-1}$) (e.g., Bolatto et al. 2013) and $\alpha_{\rm HCN}=10$~M$_{\odot}$pc$^{-2}$/(K\,km\,s$^{-1}$) (e.g., Gao \& Solomon 2004). The model SFR-$\Sigma_{\rm H_2}$ relation of the local spiral galaxies is presented in the top panel of Fig.~\ref{fig:THINGS_profiles} together with the observed relation $\dot{\Sigma}_*=\Sigma_{\rm H_2}/(2 \times 10^9~{\rm yr})$ (e.g., Bigiel et al. 2008, Leroy et al. 2008), which corresponds to a constant star-formation rate timescale of $2 \times 10^{9}$~yr. The model SFR-$\Sigma_{\rm H_2}$(CO) relations are consistent with the observed relation within a scatter of approximately $0.2$~dex. Overall, they show a somewhat flatter slope than the observed relation.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{THINGS_profiles1.ps}} \caption{Upper panel: model radial profiles of the star-formation rate surface density (thin solid lines) as a function of molecular gas surface density $\Sigma_{\rm H_2}=4.36 \times I_{\rm CO}$ for the THINGS local spiral galaxies. The thick dashed line corresponds to a star-formation rate timescale of $2 \times 10^{9}$~yr (Leroy et al. 2008). The lower panel shows model radial profiles of the star-formation rate surface density (thin solid lines) as a function of dense molecular gas surface density $\Sigma_{\rm dense}=10 \times I_{\rm HCN}$ for the THINGS local spiral galaxies. The dashed line corresponds to the relation $\log(\dot{\Sigma}_*)=1.12 \times \log(\Sigma_{\rm dense})-2.10$ found by Graci\'a-Carpio et al. (2008). The dotted line corresponds to the relation $\log(\dot{\Sigma}_*)=0.69 \times \log(\Sigma_{\rm dense})-1.58$ found by Usero et al. (2015). The dash-dotted line corresponds to the Gao \& Solomon (2004) relation $\log(\dot{\Sigma}_*)=\log(\Sigma_{\rm dense})-2.05$. All observed relations are corrected for the model conversion factor between the total infrared luminosity and the star-formation rate, which is different from the value assumed in the literature (see Sect.~\ref{sec:profiles}). \label{fig:THINGS_profiles}} \end{figure}
The situation is more complex for the dense gas (HCN) than for H$_2$. The observed $\dot{\Sigma}_*$--$\Sigma_{\rm dense}$ relations show different slopes; a linear slope was found by Gao \& Solomon (2004) and Garcia-Burillo et al. (2012) for integrated SFR and gas masses, whereas a sub-linear slope of $0.7$ was found by Usero et al. (2015) using single pointings in nearby spiral galaxies. For a direct comparison between the model and observed $\dot{\Sigma}_*$--$\Sigma_{\rm dense}$ relations, the conversion factor between the total infrared luminosity and the star-formation rate, $c_{\rm TIR}$, must be taken into account: ${\rm SFR}\,({\rm M}_{\odot}\,{\rm yr}^{-1})=c_{\rm TIR}\,L_{\rm TIR}\,({\rm L}_{\odot})$. The conversion factors are $c_{\rm TIR}=2.0 \times 10^{-10}$, $1.7 \times 10^{-10}$, and $1.5 \times 10^{-10}$ for Gao \& Solomon (2004), Garcia-Burillo et al. (2012), and Usero et al. (2015), respectively. The model conversion factor for the local spiral galaxy sample is $c_{\rm TIR}=0.9 \times 10^{-10}$, that is, a factor of $\sim 2$ lower than the values used in the literature. For a consistent comparison between the model and observations, we multiplied the star-formation rates of the observed relations by $c_{\rm TIR}^{\rm model}/c_{\rm TIR}^{\rm obs}$.
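For illustration, this rescaling amounts to a constant shift of $\log(c_{\rm TIR}^{\rm model}/c_{\rm TIR}^{\rm obs})$ in $\log(\dot{\Sigma}_*)$. The following Python sketch applies it to the Gao \& Solomon (2004) relation, using the conversion factors quoted above; the line intensities at the end are illustrative placeholders.
\begin{verbatim}
# Minimal sketch: rescale an observed SFR-Sigma_dense relation to the model
# TIR-to-SFR conversion factor. The choice of the Gao & Solomon (2004) relation
# as the example is arbitrary; the input intensities are placeholders.
import numpy as np

alpha_CO, alpha_HCN = 4.36, 10.0          # Msun pc^-2 / (K km/s) conversion factors
c_tir_model = 0.9e-10                     # model SFR = c_TIR * L_TIR [Msun/yr per Lsun]
c_tir_obs   = 2.0e-10                     # value used by Gao & Solomon (2004)

def log_sfr_obs_gs04(log_sigma_dense):
    """Observed Gao & Solomon (2004) relation: log(SFR surf. dens.) = log(Sigma_dense) - 2.05."""
    return log_sigma_dense - 2.05

def log_sfr_obs_rescaled(log_sigma_dense):
    """Same relation after multiplying the SFR by c_TIR(model)/c_TIR(obs)."""
    return log_sfr_obs_gs04(log_sigma_dense) + np.log10(c_tir_model / c_tir_obs)

# example: line intensities I_CO, I_HCN in K km/s converted to surface densities
I_CO, I_HCN = 20.0, 0.5
log_sig_h2    = np.log10(alpha_CO  * I_CO)
log_sig_dense = np.log10(alpha_HCN * I_HCN)
print("log Sigma_H2    = %.2f" % log_sig_h2)
print("log Sigma_dense = %.2f, rescaled obs. log SFR = %.2f"
      % (log_sig_dense, log_sfr_obs_rescaled(log_sig_dense)))
\end{verbatim}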
The resulting model $\dot{\Sigma}_*$--$\Sigma_{\rm dense}$ relation (lower panel of Fig.~\ref{fig:THINGS_profiles}) has a slope consistent with observations, but shows a negative offset of $0.2$--$0.3$~dex with respect to the observed relations. We conclude that the resolved $\dot{\Sigma}_*$--$\Sigma_{\rm H_2}$ and $\dot{\Sigma}_*$--$\Sigma_{\rm dense}$ relations are consistent with available IR, CO, and HCN observations within the uncertainties (see Sect.~\ref{sec:uncertain}). \subsection{Integrated CO, HCN(1--0), and HCO$^+$(1--0) flux densities \label{sec:intflux}} For the ULIRG, smm, and high-z star-forming galaxies, we can only compare the integrated model HCN, CO, and HCO$^+$ luminosities to observations. The comparison between the model and the observed CO luminosities is shown in Fig.~\ref{fig:plots_HCNCO_CO}, where the observed transitions were used: CO(2--1) for the local spiral galaxies, CO(1--0) for the ULIRGs, CO(3--2) for the smm-galaxies, and CO(3--2) for the high-z star-forming galaxies. The corresponding mean ratio $L_{\rm model}/L_{\rm obs}$ and its uncertainty are presented in Table~\ref{tab:correl} (preferred model). We observe approximately linear correlations between the model and observed CO luminosities. The model CO luminosities of the smm and high-z star-forming galaxies are $\langle \log(L_{\rm CO,obs}/L_{\rm CO,model}) \rangle \sim 0.2$~dex smaller than observed; the ratio is only $\sim 0.1$~dex for the local spirals and ULIRGs. The corresponding model and observed $L_{\rm TIR}$--$L'_{\rm CO}$ relations for the CO(1--0) line are shown in Fig.~\ref{fig:plots_HCNCO_SFRCO}. Line ratios from the literature are applied to determine the CO(1--0) fluxes (CO(2--1)/CO(1--0)$=0.7$, CO(3--2)/CO(1--0)$=0.77$ Genzel et al. 2010, and CO(3--2)/CO(1--0)$=0.5$ Tacconi et al. 2013). The differences between the model and observed relation are again due to the underestimation ($0.2$~dex) of the model CO luminosities of the smm and high-z star-forming galaxies with respect to observations. Overall, the model $L_{\rm TIR}$--$L'_{\rm CO}$ relations are consistent with the observed relations. \begin{table*} \begin{center} \caption{Comparison to observed CO, HCN(1--0), and HCO$^+$(1--0) data. Ratio between the model and observed line luminosities. 
\label{tab:correl}} \begin{tabular}{lccccc} \hline Galaxy sample & CO & HCN(1--0) & HCN(1--0) GS04$^{\rm a}$ & HCN(1--0) GC08$^{\rm b}$ & HCO$^+$(1--0) \\ \hline Preferred model & & & & & \\ \hline local spirals & $0.86 \pm 0.31$ & -- & $1.72 \pm 0.49$ & $1.13 \pm 0.26$ & -- \\ ULIRGs & $0.75 \pm 0.37$ & $0.67 \pm 0.22$ & $0.36 \pm 0.15$ & $0.51 \pm 0.13$ & $0.89 \pm 0.30$ \\ smm-galaxies & $0.67 \pm 0.20$ & -- & $0.37 \pm 0.06$ & $0.69 \pm 0.14$ & -- \\ high-z star-forming galaxies & $0.72 \pm 0.39$ & -- & $0.18 \pm 0.03$ & $0.27 \pm 0.06$ & -- \\ \hline Constant abundances & & & & & \\ \hline local spirals & $1.23 \pm 0.38$ & -- & $0.86 \pm 0.24$ & $0.56 \pm 0.12$ & -- \\ ULIRGs & $0.75 \pm 0.36$ & $0.32 \pm 0.10$ & $0.18 \pm 0.09$ & $0.25 \pm 0.08$ & $0.37 \pm 0.12$ \\ smm-galaxies & $0.63 \pm 0.19$ & -- & $0.20 \pm 0.05$ & $0.37 \pm 0.09$ & -- \\ high-z star-forming galaxies & $0.73 \pm 0.36$ & -- & $0.21 \pm 0.07$ & $0.30 \pm 0.07$ & -- \\ \hline $Q=1$ & & & & & \\ \hline ULIRGs & $0.55 \pm 0.33$ & $0.61 \pm 0.17$ & $0.33 \pm 0.14$ & $0.46 \pm 0.11$ & $0.91 \pm 0.29$ \\ smm-galaxies & $0.63 \pm 0.25$ & -- & $0.23 \pm 0.03$ & $0.44 \pm 0.08$ & -- \\ high-z star-forming galaxies & $0.74 \pm 0.43$ & -- & $0.19 \pm 0.04$ & $0.27 \pm 0.06$ & -- \\ \hline $\delta=15$ & & & & & \\ \hline local spirals & $1.09 \pm 0.47$ & -- & $2.20 \pm 1.04$ & $1.43 \pm 0.60$ & -- \\ ULIRGs & $0.67 \pm 0.40$ & $0.78 \pm 0.20$ & $0.40 \pm 0.13$ & $0.57 \pm 0.09$ & $1.13 \pm 0.36$ \\ smm-galaxies & $0.86 \pm 0.33$ & -- & $0.37 \pm 0.04$ & $0.69 \pm 0.08$ & -- \\ high-z star-forming galaxies & $0.74 \pm 0.43$ & -- & $0.19 \pm 0.04$ & $0.27 \pm 0.06$ & -- \\ \hline No cloud substructure & & & & & \\ \hline local spirals & $1.16 \pm 0.38$ & -- & $1.69 \pm 0.49$ & $1.10 \pm 0.27$ & -- \\ ULIRGs & $0.94 \pm 0.46$ & $0.73 \pm 0.25$ & $0.38 \pm 0.14$ & $0.55 \pm 0.13$ & $1.01 \pm 0.36$ \\ smm-galaxies & $0.91 \pm 0.30$ & -- & $0.39 \pm 0.06$ & $0.73 \pm 0.15$ & -- \\ high-z star-forming galaxies & $0.93 \pm 0.55$ & -- & $0.13 \pm 0.04$ & $0.19 \pm 0.08$ & -- \\ \hline No CR heating & & & & & \\ \hline local spirals & $0.85 \pm 0.31$ & -- & $1.70 \pm 0.48$ & $1.12 \pm 0.26$ & -- \\ ULIRGs & $0.75 \pm 0.37$ & $0.44 \pm 0.20$ & $0.22 \pm 0.07$ & $0.32 \pm 0.08$ & $0.80 \pm 0.32$ \\ smm-galaxies & $0.67 \pm 0.20$ & -- & $0.36 \pm 0.06$ & $0.68 \pm 0.14$ & -- \\ high-z star-forming galaxies & $0.47 \pm 0.27$ & -- & $0.12 \pm 0.02$ & $0.17 \pm 0.03$ & -- \\ \hline No HCN IR-pumping & & & & & \\ \hline local spirals & $0.85 \pm 0.31$ & -- & $1.70 \pm 0.48$ & $1.12 \pm 0.26$ & -- \\ ULIRGs & $0.75 \pm 0.37$ & $0.61 \pm 0.20$ & $0.34 \pm 0.16$ & $0.47 \pm 0.15$ & $0.89 \pm 0.30$ \\ smm-galaxies & $0.67 \pm 0.20$ & -- & $0.33 \pm 0.03$ & $0.63 \pm 0.09$ & -- \\ high-z star-forming galaxies & $0.72 \pm 0.39$ & -- & $0.16 \pm 0.04$ & $0.22 \pm 0.05$ & -- \\ \hline \end{tabular} \begin{tablenotes} \item Ratio between the model and observed line luminosities. \item local spirals: CO(2--1); ULIRGs: CO(1--0); smm-galaxies: CO(3--2); high-z star-forming galaxies: CO(3--2). \item $^{\rm a}$ with respect to the Gao \& Solomon (2004) relation. \item $^{\rm b}$ with respect to the Graci\'a-Carpio et al. (2008) relation. \end{tablenotes} \end{center} \end{table*} \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_CO.ps}} \caption{Model CO luminosity as a function of the observed luminosity for local spirals (CO(2--1)), local ULIRGs (CO(1--0)), submillimeter (CO(3--2)), and high-z star-forming galaxies (CO(3--2)). 
\label{fig:plots_HCNCO_CO}} \end{figure}
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_SFRCO.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_SFRCO1.ps}} \caption{Total infrared luminosity as a function of CO(1--0) luminosity. The upper panel shows observations, while the lower panel shows the model. The symbols are the same as in Fig.~\ref{fig:plots_HCNCO_CO}. \label{fig:plots_HCNCO_SFRCO}} \end{figure}
For the HCN(1--0) luminosities, the situation is more complex. We only compare $L_{\rm TIR}$--$L'_{\rm HCN}$ relations (upper panel of Fig.~\ref{fig:plots_HCNCO_HCN}) for the local spirals, smm, and high-z star-forming galaxies, because the overlap between the model and observed samples is very small. For the ULIRG sample, there are six out of nine galaxies in common between the Downes \& Solomon (1998) and Graci\'a-Carpio et al. (2008) samples, which permits a direct comparison of the HCN(1--0) luminosities (lower panel of Fig.~\ref{fig:plots_HCNCO_HCN}). Since the observed relations have different slopes, we compare the model relation to the relations observed by Gao \& Solomon (2004) and Graci\'a-Carpio et al. (2008). Based on the $L_{\rm TIR}$--$L'_{\rm HCN}$ relation, the model overestimates the HCN(1--0) luminosities of the local spiral galaxies by $\sim 0.2$~dex and underestimates those of ULIRGs by $\sim 0.3$~dex. However, the direct comparison of model and observed HCN(1--0) luminosities for ULIRGs yields an underestimation of only $0.13$~dex. This means that the Downes \& Solomon (1998) ULIRG sample contains mainly HCN-bright galaxies. Concerning the HCN emission of smm-galaxies, Gao \& Solomon (2007) claimed that the FIR/HCN ratios in these high-redshift sources lie systematically above the FIR/HCN correlation established for nearby galaxies by approximately a factor of $2$. This behavior is well reproduced by the model. Since there are no HCN detections of high-z star-forming galaxies in the literature, we can only suggest that their HCN emission might be a factor of $3$ lower than expected from observed $L_{\rm TIR}$--$L'_{\rm HCN}$ relations.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_HCN.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_HCN1.ps}} \caption{Upper panel: observed total infrared luminosity as a function of the model HCN(1--0) luminosity. The symbols are the same as in Fig.~\ref{fig:plots_HCNCO_CO}. In addition, orange stars represent the observed HCN(1--0) luminosities of a ULIRG subsample (from Graci\'a-Carpio et al. 2008). The solid line represents the correlation $L_{\rm TIR}=900 \times L'_{\rm HCN}$ found by Gao \& Solomon (2004). The dashed line represents the correlation found by Graci\'a-Carpio et al. (2008) with $L_{\rm TIR} = 1.28 \times L_{\rm FIR}$: $\log(L_{\rm TIR})=1.23 \times \log(L'_{\rm HCN})+1.06$. The lower panel shows the model HCN(1--0) luminosity as a function of the observed HCN(1--0) luminosity (Graci\'a-Carpio et al. 2008) for individual galaxies. The dashed line corresponds to a robust bisector fit. \label{fig:plots_HCNCO_HCN}} \end{figure}
The model HCN/CO ratio is compared to observations in Fig.~\ref{fig:plots_HCNCO_HCNCO} for the four galaxy samples. As expected, the model points for the local spiral galaxies lie below, whereas those of the ULIRGs lie above, the observed correlation.
Within the model, the smm-galaxies follow the same correlation as the ULIRGs, whereas the HCN emission of the high-z star-forming galaxies is significantly lower than expected from the observed correlation.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_HCNCO.ps}} \caption{Ratio between total infrared luminosity and CO(1--0) luminosity as a function of the ratio between HCN(1--0) and CO(1--0) luminosity, that is, the dense gas fraction. The symbols are the same as in Fig.~\ref{fig:plots_HCNCO_CO}. The solid line is the relation observed by Gao \& Solomon (2004). \label{fig:plots_HCNCO_HCNCO}} \end{figure}
The model integrated HCO$^+$ luminosity can be compared to observations via the $L_{\rm TIR}$--$L'_{\rm HCO+}$ relation (upper panel of Fig.~\ref{fig:plots_HCNCO_HCO}). The observed relations (Juneau et al. 2009, Garcia-Burillo et al. 2012) do not differ significantly. Within the model, the ULIRGs and high-z star-forming galaxies follow the observed relations. The smm-galaxies lie somewhat below the observed relation. The model HCO$^+$ luminosities are a factor of approximately three higher than the expected HCO$^+$ luminosities assuming $L'_{\rm HCN}=L'_{\rm HCO+}$ (e.g., Nguyen et al. 1992, Brouillet et al. 2005, Knudsen et al. 2007). The direct comparison of the model and observed HCO$^+$ luminosities for the ULIRG sample shows good agreement (lower panel of Fig.~\ref{fig:plots_HCNCO_HCO}). The model thus seems to overestimate the HCO$^+$ luminosities of the local spiral galaxies. We note that the HCO$^+$ emission strongly depends on the cosmic ray ionization rate used for the chemical network.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_HCO.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_HCO1.ps}} \caption{Upper panel: observed total infrared luminosity as a function of the model HCO$^+$(1--0) luminosity. The symbols are the same as in Fig.~\ref{fig:plots_HCNCO_CO}. The dashed line represents the relation $\log(L_{\rm TIR})=0.99 \times \log(L'_{\rm HCO+})+3.25$ found by Juneau et al. (2009). The dotted line represents the relation $\log(L_{\rm TIR})=1.06 \times \log(L'_{\rm HCO+})+2.75$ found by Garcia-Burillo et al. (2012) with $L_{\rm TIR} = 1.28 \times L_{\rm FIR}$. Lower panel: model HCO$^+$(1--0) luminosity as a function of the observed HCO$^+$(1--0) luminosity. The solid line corresponds to equality, the dotted lines to factors of $1/2$ and $2$. The dashed line represents a robust bisector fit to the data. \label{fig:plots_HCNCO_HCO}} \end{figure}
As an additional consistency check, the HCN/HCO$^+$ ratio as a function of the total infrared luminosity is shown in Fig.~\ref{fig:plots_HCNCO_HCNHCO}. Observations of local spiral galaxies (e.g., Nguyen et al. 1992, Brouillet et al. 2005, Knudsen et al. 2007) show $\langle \log(L'_{\rm HCN}/L'_{\rm HCO^+}) \rangle \sim 0.0$. As expected, the HCN/HCO$^+$ ratio of the local spiral galaxies is $\sim 0.2$~dex smaller than the observed ratio. The model HCN/HCO$^+$ ratios for the ULIRGs are well within the observed range (Graci\'a-Carpio et al. 2008).
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_HCNHCO.ps}} \caption{The ratio between HCN(1--0) and HCO$^+$(1--0) luminosities as a function of the total infrared luminosity. The symbols are the same as in Fig.~\ref{fig:plots_HCNCO_CO}. In addition, orange stars represent the observed HCN(1--0) of a ULIRG subsample (from Graci\'a-Carpio et al. 2008).
\label{fig:plots_HCNCO_HCNHCO}} \end{figure}
We conclude that, overall, the CO luminosities are underestimated by up to a factor of $1.5$. The model HCN luminosities for local spirals are overestimated by a factor of $1.5$ and those for ULIRGs are underestimated by a factor of $1.5$ with respect to observations. The model HCN luminosities are consistent with observations for the smm-galaxies. The model HCO$^+$ luminosities are consistent with observations for all galaxy samples except the local spiral galaxies, where they are significantly overestimated.
\subsection{CO SLEDs}
The lowest three rotational transitions of CO, which trace the cooler gas component, are relatively easily accessible with ground-based radio and submillimeter telescopes, and have been observed in many local galaxies. It was not until the launch of the Herschel Space Observatory (Pilbratt et al. 2010) that the CO ladder up to $J=13$ became generally accessible for the ISM within our Galaxy and in nearby galaxies. Early SPIRE observations showed much brighter high-$J$ CO emission than would be predicted by cool ($T_{\rm kin} < 50$~K) molecular gas in giant molecular clouds, the type of gas responsible for the CO(1--0) emission. A warmer, denser (higher pressure) component of molecular gas is responsible for the emission of mid- ($J=4$--$3$ to $J=6$--$5$) and high-$J$ ($J=7$--$6$ and above) CO lines. Far-IR CO rotational lines, with $J_{\rm upper} \ge 13$, arise from states $500$--$7000$~K above the ground state and have critical densities of $10^6$ to $10^8$~cm$^{-3}$ (Hailey-Dunsheath et al. 2012). It is well established empirically and observationally that different CO excitation properties characterize local spiral galaxies as opposed to merger-driven ULIRGs (e.g., Daddi et al. 2015). The latter are much more highly excited in their high-$J$ CO transitions (Weiss et al. 2007, Papadopoulos et al. 2012). The model CO SLEDs of all galaxies of the four samples and the mean CO SLEDs are presented in Fig.~\ref{fig:plots_HCNCO_SLED}. The latter can be directly compared to the observed mean SLEDs in Fig.~8 of Daddi et al. (2015). We can only compare the model CO SLEDs of the local spiral galaxies to that of the inner Milky Way, which shows, as expected, higher intensities for transitions with $J_{\rm upper} \ge 3$. The model CO SLEDs of the ULIRGs are consistent in shape and absolute values with observations (Daddi et al. 2015). The shape of the model CO SLEDs of the smm and high-z star-forming galaxies is different from that of the observed SLEDs; whereas the shape of the model CO SLEDs is concave, that of the observed CO SLED is convex. This difference is mainly due to the CO(3--2) flux, which in the model is a factor of two higher than observed.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_SLED.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_SLED1.ps}} \caption{Upper panel: model CO spectral line energy distributions (SLEDs) of the local spiral, ULIRG, submillimeter, and high-z star-forming galaxies. The thick solid line corresponds to a constant brightness temperature. The lower panel shows the mean model CO SLEDs of our galaxy samples. For direct comparison with Fig.~8 of Daddi et al. (2015), the CO(1--0) emission of all galaxies was set to $I_{\rm CO}=0.2$~Jy\,km\,s$^{-1}$. \label{fig:plots_HCNCO_SLED}} \end{figure}
In a sample of ULIRGs, higher CO transitions ($J_{\rm upper} > 6$) were observed by Kamenetzky et al. (2016).
Their mean CO SLEDs for galaxies with total infrared luminosities between $3 \times 10^{11}$ and $10^{12}$~L$_{\odot}$ and higher than $10^{12}$~L$_{\odot}$ are shown together with the model ULIRG CO SLEDs in Fig.~\ref{fig:ulirg_ladder}. The two compact residual disks of Arp~220 (E and W) cannot be directly compared to this sample because Arp~220 would be seen as one entity (Disk + East + West) by the Herschel satellite. The shape and the absolute values of the observed CO SLEDs are well reproduced by the model. \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{ulirg_ladder.ps}} \caption{CO SLEDs of the local ULIRGs. The model CO SLEDs are shown as colored lines. The black boxes and triangles linked by black lines represent the observed mean CO SLEDs of ULIRGs with total infrared luminosities between $3 \times 10^{11}$ and $10^{12}$~L$_{\odot}$ and higher than $10^{12}$~L$_{\odot}$, respectively (from Kamenetzky et al. 2016). \label{fig:ulirg_ladder}} \end{figure} \subsection{Integrated CO and HCN conversion factors \label{sec:conversion}} Since the integrated molecular fraction and the line emission are calculated within the model (see Sect.~\ref{sec:molfrac}), the integrated mass-to-light conversion factors can be determined. For the CO line emission we use two approaches: (i) the model CO(1--0) is adopted for all galaxies (upper panel of Fig.~\ref{fig:plots_HCNCO_alphaCO}) and (ii) the model flux of the observed CO line (CO(2--1) for the local spiral galaxies, CO(1--0) for the ULIRGs, CO(3--2) for the smm-galaxies, and CO(3--2) for the high-z star-forming galaxies) is calculated and a line ratio given in the literature is applied to determine the CO(1--0) flux (CO(2--1)/CO(1--0)$=0.7$, CO(3--2)/CO(1--0)$=0.77$ Genzel et al. 2010, and CO(3--2)/CO(1--0)$=0.5$ Tacconi et al. 2013) (lower panel of Fig.~\ref{fig:plots_HCNCO_alphaCO}). The mass-to-light conversion factor is then calculated by $\alpha_{\rm CO}=M_{\rm H_2}/L'_{\rm CO}$. The mean conversion factors are given in Table~\ref{tab:convtable}. \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_alphaCO.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_alphaCO1.ps}} \caption{Upper panel: CO(1--0) conversion factors for our galaxy samples. Lower panel shows CO conversion factors based on CO(2--1) (local spirals), CO(1--0) (ULIRGs), CO(4--3) (smm-galaxies), and CO(3--2) (high-z star-forming galaxies). These conversion factors imply the following line ratios: CO(2--1)/CO(1--0)$=0.7$, CO(4--3)/CO(1--0)$=0.63$ (Genzel et al. 2010), and CO(3--2)/CO(1--0)$=0.5$ (Tacconi et al. 2013). \label{fig:plots_HCNCO_alphaCO}} \end{figure} The model CO(1--0) conversion factor varies between $2$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$ and $6$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$ with one galaxy showing $\alpha_{\rm CO}=10$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$. The mean conversion factor of local spirals ($\langle \alpha_{\rm CO} \rangle =4.7 \pm 1.8$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$) is close to the observed value of $\alpha_{\rm CO}=4.3$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$ (Bolatto et al. 2013). The mean conversion factor of the ULIRGs ($\langle \alpha_{\rm CO} \rangle =1.7 \pm 0.4$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$) is twice the usually assumed conversion factor for dense starburst galaxies ($\alpha_{\rm CO}=0.8$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$; Downes \& Solomon 1998) while that of smm-galaxies is similar to it. 
The model CO(1--0) conversion factor of high-z star-forming galaxies is intermediate between those of local spiral galaxies and ULIRGs/smm-galaxies ($\langle \alpha_{\rm CO} \rangle =2.6 \pm 0.9$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$). The situation changes only slightly if higher-$J$ transitions are used instead of CO(1--0). The only notable change is the decrease of the CO conversion factor of the high-z star-forming galaxies by $30$\,\% to $\langle \alpha_{\rm CO} \rangle =1.6 \pm 0.6$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$. However, the distribution of the CO conversion factors for high-z star-forming galaxies shows a tail with conversion factors comparable to that of the Galaxy. The HCN conversion factor is usually given with respect to the dense gas, that is, gas with densities exceeding $n=3 \times 10^4$~cm$^{-3}$ (e.g., Gao \& Solomon 2004). For completeness, we give the HCN(1--0)--$M_{\rm H_2}$ and the HCN(1--0)--$M_{\rm dense}$ conversion factors in Fig.~\ref{fig:plots_HCNCO_alphaHCN}. Only the latter conversion factor can be compared to the literature (lower panel of Fig.~\ref{fig:plots_HCNCO_alphaHCN}), where $\alpha_{\rm HCN}=10$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$ is assumed (e.g., Gao \& Solomon 2004). The mean HCN conversion factors are $\alpha_{\rm HCN}=21 \pm 6$, $33 \pm 17$, and $59 \pm 21$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$ for the local spiral galaxies/ULIRGs, smm-galaxies, and high-z star-forming galaxies, respectively (Table~\ref{tab:convtable}). The mean HCO$^+$ conversion factors are $\alpha_{\rm HCO+}=11 \pm 2$, $17 \pm 5$, $19 \pm 11$, and $25 \pm 7$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$ for the local spiral galaxies, ULIRGs, smm-galaxies, and high-z star-forming galaxies, respectively (Table~\ref{tab:convtable}). Since the HCO$^+$(1--0) luminosity of the local spiral galaxies is significantly overestimated by the model, the associated conversion factor is a lower limit. We thus find a relatively uniform HCO$^+$ conversion factor $\alpha_{\rm HCO+} \sim 20$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$ for all galaxy samples. We conclude that all model mass-to-light conversion factors are consistent with the values used in the literature within a factor of two. Both the HCN and HCO$^+$ emission trace the dense molecular gas to within a factor of approximately two for the local spiral galaxies, ULIRGs, and smm-galaxies. For the high-z star-forming galaxies, HCO$^+$ might be the better tracer, but this needs to be confirmed.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_alphaHCN.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_alphaHCN1.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_alphaHCO.ps}} \caption{Upper panel: HCN(1--0)-to-H$_2$ conversion factor for our galaxy samples. The middle panel shows the HCN(1--0)-to-dense gas ($n > 3 \times 10^{4}$~cm$^{-3}$) conversion factor. The lower panel shows the HCO$^+$(1--0)-to-dense gas ($n > 3 \times 10^{4}$~cm$^{-3}$) conversion factor.
\label{fig:plots_HCNCO_alphaHCN}} \end{figure}
\begin{table*} \begin{center} \caption{CO(1--0), HCN(1--0), and HCO$^+$(1--0) conversion factors in units of M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$.\label{tab:convtable}} \begin{tabular}{lcccc} \hline Galaxy sample & CO(1--0) & HCN(1--0) & HCN(1--0) & HCO$^+$(1--0) \\ & --H$_2$ & --H$_2$ & --dense gas & --dense gas \\ \hline Preferred model & & & & \\ \hline local spirals & $4.7 \pm 1.8$ & $62 \pm 15$ & $21 \pm 6$ & $11 \pm 2$ \\ ULIRGs & $1.7 \pm 0.4$ & $11 \pm 5$ & $21 \pm 6$ & $17 \pm 5$ \\ smm-galaxies & $1.4 \pm 0.7$ & $20 \pm 10$ & $33 \pm 17$ & $19 \pm 11$ \\ high-z star-forming & $2.6 \pm 0.9$ & $85 \pm 37$ & $59 \pm 21$ & $25 \pm 7$ \\ \hline No substructure & & & & \\ \hline local spirals & $2.9 \pm 0.9$ & $52 \pm 10$ & -- & -- \\ ULIRGs & $1.3 \pm 0.3$ & $10 \pm 5$ & $8 \pm 3$ & $6 \pm 2$ \\ smm-galaxies & $0.9 \pm 0.3$ & $16 \pm 8$ & $6 \pm 2$ & $3 \pm 1$ \\ high-z star-forming & $1.6 \pm 0.6$ & $111 \pm 75$ & $11 \pm 9$ & $4 \pm 3$ \\ \hline \end{tabular} \begin{tablenotes} \item For the dense gas, the literature value is $\alpha_{\rm HCN}=10$~M$_{\odot}$(K\,km\,s$^{-1}$pc$^2$)$^{-1}$ (e.g., Gao \& Solomon 2004). \end{tablenotes} \end{center} \end{table*}
\section{Variation of model parameters \label{sec:variation}} Only the preferred model is presented in Sect.~\ref{sec:results}. A natural question to ask is how our model results depend on the different model assumptions and free parameters. To answer this question, we replace the chemical network by constant molecular abundances and vary the free parameters $Q$ and $\delta$. In addition, in Sect.~\ref{sec:discussion}, we remove the cloud substructure, cosmic ray heating, and HCN IR-pumping.
\subsection{Importance of the chemical network \label{sec:constabund}} As a first test, we adopted constant molecular abundances for the CO, HCN, and HCO$^+$ molecules, which are scaled with metallicity: $x_{\rm CO}=10^{-4}$, $x_{\rm HCN}=x_{\rm HCO^+}=2 \times 10^{-8}$. Whereas the CO abundance corresponds to the canonical value (e.g., Draine 2011), the assumption of constant HCN and HCO$^+$ abundances is not well justified. The resulting ratios between the model and observed line luminosities are shown in Table~\ref{tab:correl}. The means of the ratios between the model and observed CO luminosities are $\langle \log(L'_{\rm CO,\ model}/L'_{\rm CO,\ obs}) \rangle =-0.09 \pm 0.15$ for the local spiral, $-0.10 \pm 0.16$ for the ULIRG, $-0.19 \pm 0.12$ for the smm-galaxy, and $-0.20 \pm 0.22$ for the high-z star-forming galaxy samples, respectively. Thus, the observed CO luminosities are reproduced for all galaxy samples by the model within $\sim 0.2$~dex or a factor of $\sim 1.6$. The resulting HCN(1--0) luminosities are compared to observations in Fig.~\ref{fig:plots_HCNCO_constantabundances_HCN}. In order to compare all model HCN luminosities to observations even in the absence of HCN measurements, we assumed $L'_{\rm HCN,\ obs}=L_{\rm TIR,\ obs}/900$ (Gao \& Solomon 2004). The means of the ratios between the model and observed HCN luminosities are $\langle \log(L'_{\rm HCN,\ model}/L'_{\rm HCN,\ obs}) \rangle =0.22 \pm 0.12$ for the local spiral, $-0.47 \pm 0.15$ for the ULIRG, $-0.44 \pm 0.07$ for the smm-galaxy, and $-0.75 \pm 0.08$ for the high-z star-forming galaxy samples, respectively.
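For reference, the mean offsets quoted here and in Table~\ref{tab:correl} are simple statistics of the logarithmic model-to-observed luminosity ratios; a minimal sketch of this calculation (with made-up numbers, not values from our samples) is:
\begin{verbatim}
import numpy as np

def mean_dex_offset(l_model, l_obs):
    """Mean and standard deviation of log10(L_model / L_obs) in dex."""
    ratio = np.log10(np.asarray(l_model, dtype=float) /
                     np.asarray(l_obs, dtype=float))
    return ratio.mean(), ratio.std(ddof=1)

# Example with made-up luminosities (identical, arbitrary units):
print(mean_dex_offset([2.0e9, 5.0e9, 1.0e10], [2.5e9, 4.0e9, 1.3e10]))
\end{verbatim}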
Whereas the model HCN luminosities agree well with the observed HCN luminosities for the local spiral galaxies, the model underestimates the HCN luminosities by up to a factor of four for the ULIRG, smm-galaxy, and high-z star-forming galaxies.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_constantabundances_HCN.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_constantabundances_HCN_1.ps}} \caption{{\it Constant abundances model.} Upper panel shows observed total infrared luminosity as a function of the model HCN luminosity. The solid line corresponds to $L_{\rm TIR,\ obs}=900 \times L'_{\rm HCN,\ obs}$ (Gao \& Solomon 2004). The dashed line corresponds to $L_{\rm TIR,\ obs}=12 \times L_{\rm HCN,\ obs}^{'\ 1.23}$ (Graci\'a-Carpio et al. 2008) with factors of $0.5$ and $2$ (dotted lines). Lower panel shows the model HCN(1--0) luminosity as a function of the observed HCN(1--0) luminosity. The dashed line represents a robust bisector fit. Compare to Fig.~\ref{fig:plots_HCNCO_HCN}. \label{fig:plots_HCNCO_constantabundances_HCN}} \end{figure}
Observations of HCN(1--0)/HCO$^+$(1--0) ratios of large galaxy samples are rare. Graci\'a-Carpio et al. (2006) and Juneau et al. (2009) found $\langle \log(L'_{\rm HCN,\ obs}/L'_{\rm HCO^+,\ obs}) \rangle \sim 0.2 \pm 0.2$ in local ULIRGs. The model with constant abundances systematically underestimates the HCN/HCO$^+$ ratio by approximately a factor of three for all galaxy samples (Fig.~\ref{fig:plots_HCNCO_constantabundances_HCO+}). We conclude that, as expected, our model with constant HCN and HCO$^+$ abundances does not lead to HCN and HCO$^+$ luminosities that are comparable to observations.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_constantabundances_HCO+.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_constantabundances_HCO+_1.ps}} \caption{{\it Constant abundances model.} Upper panel shows observed total infrared luminosity as a function of the HCO$^+$(1--0) luminosity. Lower panel shows the model HCN(1--0) luminosity as a function of the observed HCO$^+$(1--0) luminosity. The dashed line represents a robust bisector fit. Compare to Fig.~\ref{fig:plots_HCNCO_HCO}. \label{fig:plots_HCNCO_constantabundances_HCO+}} \end{figure}
We conclude that models with the canonical CO abundances reproduce observations as well as models using a detailed chemical network. However, constant HCN and HCO$^{+}$ abundances yield HCN and HCO$^{+}$ line luminosities that are at least a factor of two smaller than the observed line luminosities.
\subsection{Toomre Q \label{sec:Q}} To investigate the influence of the Toomre $Q$ parameter on the model CO and HCN line emission, we tested $Q=1$ instead of $Q=1.5$ for the ULIRGs, smm, and high-z star-forming galaxies (see Table~\ref{tab:correl}). The comparison of the model CO and HCN luminosities and CO SLED with observations is presented in Fig.~\ref{fig:plots_HCNCO_Q_CO}. The resulting ratios between the model and observed line luminosities are shown in Table~\ref{tab:correl}. Whereas the model CO and HCO$^+$ line luminosities are barely affected, the model HCN line luminosity decreases by $\sim 15$\,\% when $Q=1$ is used instead of $Q=1.5$. The CO SLEDs of the smm and high-z star-forming galaxies do not change significantly with respect to $Q=1.5$, but the ULIRG CO SLED increases by $\sim 30$\,\% (lower panel of Fig.~\ref{fig:plots_HCNCO_Q_CO}).
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_Q_CO.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_Q_HCN.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_Q_SLED.ps}} \caption{Same as Fig.~\ref{fig:plots_HCNCO_CO}, upper panel of Fig.~\ref{fig:plots_HCNCO_HCN}, and lower panel of Fig.~\ref{fig:plots_HCNCO_SLED}, but the ULIRG, smm-galaxy, and high-z star-forming galaxy models were calculated with $Q=1$, that is, maximum gas mass. \label{fig:plots_HCNCO_Q_CO}} \end{figure}
We conclude that the low-$J$ CO, HCN, and HCO$^+$ line emissions of all galaxies are not significantly affected when $Q$ is decreased from $1.5$ to $1$. However, a variation of $Q$ changes the gas velocity dispersion significantly (Sect.~\ref{sec:veldisp}).
\subsection{The scale parameter $\delta$ \label{sec:delta}} As described in Sect.~\ref{sec:clumpiness}, the scale of the largest self-gravitating clouds $l_{\rm cl}$ is smaller than the turbulent driving length scale $l_{\rm driv}$ by a factor $\delta=l_{\rm driv}/l_{\rm cl}$. For a typical cloud size of $20$~pc and a driving length scale of $100$~pc, one obtains $\delta=5$, which is the mean value determined for local spiral galaxies by Vollmer \& Leroy (2011). To investigate the influence of the scale parameter $\delta$ on the model CO and HCN line emission, we used $\delta=15$ instead of $\delta=5$ for all galaxies. The comparison of the model CO and HCN luminosities and CO SLED with observations is presented in Fig.~\ref{fig:plots_HCNCO_Q_delta}. The resulting ratios between the model and observed line luminosities are shown in Table~\ref{tab:correl}.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_delta_CO.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_delta_HCN.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_delta_SLED.ps}} \caption{Same as Fig.~\ref{fig:plots_HCNCO_CO}, upper panel of Fig.~\ref{fig:plots_HCNCO_HCN}, and lower panel of Fig.~\ref{fig:plots_HCNCO_SLED}, but the ULIRG, smm-galaxy, and high-z star-forming galaxy models were calculated with $\delta=15$. \label{fig:plots_HCNCO_Q_delta} } \end{figure}
As for the $Q=1$ model, the model CO, HCN, and HCO$^+$ line luminosities are barely affected. The effect on the CO SLEDs is also comparable to that of the $Q=1$ models: whereas the CO SLEDs of the local spiral galaxies, smm, and high-z star-forming galaxies do not change significantly with respect to $\delta=5$, the ULIRG CO SLED increases by $\sim 30$\,\%. We conclude that the low-$J$ CO, HCN, and HCO$^+$ line emissions of all galaxies are not significantly affected, but the CO SLED of the ULIRGs is increased by $\sim 30$\,\% when $\delta$ is increased by a factor of three. In contrast to the decrease of $Q$, the increase of $\delta$ does not result in a significant increase of the gas velocity dispersion (Sect.~\ref{sec:veldisp}).
\section{Galactic physics \label{sec:discussion}} In this section we examine the role of non-self-gravitating clouds in molecular line emission and investigate how different recipes for galactic physics influence the model results.
\subsection{The role of non-self-gravitating clouds for molecular line emission \label{sec:nonself}} Within the framework of the analytical model (Sect.~\ref{sec:model}), the scale parameter $\delta$ determines the density of the self-gravitating clouds $\rho_{\rm cl}=\rho/\phi_{\rm V}$ via $t_{\rm ff}^l = t_{\rm turb}^l$.
At this characteristic spatial length scale, the scaling relations for the density and velocity dispersion change (see Sect.~\ref{sec:scaling}). We would like to know which fractions of molecular line emission originate in self-gravitating and non-self-gravitating gas clouds in our galaxy samples. To answer this question, the ratio between the molecular line emission of non-self-gravitating clouds and the total line emission is shown as a function of the total infrared luminosity in Fig.~\ref{fig:newplots_CO}.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{newplots_CO.ps}} \resizebox{\hsize}{!}{\includegraphics{newplots_HCN.ps}} \resizebox{\hsize}{!}{\includegraphics{newplots_HCO.ps}} \caption{The role of non-self-gravitating clouds in molecular line emission. Ratio between the CO(1--0) (upper panel), HCN(1--0) (middle panel), and HCO$^+$ (lower panel) emission of non-self-gravitating clouds and the total emission in the respective line. The symbols are the same as in Fig.~\ref{fig:plots_HCNCO_CO}. \label{fig:newplots_CO}} \end{figure}
In the local spiral galaxies, approximately half of the total CO emission originates in non-self-gravitating clouds ($f_{\rm nsg, CO} \sim 0.5$). This is consistent with the ratio between CO emission from diffuse clouds and the total emission $I_{\rm diff}/I_{\rm tot}=0.6 \pm 0.1$ determined by Polk et al. (1988).\footnote{$I_{\rm diff}/I_{\rm tot}=1/(1+F)$ with $F=0.7 \pm 0.3$ from Polk et al. (1988).} In the ULIRGs, the fraction is $f_{\rm nsg, CO} \sim 0.8$. The smm-galaxies also show high fractions of emission from non-self-gravitating clouds; however, there are also two smm-galaxies with $f_{\rm nsg, CO} \sim 0.4$. The high-z star-forming galaxies show an approximately flat distribution with $0.3 \le f_{\rm nsg, CO} \le 0.9$. For the HCN and HCO$^+$ emission, the situation is different. Since the critical densities for these transitions are much higher than for the CO emission, it is expected that the denser self-gravitating clouds dominate the HCN and HCO$^+$ emission. This is indeed the case for the local spiral galaxies and the high-z star-forming galaxies. However, in the ULIRGs and smm-galaxies, the HCN and HCO$^+$ emission mostly originates in non-self-gravitating clouds ($0.7 \le f_{\rm nsg, HCN/HCO^+} \le 0.9$). Even in local spiral galaxies, the model predicts that $\sim 30$\,\% of the HCN(1--0) emission is emitted by non-self-gravitating clouds. In these clouds, the effective critical density might be as low as $n_{\rm crit}^{\rm HCN} \sim 10^4$~cm$^{-3}$ due to radiative trapping, approximately $30$ times lower than the critical density in the optically thin limit (Shirley 2015). Future combined interferometric and single-dish HCN(1--0) observations of local spiral galaxies will be able to test our prediction. We conclude that compact starburst galaxies (ULIRGs and smm-galaxies) have, on average, significantly higher fractions of molecular line emission that originates in non-self-gravitating clouds. This effect is more pronounced for HCN(1--0) and HCO$^+$(1--0) emission. In compact starburst galaxies, the molecular line emission is most frequently dominated by emission from non-self-gravitating clouds/filaments. This is due to the highly compact nature of ULIRG and smm-galaxy centers, such that the density of the intercloud medium rivals or even exceeds the density of Galactic giant molecular clouds.
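The split between self-gravitating and non-self-gravitating gas ultimately derives from the density probability distribution of the turbulent gas (Sect.~\ref{sec:gasfrac}). As a rough, self-contained illustration of such a split (a toy calculation under assumed parameters, not the exact prescription of our model), the following sketch evaluates the mass fraction above a given overdensity for a lognormal density PDF whose width is set by an assumed Mach number and forcing parameter:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def mass_fraction_above(x_crit, mach, b=0.5):
    """Mass fraction of gas with overdensity x = rho/<rho> above x_crit,
    for a volume-weighted lognormal PDF whose variance is set by the
    Mach number (toy illustration; b is an assumed forcing parameter)."""
    sigma2 = np.log(1.0 + b**2 * mach**2)   # variance of s = ln(x)
    mu = -0.5 * sigma2                      # ensures <x>_V = 1
    def p_mass(s):                          # mass-weighted PDF: x * p_V(s)
        return np.exp(s) * np.exp(-(s - mu)**2 / (2.0 * sigma2)) \
               / np.sqrt(2.0 * np.pi * sigma2)
    frac, _ = quad(p_mass, np.log(x_crit), 50.0)
    return frac

# Example: Mach 10 turbulence, threshold at 30 times the mean density
print(mass_fraction_above(30.0, 10.0))
\end{verbatim}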
\subsection{The role of cloud substructure \label{sec:substruct}} The molecular line emission of giant molecular clouds shows substructure (e.g., Heyer \& Dame 2015). This substructure is taken into account by the model (Sect.~\ref{sec:model}). To investigate the role of GMC substructure, we tested a model in which the self-gravitating clouds are assumed to have uniform densities. The resulting CO and HCN luminosities are presented in Fig.~\ref{fig:plots_HCNCO_nosub_CO}.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_nosub_CO.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_nosub_HCN.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_nosub_SLED.ps}} \caption{Same as Fig.~\ref{fig:plots_HCNCO_CO}, upper panel of Fig.~\ref{fig:plots_HCNCO_HCN}, and lower panel of Fig.~\ref{fig:plots_HCNCO_SLED}, but the local spiral galaxy, ULIRG, smm, and high-z star-forming galaxy models were calculated without taking into account substructure of the self-gravitating clouds. \label{fig:plots_HCNCO_nosub_CO}} \end{figure}
It might be expected that the higher densities in cloud substructures, that is, cores, increase the molecular line emission. To our surprise, the changes, with respect to the model including substructure of self-gravitating clouds, are marginal (Table~\ref{tab:correl}). As expected, these small changes are most visible at high-$J$ CO transitions. This effect can be explained by a balance between the absence of high-density gas and the increase of the area-filling factor of emission from self-gravitating clouds with a uniform gas density.
\subsection{Cosmic ray heating \label{sec:CRheat}} The molecular line emission depends on the gas density, column density, and temperature. Within the model framework, the gas temperature within the dense gas clouds depends on the turbulent and cosmic ray heating (see Sect.~\ref{sec:gdtemp}). The cosmic ray ionization rate is a factor of $40$ higher in ULIRGs and smm-galaxies than in local spiral galaxies. To investigate the influence of the cosmic ray heating, we made a model calculation without this heating term. The resulting CO and HCN luminosities are presented in Fig.~\ref{fig:plots_HCNCO_noCR_CO}. The CO(1--0) luminosities of all galaxies are not altered by the absence or presence of cosmic ray heating (Table~\ref{tab:correl}). This implies that, in the relevant density regime of $300$--$1000$~cm$^{-3}$, turbulent heating dominates over cosmic ray heating. One would expect that the situation changes for molecular line transitions with higher critical densities because the cosmic ray heating is proportional to the gas density, whereas the turbulent heating is proportional to the square root of the gas density (see Sect.~\ref{sec:gdtemp}). Surprisingly, the model ULIRG CO emission of transitions with upper $J$ of $2 \le J \le 11$ only changes by at most $\pm 0.1$~dex. Four ULIRGs even show a slight increase of CO emission around $J=7$ when the CR heating is suppressed. We suspect that this effect is due to a changing chemistry; a thorough investigation is beyond the scope of this work. There is no effect of CR heating on HCN(1--0) and HCO$^+$(1--0) emission for the local spiral and smm-galaxies. In the absence of CR heating, the HCN(1--0) line luminosity decreases by a factor of approximately 1.5 for ULIRGs and high-z star-forming galaxies.
The HCO$^+$(1--0) line emission shows the same behavior as the HCN(1--0) line emission for local spiral galaxies, smm-galaxies, and high-z star-forming galaxies. For ULIRGs, the situation is more complicated; whereas the overall comparison with the observed $L_{\rm IR}$--$L_{\rm HCO^+}$ relation shows a decreasing line emission in the absence of CR heating, the direct comparison of the model and observed HCO$^+$ luminosities is consistent with no change.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_noCR_CO.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_noCR_HCN.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_noCR_SLED.ps}} \caption{As for Fig.~\ref{fig:plots_HCNCO_CO}, upper panel of Fig.~\ref{fig:plots_HCNCO_HCN}, and lower panel of Fig.~\ref{fig:plots_HCNCO_SLED}, but the local spiral galaxy, ULIRG, smm, and high-z star-forming galaxy models were calculated without cosmic ray heating. \label{fig:plots_HCNCO_noCR_CO}} \end{figure}
We conclude that within the framework of this model, the inclusion of CR heating does not significantly affect the CO emission, but increases the HCN(1--0) and HCO$^+$(1--0) emission by at most a factor of two. This factor is needed to reproduce the observed HCN(1--0) emission of the ULIRGs.
\subsection{Infrared-pumping \label{sec:IRpumping}} Infrared pumping of HCN (Sect.~\ref{sec:irpumping}) via the $14~\mu$m bending modes is suggested to play an important role for the HCN(1--0) emission of ULIRGs (e.g., Aalto et al. 2015). To investigate the influence of infrared pumping on the HCN(1--0) line emission, we calculated a model without infrared pumping. The results are presented in Fig.~\ref{fig:plots_HCNCO_noIRpump_HCN} and Table~\ref{tab:correl}. The HCN(1--0) emission of local spiral galaxies and smm-galaxies is not significantly affected by HCN infrared pumping. The HCN(1--0) emission of ULIRGs and high-z star-forming galaxies decreases by $20$--$30$\,\% when the infrared pumping is suppressed. Thus, within the framework of our model, HCN infrared pumping has a measurable but relatively small effect on the HCN(1--0) emission of ULIRGs and high-z star-forming galaxies.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_noIRpump_HCN.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_HCNCO_noIRpump_HCN1.ps}} \caption{{\it The local spiral galaxy, ULIRG, smm, and high-z star-forming galaxy models were calculated without IR-pumping for HCN}. Upper panel shows observed total infrared luminosity as a function of the model HCN luminosity. The solid line corresponds to $L_{\rm TIR,\ obs}=900 \times L'_{\rm HCN,\ obs}$ (Gao \& Solomon 2004). The dashed line corresponds to $L_{\rm TIR,\ obs}=12 \times L_{\rm HCN,\ obs}^{'\ 1.23}$ (Graci\'a-Carpio et al. 2008) with factors of $0.5$ and $2$ (dotted lines). Lower panel shows the model HCN(1--0) luminosity as a function of the observed HCN(1--0) luminosity. Compare to Figs.~\ref{fig:plots_HCNCO_CO} and \ref{fig:plots_HCNCO_HCN}. \label{fig:plots_HCNCO_noIRpump_HCN}} \end{figure}
\section{Physical properties of the galaxies \label{sec:physpar}} \subsection{Gas fraction \label{sec:gasfraction}} The observed high gas fractions ($M_{\rm H_2}/(M_{\rm H_2}+M_*)$) of high-z star-forming galaxies (Tacconi et al. 2013) strongly depend on the assumed CO conversion factor $\alpha_{\rm CO}$. Tacconi et al. (2013) used a Galactic conversion factor. Our CO conversion factor is a factor of approximately two smaller (Sect.~\ref{sec:conversion}). Genzel et al.
(2015) claimed that the H$_2$ mass estimates of the high-z star-forming galaxies with a Galactic conversion factor agree to better than $50$\,\% with the H$_2$ mass estimates based on the infrared SEDs. We suppose that our molecular mass estimates are consistent with those of Tacconi et al. (2013), given that the observed and model (see Sect.~\ref{sec:uncertain}) CO and dust conversion factors are each uncertain by approximately a factor of two. The infrared luminosities of high-z star-forming galaxies ($\sim 10^{11}$~L$_{\odot}$) are between those of local spiral galaxies ($\sim 10^{10}$~L$_{\odot}$) and ULIRGs ($\sim 10^{12}$~L$_{\odot}$). It thus seems reasonable to find conversion factors for high-z star-forming galaxies that range between the Galactic conversion factor and that for ULIRGs. The model H$_2$ gas fractions are presented in Fig.~\ref{fig:gasfractions}.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{gasfractions.ps}} \caption{Model gas fraction as a function of the stellar mass. \label{fig:gasfractions}} \end{figure}
Whereas the H$_2$ gas fraction of local spirals is approximately $5$\,\%, that of the high-z star-forming galaxies ranges between $10$ and $30$\,\%, with a mean of $\sim 20$\,\%. Tacconi et al. (2013) found an H$_2$ gas fraction of $0.33$. If we use a Galactic conversion factor, we recover these high gas fractions. However, we determined a conversion factor for the high-z star-forming galaxies which is half of the Galactic conversion factor. Thus, at least for our model counterparts of the high-z star-forming galaxies, we know that the H$_2$ gas fraction is lower. In our model, the highest H$_2$ gas fractions, exceeding $30$\,\%, are found in ULIRGs. The widely varying H$_2$ gas fractions of the smm-galaxies greatly depend on the determination of the stellar masses from observations, which have large uncertainties.
\subsection{Gas velocity dispersion \label{sec:veldisp}} An important finding for many high-redshift star-forming disc galaxies is their high gas velocity dispersion. Values of $50$--$100$~km\,s$^{-1}$ (Genzel et al. 2006; Law et al. 2009; F\"orster Schreiber et al. 2009; Vergani et al. 2012; Tacconi et al. 2013) are frequently observed. Whether or not these high velocity dispersions are mandatory for the high-z star-forming galaxies remains to be elucidated. To find an initial answer to this question, we compare our preferred model, $Q \sim 1.5$ (Fig.~\ref{fig:plots_1_vturb}), to the $Q \sim 1$ model (Fig.~\ref{fig:plots_1_Q_vturb}). The upper panels show the comparison between the observed (H$\alpha$ and CO) and modelled gas velocity dispersions. Note that the observed and modelled galaxies within the different samples (local spiral galaxies, ULIRGs, smm, and high-z star-forming galaxies) are not the same objects. Clearly, the decrease of $Q$ at fixed star-formation rate from $Q \sim 1.5$ to $Q \sim 1$ has a strong effect on the gas velocity dispersions, which decrease by a factor of $2$ for the ULIRGs and smm-galaxies and by a factor of $1.6$ for the high-z star-forming galaxies. A lower $Q$ decreases the velocity dispersion, but also increases the star-formation rate at fixed gas surface density. Thus, at fixed star-formation rate, a lower $Q$ decreases both the gas surface density and the velocity dispersion. Our preferred model ($Q \sim 1.5$) better reproduces the observed velocity dispersions. A direct comparison of observed and modeled gas velocity dispersions for individual galaxies is presented in the lower panels of Figs.~\ref{fig:plots_1_vturb} and~\ref{fig:plots_1_Q_vturb}.
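For orientation, and assuming the standard definition of the gas Toomre parameter (the exact normalization used in the model may differ), the link between $Q$ and the turbulent velocity dispersion can be written as
\begin{equation}
Q = \frac{\sigma_{\rm v}\,\kappa}{\pi\,G\,\Sigma_{\rm gas}} \;\;\Rightarrow\;\; \sigma_{\rm v} \propto Q\,\frac{\Sigma_{\rm gas}}{\kappa}\ ,
\end{equation}
so that at fixed gas surface density and rotation curve the dispersion scales linearly with $Q$; the somewhat larger factors quoted above arise because, at fixed star-formation rate, the gas surface density adjusts as well.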
Within the preferred $Q \sim 1.5$ model, the gas velocity dispersions of two ULIRGs and three high-z star-forming galaxies exceed the observed values by approximately a factor of two. On the other hand, the gas velocity dispersions of four ULIRGs are approximately two times lower than the observed values. Given that the measured velocity dispersions of ULIRGs and high-z star-forming galaxies that are barely spatially resolved can be easily dominated by non-circular gas motions, one expects the model gas velocity dispersions to be systematically smaller than the observed velocity dispersions. This is only the case for the $Q \sim 1$ model.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_1_vturb.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_1_vturb1.ps}} \caption{Upper panel: model turbulent velocity dispersion as a function of star-formation surface density. The symbols are the same as in Fig.~\ref{fig:plots_HCNCO_CO}. In addition, filled triangles are the observed velocity dispersions from Downes \& Solomon (1998). Filled circles are observed H$\alpha$ velocity dispersions from Cresci et al. (2009). Lower panel shows observed CO velocity dispersion (Tacconi et al. 2013) as a function of the model velocity dispersion. The symbols are the same as in Fig.~\ref{fig:plots_HCNCO_CO}. The solid line corresponds to equality and the dotted lines to factors of $1/1.5$ and $1.5$. \label{fig:plots_1_vturb}} \end{figure}
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_1_Q_vturb.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_1_Q_vturb1.ps}} \caption{As for Fig.~\ref{fig:plots_1_vturb}, but with $Q=1$ for the ULIRG, smm, and high-z star-forming galaxies. \label{fig:plots_1_Q_vturb}} \end{figure}
We conclude that the preferred $Q \sim 1.5$ model better reproduces the high gas velocity dispersions observed in ULIRGs and high-z star-forming galaxies. However, if the observed linewidths are dominated by non-circular gas motions, the $Q \sim 1$ model is consistent with available observations. We recall that the HCN(1--0) emission of the $Q \sim 1$ model is somewhat smaller than that of the preferred model, leading to a poorer reproduction of the available HCN(1--0) observations (see Sect.~\ref{sec:Q}).
\subsection{Star-formation laws} Genzel et al. (2010) and Daddi et al. (2010) found a long-lasting star-formation mode for disk galaxies and a more rapid mode for starbursts. The two modes can be unified into a single star-formation law if the star-formation timescale with respect to the molecular gas ($t_{\rm SF}=M_{\rm H_2}/{\rm SFR}$) observed in CO emission is assumed to be proportional to the dynamical timescale, that is, the inverse of the angular velocity of the galaxy: $\dot{\Sigma}_* \propto \Sigma_{\rm mol} \Omega$. Our model directly yields the local star-formation rate, the H$_2$ gas surface density, and the angular velocity. The Kennicutt-Schmidt relation for the integrated star-formation rates and H$_2$ gas masses of our model is presented in Fig.~\ref{fig:KSlaw}.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{KSlaw.ps}} \caption{Model Kennicutt-Schmidt law: star-formation rate as a function of the H$_2$ gas mass. The solid/dotted/dashed lines are linear bisector fits to the local spiral/ULIRG+smm/high-z star-forming galaxy samples, respectively. \label{fig:KSlaw}} \end{figure}
The slopes of the relation for the local spiral galaxies and ULIRGs/smm-galaxies are approximately unity ($1.0$ and $1.2$).
We observe a difference of a factor of approximately $50$ between the star-formation efficiencies ${\rm SFE}_{\rm H_2}$ of the local spiral galaxies on the one hand and the ULIRGs/smm-galaxies on the other. The SFR--$M_{\rm H_2}$ relation of the high-z star-forming galaxies is intermediate between those of the local spiral galaxies and ULIRGs/smm-galaxies, with a slope of $1.7$. The Kennicutt-Schmidt relation with respect to the molecular gas surface density of our model galaxies is presented in the upper panel of Fig.~\ref{fig:plots_1_SFE}. For the area calculation, the stellar scalelength was adopted. The slope of the relation for the local spiral galaxies derived with the IDL routine robust\_linefit is $1.5$. The fitted slopes of the relations for ULIRGs and high-z star-forming galaxies are $1.5$ and $1.4$, respectively. The slope of the relation for smm-galaxies is $2$, and that of the combined ULIRG and smm-galaxy sample is $1.6$. We observe an offset of a factor of approximately 7 between the relations of the high-z star-forming and the local spiral galaxies.
\begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{plots_1_SFE.ps}} \resizebox{\hsize}{!}{\includegraphics{plots_1_SFE1.ps}} \caption{Upper panel: model star-formation rate surface density as a function of the model molecular gas surface density. The symbols are the same as in Fig.~\ref{fig:plots_HCNCO_CO}. The dotted lines are linear and square fits to guide the eye. The dashed lines are robust fits to the different galaxy samples. Lower panel shows the model star-formation rate per unit area as a function of the model molecular gas surface density multiplied by the angular velocity (divided by the dynamical timescale). The dashed line represents a robust bisector fit to the entire sample with its associated rms (dotted lines). \label{fig:plots_1_SFE}} \end{figure}
The model star-formation rate per unit area as a function of the model molecular gas surface density multiplied by the angular velocity (or divided by the dynamical timescale) $\Sigma_{\rm H_2} \Omega$ is presented in the lower panel of Fig.~\ref{fig:plots_1_SFE}. For the calculation of the angular velocity $\Omega=v_{\rm rot}/R$, we followed Daddi et al. (2010) and took the optical radius $R=R_{25} = 4.5 \times l_*$ for the local spiral galaxies (see also Kennicutt 1998) and the half-light radius $R=R_{\frac{1}{2}}$ for the ULIRGs, smm-galaxies, and high-z star-forming galaxies. The slope of this relation is $1.0$ for the entire sample. Compared to that of the local spiral galaxies, the molecular star-formation efficiency of the ULIRGs, smm, and high-z star-forming galaxies is approximately twice as high. With $R=R_{\frac{1}{2}}$ for the local spiral galaxies, this ratio would increase to a factor of $5$. We conclude that the model Kennicutt-Schmidt laws for the integrated H$_2$ masses and surface densities do not show the same slopes. The integrated Kennicutt-Schmidt law has a slope of approximately 1 for the local spirals, ULIRGs, and smm-galaxies, whereas the slope is $1.7$ for high-z star-forming galaxies. The model shows Kennicutt-Schmidt laws with respect to the molecular gas surface density with slopes of approximately $1.5$ for local spiral galaxies, ULIRGs, and high-z star-forming galaxies. The slope for the smm-galaxies is approximately $2$. The model star-formation rate per unit area is, as observed, approximately proportional to the molecular gas surface density divided by the dynamical timescale.
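The slopes quoted in this section come from robust and bisector linear fits in log--log space (IDL routine robust\_linefit). For readers without IDL, a minimal Python sketch of the ordinary-least-squares bisector slope (Isobe et al. 1990), applied here to made-up data rather than to our model galaxies, is:
\begin{verbatim}
import numpy as np

def ols_bisector(x, y):
    """Slope and intercept of the OLS bisector line (Isobe et al. 1990),
    a symmetric fit often used for scaling relations in log-log space."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm)**2)
    syy = np.sum((y - ym)**2)
    sxy = np.sum((x - xm) * (y - ym))
    b1 = sxy / sxx                     # OLS(Y|X) slope
    b2 = syy / sxy                     # OLS(X|Y) slope, expressed as dy/dx
    slope = (b1 * b2 - 1.0
             + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
    return slope, ym - slope * xm

# Toy integrated Kennicutt-Schmidt relation with made-up numbers:
log_MH2 = np.log10([1.0e9, 3.0e9, 1.0e10, 5.0e10])   # H2 masses [Msun]
log_SFR = np.log10([0.5, 2.0, 6.0, 40.0])             # SFRs [Msun/yr]
print(ols_bisector(log_MH2, log_SFR))
\end{verbatim}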
\section{Conclusions \label{sec:conclusions}} The theory of clumpy gas disks (Vollmer \& Beckert 2003) provides the large-scale and small-scale properties of galactic gas disks. Large-scale properties considered are the gas surface density, density, disk height, turbulent driving length scale, velocity dispersion, gas viscosity, volume filling factor, and molecular fraction. Small-scale properties are the mass, size, and density of the most massive self-gravitating gas clouds, together with their turbulent, free-fall, and molecular-formation timescales. These quantities depend on the stellar surface density, the angular velocity $\Omega$, the disk radius $R$, and three additional parameters, which are the Toomre parameter $Q$ of the gas, the mass accretion rate $\dot{M}$, and the ratio $\delta$ between the driving length scale of turbulence and the cloud size. The large-scale part of the model disk is governed by vertical pressure equilibrium, a constant Toomre $Q$ parameter, conservation of the turbulent energy flux, a relation between the gas viscosity and the gas surface density, a star-formation recipe (Sect.~\ref{sec:method}), and a simple closed-box model for the gas metallicity. The small-scale part distinguishes two regimes according to gas density: non-self-gravitating and self-gravitating gas clouds. The mass fraction at a given density is determined by a density probability distribution involving the overdensity and the Mach number (Sect.~\ref{sec:gasfrac}). Both density regimes are governed by different observed scaling relations (Sect.~\ref{sec:scaling}). The dense gas clouds are mechanically heated by turbulence. In addition, they are heated by cosmic rays. The gas temperature is calculated through the equilibrium between gas heating and cooling via molecular line emission (CO, H$_2$, H$_2$O; Sect.~\ref{sec:gdtemp}). The dust temperature is determined by the equilibrium between radiative heating and cooling and the heat transfer between gas and dust (Sect.~\ref{sec:dustemission}). The molecular line emission calculation is based on the escape probability formalism (Sect.~\ref{sec:lineemission}). An important ingredient for the line emission is the area-filling factor of the gas clouds, which is a result of the small-scale part of the analytic disk model. The molecular abundances of individual gas clouds are determined by a detailed chemical network involving the cloud lifetime, density, and temperature (Sect.~\ref{sec:network}). H$_2$ and CO dissociation in photodissociation regions are taken into account (Sect.~\ref{sec:dissociation}). Moreover, a simple formalism for HCN infrared pumping is applied to the HCN line emission (Sect.~\ref{sec:irpumping}). The stellar radiation field is constrained by the observed IR luminosity and SED. The normalization of the cosmic ray ionization rate is constrained by the observed HCO$^+$(1--0) emission. The density and temperature structure of the clumpy gas disk is constrained by the observed HCN(1--0) and multi-transition CO emission. This model is applied to samples of local spiral galaxies, ULIRGs, smm, and high-z star-forming galaxies (Sect.~\ref{sec:samples}).
Based on the comparison between the model results and observations available in the literature (see Table~\ref{tab:correl}) we conclude that \begin{enumerate} \item the following model quantities are consistent with observations: \begin{itemize} \item global metallicities (Fig.~\ref{fig:metallicities}), \item total infrared luminosities and dust SEDs (Fig.~\ref{fig:tirlum}, Appendix~\ref{sec:seds}), \item dust temperatures (Fig.~\ref{fig:tdust}), \item H{\sc i} masses and radial profiles of the local spiral galaxies (Figs.~\ref{fig:HImasses}, \ref{fig:HIprofiles}), \item CO luminosities (Fig.~\ref{fig:plots_HCNCO_CO}), \item HCO$^+$ luminosities of the ULIRGs, smm, and high-z star-forming galaxies (Fig.~\ref{fig:plots_HCNCO_HCO}), \item CO SLEDs up to $J=6$ (Fig.~\ref{fig:plots_HCNCO_SLED}), \item CO SLEDs up to $J=12$ for the ULIRGs (Fig.~\ref{fig:ulirg_ladder}), \end{itemize} \item the model HCN radial profiles are a factor of $1.5$--$2$ higher than the observed profiles (Fig.~\ref{fig:THINGS_profiles}), \item the model HCN luminosities are a factor of $1.5$ higher/lower than the observed luminosities for the local spiral galaxies/ULIRGs (Fig.~\ref{fig:plots_HCNCO_HCN}), \item the model HCO$^+$ luminosities of the local spiral galaxies are a factor of $\sim 3$ higher than the observed HCO$^+$ luminosities (Fig.~\ref{fig:plots_HCNCO_HCO}); the HCO$^+$ emission mainly depends on the CR ionization rate used in the chemical network, \item all model conversion factors (mass-to-light and SFR-to-light) have uncertainties of a factor of two, \item the model CO conversion factors deduced when including CO-dark H$_2$ are $\alpha_{\rm CO}=4.7 \pm 1.8,\ 1.7 \pm 0.4,\ 1.4 \pm 0.7,\ 2.6 \pm 0.9$~M$_{\odot}({\rm K\,km\,s}^{-1}{\rm pc}^2)^{-1}$ for the local spirals, ULIRGs, smm, and high-z star-forming galaxies (Fig.~\ref{fig:plots_HCNCO_alphaCO}); the model CO conversion factor of the ULIRGs is a factor of two higher than the value derived by Downes \& Solomon (1998), while the CO conversion factor of the high-z star-forming galaxies is a factor of two lower than that assumed by Genzel et al. (2010) and Tacconi et al. (2010), \item the model HCN-dense gas conversion factor is $\alpha_{\rm HCN}=21 \pm 6$~M$_{\odot}({\rm K\,km\,s}^{-1}{\rm pc}^2)^{-1}$ for the local spiral galaxies and ULIRGs; this is a factor of two higher than the value used in the literature (e.g., Gao \& Solomon 2004); the model HCN-dense gas conversion factor is $33 \pm 17$~M$_{\odot}({\rm K\,km\,s}^{-1}{\rm pc}^2)^{-1}$ for the smm-galaxies, and $59 \pm 21$~M$_{\odot}({\rm K\,km\,s}^{-1}{\rm pc}^2)^{-1}$ for the high-z star-forming galaxies, \item both the HCN and HCO$^+$ emission trace the dense molecular gas to within a factor of approximately $2$ for the local spiral galaxies, ULIRGs, and smm-galaxies. \end{enumerate} We tested the influence of constant abundances (Sect.~\ref{sec:constabund}), of the Toomre $Q$ parameter ($Q=1$ instead of $Q=1.5$; Sect.~\ref{sec:Q}), and of the scale parameter $\delta$ ($\delta=15$ instead of $\delta=5$; Sect.~\ref{sec:delta}). The changes in molecular emission are minor ($< 0.2$~dex). The $Q=1$ and $\delta=15$ models overestimate the CO SLEDs of the ULIRGs. The $Q=1$ model yields a lower HCN(1--0) emission than the $Q=1.5$ model. Since the $Q=1.5$ model already underestimates the HCN(1--0) luminosity with respect to observations, the $Q=1.5$ model is our preferred model.
Whereas the CO emission is robust against the variation of model parameters and chemistry, the HCN and HCO$^+$ emission is most sensitive to the chemistry of the interstellar medium. Within the model framework, $\sim 60$\,\% of the CO(1--0) emission of local spiral galaxies and high-z star-forming galaxies is emitted in non-self-gravitating clouds. This fraction increases to $\sim 80$\,\% for ULIRGs. Whereas $\sim 80$\,\% of the HCN(1--0) and HCO$^+$ emission of ULIRGs and smm-galaxies originates in non-self-gravitating clouds, this fraction decreases to $\sim 30$\,\% for local spirals and high-z star-forming galaxies (Sect.~\ref{sec:nonself}). The resulting CO, HCN, and HCO$^+$ line emission does not change significantly if cloud substructure is not taken into account (Sect.~\ref{sec:substruct}). Ignoring the cosmic ray heating (Sect.~\ref{sec:CRheat}) leads to low HCN(1--0) and HCO$^+$(1--0) emission from ULIRGs and high-z star-forming galaxies. The effect of HCN infrared pumping is small but measurable ($20$--$30$\,\%; Sect.~\ref{sec:IRpumping}). The gas velocity dispersion varies significantly with the Toomre $Q$ parameter. The $Q=1.5$ model yields high velocity dispersions ($v_{\rm disp} \gg 10$~km\,s$^{-1}$) consistent with available observations of high-z star-forming galaxies and ULIRGs (Fig.~\ref{fig:plots_1_vturb}). However, we note that these high velocity dispersions may not be mandatory for starburst galaxies (Fig.~\ref{fig:plots_1_Q_vturb}). The model yields molecular star-formation laws ($\dot{\Sigma}_*$--$\Sigma_{\rm H_2}$) with slopes of approximately $1.5$ for the local spiral galaxies, ULIRGs, and high-z star-forming galaxies, and approximately $2$ for the smm-galaxies. The model star-formation rate per unit area is, as observed, proportional to the molecular gas surface density divided by the dynamical timescale (lower panel of Fig.~\ref{fig:plots_1_SFE}). There is a pronounced offset between the $\dot{\Sigma}_*$--$\Sigma_{\rm H_2} \Omega$ relations of the local spirals on the one hand, and ULIRGs, smm, and high-z star-forming galaxies on the other. We conclude that our relatively simple analytic model (Sect.~\ref{sec:model}), together with the recipes for the molecular line emission (Fig.~\ref{fig:bild1}), captures the essential physics of galactic clumpy gas disks.
\begin{acknowledgements} We would like to thank the anonymous referee for his/her suggestion to include the IR radiative transfer into our model and for improving the manuscript significantly. PG acknowledges funding by the European Research Council (Starting Grant 3DICE, grant agreement 336474, PI: V. Wakelam). This research has made use of the SIMBAD database and the VizieR catalogue access tool, operated at CDS, Strasbourg, France. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \end{acknowledgements}
\section{Introduction} \label{sec:introduction} Do Fanaroff-Riley type II (FRII; Fanaroff \& Riley, 1974) radio galaxies remain over-pressured with respect to the external medium throughout their entire active life, or do they at some point reach pressure equilibrium with their environment? The answer to this question has potentially far-reaching implications for the understanding of radio galaxy dynamics and energetics. If the lobes remain over-pressured throughout the entire life of the source, the source is always surrounded by an elliptical bow shock and the lobes undergo supersonic self-similar expansion \citep{ka97}. This self-similar scenario is typically assumed when modelling populations of FRII radio galaxies \citep[for example][to name a few]{blundell99, wang08, kapinska12}. However, some studies now point to a more complex situation, in which the lobes start out being highly over-pressured, but reach pressure equilibrium with their environment well before the jet activity comes to an end. Estimates of internal lobe pressures based on the assumption of minimum energy \citep{hardcastle00} as well as those based on inverse-Compton measurements \citep{hardcastle02, croston04} are found to be comparable to the external pressures, assuming no large contribution to the lobe pressure from thermal material within the lobes. Furthermore, \citet{mullin08} find a trend of increasing axial ratio with source size in a flux-limited sample of FRIIs with $z<1$. If the lobes remained over-pressured throughout their lifetime, undergoing self-similar expansion, then the axial ratio would be expected to remain constant. The observed size-dependent axial ratio distribution demonstrated by \citet{mullin08} is therefore inconsistent with models in which the expansion remains self-similar. \citet{hardcastle00} and \citet{mullin08} suggest that FRII radio galaxies may only grow self-similarly early on in their lifetime, and reach pressure equilibrium with their surroundings in middle-age. One way to discriminate between these two scenarios is to consider the remnant phase of radio galaxy evolution. If the lobes remain highly over-pressured throughout their lifetime, then the remnant phase will be governed by supersonic Sedov-like expansion \citep{kaiser02}. This continued supersonic lobe expansion in the remnant phase will cause rapid dimming of the lobe radio emission, because adiabatic expansion losses decrease both the magnetic field strength and the particle energies. In contrast, if the lobes are already in pressure equilibrium with the external medium at the end of the active phase, the luminosity evolution in the remnant phase is expected to be much more sedate. \citet{kaiser02} showed that models of remnant radio galaxy spectra are highly degenerate, and modelling of individual remnant radio galaxy spectra cannot constrain the history of lobe evolution. The only way to constrain the lobe evolution in the remnant phase is via a statistical approach. That is the approach taken in this paper. We compose a flux-limited sample of radio sources that is dominated by FRII radio galaxies (Section \ref{sec:sample_selection}). With this sample, we obtain an upper limit on the number of ultra-steep spectrum remnant radio galaxies in a low-frequency selected, flux-limited sample of FRII radio galaxies (Section \ref{sec:empirical_results}).
We then perform Monte-Carlo simulations to assess whether models of remnant phase lobe evolution are consistent with the observed limit on ultra-steep spectrum remnants in our flux limited sample (Sections \ref{sec:simulations} and \ref{sec:modeling_results}). We make a strong distinction between the population of ultra-steep spectrum remnants, which we define as remnant radio galaxies with spectral index $\alpha > 1.2$ (defined such that $S_\nu \propto \nu^{-\alpha}$) between our chosen frequencies, and the entire population of remnant radio galaxies, which includes any remnant radio galaxy regardless of spectral characteristics. This distinction is necessary because not all remnant radio galaxies have ultra-steep spectra in the observed frequency range. A good example of this is the remnant radio galaxy discovered by \citet{brienza16}, which was identified purely based on morphological characteristics, and only shows an ultra-steep spectrum above 1.4 GHz. In this work, we restrict our analysis to FRII radio galaxies. A follow-up paper (Brienza et al., 2017 submitted) will focus on the study of lower luminosity, FRI class remnant radio galaxies.
\section{Sample selection} \label{sec:sample_selection} \subsection{Sample Definition} \label{sec:VLSSr_sample_definition} Our sample selection is based on the 74 MHz VLA Low-frequency Sky Survey Redux catalogue \citep[VLSSr;][]{lane14}. We calculate the flux density of sources in the VLSSr catalogue from the catalogued peak intensity and fitted major and minor axes, based on the expressions given in \citet{condon97, cohen07}. We then restrict the VLSSr catalogue as follows: \begin{enumerate} \item 9 hrs $<$ RA $<$ 16 hrs \item $0^{\circ} <$ DEC $< 60^{\circ}$ \item Distance to nearest neighbour $D > 4$ arcminutes. \item Fitted major axis $<$ 120 arcseconds. \item Flux density $S_{\rm 74~MHz} > 1.5$~Jy \end{enumerate} The reason for each of the restrictions is as follows: (i and ii) The restrictions on RA and DEC are imposed in order to match the sky area of the FIRST survey, so that we can assess the morphology of the selected objects. (iii) The median distance to the nearest neighbour in the VLSSr catalogue is approximately 15 arcminutes. However, a histogram of distance to nearest neighbour shows a clear peak in the range 1--4 arcminutes. The narrow peak at 1--4 arcminutes is a result of radio galaxies with large angular size and complex morphology, for which individual radio galaxies are fitted by more than one Gaussian component. To simplify the cross-matching with NVSS, we restrict our sample to ``isolated'' VLSSr sources, for which the distance to nearest neighbour is greater than 4 arcminutes. This selection criterion reduces the catalogue size by 8$\%$. However, for sources with nearest neighbour $<$ 4 arcminutes, several catalogued sources are often related to the same radio galaxy, and so the reduction in the number of radio galaxies is likely to be significantly less than 8$\%$. (iv) We next remove all objects for which the fitted major axis size is equal to 120 arcseconds, which is the upper limit on the fitted major axis \citep{lane14}. For these objects, we are unable to accurately calculate the flux density based on the catalogued values of peak intensity, major and minor axes. This selection criterion reduces the catalogue size by a further 8$\%$ (1237 objects).
These very large angular size sources would provide an interesting sample for searches for remnant radio galaxies, as they most likely represent very large, relatively nearby radio galaxies. However, the need to obtain accurate flux density estimates from the catalogued values of peak intensity, major axis, and minor axis necessitates the removal of these large sources from our analysis. (v) Finally, we impose a flux density limit of $S_{\rm 74~MHz} > 1.5$~Jy, which corresponds to the knee in the source-counts distribution \citep[e.g.][]{massardi10}. With this flux limit, the sample is expected to be strongly dominated by high-excitation (predominantly FRII) radio galaxies, with an FRII fraction of up to 80$\%$ \citep{willott01}. We note that the S-cubed simulation of \citet{wilman08} predicts that our sample will comprise 55$\%$ FRII, 5$\%$ Gigahertz Peaked Spectrum (GPS), and 40$\%$ FRI\footnote{The lowest frequency in the S-cubed database is 151 MHz, so we converted our 74 MHz flux limit to 151 MHz assuming a spectral index of 0.75.}. The morphologies of our sample agree with the prediction of the S-cubed simulation: $\gtrsim 50\%$ of our sample have a double-lobed FRII-like appearance. Whilst the sample contains only $\gtrsim 50\%$ FRII radio galaxies, we account for this factor in our analysis of the FRII remnant population, as described in Section \ref{sec:FIRST_morphologies_entire_sample}. The flux limit further reduces the sample size by 72$\%$, giving a final sample size of 3861 objects.
\subsection{Cross-matching with NVSS} We cross-match our 74 MHz flux-limited sample described in Section \ref{sec:VLSSr_sample_definition} with the NVSS catalogue at 1.4 GHz \citep{condon98}. Due to the higher frequency and higher resolution of the NVSS relative to the VLSSr, individual catalogue entries in the VLSSr may be associated with multiple catalogue entries in the NVSS. For this reason, we must use a large matching radius. Given that the largest fitted major axis in the VLSSr sample is 120 arcseconds, we use a matching radius of 60 arcseconds, and sum the flux densities of all NVSS catalogue entries within the matching radius. The results of our cross-matching are summarised in Table \ref{table:cross_match_results}. The probability of finding an NVSS source within 60 arcseconds of a random position is approximately 4$\%$ \citep{condon98}. We therefore expect that approximately 4$\%$ of our sample ($\sim 150$ objects) will contain an unrelated NVSS source within the search radius of 60 arcseconds. However, these unrelated sources are clustered near the NVSS flux limit (2.5 mJy), while the majority of our sample have NVSS flux densities more than 10 times higher than the flux limit (see Figure \ref{fig:spectral_index_distribution}). Therefore, only a very small fraction of our sample will be significantly affected by the detection of an unrelated NVSS object within the search radius. The differing survey resolution is not likely to cause significant errors in spectral index calculation. Our sample is restricted to sources with fitted major axes less than 120 arcseconds in the VLSSr catalogue, a factor of only 2.7 times the NVSS angular resolution of 45 arcseconds. Furthermore, our sample has a high signal-to-noise ratio in the NVSS survey, and therefore the catalogued flux densities are unlikely to be missing flux.
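A minimal Python sketch of this cross-matching and spectral-index calculation, using astropy's catalogue-matching utilities, is given below; the array names, units, and input values are placeholders rather than our actual catalogue columns.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord, search_around_sky

def nvss_match_and_alpha(vlssr_ra, vlssr_dec, s74,
                         nvss_ra, nvss_dec, s1400,
                         radius=60 * u.arcsec):
    """Sum the NVSS flux density within the matching radius of each VLSSr
    source and return the 74-1400 MHz spectral index (S_nu ~ nu^-alpha).
    Coordinates in degrees; flux densities in the same (arbitrary) unit."""
    vlssr = SkyCoord(vlssr_ra, vlssr_dec, unit='deg')
    nvss = SkyCoord(nvss_ra, nvss_dec, unit='deg')
    idx_v, idx_n, _, _ = search_around_sky(vlssr, nvss, radius)
    s_nvss = np.zeros(len(vlssr))
    np.add.at(s_nvss, idx_v, np.asarray(s1400, dtype=float)[idx_n])
    with np.errstate(divide='ignore', invalid='ignore'):
        alpha = np.log10(np.asarray(s74, dtype=float) / s_nvss) \
                / np.log10(1400.0 / 74.0)
    return s_nvss, alpha   # s_nvss == 0 marks NVSS non-detections
\end{verbatim}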
\begin{table} \caption{Results of cross-matching our VLSSr selected sample with NVSS.} \label{table:cross_match_results} \begin{tabular}{cc} \hline Number of NVSS matches & Number of VLSSr sources \\ \hline 0 & 4 \\ 1 & 3466 \\ 2 & 382 \\ 3 & 9 \\ $>3$ & 0 \\ \hline Total & 3861 \\ \hline \end{tabular} \end{table}
\begin{table*} \caption{Results of cross-matching our VLSSr sample with the FRII candidate sample of \citet{vanvelzen15}.} \label{table:cross_match_results_dubbeltjes} \begin{tabular}{ccc} \hline & Entire VLSSr selected Sample & Ultra-steep Spectrum Sample \\ & & ($\alpha_{\rm VLSSr}^{\rm NVSS} > 1.2$) \\ \hline Sample Size & 3861 & 57 \\ Fraction matched with FRII candidates & 65$\%$ (2498/3861) & 45$\%$ (26/57) \\ Fraction of FRII candidates with detected core & 14$\%$ (348/2498) & 23$\%$ (6/26) \\ \hline \end{tabular} \end{table*}
\begin{figure} \includegraphics[width=1.0\columnwidth]{flux_and_alpha_histograms.pdf} \caption{Histograms of spectral index and flux densities for our VLSSr selected sample. The histogram of spectral index clearly consists of a broad central peak, along with a flat spectrum tail and ultra-steep spectrum tail. The two vertical lines in the upper panel denote our division of the sample into flat spectrum ($\alpha < 0.45$), normal spectrum ($0.45 < \alpha < 1.2$), and ultra-steep spectrum ($\alpha > 1.2$). Note that 4 objects are not detected in NVSS, and are not included in the above histograms.} \label{fig:spectral_index_distribution} \end{figure}
\section{Empirical Results} \label{sec:empirical_results} In Figure \ref{fig:spectral_index_distribution} we present the histogram of spectral index between 74 MHz and 1400 MHz for our sample, as well as the histograms of the 74 MHz and 1400 MHz flux densities. The spectral index distribution shows a broad, symmetric central peak, along with a flat spectrum tail and an ultra-steep spectrum tail. We wish to place an upper limit on the fraction of our sample that are ultra-steep spectrum remnant radio galaxies. To do so, we split our sample into three spectral categories: flat ($\alpha < 0.45$); normal ($0.45 < \alpha < 1.2$); and ultra-steep ($\alpha > 1.2$). The choice of the dividing line between flat, normal and ultra-steep spectrum is somewhat arbitrary, but is guided by the shape of the histogram in Figure \ref{fig:spectral_index_distribution} and by models of radio source emission. The ultra-steep spectrum limit at $\alpha = 1.2$ corresponds to the maximum spectral index for active radio galaxy models with particle injection corresponding to $\alpha < 0.7$ and a cooling break of $\Delta \alpha = 0.5$. We stress that our choice of dividing line at $\alpha = 1.2$ is not intended to capture all remnant radio galaxies. Indeed, we expect that most remnant radio galaxies above our flux limit will not have ultra-steep spectra according to our definition. However, the key results are not strongly affected by our choice of dividing line, since our modelling approach is designed specifically to accommodate arbitrary selection criteria. Flat spectrum objects comprise $2\%$ of the sample (78 objects), ``normal'' spectrum objects comprise 96.5$\%$ of our sample (3726 objects), and ultra-steep spectrum objects comprise only $1.5\%$ of the sample (57 objects). Note, however, that our sample comprises both FRI and FRII radio galaxies. In this work, we seek an upper limit on the fraction of \emph{FRII radio galaxies} that are ultra-steep spectrum remnants.
We therefore need to account for the fraction of the sample that is FRII. We consider this question in the following sections. \begin{figure*} \includegraphics[width=2.0\columnwidth]{all_normal_spectrum_images_together_reduced_file_size.pdf} \caption{1.4 GHz radio images from the FIRST survey of 55 randomly selected sources in our VLSSr selected sample with normal spectral index values ($0.45 < \alpha_{\rm VLSS}^{\rm NVSS} < 1.2$). The images are 2 x 2 arcminutes.} \label{fig:first_morphologies_entire_sample} \end{figure*} \begin{figure*} \includegraphics[width=2.0\columnwidth]{all_steep_spectrum_images_together_reduced_file_size.pdf} \caption{1.4 GHz radio images from the FIRST survey of the 53 VLSSr selected sources detected in NVSS with ultra-steep spectra ($\alpha_{\rm VLSS}^{\rm NVSS} > 1.2$). Many of the ultra-steep spectrum sources show evidence of active cores, and may represent re-started radio galaxies. Many are unresolved, and may be associated with variable AGN cores. The images are 2 x 2 arcminutes.} \label{fig:first_morphologies_steep_spectrum_sample} \end{figure*} \subsection{Morphology of the S$_{\rm 74~MHz} >$ 1.5~Jy sample in FIRST} \label{sec:FIRST_morphologies_entire_sample} Our flux density limit of 1.5 Jy at 74 MHz corresponds to the knee in the source counts distribution. This flux limit is chosen to provide a large sample that is dominated by high excitation (predominantly FRII) radio galaxies. To assess the dominant morphology of the sample, we compare our sample to that of \citet{vanvelzen15}, who compiled a sample of 59,192 candidate FRII radio sources from the FIRST survey catalogue, using an algorithm that selected double, triple, or multiple sources with lobe-lobe separation of less than 60 arcseconds. This sample of ``dubbeltjes" (small doubles) is considered to be a relatively clean sample of FRII radio galaxies on these angular scales. We cross-match our VLSSr flux limited sample with the dubbeltjes sample of \citet{vanvelzen15}, using a matching radius of 30 arcseconds ($\lesssim 5$ expected random associations), and find 65$\%$ of our sample (2498 objects) are matched to candidate FRII radio galaxies in the catalogue of \citet{vanvelzen15}. Not all of the \citet{vanvelzen15} sample are confirmed FRIIs. However, the selection of FRIIs in \citet{vanvelzen15} only includes radio galaxies up to 1 arcminute in angular size. Our sample includes sources with de-convolved major axes up to approximately 100 arcseconds. We therefore visually inspected a random sample of FIRST images centred on the VLSSr coordinates, and find that more than 50$\%$ have a double-lobed FRII-like appearance, while 10$\%$ are unresolved at the FIRST resolution (see Figure \ref{fig:first_morphologies_entire_sample}). This random sample is drawn from our VLSSr flux limited sample, and includes sources that are within the van Velzen sample, as well as some that are not in that sample. Based on the preceding discussion, we can be confident that our $S_{\rm 74~ MHz} > 1.5$~Jy sample is dominated by FRII radio galaxies, as expected. We therefore place a robust lower limit on the number of FRII radio galaxies in our sample: $N_{\rm FRII} > 0.5 \times 3861 > 1930$ objects. \subsection{A limit on the fraction of FRIIs in our sample that are ultra-steep spectrum remnants} \label{sec:empirical_limit_on_remnant_fraction} The total number of ultra-steep spectrum ($\alpha_{\rm 74~MHz}^{\rm 1400~MHz} > 1.2$) sources in our sample is 57 (53 detections + 4 non-detections in NVSS). 
Therefore, as an absolute upper limit, fewer than 3$\%$ (57/1930) of FRII radio galaxies in our flux limited sample are ultra-steep spectrum remnants. Of course, not all ultra-steep spectrum radio sources are remnant radio galaxies, and therefore we can place a more stringent upper limit on the fraction of remnant FRIIs in our sample by considering what fraction of the ultra-steep spectrum objects in our sample are active. We do this initially by considering the FIRST morphologies of the objects in the ultra-steep spectrum sample. Of the 53 ultra-steep spectrum sources detected in NVSS, 10 are point-like in FIRST and may be high redshift radio galaxies, or core dominated systems in which the spectral index has been influenced by variable flux between the dates of observation in the VLSS and NVSS surveys. A further 6 objects are triple systems, exhibiting two lobes plus an AGN core, indicating that these systems are still active (although some may be re-started systems, with remnant lobes and a re-born AGN). After removing the point-like and triple systems from the ultra-steep spectrum sample, at most 41 objects remain as candidate remnant radio galaxies, indicating that fewer than 2$\%$ of the FRIIs in our sample are candidate remnant radio galaxies. The large core-detection fraction in the ultra-steep spectrum sample suggests that many of the ultra-steep spectrum objects without a core detection would have a detected core if we were to follow them up with sensitive high resolution observations. Given the large fraction of triples detected in our ultra-steep spectrum sample, the true number of ultra-steep spectrum FRII remnants in our sample is therefore likely to be well below 41. Out of the 2498 FRII candidates from \citet{vanvelzen15} in our VLSSr sample, only 14$\%$ have a detected core. In comparison, of the 26 FRII candidates from \citet{vanvelzen15} in our ultra-steep spectrum sample, 23$\%$ have a detected core. Extrapolating from the 23$\%$ of ultra-steep sources with detected cores, it is conceivable that most of the ultra-steep spectrum objects in this flux limited sample are active, and would be found to have core emission if they were followed up with more sensitive high resolution observations. We note that the higher core fraction in the ultra-steep spectrum dubbeltjes from \citet{vanvelzen15} could arise if the ultra-steep spectrum dubbeltjes tend to be larger angular size objects, enabling core detection in a larger number of cases. Some of the ultra-steep spectrum triple sources may be restarted objects (e.g. double-doubles) or high redshift radio galaxies, and a follow-up study of this sample of ultra-steep spectrum triples is warranted, particularly given that they are objects of significantly smaller angular size than most known double-doubles. In summary, fewer than 2$\%$ of the FRII radio galaxies in our 74~MHz selected flux limited sample are candidate ultra-steep spectrum remnants. Due to the large core-detection fraction in our ultra-steep spectrum sample, we expect that many of the candidate remnants are in fact active, so that the true ultra-steep remnant fraction is likely to be significantly less than 2$\%$. Follow-up radio observations at higher resolution are required to confirm the nature of the ultra-steep spectrum objects and to determine which, if any, are indeed remnant FRII radio galaxies.
\section{Simulating the population of active and remnant FRII radio galaxies} \label{sec:simulations} The second part of our study involves simulating the flux limited sample of FRII radio galaxies, including both the active and remnant phase, using an accurate model of spectral evolution. This Monte-Carlo approach allows us to compare the models with the empirical results using the exact same selection criteria. It also allows us, for the benefit of future studies, to investigate the efficiency of alternative selection criteria such as spectral curvature, high frequency spectra, and redshift limits. The degeneracy involved in modelling the radio spectrum of an individual remnant radio galaxy at one point in its life \citep{kaiser02} can be broken by observing many sources at different stages of their life, and modelling their spectral distributions. In this section we present the relevant model equations used to construct the time dependent radio galaxy spectra, and we describe the procedure used to generate the simulated catalogues. \subsection{Flux density calculation assuming non-uniform magnetic field strength} In order to accurately simulate a population of radio galaxies including remnants, we must accurately trace the spectral evolution in both the active and remnant phases. \citet{tribble91, tribble93} showed that a non-uniform distribution of magnetic field strengths gives rise to aged synchrotron spectra that differ significantly from those obtained under the assumption of a uniform magnetic field strength. We therefore follow \citet{tribble91, tribble93} and \citet{hardcastle13a}, and calculate the synchrotron spectra assuming that within each volume element of the lobes the magnetic field is distributed according to a Gaussian-random field. That is, at each point within a volume element, the Cartesian components of the field are each drawn independently from a Gaussian distribution with zero mean. This magnetic field configuration is expected to arise from homogeneous, isotropic turbulence \citep{tribble91, hardcastle13a}. We further assume that each volume element within the lobes has the same magnetic field distribution, that the electron energy distribution is independent of the local magnetic field strength, and that the particle pitch angle distribution is isotropic. With these assumptions, we can calculate the synchrotron flux density as \begin{equation} \label{eqn:S_nu} S_\nu = \frac{(1+z)}{D_L^2} \frac{\sqrt{3} e^3 }{16 \pi^2 \epsilon_0 c m_e} \int_{\gamma_{\rm min}}^{\gamma_{\rm max}} N(\gamma) \left[ \int_0^\infty B p_B \bar{F} (y) dB \right]d\gamma \end{equation} where $N(\gamma) = \frac{dN}{d\gamma}$ is the volume integrated electron energy distribution, $\bar{F}(y)$ is the angle-averaged synchrotron function (see Section \ref{app:F_bar_y}), $p_B$ is the probability distribution for the magnetic field strength, $\epsilon_0$ is the permittivity of free space, $c$ is the speed of light, $m_e$ is the electron mass, and $e$ is the electron charge.
For a Gaussian-random field, the probability distribution for the magnetic field strength $p_B$ is the Maxwell-Boltzmann distribution \citep[][]{hardcastle13a}: \begin{eqnarray} p_B = \sqrt{\frac{2}{\pi}} \frac{B^2 \exp(-B^2/2a^2)}{a^3} \end{eqnarray} where \begin{eqnarray} a = \frac{B_0}{\sqrt{3}} \end{eqnarray} and $B_0$ specifies the mean magnetic energy density, defined such that \begin{eqnarray} \int B^2 p_B dB = B_0^2 \end{eqnarray} \subsection{Radio source evolution} \label{sec:radio_source_evolution} In the current work, we are focused on FRII radio galaxies, and therefore we assume that during the active phase the radio sources evolve according to the self-similar dynamical model of \citet{ka97}. During the remnant phase, the lobe evolution is not well understood; however, the true lobe expansion is likely to be bounded by two extreme cases of (i) maximal energy driven expansion and (ii) no expansion. For the maximal expansion rate of case (i) we consider the Sedov-like expansion described by \citet{kaiser02}. Sedov-like expansion refers to the situation in which the bow shock surrounding the lobes is driven by the adiabatic expansion of the lobes. This provides a solution that is similar to, but not entirely analogous to, the Sedov solution for a point explosion, and is therefore referred to by Kaiser and Cotter as ``Sedov-like''. In order to apply the models of \citet{ka97} and \citet{kaiser02}, we must assume that the radio lobes expand into a power-law radial density profile, in which the ambient density, $\rho$, scales with the radial distance, $r$, according to \begin{equation} \rho \propto r^{-\beta} \end{equation} \subsubsection{Case (i), Sedov-like expansion in the remnant phase} In our model, for case (i), the evolution of the radio source volume is a piece-wise power law, with: \begin{equation} \label{eqn:V_evolution} V(t) \propto\begin{cases} t^{9/(5 - \beta)}, & \text{if } t<t_{\rm on}.\\ t^{6/(6-\beta)}, & \text{if } t>t_{\rm on}. \end{cases} \end{equation} where $t_{\rm on}$ is the length of the active phase. Our assumption of piece-wise power-law volume evolution means that the integrated electron energy distribution $N(\gamma)$ does not depend on the absolute value of the volume, only the exponents of the time evolution. For this reason, we have purposefully left the above expression as a proportionality. We assume that the magnetic field is isotropically distributed (``tangled'') on all scales, and therefore can be treated as a magnetic ``fluid'' with adiabatic index $\Gamma_B = 4/3$ \citep{leahy91}. In this case, the magnetic field evolves according to \begin{equation} \label{eqn:B_evolution} B(t)=\begin{cases} B_0 \left( \frac{t}{t_0} \right)^{\frac{-4-\beta}{2(5 - \beta)}} \left( \frac{Q}{Q_0} \right)^{\frac{2-\beta}{2(5 - \beta)}} , & \text{if } t<t_{\rm on}.\\ B(t_{\rm on}) \left( \frac{t}{t_{\rm on}} \right)^{\frac{-4}{(6-\beta)}}, & \text{if } t>t_{\rm on}. \end{cases} \end{equation} \citep{ka97, kaiser02}, where $Q$ is the jet power and $Q_0 = 10^{39}$~W is a normalisation constant. We note that for $\beta \approx 1.5 - 2$, the dependence of magnetic field strength on jet power is extremely weak, and therefore $Q_0$ does not strongly affect the results. We assume that the particle energy distribution injected into the lobes is a power-law, such that \begin{equation} \label{eqn:injection_spectrum} \frac{dN}{d\gamma_i dt_i} =\begin{cases} q_0 \gamma_i^{-a} , & \text{if } t<t_{\rm on} \text{ and } \gamma_{\rm i, min} < \gamma_i < \gamma_{\rm i, max}.
\\ 0, & \text{otherwise } \end{cases} \end{equation} where $q_0$ is proportional to the jet power, and a number of other assumed model parameters, as described in Section \ref{sec:sigma_and_jet_power}. \subsubsection{Case (ii), no expansion in the remnant phase} The only difference in our model for case (i) and case (ii) is in equations \ref{eqn:V_evolution} and \ref{eqn:B_evolution} at $t>t_{\rm on}$. For completeness, they are defined below: \begin{equation} V(t) \propto\begin{cases} t^{9/(5 - \beta)}, & \text{if } t<t_{\rm on}.\\ \mbox{constant}, & \text{if } t>t_{\rm on}. \end{cases} \end{equation} and \begin{equation} B(t)=\begin{cases} B_0 \left( \frac{t}{t_0} \right)^{\frac{-4-\beta}{2(5 - \beta)}} \left( \frac{Q}{Q_0} \right)^{\frac{2-\beta}{2(5 - \beta)}} , & \text{if } t<t_{\rm on}.\\ \mbox{constant}, & \text{if } t>t_{\rm on}. \end{cases} \end{equation} \subsection{Volume integrated electron energy distribution} Equations \ref{eqn:V_evolution}, \ref{eqn:B_evolution} and \ref{eqn:injection_spectrum} completely define our radio source model. The volume integrated electron energy distribution can be obtained from the completely general solution to the continuity equation (see Appendix \ref{app:vol_integrated_N_gamma}) \begin{equation} \label{eqn:dN_dgamma} \frac{dN}{d\gamma}\left( \gamma, t \right) = q_0 \gamma^{-a} \int_{t_{\rm i, min}}^{t_{\rm i, max}} \left( \frac{V(t_i)}{V(t)} \right)^{(a-1)/3} \left( 1 - \frac{\gamma}{\gamma_*(t_i, t)} \right)^{a-2} d t_i \end{equation} where $t_{\rm i, max} = t$ for active sources, and $t_{\rm i, max} = t_{\rm on}$ for remnant sources. \begin{equation} \frac{1}{\gamma_*(t_i, t)} = \int_{t_i}^{t} a_0 \left( \frac{V(\tau)}{V(t)} \right)^{-1/3} \left( \frac{B^2(\tau) + B^2_{\rm CMB}}{2 \mu_0} \right) d\tau, \end{equation} where $B_{\rm CMB} = 0.325 (1+z)^2$~nT is the equivalent magnetic field strength of the CMB, and the integration limit $t_{\rm i, min}$ is given by \begin{equation} t_{\rm i, min} = \mbox{MAX}(0, t_{\rm i, min}^*). \end{equation} The parameter $t_{\rm i, min}^*$ corresponds to the injection time at which a particle injected with Lorentz factor $\gamma_{\rm i, max}$ will have cooled to Lorentz factor $\gamma$ at time $t$, and is given by the solution to the equation \begin{equation} \frac{1}{\gamma} = \frac{1}{\gamma_{\rm i, max} \left( \frac{V(t_{\rm i, min}^*)}{V(t)} \right)^{1/3}} + \frac{1}{\gamma_*(t_{\rm i, min}^*, t)} \end{equation} where $\gamma_{\rm i, max}$ is the maximum electron Lorentz factor of the particle distribution injected into the lobes (see Appendix \ref{app:vol_integrated_N_gamma}). \subsection{Simulation Approach} \label{sec:simulation_approach} To generate a mock catalogue of radio galaxies, several of the model parameters are sampled from probability distributions, while others remain fixed. Each of the free parameters and the relevant distributions are discussed below. \subsubsection{Redshift} Redshifts are sampled from a probability distribution given by \begin{equation} p(z) \propto \rho(z) \frac{dV}{dz} \end{equation} where $\rho(z)$ is the volume density of radio galaxies as a function of redshift and $\frac{dV}{dz}$ is the differential comoving volume element for a spherical shell. For high power FRII radio galaxies, the volume density $\rho(z)$ is typically taken to be a Gaussian \citep{blundell99, willott01, grimes04}. 
We assume \begin{equation} \rho(z) \propto \exp \left[ -\frac{1}{2}\left( \frac{z - z_{h0}}{z_{h1}} \right)^2 \right] \end{equation} with $z_{h0} = 1.95$ and $z_{h1} = 0.55$ \citep{grimes04}. The comoving volume element for a flat Universe ($\Omega_k = 0$) is \begin{equation} \frac{dV}{dz} \propto \frac{(1+z)^2 D_A^2}{\sqrt{\Omega_M (1+z)^3 + \Omega_\Lambda}} \end{equation} \citep{hogg99}. \subsubsection{Radial density profile exponent, $\beta$} As discussed, we assume that our mock radio galaxies expand into a power-law density profile with $\rho_{\rm ext} \propto r^{-\beta}$. We assume that the exponent $\beta = 1.9$ for all of our mock radio galaxies. Lower values of $\beta$ increase the fraction of remnants, but not by a large factor, due to the less rapid evolution of volume and magnetic field. \subsubsection{Active lifetime $t_{\rm on}$} The typical active lifetime of FRII radio galaxies remains poorly constrained. Estimates based on self-similar dynamical models range from $\gtrsim 10$ Myr \citep{bird08, kapinska12} to 200 Myr \citep{antognini12}. Analysis of the length asymmetry of the most powerful double lobed radio sources indicates that the lobe advance speeds are typically a few percent of the speed of light, and not more than $0.15c$ \citep{scheuer95}, implying a typical active lifetime of a few tens of Myr. Estimates based on spectral ageing \citep{alexander87, liu92} systematically underestimate dynamical ages by a significant factor, and both dynamical and spectral age estimates have an uncertain relationship to the true source age \citep{eilek97, blundell00, kaiser05, hardcastle13a}. For the purposes of our simulated catalogues, we assume that the active lifetimes follow a truncated log-normal distribution, with mean $\langle \log(t_{\rm on}) \rangle = 7.5$ and standard deviation $\sigma_{\log(t_{\rm on})} = 0.1$, truncated such that $7.3 < \log(t_{\rm on}) < 8.3$, where $t_{\rm on}$ is specified in years. The active lifetime $t_{\rm on}$ refers to the time period during which the lobes are fed with fresh electrons. Increasing the mean active lifetime by a factor $f$ will cause a decrease in the simulated catalogue remnant fraction by a factor $\lesssim f$, while decreasing the mean active lifetime by a factor $f$ will increase the remnant fraction by a factor $\lesssim f$. \subsubsection{Age at which the source is observed, $t_{\rm obs}$} Source ages are sampled from a uniform distribution between $t_{\rm obs, min} = 0.1$ Myr and $t_{\rm obs, max} = 200$ Myr, which is several times the active lifetime. The results are insensitive to the assumed value for $t_{\rm obs, max}$, because the age distribution of remnants declines very steeply, as shown in Figure \ref{fig:histograms} (b). The results are insensitive to the assumed value for $t_{\rm obs, min}$, because $t_{\rm obs, min} \ll t_{\rm obs, max}$ and source age $t_{\rm obs}$ is sampled uniformly. \subsubsection{Magnetic field normalisation $B_0$} One of the most important and influential parameters of our model is the normalisation of the magnetic field strength, $B_0$ in Equation \ref{eqn:B_evolution}. \citet{croston05} measured lobe magnetic field strengths in a sample of 33 powerful FRII radio galaxies, based on inverse Compton modelling of the observed X-ray emission. They find that the magnetic field strength in their sample is a strong function of source size.
For sources greater than 300 kpc, \citet{croston05} find a median magnetic field strength of 0.6 nT, decreasing to 0.2 nT for the largest sources in that sample. We therefore fix $B_0 = 0.6$ nT at time $\log(t_{\rm years}) = 7.3$, corresponding to the lower limit on the active lifetime for sources in our mock sample. In this way, we ensure that the magnetic field strength at the end of each source's active lifetime is $\lesssim 0.6$ nT, consistent with the results of \citet{croston05}. In equation \ref{eqn:B_evolution}, we have set $Q_0 = 10^{39}$ W. However, we note that the dependence of magnetic field strength on jet power is extremely weak, and $Q_0$ is therefore not of great importance. \subsubsection{Energy injection index, $a$} \label{sec:injection_index_distribution} The injection index is represented by the parameter $a$ in equation \ref{eqn:injection_spectrum}. We sample the injection index for each source from a truncated Gaussian distribution with $2.0 < a < 2.4$, mean $\bar{a} = 2.2$ and standard deviation $\sigma_a = 0.2$. This distribution of injection indices results in a relatively broad distribution of ``observed'' spectral index in our mock catalogues, ranging between $0.5 < \alpha_{74}^{1400} < 1.2$ with a peak at $\alpha_{74}^{1400} \sim 0.8$, providing a good match to the spectral index distribution observed in our VLSSr sample. Note that with this injection index distribution, even the oldest active sources cannot have spectral index $\alpha > 1.2$ between any pair of frequencies, unless those frequencies are near to the frequency corresponding to the cutoff in the electron distribution. Therefore, active sources do not contaminate our ultra-steep spectrum sample in our mock catalogue. To explain this in more detail, consider a source with injection index $a=2.4$ -- the maximum possible value that could be drawn from the truncated Gaussian described above. Imagine this source is observed when it is very old, so that the radiative cooling break of $\Delta \alpha = 0.5$ occurs well below the lowest frequency we have observed. In that case, the spectral index will be $\alpha = 1.2$, but cannot get any steeper with age. \subsubsection{Minimum/maximum injected Lorentz factor, $\gamma_{\rm min, max}$} We assume $\gamma_{\rm min} = 100$ for all mock radio galaxies, and $\gamma_{\rm max} = 5 \times 10^6$. The maximum and minimum injected Lorentz factors do not strongly affect the results presented here, since the electrons at $\gamma_{\rm min}$ and those at $\gamma_{\rm max}$ emit at frequencies well outside the observed range. They do however affect the scaling between jet power and particle injection rate $q_0$ (see Section \ref{sec:sigma_and_jet_power}). Our choice of low energy electron cutoff is based on the observation in several sources of a low-frequency flattening in the hotspot spectra, consistent with a low energy cutoff in the electron distribution at a Lorentz factor of $\gamma_{\rm min} \sim 100 - 700$ \citep[][and references therein]{godfrey09}. Such values of $\gamma_{\rm min}$ are likely to be the result of dissipation of jet bulk kinetic energy \citep{godfrey09}. In the case of Cygnus A, absorption is at least partially responsible for the low frequency turnover in the hotspot spectra \citep{mckean16}; however, this does not rule out the possible involvement of a low energy cutoff in the hotspots of Cygnus A. The interpretation of the hotspot turnover in Cygnus A remains problematic \citep{mckean16}.
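To make the sampling of these distributional parameters concrete, the following minimal NumPy sketch draws the active lifetime, observation age, and injection index as described above. It is an illustration only (the truncations are implemented by simple rejection sampling), not the code used to generate the catalogues in this paper.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(n, mean, sigma, lo, hi):
    """Rejection-sample a truncated Gaussian."""
    out = np.empty(0)
    while out.size < n:
        draw = rng.normal(mean, sigma, size=2 * n)
        out = np.concatenate([out, draw[(draw > lo) & (draw < hi)]])
    return out[:n]

n = 10000
# Active lifetime: truncated log-normal, 7.3 < log10(t_on / yr) < 8.3.
t_on_Myr = 10.0 ** truncated_normal(n, 7.5, 0.1, 7.3, 8.3) / 1.0e6

# Age at which each source is observed: uniform between 0.1 and 200 Myr.
t_obs_Myr = rng.uniform(0.1, 200.0, n)

# Injection index: truncated Gaussian with 2.0 < a < 2.4.
a = truncated_normal(n, 2.2, 0.2, 2.0, 2.4)

# A source observed after its active lifetime is a remnant
# (this is the intrinsic remnant fraction, before any flux cut).
print(np.mean(t_obs_Myr > t_on_Myr))
\end{verbatim}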
\subsubsection{Jet power, $Q_{\rm jet}$} The jet power is sampled from a power-law probability distribution, with \begin{equation} p(Q_{\rm jet}) \propto \begin{cases} Q_{\rm jet}^{-n_Q}, & \text{if } 5 \times 10^{36}~\mathrm{W} < Q_{\rm jet} < 2 \times 10^{42}~\mathrm{W} \\ 0, & \text{otherwise} \end{cases} \end{equation} We assume $n_Q = 2.3$, which is an average of the values derived by \citet{blundell99} (2.6) and \citet{wang08} (2.0). We note that the results presented here are not strongly dependent on the assumed value of $n_Q$, for reasonable departures from our assumed value. \subsubsection{Electron energy fraction, $\epsilon_e$} To calculate the particle injection rate $q_0$, we must specify the fraction of jet power that is converted to the internal energy of the relativistic electron population, $\epsilon_e$ (see Equation \ref{eqn:q_0}). Here we assume that the jet power is equally distributed between magnetic energy, relativistic electron energy, and the energy of non-radiating particles, so that $\epsilon_e = 1/3$. Our conclusions are not sensitive to the value of $\epsilon_e$. \subsubsection{Ratio of hotspot pressure to lobe pressure, $\frac{p_{\rm HS}}{p_{\rm lobe}}$} Another parameter that is necessary to calculate the particle injection rate into the lobes is the ratio of hotspot pressure to lobe pressure (see Equation \ref{eqn:q_0}). Here we assume $\frac{p_{\rm HS}}{p_{\rm lobe}} = 10$; however, we note the very weak dependence of $q_0$ on the value of $\frac{p_{\rm HS}}{p_{\rm lobe}}$. Again, our conclusions are not sensitive to the assumed value of $\frac{p_{\rm HS}}{p_{\rm lobe}}$. \begin{comment} Equation A6 of Kaiser et al, 2007, shows that for a given age, the lobe pressure has a weak dependence on jet power, but a stronger dependence on the environmental parameter $\rho_0 r_0^\beta$ . We assume self-similar lobe evolution in both the active and remnant phase, and equipartition between magnetic and particle energy density, such that the magnetic energy density $U_B \propto p_{\rm lobe}$. In this case, \begin{equation} B \propto \left( \rho_0 r_0^\beta \right)^{\frac{3}{2(5-\beta)}} Q^{\frac{2-\beta}{2(5 - \beta)}} t^{\frac{-4-\beta}{2(5 - \beta)}} \end{equation} In the active phase, we assume the lobes evolve according to the self-similar model discussed by \citet{falle91, ka97, kda97, komissarov98}. The model can be summarised as follows: In the following, all quantities such as the jet power, length, volume, etc. refer to an individual lobe. For example, the total kinetic power produced by the central engine (the sum of the two individual jets) is given by 2Q, and the kinetic power carried by each individual jet is Q. We calculate the luminosity of an individual lobe, then multiply the result by 2, to obtain the total source luminosity. The lobes are assumed to expand into a power law density profile described by \begin{equation} \rho(r) = \rho_0 \left( \frac{r}{a_0} \right)^{-\beta} \end{equation} The linear extent of each individual lobe (from the central engine to the hotspot) is given by \begin{equation} D = c_1 \left( \frac{Q}{\rho a_0^\beta} \right)^{\frac{1}{5 - \beta}} t^{\frac{3}{5 - \beta}} \end{equation} The constant of proportionality $c_1$ is model dependent, and can be estimated through theoretical arguments \citep{kda97}, or empirically \citep{kaiser00, willott99}. However, this constant is of order unity, and is a weakly varying function of model parameters, and as such, the value of $c_1$ does not strongly impact our results.
The ratio of an individual lobe's length to it's radius is A \citep{kaiser07}. \citet{kda97} assume cylindrical geometry of the lobes, such that the volume is . \begin{equation} V_{\rm lobe} = \pi D^3/A^2 \end{equation} The internal pressure of the lobe is given by \begin{equation} p_{\rm lobe} = f_p \left( \rho_0 r_0^\beta \right)^{1/3} Q^{2/3} D^{(-4-\beta)/3} \end{equation} with \begin{equation} f_p = \frac{18 c_1^{2(5-\beta)/3}}{(\Gamma_x + 1)(5 - \beta)^2 A^2} \end{equation} We assume energy equipartition between relativistic particles and magnetic field, so that the magnetic energy density is \begin{equation} U_B = \frac{3 p_{\rm lobe}}{2} \end{equation} For low values of $\beta$ and low jet powers, the lobes will reach pressure equilibrium with their surroundings prior to the end of the active phase. Following the active phase, the buoyant phase will begin at a time \begin{equation} t_{\rm buoy} = t_{\rm on} \left( \frac{p_c (t_{\rm on}) D(t_{\rm on})^{\beta} }{p_{\rm 0} r_0^\beta } \right)^{\frac{1}{2} \left( \frac{6-\beta}{4-\beta}\right)} \end{equation} where $p_0$ is the external gas pressure at the core radius $r_0$, given by $p_0 = \frac{k_B}{\mu m_p} \rho_0 T$ where T is the gas temperature in Kelvin and $\mu m_p$ is the mean particle mass in the plasma. \end{comment} \subsection{Contribution of the jets and hotspots in active sources} After the jets stop supplying energy to the hotspots, the bright emission from them will disappear on the order of a sound-crossing time ($\lesssim 1$~Myr for a hotspot with a diameter of a few kpc). If the hotspots provide a significant fraction of the total source flux at 74 MHz, then the rapid disappearance of the hotspots at the start of the remnant phase could help to explain the rapid disappearance of remnant FRII radio galaxies from our flux limited samples. \citet{jenkins77} defined the hotspots as regions 15 kpc in diameter, and found that the ratio of hotspot to total luminosity is strongly correlated with the total luminosity, and furthermore, that the hotspots can often contribute more than 90$\%$ of the source flux density at 178 MHz, particularly at high luminosities. In direct contrast to this result is the more recent study by \citet{mullin08} of a sample of 100 FRII 3CRR radio galaxies with $z < 1$. \citet{mullin08} are able to more accurately identify the hotspot regions than \citet{jenkins77}, and find no correlation between hotspot prominence and the total radio luminosity in this sample (see their figure 32). Furthermore, \citet{mullin08} find that the hotspots contribute typically a few percent to a few tens of percent to the total radio luminosity at 178 MHz\footnote{\citet{mullin08} define hotspot prominence as the ratio of hotspot luminosity at 8.4 GHz to the total source luminosity at 178 MHz. We have converted the hotspot prominence of \citet{mullin08} to a hotspot emission fraction, or compactness in the terminology of \citet{jenkins77}, by multiplying the hotspot prominence values by a factor of $(178/8400)^{-\alpha}$ assuming $\alpha = 0.8$.}. The median hotspot prominence (summed for both the north and south hotspot) for the sample of \citet{mullin08} is 0.008. We convert this to a hotspot fraction (the fraction of the total source luminosity contributed by the hotspots at 74 MHz) by multiplying the hotspot prominence by a factor of $(74/8400)^{-\alpha}$ assuming $\alpha = 0.8$. This gives a median hotspot fraction of 35$\%$.
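The conversion from hotspot prominence to a 74 MHz hotspot fraction is a one-line calculation; as a minimal sketch, using the median prominence quoted above:
\begin{verbatim}
# Median hotspot prominence from Mullin et al. (2008): hotspot luminosity
# at 8.4 GHz relative to the total source luminosity at 178 MHz.
prominence = 0.008
alpha = 0.8

# Scale the hotspot luminosity from 8.4 GHz to 74 MHz assuming S ~ nu^-alpha.
fraction_74MHz = prominence * (74.0 / 8400.0) ** (-alpha)
print(round(fraction_74MHz, 2))   # ~0.35
\end{verbatim}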
However, this does not account for the fact that hotspot spectra often become much flatter towards low frequency, particularly in high luminosity radio galaxies \citep{leahy89, godfrey09, mckean16}. We therefore do not attempt to account for the contribution to the source flux from hotspot related emission. We simply note that if the source flux density drops by a factor $f_{\rm drop}$ at the end of the active phase due to the rapid disappearance of the hotspots, then we will over-estimate the ratio of remnant to active sources by a factor of $f_{\rm drop}^{p-1}$, where $p \approx 2.3$ is the slope of the luminosity function. The jet prominence is typically much lower than the hotspot prominence, by an order of magnitude or more \citep{mullin08}. While it is true that in a few percent of cases the jet prominence is comparable to or greater than the hotspot prominence, in the vast majority of cases the jet prominence is a negligible component of the total source flux density. \subsection{Procedure for generating mock catalogues} The mock catalogues are created in several steps, as described below. \begin{enumerate} \item First, we generate several million radio galaxies, each with the model parameters fixed or sampled from their corresponding probability distributions as described in the preceding sections. \item For each source, we calculate an upper limit to the flux density at the sample selection frequency (74 MHz) using a fast, analytic expression. \item We apply the flux cut to the sample, using the upper limits calculated in the previous step: all sources for which the calculated flux upper limit at the selection frequency is below the flux limit are removed from the mock catalogue. \item For the remaining sources, we calculate the model radio galaxy spectrum accurately using numerical integration of equations \ref{eqn:dN_dgamma} and \ref{eqn:S_nu}. \item We again apply a flux cut to the sample, this time using the accurate model flux densities as the basis of the flux cut. Only those sources whose flux densities lie above the flux limit remain in the sample. \end{enumerate} The flux density upper limit in step (ii) above is obtained using an analytic approximation to equation \ref{eqn:dN_dgamma}, along with the $\delta$-function approximation to the synchrotron emission spectrum. The analytic approximation to equation \ref{eqn:dN_dgamma} is derived by neglecting the term $(1 - \gamma/\gamma^*)^{a-2}$. That is, we replace the following equation (Equation \ref{eqn:dN_dgamma}) \begin{equation} \frac{dN}{d\gamma}\left( \gamma, t \right) = q_0 \gamma^{-a} \int_{t_{\rm i, min}}^{t_{\rm i, max}} \left( \frac{V(t_i)}{V(t)} \right)^{(a-1)/3} \left( 1 - \frac{\gamma}{\gamma_*(t_i, t)} \right)^{a-2} d t_i \nonumber \end{equation} with the following integral, \begin{equation} \label{eqn:upper_limit_on_N_gamma} \frac{dN}{d\gamma}\left( \gamma, t \right) < q_0 \gamma^{-a} \int_{t_{\rm i, min}}^{t_{\rm i, max}} \left( \frac{V(t_i)}{V(t)} \right)^{(a-1)/3} d t_i \end{equation} which, in the case of power-law volume evolution, has an analytic solution. Since the term $(1 - \gamma/\gamma^*)^{a-2} < 1$, Equation \ref{eqn:upper_limit_on_N_gamma} provides an upper limit on $N(\gamma)$, and therefore an upper limit on the flux density at any frequency.
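The control flow of this two-stage flux cut can be sketched schematically as follows. The flux functions below are deliberately simplified, hypothetical stand-ins for the model calculations; the cheap bound is constructed by dropping a factor that is at most unity, mirroring the argument above, and only the structure of the procedure is meant to be illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
FLUX_LIMIT = 1.5   # Jy at 74 MHz

def flux_upper_bound(src):
    # Cheap analytic bound: obtained by dropping a suppression factor <= 1,
    # so it can only overestimate the accurate flux (illustrative stand-in).
    return src["amplitude"]

def flux_accurate(src):
    # Stand-in for the expensive numerical integration of the spectral model.
    return src["amplitude"] * np.exp(-src["age_Myr"] / 50.0)

# Step (i): draw model parameters for a large number of trial sources.
sources = [{"amplitude": 10.0 ** rng.uniform(-2, 2),
            "age_Myr": rng.uniform(0.1, 200.0)} for _ in range(100000)]

# Steps (ii)-(iii): keep only sources whose cheap bound clears the limit.
survivors = [s for s in sources if flux_upper_bound(s) > FLUX_LIMIT]

# Steps (iv)-(v): accurate fluxes for the survivors, then the final cut.
catalogue = [s for s in survivors if flux_accurate(s) > FLUX_LIMIT]
print(len(sources), len(survivors), len(catalogue))
\end{verbatim}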
\section{Modeling Results} \label{sec:modeling_results} We carried out two Monte-Carlo simulations, each one representing a bound on the possible evolution scenarios in the remnant phase, and each one differing only in the prescribed evolution of lobe volume and magnetic field strength in the remnant phase (see the description of these scenarios in section \ref{sec:radio_source_evolution}). In the remnant phase, for case (i), we assume that the lobes remain over pressured with respect to the ambient medium, and evolve in a Sedov-like manner as described by \citet{kaiser02}. This is the maximal expansion rate that can be achieved by inactive radio galaxies. In case (ii) we assume that there is no lobe expansion in the remnant phase. In the active phase, for both of our simulations, we assume the lobes evolve according to self-similar expansion models of \citet{ka97}. We note that our model is only applicable to radio galaxies with a single active phase: re-started radio galaxies will show different characteristic luminosity and spectral evolution. \subsection{Results of Monte-Carlo simulation with Maximal (sedov-like) expansion in the remnant phase (Case (i))} \begin{table*} \caption{Results of Monte Carlo Simulations: All redshifts.} \label{table:modeling_results_all_redshifts} \begin{tabular}{cccccc} \hline & \multicolumn{2}{c}{Maximal (Sedov-like) expansion} && \multicolumn{2}{c}{No expansion} \\ & \multicolumn{2}{c}{in remnant phase} && \multicolumn{2}{c}{in remnant phase} \\ \cline{2-3} \cline{5-6} \\ Quantity & Number of sources & Fraction of sample && Number of sources & Fraction of sample \\ \hline Active sources & 9499 & 94.6$\%$ && 5372 & 82$\%$ \\ \\ All Remnant sources & 539 & 5.4$\%$ && 1199 & 18$\%$ \\ \\ Low $\nu$ ultra-steep spectrum remnants ($\alpha_{\rm 74~MHz}^{\rm 1400 MHz} > 1.2$) & 187 & 1.8$\%$ && 751 & 11.4$\%$ \\ \\ Mid $\nu$ ultra-steep spectrum remnants ($\alpha_{\rm 1400 MHz}^{\rm 5 GHz} > 1.2$) & 378 & 3.8$\%$ && 1036 & 15.8$\%$ \\ \\ High $\nu$ ultra-steep spectrum remnants ($\alpha_{\rm 5~GHz}^{\rm 10 GHz} > 1.2$) & 441 & 4.4$\%$ && 1110 & 16.9$\%$ \\ \\ Curved spectrum remnants ($\alpha_{\rm 1400~MHz}^{\rm 5000~MHz} - \alpha_{\rm 74~MHz}^{\rm 1400~MHz} > 0.5$) & 292 & 2.9$\%$ && 976 & 14.9$\%$ \\ \\ Curved spectrum remnants ($\alpha_{\rm 5000~MHz}^{\rm 10000~MHz} - \alpha_{\rm 74~MHz}^{\rm 1400~MHz} > 0.5$) & 385 & 3.8$\%$ && 1068 & 16.3$\%$ \\ \hline \end{tabular} \end{table*} \begin{table*} \caption{Results of Monte Carlo Simulations: Low redshifts only ($z<0.5$).} \label{table:modeling_results_low_redshifts} \begin{tabular}{cccccc} \hline & \multicolumn{2}{c}{Maximal (Sedov-like) expansion} && \multicolumn{2}{c}{No expansion} \\ & \multicolumn{2}{c}{in remnant phase} && \multicolumn{2}{c}{in remnant phase} \\ \cline{2-3} \cline{5-6} \\ Quantity & Number of sources & Fraction of sample && Number of sources & Fraction of sample \\ \hline Active sources & 496 & 80$\%$ && 292 & 37$\%$ \\ \\ All Remnant sources & 123 & 20$\%$ && 505 & 63$\%$\\ \\ Low $\nu$ ultra-steep spectrum remnants ($\alpha_{\rm 74~MHz}^{\rm 1400 MHz} > 1.2$) & 20 & 3.2$\%$ && 265 & 33$\%$ \\ \\ Mid $\nu$ ultra-steep spectrum remnants ($\alpha_{\rm 1400~MHz}^{\rm 5 GHz} > 1.2$) & 57 & 9.2$\%$ && 419 & 53$\%$ \\ \\ High $\nu$ ultra-steep spectrum remnants ($\alpha_{\rm 5~GHz}^{\rm 10 GHz} > 1.2$) & 79 & 12.8$\%$ && 456 & 57$\%$ \\ \\ Curved spectrum remnants ($\alpha_{\rm 1400~MHz}^{\rm 5000~MHz} - \alpha_{\rm 74~MHz}^{\rm 1400~MHz} > 0.5$) & 47 & 7.6$\%$ && 397 & 50$\%$ \\ \\ Curved 
spectrum remnants ($\alpha_{\rm 5000~MHz}^{\rm 10000~MHz} - \alpha_{\rm 74~MHz}^{\rm 1400~MHz} > 0.5$) & 69 & 11.1$\%$ && 448 & 56$\%$ \\ \hline \end{tabular} \end{table*} \begin{figure*} \includegraphics[width=2.0\columnwidth]{histograms_sedov.pdf} \caption{Histograms of spectral index and flux densities for our mock catalogue obtained under the assumption of Sedov-like expansion in the remnant phase (case (i)). Green bars represent active sources, red bars represent remnant sources. Note that not all remnant sources are ultra-steep spectrum (see Figure \ref{fig:remnant_fraction_vs_redshift}).} \label{fig:histograms} \end{figure*} \begin{figure} \includegraphics[width=1.0\columnwidth]{remnant_fraction_vs_redshift_sedov.pdf} \caption{Fraction of flux limited sample that are remnants, as a function of redshift, for our simulated population of $S_{\rm 74 MHz} > 1.5$~Jy FRII radio galaxies, assuming Sedov-like expansion in the remnant phase.} \label{fig:remnant_fraction_vs_redshift} \end{figure} In Figures \ref{fig:histograms} and \ref{fig:remnant_fraction_vs_redshift} and Tables \ref{table:modeling_results_all_redshifts} and \ref{table:modeling_results_low_redshifts} we summarise the results of our simulations of active and remnant radio galaxy populations with maximal (Sedov-like) expansion in the remnant phase. Our simulated catalogue contains a total of 10,038 sources\footnote{We purposefully simulated a larger catalogue of sources than was obtained in our VLSSr selected sample in order to increase the fidelity of the simulated distributions.}, of which 9499 are active (94.6$\%$) and 539 are remnant (5.4$\%$). However, of the 539 remnants, only 187 (1.8$\%$ of the entire sample) have ultra-steep spectra between 74 MHz and 1400 MHz with $\alpha_{\rm 74 MHz}^{\rm 1400 MHz} > 1.2$. The fraction of ultra-steep spectrum remnants in our mock sample is therefore consistent with our upper limit of $2\%$ on the ultra-steep spectrum FRII remnant fraction obtained in section \ref{sec:empirical_results}. However, this model predicts that almost twice as many remnants are missed by our remnant selection criterion as are selected by it. The 74 MHz to 1400 MHz spectral index is clearly an inefficient selection criterion, with a selection efficiency of only $35\%$ (187/539). Spectral indices measured at higher frequency are significantly more efficient at remnant selection. If remnant selection incorporated higher frequency data using the criterion $\alpha_{\rm 1.4 GHz}^{\rm 5 GHz} > 1.2$, our Monte-Carlo simulation predicts that we would more than double the number of remnants selected, and achieve $70\%$ remnant selection efficiency. If we could have incorporated both 5 GHz and 10 GHz data into our analysis, and selected remnants based on the criterion $\alpha_{\rm 5 GHz}^{\rm 10 GHz} > 1.2$, our Monte-Carlo simulation predicts a selection efficiency of 82$\%$. One of the problems with selecting remnants based on ultra-steep spectra alone is that there are other types of radio source that can satisfy the ultra-steep spectrum criterion, such as high-redshift radio galaxies (HzRGs). To overcome this problem, some authors have used spectral curvature as a more robust indicator of remnant radio galaxies \citep[e.g.][]{murgia11}. Spectral curvature is calculated as the difference between a high frequency spectral index and a low frequency spectral index, and requires flux density measurements at a minimum of three different frequencies.
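As a concrete illustration of these selection criteria, the short sketch below evaluates the ultra-steep and curvature flags used in Tables \ref{table:modeling_results_all_redshifts} and \ref{table:modeling_results_low_redshifts} for a single spectrum; the flux densities are purely illustrative.
\begin{verbatim}
import math

def alpha(s1, s2, nu1, nu2):
    """Two-point spectral index, S_nu ~ nu^-alpha."""
    return -math.log(s2 / s1) / math.log(nu2 / nu1)

# Illustrative flux densities (Jy) at 74, 1400, 5000 and 10000 MHz.
S = {74: 3.0, 1400: 0.25, 5000: 0.035, 10000: 0.009}

a_low = alpha(S[74], S[1400], 74.0, 1400.0)
a_mid = alpha(S[1400], S[5000], 1400.0, 5000.0)
a_high = alpha(S[5000], S[10000], 5000.0, 10000.0)

flags = {
    "ultra-steep (74 MHz - 1.4 GHz)": a_low > 1.2,
    "ultra-steep (1.4 - 5 GHz)": a_mid > 1.2,
    "ultra-steep (5 - 10 GHz)": a_high > 1.2,
    "curved (3 frequencies)": (a_mid - a_low) > 0.5,
    "curved (4 frequencies)": (a_high - a_low) > 0.5,
}
print(round(a_low, 2), round(a_mid, 2), round(a_high, 2))
print(flags)   # this example is missed at low frequency, selected at high
\end{verbatim}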
We have considered the spectral curvature selection in our mock sample using measurements at three frequencies, up to 5 GHz ($\alpha_{\rm 1400~MHz}^{\rm 5000~MHz} - \alpha_{\rm 74~MHz}^{\rm 1400~MHz} > 0.5$), and also using four frequencies up to 10 GHz ($\alpha_{\rm 5000~MHz}^{\rm 10000~MHz} - \alpha_{\rm 74~MHz}^{\rm 1400~MHz} > 0.5$), achieving remnant selection efficiencies of 54$\%$ and $71\%$, respectively. Figure \ref{fig:histograms} (a) demonstrates that the flux density distribution for the remnant population has the same slope as that of the active population, despite their different luminosity evolution. This implies that the overall remnant fraction is independent of the flux limit of the sample. Figure \ref{fig:histograms} (b) demonstrates the very sharp decline in the number of remnants as a function of source age. This implies that remnants in flux limited samples tend to be relatively ``new'' remnants, as expected due to the rapid luminosity evolution that arises from a combination of decreasing magnetic field strength along with the adiabatic and radiative losses during the remnant phase. This tendency for remnant radio galaxies in flux limited samples to be young is the reason that many remnants in our Monte-Carlo simulations do not have ultra-steep spectra in the observed frequency range. Most importantly, unlike the luminosity distribution, the age distribution of a flux limited sample of remnants can provide meaningful constraints on the luminosity evolution during the remnant phase. To illustrate this, we consider the following simplified example. Suppose that sources are ``born'' with some peak luminosity $L_{\rm peak}$, and evolve according to a piecewise power law with \begin{equation} \label{eqn:L_obs} L_\nu \propto\begin{cases} L_{\rm peak}\, t^{-a_1}, & \text{if } t<t_{\rm on},\\ L_{\rm peak}\, t^{-a_2}, & \text{if } t>t_{\rm on}. \end{cases} \end{equation} Now assume that sources are ``born'' with luminosity $L_{\rm peak}$ at a rate given by the birth function \begin{equation} \label{eqn:L_peak_pdf} \frac{dN}{dt \, d L_{\rm peak}} \propto L_{\rm peak}^{-p} \end{equation} Then the age distribution of remnant radio galaxies at a given luminosity $L_\nu$ is given by \begin{equation} \frac{dN}{dt \, d L_\nu} \propto t^{a_2 (1 - p)} \end{equation} This follows by changing variables from $L_{\rm peak}$ to $L_\nu$ at fixed age: in the remnant phase $L_{\rm peak} \propto L_\nu t^{a_2}$, so that $dN/(dt \, dL_\nu) \propto L_{\rm peak}^{-p} \, \partial L_{\rm peak}/\partial L_\nu \propto (L_\nu t^{a_2})^{-p} \, t^{a_2} \propto t^{a_2(1-p)}$ at fixed $L_\nu$. Thus, given an estimate of the ``birth'' luminosity function (i.e. $p$ in the above model), the age distribution can be used to constrain the luminosity evolution in the remnant phase. Figures \ref{fig:histograms} (c) and (d) demonstrate the existence of many remnant radio galaxies with ``normal'' spectral index values in our mock catalogue, as well as the broad tail of ultra-steep spectrum remnants. Figure \ref{fig:histograms} (e) demonstrates that high redshift remnant radio galaxies are extremely rare, and low redshift remnant radio galaxies are predicted to be much more common. This is due to the combination of two factors: (1) the increased rest-frame frequency at higher redshifts corresponds to higher energy electrons and consequently a faster radiative cooling rate; and (2) the increased energy density of the CMB at higher redshifts causes a faster radiative cooling rate from inverse Compton scattering of the CMB. The number of remnants per redshift bin is approximately constant up to $z \gtrsim 1$, but decreases dramatically for $z \gtrsim 1.5$.
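The $t^{a_2(1-p)}$ scaling derived above is straightforward to verify numerically. The following toy Monte-Carlo sketch (arbitrary, illustrative parameter values and units; for a power-law birth function the same exponent also holds for sources above a fixed luminosity cut) draws peak luminosities and ages, applies a luminosity cut, and compares the slope of the resulting age histogram with the analytic expectation, noting that counts per logarithmic age bin scale as $t^{a_2(1-p)+1}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
p, a2, L_cut = 2.3, 2.0, 10.0         # illustrative values only

# Peak luminosities from dN/dL_peak ~ L_peak^-p (Pareto with L_peak >= 1)
# and ages drawn uniformly, i.e. a constant birth rate (arbitrary units).
n = 5_000_000
L_peak = (1.0 - rng.random(n)) ** (1.0 / (1.0 - p))
t = rng.uniform(1.0, 10.0, n)

# Remnant luminosity L = L_peak * t^-a2; keep sources above the cut.
ages = t[L_peak * t ** (-a2) > L_cut]

# Counts per logarithmic age bin should scale as t^(a2*(1-p) + 1).
counts, edges = np.histogram(np.log10(ages), bins=25)
centres = 0.5 * (edges[:-1] + edges[1:])
good = counts > 200
slope = np.polyfit(centres[good], np.log10(counts[good]), 1)[0]
print(slope, a2 * (1.0 - p) + 1.0)    # both close to -1.6
\end{verbatim}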
In Table \ref{table:modeling_results_low_redshifts} we list results for the low redshift ($z<0.5$) subsample of our simulated catalogues. In the simulated catalogue with Sedov-like expansion in the remnant phase, 20$\%$ of the flux limited FRII sample is predicted to be remnant, and of those remnants, approximately half will show ultra-steep spectra between 1.4 GHz and 5 GHz. It is clear from Table \ref{table:modeling_results_low_redshifts} and Figures \ref{fig:histograms} (e) and \ref{fig:remnant_fraction_vs_redshift} that there is a significant advantage to be gained by studying the remnant population at low redshifts, particularly when high frequencies (5 and 10 GHz) are included in the analysis. \subsection{Results of Monte-Carlo simulation with no expansion in the remnant phase} We repeated the Monte-Carlo simulations with no expansion in the remnant phase, so that \begin{equation} \label{eqn:V_evolution_no_sedov} V(t) \propto\begin{cases} t^{9/(5 - \beta)}, & \text{if } t<t_{\rm on}.\\ V(t_{\rm on}), & \text{if } t>t_{\rm on}. \end{cases} \end{equation} and \begin{equation} \label{eqn:B_evolution_no_sedov} B(t)=\begin{cases} B_0 \left( \frac{t}{t_0} \right)^{\frac{-4-\beta}{2(5 - \beta)}} \left( \frac{Q}{Q_0} \right)^{\frac{2-\beta}{2(5 - \beta)}} , & \text{if } t<t_{\rm on}.\\ B(t_{\rm on}), & \text{if } t>t_{\rm on}. \end{cases} \end{equation} This model is not physically realistic: it is unlikely that all sources will reach pressure equilibrium right at the moment the central engine shuts off. However, it is a useful exercise to demonstrate the strength of the effect of expansion in the remnant phase. It also clearly demonstrates that models with no expansion in the remnant phase are in great conflict with observations. This is important, given the evidence that FRII radio galaxies might reach pressure equilibrium with the surroundings before the end of their active life. The results for this model with no expansion in the remnant phase are presented in figures \ref{fig:histograms_no_expansion_in_remnant_phase} and \ref{fig:remnant_fraction_vs_redshift_no_expansion_in_remnant_phase} and Tables \ref{table:modeling_results_all_redshifts} and \ref{table:modeling_results_low_redshifts}. It is immediately clear, as expected, that the remnant fraction is greater in this simulated catalogue, particularly at low redshift ($z < 0.5$), where the model predicts that nearly two-thirds of the flux limited sample are remnants, and that a third of the flux limited sample are ultra-steep spectrum remnants with $\alpha_{\rm 74~MHz}^{\rm 1400~MHz} > 1.2$ (see table \ref{table:modeling_results_low_redshifts} and Figure \ref{fig:remnant_fraction_vs_redshift_no_expansion_in_remnant_phase}). \begin{figure} \includegraphics[width=1.0\columnwidth]{remnant_fraction_vs_redshift_no_expansion_in_remnant_phase.pdf} \caption{Fraction of flux limited sample that are remnants, as a function of redshift, for our simulated population of $S_{\rm 74 MHz} > 1.5$~Jy FRII radio galaxies, assuming there are no adiabatic losses, and no magnetic field evolution in the remnant phase.} \label{fig:remnant_fraction_vs_redshift_no_expansion_in_remnant_phase} \end{figure} \begin{figure} \includegraphics[width=1.0\columnwidth]{luminosity_evolution_comparison.pdf} \caption{This figure illustrates the effect of three different model parameters on the flux density evolution of model radio galaxies in the active and remnant phase, at 74~MHz, 1.4~GHz, and 5~GHz. 
The three model parameters of interest in this figure are the most significant in terms of their effect on the remnant fraction derived from our Monte Carlo simulations. In each figure the model parameters are fixed except for either $\beta$, $B_0$ or $t_{\rm on}$, the values of which are specified in the figure legend. The model parameters are as follows, unless specified otherwise in the figure legend: $\gamma_{\rm min} = 100$, $\gamma_{\rm max} = 5 \times 10^6$, $\epsilon_e = 1/3$, $\frac{p_{\rm hs}}{p_{\rm lobe}} = 10$, $B_0 = 0.6$ nT, $Q_0 = 10^{39}$, $z = 1.0$, $t_{\rm on} = 30$ Myr, $\beta = 1.9$, $Q_{\rm jet} = 10^{39}$ Watts, injection index $a = 2.2$. See section \ref{sec:simulation_approach} for a description of each of these model parameters. Note that in the second panel, each of the model curves for $B_0 = 0.3$nT have been scaled by a factor $\gtrsim$ 3, to enable better comparison between the light curves.} \label{fig:luminosity_evolution_comparison} \end{figure} \begin{figure*} \includegraphics[width=2.0\columnwidth]{histograms_no_expansion_in_remnant_phase.pdf} \caption{Histograms of spectral index and flux densities for our mock catalogue under the assumption of no expansion in the remnant phase (case (ii)). Green bars represent active sources, red bars represent remnant sources. Note that not all remnant sources are ultra-steep spectrum (see Figure \ref{fig:remnant_fraction_vs_redshift}).} \label{fig:histograms_no_expansion_in_remnant_phase} \end{figure*} \newpage \section{Discussion and Conclusions} We have carried out an empirical study based on a flux limited sample from the VLSSr radio catalogue, in order to place a limit on the occurrence of remnant FRII radio galaxies in a 74 MHz flux limited sample. We have also performed Monte-Carlo simulations of the population of active and remnant FRII radio galaxies to assess whether models of remnant lobe evolution are consistent with the observed remnant fraction. Our main conclusions may be summarised as follows: \begin{enumerate} \item In our VLSSr selected sample, fewer than 2$\%$ of FRII radio galaxies with 74 MHz flux density greater than 1.5 Jy are ultra-steep spectrum remnants with $\alpha_{\rm 74~MHz}^{\rm 1400~MHz} > 1.2$. \item Our Monte-Carlo simulation with Maximal (sedov-like) expansion in the remnant phase produced a remnant fraction of $2\%$, marginally consistent with the upper limit described above. \item Our Monte-Carlo simulation with Maximal (sedov-like) expansion in the remnant phase predicts the existence of nearly twice as many remnants with ``normal" spectra ($\alpha_{\rm 74~MHz}^{\rm 1400~MHz} < 1.2$) as there are ultra-steep spectrum remnants in our sample. \item The above conclusion can be phrased another way: the ultra-steep selection criterion $\alpha_{\rm 74~MHz}^{\rm 1400~MHz} > 1.2$ is not efficient at selecting remnant radio galaxies, with a selection efficiency of only $\sim 35\%$. Higher frequency spectral indices are significantly more efficient at remnant selection. Remnant selection based on the criterion $\alpha_{\rm 1.4 GHz}^{\rm 5 GHz} > 1.2$ increases the selection efficiency to $70\%$, and remnant selection based on the criterion $\alpha_{\rm 5 GHz}^{\rm 10 GHz} > 1.2$ increases the selection efficiency to 82$\%$. For redshifts less than 0.5, the number of identified ultra-steep spectrum remnants increases by a factor of 4 when using $\alpha_{\rm 5~GHz}^{\rm 10~GHz}$ as opposed to $\alpha_{\rm 74~MHz}^{\rm 1400~MHz}$ (Table \ref{table:modeling_results_low_redshifts}). 
\item The remnant fraction increases rapidly towards low redshift. This is the result of (1) the increased rest-frame frequency at higher redshifts and (2) the increased energy density of the CMB at higher redshifts, resulting in an increase in the radiative cooling rate from inverse Compton scattering of the CMB. \item The model predicts an ultra-steep remnant fraction approaching 10$\%$ at redshifts $z < 0.5$, when considering ultra-steep selection based on higher frequency data ($\alpha_{\rm 1.4~GHz}^{\rm 5~GHz} > 1.2$). \item The age distribution of remnant radio galaxies in flux limited samples is a steeply decreasing function of source age, indicating that most remnant radio galaxies in flux limited samples are young remnants (Figures \ref{fig:histograms} (b) and \ref{fig:histograms_no_expansion_in_remnant_phase} (b)). Due to the steep remnant age distribution, incorporating higher frequency data into the analysis will be more important in future studies than incorporating lower frequencies. \item The age distribution of remnant radio galaxies in a flux limited sample can constrain the luminosity evolution of the remnant phase. The luminosity distribution of remnants cannot. \item The spectral index distribution of remnant radio galaxies peaks at ``normal'' spectral index values. This is a direct result of the age distribution: most remnants are young in flux limited samples (Figure \ref{fig:histograms} (c)). \item In the idealised situation that we have modelled, and with the selection criteria we have used to identify candidate remnants, the high frequency spectral index is more efficient at selecting remnants than the spectral curvature. However, when considering heterogeneous samples, as obtained for example from flux limited surveys, the spectral curvature is likely to be more robust and may result in fewer false remnant candidates. \item The remnant fraction is independent of flux limit (Figure \ref{fig:histograms} (a)). Therefore, going to fainter flux limits will not increase the remnant fraction. \end{enumerate} As discussed in Section \ref{sec:introduction}, several studies suggest that FRII radio lobes may reach pressure equilibrium before the end of their lifetime. However, we have shown that models without rapid remnant phase expansion significantly over-predict the FRII remnant fraction. Rapid luminosity evolution in the remnant phase resulting from Sedov-like expansion is required to match the low observed remnant fraction in our flux limited sample. Our results imply that either the previous evidence for internal/external pressure equilibrium in FRII radio galaxy lobes is flawed, or alternative mechanisms other than adiabatic expansion are required to explain the low remnant fraction in our flux limited sample. \section*{Acknowledgements} The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Advanced Grant RADIOLIFE-320745. \bibliographystyle{mnras}
\section{Introduction} \label{sec:intro} The CANDECOMP/PARAFAC or canonical polyadic (CP) decomposition for multidimensional data, or tensors, is a popular tool for analyzing and interpreting latent patterns that may be present in multidimensional data. One of the most popular methods used to compute a CP decomposition is the alternating least squares (CP-ALS) approach, which solves a series of linear least squares problems \cite{tensor-toolbox,tensorlab,Tensorly}. To solve these linear least squares problems, CP-ALS uses the normal equations, which are well known to be sensitive to roundoff error for moderately ill-conditioned matrices. We propose to use a more stable approach, where we solve the linear least squares problems using the QR decomposition instead. Consider the standard linear least squares problem with multiple outputs, i.e., $\min_{\M{X}} \| \M{AX}-\M{B}\|_F.$ The normal equations for this problem are $\M{A}^\top \M{A}\M{X} = \M{A}^\top \M{B}$. Given the compact/thin QR decomposition $\M{A}=\M{Q}\M{R}$, the more numerically stable solution is computed from $\M{R}\M{X} = \M{Q}^\top \M{B}$. When $\M{A}$ is tall and skinny and $\M{B}$ has few columns, the normal equations approach has about half the cost of the QR-based approach because the dominant costs are computing $\M{A}^\top \M{A}$ and the QR decomposition, respectively. However, if $\M{B}$ has many more columns than $\M{A}$, then the dominant costs are those of computing $\M{A}^\top \M{B}$ and $\M{Q}^\top\M{B}$, which are equivalent when $\M{Q}$ is formed explicitly. In this case, the QR-based approach is more numerically stable and requires practically no more computation compared to the normal equations approach. Furthermore, for rank-deficient problems, the QR-based approach can be cheaply extended to use the SVD to solve the least squares problem, computing the minimum norm solution among the set of solutions with equivalent residual norm. As we describe in more detail in \cref{sec:background}, when solving linear least squares problems within CP-ALS, $\M{A}$ corresponds to a Khatri-Rao product of factor matrices, and $\M{B}$ corresponds to the transpose of a matricized tensor. In this case, the number of columns of $\M{A}$ is the rank of the CP decomposition, and the number of columns of $\M{B}$ corresponds to one of the tensor dimensions. The normal equations are particularly convenient within CP-ALS because the Khatri-Rao structure of $\M{A}$ can be exploited to compute $\M{A}^\top\M{A}$ very efficiently, and thus, even for large ranks, the dominant cost is computing $\M{A}^\top \M{B}$, whose transpose is known as the matricized-tensor times Khatri-Rao product (MTTKRP). The MTTKRP is a well-studied and well-optimized computation because of its importance for the performance of CP-ALS and other gradient-based optimization algorithms for CP \cite{BR20,NL+19,PTC13a,splat}. In \cref{ssec:qr-solve}, we present a QR-based approach to solving the linear least squares problems within CP-ALS. In order to achieve comparable computational complexity with the normal equations approach, we exploit the Khatri-Rao structure of $\M{A}$ in the computation of the QR decomposition as well as $\M{Q}^\top\M{B}$. In particular, we show that the QR decomposition of a Khatri-Rao product of matrices can be computed efficiently from the QR decompositions of the individual factor matrices. 
Using the structure of the orthonormal component $\M{Q}$, the computation of $\M{Q}^\top\M{B}$ involves multiple tensor-times-matrix (Multi-TTM) products. We prove in \cref{ssec:costs} that when the rank is small relative to the tensor dimensions, the Multi-TTM is the dominant cost, and it has the same leading-order complexity as MTTKRP for dense tensors. Multi-TTM is also a well-optimized tensor computation, as it is important for the computation of Tucker decompositions \cite{BKK20,CC+17,KU16,smith-tucker}. When the rank is comparable to or much larger than the tensor dimensions, the QR-based approach can require significantly more computation time than standard CP-ALS using the normal equations. Both the QR decomposition and application of the orthonormal component have costs that are lower order only when the rank is small. In this case, the numerical stability provided by QR comes at the expense of performance. We demonstrate the performance and accuracy of our methods using several example input tensors in \cref{sec:examples}. Our MATLAB implementation of the algorithms uses the Tensor Toolbox \cite{tensor-toolbox}, and we compare against the CP-ALS algorithm implemented in that library. In \cref{ssec:perf}, we validate the theoretical complexity analysis and show that there is no increase in per-iteration costs when the rank is small, and we demonstrate with a time breakdown which of the computations become bottlenecks as the rank grows larger relative to the tensor dimensions. To illustrate the differences in accuracy, we present two sets of examples in \cref{ssec:collinear,ssec:sine} that lead to ill-conditioned subproblems and show that the instability of the normal equations can lead to degradation of the approximation error, of the recovery of the low-rank signal when the ground truth is known, and, in some cases, of the convergence of the overall algorithm. We conclude in \cref{sec:conclusion} that using the QR-based approaches to solve the CP-ALS subproblems increases the robustness of the overall algorithm without sacrificing performance in the typical case of small ranks. However, due to the complexity of the algorithm and the extra computation that becomes significant for large ranks, we envision a CP-ALS solver that uses the fast-and-inaccurate normal equations approach by default and falls back on the accurate-but-possibly-slow SVD approach when necessary. For problems that do not involve ill-conditioning, which is the case for many tensors representing noisy data, the normal equations are sufficient for obtaining accurate solutions. However, when ill-conditioning degrades the accuracy of solutions computed from the normal equations, we show that it is possible to obtain the stability of the SVD with feasible computational cost. \section{Background}\label{sec:background} In this section, we first review typical methods for solving linear least squares problems. We also discuss the relevant information regarding tensors and the CP decomposition, focusing on the alternating least squares (CP-ALS) algorithm. We then briefly describe an optimization approach for CP using Gauss-Newton. \subsection{Linear Least Squares Methods} A common approach to solving linear least squares problems is by solving the associated normal equations. When applied to ill-conditioned problems, however, using the normal equations results in numerical instability. More numerically stable methods to solve least squares problems include using the QR decomposition or the SVD. 
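To fix ideas, the following minimal \textsf{MATLAB} sketch contrasts the three solution approaches on a generic overdetermined problem $\min_{\M{X}} \| \M{AX}-\M{B}\|_F$ (the form used in \cref{sec:intro}) with randomly generated data; the variable names and sizes are purely illustrative.
\begin{verbatim}
m = 1000; k = 10; p = 50;
A = randn(m,k);  B = randn(m,p);

% 1) Normal equations: fast, but squares the condition number of A
X_ne = (A'*A) \ (A'*B);

% 2) QR-based solve: more numerically stable
[Q,R] = qr(A,0);                 % compact/thin QR
X_qr = R \ (Q'*B);

% 3) SVD-based solve: also handles numerical rank deficiency
[U,S,V] = svd(A,'econ');
X_svd = V * (pinv(S) * (U'*B));
\end{verbatim}
The transposed form of this problem that arises in the CP-ALS subproblems is described next.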
Consider a least squares problem of the form \begin{equation*} \min_{\M{X}} \| \M{B} - \M{X} \M{A}^\top \|_F. \end{equation*} Note that the coefficient matrix appears to the right of the variable matrix rather than the left (as appears in \cref{sec:intro}) in order to match the form of the CP-ALS subproblems described below. The normal equations for this problem are $\M{X} \M{A}^\top\M{A} = \M{B}\M{A}$, which is equivalent to $\M{A}^\top\M{A} \M{X}^\top = \M{A}^\top\M{B}^\top$. To solve this least squares problem using QR, we first compute the compact/thin QR factorization of $\M{A} = \M{QR}$, so that $\M{Q}$ has the same dimensions as $\M{A}$ and $\M{R}$ is square. We then apply $\M{Q}$ to matrix $\M{B}$ on the right, and use the result to solve the triangular system $\M{XR}^\top = \M{B} \M{Q}$ for $\M{X}$. When the coefficient matrix $\M{A}$ is tall and skinny, the QR approach can be cheaply extended to use the SVD. Given the QR factorization of $\M{A}$, we compute the SVD of $\M{R} = \M{U \Sigma V}^\top$. If we apply $\M{U}$ to $\M{B}\M{Q}$ on the right, we can then solve the system $\M{Y}\M{\Sigma} = \M{B}\M{Q}\M{U}$ for $\M{Y}$ and compute $\M{X} = \M{Y} \M{V}^\top$. If $\M{A}$ is numerically low rank, we can solve the system using the pseudoinverse of $\M{\Sigma}$ to find the minimum-norm solution to the original problem. For more details on methods to solve linear least squares problems, see \cite{Demmel97,GVL13,trefethen-book}. \subsection{Tensor Notation and Preliminaries} Throughout this paper, we follow the notation from \cite{intro-paper}. A scalar is denoted by lowercase letters, e.g. $a$, while vectors are denoted by boldface lowercase letters, e.g. $\V{a}$. Matrices are denoted by boldface uppercase letters, e.g. $\M{A}$, and tensors are denoted by boldface uppercase calligraphic letters, e.g. $\T{X}$. We use the \textsf{MATLAB} notation $\M{A}(i,:)$ to refer to the $i^{\text{th}}$ row of $\M{A}$ and $\M{A}(:,j)$ to refer to the $j^{\text{th}}$ column of $\M{A}$. \paragraph{Matrix Products} We define three matrix products that will appear in our algorithms. First, the Kronecker product of two matrices $\M{A} \in \mb{R}^{I \times J}$ and $\M{B} \in \mb{R}^{K \times M}$ is the matrix $\M{A} \Kron \M{B} \in \mb{R}^{(IK) \times (JM)}$ with entries $[\M{A} \Kron \M{B}](K(i-1)+k, M(j-1) + \ell) = \M{A}(i,j)\M{B}(k,\ell)$. The Khatri-Rao product of matrices $\M{A} \in \mb{R}^{I \times K}$ and $\M{B} \in \mb{R}^{J \times K}$ is the matrix $\M{A} \Khat \M{B} \in \mb{R}^{(IJ) \times K}$ with columns $[\M{A} \Khat \M{B}](:,k) = \M{A}(:,k) \Kron \M{B}(:,k)$ for every $k = 1,\dots, K$. We present a property for multiplying a Kronecker product by a Khatri-Rao product, which will be useful in a later section. Let $\M{A} \in \mb{R}^{K \times J}, \M{B} \in \mb{R}^{I \times J}, \M{C} \in \mb{R}^{I \times K}$, and $\M{D} \in \mb{R}^{J \times I}$ be four matrices. Then, \begin{equation}\label{eq:kr-id} (\M{C} \Kron \M{D}) (\M{A} \Khat \M{B}) = (\M{C} \M{A}) \Khat (\M{D} \M{B}). \end{equation} Finally, the Hadamard product of two matrices $\M{A} \in \mb{R}^{I \times J}$ and $\M{B} \in \mb{R}^{I \times J}$ is the elementwise product denoted as $\M{A} * \M{B} \in \mb{R}^{I \times J}$, with entries $[\M{A} * \M{B}](i,j) = \M{A}(i,j)\M{B}(i,j)$. \paragraph{Tensor Components} As generalizations of matrix rows and columns, tensors have mode-$j$ fibers, which are vectors formed by fixing all but one index of a tensor. 
Similarly, slices of a tensor are two-dimensional sections, formed by fixing all but two indices of a tensor. \paragraph{Tensor Operations} There are two major tensor operations we will frequently use throughout this paper, namely matricization and multiplying a tensor with a matrix. The $n$-mode matricization, or unfolding, of a tensor $\T{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is denoted $\Mz{X}{n} \in \mb{R}^{I_n \times \left(\prod_{j \neq n} I_j \right)}$; the matrix $\Mz{X}{n}$ is formed so that its columns are the mode-$n$ fibers of $\T{X}$. Also useful is the tensor-times-matrix (TTM) multiplication, or the $n$-mode product of a tensor and matrix. Let $\T{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and $\M{U} \in \mathbb{R}^{J \times I_n}$. The resulting tensor is $\T{Y} = \T{X} \times_n \M{U} \in \mb{R}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N}$, and it has entries $ \T{Y}(i_1,\dots,i_{n-1},j,i_{n+1},\dots,i_N) = \sum_{i_n = 1}^{I_n} \T{X}(i_1, \dots, i_N) \M{U}(j, i_n)$. The TTM can also be computed via the mode-$n$ matricization $\Mz{Y}{n} = \M{U} \Mz{X}{n}$. Multiplying an $N$-mode tensor by multiple matrices in distinct modes is known as Multi-TTM. The computation can be performed using a sequence of individual mode TTMs, and they can be done in any order. In particular, we will be interested in the case where we multiply an $N$-mode tensor by matrices $\M{U}_j$ in every mode except $n$, denoted $\T{Y} = \T{X} \times_1 \M{U}_1 \dots \times_{n-1} \M{U}_{n-1} \times_{n+1} \M{U}_{n+1} \dots \times_N \M{U}_N$. This is expressed in the mode-$n$ matricization as \begin{equation} \label{eq:multittm} \Mz{Y}{n} = \Mz{X}{n}(\M{U}_N \otimes \dots \otimes \M{U}_{n+1} \otimes \M{U}_{n-1} \otimes \dots \otimes \M{U}_1)^\top. \end{equation} \subsection{CP-ALS Algorithm} We now detail both the CP decomposition as well as the alternating least squares approach (CP-ALS), one of the most popular algorithms used to compute the CP decomposition. \paragraph{CP Decomposition} The aim of the CP decomposition is to represent a tensor as a sum of rank-one components. For an $N$-mode tensor $\T{X} \in \mb{R}^{I_1 \times \dots \times I_N}$, the rank-$R$ CP decomposition of $\T{X}$ is the approximation \begin{equation}\label{eq:cp} \hat{\T{X}} = \sum_{r=1}^R \lambda_r \, \V{a}_r^{(1)} \circ \V{a}_r^{(2)} \circ \dots \circ \V{a}_r^{(N)}, \end{equation} where $\V{a}_r^{(n)} \in \mb{R}^{I_n}$ are unit vectors with weight vector $\V{\lambda} \in \mb{R}^{R}$, and $\circ$ denotes the outer product. The collection of all $\V{a}_r^{(n)}$ vectors for each mode is called a factor matrix, i.e., $\M{A}_{n} = \bmat{ \V{a}_1^{(n)} & \V{a}_2^{(n)} & \dots & \V{a}_R^{(n)}} \in \mb{R}^{I_n \times R}.$ A visualization of the three-dimensional version of this representation is in \cref{fig:3d-cp-decomp}. 
\begin{figure}[H] \centering \begin{tikzpicture}[scale=0.6] \draw node (X) at (1.5, 2.5) {$\T{X}$}; \draw[thick] (0,1) -- (3,1); \draw[thick] (0,1) -- (0,4); \draw[thick] (0,4) -- (3,4); \draw[thick] (3,4) -- (3,1); \draw[thick] (0,4) -- (1,5); \draw[thick] (1,5) -- (4,5); \draw[thick] (3,4) -- (4,5); \draw[thick] (4,5) -- (4,2); \draw[thick] (4,2) -- (3,1); \draw node at (5.2, 3) {$\approx$}; \draw node at (6.3, 3.5) {$\lambda_1$}; \draw node at (7.3, 0.5) {$\V{a}_1$}; \draw[thick] (7,1) -- (7.5,1); \draw[thick] (7,1) -- (7,4); \draw[thick] (7,4) -- (7.5,4); \draw[thick] (7.5,1) -- (7.5,4); \draw node at (9.2, 3) {$\V{b}_1$}; \draw[thick] (7.8,3.5) -- (7.8,4); \draw[thick] (7.8,3.5) -- (10.8,3.5); \draw[thick] (7.8,4) -- (10.8,4); \draw[thick] (10.8,4) -- (10.8,3.5); \draw node at (8.5, 4.5) {$\V{c}_1$}; \draw[thick] (7,4.2) -- (7.5,4.2); \draw[thick] (7,4.2) -- (8,5.2); \draw[thick] (8,5.2) -- (8.5,5.2); \draw[thick] (7.5,4.2) -- (8.5,5.2); \draw node at (11.7, 3) {$+$}; \draw node at (12.3, 3.5) {$\lambda_2$}; \draw node at (13.3, 0.5) {$\V{a}_2$}; \draw[thick] (13,1) -- (13.5,1); \draw[thick] (13,1) -- (13,4); \draw[thick] (13,4) -- (13.5,4); \draw[thick] (13.5,1) -- (13.5,4); \draw node at (15.2, 3) {$\V{b}_2$}; \draw[thick] (13.8,3.5) -- (13.8,4); \draw[thick] (13.8,3.5) -- (16.8,3.5); \draw[thick] (13.8,4) -- (16.8,4); \draw[thick] (16.8,4) -- (16.8,3.5); \draw node at (14.5, 4.5) {$\V{c}_2$}; \draw[thick] (13,4.2) -- (13.5,4.2); \draw[thick] (13,4.2) -- (14,5.2); \draw[thick] (14,5.2) -- (14.5,5.2); \draw[thick] (13.5,4.2) -- (14.5,5.2); \draw node at (18.7, 3) {$+ \ \cdots \ +$}; \draw node at (20.3, 3.5) {$\lambda_R$}; \draw node at (21.3, 0.5) {$\V{a}_R$}; \draw[thick] (21,1) -- (21.5,1); \draw[thick] (21,1) -- (21,4); \draw[thick] (21,4) -- (21.5,4); \draw[thick] (21.5,1) -- (21.5,4); \draw node at (23.2, 3) {$\V{b}_R$}; \draw[thick] (21.8,3.5) -- (21.8,4); \draw[thick] (21.8,3.5) -- (24.8,3.5); \draw[thick] (21.8,4) -- (24.8,4); \draw[thick] (24.8,4) -- (24.8,3.5); \draw node at (22.5, 4.5) {$\V{c}_R$}; \draw[thick] (21,4.2) -- (21.5,4.2); \draw[thick] (21,4.2) -- (22,5.2); \draw[thick] (22,5.2) -- (22.5,5.2); \draw[thick] (21.5,4.2) -- (22.5,5.2); \end{tikzpicture} \caption{CP decomposition of rank $R$ for a three-dimensional tensor $\T{X}$ \label{fig:3d-cp-decomp}} \end{figure} Using the notation $\M{\hat{A}}_n = \M{A}_n \cdot \text{diag}(\M{\lambda})$, we can express the mode-$n$ matricization of $\hat{\T{X}}$ as \begin{equation}\label{eq:cp_mat} \Mz{\hat{X}}{n} = \M{\hat{A}}_{n}(\M{A}_{N} \odot \dots \odot \M{A}_{n+1} \odot \M{A}_{n-1} \odot \dots \odot \M{A}_{1})^\top = \M{A}_{n}\M{Z}_n^\top, \end{equation} letting $\M{Z}_{n} = \M{A}_{N} \odot \dots \odot \M{A}_{n+1} \odot \M{A}_{n-1} \odot \dots \odot \M{A}_{1}$. \paragraph{CP-ALS Algorithm} In order to compute the CP-decomposition, the CP-ALS algorithm solves a least squares problem for the matricized tensors $\M{X}_{(n)}$ and $\hat{\M{X}}_{(n)}$ along each mode. For mode $n$, we fix every factor matrix except $\M{\hat{A}}_{n}$ and then solve for $\Mn{\hat{A}}{n}$. This process is repeated by alternating between modes until some termination criteria is met. We solve the linear least squares problem \begin{equation}\label{eq:ls_n} \min_{\M{\hat{A}}_{n}} \| \Mz{X}{n} - \M{\hat{A}}_{n}\M{Z}_{n}^\top \|_F, \end{equation} using the representation in \cref{eq:cp_mat}. 
The linear least squares problem from \cref{eq:ls_n} is typically solved using the normal equations, i.e., \begin{equation*} \Mz{X}{n} \M{Z}_{n} = \M{\hat{A}}_{n} ( \M{Z}_{n}^\top \M{Z}_{n}). \end{equation*} The coefficient matrix $\M{Z}_{n}^\top \M{Z}_{n}$ is computed efficiently as $\M{A}_1^\top \M{A}_1 \Hada \dots \Hada \M{A}_{n-1}^\top \M{A}_{n-1} \Hada \M{A}_{n+1}^\top \M{A}_{n+1} \Hada \dots \Hada \M{A}_{N}^\top \M{A}_{N}$. We obtain the desired factor matrix $\M{A}_n$ by normalizing the columns of $\M{\hat{A}}_n$ and updating the weight vector $\M{\lambda}$. This approach is detailed in \cref{alg:cp-als}. \begin{algorithm}[!ht] \caption{CP-ALS} \label{alg:cp-als} \begin{algorithmic}[1]\footnotesize \Function{$[\bm{\lambda},\set{\M{A}_{n}}]=$ CP-ALS}{$\T{X},R$}\Comment{$\T{X}\in\mathbb{R}^{I_1\times \cdots \times I_N}$} \State \label{line:cpals:init}Initialize factor matrices $\M{A}_{1}, \dots, \M{A}_{N}$ \State \label{line:gram}Compute Gram matrices $\Mn{G}{1} = \M{A}_1^\top \Mn{A}{1}, \ldots, \Mn{G}{N} = \M{A}_N^\top \Mn{A}{N}$ \Repeat \For{$n=1,\dots, N$} \State $\M{S}_n \gets \Mn{G}{N} \Hada \cdots \Hada \Mn{G}{n+1} \Hada \Mn{G}{n-1} \Hada \cdots \Hada \Mn{G}{1}$\label{line:cpals:Gram} \State \label{line:cpals:KR}$\M{Z}_{n} \gets \M{A}_{N}\odot \cdots \odot \M{A}_{n+1}\odot \M{A}_{n-1} \odot \cdots \odot \M{A}_{1}$ \State \label{line:mttkrp}$\M{M}_n \gets \M{X}_{(n)}\M{Z}_{n}$\Comment{MTTKRP} \State \label{line:cpals:solve}Solve $\M{\hat{A}}_{n}\M{S}_n = \M{M}_n$ for $\M{\hat{A}}_{n}$ via Cholesky (CP-ALS) or SVD (CP-ALS-PINV) \Comment{Normal equations} \State Normalize columns of $\M{\hat{A}}_{n}$ to obtain $\M{A}_n$ and $\V{\lambda}$ \State Recompute Gram matrix $\Mn{G}{n} = \M{A}_{n}^\top \Mn{A}{n}$ for updated factor matrix $\Mn{A}{n}$ \EndFor \Until termination criteria met \State \textbf{return} $\bm{\lambda}$, factor matrices $\set{\M{A}_{n}}$ \EndFunction \end{algorithmic} \end{algorithm} Note that in \cref{line:cpals:solve}, the standard approach (implemented in the Tensor Toolbox \cite{tensor-toolbox}) is to solve $\M{A}_{n}\M{S} = \M{M}$ for $\M{A}_{n}$ using a Cholesky decomposition, which is implemented in MATLAB using the backslash operator. An alternative method for solving the linear system is to use the SVD, which is implemented in MATLAB using the \texttt{pinv} function. We call the algorithm obtained by taking this alternate approach CP-ALS-PINV. \subsection{Gauss-Newton Optimization Approach}\label{ssec:gn} An alternate approach to alternating least squares is to minimize the CP model ``all at once'' using optimization techniques. Instead of the linear least squares problem from \cref{eq:ls_n}, this approach solves the nonlinear least squares problem $\min \| \T{X} - \hat{\T{X}} \|_F^2$ over the factor matrices $\M{A}_j$ with $j = 1,\dots, N$, where $\hat{\T{X}}$ is defined in \cref{eq:cp}. Gauss-Newton attempts to minimize this nonlinear residual by using a linear Taylor series approximation at each iteration and minimizing that function using standard linear least squares methods. Typically, these linear least squares problems are solved via the normal equations as follows. For residual $\V{r} = \text{vec}{(\T{X} - \hat{\T{X}})}$ and $\M{J}$ the Jacobian of $\V{r}$, the normal equations to be solved for the update $\V{p}$ to the vectorized factor matrices are $\M{J}^\top \M{J}\, \V{p} = -\M{J}^\top \V{r}$. While CP-ALS is typically fast and easy to implement, the Gauss-Newton approach is beneficial as it can converge quadratically if the residual is small. 
We use the implementation of Gauss-Newton in Tensorlab \cite{tensorlab} in the experiments in \cref{sec:examples}. For more details on this approach, see \cite{singhGN,sorberGN,vervlietGN}. \section{Proposed Methods}\label{sec:cp-als-qr} \subsection{CP-ALS-QR Algorithms}\label{ssec:qr-solve} In our proposed CP-ALS approach, we incorporate the more stable QR decomposition and SVD and avoid using the normal equations. Suppose $\T{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is an $N$-dimensional tensor and we wish to approximate a solution $\T{\hat{X}} = [\![ \V{\lambda}; \Mn{A}{1}, \ldots, \Mn{A}{N} ]\!]$ of rank $R$. Recall our linear least-squares problem from \cref{eq:ls_n}. The key to the efficiency of our algorithm is to form the QR decomposition of $\M{Z}_{n}$ using the Khatri-Rao structure as follows. As a first step, we compute a compact QR factorization of each individual factor matrix, so that each factor matrix $\M{A}_j$ has the compact QR factorization $\M{A}_j = \M{Q}_j \M{R}_j$. Then \begin{equation*} \begin{aligned} \M{Z}_n &= \M{A}_N \odot \dots \odot \M{A}_{n+1} \odot \M{A}_{n-1} \odot \dots \odot \M{A}_1 \\ &= \M{Q}_N \M{R}_N \odot \dots \odot \M{Q}_{n+1}\M{R}_{n+1} \odot \M{Q}_{n-1} \M{R}_{n-1} \odot \dots \odot \M{Q}_1 \M{R}_1 \\ &= (\M{Q}_N \otimes \dots \otimes \M{Q}_{n+1} \otimes \M{Q}_{n-1} \otimes \dots \otimes \M{Q}_1)\underbrace{(\M{R}_N \odot \dots \odot \M{R}_{n+1} \odot \M{R}_{n-1} \odot \dots \odot \M{R}_1)}_{\M{V}_n}, \end{aligned} \end{equation*} where the last equality comes from \cref{eq:kr-id}. We then compute the QR factorization of the Khatri-Rao product $\M{V}_n=\M{R}_N \odot \dots \odot \M{R}_{n+1} \odot \M{R}_{n-1} \odot \dots \odot \M{R}_1 = \M{Q}_0 \M{R}_0$. This allows us to express the QR of $\M{Z}_n$ as \begin{equation}\label{eq:Z_qr} \begin{aligned} \M{Z}_n &= (\M{Q}_N \otimes \dots \otimes \M{Q}_{n+1} \otimes \M{Q}_{n-1} \otimes \dots \otimes \M{Q}_1)(\M{R}_N \odot \dots \odot \M{R}_{n+1} \odot \M{R}_{n-1} \odot \dots \odot \M{R}_1) \\ &= \underbrace{(\M{Q}_N \otimes \dots \otimes \M{Q}_{n+1} \otimes \M{Q}_{n-1} \otimes \dots \otimes \M{Q}_1) \M{Q}_0}_{\M{Q}} \underbrace{\M{R}_0}_{\M{R}}. \end{aligned} \end{equation} Note that $\M{Q}$ has orthonormal columns, and $\M{R}$ is upper triangular. The Khatri-Rao product of triangular matrices $\M{V}_n$ has sparse structure. One column is dense, but the rest have many zeros. As discussed in \cref{ssec:sparsity}, this sparse structure can be exploited while implementing the QR of $\M{V}_n$, but our current implementation treats it as dense. Once the QR is computed, we apply the representation in \cref{eq:Z_qr} to our least squares problem to obtain \begin{equation}\label{eq:ls-qr-step} \min_{\Mn{\hat{A}}{n}} \| \Mz{X}{n} - \Mn{\hat{A}}{n} \M{R}^\top \M{Q}_0^\top (\M{Q}_{N} \otimes \dots \otimes \M{Q}_{n+1} \otimes \M{Q}_{n-1} \otimes \dots \otimes \M{Q}_1 )^\top \|_F, \end{equation} with $\M{R} \in \mb{R}^{R \times R}$, $\M{Q}_0 \in \mb{R}^{R^{N-1} \times R}$, and each $\M{Q}_{i} \in \mb{R}^{I_i \times R}$. We are solving for the unnormalized factor matrix $\Mn{\hat{A}}{n} = \Mn{A}{n} \cdot \text{diag}(\M{\lambda})$. Next, we apply the product $\M{Q}_N \otimes \dots \otimes \M{Q}_{n+1} \otimes \M{Q}_{n-1} \otimes \dots \otimes \M{Q}_1$ to $\Mz{X}{n}$ on the right via the Multi-TTM $\T{Y} = \T{X} \times_1 \M{Q}_1^\top \dots \times_{n-1} \M{Q}_{n-1}^\top \times_{n+1} \M{Q}_{n+1}^\top \dots \times_N \M{Q}_N^\top$, following \cref{eq:multittm}. 
Our least-squares problem then becomes \begin{equation}\label{eq:ls-qr-ttm-step} \min_{\Mn{\hat{A}}{n}} \| \Mz{Y}{n} - \Mn{\hat{A}}{n} \M{R}^\top \M{Q}_0^\top \|_F. \end{equation} After computing the Multi-TTM, we form $\M{W}_n = \Mz{Y}{n} \M{Q}_0$ to obtain the smaller least-squares problem \begin{equation}\label{eq:ls-qr-q0-step} \min_{\Mn{\hat{A}}{n}} \| \M{W}_n - \Mn{\hat{A}}{n} \M{R}^\top \|_F, \end{equation} and, finally, we use substitution with $\M{R}^\top$ to compute the factor matrix $\Mn{\hat{A}}{n}$. We call this algorithm CP-ALS-QR. Another more stable way of solving \cref{eq:ls-qr-q0-step}, particularly when Khatri-Rao product $\M{Z}_n$ is rank deficient, is to use the singular value decomposition (SVD) of $\M{R}$ in addition to the QR factorization. Let $\M{R} = \M{U\Sigma V}^\top$ be the SVD. We then obtain \begin{equation*} \min_{\Mn{\hat{A}}{n}} \| \M{W} - \Mn{\hat{A}}{n} \M{V} \M{\Sigma} \M{U}^\top \|_F. \end{equation*} Now, since $\M{U}$ and $\M{V}$ are orthogonal and $\M{\Sigma}$ is a diagonal matrix, our solution to the least-squares problem is $\Mn{\hat{A}}{n} = \M{W}_n \M{U} \M{\Sigma}^{\dagger} \M{V}^\top$. Note that because we are utilizing the pseudoinverse on $\M{\Sigma}$, we can truncate small singular values and thereby manage a rank-deficient least-squares problem more stably. Using the SVD in this way gives us our second algorithm CP-ALS-QR-SVD. Both CP-ALS-QR and CP-ALS-QR-SVD are summarized in \cref{alg:cp-als-qr}, with a choice in \cref{line:qrsolve} to distinguish between the two methods. The two algorithms are implemented in MATLAB using the Tensor Toolbox \cite{tensor-toolbox}. The implementation is available here: \href{https://github.com/rlminste/CP-ALS-QR}{\textbf{https://github.com/rlminste/CP-ALS-QR}}. \begin{algorithm} \caption{CP-ALS-QR} \label{alg:cp-als-qr} \begin{algorithmic}[1]\footnotesize \Function{$[\bm{\lambda},\set{\Mn{A}{n}}]=$ CP-ALS-QR}{$\T{X},R$}\Comment{$\T{X}\in\mathbb{R}^{I_1\times \cdots \times I_N}$} \State Initialize factor matrices $\Mn{\hat{A}}{2}, \dots, \Mn{\hat{A}}{N}$ \State \label{line:qr_factor}Compute compact QR-decomposition $\Mn{Q}{2} \Mn{R}{2}, \ldots, \Mn{Q}{N} \Mn{R}{N}$ of factor matrices \Repeat \For{$n=1,\dots, N$} \State \label{line:qr-khatrirao}$\M{V}_n \gets \Mn{R}{N} \Khat \cdots \Khat \Mn{R}{n+1} \Khat \Mn{R}{n-1} \Khat \cdots \Khat \Mn{R}{1}$ \State \label{line:q0}Compute compact QR-decomposition $\M{V}_n = \M{Q}_0 \M{R}$ \Comment{Last step of QR decomposition} \State \label{line:ttm}$\T{Y} \gets \T{X} \times_1 \M{Q}_1^\top \times_2 \cdots \times_{n-1} \M{Q}_{n-1}^\top \times_{n+1} \M{Q}_{n+1}^\top \times_{n+2} \cdots \times_N \M{Q}_{N}^\top$\Comment{Multi-TTM} \State \label{line:apply_q0}$\M{W}_n \gets \Mz{Y}{n} \M{Q}_0$ \State \label{line:qrsolve}Solve $\Mn{\hat{A}}{n} \M{R}^\top = \M{W}_n$ for $\Mn{\hat{A}}{n}$ by substitution (CP-ALS-QR) or SVD (CP-ALS-QR-SVD) \State Normalize columns of $\Mn{\hat{A}}{n}$ to obtain $\M{A}_n$ and $\V{\lambda}$ \State \label{line:factorqr}Recompute QR-decomposition for updated factor matrix $\Mn{A}{n}=\Mn{Q}{n} \Mn{R}{n}$ \EndFor \Until termination criteria met \State \textbf{return} $\bm{\lambda}$, factor matrices $\set{\Mn{A}{n}}$ \EndFunction \end{algorithmic} \end{algorithm} \subsection{CP-ALS-QR Cost Analysis}\label{ssec:costs} We now analyze the computational complexity of each iteration of CP-ALS-QR and CP-ALS-QR-SVD as presented in \cref{alg:cp-als-qr}. Recall that $\T{X}$ has dimensions $I_1\times \cdots \times I_N$ and the CP approximation has rank $R$. 
To simplify notation, we assume in the analysis that $I_1 \geq \cdots \geq I_N$. The cost of forming the Khatri-Rao product $\M{V}_n$ of $N{-}1$ upper-triangular factors (\cref{line:qr-khatrirao}) is $R^N+\mathcal{O}(R^{N-1})$, assuming the individual Khatri-Rao products are formed pairwise and no sparsity is exploited. Here $\M{V}_n$ has dimensions $R^{N-1} \times R$, and it has $R^N/N+\mathcal{O}(R^{N-1})$ nonzeros. We treat $\M{V}_n$ as a dense matrix here but discuss the possibility of exploiting sparsity in \cref{ssec:sparsity}. Computing the QR decomposition of $\M{V}_n$ to obtain $\M{Q}_0$ and $\M{R}$ in \cref{line:q0} costs \begin{equation} \label{eq:QRcost} 4R^{N+1}+\mathcal{O}(R^3), \end{equation} assuming $\M{Q}_0$ is formed explicitly and again no sparsity is exploited. The Multi-TTM is performed in \cref{line:ttm} and involves the input tensor. We compute the resulting tensor $\T{Y}$, which has dimensions $R\times {\cdots} \times I_n\times {\cdots} \times R$, by performing single TTMs in sequence; to minimize flops we perform the $N{-}1$ TTMs in order of decreasing tensor dimension, which is left to right given our assumption above. The cost of the first TTM is $2I_1\cdots I_NR$, the cost of the second is $2I_2\cdots I_N R^2$, and so on. Thus, we can write the overall cost of the Multi-TTM as \begin{equation} \label{eq:TTMcost} 2I_1\cdots I_N R \left(1 + \frac{R}{I_1} + \frac{R^2}{I_1I_2} + \cdots + \frac{R^{N-2}}{I_1\cdots I_{n-1}I_{n+1}\cdots I_{N-1}} \right). \end{equation} We apply $\M{Q}_0$ to $\Mz{Y}{n}$ via matrix multiplication in \cref{line:apply_q0} with cost $2I_nR^N$, assuming we use an explicit, dense $\M{Q}_0$. Solving the linear system in \cref{line:qrsolve} costs $\mathcal{O}(I_nR^2)$, with an extra $\mathcal{O}(R^3)$ cost if the SVD of $\M{R}$ is computed for the CP-ALS-QR-SVD method. (Note that using the more stable SVD approach to solve the linear system has no significant impact on the overall computational complexity.) Finally, computing the QR decomposition of the updated $n$th factor matrix in \cref{line:factorqr} and forming the orthonormal factor $\M{Q}_n$ explicitly costs $4I_nR^2 + \mathcal{O}(R^3)$. If the rank $R$ is significantly smaller than the tensor dimensions, then the cost is dominated by the first TTM, which has cost $2I_1\cdots I_NR$ from \cref{eq:TTMcost}. In this case, the remaining TTMs are each at least a factor of $R/I_1$ times cheaper, and the computations involving $\M{Q}_0$ are cheaper than any of the TTMs, because those computational costs are independent of any tensor dimensions. The dominant cost of CP-ALS is the MTTKRP in \cref{line:mttkrp} of \cref{alg:cp-als}, which also has cost $2I_1\cdots I_NR$. Thus, in the case of small $R$, the two algorithms have identical leading-order computational complexity per iteration. If the rank $R$ is larger than all tensor dimensions, then the cost of the QR of $\M{V}_n$ given in \cref{eq:QRcost} will be the dominant cost. If the rank is comparable to the tensor dimensions, then the computation and application of $\M{Q}_0$ in \cref{line:apply_q0} and the subsequent TTMs after the first may also contribute to the running time in a significant way. \subsection{Implementation Details and Extensions}\label{sec:implement} \subsubsection{Efficient Computation of Approximation Error}\label{ssec:error} For each of the CP algorithms (\cref{alg:cp-als,alg:cp-als-qr}), we consider computing the approximation error in two ways. 
The more accurate but less efficient approach is to form the explicit representation of $\T{\hat{X}} = [\![ \V{\lambda}; \Mn{A}{1}, \ldots, \Mn{A}{N} ]\!]$ and compute the residual norm $\|\T{X} - \T{\hat{X}}\|$ directly. The less accurate but more efficient approach exploits the identity $\|\T{X} - \T{\hat{X}}\|^2=\|\T{X}\|^2-2 \langle \T{X},\T{\hat{X}} \rangle + \|\T{\hat{X}}\|^2$ and computes $\langle \T{X},\T{\hat{X}} \rangle$ and $\|\T{\hat{X}}\|$ cheaply by using temporary quantities already computed by the ALS iterations ($\|\T{X}\|$ is pre-computed and does not change over iterations). In the case of CP-ALS (\cref{alg:cp-als}), we have $\Mz{\hat{X}}{N} = \M{\hat A}_{N}\M{Z}_{N}^\top$, for $\Mn{\hat{A}}{N} = \Mn{A}{N} \cdot \text{diag}(\V{\lambda})$ and $\M{Z}_{N} = \M{A}_{N-1} \odot \dots \odot \M{A}_{1}$. Then $$\langle \T{X}, \T{\hat{X}} \rangle = \langle \Mz{X}{N}, \M{\hat A}_{N}\M{Z}_{N}^\top \rangle = \langle \Mz{X}{N}\M{Z}_{N}, \M{\hat A}_N \rangle = \langle \M{M}_N, \M{\hat A}_{N} \rangle,$$ where $\M{M}_N$ is the result of the MTTKRP computation in mode $N$, the mode of the last subiteration. Thus, computing the inner product between the data and model tensors requires only $\mathcal{O} (I_NR)$ extra operations. Likewise, we have $$\|\T{\hat{X}}\|^2 = \langle \M{\hat A}_{N}\M{Z}_{N}^\top, \M{\hat A}_N \M{Z}_{N}^\top \rangle = \langle \M{Z}_{N}^\top \M{Z}_{N}, \M{\hat A}_{N}^\top \M{\hat A}_{N}\rangle = \langle \M{S}_N, \text{diag}(\V{\lambda})\M{G}_{N}\text{diag}(\V{\lambda}) \rangle,$$ where $\M{G}_{N} = \M{A}_{N}^\top \M{A}_{N}$ is the Gram matrix of the (normalized) $N$th factor and $\M{S}_N$ is the Hadamard product of the Gram matrices of the first $N-1$ modes. Computing the norm of $\T{\hat{X}}$ thus requires only $\mathcal{O} (R^2)$ extra operations. This efficient error computation is well known \cite{tensor-toolbox,EH+21,LK+17,TensorBox,SK16}. Note that this approach is slightly less accurate than a direct computation: the identity applies to the square of the residual norm, so taking the square root of the difference of these quantities limits the accuracy of the relative error to the square root of machine precision. We complete the efficient error computation for CP-ALS-QR (\cref{alg:cp-als-qr}) with similar cost. In this case, we have $\Mz{\hat{X}}{N} = \M{\hat A}_{N} \M{Z}_{N}^\top$ with $\M{Z}_{N} = (\M{Q}_{N-1} \Kron \cdots \Kron \M{Q}_{1}) \M{Q}_0 \M{R}$. Then $$\langle \T{X}, \T{\hat{X}} \rangle = \langle \Mz{X}{N}, \M{\hat A}_{N}\M{Z}_{N}^\top \rangle = \langle \Mz{X}{N}(\M{Q}_{N-1} \Kron \cdots \Kron \M{Q}_{1}) \M{Q}_0, \M{\hat A}_{N}\M{R}^{\top} \rangle = \langle \M{W}_N, \M{\hat A}_{N}\M{R}^{\top} \rangle,$$ where in \cref{alg:cp-als-qr}, $\M{W}_N$ is the result of the Multi-TTM (except in mode $N$) and the multiplication with $\M{Q}_0$, and thus has dimension $I_N \times R$. Thus, the cost of this computation is dominated by that of computing $\M{\hat A}^{(N)}\M{R}^{\top}$, or $\mathcal{O} (I_NR^2)$. We also have $$\|\T{\hat{X}}\|^2 = \langle \M{\hat A}_{N}\M{Z}_{N}^\top, \M{\hat A}_{N}\M{Z}_{N}^\top \rangle = \langle \M{Z}_{N}^\top \M{Z}_{N}, \M{\hat A}_{N}^\top \M{\hat A}_{N}\rangle = \langle \M{R}^\top \M{R}, \text{diag}(\V{\lambda})\M{R}_{N}^\top \M{R}_{N}\text{diag}(\V{\lambda}) \rangle,$$ where $\M{R}_{N}$ is the triangular factor in the QR decomposition of $\M{A}_N$. The cost of this extra computation is $\mathcal{O} (R^3)$. 
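As a concrete illustration of the identities above for CP-ALS (\cref{alg:cp-als}), the following \textsf{MATLAB} fragment sketches the cheap error update. It assumes that the quantities $\M{M}_N$, $\M{S}_N$, $\M{G}_N$, $\V{\lambda}$, and $\M{\hat A}_N$ from the final subiteration are available as \texttt{M\_N}, \texttt{S\_N}, \texttt{G\_N}, \texttt{lambda}, and \texttt{Ahat\_N}, and that $\|\T{X}\|$ has been precomputed as \texttt{normX}; these variable names are illustrative only.
\begin{verbatim}
% <X, Xhat> via the mode-N MTTKRP result
innerXXhat = sum(sum(M_N .* Ahat_N));
% ||Xhat||^2 via Gram matrices of the factors
normXhat2 = sum(sum(S_N .* (diag(lambda)*G_N*diag(lambda))));
% relative residual norm (accuracy limited to the square root
% of machine precision, as noted above)
relerr = sqrt(max(normX^2 - 2*innerXXhat + normXhat2, 0)) / normX;
\end{verbatim}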
\subsubsection{Kruskal Tensor Input}\label{ssec:ktensor} The analysis of both CP-ALS-QR and CP-ALS-QR-SVD, as explained in \cref{ssec:costs}, assumes the input tensor is a dense tensor. When the input tensor has special structure, the key operations can be computed more efficiently. We also implemented a version of each algorithm that exploits inputs with Kruskal structure, that is, a tensor stored as factor matrices and corresponding weights, which we use for the input in \cref{ssec:sine}. Exploiting this structure is beneficial as we avoid forming the input tensor or Multi-TTM product tensor $\T{Y}$, because all computations can be performed using the factor matrices instead. Note that the Tensor Toolbox has optimized the MTTKRP (\cref{alg:cp-als}, \cref{line:mttkrp}) and Multi-TTM (\cref{alg:cp-als-qr}, \cref{line:ttm}) computations for a Kruskal tensor input \cite{efficient-matlab}. For example, the complexity of the Multi-TTM product to obtain \cref{eq:ls-qr-ttm-step} in the case of a Kruskal tensor becomes $\mathcal{O} (R^N \sum_{i=2}^N I_i)$. For an $N$-mode Kruskal tensor $\T{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ of rank $R$ and $N$ matrices $\M{V}_j \in \mb{R}^{I_j \times S}$ for $j = 1,\dots, N$, the MTTKRP \begin{equation*} \Mz{X}{n} ( \Mn{V}{N} \Khat \cdots \Khat \Mn{V}{n+1} \Khat \Mn{V}{n-1} \Khat \cdots \Khat \Mn{V}{1}) \end{equation*} has cost $\mathcal{O} (RS \sum_{i=1}^N I_i)$. This is an improvement compared to the cost of the MTTKRP in the dense case, which is on the order of the product of the $N$ dimensions instead of their sum. The Kruskal structure has the added benefit of reducing the cost of computing the product $\M{W}_n = \Mz{Y}{n}\M{Q}_0$ in our QR-based algorithms (see \cref{line:apply_q0} in \cref{alg:cp-als-qr}), as we do not need to explicitly matricize $\T{Y}$. Let us consider the first mode as an example. In this case, $\T{Y}$ is an $N$-dimensional Kruskal tensor with rank $R$ and $\T{Y} = [\![ \lambda; \Mn{B}{1}, \Mn{B}{2}, \ldots ,\Mn{B}{N} ]\!]$, with $\Mn{B}{1} \in \mb{R}^{I_1 \times R}$, $\Mn{B}{i} \in \mb{R}^{R \times R}$ for $i = 2, \ldots, N$, and $\M{Q}_0 \in \mb{R}^{R^{N-1} \times R}$. Then, we can write $\Mz{Y}{1} = \Mn{B}{1} ( \Mn{B}{N} \Khat \cdots \Khat \Mn{B}{2})^\top$, treat $\M{Q}_0$ as a matricized tensor, and compute $\M{W}_1 = \Mn{B}{1} [( \Mn{B}{N} \Khat \cdots \Khat \Mn{B}{2})^\top \M{Q}_0 ]$ using an MTTKRP followed by a small matrix product. This gives us a total cost of $2(I_1 R^2 + R^{N+1})$. \subsubsection{Other Computation-Reducing Optimizations} \label{ssec:sparsity} As noted in \cref{ssec:costs}, the Khatri-Rao product of triangular matrices $\M{V}_n$ computed in \cref{alg:cp-als-qr} is a sparse matrix with density proportional to $1/N$, where $N$ is the number of modes. This is because the $i$th column of $\M{V}_n$ is a Kronecker product of $N{-}1$ vectors each with $i$ nonzeros and therefore has $i^{N-1}$ nonzeros. This sparsity could be exploited in the computation of $\M{V}_n$ (\cref{line:qr-khatrirao}), computing its QR decomposition (\cref{line:q0}), and applying its orthonormal factor $\M{Q}_0$ (\cref{line:apply_q0}). In particular, when using a sparse QR decomposition algorithm, there will be no fill-in, as every row is dense to the right of its first nonzero, and the number of flops required is a factor of $\mathcal{O}(1/N^2)$ times that of the dense QR algorithm. The computational savings in computing $\M{V}_n$ and applying $\M{Q}_0$ are $\mathcal{O}(1/N)$. 
However, the use of (general) sparse computational kernels comes at a price of performance, so for small $N$ we do not expect much reduction in time and did not exploit sparsity in our implementation. An important optimization for CP-ALS (\cref{alg:cp-als}) is to avoid recomputation across MTTKRPs of the different modes. For example, the Khatri-Rao products $\M{Z}_n$ and $\M{Z}_{n+1}$ share $N-2$ different factors, so the computations of $\M{M}_n$ and $\M{M}_{n+1}$ have significant overlap. The general approach to avoiding this recomputation is known as dimension trees, as a tree of temporary matrices can be computed, stored, and re-used for the MTTKRPs across modes \cite{EH+21,PTC13a}. Using dimension trees reduces the outer-iteration CP-ALS cost from $N$ MTTKRPs to the cost of 2 MTTKRPs. Similar savings can be obtained by applying dimension trees to the set of Multi-TTM operations in CP-ALS-QR (\cref{alg:cp-als-qr}). In this case, we exploit the overlap in individual TTMs across modes and store a different set of intermediate tensors that can be re-used across modes. Dimension trees have been used for Multi-TTM before in the context of Tucker decompositions for sparse tensors and the Higher-Order Orthogonal Iteration algorithm \cite{BMVL12,KU16}. As in the case of CP-ALS, dimension trees can reduce the outer-iteration CP-ALS-QR cost from $N$ TTMs involving the data tensor to 2 TTMs. Neither of these reductions come at the expense of lower performance, so we can expect $\mathcal{O}(N)$ speedup in each case. For CP-ALS-QR, there are other overlapping computations that can be similarly exploited. For example, the QR decomposition of $\M{V}_n$ can be performed using a tree across the Khatri-Rao factors, some of which are shared across modes, though the structure of the orthonormal factor would need to be maintained when applying it. Because the Tensor Toolbox implementation of CP-ALS does not employ dimension trees, for fair comparison, we do not use them for CP-ALS-QR either. \section{Numerical Experiments}\label{sec:examples} In this section, we explore several examples that demonstrate the benefits of CP-ALS-QR and CP-ALS-QR-SVD over the typical ALS approaches. Specifically, we will demonstrate the performance of our algorithms as well as show the stability of our algorithms by considering ill-conditioned problems. \subsection{Performance Results} \label{ssec:perf} As seen in our analysis in \cref{ssec:costs}, the dominant cost for our new algorithms is the same as for CP-ALS and CP-ALS-PINV when $R$ is small. For large $R$, we see that the lower-order terms for CP-ALS-QR and CP-ALS-QR-SVD do have an effect on the runtime. We verify the comparable runtimes for small $R$ and examine the slowdown for large $R$ with a few experiments here. We break each algorithm down to its key components and time each individually. These components are listed in \cref{tab:its_parts}. Each row of the table represents corresponding parts of the two different types of algorithm. \begin{table}[!ht] \centering \begin{tabular}{|c|c|} \hline \textbf{CP-ALS, CP-ALS-PINV} & \textbf{CP-ALS-QR, CP-ALS-QR-SVD} \\ \hline MTTKRP & Multi-TTM \\ Gram of factor matrices & QR of factor matrices \\ N/A & Computing $\M{Q}_0$ \\ N/A & Applying $\M{Q}_0$\\ Other & Other \\ \hline \end{tabular} \caption{Breakdown of main components in each iteration of CP-ALS, CP-ALS-PINV, CP-ALS-QR, and CP-ALS-QR-SVD. 
We use this breakdown in our performance experiments shown in \cref{fig:its}.} \label{tab:its_parts} \end{table} For CP-ALS and CP-ALS-PINV (\cref{alg:cp-als}), the MTTKRP refers to forming $\M{M}_n = \Mz{X}{n} \M{Z}_n$, see \cref{line:mttkrp}, while we compute the Gram matrices for each factor matrix in \cref{line:gram}. For CP-ALS-QR and CP-ALS-QR-SVD (\cref{alg:cp-als-qr}), the Multi-TTM is computed when applying the Kronecker product of $\M{Q}_j$ matrices to $\Mz{X}{n}$, see \cref{line:ttm}, and we compute the QR factorization of each factor matrix in \cref{line:qr_factor}. Computing $\M{Q}_0$ involves computing a QR factorization of Khatri-Rao product $\M{V}_n$, see \cref{line:q0}, and applying $\M{Q}_0$ is a matrix multiplication, see \cref{line:apply_q0}. The steps included in the ``Other'' category include solving (by substitution or SVD), finding our weight vector $\V{\lambda}$, and the error computation. These steps are combined as none represent a significant portion of the runtime for any of the four algorithms. \begin{figure}[!t] \centering \includegraphics[scale=.4]{bar_its.pdf} \caption{Average runtime in seconds of a single iteration for CP-ALS, CP-ALS-PINV, CP-ALS-QR, and CP-ALS-QR-SVD for a three-way tensor of size 700 (top left), a four-way tensor of size 300 (top right), and a five-way tensor of size 75 (bottom left). Results are plotted for increasing rank values, and the slowdown ratio between the runtimes of CP-ALS-QR and CP-ALS is plotted above the group of results for each rank.} \label{fig:its} \end{figure} The three tensors we test are randomly generated cubical tensors of three, four, and five modes. The three-way tensor has dimension $700$, the four-way tensor has dimension $300$, and the five-way tensor has dimension $75$. We computed the average iteration time over 10 iterations (omitting the first iteration to ensure a warm cache). The tolerance we used for all algorithms was $10^{-10}$, and we computed the error in the efficient manner described in \cref{ssec:error}. The results for these three tensors with increasing rank values are in \cref{fig:its}. For each rank value, we also plot the slowdown ratio we see between the overall runtimes of CP-ALS-QR and CP-ALS. For all three tensors, the dominant cost per iteration is the MTTKRP for CP-ALS and CP-ALS-PINV, and the Multi-TTM for CP-ALS-QR and CP-ALS-QR-SVD. Only for high ranks do other costs, computing and applying the QR of the Khatri-Rao product, even appear visibly in the plot. In the three-way case, all the slowdown ratios are close to $1\times$. This is similar for low ranks in four and five modes, but the ratio for high ranks jumps up to $5\times$. These results demonstrate that the slowdown incurred by using our QR-based algorithms is not significant when the CP rank is small. \subsection{Collinear Factor Matrices} \label{ssec:collinear} In this example, we test our algorithms on synthetic tensors constructed so that the factor matrices are ill-conditioned. Following the approach in \cite{score-info}, we create this tensor from randomly generated factor matrices and weights so that we are able to compare the results of our algorithms to the true solution, and we add Gaussian noise. The randomly generated factor matrices are constructed as in \cite{tomasi2006comparison} so that we can control their collinearity. For our experiments, we construct a 3-way $50 \times 50 \times 50$ tensor with rank 5 and varying levels of collinearity and noise. 
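As an illustration of this setup, the following \textsf{MATLAB} sketch builds one such synthetic tensor using Tensor Toolbox objects (unit weights for simplicity). The collinearity construction shown here (a Gram matrix with unit diagonal and constant off-diagonal entries $c$) is one standard recipe; the experiments follow \cite{tomasi2006comparison}, which may differ in details, and the noise scaling shown is one way to realize a prescribed relative noise level.
\begin{verbatim}
I = 50; R = 5;                        % tensor dimension and rank
c = 1 - 1e-7;                         % collinearity level
eta = 1e-7;                           % relative noise level
K = (1-c)*eye(R) + c*ones(R);         % target Gram matrix, unit diagonal
L = chol(K);                          % K = L'*L
A = cell(1,3);
for nmode = 1:3
    [Q,~] = qr(randn(I,R),0);         % random orthonormal columns
    A{nmode} = Q*L;                   % columns have pairwise inner product c
end
Xtrue = full(ktensor(ones(R,1), A));          % noiseless rank-5 tensor
Nse = tensor(randn(I,I,I));
X = Xtrue + eta*(norm(Xtrue)/norm(Nse))*Nse;  % add scaled Gaussian noise
\end{verbatim}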
We compare our QR-based algorithms, CP-ALS-QR and CP-ALS-QR-SVD, to CP-ALS and CP-ALS-PINV. We also compare all four ALS algorithms to an optimization-based CP method using Gauss-Newton implemented in Tensorlab \cite{tensorlab}, which we describe in \cref{ssec:gn}. We test all combinations of three different noise levels $10^{-4}, 10^{-7}$, and $10^{-10}$, and three different collinearity levels $1-10^{-4}, 1-10^{-7}$, and $1-10^{-10}$. For each configuration, we run $100$ trials of each algorithm to approximate a rank-$5$ CP factorization. The maximum number of iterations is $500$ and the convergence tolerance for change in the relative error is $10^{-15}$. We use such a tight tolerance to ensure the computed metrics reflect what is attainable by the algorithm and not an artifact of early convergence. A random guess is used for each initialization, and each algorithm is configured with the same initial factor matrices. To measure the performance of each algorithm, we consider the number of iterations to converge, the relative error defined as $\|\T{X} - \T{\hat{X}}\|/\|\T{X}\|$, and a similarity measure between the approximation and true tensor called the score \cite{tomasi2006comparison}. The score is a proxy for forward error by checking if two given Kruskal tensors are nearly equivalent up to scaling and permutation. The maximum score is 1 (for Kruskal tensors that are equivalent), and the minimum is 0. An alternative metric called CorrIndex \cite{CorrIndex} also measures how closely two Kruskal tensors match. We observed similar quantitative behaviors between the score and CorrIndex and report only the score in our experiments. Results of the experiments are presented in \cref{tab:coll-noise}, where we see that the combination of noise and collinearity affects the ill-conditioning of the problem in different ways. When the noise level is $10^{-4}$ (first row), as well as when the collinearity is $1-10^{-4}$ (first column), all ALS algorithms have similar performances, implying that the subproblems are not ill-conditioned in these cases. The behavior of Gauss-Newton is more variable across the first row and down the first column. In the first row ($10^{-4}$ noise), we see the score decrease dramatically as collinearity increases. In the first column ($1-10^{-4}$ collinearity), we see much faster (quadratic) convergence as noise decreases, and Gauss-Newton converges to the correct solution. We observe variability across ALS algorithms in the bottom-right $2\times 2$ grid of experiments, where the combination of higher collinearity ($1-10^{-7}$ and $1-10^{-10}$) and low noise ($10^{-7}$ and $10^{-10}$) creates ill-conditioned subproblems. Our first observation is that CP-ALS-QR and CP-ALS-QR-SVD are robust in these cases: they obtain the lowest relative error with very little variation across initialization, they converge in fewer iterations than CP-ALS and CP-ALS-PINV, and their scores are generally high. Among the two normal equations based algorithms, we observe that CP-ALS-PINV obtains a better score than CP-ALS in all cases, but this comes at the expense of higher backward error. These two algorithms rarely converge before they hit the maximum of 500 iterations, which we attribute to the instability of the iterations, particularly near the ill-conditioned solution. The behavior of Gauss-Newton is consistent across these four cases: it converges quickly (due to the low noise), but the scores and relative errors are generally worse than those of the ALS-based algorithms, sometimes significantly. 
Comparing CP-ALS-QR and CP-ALS-QR-SVD, we see that the SVD variant is slightly more robust, obtaining higher scores, but sometimes requires more iterations to converge. \begin{table} \centering \begin{tabular}{|c|M{49mm}M{46mm}M{45mm}|} \hline {} & \multicolumn{3}{c|}{}\\[1pt] {} & \multicolumn{3}{c|}{\small Collinearity}\\[8pt] \hline {} & \multicolumn{3}{c|}{} \\[1pt] \small Noise & \footnotesize$1-10^{-4}$ & \footnotesize$1-10^{-7}$ & \footnotesize$1-10^{-10}$ \\[8pt] \hline \footnotesize$10^{-4}$ & \includegraphics[scale=.25]{n4c4.pdf} & \includegraphics[scale=.25]{n4c7.pdf} & \includegraphics[scale=.25]{n4c10.pdf}\\ \hline \footnotesize$10^{-7}$ & \includegraphics[scale=.25]{n7c4.pdf} & \includegraphics[scale=.25]{n7c7.pdf} & \includegraphics[scale=.25]{n7c10.pdf}\\ \hline \footnotesize$10^{-10}$ & \hspace{-.4cm} \includegraphics[scale=.25]{n10c4.pdf} & \includegraphics[scale=.25]{n10c7.pdf} & \includegraphics[scale=.25]{n10c10.pdf} \\ \hline \end{tabular} \caption{Scores, iterations, and relative error boxplots for Gauss-Newton, CP-ALS, CP-ALS-PINV, CP-ALS-QR, and CP-ALS-QR-SVD on a $50 \times 50 \times 50$ synthetic tensor of rank 5 with three different levels of collinearity for the true factor matrices and three different levels of Gaussian noise added.} \label{tab:coll-noise} \end{table} To summarize, when subproblems are well conditioned (either high noise or low collinearity), we observe no difference in the convergence or accuracy of the ALS algorithms. In the presence of ill-conditioned subproblems, we see that the QR-based algorithms, CP-ALS-QR and CP-ALS-QR-SVD, have stable performances in all scenarios, while the normal equations based algorithms tend not to converge quickly and also suffer from higher forward or backward errors. We see that while using the SVD within CP-ALS-PINV can improve the quality of the solution compared to CP-ALS, it sacrifices backward error and is not as robust as the QR-based methods. Gauss-Newton is a viable alternative to ALS, exhibiting much faster convergence in the case of low noise, but it suffers from the same sensitivity to ill-conditioned problems and is not as robust as CP-ALS-QR or CP-ALS-QR-SVD in those cases. \subsection{Sine of Sums} \label{ssec:sine} We further test the stability of our QR-based algorithms on a function approximation problem discussed in \cite{sine-defn}. The sine of sums $\sin(x_1 + \dots + x_N)$ is an example of an $N$-dimensional function that can be approximated in such a way that the complexity grows linearly with $N$ instead of exponentially. These efficient approximations are sometimes referred to as separated representations, and they are closely related to CP decompositions. In this case, we are simulating the numerical discovery of an efficient separated representation, as we already know it exists. That is, given a separated representation with large rank as input, we seek to compute a lower-rank representation that approximates it to numerical precision. We consider this problem as it can be ill-conditioned depending on the representation, which is nonunique. \subsubsection{Setup} The multivariate sine of sums function $\sin(x_1 + \dots + x_N)$ can be discretized as a dense $N$-mode tensor $\T{X} \in \mb{R}^{n \times \dots \times n}$ with $x_j \in \mb{R}^{n}$ as vectors discretizing the interval $[0,2\pi)$ for $j = 1,\ldots,N$. 
As the sine of sums can be expressed as a sum of $2^{N-1}$ terms using sum identities for sine and cosine, we can expand all terms to obtain a rank-$2^{N-1}$ representation of $\T{X}$, which corresponds to an exact CP decomposition. Another exact CP representation of rank $N$ also exists, of the form \begin{equation}\label{eq:sinsum_rkd} \T{X} = \sin\left(\sum_{j=1}^N x_j \right) = \sum_{j=1}^N \sin(x_j) \prod_{k=1,k\neq j}^N \frac{\sin(x_k+\alpha_k - \alpha_j)}{\sin(\alpha_k - \alpha_j)}, \end{equation} where $\alpha_j$ must satisfy $\sin(\alpha_k - \alpha_j) \neq 0$ for all $j \neq k$. This rank-$N$ representation is nonunique, and can be numerically unstable depending on the choice of $\alpha_j$. The input representations are already rank-$2^{N-1}$ Kruskal tensors, and we exploit that structure in our algorithms as discussed in \cref{ssec:ktensor}. In the experiments below, we vary the number of modes $N$, corresponding to the number of variables in the sine of sums function, and the dimension $n$ of each mode, corresponding to the number of discretization points in the interval $[0,2\pi)$. In each experiment, we consider the relative error of four algorithms, CP-ALS, CP-ALS-PINV, CP-ALS-QR, and CP-ALS-QR-SVD, at the end of each of the first 40 iterations. We use the same random initial guesses for the factor matrices across algorithms and a convergence tolerance of 0. We also compute the relative error $\| \T{X} - \T{\hat{X}} \|/ \|\T{X}\|$ directly (as opposed to the methods described in \cref{ssec:error}) as we need a more accurate computation of the error to truly compare the accuracy of our algorithms. \subsubsection{Lower-order tensors} We first examine two lower-order cases, with $N = 4$ and 5 modes. Starting with $N = 4$, we compute the relative error at the end of each iteration for all four ALS algorithms for two different $n$ values, the size of vectors $x_j$. The results are plotted in \cref{fig:sin4d}, where we see that as $n$ increases, the difference between the relative errors of the algorithms becomes larger. Specifically, the relative error at the end of each iteration is larger for CP-ALS and CP-ALS-PINV than our QR-based algorithms. \begin{figure} \centering \includegraphics[scale=.4]{d4r8n64} \includegraphics[scale=.4]{d4r8n128} \caption{Relative error of CP-ALS, CP-ALS-PINV, CP-ALS-QR, and CP-ALS-QR-SVD on the four-way sine of sums tensor in full rank-8 representation at the end of each iteration for dimension $n=64$ (left) and $n=128$ (right).} \label{fig:sin4d} \end{figure} Similarly, for $N = 5$, we plot the relative error at the end of each iteration for all four ALS algorithms for increasing $n$ values. These results are shown in \cref{fig:sin5d}, where we see a similar result to $N = 4$. As the dimension gets bigger, the QR-based algorithms converge to a lower relative error value. Note that in this case, $n=32$ is not a fine enough discretization to obtain relative error near machine precision; only $10^{-6}$ error is achieved by any algorithm. \begin{figure} \centering \includegraphics[scale=.4]{d5r16n32} \includegraphics[scale=.4]{d5r16n64} \caption{Relative error of CP-ALS, CP-ALS-PINV, CP-ALS-QR, and CP-ALS-QR-SVD on the five-way sine of sums tensor in full rank-16 representation at the end of each iteration for dimension $n=32$ (left) and $n=64$ (right).} \label{fig:sin5d} \end{figure} \subsubsection{Higher-order tensors} We now consider a higher-order case, where our tensor has $N = 10$ modes. 
In \cref{fig:sin10d}, we plot these relative errors in two cases. We use $n=8$ for both cases, but use two different random initializations to show two different types of results we obtained. For the first random initialization (left), we see similar results to the lower-order cases, with all four algorithms converging, but the gap between the relative error for the QR-based algorithms and the normal equations-based algorithms is much larger than in lower-order tensors. With the second random initialization (right), CP-ALS and CP-ALS-PINV do not converge to anything, while the relative error for CP-ALS-QR and CP-ALS-QR-SVD converge normally to low values. When repeating this experiment for multiple random initializations, we found that this second case was more common, occurring in four out of five trials. \begin{figure} \centering \includegraphics[scale=.4]{d10} \includegraphics[scale=.4]{d10_break} \caption{Relative error of CP-ALS, CP-ALS-PINV, CP-ALS-QR, and CP-ALS-QR-SVD on the ten-way sine of sums tensor in rank-10 representation at the end of each iteration for dimension $n=8$. Results from two different random initializations are shown. } \label{fig:sin10d} \end{figure} From these experiments, we can see that for ill-conditioned problems, our QR-based algorithms are more stable than the typical algorithms in higher dimensions. For all dimensions, we are also able to attain higher accuracy than traditional ALS algorithms. \section{Conclusions and Future Work} \label{sec:conclusion} We have developed and implemented versions of the CP-ALS algorithm using the QR decomposition and SVD in an effort to address the numerical ill-conditioning to which the normal equations in the traditional algorithm are susceptible. The first version uses a QR factorization to solve the linear least squares problems within CP-ALS. We also present the CP-ALS-QR-SVD algorithm, which applies the SVD as an extra step in the algorithm to handle numerically rank-deficient problems. In addition to the algorithms themselves, we provide analysis of their complexity, which is comparable to that of the widely-used CP-ALS algorithm when the rank is small. Our new algorithms prove useful for computing CP tensor decompositions with more stability in the event of ill-conditioned coefficient matrices, and present an alternative when analyzing tensor data for which the CP-ALS algorithm produces dissatisfactory results, or is unable to produce any result due to ill-conditioning. We envision our QR-based algorithms being used as part of a robust CP-ALS solver that uses the traditional normal equations approach by default, but solves the least squares problems using QR if any ill-conditioning is detected. There are several potential performance improvements to pursue in future work. In situations where the target rank is high, computing $\M{Q}_0$, which involves a QR of a Khatri-Rao product of upper triangular matrices, becomes a more dominant cost of the CP-ALS-QR algorithm. The Khatri-Rao product of upper triangular matrices has structure which we do not exploit in our implementation, and which could lead to a more efficient implementation. We could also use dimension trees to speed up our implementation of the Multi-TTM function, the major dominant cost in both new algorithms. We currently use the Tensor Toolbox implementation but could improve the performance by reusing computations as in a dimension tree. The Gauss-Newton algorithm we use solves the approximate linear least squares problems via the normal equations. 
Another interesting direction to pursue would be to use the QR decomposition of the Jacobian to solve these least squares problems instead, improving the stability in the presence of ill-conditioning. \bibliographystyle{siamplain}
2,869,038,156,221
arxiv
\section{Introduction} Multi-Agent Systems (MAS) have been an interesting topic in the areas of decision theory and game theory. MAS are composed of a number of autonomous agents. In some applications, these autonomous agents act in a self-interested manner in their dealings with numerous other agents. Even in game theory, in an interactive framework the decision of one agent often affects that of another. This behavior is seen in the MAS which mainly deal with issues like resource allocation~\cite{bredin00gametheoretic,sycara98}. In such scenarios, each agent holds different preferences over the various possible allocations and hence, concepts like individual rationality, fairness, optimality, efficiency, etc., are important~\cite{mara-survey}. In this paper, we study a framework where optimality is a desirable property but fairness is a required property. An excellent example of such a framework is Combinatorial Auctioning Systems (CAS) where the two most important issues pertaining to resource allocation are \emph{optimality} and \emph{fairness}. Incorporation of fairness into game theory and economics is a significant issue. Its welfare implications in different systems were explored by Rabin~\cite{rabin93}. The problem of fair allocation is being resolved in various MAS by using different procedures depending upon the technique of allocation of goods and the nature of goods. Brams and Taylor give the analysis of procedures for dividing divisible and indivisible items and resolving disputes among the self-interested agents~\cite{brams96}. Some of the procedures described by them include the ``Divide and Choose'' method of allocation of divisible goods among two agents to ensure the fair allocation of goods which also exhibits the property of ``envy-freeness,'' a property first introduced by Foley~\cite{foley67}. Lucas' method of markers and Knaster's method of sealed bids are described for MAS comprising more than two players and for the division of indivisible items. The Adjusted-Winner (AW) procedure is also defined by Brams~\cite{brams05} for envy-freeness and equitability in two-agent systems. Various other procedures like moving knife procedures for cake cutting are defined for the MAS comprising three or more agents~\cite{brams05, barbane04}. However, it can also be seen that the definition of fairness varies across the different multi-agent systems, i.e., the term \emph{fairness} is perceived differently in various MAS with regard to the resource allocation. In some MAS, it can be defined as equitable distribution of resources such that each recipient believes that it receives its fair share. Thus, each agent likes its share at least as much as that of other agents' share and, thereby, it is also known as envy-free division of resources~\cite{brams05}. But this definition of fairness is not applicable to all the MAS. To explain the notions of fairness in MAS, we classify fairness into \emph{extended fairness} and \emph{basic fairness} in this paper. To illustrate these notions of fairness mathematically, we shall use the framework of the Combinatorial Auctioning Systems (CAS). The CAS is a kind of MAS whereby the bidders can express preferences over combination of items~\cite{nisan00,narahari05}. The CAS approach is being used by different government agencies like the FCC~\cite{cramton05} and numerous business applications like logistics and transportation~\cite{caplice03, caplice05}, supply chain formation~\cite{walsh00}, B2B negotiations~\cite{jones00}, etc. 
It has been noticed that one of the significant issues in CAS is that of resource allocation. Optimum resource allocation is one of the most desirable properties in a CAS, and deals mainly with the Winner Determination Problem (WDP)~\cite{sandholm02, naramunchi05}. Determining the winner in a CAS so as to maximize revenue is an NP-complete problem. However, it is seen that besides WDP, fairness is another important objective in many CAS-like government auctions. Rothkopf expressed his view in~\cite{rothkopf01} that ``optimal solution to the winner determination problem, while desirable, is not required. What is required is a guarantee that the auction will be fair and will be perceived as fair.'' Hence, we realize the significance of fairness in CAS. We shall consider a CAS that uses the Sandholm algorithm and the concept of a Generalized Vickrey Auction (GVA)~\cite{narahari05}. Sandholm's algorithm is a method to determine the optimal allocation of resources~\cite{sandholm02} in a CAS. The concept of single-round second-price sealed-bid auction is then used to determine the payment made by the winners. According to this, the payment made by a winner is determined by the second-highest bid. In order to achieve fairness in such a CAS, we extend this existing payment scheme and take into consideration the fair values of resources as perceived by the bidders and the auctioneer in the system. Based upon their estimate of fair values, payments are made by the winners. A detailed analysis is done to highlight some important properties exhibited by this extension of the payment scheme. We start by classifying fairness and explain its different notions in Section~\ref{sec_fairness}. It is followed by our study on CAS in Section~\ref{sec_cas} and mathematical formulations are given that are used to extend the payment scheme to achieve fairness in CAS. Section~\ref{sec_analysis} gives a detail analysis of the scheme that highlights the attractive properties in our payment scheme. We conclude with Section~\ref{sec_conclusion} which offers some conclusions about our efforts, and some suggestions for further work along these lines. \section{Classification of Fairness} \label{sec_fairness} To explain the different notions of fairness in various MAS, we classify fairness as \emph{Basic Fairness} and \emph{Extended Fairness}. This section defines the various perceptions about measuring fairness in MAS. In our analysis, we do not consider agent preferences as being apart from their bids, i.e., if an agent has a higher preference for something, it is considered to indicate the same by a higher bid, and vice versa. All goods are considered divisible. Our algorithm given in Section~\ref{fairness_algo} creates an allocation that is seen as having fairness (either basic or extended) by all agents in the system. \subsection{Basic Fairness} In many MAS, there occurs a need of allocating the resources in an equitable manner, i.e., each agent gets an equitable share of the resources. This happens mainly when every agent holds similar significance for the given set of resources and has a desire to procure it. Thus, it becomes necessary to allocate the resources in an equitable fashion, i.e., such that each agent believes that its share is comparable to the share of other agents. Thus, none of the agents hold preferences over the share of other agents. Hence, we say that every agent believes that the set of resources is divided fairly among all the agents. This concept of fairness is termed as \emph{basic fairness}. 
\begin{definition} When allocation is perceived to be fair in comparison to the other agents i.e. share of all the agents is comparable, \emph{basic fairness} is said to be achieved in resource allocation. \end{definition} This kind of fairness is required in the applications whereby fairness is the key issue rather than the individual satisfaction of the self-interested agents. In such applications, it becomes necessary to divide a resource set in an equitable fashion so that every agent believes that it is receiving its fair share from the set of resources. Hence, we see that every agent enjoys material equality and this ensures basic fairness among them. In other words, the concept of basic fairness also ensures egalitarian social welfare~\cite{yann05} and envy-freeness~\cite{brams05}. An example of such application that pertains to the equitable allocation of resources is given by Lematre~\cite{mara-survey}. It deals with the equitable distribution of Earth Observing Satellite (EOS) Resources. EOS is co-funded and exploited by a number of agents and its mission is to acquire images of specific areas on earth surface, in response to observation demands from agents. However, due to some exploitation constraints and due to large number of demands, a set of demands, each of which could be satisfied individually, may not be satisfiable in a single day. Thus, exploitation of EOS should ensure that each agent gets an equitable share in the EOS resources, i.e., the demands of each agent is given equal weight assuming that agents have equal rights over the resource (we assume that they have funded the satellite equally). Hence, we observe that basic fairness is achieved as the demands of all agents are entertained by the equitable distribution of EOS resources. \subsection{Extended Fairness} In every MAS, we observe that each agent intends to procure a resource at a value that is perceived by it to be fair for the procurement. In other words, every agent assigns a fair value to each resource that determines its estimate of the value of the resource in quantitative terms. The fair value attached to each resource can be expressed in monetary terms in most MAS. Thus, an agent intends to procure a resource by trading it with cash which is equal to the fair value attached to the resource by the respective agent. In such cases, each agent believes that it procures the resource at a fair value and, hence, believes the allocation to be fair. However, it is important to mention that the fair value attached to each resource by an agent does not necessarily reflect the utility value of the resource to it. An agent may hold a higher or lower utility value for a resource irrespective of the fair value attached to the resource by it. Thus, the fair value attached to a resource is an estimate of the actual value of the resource in the system as perceived by an agent in quantitative terms. It means that an agent is always willing to trade a resource at its fair value. The resource procurement in such MAS is perceived to be fair by every agent. Resources are allocated to the agents based upon different criteria of optimality in a system. However, it is assured that each agent that procures a resource perceives the trade to be fair. The other aspects of allocation like resources procured by other agents, fair values attached to the resource by other agents, utility value of the resource to other agents, etc., are not considered while an agent trades a resource with its fair value. 
Thus, we see that the kind of fairness that is achieved in such a system does not depend on the other agents and, hence, we term it \emph{extended fairness}. \begin{definition} When allocation is perceived to be fair by an individual agent procuring a resource, and is irrespective of the measures attached by other agents, \emph{extended fairness} is said to be achieved in resource allocation. \end{definition} \noindent An example of such a system can be explained through a scenario of job allocations in a multi-national company. Consider a MAS that refers to a company hiring situation, comprising an agent offering the job positions (i.e., the owner's agent) and a number of self-interested agents who contend for these jobs. The contending agents express their estimate of the fair value through their curriculum vitae that is submitted to the owner agent, i.e., each contending agent believes that its curriculum vitae fulfills the minimum requirements for the job and that it is eligible for the job. Hence, the agents define their perception of the required qualifications for the job through their curriculum vitae and believe it to be sufficient to qualify for the job. The owner agent selects the job-seeker agent that holds at least the minimum qualifications required for the job but holds the maximum qualifications among all the contending agents. Thus, the job is allocated to the agent whose curriculum vitae matches this criterion. Hence, the allocation is perceived to be fair by the winning agent and by all other agents as it is allocated to the most deserving among all the agents. Hence, the job is allocated to the agent on the basis of its curriculum vitae, i.e., an agent acquires a job at its estimate of the fair value of the qualifications required for the job. Thus, we have seen two broad classifications of fairness that explain the different notions of fairness as perceived by the agents in different MAS. To explain these notions of fairness mathematically, we shall study a framework where fairness is a required property in resource allocation. However, we also see that resource allocation deals with another key issue of optimality in various MAS. Thus, the best example of a resource allocation framework where both optimality and fairness are the key issues is Combinatorial Auctioning Systems (CAS). \section{Fairness in Combinatorial Auctioning Systems (CAS)} \label{sec_cas} Combinatorial Auctioning Systems are a kind of MAS which comprise an auctioneer and a number of self-interested bidders. The auctioneer aims at allocating the available resources among the bidders who, in turn, bid for sets of resources to procure them in order to satisfy their needs. The bidders aim at procuring the resources at minimum value during the bidding process, while the auctioneer aims at maximizing the revenue generated by the allocation of these resources. Thus, CAS refers to a scenario where the bidders bid for the set of resources and the auctioneer allocates the same to the highest-bidding agent in order to maximize the revenue. Hence, we see that optimality is one of the key issues in CAS. The Sandholm algorithm is used here to attain the optimal allocation of resources. It works by constructing an allocation tree and carrying out some preprocessing steps like pruning to make the search faster without compromising the optimality~\cite{narahari05, sandholm02}. However, besides optimality, another key issue desired by some auctioning systems is fairness.
To incorporate this significant property in this resource allocation procedure, we propose an algorithm which uses a metric to measure fairness for each agent and determines the final payment made by the winning bidders. The algorithm that we describe is based upon a CAS that uses the Sandholm algorithm for achieving optimality, and an incentive-compatible mechanism called Generalized Vickrey Auction (GVA) as the pricing mechanism that determines the payments to be given by the winning bidders. The Generalized Vickrey Auction (GVA) has a payoff structure that is designed in a manner such that each winning agent gets a discount on its actual bid. This discount is called a Vickrey Discount, and is defined in~\cite{narahari05} as the extent by which the total revenue to the seller is increased due to the presence of that winning bidder, i.e., the marginal contribution of the winning bidder to the total revenue. We give mathematical formulations to show that both kinds of fairness can be achieved in CAS. We show that \emph{extended fairness} is achieved in all cases except in case of a tie, in which case \emph{basic fairness} is ensured. \subsection{Mathematical Formulation} \subsubsection{Terminology} Let our CAS be a multi-agent system which is defined by the following entities: \begin{itemize} \item[(i)]A set $\Phi$ comprising $m$ resources \textit{\(r_0, r_1,\ldots, r_{m-1}\)} for which the bids are raised. \item[(ii)] A set $\xi$ comprising $n$ bidders \textit{\(b_0,b_1,\ldots, b_{n-1}\)}. These are the agents among whom the resources are allocated. \item[(iii)] An auctioneer, denoted by $\lambda$, is the initial owner of all the resources and invites bids in the auctions. \end{itemize} Let us consider a CAS that comprises three bidders \textit{\(b_0,b_1,b_2\)}, an auctioneer denoted as $\lambda$, and three resources \textit{\(r_0, r_1, r_2\)}. Each bidder is privileged to bid upon any combination of these resources. We denote the combinations or subsets of these resources as \textit{\{\(r_0\)\}, \{\(r_1\)\}, \{\(r_2\)\}, \{\(r_0, r_1\)\}, \{\(r_0, r_2\)\}, \{\(r_1, r_2\)\}, \{\(r_0, r_1, r_2\)\}}. We shall use the term package to define a set that comprises the subsets of resources won by a bidder. For example, a package for a bidder winning the subsets \textit{\{\(r_0\)\}} and \textit{\{\(r_1\)\}} is defined as \textit{\{\{\(r_0\)\}, \{\(r_1\)\}\}}. Assume that the auctioneer and each bidder has fair valuation for each of the individual resource (say, in dollars) as shown in Table~\ref{table1}. \begin{definition} The fair valuation for an agent represents its estimate of the actual value of the resource. \end{definition} Thus, fair valuation by a bidder and an auctioneer for each resource represents their estimate of the actual value of each resource. Thus, a bidder is willing to trade a resource at its fair value and also believes that no loss is incurred by the seller in the trade. Similarly, the auctioneer is willing to sell a resource at the fair valuation described for it by him. Fair value for a combination of resources can be calculated as the sum of the fair value for each of the resources in that combination. The fair valuation for a resource by a bidder does not refer to the utility measure of the resource for the bidder. We shall use the term fair valuation and fair value interchangeably. 
\begin{table*}[!h] \centering \begin{tabular}[h]{c|c c c} \hline &\(r_0\)&\(r_1\)&\(r_2\) \\ \hline \hline \emph{Bidder \(b_0\)}&5&8&8 \\ \emph{Bidder \(b_1\)}&10&2&8 \\ \emph{Bidder \(b_2\)}&10&5&10 \\ \emph{Auctioneer, $\lambda$}&8&10&15\\ \end{tabular} \caption{Fair valuations for each resource by all bidders and the auctioneer} \label{table1} \end{table*} From Table~\ref{table1}, we can see that the bidder \(b_0\) values resource \(r_0\) at \$5, \(r_1\) at \$8 and \(r_2\) at \$8. This means that bidder \(b_0\) is willing to trade resource \(r_0\) with \$5, \(r_1\) with \$8 and \(r_2\) with \$8 and believes that no loss is incurred by the auctioneer in this trade. The fair valuation for the subset \{\(r_0, r_2\)\} for the bidder \(b_0\) is calculated as the sum of its fair values for \(r_0\) and \(r_2\), i.e., 5 + 8 = \$13. Similarly, the fair valuation for a package is the sum of the fair valuations of the comprising sets, i.e., for a package \textit{\{\{\(r_0\)\}, \{\(r_1, r_2\)\}\}}, the fair value is the sum of the fair values of \{\(r_0\)\} and \{\(r_1, r_2\)\}. Let the bids raised by the bidders for the individual resources and different combinations of resources be as given in Table~\ref{table2}. It can be seen that the bids raised by each of the bidders for different sets of resources may or may not be equal to the fair valuation of the respective set of resources. A bidder can place zero bids for the sets of resources it does not wish to procure. \begin{table*}[!h] \centering \begin{tabular}[h]{c|c c c c c c c} \hline &$r_0$&\(r_1\)&\(r_2\)&\{\(r_0,r_1\)\}&\{\(r_0,r_2\)\}&\{\(r_1,r_2\)\}&\{\(r_0, r_1,r_2\)\} \\ \hline \hline \emph{Bidder \(b_0\)}&0&10&5&0&20&15&50\\ \emph{Bidder \(b_1\)}&10&5&10&30&0&0&50 \\ \emph{Bidder \(b_2\)}&10&0&15&20&30&0&30 \\ \end{tabular} \caption{Bids raised by the bidders for different combinations of resources} \label{table2} \end{table*} It is assumed that the bidding language used in our system is $OR$ bids, i.e., a bidder can submit any number of bids and is willing to obtain any number of atomic bids for a price equal to the sum of their prices~\cite{nisan00, narahari05, sandholm02}. Recall that the set of all the bids won by a bidder is referred to as a package. In addition, we define the following: \begin{itemize} \item[(a)] A set $D$ which is a subset of the set of natural numbers, i.e., \(D \subseteq \mathbb{N}\), describing the possible values (in dollars) given to resources by bidders. \item[(b)] A \emph{fairness matrix}, $\Gamma_{i,[1 \times m]}$, for the bidder $b_i$, and $\Gamma_{\lambda,[1 \times m]}$ for the auctioneer, $\lambda$, is defined as: \(\Gamma_i = [\tau_{i,0}, \tau_{i,1}, \ldots, \tau_{i,m-1}]\), for the bidder $b_i$.\\ \(\Gamma_\lambda = [\tau_{\lambda,0}, \tau_{\lambda,1}, \ldots, \tau_{\lambda,m-1}]\), for the auctioneer, $\lambda$.\\ where the function $\tau_i$ is defined by a bidder, $b_i$, for a resource, $r_j$ as: \begin{displaymath} \tau_i ( r_j) = d, d \in D \end{displaymath} This function represents a fair valuation of a resource, $r_j$, by a bidder $b_i$. From Table~\ref{table1}, we have $\tau_0 \left( r_1\right)$ = 8, $\tau_1 \left( r_1\right)$ = 2, etc.
Thus, from Table~\ref{table1}, we have the following fairness matrices: $\Gamma_0$ = [5, 8, 8]; $\Gamma_1$ = [10, 2, 8]; $\Gamma_2$ = [10, 5, 10]; $\Gamma_\lambda$ = [8, 10, 15]. \item[(c)] A function $\Upsilon_{i,k}$, known as the \emph{pay function} for a bidder $b_i$, is defined as: \begin{displaymath} \Upsilon_{i,k} \left( b_i, \Psi_k\right) = d \end{displaymath} where \(\Psi_k = \{\mu_j \mid \mu_j \in \mathrm{set\ of\ resources\ won\ by\ bidder\ } b_i\}\), and $\Upsilon_{i,k}$ is the cost of the package, $\Psi_k$, to the bidder $b_i$ as calculated from the GVA payment scheme. \end{itemize} \subsubsection{Algorithm To Incorporate Extended Fairness In CAS} \label{fairness_algo} \begin{itemize} \item[(1)] Each bidder and the auctioneer defines its fairness matrix before the start of the bidding process. It is a sealed matrix and is unsealed at the end of the bidding process. \item[(2)] An allocation tree is constructed at the end of the bidding process to determine the optimum allocation and the winning bidders~\cite{sandholm02}. Information about the bidders involved in a tie is not discarded using some pre-defined criterion. \item[(3)] Use the GVA pricing mechanism to calculate the Vickrey discount~\cite{narahari05} and, hence, the payments by the winning bidders for their corresponding packages, i.e., calculate $\Upsilon_{ij}$ for the package $\Psi_j$ won by the bidder $b_i$. \item[(4)] Calculate the fair value of the package won by each bidder and denote it as $\Pi_{ij}$ for the bidder $b_i$ who wins the package $\Psi_j$. \item[(5)] Also calculate the fair value of each package using the fairness matrix of the auctioneer and denote it as $\Pi_{\lambda j}$ for a package $\Psi_j$. \item[(6)] Compare the values of $\Pi_{\lambda j}$ and $\Upsilon_{ij}$ and determine the final payment by the bidder depending upon the following conditions: \end{itemize} Case 1: $\Upsilon_{ij} > \Pi_{\lambda j}$ The bidder pays the amount $\Upsilon_{ij}$ and the auctioneer gains a profit equal to $(\Upsilon_{ij} - \Pi_{\lambda j})$, which is distributed among the other bidders who bid for the package $\Psi_j$. The profit is distributed in a proportional manner, i.e., in the ratio of $(\Pi_{kj} - \Pi_{\lambda j}) / (\Pi_{\lambda j})$ for a bidder $b_k$ who also bid for $\Psi_j$ but is not a winning bidder. Case 2: $\Upsilon_{ij} = \Pi_{\lambda j}$ In this case, the bidder pays the amount $\Upsilon_{ij}$ to the auctioneer. Case 3: $\Upsilon_{ij} < \Pi_{\lambda j}$ The auctioneer suffers a loss of amount $(\Pi_{\lambda j} - \Upsilon_{ij})$. However, the loss can be recovered as per the following cases: \begin{itemize} \item[(i)] $\Pi_{ij} > \Pi_{\lambda j}$ The bidder's estimate of the fair value exceeds $\Pi_{\lambda j}$ (and hence $\Upsilon_{ij}$). Thus, the bidder gives the final payment of $\Pi_{\lambda j}$ to the auctioneer. \item[(ii)] $\Pi_{ij} = \Pi_{\lambda j}$ The bidder's estimate of the fair value is the same as the auctioneer's estimate and is greater than the value $\Upsilon_{ij}$. Thus, the bidder pays the amount $\Pi_{ij}$ to the auctioneer. \item[(iii)] $\Pi_{ij} < \Pi_{\lambda j}$ \begin{itemize} \item[(a)] $\Pi_{ij} \le \Upsilon_{ij}$: the bidder's final payment remains the same, i.e., $\Upsilon_{ij}$; \item[(b)] $\Pi_{ij} > \Upsilon_{ij}$: the bidder's final payment is equal to $\Pi_{ij}$. \end{itemize} \end{itemize} A short illustrative sketch of these payment rules is given below. \subsubsection{Handling the Cases of a Tie -- Incorporating Basic Fairness} Unlike traditional algorithms, we do not discard the bids in the cases of a tie on the basis of some pre-decided criterion.
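Before describing the treatment of ties, we give a minimal sketch (in Python) of the payment rules in steps (3)--(6) above. The sketch assumes that the GVA payment $\Upsilon_{ij}$ and the fair values $\Pi_{ij}$ and $\Pi_{\lambda j}$ have already been computed; the variable and function names are illustrative only, and the redistribution of the Case~1 surplus among the losing bidders is omitted for brevity.
\begin{verbatim}
# Final payment of a winning bidder, following Cases 1-3 above.
# upsilon       : GVA payment Upsilon_ij for the package
# pi_bidder     : bidder's fair value Pi_ij of the package
# pi_auctioneer : auctioneer's fair value Pi_lambda_j of the package
def final_payment(upsilon, pi_bidder, pi_auctioneer):
    if upsilon >= pi_auctioneer:
        # Cases 1 and 2: pay the GVA amount; in Case 1 the surplus
        # upsilon - pi_auctioneer is redistributed among the losing bidders.
        return upsilon
    if pi_bidder >= pi_auctioneer:
        # Case 3 (i) and (ii): pay the auctioneer's fair value.
        return pi_auctioneer
    if pi_bidder <= upsilon:
        # Case 3 (iii)(a): the GVA payment stands.
        return upsilon
    # Case 3 (iii)(b): pay the bidder's own fair value.
    return pi_bidder
\end{verbatim}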
We consider the cases of a tie in our algorithm to provide \emph{basic fairness} to the bidders. In cases of a tie, we shall measure the utility value of the resource to each bidder in the tie. \begin{definition} The utility value of a resource to a bidder is defined as the quantified measure of satisfaction or happiness derived by the procurement of the resource. \end{definition} Mathematically, we define the utility value for a resource set $\mu_j$ as: \[\upsilon_i(\mu_j ) = \nu_i(\mu_j) - \Pi_{ij}\] where \(\nu_i(\mu_j)\) is the bid value of the resource set \(\mu_j\) and \(\Pi_{ij}\) is the fair valuation of the resource set $\mu_j$ for the bidder $b_i$. This utility value quantifies the importance of the resource set to the bidder and its need for it. Thus, the higher the utility value, the greater is the need for the resource set. In such a case, fairness can be imparted if the resource set $\mu_j$ is divided among all the bidders in a proportional manner, i.e., in accordance with the utility value attached to the resource by each bidder. Let us consider the same example to explain the concept of basic fairness in our system. From Table~\ref{table2}, we observe that the optimum allocation attained through the allocation tree comprises the resource set $\{r_0, r_1, r_2\}$ as it generates the maximum revenue of \$50. However, we see that this bid is raised by two bidders, $b_0$ and $b_1$. Thus, we calculate the fair value of the resource set $\mu_1 = \{r_0, r_1, r_2\}$ for the bidders $b_0$ and $b_1$, i.e., $\Pi_{01}$ = 5+8+8 = \$21 and $\Pi_{11}$ = 10+2+8 = \$20. Thus, the utility value of the resource set $\mu_1$ for the bidders $b_0$ and $b_1$ is as follows: \begin{itemize} \item[] for bidder $b_0$, $\upsilon_0(\mu_1 )$ = 50 - 21 = \$29, and \item[] for bidder $b_1$, $\upsilon_1(\mu_1 )$ = 50 - 20 = \$30. \end{itemize} Hence, the resource set $\mu_1$ is divided among the bidders, $b_0$ and $b_1$, in the ratio of 29:30. In other words, bidder $b_0$ gets 49.15\% and bidder $b_1$ gets 50.85\% of the resource set $\mu_1$. The payment made by the bidders is also made in a similar proportional manner. For example, the bidders, $b_0$ and $b_1$, make their respective payments in the ratio of 29:30 to make up a total of \$50 for the auctioneer, i.e., bidder $b_0$ pays \$24.58 and bidder $b_1$ pays \$25.42 to the auctioneer for their respective shares. Hence, we see that extended fairness as well as basic fairness are achieved in CAS by using a fairness metric. We take into account the fair estimates of the auctioneer and the bidders for each resource to ensure that fairness is achieved for the auctioneer as well as the bidders. We shall do a detailed analysis of the new mechanism in the following section. \section{Analysis} \label{sec_analysis} A detailed analysis is done to highlight some important concepts used and the significant properties exhibited by our CAS through our payment mechanism. \subsection{Fairness} In MAS, every agent has its own metric to measure fairness with regard to the allocation of resources. In CAS, we see that the auctioneer and the bidders have their own estimates of the fairness value attached to each resource. We introduced the concept of the fairness matrix to capture the fair value attached to each resource by the auctioneer and each bidder. This matrix is used as a metric to ensure that each allocation of resources is perceived to be a fair allocation by the bidder as well as the auctioneer.
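As a concrete check of the tie-handling rule of the previous section, the following short sketch (in Python; the function name and the numbers are taken from the worked example above and are purely illustrative) reproduces the proportional split and the payments.
\begin{verbatim}
# Proportional split of a tied resource set among the tied bidders.
# bid         : common winning bid for the tied set (here 50)
# fair_values : each tied bidder's fair value Pi_ij of the set (here 21 and 20)
def split_tie(bid, fair_values):
    utilities = [bid - pi for pi in fair_values]   # upsilon_i = nu_i - Pi_ij
    total = sum(utilities)
    shares = [u / total for u in utilities]        # fractions of the resource set
    payments = [s * bid for s in shares]           # proportional payments
    return shares, payments

shares, payments = split_tie(50, [21, 20])
# shares   -> approx [0.4915, 0.5085]  (49.15% and 50.85%)
# payments -> approx [24.58, 25.42]    (summing to the winning bid of 50)
\end{verbatim}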
Thus, we say that extended fairness is achieved when a bidder procures a resource for an amount that is equal to its estimate of the fair value of that resource. In such a case, the bidder believes that the resource was procured by it at a fair amount irrespective of the other bidders' estimates of the fair value of that resource. Thus, the allocation is believed to be extendedly fair as per the estimates of the winning bidder. We also see that basic fairness is achieved in our system when more than one bidder has raised an equal bid for the same set of resources. In such a case, we divide the set of resources among all the bidders so as to ensure fairness to all the bidders in a tie. However, this division of the resource set is done in a proportional manner. We intend to divide the resource set such that the bidder holding the highest utility value for it gets the biggest share. To ensure this, we calculate the utility value (i.e., $\upsilon_i(\mu_j ) = \nu_i(\mu_j) - \Pi_{ij}$) of the set of resources to each bidder and divide the set in the ratio of these values among the respective bidders. Thus, we see that each bidder procures its basic share of the set of resources in accordance with the importance attached by the bidder to the set of resources. Due to the achievement of fairness through our payment scheme, the bidders are expected to show willingness to participate in the auctions. \subsection{Rationality} We shall see that the fairness matrix is a metric for fair valuation that forces the bidders and the auctioneer to behave rationally. In other words, they attain maximum profits if they describe their fairness matrix truthfully. Our system ensures certain behavioral traits of the auctioneer and the bidders through which this property of rationality is achieved in our system. These behavioral traits are described in the following: \begin{proposition} The auctioneer does not state extremely high or low values in its fairness matrix as this does not generate higher revenue. \end{proposition} \begin{proof} If an auctioneer states very high values in its fairness matrix, then Case 3 follows most of the time. From Case 3, we observe that the auctioneer receives a payment equal to $\Pi_{\lambda j}$ only if this value is comparable to that of $\Pi_{ij}$ for a bidder $b_i$. In other words, an auctioneer benefits only if its valuation is not irrationally higher than that of the bidder. On the contrary, the auctioneer also does not benefit from stating very low values in its fairness matrix. In such circumstances, Case 1 follows, whereby the auctioneer appears to gain a profit, but this profit is distributed among the bidders.\end{proof} \begin{proposition} Bidders do not state extremely high or low values in the fairness matrix as it does not help them procure the resources at lower values. \end{proposition} \begin{proof} We see that Case 3 deals with the fairness values of the bidder $b_i$. In case $\Upsilon_{ij} < \Pi_{\lambda j}$ and $\Pi_{\lambda j} < \Pi_{ij}$, the bidder pays the amount $\Pi_{\lambda j}$. Otherwise, if $\Upsilon_{ij} \le \Pi_{ij} \le \Pi_{\lambda j}$, the bidder pays the amount equal to $\Pi_{ij}$. In both cases, we see that the value to be paid is no lower than the GVA payment $\Upsilon_{ij}$. However, if the bidder is in a tie for a resource set, then its utility value becomes negative if $\Upsilon_{ij} \le \Pi_{ij}$. Hence, the bidder does not get the profits which are distributed among other bidders in a tie. Thus, a bidder incurs a loss if the value of $\Pi_{ij}$ is very high.
On the contrary, the bidder does not benefit from stating very low values in the fairness matrix. In this case, a loss is perceived by the bidder under Case 3, condition (iii), part (a).\end{proof} \begin{proposition} Bidders raise their bids truthfully. \end{proposition} \begin{proof} Bidders gain by bidding truthfully. On bidding truthfully, they can maximize the Vickrey Discount on their bids. Secondly, in the case of a tie, they can maximize the profit earned ($\upsilon_i (\mu_j) = \nu_i (\mu_j) - \Pi_{ij}$), i.e., for a given value of $\Pi_{ij}$, the profit can be maximized by raising the bids truthfully.\end{proof} \subsection{Incentive Compatibility} The payment mechanism described in our system is incentive compatible in certain cases. In cases where the payment value for a package, as calculated from the GVA mechanism, is greater than the auctioneer's fair valuation for the same package, Case 1 follows, i.e., the auctioneer gets an amount higher than its fair valuation for that package. It means that the auctioneer gains a profit equal to $(\Upsilon_{ij} - \Pi_{\lambda j})$. This profit is distributed among the bidders who bid for the same package in the proportional manner explained in Case 1. Thus, it also forces the bidders to bid truthfully so as to gain maximum benefits from the auctioning system. \subsection{Efficiency} The cases of a tie are handled in such a way as to ensure basic fairness. In such a case, we divide the resource set in proportion to its utility value to each bidder. Thus, a resource is allocated in accordance with the wishes of the consumers and, hence, the net benefit attained through its use is maximized. In other words, we can say that our system is allocatively efficient as the resources are allocated to the bidders who value them most and can derive maximum benefits through their use. Hence, we achieve allocative efficiency by handling the cases of a tie in an efficient manner. \subsection{Optimality} Optimality is a significant property that is desired in a CAS. We ensure this property by the use of the Sandholm algorithm in our system. It is used to obtain the optimum allocation of resources so as to maximize the revenue generated for the auctioneer. Thus, the allocation obtained is optimal and there is no other allocation that generates more revenue than the current one. \section{Conclusion} \label{sec_conclusion} Thus, we have shown that fairness is incorporated in CAS, whereby all the agents receive their fair share if they behave rationally. Extended fairness as well as basic fairness are attained through our payment mechanism. Optimal allocation is obtained through the Sandholm algorithm and the other significant properties like allocative efficiency and incentive compatibility are also achieved. This is an improvement because in the existing world of multi-agent systems, there do not seem to be many studies that attempt to incorporate optimality as well as fairness. The present paper addresses this lack in a specific multi-agent system, namely, the CAS. However, this work can be extended towards achieving a generalized framework suitable for all, or at least many, multi-agent systems, rather than just CAS. The framework described can also be extended in several ways: one is to de-centralize the suggested algorithm, to avoid the use of a single dedicated auctioneer.
Especially in distributed computing environments, it would be best for there to be a method to implement the suggested algorithm (or something close to it) without requiring an agent to act as a dedicated auctioneer. A second important extension would be to find applications for the work. Some applications that suggest themselves include distribution of land (a matter of great concern for governments and people the world over) in a fair manner. In land auctions where a tie occurs, no pre-defined or idiosyncratic method need be used to break the tie; rather, the allocation can be done fairly in the manner suggested. Fairness is also an important and pressing concern in the computing sciences and information technology, particularly, in distributed computing~\cite{lamport2000}. It is therefore also of interest to see how our method for achieving fairness could be applied in such contexts. \bibliographystyle{siam}
2,869,038,156,222
arxiv
\section{Introduction} Gauge/gravity correspondence, as a useful tool to explore strongly interacting field theories, e.g. QCD, has been widely studied during the last decade. One might hope to learn some qualitative lessons by searching for quantities that do not depend on the details of the particular gravity dual. Such `universal' properties may apply to field theories even without knowing their gravity dual. An elegant example of such a universal quantity is the ratio $\eta/s$ of shear viscosity to entropy density. This takes the value $1/4\pi$ in all theories with a gravity dual. In \cite{0905.0900,0905.0903}, it is shown that the speed of sound approaches the conformal value $c_{s}^{2}=1/3$ universally from below in a general class of strongly interacting $\left( 3+1\right) $-dimensional theories at high temperatures and zero chemical potential. This result is consistent with the Monte Carlo lattice QCD calculations \cite{0711.0656,1007.2580}. A number of string theory examples of holographically dual theories, including both bottom-up models \cite{0905.0900,0905.0903,0905.2969} and top-down models \cite{0210220,0305064,0406200,0506002,0507026,0605076,0701132,0806.3796,0808.3953}, do indeed consistently show that $c_{s}^{2}\leq1/d$ case by case. The speed of sound in QCD with isospin chemical potential has been studied in \cite{0011365}: under the conjecture that the transition from hadron to quark matter is smooth, it is shown that the speed of sound rises from $0$ to some value close to $1$ (the speed of light), then drops to some minimal value, and then approaches $1/3$ from below at large isospin chemical potential. For baryon chemical potential, on the other hand, no physical system in a deconfined phase with a speed of sound exceeding the conformal value is known, which has prompted a conjecture that this might represent a theoretical upper limit for the quantity. In this work, we study the $\left( d+2\right) $-dimensional Einstein-Maxwell-Scalar (EMS) system, which has been widely studied as a successful class of holographic QCD models by gauge/gravity correspondence. We analytically obtain a general class of back-reacted black hole solutions and study their dual $\left( d+1\right) $-dimensional field theories. We focus on the behavior of the speed of sound at arbitrary temperature and baryon chemical potential. Besides the well known conformal limit $c_{s}^{2}\rightarrow1/d$ at high temperature, we reveal two more universal quantities in various limits: $c_{s}^{2}\rightarrow\left( d-1\right) /16\pi$ at low temperature and $c_{s}^{2}\rightarrow\left( d-1\right) /16\pi d$ at large chemical potential. We briefly review the EMS system and the solutions in Section II. In Section III, we investigate the behavior of the speed of sound in various limits. We summarize our results in Section IV. \section{Einstein-Maxwell-Scalar Background} To study holographic QCD theory in $\left( d+1\right) $-dimensional spacetime, we consider a $\left( d+2\right) $-dimensional gravitational background coupled to a Maxwell field and a neutral scalar field, i.e. the Einstein-Maxwell-scalar system. In the Einstein frame, the action is \begin{align} S & =\dfrac{1}{16\pi G_{d+2}}\int d^{d+2}x\sqrt{-g}\nonumber\\ & \cdot\left[ R-\frac{f\left( \phi\right) }{4}F^{2}-\dfrac{1}{2}\left( \partial_{\mu}\phi\right) ^{2}-V\left( \phi\right) \right] , \end{align} where $f\left( \phi\right) $ is a positive-definite gauge kinetic function.
The equations of motion are derived as
\begin{align}
& \nabla^{2}\phi=V_{\phi}+\frac{1}{4}f_{\phi}F^{2},\\
& \nabla_{\mu}\left[ fF^{\mu\nu}\right] =0,\\
& R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\frac{f}{2}\left( F_{\mu\rho}F_{\nu}^{\rho}-\frac{1}{4}g_{\mu\nu}F^{2}\right) \nonumber\\
& +\frac{1}{2}\left[ \partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\left( \partial\phi\right) ^{2}-g_{\mu\nu}V\right] .
\end{align}
To study the general asymptotic AdS black hole backgrounds with spherical symmetry, we use the following ansatz for the fields,
\begin{align}
ds^{2} & =\frac{e^{2A\left( z\right) }}{z^{2}}\left[ -g(z)dt^{2}+\frac{dz^{2}}{g(z)}+d\vec{x}^{2}\right] ,\label{metric}\\
\phi & =\phi\left( z\right) ,\quad A_{\mu}=A_{t}\left( z\right) dt.
\end{align}
The equations of motion reduce to
\begin{align}
\phi^{\prime\prime}+\left( \frac{g^{\prime}}{g}+dw^{\prime}\right) \phi^{\prime}+\frac{A_{t}^{\prime2}f_{\phi}}{2ge^{2w}}-\frac{e^{2w}V_{\phi}}{g} & =0,\\
A_{t}^{\prime\prime}+\left[ \frac{f^{\prime}}{f}+\left( d-2\right) w^{\prime}\right] A_{t}^{\prime} & =0,\\
w^{\prime\prime}-w^{\prime2}+\dfrac{\phi^{\prime2}}{2d} & =0,\\
g^{\prime\prime}+dw^{\prime}g^{\prime}-\frac{fA_{t}^{\prime2}}{e^{2w}} & =0,\\
w^{\prime\prime}+dw^{\prime2}+\dfrac{3g^{\prime}}{2g}w^{\prime}+\dfrac{g^{\prime\prime}+2e^{2w}V}{2dg} & =0,
\end{align}
where we defined $w\left( z\right) =A\left( z\right) -\ln z$. Given the boundary conditions of regularity at the horizon $z=z_{H}$,
\begin{equation}
g\left( z_{H}\right) =A_{t}\left( z_{H}\right) =0,
\end{equation}
and asymptotic AdS spacetime at the boundary $z=0$,
\begin{equation}
g(0)=f\left( 0\right) =1,\quad A\left( 0\right) =A^{\prime}\left( 0\right) =0,
\end{equation}
the most general black hole solutions can be analytically obtained as
\begin{align}
\phi & =\int_{0}^{z}\sqrt{2d\left( w^{\prime2}-w^{\prime\prime}\right) }dy,\label{phip}\\
A_{t} & =\mu\frac{\int_{z}^{z_{H}}\dfrac{e^{\left( 2-d\right) w}}{f}dy}{\int_{0}^{z_{H}}\dfrac{e^{\left( 2-d\right) w}}{f}dy}=\mu-\rho z^{d-1}+\cdots,\label{At}\\
g & =1-\frac{\int_{0}^{z}e^{-dw}dy}{\int_{0}^{z_{H}}e^{-dw}dy}\nonumber\\
& +\dfrac{\mu^{2}\left\vert
\begin{array}[c]{cc}
\int_{0}^{z_{H}}e^{-dw}dy & \int_{0}^{z_{H}}e^{-dw}dy\int_{0}^{y}\dfrac{e^{\left( 2-d\right) w}}{f}dx\\
\int_{z_{H}}^{z}e^{-dw}dy & \int_{z_{H}}^{z}e^{-dw}dy\int_{0}^{y}\dfrac{e^{\left( 2-d\right) w}}{f}dx
\end{array}
\right\vert }{\int_{0}^{z_{H}}e^{-dw}dz\left( \int_{0}^{z_{H}}\dfrac{e^{\left( 2-d\right) w}}{f}dz\right) ^{2}},\label{g}\\
V & =-\frac{e^{-2w}}{2}\left( 2dgw^{\prime\prime}+2d^{2}gw^{\prime2}+3dg^{\prime}w^{\prime}+g^{\prime\prime}\right) ,\label{V}
\end{align}
where $\mu$ is the chemical potential and $\rho$ is the baryon density
\begin{equation}
\rho=\dfrac{\mu}{\left( d-1\right) \int_{0}^{z_{H}}\dfrac{e^{\left( 2-d\right) w}}{f}dy}.\label{rho}
\end{equation}
In the solution Eqs. (\ref{phip}-\ref{V}), the warped factor $w\left( z\right) $ and the gauge kinetic function $f\left( \phi\right) $ are two arbitrary functions. We should note that, to guarantee the scalar field $\phi$ to be real, the warped factor $w\left( z\right) $ needs to satisfy the condition $w^{\prime2}\geq w^{\prime\prime}$, which leads to $A^{\prime\prime}\left( 0\right) \leq0$.
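Since the expressions above involve only one-dimensional integrals, they are easy to evaluate numerically. As an illustration (not part of the derivation, and not the model of \cite{1703.09184}), the following sketch picks a sample warp factor $A(z)=-z^{2}$, i.e. $w(z)=-z^{2}-\ln z$, sets $f=1$ and $d=3$, and checks the boundary conditions $g(0)=1$, $g(z_H)=0$ of the $\mu=0$ branch of Eq. (\ref{g}) together with the reality condition $w^{\prime2}\geq w^{\prime\prime}$:
\begin{verbatim}
# Numerical check of the mu = 0 background for a sample warp factor.
# Illustrative assumptions: d = 3, f = 1, A(z) = -z^2, z_H = 1.
import numpy as np
from scipy.integrate import quad

d, zH = 3, 1.0
w   = lambda z: -z**2 - np.log(z)      # w(z) = A(z) - ln z
wp  = lambda z: -2*z - 1.0/z           # w'(z)
wpp = lambda z: -2 + 1.0/z**2          # w''(z)

# Reality condition w'^2 >= w'' (equivalently A''(0) <= 0) on a grid.
zs = np.linspace(1e-3, zH, 200)
assert np.all(wp(zs)**2 >= wpp(zs))

# g(z) = 1 - int_0^z e^{-d w} dy / int_0^{z_H} e^{-d w} dy   (mu = 0 branch)
I = lambda z: quad(lambda y: np.exp(-d*w(y)), 0.0, z)[0]
g = lambda z: 1.0 - I(z)/I(zH)
print(g(1e-6), g(zH))   # approx 1 and 0, as the boundary conditions require
\end{verbatim}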
The entropy density and the temperature of the black hole can be obtained from the background as
\begin{align}
s & =\dfrac{e^{dw\left( z_{H}\right) }}{4},\label{s}\\
T & =\left\vert \dfrac{g^{\prime}\left( z_{H}\right) }{4\pi}\right\vert =T_{0}\left[ 1-\frac{\mu^{2}\int_{0}^{z_{H}}e^{-dw}dz\int_{z}^{z_{H}}\dfrac{e^{\left( 2-d\right) w}}{f}dy}{\left( \int_{0}^{z_{H}}\dfrac{e^{\left( 2-d\right) w}}{f}dz\right) ^{2}}\right] ,\label{T}
\end{align}
where
\begin{equation}
T_{0}=\frac{e^{-dw\left( z_{H}\right) }}{4\pi\int_{0}^{z_{H}}e^{-dw}dz},\label{T0}
\end{equation}
is the black hole temperature at $\mu=0$. It is easy to see that the high temperature limit corresponds to $z_{H}\rightarrow0$.
\section{Speed of Sound}
In the grand canonical ensemble with fixed chemical potential, the squared speed of sound can be calculated as
\begin{equation}
c_{s}^{2}=\frac{s}{T\left( \frac{\partial s}{\partial T}\right)_{\mu}+\mu\left( \frac{\partial\rho}{\partial T}\right)_{\mu}}.\label{cs}
\end{equation}
Plugging Eqs. (\ref{s}-\ref{T}) into Eq. (\ref{cs}), it is straightforward to obtain
\begin{equation}
c_{s}^{2}=\frac{c_{s0}^{2}+a\left( 1+c_{s0}^{2}\right) \left[ 1-b\left( T_{0}-T\right) \right] \tilde{c}_{s0}^{2}}{1+a\left( 1+c_{s0}^{2}\right) },\label{cs2}
\end{equation}
where
\begin{align}
\tilde{c}_{s0}^{2} & =\frac{d-1}{16\pi},\\
a & =\frac{\left( d-1\right) \rho^{2}}{\pi T_{0}Tf\left( z_{H}\right) e^{2\left( d-1\right) w\left( z_{H}\right) }}\geq0,\label{a}\\
b & =\frac{8\pi}{\left( d-1\right) \mu\rho\, e^{-dw\left( z_{H}\right) }}\geq0,
\end{align}
and
\begin{equation}
c_{s0}^{2}=-1-\frac{e^{-dw\left( z_{H}\right) }}{dw^{\prime}\left( z_{H}\right) \left( \int_{0}^{z_{H}}e^{-dw}dz\right) },\label{cs0}
\end{equation}
is the squared speed of sound at $\mu=0$.
\noindent\textbf{Zero Chemical Potential.} First, we will study the properties of the speed of sound at zero chemical potential, i.e. $\mu=0$. It is well known that the squared speed of sound in QCD approaches the conformal limit $1/3$ in the high temperature limit. From Eq. (\ref{cs0}), it is easy to show that
\begin{equation}
\lim_{T\rightarrow\infty}c_{s0}^{2}=\lim_{z_{H}\rightarrow0}\left[ -1-\frac{e^{-dw\left( z_{H}\right) }}{dw^{\prime}\left( z_{H}\right) \left( \int_{0}^{z_{H}}e^{-dw}dz\right) }\right] =\frac{1}{d},\label{conformal}
\end{equation}
is a universal quantity that is model independent, i.e. independent of the choice of the functions $w\left( z\right) $ and $f\left( z\right) $ in the above solution Eqs. (\ref{phip}-\ref{V}). Eq. (\ref{conformal}) applies to $\left( d+1\right) $-dimensional QCD, which is the natural generalization of $\left( 3+1\right) $-dimensional QCD. To further investigate the behavior of the speed of sound in the high temperature limit, we expand $c_{s0}^{2}$ at $z_{H}=0$,
\begin{equation}
c_{s0}^{2}=\frac{1}{d}+\frac{3\left( d+1\right) }{d\left( d+3\right) }A_{0}^{\prime\prime}z_{H}^{2}+O\left( z_{H}^{4}\right) .\label{cs0T}
\end{equation}
Since $A^{\prime\prime}\left( 0\right) \leq0$, the coefficient of $z_{H}^{2}$ in Eq. (\ref{cs0T}) is always negative. It indicates that, in the high temperature limit, the squared speed of sound approaches $1/d$ from below.
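This limit is easy to verify numerically. Using again the sample warp factor $w(z)=-z^{2}-\ln z$ and $d=3$ introduced in the sketch above (illustrative choices of ours, not a fit to any particular model), the following short sketch evaluates Eq. (\ref{cs0}) for decreasing $z_{H}$ and shows $c_{s0}^{2}$ approaching $1/d$ from below:
\begin{verbatim}
# c_{s0}^2 from Eq. (cs0) for a sequence of horizon positions z_H.
# Illustrative choices: d = 3, w(z) = -z^2 - ln z.
import numpy as np
from scipy.integrate import quad

d  = 3
w  = lambda z: -z**2 - np.log(z)
wp = lambda z: -2*z - 1.0/z

def cs0_sq(zH):
    integral = quad(lambda y: np.exp(-d*w(y)), 0.0, zH)[0]
    return -1.0 - np.exp(-d*w(zH)) / (d * wp(zH) * integral)

for zH in [0.5, 0.2, 0.1, 0.01]:
    print(zH, cs0_sq(zH))   # increases towards 1/d = 1/3 from below as z_H -> 0
\end{verbatim}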
To obtain the bound for the speed of sound at arbitrary temperature, we calculate the extremum of the speed of sound by differentiating it with respect to $z_{H}$,
\begin{equation}
\frac{dc_{s0}^{2}}{dz_{H}}=\frac{e^{-dw\left( z_{H}\right) }}{\int_{0}^{z_{H}}e^{-dw}dz}\left( \frac{w^{\prime\prime}\left( z_{H}\right) }{dw^{\prime2}\left( z_{H}\right) }-c_{s0}^{2}\right) =0,
\end{equation}
which leads to
\begin{equation}
c_{s0}^{2}=\frac{w^{\prime\prime}\left( z_{H}\right) }{dw^{\prime2}\left( z_{H}\right) }\leq\frac{1}{d},\label{bound}
\end{equation}
in which we have used the condition $w^{\prime2}\geq w^{\prime\prime}$. We therefore conclude that, at zero chemical potential $\mu=0$, the squared speed of sound approaches its maximum value, the conformal limit $1/d$, in the high temperature limit for $\left( d+1\right) $-dimensional QCD. This is consistent with the recent results from lattice QCD \cite{Phy.Rep.}.
\noindent\textbf{Finite Chemical Potential. }Next, we will study the properties of the speed of sound at finite chemical potential, i.e. $0<\mu<\infty$. In the high temperature limit, $T,T_{0}\rightarrow\infty$ and $a\rightarrow0$. By using Eq. (\ref{cs2}), it is easy to verify that
\begin{equation}
\lim_{T\rightarrow\infty}c_{s}^{2}=\lim_{T\rightarrow\infty}c_{s0}^{2}=\frac{1}{d}.\label{highT}
\end{equation}
Thus the speed of sound in $\left( d+1\right) $-dimensional QCD approaches the same universal value $1/d$ in the high temperature limit even for finite chemical potential. Similarly, we expand $c_{s}^{2}$ at $z_{H}=0$,
\begin{align}
c_{s}^{2} & =\frac{1}{d}+\left[ \frac{3\left( d+1\right) }{d\left( d+3\right) }A_{0}^{\prime\prime}+\frac{\mu^{2}\left( d-1\right) ^{2}}{d^{2}\left( d+1\right) }\left( 1-\frac{1}{\tilde{c}_{s0}^{2}}\right) \right] z_{H}^{2}\nonumber\\
& +O\left( z_{H}^{4}\right) .\label{csT}
\end{align}
Since $A^{\prime\prime}\left( 0\right) \leq0$, the coefficient of $z_{H}^{2}$ in Eq. (\ref{csT}) is always negative for $d<1+16\pi\simeq52$. This indicates that, in the high temperature limit, the speed of sound also approaches $1/d$ from below for finite chemical potential. Furthermore, it is easy to see that $T<T_{0}$ from Eq. (\ref{T}), which implies, from Eq. (\ref{cs2}), that
\begin{equation}
c_{s}^{2}\leq\frac{c_{s0}^{2}+a\left( 1+c_{s0}^{2}\right) \tilde{c}_{s0}^{2}}{1+a\left( 1+c_{s0}^{2}\right) }\leq\max\left( c_{s0}^{2},\tilde{c}_{s0}^{2}\right) .
\end{equation}
In Eq. (\ref{bound}), we have proved $c_{s0}^{2}<1/d$, and it is easy to show that $\tilde{c}_{s0}^{2}=\left( d-1\right) /16\pi<1/d$ for $d\leq7$. We thus conclude that, at finite chemical potential, the squared speed of sound approaches its maximum value, the conformal limit $1/d$, in the high temperature limit for $\left( d+1\right) $-dimensional QCD, at least for $d\leq7$.
\noindent\textbf{Large Chemical Potential. }For large chemical potential, a new phase, color-flavor-locking or color superconductivity, has been conjectured in QCD. It is thus interesting to investigate the behavior of the speed of sound at large chemical potential. By using Eq. (\ref{rho}) and Eq. (\ref{T}), we rewrite the speed of sound Eq. (\ref{cs2}) in the following form,
\begin{equation}
c_{s}^{2}=\frac{c_{s0}^{2}+a\left( 1+c_{s0}^{2}\right) \left[ 1-\frac{2\int_{0}^{z_{H}}e^{-dw}dz\int_{z}^{z_{H}}\dfrac{e^{\left( 2-d\right) w}}{f}dy}{\int_{0}^{z_{H}}e^{-dw}dz\int_{0}^{z_{H}}\dfrac{e^{\left( 2-d\right) w}}{f}dz}\right] \tilde{c}_{s0}^{2}}{1+a\left( 1+c_{s0}^{2}\right) }.
\end{equation}
We should be more careful when taking the large chemical potential limit because, from Eq. (\ref{T}), the temperature becomes negative when $\mu$ exceeds a critical value, which certainly does not make sense physically. To take the large chemical potential limit at a fixed temperature, the correct procedure is to take the double limit $\mu\rightarrow\infty$ with $z_{H}\rightarrow0$ together. Under the double limit, $a\rightarrow\infty$ and the speed of sound reduces to
\begin{equation}
\lim_{\substack{\mu\rightarrow\infty\\z_{H}\rightarrow0}}c_{s}^{2}=\frac{\tilde{c}_{s0}^{2}}{d}=\frac{d-1}{16\pi d}.\label{csmu}
\end{equation}
Instead of the conformal limit $c_{s}^{2}\rightarrow1/d$ of the high temperature limit, we find another universal quantity $c_{s}^{2}\rightarrow\left( d-1\right) /16\pi d$ in the limit of infinite chemical potential. Eq. (\ref{csmu}) shows that the speed of sound does not approach the conformal limit at large chemical potential. This implies that the theory at large chemical potential is in a new phase which is different from the phase at high temperature, as has been conjectured.
\noindent\textbf{Low Temperature. }We further analyze the behavior of the speed of sound in the low temperature limit $T\rightarrow0$, which leads to $a\rightarrow\infty$. From Eq. (\ref{cs2}), we have
\begin{equation}
\lim_{T\rightarrow0}c_{s}^{2}=\tilde{c}_{s0}^{2}=\frac{d-1}{16\pi}.\label{lowT}
\end{equation}
Remarkably, we find one more universal quantity $c_{s}^{2}\rightarrow\left( d-1\right) /16\pi$ in the low temperature limit. Finally, we present an explicit example of a $(3+1)$-dimensional holographic QCD model by taking the warped factor $w\left( z\right) $ and gauge kinetic function $f\left( \phi\right) $ in \cite{1703.09184}. For $d=3$, the universal quantities that we found in Eqs. (\ref{highT},\ref{csmu},\ref{lowT}) reduce to
\begin{align}
\text{at large }T & :c_{s}^{2}=\frac{1}{d}\rightarrow\frac{1}{3},\\
\text{at small }T & :c_{s}^{2}=\frac{d-1}{16\pi}\rightarrow\frac{1}{8\pi},\\
\text{at large }\mu & :c_{s}^{2}=\frac{d-1}{16\pi d}\rightarrow\frac{1}{24\pi}.
\end{align}
We plot the squared speed of sound vs. temperature and chemical potential in Fig.~\ref{figT} and Fig.~\ref{figmu}, respectively. The behaviors of the sound speed in the figures perfectly match the analysis in this work.
\begin{figure}[t]
\begin{center}
\includegraphics[height=2in, width=2.6in]{cs2_1.eps}
\end{center}
\caption{Squared speed of sound vs. temperature at $\mu=0, 0.4779, 0.08, 0.12$ from upper to lower lines.}
\label{figT}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[height=2in, width=2.6in]{cs2mu_1.eps}
\end{center}
\caption{Squared speed of sound vs. chemical potential at $T=0.2, 0.3, 0.5, 1, 3, 10, 20, 50$ from lower to upper lines.}
\label{figmu}
\end{figure}
\section{Summary}
In this work, we studied the gauge/gravity correspondence by considering the $\left( d+2\right) $-dimensional Einstein-Maxwell-Scalar system, which has been widely studied as a successful class of holographic QCD models. We analytically obtained a general class of back-reacted solutions and focused on the behavior of the speed of sound at arbitrary temperature and chemical potential in $\left( d+1\right) $-dimensional holographic QCD. We found that, in various limits, the speed of sound approaches certain universal quantities which are not dependent on the details of the models.
The universal behaviors of the speed of sound we have found in this work are as follows: \begin{itemize} \item $c_{s}^{2}\rightarrow\frac{1}{d}$, as $T\rightarrow\infty$ with fixed $\mu$; \item $c_{s}^{2}\rightarrow\frac{d-1}{16\pi}$, as $T\rightarrow0$ with fixed $\mu$; \item $c_{s}^{2}\rightarrow\frac{d-1}{16\pi d}$, as $\mu\rightarrow\infty$ with fixed $T$. \end{itemize} We also proved that $c_{s}^{2}\leq1/d$ for all temperatures and chemical potentials (at least for $d\leq7$), provided that the solution of the scalar $\phi$ in Eq. (\ref{phip}) is real. Investigating these universal quantities in further detail is not only attractive for QCD theory, e.g. regarding the new phase at large chemical potential, but also important for understanding the deep structure of the gauge/gravity correspondence. \begin{acknowledgments} We would like to thank Carlos Hoyos for useful discussions. This work is supported in part by the Ministry of Science and Technology and the S.T. Yau center at NCTU, Taiwan. \end{acknowledgments}
2,869,038,156,223
arxiv
\section{Introduction} The problem of inferring unknown parameters associated with the solution of (partial) differential equations (PDEs) is referred to as an inverse problem. In such a context, when the forward problem is well-posed, the inverse problem is often ill-posed and challenging to solve, even numerically. The area has a long history and a large literature (see e.g.~\cite{engl,tikhonov}), yet the intersection with statistics is still comparatively small, particularly considering the significant overlap in terms of methods, algorithms and objectives. If one adopts a Bayesian approach to the solution of the inverse problem, then the object of interest is a posterior distribution, and in particular expectations with respect to this distribution \cite{franklin, stuart}. While this provides an elegant solution and quantified uncertainty via a well-defined target distribution, it is more challenging to solve than its deterministic counterpart, requiring at least a Hessian in addition to a maximum a posteriori estimator for a Laplace approximation, if not more expensive Monte Carlo methods. Here we assume that solution of the Bayesian inverse problem (BIP) requires computationally intensive Monte Carlo methods for accurate estimation. We furthermore assume that the statistical model can only be defined up to some unknown parameters. Consider a BIP with unknown $u\in \mathsf{X}$ and data $y \in \mathsf{Y}$, related through a PDE, and assume that the statistical model is known only up to some parameter $\theta \in \Theta\subseteq\mathbb{R}^{d_{\theta}}$ (assumed finite dimensional). In other words, the posterior distribution takes the form $$ p(du, \theta | y) \propto p(y | u, \theta) p(du | \theta) p(\theta) \, . $$ Due to sensitivity with respect to the parameter $\theta$ and strong correlation with the unknown $u$, such a posterior distribution can be highly complex and very challenging to sample from, even using quite advanced Markov chain Monte Carlo (MCMC) algorithms. In this article, the unknown $u$ is treated as a nuisance parameter and the goal is to maximize the marginal likelihood of the parameters $$ p(y | \theta) = \int_\mathsf{X} p(y| u, \theta) p(du|\theta) \, . $$ In such a scenario one is left with a finite-dimensional optimization problem, albeit with an objective function that is not available analytically. This intractability arises from two sources: \begin{itemize} \item first, for a given $(u,\theta)$ only a discretization of the likelihood $p(y| u, \theta)$ can be evaluated; \item second, the discretized marginal likelihood is a high-dimensional integral which itself must be approximated. \end{itemize} Moreover, the associated gradient of the log-likelihood, which may be of interest in optimization algorithms, is also not available. In the following we will suppress the notation for the fixed observation $y$ and present the method generally. In particular, we use the notation $\gamma_\theta(u) = p(y | u, \theta) p(u | \theta)$, where $du$ represents the finite measure of an infinitesimal volume element, which may or may not be Lebesgue measure, and $p(u | \theta) = (p(du|\theta)/du)(u)$. We will also denote its integral $Z_\theta = p(y | \theta)$, and the posterior by $\eta_\theta(du)$. In this article, we present a new scheme to provide finite variance estimates of the gradient of the log-likelihood that are unbiased. To be precise, let $E_{\theta} = \nabla_\theta \log ( Z_\theta )$ denote the gradient of the log-likelihood with no discretization bias.
The proposed method provides an estimator $\hat{E}_{\theta}$ such that $\mathbb{E}[\hat{E}_{\theta}]=E_{\theta}$, where $\mathbb{E}$ is the expectation with respect to the randomization induced by our numerical approach. Moreover, the estimator $\hat{E}_{\theta}$ is constructed so that one only needs access to finite resolution (discretized) approximations of the BIP. This scheme is of interest for several reasons: \begin{enumerate} \item{Unbiased estimates of gradients help to facilitate stochastic gradient algorithms}; \item{The method is easy to parallelize}; \item{The method helps to provide a benchmark for other computations.} \end{enumerate} In terms of the first point, it is often simpler to verify the validity of stochastic gradient algorithms when the estimate of the noisy functional is unbiased. Whilst this is not always needed (see \cite{tadic} for a special case, which does not apply in our context), it at least provides the user with peace of mind when implementing optimization schemes. The second point is of interest in terms of efficiency of application, especially relative to competing methods. The third point simply states that one can check the precision of biased methodology. We now explain the approach in a little more detail. The method that we use is based upon a technique developed in \cite{ubpf}. In that article the authors consider the filtering of a class of diffusion processes, which have to be discretized. The authors develop a method which allows one to approximate the filtering distribution, unbiasedly and without any discretization error. The methodology that is used in \cite{ubpf} is a double randomization scheme based upon the approaches in \cite{mcl,rhee}. The work in \cite{mcl,rhee} provides a methodology to turn a sequence of convergent estimators into an unbiased estimator, using judicious randomization across the level of discretization. It is determined for the problem of interest in \cite{ubpf} that an additional randomization is required in order to derive efficient estimators, that is, estimators that are competitive with the existing state-of-the-art methods in the literature. In this article we follow the basic approach that is used in \cite{ubpf}, except that one cannot use the same estimation methodology for the current problem. An approach is introduced in \cite{beskos} which enables application of the related deterministic multilevel Monte Carlo identity \cite{vihola} to a sequential Monte Carlo (SMC) sampler \cite{delm:13, delm:04} for inference in the present context. In this article, we consider such a strategy to allow the application of the approach in \cite{ubpf} to unbiasedly estimate the gradient of the log-likelihood for BIPs. The method of \cite{beskos} is one of the most efficient techniques that could be used for estimation of the gradient of the log-likelihood for BIPs. However, this method is subject to discretization bias. In other words, suppose $E_{\theta}^l$ is the gradient of the log-likelihood with a choice of discretization bias level, e.g. $2^{-l}$. The original method would produce an estimate $\hat{E}_{\theta}^l$ for which $\mathbb{E}[\hat{E}_{\theta}^l] \neq E_{\theta}^l$. On the other hand, under assumptions, it is proven that the new method introduced here can produce an estimate $\hat{E}_{\theta}$ with finite variance and without bias, i.e. $\mathbb{E}[\hat{E}_{\theta}] = E_{\theta}$.
We also show that, with high probability, the cost to achieve a given variance is very similar to that of the multilevel SMC (MLSMC) approach of \cite{beskos}. This is confirmed in numerical simulations. We furthermore numerically investigate the utility of our new estimator in the context of stochastic gradient algorithms, where it is shown that a huge improvement in efficiency is possible. Our approach is one of the first which can in general provide unbiased and finite variance estimators of the gradient of the log-likelihood for BIPs. A possible alternative would be the approach of \cite{sergios}; however, the methodology in that article is not as general as is presented here and may be more challenging to implement. This article is structured as follows. In Section \ref{sec:problem} we explain the generic problem to which our approach is applicable. In particular, a concrete example in the context of Bayesian inverse problems is described. In Section \ref{sec:method} we present our methodology and the proposed estimator. In Section \ref{sec:theory} we show that our proposed estimator is unbiased and of finite variance and we consider the cost to obtain the estimate. In Section \ref{sec:numerics} several numerical examples are presented to investigate the performance of the estimator in practice, including the efficiency of the estimator when used in the relevant context of a stochastic gradient algorithm for parameter estimation. In Appendix \ref{app:proofs} the proofs of some of our theoretical results can be found. \section{Problem Setting}\label{sec:problem} \subsection{Generic Problem} Let $(\mathsf{X},\mathcal{X})$ be a measurable space, and define a probability measure on it as $$ \eta_{\theta}(du) = \frac{\gamma_{\theta}(u)du}{\int_{\mathsf{X}}\gamma_{\theta}(u)du} $$ where $\theta\in\Theta\subseteq\mathbb{R}^{d_{\theta}}$, $\gamma:\Theta\times\mathsf{X}\rightarrow\mathbb{R}_+$ and $du$ is a $\sigma-$finite measure on $(\mathsf{X},\mathcal{X})$. We are interested in computing \begin{eqnarray*} \nabla_{\theta}\log\Big(\int_{\mathsf{X}}\gamma_{\theta}(u)du\Big) & = & \int_{\mathsf{X}}\nabla_{\theta}\log\Big(\gamma_{\theta}(u)\Big) \eta_{\theta}(du) \\ & = & \int_{\mathsf{X}}\varphi_{\theta}(u) \eta_{\theta}(du) \, , \nonumber \end{eqnarray*} where we have defined $\varphi_{\theta}(u) = \nabla_{\theta}\log\Big(\gamma_{\theta}(u)\Big)$. From here on, we will use the following short-hand notation for a measure $\mu$ on $(\mathsf{X},\mathcal{X})$ and a measurable $\mu-$integrable $\varphi:\mathsf{X}\rightarrow\mathbb{R}^d$ $$\mu(\varphi):=\int_{\mathsf{X}}\varphi(x)\mu(dx) \, ,$$ which should be understood as a column vector of integrals. In practice, we assume that we must work with an approximation of $\varphi_{\theta}(u)$ and $\eta_{\theta}(du)$. Let $l\in\mathbb{N}_0$, and set $$ \eta_{\theta}^l(du) = \frac{\gamma_{\theta}^l(u)du}{\int_{\mathsf{X}}\gamma_{\theta}^l(u)du} $$ where $\gamma^l:\Theta\times\mathsf{X}\rightarrow\mathbb{R}_+$. We are now interested in computing \begin{eqnarray*} \nabla_{\theta}\log\Big(\int_{\mathsf{X}}\gamma_{\theta}^l(u)du\Big) & = & \int_{\mathsf{X}}\nabla_{\theta}\log\Big(\gamma_{\theta}^l(u)\Big) \eta_{\theta}^l(du) \\ & = & \int_{\mathsf{X}}\varphi_{\theta}^l(u) \eta_{\theta}^l(du). \nonumber \end{eqnarray*} It is assumed explicitly that $\forall \theta\in\Theta$ $$ \lim_{l\rightarrow+\infty}\eta_{\theta}^l(\varphi_{\theta}^l) = \eta_{\theta}(\varphi_{\theta}).
$$ \subsection{Example of Problem}\label{sec:example} We will focus on the following particular problem. Let $D\subset\mathbb{R}^d$ with $\partial D\in C^1$ convex and $f\in L^2(D)$. Consider the following PDE on $D$: \begin{align} -\nabla \cdot (\hat{u}\nabla p) &=f,\quad \textrm{ on } D, \\ p&= 0, \quad \textrm{ on } \partial D, \nonumber \end{align} where $$ \hat{u}(x) = \bar{u}(x) + \sum_{k=1}^Ku_k\sigma_k\phi_k(x). $$ Define $u=\{u_k\}_{k=1}^K$, with $u_k \sim U[-1,1]$ i.i.d. (the uniform distribution on $[-1,1]$). This determines the prior distribution for $u$. The state space is $\mathsf{X}=\prod_{k=1}^K[-1,1]$. Let $p(\cdot;u)$ denote the weak solution of $(1)$ for parameter value $u$. The following will be assumed. \begin{hypA} \label{hyp:N} $f, \phi_k \in C(D)$, $\|\phi_k\|_\infty \leq 1$, and there is a $u_*>0$ such that $\bar{u}(x) > \sum_{k=1}^K\sigma_k + u_*$. \end{hypA} Note that this assumption guarantees $\hat u > u_*$ uniformly in $u$, hence there is a well-defined (weak) solution $p(\cdot ;u)$ which will be bounded uniformly in $u$, in an appropriate space, e.g. $L^2(D)$. Define the following vector-valued function $$ \mathcal{G}(u) = [g_1(p(\cdot;u)),\dots,g_M(p(\cdot;u))]^{\intercal}, $$ where $g_m$ are elements of the dual space (e.g. $L^2(D)$ is sufficient), for $m=1,\dots,M$. It is assumed that the data take the form $$ y = \mathcal{G}(u) + \xi,\quad \xi \sim N(0,\theta^{-1}\cdot\bm{I}_M),\quad \xi \perp u, $$ where $N(0,\theta^{-1}\cdot\bm{I}_M)$ denotes the Gaussian random variable with mean $0$ and covariance matrix $\theta^{-1}\cdot\bm{I}_M$, and $\perp$ denotes independence. The unnormalized density of $u$ for fixed $\theta$ is then given by \begin{equation}\label{eq:unno} \gamma_{\theta}(u) = \theta^{M/2}\exp(-\frac{\theta}{2}\|\mathcal{G}(u) - y\|^2) \, , \end{equation} the normalized density is given by $$ \eta_{\theta}(u) = \frac{\gamma_{\theta}(u)} {Z_\theta} \, , $$ where $Z_\theta = {\int_{\mathsf X}\gamma_{\theta}(u)du}$, and the quantity of interest is defined as \begin{equation}\label{eq:phi} \varphi_\theta(u) := \nabla_{\theta}\log\Big(\gamma_{\theta}(u)\Big) = \frac{M}{2\theta} - \frac{1}{2}\|\mathcal{G}(u) - y\|^2 \, . \end{equation} \subsubsection{Particular setup}\label{ssec:example} Let $d=1$ and $D=[0,1]$ and consider $f(x)=100x$. For the prior specification of $u$, we set $K=2, \bar{u}(x)=0.15$, and for $k>0$, let $\sigma_k=(2/5)4^{-k}, \phi_k(x)=\sin(k\pi x)$ if $k$ is odd and $\phi_k(x)=\cos(k\pi x)$ if $k$ is even. The observation operator is $\mathcal{G}(u)=[p(0.25;u),p(0.75;u)]^{\intercal}$, and the parameter in the observation noise covariance is taken to be $\theta=0.3$. The PDE problem at resolution level $l$ is solved using a finite element method with piecewise linear shape functions on a uniform mesh of width $h_l=2^{-l}$, for $l\geq2$. Thus, on the $l$th level the finite-element basis functions are $\{\psi_i^l\}_{i=1}^{2^l-1}$ defined as (for $x_i = i\cdot 2^{-l}$): $$ \psi_i^l(x) = \left\{\begin{array}{ll} (1/h_l)[x-(x_i-h_l)] & \textrm{if}~x\in[x_i-h_l,x_i], \\ (1/h_l)[x_i+h_l-x] & \textrm{if}~x\in[x_i,x_i+h_l]. \end{array}\right.
$$ To solve the PDE, $p^l(x)=\sum_{i=1}^{2^l-1}p_i^l\psi_i^l(x)$ is plugged into (1), and projected onto each basis element: $$ -\Big\langle \nabla\cdot\Big(\hat{u}\nabla\sum_{i=1}^{2^l-1}p_i^l \psi_i^l \Big),\psi_j^l \Big\rangle = \langle f, \psi_j^l \rangle, $$ resulting in the following linear system: $$ \bm{A}^l(u)\bm{p}^l = \bm{f}^l, $$ where we introduce the matrix $\bm{A}^l(u)$ with entries $A_{ij}^l(u) = \langle \hat{u}\nabla\psi_i^l,\nabla\psi_j^l \rangle$, and vectors $\bm{p}^l, \bm{f}^l$ with entries $p_i^l$ and $f_i^l=\langle f, \psi_i^l\rangle$, respectively. Define $\mathcal{G}^l(u) = [g_1(p^l(\cdot;u)),\dots,g_M(p^l(\cdot;u))]^{\intercal}$. Denote the corresponding approximated unnormalized density by \begin{equation}\label{eq:unnol} \gamma_{\theta}^l(u) = \theta^{M/2}\exp(-\frac{\theta}{2}\|\mathcal{G}^l(u) - y\|^2), \end{equation} and the approximated normalized density by $$ \eta_{\theta}^l(u) = \frac{\gamma_{\theta}^l(u)} {Z_\theta^l} \, , $$ where $Z_\theta^l = {\int_{\mathsf{X}}\gamma_{\theta}^l(u)du}$. We further define \begin{equation}\label{eq:phil} \varphi^l_\theta(u) := \nabla_{\theta}\log\Big(\gamma_{\theta}^l(u)\Big) = \frac{M}{2\theta} - \frac{1}{2}\|\mathcal{G}^l(u) - y\|^2 \, . \end{equation} It is well-known that under assumption (A\ref{hyp:N}) $p^l$ converges to $p$ as $l \rightarrow \infty$ uniformly in $u$ \cite{brenner, ciarlet}. Furthermore, continuity ensures $\gamma_{\theta}^l(u)$ converges to $\gamma_{\theta}(u)$ and $\varphi^l_\theta(u)$ converges to $\varphi_\theta(u)$ uniformly in $u$ as well. \section{Methodology for Unbiased Estimation}\label{sec:method} We now describe our methodology for computing an unbiased estimate of $\eta_{\theta}(\varphi_{\theta})$. For simplicity of exposition we will suppose that for $i\in\{1,\dots,d_{\theta}\}$, $(\varphi_\theta(u))_i\in\mathcal{B}_b(\mathsf{X})$, where $(x)_i$ denotes the $i^{th}$ element of a vector and $\mathcal{B}_b(\mathsf{X})$ is the collection of bounded, measurable and real-valued functions on $\mathsf{X}$. This constraint is not needed for the numerical implementation of the method, but it shall reduce most of the technical exposition to follow. As remarked in the introduction, the basic approach follows that in \cite{ubpf} with some notable differences. We now detail how the approach will work. \subsection{Methodology in \cite{ubpf}} The underlying approach of \cite{ubpf} is a type of double randomization scheme. The first step is to use the single-term estimator as developed in \cite{rhee}. Suppose one wants to estimate $\eta_{\theta}(\varphi_{\theta})$, but only has access to a methodology that can approximate $\eta_{\theta}^l(\varphi_{\theta}^l)$ for each fixed $l\in\mathbb{N}_0$. Let $\mathbb{P}_L(l)$ be a positive probability mass function on $\mathbb{N}_0$ and suppose that one can construct a sequence of random variables $(\Xi_{\theta}^l)_{l\geq 0}$ such that \begin{eqnarray} \mathbb{E}[\Xi_{\theta}^0] & = & \eta_{\theta}^0(\varphi_{\theta}^0) \label{eq:ub1}\\ \mathbb{E}[\Xi_{\theta}^l] & = & \eta_{\theta}^l(\varphi_{\theta}^l) - \eta_{\theta}^{l-1}(\varphi_{\theta}^{l-1})\quad\quad l\in\mathbb{N}\label{eq:ub2} \end{eqnarray} and that \begin{equation}\label{eq:ub3} \sum_{l\in\mathbb{N}_0}\frac{1}{\mathbb{P}_L(l)}\mathbb{E}[\|\Xi_{\theta}^l\|^2] < +\infty \end{equation} where $\|\cdot\|$ is the $L_2-$norm. Now if one draws $L\sim\mathbb{P}_L(\cdot)$, then $\Xi_{\theta}^L/\mathbb{P}_{L}(L)$ is an unbiased and finite variance estimator of $\eta_{\theta}(\varphi_{\theta})$.
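To make the single-term randomization concrete, the following is a minimal Python sketch (not taken from \cite{rhee} or \cite{ubpf}) of the estimator $\Xi_{\theta}^{L}/\mathbb{P}_L(L)$. The function \texttt{sample\_increment} is a hypothetical placeholder that is assumed to return one realization of $\Xi_{\theta}^l$ satisfying \eqref{eq:ub1}-\eqref{eq:ub2}; its construction is the subject of the remainder of this section, and the truncation at \texttt{l\_max} is a simplification for the purpose of sampling.
\begin{verbatim}
import numpy as np

def single_term_estimator(sample_increment, p_L, l_max=20, rng=None):
    """Single-term (randomized level) debiasing sketch.

    sample_increment(l) : assumed to return one realization of Xi_theta^l,
                          i.e. an unbiased estimate of
                          eta^l(phi^l) - eta^{l-1}(phi^{l-1})
                          (or of eta^0(phi^0) when l = 0).
    p_L(l)              : positive probability mass function evaluated at l.
    """
    rng = np.random.default_rng() if rng is None else rng
    probs = np.array([p_L(l) for l in range(l_max + 1)])
    probs = probs / probs.sum()            # truncated/normalized for sampling
    L = rng.choice(l_max + 1, p=probs)     # draw the random level L ~ P_L
    return sample_increment(L) / probs[L]  # importance-weight by 1/P_L(L)
\end{verbatim}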
It should be noted that \eqref{eq:ub1}-\eqref{eq:ub2} are not necessary conditions, but are sufficient to ensure the unbiasedness of the estimator. In the context of interest, it can be challenging to obtain a sequence of random variables which can possess the properties \eqref{eq:ub1}-\eqref{eq:ub3}. We will detail one possible approach at a high level and then explain in detail how one can actually construct a simulation method to achieve this high-level description. \subsection{High Level Approach} \label{ssec:high} The objective of this section is to highlight the generic procedure that is used in \cite{ubpf} for producing estimates that satisfy \eqref{eq:ub1}-\eqref{eq:ub2}. The basic idea is to use another application of randomization to construct such unbiased estimators from a consistent sequence of estimators. In particular, consider a given increasing sequence $(N_p)_{p\in\mathbb{N}_0}$ with $N_p\in\mathbb{N}$ for each $p\in\mathbb{N}_0$, $1\leq N_0< N_1< \cdots$ and $\lim_{p\rightarrow\infty}N_p=\infty$. Then, we suppose that one can construct $N_p$-sample Monte Carlo (type) estimators $\xi_\theta^{l,p}$ for $l\in\mathbb{N}_0$, such that almost surely the following consistency results hold \begin{eqnarray}\label{eq:cons1} \lim_{p\rightarrow\infty}\xi_\theta^{0,p} & = & \eta_{\theta}^{0}(\varphi_{\theta}^0) \, ,\\ \lim_{p\rightarrow\infty} \xi_\theta^{l,p} & = &\eta_{\theta}^{l}(\varphi_{\theta}^l)-\eta_{\theta}^{l-1}(\varphi_{\theta}^{l-1}) \, , \quad\quad l\in\mathbb{N} \, . \label{eq:cons2} \end{eqnarray} For a given $(l,p,p')\in\mathbb{N}_0^3$, $p\neq p'$ we do \emph{not} require $\xi_\theta^{l,p}$ and $\xi_\theta^{l,p'}$ to be independent, nor do we require unbiasedness of the individual estimators as in \begin{eqnarray*} \mathbb{E}[\xi_\theta^{0,p}] & = & \eta_{\theta}^{0}(\varphi_{\theta}^0) \, , \\ \mathbb{E}[\xi_\theta^{l,p}] & = & \eta_{\theta}^{l}(\varphi_{\theta}^l)-\eta_{\theta}^{l-1}(\varphi_{\theta}^{l-1}) \, , \quad\quad l\in\mathbb{N} \, . \end{eqnarray*} Now set \begin{eqnarray*} \Xi_{\theta}^{0,0} & := & \xi_\theta^{0,0} \, ,\\ \Xi_{\theta}^{0,p} & := & \xi_\theta^{0,p} - \xi_\theta^{0,p-1} \, , \quad\quad p\in\mathbb{N} \, . \end{eqnarray*} For $l\in\mathbb{N}$ given, set \begin{eqnarray*} \Xi_{\theta}^{l,0} & := & \xi_\theta^{l,0} \, , \\ \Xi_{\theta}^{l,p} & := & \xi_\theta^{l,p} - \xi_\theta^{l,p-1} \, , \quad\quad p\in\mathbb{N} \, . \end{eqnarray*} Let $\mathbb{P}_P(p)$, $p\in\mathbb{N}_0$, be a positive probability mass function with $\overline{\mathbb{P}}_P(p)=\sum_{q=p}^{\infty}\mathbb{P}_P(q)$. Now if \begin{eqnarray} \sum_{p\in\mathbb{N}_0}\frac{1}{\overline{\mathbb{P}}_P(p)} \mathbb{E}[\| \xi_\theta^{0,p} -\eta_{\theta}^0(\varphi_{\theta}^0)\|^2] & < & +\infty \label{eq:cs_fv1} \, , \\ \sum_{p\in\mathbb{N}_0}\frac{1}{\overline{\mathbb{P}}_P(p)}\mathbb{E}[\| \xi_\theta^{l,p} - \{\eta_{\theta}^{l}(\varphi_{\theta}^l) - \eta_{\theta}^{l-1}(\varphi_{\theta}^{l-1})\} \|^2] & < & +\infty \, , \quad\quad l\in\mathbb{N}\label{eq:cs_fv2} \end{eqnarray} and $P\sim\mathbb{P}_P(\cdot)$, then \begin{equation}\label{eq:xi_l_def} \Xi_{\theta}^l = \sum_{p=0}^P \frac{1}{\overline{\mathbb{P}}_P(p)} \Xi_{\theta}^{l,p} \end{equation} will allow $(\Xi_{\theta}^l)_{l\in\mathbb{N}_0}$ to satisfy \eqref{eq:ub1}-\eqref{eq:ub2}, where expectations are understood to be with respect to $\mathbb{P}_P$ (the random variable $P$ is suppressed in the notation). Moreover $(\Xi_{\theta}^l)_{l\in\mathbb{N}_0}$ will have finite variances.
This result follows as we are simply using the coupled sum estimator as in \cite{rhee} and using \cite[Theorem 5]{vihola}, for instance, to verify the conditions required. \subsection{Details of the Approach} We will now describe how to obtain the sequence $(\Xi_{\theta}^{l,p})_{p\in\mathbb{N}_0}$ for $l\in\mathbb{N}_0$ fixed. \subsubsection{MLSMC Method of \cite{beskos}} To introduce our approach, we first consider the MLSMC method in \cite{beskos} which will form the basis for our estimation procedure. Define for $l\in\mathbb{N}_0$ $$ G_{\theta}^l(u) = \frac{\gamma_{\theta}^{l+1}(u)}{\gamma_{\theta}^{l}(u)} $$ and for $l\in\mathbb{N}$, $M_{\theta}^l$ is an $\eta_{\theta}^l-$invariant Markov kernel; that is, for any $\varphi\in\mathcal{B}_b(\mathsf{X})$ \begin{equation}\label{eq:m} \eta_{\theta}^l(\varphi) = \int_{\mathsf{X}}\Big(\int_{\mathsf{X}}\varphi(u') M_{\theta}^l(u,du')\Big)\eta_{\theta}^l(du). \end{equation} Define for $\mu\in\mathcal{P}(\mathsf{X})$ (the collection of probability measures on $(\mathsf{X},\mathcal{X})$), $l\in\mathbb{N}$ \begin{equation}\label{eq:Phi} \Phi_{\theta}^l(\mu)(du') := \frac{1}{\mu(G_{\theta}^{l-1})} \int_{\mathsf{X}}G_{\theta}^{l-1}(u)M_{\theta}^l(u,du') \mu(du) \, . \end{equation} Noting that \begin{equation}\label{eq:rat} \eta_{\theta}^l(\varphi) = \frac{\eta_{\theta}^{l-1}(G_{\theta}^{l-1}\varphi)}{\eta_{\theta}^{l-1}(G_{\theta}^{l-1})} = \frac{Z_\theta^{l-1}}{Z_\theta^l}\eta_{\theta}^{l-1}(G_{\theta}^{l-1}\varphi) = \frac{1}{Z_\theta^l} \int_{\mathsf{X}} ( \gamma_\theta^l(u) \varphi(u) ) du \, , \end{equation} equations \eqref{eq:m} and \eqref{eq:Phi} lead to the recursion \begin{eqnarray*} \eta_{\theta}^l(\varphi) = \frac{\eta_{\theta}^{l-1}(G_{\theta}^{l-1}\varphi)}{\eta_{\theta}^{l-1}(G_{\theta}^{l-1})} &=& \frac{1}{\eta_{\theta}^{l-1}(G_{\theta}^{l-1})} \int_{\mathsf{X}} G_{\theta}^{l-1}(u) \Big ( \int_{\mathsf{X}}\varphi(u') M_{\theta}^l(u,du') \Big) \eta_{\theta}^{l-1}(du) \\ &=& \Phi_{\theta}^l(\eta_{\theta}^{l-1})(\varphi) \, . \end{eqnarray*} Consider $N\in\mathbb{N}$, and slightly modify the MLSMC algorithm used in \cite{beskos} to keep the number of samples across layers fixed, up to some given level $l\in\mathbb{N}_0$. Details are given in Algorithm \ref{alg:mlsmc}. \begin{algorithm}[h!] \begin{enumerate} \item{Initialization: For $i\in\{1,\dots,N\}$ sample $U_0^{i}$ from $\eta_\theta^0$. If $l=0$ stop; otherwise set $s=1$ and go to step 2.} \item{Resampling and Sampling: For $i\in\{1,\dots,N\}$ sample $U_s^{i}$ from $\Phi_{\theta}^s(\eta_{\theta}^{s-1,N})$. This consists of sampling $a_s^i\in\{1,\dots,N\}$ with probability mass function $$ \mathsf{P}_\theta^N(a_s^i=j) = \frac{G_{\theta}^{s-1}(u_{s-1}^j)}{\sum_{k=1}^N G_{\theta}^{s-1}(u_{s-1}^k)} \, , $$ and then sampling $U_s^i$ from $M_{\theta}^s(u_{s-1}^{a_s^i},\cdot)$. If $s=l$ stop; otherwise set $s=s+1$ and return to the start of step 2.} \end{enumerate} \caption{A Multilevel Sequential Monte Carlo Sampler with a fixed number of samples $N\in\mathbb{N}$ and a given level $l\in\mathbb{N}_0$.} \label{alg:mlsmc} \end{algorithm} This algorithm yields samples distributed according to the following joint law \begin{equation}\label{eq:mlsmc_law} \mathsf{P}_\theta^N\big(d(u_0^{1:N},\dots,u_l^{1:N})\big) = \Big(\prod_{i=1}^{N} \eta_{\theta}^{0}(du_0^i)\Big) \Big(\prod_{s=1}^l \prod_{i=1}^{N} \Phi_{\theta}^s(\eta_{\theta}^{s-1,N})(du_s^i) \Big) \, , \end{equation} where $\eta_{\theta}^{s-1,N}(du)=\frac{1}{N}\sum_{i=1}^{N}\delta_{u_{s-1}^i}(du)$ for $s\in\mathbb{N}$.
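The following Python sketch (a schematic illustration, not code from \cite{beskos}) shows the structure of Algorithm \ref{alg:mlsmc}. The functions \texttt{sample\_eta0}, \texttt{G} and \texttt{mcmc\_kernel} are hypothetical placeholders for sampling from $\eta_\theta^0$, evaluating $G_\theta^{s-1}$ and applying the $\eta_\theta^s$-invariant kernel $M_\theta^s$, respectively.
\begin{verbatim}
import numpy as np

def mlsmc_fixed_N(sample_eta0, G, mcmc_kernel, l, N, rng=None):
    """Schematic MLSMC sampler with a fixed number of samples N.

    sample_eta0(N)     : assumed to return an (N, d) array of draws
                         from eta_theta^0.
    G(s, u)            : assumed to return the weight G_theta^s(u).
    mcmc_kernel(s, u)  : assumed to return one draw from M_theta^s(u, .).
    Returns the particle sets u_0^{1:N}, ..., u_l^{1:N} as a list.
    """
    rng = np.random.default_rng() if rng is None else rng
    particles = [sample_eta0(N)]                   # initialization at level 0
    for s in range(1, l + 1):                      # resample and move up to level l
        prev = particles[-1]
        w = np.array([G(s - 1, u) for u in prev])  # incremental weights G_theta^{s-1}
        w = w / w.sum()
        idx = rng.choice(N, size=N, p=w)           # multinomial resampling
        moved = np.array([mcmc_kernel(s, prev[i]) for i in idx])
        particles.append(moved)
    return particles
\end{verbatim}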
One can compute an estimate of $\eta_{\theta}^0(\varphi_{\theta}^0)$ as $$ \eta_{\theta}^{0,N}(\varphi_{\theta}^0) := \frac{1}{N}\sum_{i=1}^N \varphi_{\theta}^0(u_0^i) \, . $$ Following from \eqref{eq:rat}, for $l\in\mathbb{N}$, one can estimate $\eta_{\theta}^l(\varphi_{\theta}^l)-\eta_{\theta}^{l-1}(\varphi_{\theta}^{l-1})$ with $$ \frac{\eta_{\theta}^{l-1,N}(G_{\theta}^{l-1}\varphi_{\theta}^l)}{\eta_{\theta}^{l-1,N}(G_{\theta}^{l-1})} - \eta_{\theta}^{l-1,N}(\varphi_{\theta}^{l-1}) = \frac{\frac{1}{N}\sum_{i=1}^N G_{\theta}^{l-1}(u_{l-1}^i)\varphi_{\theta}^{l}(u_{l-1}^i)}{\frac{1}{N}\sum_{i=1}^N G_{\theta}^{l-1}(u_{l-1}^i)} - \frac{1}{N}\sum_{i=1}^N \varphi_{\theta}^{l-1}(u_{l-1}^i) \, . $$ The reason for using the samples generated at level $l-1$ to estimate $\eta_{\theta}^l(\varphi_{\theta}^l)$ as well as $\eta_{\theta}^{l-1}(\varphi_{\theta}^{l-1})$ is to construct estimators which satisfy conditions such as \eqref{eq:ub3}. Standard results (for instance in \cite{delm:13}) allow one to prove that almost surely \begin{eqnarray*} \lim_{N\rightarrow\infty} \eta_{\theta}^{0,N}(\varphi_{\theta}^0) & = & \eta_{\theta}^{0}(\varphi_{\theta}^0) \\ \lim_{N\rightarrow\infty} \Big(\frac{\eta_{\theta}^{l-1,N}(G_{\theta}^{l-1}\varphi_{\theta}^l)}{\eta_{\theta}^{l-1,N}(G_{\theta}^{l-1})} - \eta_{\theta}^{l-1,N}(\varphi_{\theta}^{l-1}) \Big)& = & \eta_{\theta}^l(\varphi_{\theta}^l)-\eta_{\theta}^{l-1}(\varphi_{\theta}^{l-1}) \, ,\quad\quad l\in\mathbb{N} \, . \end{eqnarray*} Note that in general one has $$ \mathsf{E}_\theta^N\bigg[\Big(\frac{\eta_{\theta}^{l-1,N}(G_{\theta}^{l-1}\varphi_{\theta}^l)}{\eta_{\theta}^{l-1,N}(G_{\theta}^{l-1})} - \eta_{\theta}^{l-1,N}(\varphi_{\theta}^{l-1}) \Big)\bigg] \neq \eta_{\theta}^l(\varphi_{\theta}^l)-\eta_{\theta}^{l-1}(\varphi_{\theta}^{l-1}) \, ,\quad\quad l\in\mathbb{N} \, , $$ where $\mathsf{E}_\theta^N$ is an expectation associated to the probability in \eqref{eq:mlsmc_law}. \subsubsection{Approach for Constructing $(\Xi_{\theta}^{l,p})_{p\in\mathbb{N}_0}$} In order to calculate our approximation, we will consider the following approach, which was also used in \cite{ubpf}. Given any $(l,P)\in\mathbb{N}_0^2$, we will run Algorithm \ref{alg:est_const} in order to obtain $(\Xi_{\theta}^{l,p})_{p\in\{0,1,\dots,P\}}$. \begin{algorithm}[h!] \begin{enumerate} \item Sample: Run Algorithm \ref{alg:mlsmc} {\em independently} with $N_p-N_{p-1}$ samples for $p\in\{0,1,\dots,P\}$, up to level $(l-1)\vee 0$, where we define for convenience $N_{-1}:=0$. \item Estimate: construct $\Xi_{\theta}^{l,p}$ as in equation \eqref{eq:xi_l_p_def}, for $p\in\{0,1,\dots,P\}$. \end{enumerate} \caption{Approach to construct $(\Xi_{\theta}^{l,p})_{p\in\{0,1,\dots,P\}}$ for $(l,P)\in\mathbb{N}_0^2$ given.} \label{alg:est_const} \end{algorithm} The joint probability law of the samples simulated according to Algorithm \ref{alg:est_const} is \begin{equation}\label{eq:alg_2_law} \mathbb{P}_{\theta}\big(d(u_0^{1:N_p},\dots,u_{(l-1)\vee 0}^{1:N_p})\big) = \prod_{p=0}^P \mathsf{P}_\theta^{N_p-N_{p-1}} \big((u_0^{N_{p-1}+1:N_p},\dots,u_{(l-1)\vee 0}^{N_{p-1}+1:N_p})\big) \, , \end{equation} where $N_{-1}=0$ and $\mathsf{P}_\theta^{N_p-N_{p-1}}$ is as defined in \eqref{eq:mlsmc_law}. For $(l,P)\in\mathbb{N}_0^2$ given, consider running Algorithm \ref{alg:est_const}.
Then for any $s\in\{0,1,\dots,(l-1)\vee 0\}$ and any $p \in \{0,\dots, P\}$ we can construct the following empirical probability measure on $(\mathsf{X},\mathcal{X})$ \begin{equation}\label{eq:es} \eta_\theta^{s,N_{0:p}}(du_s) := \sum_{q=0}^p \Big(\frac{N_q-N_{q-1}}{N_p}\Big)\eta_\theta^{s,N_q-N_{q-1}}(du_s) \, . \end{equation} Note the recursion $$ \eta_\theta^{s,N_{0:p}}(du_s) = \Big(\frac{N_p-N_{p-1}}{N_p}\Big)\eta_\theta^{s,N_p-N_{p-1}}(du_s) + \frac{N_{p-1}}{N_p}\eta_\theta^{s,N_{0:p-1}}(du_s) \, . $$ Now define \begin{equation}\label{eq:xi_l_p_def} \Xi_{\theta}^{l,p} := \left\{\begin{array}{ll} \eta_{\theta}^{0,N_{0:p}}(\varphi_{\theta}^0) - \eta_{\theta}^{0,N_{0:p-1}}(\varphi_{\theta}^0) & \textrm{if}~l=0 \\ \frac{\eta_{\theta}^{l-1,N_{0:p}}(G_{\theta}^{l-1}\varphi_{\theta}^l)}{\eta_{\theta}^{l-1,N_{0:p}}(G_{\theta}^{l-1})} - \eta_{\theta}^{l-1,N_{0:p}}(\varphi_{\theta}^{l-1}) - \Big(\frac{\eta_{\theta}^{l-1,N_{0:p-1}}(G_{\theta}^{l-1}\varphi_{\theta}^l)}{\eta_{\theta}^{l-1,N_{0:p-1}}(G_{\theta}^{l-1})} - \eta_{\theta}^{l-1,N_{0:p-1}}(\varphi_{\theta}^{l-1})\Big) & \textrm{otherwise} \, , \end{array}\right . \end{equation} where $\eta_{\theta}^{0,N_{0:-1}}(\varphi_{\theta}^0):=0$, and $$ \frac{\eta_{\theta}^{l-1,N_{0:-1}}(G_{\theta}^{l-1}\varphi_{\theta}^l)}{\eta_{\theta}^{l-1,N_{0:-1}}(G_{\theta}^{l-1})} - \eta_{\theta}^{l-1,N_{0:-1}}(\varphi_{\theta}^{l-1}) := 0 \, . $$ For convenience in the next section, the conditions \eqref{eq:cs_fv1}-\eqref{eq:cs_fv2} translated to the notation used in this section are \begin{eqnarray} \sum_{p\in\mathbb{N}_0}\frac{1}{\overline{\mathbb{P}}_P(p)}\mathbb{E}_{\theta}[\|[\eta_\theta^{0,N_{0:p}}-\eta_\theta^{0}](\varphi_\theta^{0})\|^2] & < &+\infty \label{eq:cs_fv3}\\ \sum_{p\in\mathbb{N}_0}\frac{1}{\overline{\mathbb{P}}_P(p)} \mathbb{E}_{\theta}\Big[\Big\|\frac{ \eta_\theta^{l-1,N_{0:p}}(G_{\theta}^{l-1}\varphi_\theta^{l}) } {\eta_\theta^{l-1,N_{0:p}}(G_{\theta}^{l-1})}- \eta_\theta^{l-1,N_{0:p}}(\varphi_\theta^{l-1}) - \Big( \frac{ \eta_\theta^{l-1}(G_{\theta}^{l-1}\varphi_\theta^{l}) } {\eta_\theta^{l-1}(G_{\theta}^{l-1})}- \eta_\theta^{l-1}(\varphi_\theta^{l-1}) \Big) \Big\|^2\Big] & < &+\infty , \,\, l\in\mathbb{N} \, , \label{eq:cs_fv4} \end{eqnarray} where $\mathbb{E}_{\theta}$ is used to denote expectation associated to the probability $\mathbb{P}_{\theta}$ in \eqref{eq:alg_2_law}. \subsection{Method} The new method is now presented in Algorithm \ref{alg:method}. \begin{algorithm}[h!] \vspace{5pt} {For $i=1,\dots, M$:} \begin{enumerate} \item{Generate $L_i\sim\mathbb{P}_L$ and $P_i\sim\mathbb{P}_P$.} \item{Run Algorithm \ref{alg:est_const} with $l=L_i$ and $P=P_i$.} \item{Compute: $$ \Xi_{\theta}^{L_i} = \sum_{p=0}^{P_i} \frac{1}{\overline{\mathbb{P}}_P(p)}\Xi_{\theta}^{L_i,p} \, , $$ where $\Xi_{\theta}^{L_i,p}$ is given in \eqref{eq:xi_l_p_def}.} \end{enumerate} {Return the estimate: \begin{equation}\label{eq:ub_est} \widehat{\eta_{\theta}(\varphi_{\theta})} := \frac{1}{M}\sum_{i=1}^M \frac{1}{\mathbb{P}_L(L_i)}\Xi_{\theta}^{L_i} \, . \end{equation} } \caption{Method for Unbiasedly Estimating $\eta_{\theta}(\varphi_{\theta})$.} \label{alg:method} \end{algorithm} The estimate of $\eta_{\theta}(\varphi_{\theta})$ is given by \eqref{eq:ub_est}. In Section \ref{sec:theory} we will prove that it is both unbiased and of finite variance, as well as investigate the cost of computing the estimate. There are several points of practical interest to be made at this stage (the first two were noted already in \cite{ubpf}).
First, the loop over the number of independent samples $i$ in Algorithm \ref{alg:method} can be easily parallelized. Second, one does not need to make $L$ and $P$ independent; this is only assumed for simplicity of presentation, but is not required. Third, the current method uses only the level $l-1$ marginal of \eqref{eq:alg_2_law}. All the samples for $s=0,\dots, l-2$ and associated empirical measures \eqref{eq:es} are discarded and only the level $l-1$ empirical measure is utilized. This differs from \cite{beskos} where all the lower level empirical measures are used. It is possible these samples could be utilized to improve the accuracy of the method, but it is not necessary and so is not investigated further here. A discussion of the potential efficiency of the double randomization scheme, as well as of the overall efficiency of the approach, is given in \cite[Section 2.5]{ubpf}. \section{Theoretical Results}\label{sec:theory} Our main objective is to show that $(\Xi_\theta^l)_{l\in\mathbb{N}_0}$ as defined in \eqref{eq:xi_l_def} with $(\Xi_\theta^{l,p})_{p\in\mathbb{N}_0}$ as in \eqref{eq:xi_l_p_def} will satisfy \eqref{eq:ub1}-\eqref{eq:ub3}. To that end, one must first show that $(\Xi_\theta^{l,p})_{p\in\mathbb{N}_0}$ satisfy \eqref{eq:cs_fv3}-\eqref{eq:cs_fv4}, which certainly verifies \eqref{eq:ub1}-\eqref{eq:ub2}, and then one must establish that \eqref{eq:ub3} holds. We make the following assumptions. \begin{hypA} \label{hyp:A} For each $\theta\in\Theta$, there exist $0<\underline{C}<\overline{C}<+\infty$ such that \begin{eqnarray*} \sup_{l\geq 0} \sup_{u\in \mathsf{X}} G_{\theta}^l (u) & \leq & \overline{C}\\ \inf_{l\geq 0} \inf_{u\in \mathsf{X}} G_{\theta}^l (u) & \geq & \underline{C}. \end{eqnarray*} \end{hypA} \begin{hypA} \label{hyp:B} For each $\theta\in\Theta$, there exists a $\rho\in(0,1)$ such that for any $l\geq 1$, $(u,v)\in \mathsf{X}^2$, $A\in\mathcal{X}$ $$ \int_A M_{\theta}^l(u,du') \geq \rho \int_A M_{\theta}^l(v,dv'). $$ \end{hypA} \begin{hypA} \label{hyp:C} For each $\theta\in\Theta$, there exists a $\widetilde{C}<+\infty$ such that for each $i\in\{1,\dots,d_{\theta}\}$ $$ \sup_{l\geq 0} \sup_{u\in \mathsf{X}} |(\varphi_{\theta}^l(u))_i| \leq \widetilde{C}. $$ \end{hypA} For $\varphi\in\mathcal{B}_b(\mathsf{X})$ we set $\|\varphi\|_{\infty}=\sup_{u\in\mathsf{X}}|\varphi(u)|$. To simplify our notation we will set $Z_\theta^l=\int_{\mathsf{X}}\gamma_{\theta}^l(u)du$, $l\in\mathbb{N}_0$, and for $l\in\mathbb{N}$ $$ \|{\varphi}_\theta^{l}-{\varphi}_\theta^{l-1}\|_{\infty}^2 = \max_{i\in\{1,\dots,d_{\theta}\}}\Big\{\|(\varphi_\theta^{l})_i-(\varphi_\theta^{l-1})_i\|_{\infty}^2\Big\}. $$ We begin with the following result, which is associated to verifying that \eqref{eq:cs_fv3}-\eqref{eq:cs_fv4} can hold. \begin{prop}\label{prop:main_prop} Assume (A\ref{hyp:A}-\ref{hyp:C}). Then for any $\theta\in\Theta$ there exists a $C<+\infty$ such that for any $p\in\mathbb{N}_0$, $1\leq N_0<N_1<\cdots<N_p<+\infty$: $$ \mathbb{E}_{\theta}[\|[\eta_\theta^{0,N_{0:p}}-\eta_\theta^{0}](\varphi_\theta^{0})\|^2] \leq \frac{C}{N_p}\Big(1+\frac{p^2}{N_p}\Big).
$$ In addition, for any $(l,p)\in\mathbb{N}\times\mathbb{N}_0$, $1\leq N_0<N_1<\cdots<N_p<+\infty$: $$ \mathbb{E}_{\theta}\Bigg[\Bigg\|\frac{ \eta_\theta^{l-1,N_{0:p}}(G_{\theta}^{l-1}\varphi_\theta^{l}) } {\eta_\theta^{l-1,N_{0:p}}(G_{\theta}^{l-1})}- \eta_\theta^{l-1,N_{0:p}}(\varphi_\theta^{l-1}) - \Big( \frac{ \eta_\theta^{l-1}(G_{\theta}^{l-1}\varphi_\theta^{l}) } {\eta_\theta^{l-1}(G_{\theta}^{l-1})}- \eta_\theta^{l-1}(\varphi_\theta^{l-1}) \Big) \Bigg\|^2\Bigg] \leq $$ $$ \frac{C}{N_p}\Big(1+\frac{p^2}{N_p}\Big)\Big(\|{\varphi}_\theta^{l}-{\varphi}_\theta^{l-1}\|_{\infty}^2+\Big\|G_{\theta}^{l-1}\frac{Z_\theta^{l-1}}{Z_\theta^{l}}-1\Big\|_{\infty}^2\Big). $$ \end{prop} \begin{proof} The first result follows by Lemma \ref{lem:est_pool} in the appendix and the second from Lemma \ref{lem:tech_lem2} also in the appendix. \end{proof} \begin{rem} To show that \eqref{eq:cs_fv3}-\eqref{eq:cs_fv4} can hold, one can set, for instance $N_p=2^p$ and $\mathbb{P}_P(p)\propto 2^{-p}(p+1)\log_2(p+2)^2$. See for example \cite{ubpf} and \cite{rhee}. \end{rem} To continue our discussion, to complete our proof, we must know something about the quantities $$ \|{\varphi}_\theta^{l}-{\varphi}_\theta^{l-1}\|_{\infty}^2\quad {\rm and}\quad \Big\|G_{\theta}^{l-1}\frac{Z_\theta^{l-1}}{Z_\theta^{l}}-1\Big\|_{\infty}^2 $$ in terms of a possible decay as a function of $l$. To that end, we shall assume that these terms are $\mathcal{O}(h_l^{\beta})$ for some $\beta>0$. This assumption can be verified for the example in Section \ref{sec:example}. Recall from Section \ref{sec:example} that $h_l=2^{-l}$. \begin{prop}\label{prop:verify} Assume (A\ref{hyp:N}). Then there is $C > 0$, depending on $f$ and $u_*$, and $\beta>0$ such that for all $u\in \mathsf{X}$ $$ \| p^l(\cdot;u) \|, \| p(\cdot;u) \| <C \, , \qquad \| p^l(\cdot;u) - p(\cdot;u) \|^2 \leq C h_l^\beta \, , $$ where the norm is $L^2(D)$. Given a function $F : \mathbb{N} \times \mathsf{X} \rightarrow \mathbb{R}^n$, suppose that there is a $C'>0$ which does not depend on $(l,u)$ such that \begin{equation}\label{eq:cty} \|F(l,u) - F(\infty,u)\| \leq C' \| p^l(\cdot ; u) - p(\cdot ; u) \| \, , \end{equation} where the first norm is understood as the $n-$dimensional Euclidean norm, while the second norm is $L^2(D)$, and $F(\infty, \cdot) := \lim_{l\rightarrow \infty} F(l,\cdot)$. Then there is another $C>0$ which does not depend on $(l,u)$ such that $$\|F(l,\cdot ) - F(l-1,\cdot )\|_\infty^2 \leq C h_l^\beta.$$ \end{prop} \begin{proof} This is a slight generalization of the results of \cite{beskos}, Sec. 4, where it was verified that $\Big\|G_{\theta}^{l-1}\frac{Z_\theta^{l-1}}{Z_\theta^{l}}-1\Big\|_{\infty}^2 = \mathcal{O}(h_l^\beta)$. It is well known that for $v\in \mathbb{R}^n$, there is a $C>0$ such that $\|v\|_\infty \leq C \|v\|$. The result follows by taking supremum over $u$. \end{proof} Note that $G_{\theta}^{l-1}\frac{Z_\theta^{l-1}}{Z_\theta^{l}} = \frac{G_{\theta}^{l-1}}{\eta_\theta^{l-1}(G_{\theta}^{l-1})}$ and $G_\theta^{\infty}=1$. So $$ |G_{\theta}^{l-1}\frac{Z_\theta^{l-1}}{Z_\theta^{l}}-1| \leq \frac2{\eta_\theta^{l-1}(G_{\theta}^{l-1})}|G_{\theta}^{l-1}-1| \, . $$ Defining $F(l,u) = G_\theta^{l}(u)$ then assumption (A\ref{hyp:A}) and Prop. 4.1 of \cite{beskos} together with Proposition \ref{prop:verify} imply there is a $C>0$ such that $$ \Big\|G_{\theta}^{l-1}\frac{Z_\theta^{l-1}}{Z_\theta^{l}}-1\Big\|_{\infty}^2 \leq C h_l^\beta \, . 
$$ Defining $F(l,u) := {\varphi}_\theta^{l}(u)$, as in \eqref{eq:phil} and \eqref{eq:phi}, it is easy to show that Proposition \ref{prop:verify} ensures $$ \|{\varphi}_\theta^{l}-{\varphi}_\theta^{l-1}\|_{\infty}^2 \leq C h_l^\beta \, . $$ See equation (19) of \cite{beskos}. Assumptions (A\ref{hyp:A}) and (A\ref{hyp:C}) are similarly verified. \begin{theorem} Assume (A\ref{hyp:A}-\ref{hyp:C}). Then there exist choices of $\mathbb{P}_L,\mathbb{P}_P$ and $(N_p)_{p\in\mathbb{N}_0}$, $1\leq N_0<N_1<\cdots$ so that $(\Xi_\theta^l)_{l\in\mathbb{N}_0}$ as defined in \eqref{eq:xi_l_def} with $(\Xi_\theta^{l,p})_{p\in\mathbb{N}_0}$ as in \eqref{eq:xi_l_p_def} will satisfy \eqref{eq:ub1}-\eqref{eq:ub3}. That is, \eqref{eq:ub_est} is an unbiased and finite variance estimator of $\eta_{\theta}(\varphi_{\theta})$. \end{theorem} \begin{proof} Throughout the proof $C$ is a finite constant that will not depend on $l$ or $p$ and whose value will change upon each appearance. Given the commentary above, we need only show that \eqref{eq:ub3} can hold for some given choices of $\mathbb{P}_L,\mathbb{P}_P$ and $(N_p)_{p\in\mathbb{N}_0}$. We have that \begin{equation}\label{eq:main_theo_eq1} \sum_{l\in\mathbb{N}_0}\frac{1}{\mathbb{P}_L(l)}\mathbb{E}_{\theta}[\|\Xi_{\theta}^l\|^2] = \sum_{(l,p)\in\mathbb{N}_0^2}\frac{\mathbb{P}_P(p)}{\mathbb{P}_L(l)}\Big\{ \sum_{s=0}^p \frac{\mathbb{E}_{\theta}[\|\Xi_{\theta}^{l,s}\|^2]}{\overline{\mathbb{P}}_P(s)^2} + 2\sum_{0\leq s <q\leq p} \frac{\mathbb{E}_{\theta}[\|\Xi_{\theta}^{l,s}\|\|\Xi_{\theta}^{l,q}\|]}{\overline{\mathbb{P}}_P(s)\overline{\mathbb{P}}_P(q)} \Big\}. \end{equation} Now recalling \eqref{eq:xi_l_p_def} and noting that for $p\in\mathbb{N}$ $$ \eta_{\theta}^{0,N_{0:p}}(\varphi_{\theta}^0) - \eta_{\theta}^{0,N_{0:p-1}}(\varphi_{\theta}^0) = \eta_{\theta}^{0,N_{0:p}}(\varphi_{\theta}^0) - \eta_{\theta}^{0}(\varphi_{\theta}^0) - \{\eta_{\theta}^{0,N_{0:p-1}}(\varphi_{\theta}^0)-\eta_{\theta}^{0}(\varphi_{\theta}^0)\} $$ and that for $p\in\mathbb{N}$ $$ \frac{\eta_{\theta}^{l-1,N_{0:p}}(G_{\theta}^{l-1}\varphi_{\theta}^l)}{\eta_{\theta}^{l-1,N_{0:p}}(G_{\theta}^{l-1})} - \eta_{\theta}^{l-1,N_{0:p}}(\varphi_{\theta}^{l-1}) - \Big(\frac{\eta_{\theta}^{l-1,N_{0:p-1}}(G_{\theta}^{l-1}\varphi_{\theta}^l)}{\eta_{\theta}^{l-1,N_{0:p-1}}(G_{\theta}^{l-1})} - \eta_{\theta}^{l-1,N_{0:p-1}}(\varphi_{\theta}^{l-1})\Big) = $$ $$ \frac{\eta_{\theta}^{l-1,N_{0:p}}(G_{\theta}^{l-1}\varphi_{\theta}^l)}{\eta_{\theta}^{l-1,N_{0:p}}(G_{\theta}^{l-1})} - \eta_{\theta}^{l-1,N_{0:p}}(\varphi_{\theta}^{l-1}) - \Big( \frac{ \eta_\theta^{l-1}(G_{\theta}^{l-1}\varphi_\theta^{l}) } {\eta_\theta^{l-1}(G_{\theta}^{l-1})}- \eta_\theta^{l-1}(\varphi_\theta^{l-1}) \Big) - $$ $$ \Big\{ \Big(\frac{\eta_{\theta}^{l-1,N_{0:p-1}}(G_{\theta}^{l-1}\varphi_{\theta}^l)}{\eta_{\theta}^{l-1,N_{0:p-1}}(G_{\theta}^{l-1})} - \eta_{\theta}^{l-1,N_{0:p-1}}(\varphi_{\theta}^{l-1})\Big) - \Big( \frac{ \eta_\theta^{l-1}(G_{\theta}^{l-1}\varphi_\theta^{l}) } {\eta_\theta^{l-1}(G_{\theta}^{l-1})}- \eta_\theta^{l-1}(\varphi_\theta^{l-1}) \Big) \Big\} $$ it follows by Proposition \ref{prop:main_prop} that for $(l,s)\in\mathbb{N}_0^2$ \begin{equation}\label{eq:main_theo_eq2} \mathbb{E}_{\theta}[\|\Xi_{\theta}^{l,s}\|^2] \leq \frac{Ch_l^{\beta}}{N_s}\Big(1+\frac{s^2}{N_s}\Big).
\end{equation} Also, by using Cauchy-Schwarz, for $(l,s,q)\in\mathbb{N}_0^2\times\mathbb{N}$, $s<q$, \begin{equation}\label{eq:main_theo_eq3} \mathbb{E}_{\theta}[\|\Xi_{\theta}^{l,s}\|\|\Xi_{\theta}^{l,q}\|] \leq \frac{Ch_l^{\beta}}{N_s^{1/2}N_q^{1/2}}\Big(1+\frac{s^2}{N_s}\Big)^{1/2}\Big(1+\frac{q^2}{N_q}\Big)^{1/2}. \end{equation} Then using the bounds \eqref{eq:main_theo_eq2}-\eqref{eq:main_theo_eq3} in \eqref{eq:main_theo_eq1} gives the upper-bound (noting that in the case $s=0=q$ the terms $\mathbb{E}_{\theta}[\|\Xi_{\theta}^{l,s}\|^2]$ and $\mathbb{E}_{\theta}[\|\Xi_{\theta}^{l,s}\|\|\Xi_{\theta}^{l,q}\|]$ are $\mathcal{O}(1)$, so one can find a $C$ such that the following upper-bound holds) \begin{eqnarray*} \sum_{l\in\mathbb{N}_0}\frac{1}{\mathbb{P}_L(l)}\mathbb{E}_{\theta}[\|\Xi_{\theta}^l\|^2] & \leq & C\sum_{(l,p)\in\mathbb{N}_0^2}\frac{\mathbb{P}_P(p)h_l^{\beta}}{\mathbb{P}_L(l)}\Bigg\{ \sum_{s=0}^p \frac{\Big(1+\frac{s^2}{N_s}\Big)}{N_s\overline{\mathbb{P}}_P(s)^2} + \sum_{0\leq s <q\leq p} \frac{\Big(1+\frac{s^2}{N_s}\Big)^{1/2}\Big(1+\frac{q^2}{N_q}\Big)^{1/2}}{N_s^{1/2}N_q^{1/2}\overline{\mathbb{P}}_P(s)\overline{\mathbb{P}}_P(q)} \Bigg\}. \end{eqnarray*} Now if one chooses, for instance, $N_p=2^p$, $\mathbb{P}_P(p)\propto 2^{-p}(p+1)\log_2(p+2)^2$ and $\mathbb{P}_L(l)\propto h_l^{\alpha\beta}$ for any $\alpha\in(0,1)$ then \eqref{eq:ub3} is satisfied and hence the proof is completed. \end{proof} In most cases of practical interest, it is not possible to choose $\mathbb{P}_L,\mathbb{P}_P$ and $(N_p)_{p\in\mathbb{N}_0}$ so that \eqref{eq:ub_est} is an unbiased and finite variance estimator while also having finite expected cost. Suppose, in the case of Section \ref{sec:example}, the cost to evaluate $G_{\theta}^l$ is $\mathcal{O}(h_l^{-1})$ and $\beta=1$. Then, just as in \cite{ubpf}, if we choose $\mathbb{P}_L(l)\propto 2^{-l}(l+1)\log_2(l+2)^2$, $N_p=2^p$, and $\mathbb{P}_P(p)\propto 2^{-p}(p+1)\log_2(p+2)^2$, then to achieve a variance of $\mathcal{O}(\epsilon^2)$ (for $\epsilon>0$ arbitrary) the cost is $\mathcal{O}(\epsilon^{-2}|\log(\epsilon)|^{2+\delta})$ for any $\delta>0$, with high probability. For the MLSMC method in \cite{beskos}, the cost to obtain a mean square error of $\mathcal{O}(\epsilon^2)$ is $\mathcal{O}(\epsilon^{-2}\log(\epsilon)^2)$, which is a mild reduction in cost. However, we note that this discussion is constrained to the case of a single processor. The unbiased method is straightforwardly parallelized. \section{Numerical Results}\label{sec:numerics} First we will consider a toy example where we can analytically calculate the marginal likelihood and investigate the performance of the resulting estimator in comparison to the estimator obtained using the original MLSMC method of \cite{beskos} (not presented here). Subsequently we will consider the example from Section \ref{sec:example}. Finally, for both examples we will explore the potential applicability of our estimators within the context of parameter optimization using the stochastic gradient descent (SGD) method. The forward model is the same for both problems, hence the anticipated rate of convergence is the same, and is estimated as $\beta=4$, just as in \cite{beskos}. The cost would be $\mathcal{O}(h_l^{-\gamma})$ in general for the problem in Section \ref{sec:example}, and for the particular setup described in Section \ref{ssec:example}, $\gamma=1$. We choose $h_l = 2^{-l}$ and $\mathbb{P}_L(l) \propto 2^{-2.5 l}$.
This is far into the so-called canonical regime ($\beta>\gamma$), and therefore we allow unbounded $L$, i.e. $L_{\rm max}=\infty$ in the terminology of \cite{ubpf}. The reason for this is basically that the sum \eqref{eq:ub3} and the corresponding cost series both easily converge, if the cost is deterministic and $\mathcal{O}(h_l^{-\gamma})$ as a function of $h_l$. However, in this case the cost depends upon the randomized estimator of the series in $p$. Since the rate of convergence is borderline in the $p$ direction, $\beta_p=1=\gamma_p$, as in \cite{ubpf} we impose a maximum $P_{\rm max}$ on $P$. This is necessary to prevent the possibility of the algorithm getting stuck with an extremely expensive sample. It is discussed further in that work. In particular, we choose $N_p=2^{p+3}$ and $$ \mathbb{P}_P(p) \propto \mathbb{I}(0\leq p\leq P_{\rm max}) \left\{\begin{array}{ll} 2^{-p+4} & \textrm{if}~p<4, \\ 2^{-p}\cdot p \cdot \log_2(p)^2 & \textrm{otherwise} \, . \end{array}\right . $$ The piecewise definition of $\mathbb{P}_P$ ensures that it has the correct asymptotic behaviour but is also monotonically decreasing. Note that in this regime, i.e. strongly canonical convergence in $L$, or large $\beta >\gamma$, the MLSMC method easily achieves the optimal complexity $\mathcal O(\epsilon^{-2})$. However, since the convergence rate in $P$ is necessarily subcanonical, our method suffers from a logarithmic penalty, i.e. $\mathcal O(\epsilon^{-2}\log(\epsilon)^{2+\delta})$, for any $\delta>0$. This penalty cannot be observed in the simulations, though. Empirically we observe that we can set $P_{\rm max}$ rather small, which is perhaps afforded by the very fast convergence in the $L$ direction. This may also be why we cannot see the theoretically predicted log penalty in the simulations. \subsection{Toy example}\label{ssec:toy} We first consider an example where the marginal likelihood is analytically calculable. Consider the following PDE on $D$: \begin{align*} \nabla^2p &=u,\quad \textrm{ on } D, \\ p&= 0, \quad \textrm{ on } \partial D, \end{align*} where $D=[0,1]$. The solution of this PDE is $p(x;u)=\frac{u}{2}(x^2-x)$. Define the observation operator as $$\mathcal{G}(u)=[p(x_1;u),p(x_2;u), \dots, p(x_M;u)]^{\intercal}\triangleq Gu \, .$$ Suppose the observation takes the form $y = \mathcal{G}(u) + \xi, \xi \sim N(0,\theta^{-1}\cdot\bm{I}_M), \xi \perp u$, and that $\theta$ follows a log-normal distribution, i.e. $\log(\theta)\sim N(0,\sigma^2)$.
Then the unnormalized likelihood is given by $$ \gamma(u,\theta) = \theta^{\frac{M}{2}}\cdot\exp\left\{-\frac{\theta}{2}\|Gu - y\|^2 \right\} \cdot \frac{1}{\theta}\exp\Big\{ -\frac{(\log(\theta))^2} {2\sigma^2} \Big\} , $$ and the marginal likelihood is \begin{align*} \gamma(\theta) & = \int_{-1}^1 \gamma(u,\theta) \textrm{d}u \\ & = \theta^{\frac{M-2}{2}} \exp\Big\{ -\frac{(\log(\theta))^2} {2\sigma^2} \Big\} \int_{-1}^{1} \exp\left\{-\frac{\theta }{2} \|Gu - y\|^2 \right\} \textrm{d}u \\ & = \theta^{\frac{M-2}{2}} \exp\left\{ -\frac{\theta}{2} \Big(\|y\|^2 - \frac{(G^{\intercal}y)^2}{\|G\|^2}\Big)-\frac{(\log(\theta))^2} {2\sigma^2} \right\} \int_{-1}^{1} \exp \left\{ -\frac{\theta\|G\|^2}{2} \Big(u - \frac{G^{\intercal}y}{\|G\|^2 } \Big)^2 \right\} \textrm{d}u \\ & = \theta^{\frac{M-3}{2}} \frac{\sqrt{\pi/2}}{\|G\|} \exp\left\{ -\frac{\theta}{2} \Big(\|y\|^2 - \frac{(G^{\intercal}y)^2}{\|G\|^2}\Big) -\frac{(\log(\theta))^2} {2\sigma^2} \right\} \Bigg( \textrm{erf}\Big(\sqrt{\frac{\theta}{2}}\|G\|(1- \frac{G^{\intercal}y}{\|G\|^2 })\Big) - \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad~~~ \textrm{erf}\Big(\sqrt{\frac{\theta}{2}}\|G\|(-1- \frac{G^{\intercal}y}{\|G\|^2 })\Big) \Bigg). \end{align*} So the log-likelihood is then given by \begin{align*} \log\big(\gamma(\theta)\big) = &~ \frac{M-3}{2}\log(\theta) - \frac{\theta}{2} \Big(\|y\|^2 - \frac{(G^{\intercal}y)^2}{\|G\|^2}\Big) -\frac{(\log(\theta))^2} {2\sigma^2} + \log \Bigg( \textrm{erf}\Big(\sqrt{\frac{\theta}{2}}\|G\|(1- \frac{G^{\intercal}y}{\|G\|^2 })\Big) - \\ &~ \textrm{erf}\Big(\sqrt{\frac{\theta}{2}}\|G\|(-1- \frac{G^{\intercal}y}{\|G\|^2 })\Big) \Bigg) + C, \end{align*} and the derivative of the log-likelihood is \begin{align*} \frac{\partial\log\big(\gamma(\theta)\big)}{ \partial\theta} & = \frac{M-3}{2\theta} - \frac{\Big(\|y\|^2 - \frac{(G^{\intercal}y)^2}{\|G\|^2}\Big)}{2} - \frac{\log(\theta)} {\sigma^2 \theta} + \frac{1}{ \textrm{erf}\Big (\sqrt{\frac{\theta}{2}}\|G\|(1- \frac{G^{\intercal}y}{\|G\|^2 })\Big) - \textrm{erf}\Big(\sqrt{\frac{\theta}{2}}\|G\|(-1- \frac{G^{\intercal}y}{\|G\|^2 })\Big)} \cdot \\ & \quad~ \frac{2}{\sqrt{\pi}}\Bigg( \exp\Big\{ -\frac{\theta\|G\|^2}{2}(1- \frac{G^{\intercal}y}{\|G\|^2 })^2 \Big\} \frac{\|G\|(1- \frac{G^{\intercal}y}{\|G\|^2 })}{2\sqrt{2\theta}} - \\ & \qquad\qquad \exp\Big\{ -\frac{\theta\|G\|^2}{2}(-1- \frac{G^{\intercal}y}{\|G\|^2 })^2 \Big\} \frac{\|G\|(-1- \frac{G^{\intercal}y}{\|G\|^2 })}{2\sqrt{2\theta}} \Bigg). \end{align*} First, the performance of the unbiased algorithm for a single gradient estimation $\frac{\partial\log\big(\gamma(\theta)\big)}{ \partial\theta}\Big|_{\theta=\theta^*}$ is verified. The data is generated with M=50, and observation operator $\mathcal{G}(u)=[p(x_1;u),p(x_2;u),$ $\dots, p(x_M;u)]^{\intercal}$ with $x_i = i/(M+1)$. $\theta^*$ is set to be $2$, and the true value of the derivative of the log-likelihood at this point is calculated using the above analytical solution. For each $L$, the MLSMC estimator is realized 50 times and the MSE is reported. Similarly, the MSE of unbiased algorithm is calculated based on 50 realizations. The results are presented in Figure \ref{fig:single_anal}. The cost reported in the plot is proportional to the sum of the cost per forward solve at level $l$ (tridiagonal linear system), $h_l^{-1}$, multiplied by the total number of samples at level $l$. 
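For concreteness, the following Python snippet (an illustrative transcription, not part of the original analysis code) evaluates the analytical derivative of the log-likelihood displayed above. It assumes the vector \texttt{G} collects the solution values $p(x_i;1)$, so that $\mathcal{G}(u)=Gu$; the example values in the trailing comment are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.special import erf

def toy_dlog_gamma_dtheta(theta, G, y, sigma):
    """Analytic d/dtheta log(gamma(theta)) for the toy model,
    transcribing the displayed formula above."""
    M = y.size
    nG = np.sqrt(G @ G)                  # ||G||
    m = (G @ y) / nG**2                  # G^T y / ||G||^2
    a_plus = np.sqrt(theta / 2.0) * nG * (1.0 - m)
    a_minus = np.sqrt(theta / 2.0) * nG * (-1.0 - m)
    denom = erf(a_plus) - erf(a_minus)
    quad = (y @ y) - (G @ y)**2 / nG**2  # ||y||^2 - (G^T y)^2/||G||^2
    bracket = (np.exp(-0.5 * theta * nG**2 * (1.0 - m)**2) * nG * (1.0 - m)
               - np.exp(-0.5 * theta * nG**2 * (-1.0 - m)**2) * nG * (-1.0 - m))
    return ((M - 3) / (2.0 * theta) - 0.5 * quad
            - np.log(theta) / (sigma**2 * theta)
            + (2.0 / np.sqrt(np.pi)) * bracket
              / (2.0 * np.sqrt(2.0 * theta) * denom))

# hypothetical usage with the setup described in the text (y not specified here):
# x = np.arange(1, 51) / 51.0; G = 0.5 * (x**2 - x)
# value = toy_dlog_gamma_dtheta(2.0, G, y, 1.0)
\end{verbatim}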
\begin{figure}[!htbp] \centering\includegraphics[width=\textwidth]{mse_vs_cost_single_estimation_analytical.pdf} \caption{Toy example, single estimator. {MSE vs cost for (i) the unbiased algorithm with different choices of $P_{\rm max}$ and (ii) MLSMC.}} \label{fig:single_anal} \end{figure} \subsection{Example of Sec. \ref{sec:example}}\label{ssec:full} Now that we understand the behaviour of the estimator we consider the example from Sec. \ref{sec:example}. Here we do not have an analytical solution, so the true value of the target was first estimated using the MLSMC algorithm with $L=12$. This sampler was realized 50 times and the average of the estimator is taken as the ground truth. Now for each $L$, the MLSMC estimator is realized 50 times and the MSE is reported. Similarly, the MSE of the unbiased algorithm is calculated based on 50 realizations. The results are presented in Figure \ref{fig:single_nonanal}. The cost in the plot is proportional to the sum of the cost per forward solve at level $l$ (tridiagonal linear system), $h_l^{-1}$, multiplied by the total number of samples at level $l$. \begin{figure}[!htbp] \centering\includegraphics[width=\textwidth]{mse_vs_cost_single_estimation_nonanalytical.pdf} \caption{Example of Sec. \ref{sec:example}, single estimator. {MSE vs cost for (i) the unbiased algorithm with different choices of $P_{\rm max}$ and (ii) MLSMC.}} \label{fig:single_nonanal} \end{figure} \subsection{Stochastic gradient descent method} In this section, we investigate the potential to use our unbiased estimators within the SGD method. Recall we want to find the MLE of $\theta$ by minimizing $-\log p(y|\theta) = -\log(\int \gamma_{\theta}(u)du)$. Our estimator given in equation \eqref{eq:ub_est} provides an unbiased estimator $\widehat{\eta_{\theta}(\varphi_{\theta})}$ of $\nabla_\theta \log p(y|\theta)$, for any choice of $M\geq 1$. In other words, $\mathbb{E}[\widehat{\eta_{\theta}(\varphi_{\theta})}] = \nabla_\theta \log p(y|\theta)$. We will see that it is most efficient to choose $M=1$. To ensure the output of the SGD algorithm satisfies $\theta>0$, we let $\theta=\exp(\xi)$ and optimize $\xi$. The details are given in Algorithm \ref{algo:sgd}. \begin{algorithm}[h!] \begin{enumerate} \item{Initialize $\xi_1$ and choose a sequence $\{\alpha_k\}_{k=1}^\infty$ and a value $M\in \mathbb{N}$.} \item {For $k=1,\dots, K$ (or until convergence criterion is met)} \begin{itemize} \item{Compute $\widehat{\eta_{e^{\xi_k}}(\varphi_{e^{\xi_k}})}$ using \eqref{eq:ub_est}.} \item{Update $\xi_{k+1} = \xi_k - \alpha_k \widehat{\eta_{e^{\xi_k}}(\varphi_{e^{\xi_k}})} \exp(\xi_k)$.} \end{itemize} \item{Return $\theta_{K+1}=\exp(\xi_{K+1})$.} \end{enumerate} \caption{SGD using new unbiased estimator.} \label{algo:sgd} \end{algorithm} As above, it makes sense to first explore the toy model with analytical solution. As before, the MSE is calculated based on 50 realizations, and the cost in the plot is again proportional to the sum of the cost per forward solve at level $l$ (tridiagonal linear system), $h_l^{-1}$, multiplied by the total number of samples at level $l$. In Figure \ref{fig:alphakN} we explore the performance of the unbiased estimator with different choices of $\alpha_k = \alpha_1/k$, $\alpha_1\in\{0.1,0.025\}$, and different choices of the number of samples $M$ used to construct $\widehat{\eta_{e^{\xi_k}}(\varphi_{e^{\xi_k}})}$ using \eqref{eq:ub_est} in step 2 of Algorithm \ref{algo:sgd}.
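For reference, the following Python fragment is a minimal sketch (not the authors' code) of Algorithm \ref{algo:sgd}. The function \texttt{unbiased\_grad} is a hypothetical placeholder that is assumed to return the estimate \eqref{eq:ub_est} of $\nabla_\theta\log p(y|\theta)$ at a given $\theta$, and the update mirrors the step stated in the algorithm.
\begin{verbatim}
import numpy as np

def sgd_log_reparam(unbiased_grad, xi0, alpha1=0.1, K=1000):
    """Sketch of the SGD scheme above with theta = exp(xi), so theta > 0.

    unbiased_grad(theta) : assumed to return an unbiased estimate of
                           d/dtheta log p(y | theta), e.g. the estimator
                           above with M = 1.
    """
    xi = xi0
    for k in range(1, K + 1):
        alpha_k = alpha1 / k                    # step-size sequence alpha_k = alpha_1/k
        grad_theta = unbiased_grad(np.exp(xi))  # noisy gradient in theta
        xi = xi - alpha_k * grad_theta * np.exp(xi)  # update as in Algorithm 4
    return np.exp(xi)
\end{verbatim}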
The two takeaways from this experiment are that (1) it is more efficient to take fewer samples $M$ (in particular $M=1$), and (2) it is more efficient to choose a larger constant $\alpha_1 = 0.1$. In particular, the dynamics of the algorithm experiences a phase transition as one varies the constant $\alpha_1$. A large enough value appears to provide gradient descent type exponential convergence $\mathcal O(e^{-\rm cost})$, while a value which is too small yields Monte Carlo (MC) type $\mathcal O(1/{\rm cost})$. It is notable that the exponential convergence eventually gives way to MC type convergence, and that the point where this occurs increases proportionally to the additional constant in cost incurred with larger sample size $M$, so that the error curves for different values of $M$ eventually intersect. \begin{figure}[!htbp] \centering\includegraphics[width=\textwidth]{SGD_mse_vs_cost_anal_alphak_N} \caption{Toy example, SGD. {MSE vs cost for $\alpha_k=0.025/k$ and $\alpha_k=0.1/k$ for a range of sample sizes $M$. $P_{\rm max}=0$ is fixed.}} \label{fig:alphakN} \end{figure} Natural questions are then whether there is a limit to how large one can choose $\alpha_1$ and at which value precisely does the phase transition occur. These questions are partially answered by the experiments presented in Figure \ref{fig:alphakalpha} (a), where we see that $\alpha_1$ should not be chosen larger than $0.2$ and the phase transition happens in between $0.025$ and $0.05$. Figure \ref{fig:alphakalpha} (b) illustrates the benefits and drawbacks of using a constant $\alpha$. In particular, the algorithm may converge more quickly at first, but plateaus when it reaches the induced bias. \begin{figure}[!htbp] \centering\includegraphics[width=\textwidth]{SGD_mse_vs_cost_alphak_and_alpha} \caption{Toy example, SGD. {MSE vs cost for (a) $\alpha_k=\alpha_1/k$ and a range of $\alpha_1$, and (b) some examples of constant $\alpha$. $P_{\rm max}=0$ is fixed.}} \label{fig:alphakalpha} \end{figure} In Figure \ref{fig:pmax} we explore various choices of $P_{\rm max}$, for $M=1$ fixed and $\alpha_1=0.1$. It is apparent that it is preferable to choose a smaller value of $P_{\rm max}$. We note however that there will be an induced bias, which will be larger for smaller $P_{\rm max}$. However, for this particular problem we do not even observe that bias over the range of MSE and cost considered. \begin{figure}[!htbp] \centering\includegraphics[width=\textwidth]{SGD_mse_vs_cost_anal_Pmax} \caption{Toy example, SGD. {MSE vs cost for $M=1$ fixed and $\alpha_k=0.1/k$, and various choices of $P_{\rm max}$.}} \label{fig:pmax} \end{figure} As a last experiment with the toy example, we compare the convergence of SGD using our unbiased algorithm with $P_{\rm max}=0$, $M=1$, and $\alpha_1=0.1$ to the analogous algorithm where an MLSMC estimator with various $L$ (single gradient estimator MSE $\propto 2^{-L}$) replaces the unbiased estimator in step 2 of Algorithm \ref{algo:sgd}. Similar behaviour was observed for MLSMC relative to different choices of $\alpha_k$ as compared to the unbiased estimator. The results are shown in Figure \ref{fig:anal}. Here it is clear that over a wide range of MSE the unbiased estimator provides a significantly more efficient alternative to the MLSMC estimator. \begin{figure}[!htbp] \centering\includegraphics[width=\textwidth]{SGD_unbiased_v_mlsmc_anal} \caption{Toy example, SGD. 
{MSE vs cost for $\alpha_k=0.1/k$, and unbiased estimator with $P_{\rm max}=0$ and $M=1$ in comparison to MLSMC estimator with different choices of $L$.}} \label{fig:anal} \end{figure} Finally, we consider the same last experiment except with the example of Sec. \ref{sec:example}. Again, over a wide range of MSE the unbiased estimator provides a significantly more efficient alternative to the MLSMC estimator. Here one can already observe the induced $P_{\rm max}=1$ bias for the unbiased estimator around $2^{-20}$. Adjusting the various tuning parameters resulted in similar behaviour as was observed in the earlier experiments with the toy example. These results are not presented. \begin{figure}[!htbp] \centering\includegraphics[width=\textwidth]{SGD_unbiased_v_mlsmc_nonanal} \caption{Example of Sec. \ref{sec:example}, SGD. {MSE vs cost for $\alpha_k=0.5/k$, and unbiased estimator with $P_{\rm max}=0$ and $M=1$ in comparison to MLSMC estimator with different choices of $L$.}} \label{fig:nonanal} \end{figure} \subsubsection*{Acknowledgements} AJ was supported by KAUST baseline funding. KJHL was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1.
\section{Introduction: the Mystery and the Motivation} The single inclusive muon and like-sign dimuon charge asymmetries are defined as the ratios of the difference and sum of the rates $a_{CP} = \{ \Gamma(\mu^+)-\Gamma(\mu^-)\}$/$ \{ \Gamma(\mu^+) + \Gamma(\mu^-) \}$ and $A_{CP} = \{ \Gamma(\mu^+ \mu^+)-\Gamma(\mu^- \mu^-) \}$/$ \{\Gamma(\mu^+ \mu^+) +\Gamma(\mu^- \mu^-) \}$. The standard model (SM) predictions of the charge asymmetry induced by CP-violation are small in magnitude compared to the current experimental precision, so non-zero measurements would indicate new sources of CP-violation. D0 has previously published three measurements of the CP-violating like-sign dimuon charge asymmetry in $p\overline{p}$ collisions at $\sqrt{s}$ = 1.96 TeV at the Fermilab Tevatron collider. These measurements were at integrated luminosities of 1 fb$^{-1}$ ~\cite{1 inv-fb}, 6.1 fb$^{-1}$ ~\cite{6 inv-fb}, and 9 fb$^{-1}$ ~\cite{9 inv-fb}, each observing $A_{CP}$ with differences from the standard model predictions of 1.7 to 3.9 $\sigma$ significance. This is one of only a few apparent inconsistencies with the standard model. The major questions to be answered are whether these observations of deviation from the standard model are real, whether our understanding of the SM is complete, and whether there is something else going on beyond the SM. A new analysis of the full Run II data set of 10.4 fb$^{-1}$ with improved background subtraction and upgraded analysis methodology is currently under collaboration review. It is not yet ready for public release, so I can only give a status report, show the checks performed using single inclusive muons, and give an indication of the expected sensitivities. The slides presented at DPF2013, with additional figures, are available at Reference \cite{talk}. \section{Theoretical Framework} In the standard model, one manifestation of CP-violation is in the mixing of the neutral $B$ mesons $B^0 \leftrightarrow \overline{B}^0$ and $B_S^0 \leftrightarrow \overline{B}_S^0$. This asymmetry can be observed in the decays of pairs of particles containing $b$ and $\overline{b}$ quarks. Pairs of $b$ and $\overline{b}$ quarks are produced symmetrically in the $p\overline{p}$ collisions. These quarks hadronize into pairs of $B$ and $\overline{B}$ particles, including baryons. For example, particles containing $b$ quarks can have the decay chain $b \rightarrow \mu^- + X$, while particles containing $\overline{b}$ quarks can have the decay chain $\overline{b} \rightarrow \mu^+ + X$. For these direct decays, the negative charge of this ``right-sign'' muon will tag the $b$ flavor of the parent quark. The other $\overline{b}$ quark can decay $\overline{b} \rightarrow \mu^+ + X$, producing an opposite-sign dimuon $\mu^+ \mu^-$ pair. However, for example, the parent $b$ quark could hadronize into a $\overline{B}^0$ which could then oscillate into a $B^0$ which then can decay into a ``wrong-sign'' $\mu^+ + X$, schematically $b \rightarrow \overline{b} \rightarrow \mu^+$. So the oscillation of either $B^0 \leftrightarrow \overline{B}^0$ or $B_S^0 \leftrightarrow \overline{B}_S^0$ can produce same sign dimuons. CP-violation occurs if the rate $\Gamma (B^0 \rightarrow \overline{B}^0)$ does not equal the rate $ \Gamma(\overline{B}^0 \rightarrow B^0)$, or $\Gamma(B_S^0 \rightarrow \overline{B}_S^0) \ne \Gamma(\overline{B}_S^0 \rightarrow B_S^0)$, which can produce an observable charge asymmetry for the like-sign dimuons, $\Gamma(\mu^+\mu^+) \ne \Gamma(\mu^-\mu^-)$.
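As a simple numerical illustration of the asymmetry definitions above, the following Python helper (a hypothetical sketch, not D0 analysis code) computes a raw charge asymmetry and its approximate binomial statistical uncertainty from event counts; the counts in the trailing comment are invented for illustration only.
\begin{verbatim}
import numpy as np

def charge_asymmetry(n_plus, n_minus):
    """Raw charge asymmetry (N+ - N-)/(N+ + N-), applicable to single
    muons (n(mu+), n(mu-)) or like-sign dimuons (N(mu+mu+), N(mu-mu-))."""
    n_tot = n_plus + n_minus
    asym = (n_plus - n_minus) / n_tot
    stat_err = np.sqrt((1.0 - asym**2) / n_tot)  # binomial statistical uncertainty
    return asym, stat_err

# hypothetical counts, for illustration only:
# a, da = charge_asymmetry(1_100_000_000, 1_099_000_000)
\end{verbatim}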
The sequential decays $b \rightarrow c \rightarrow \mu^+$ are background sources of wrong-sign muons. Using the Pythia ~\cite{pythia} simulation for the total number of muons from $b$-particles, D0 observes approximately 73\% $b \rightarrow \mu^-$; 11\% $b \rightarrow \overline{b} \rightarrow \mu^+$; and 16\% $b \rightarrow c \rightarrow \mu^+$. Previously, the only source of charge asymmetry for these like-sign dimuons in the standard model was considered to be via CP-violation in mixing. The predicted magnitude of this effect in D0 is $A_{CP}^{\rm mixing}$(SM) = (-0.008 $\pm$ 0.001)\%. Recently, however, G. Borissov and B. Hoeneisen ~\cite{interference} have calculated an additional CP-violating contribution due to the interference between processes involving identical CP-definite states that can be reached both via mixing and non-mixing paths. For example, a $B^0$ can produce a wrong sign $\mu^+$ via the CP-even final state $D^- D^+$ where the $D^+$ decays as $D^+ \rightarrow \mu^+ X$. The $B^0$ can produce $D^- D^+$ either directly or by first oscillating into $\overline{B}^0$ which can also decay into $D^- D^+$. There is interference and CP-violation between the two paths. This interference does not contribute to $a_{CP}$ for single muons since the rates for $D^+ \rightarrow \mu^+$ and $D^- \rightarrow \mu^-$ balance. To set the scale, for the D0 like-sign dimuon charge asymmetry, this interference term is calculated to be $A_{CP}^{\rm int}$(SM) = (-0.035 $\pm$ 0.008)\%, or about 4 times $ A_{CP}^{\rm mixing}$(SM). So $A_{CP}(SM)$ = $A_{CP}^{\rm mixing}$(SM) + $A_{CP}^{\rm int}$(SM) = (-0.043 $\pm$ 0.010)\% which can be compared with the previously measured $A_{CP}$(D0, 9 fb$^{-1}$) = (-0.276 $\pm$ 0.092)\% ~\cite{9 inv-fb}. $A_{CP}^{\rm int}$ is linearly dependent on $\Delta\Gamma_d/\Gamma_d$, the ratio of the difference of the widths of the light and heavy members of the mass eigenstates, $\Gamma(B_d^{\rm light})-\Gamma(B_d^{\rm heavy})$, to their average. This, then, gives the possibility of measuring $\Delta\Gamma_d/\Gamma_d$ in this analysis. The current World Average~\cite{HFAG-2012} for $\Delta\Gamma_d/\Gamma_d$ is (1.5 $\pm$ 1.8)\%, while the SM prediction~\cite{Lenz} is (0.42 $\pm$ 0.08)\%. It is anticipated that in this analysis, D0 will be able to measure $\Delta\Gamma_d/\Gamma_d$ to $\approx$ 1\% (absolute) precision. Interference between such mixed and non-mixed paths for $B_s^0$ is too small to be observable in the D0 data set. The analogous $\Delta\Gamma_s/\Gamma_s$ for $B_s^0$ is much smaller than for $B_d^0$ and is already well determined~\cite{HFAG-2012,Lenz}. \section{Experimental Situation} The three prior D0 analyses ~\cite{1 inv-fb,6 inv-fb,9 inv-fb} of the CP-violating like-sign dimuon asymmetry consistently measured $A_{CP}$ in the range -0.25 \% to -0.28 \% which differed from the predictions of the standard model (assuming mixing only, without the interference between the mixed and non-mixed paths) by significances of 1.7 to 3.9 $\sigma $. Table~1 shows the evolution of the D0 measurement of $A_{CP}$ with increasing integrated luminosity and sophistication of the analysis. The $A_{CP}$ results from this analysis of the full 10.4~fb$^{-1}$ data set, with reduced systematic uncertainties, will be compared with the sum of $A_{CP}^{\rm mixing} + A_{CP}^{\rm int}$.
\begin{table}[t] \begin{center} \begin{tabular}{l|ccc} $\int {\cal L}$ $dt$ & Asymmetry $A_{CP}$ & Deviation from SM & Reference \\ \hline 1.0 fb$^{-1}$ & (-0.28 $\pm$ 0.13 $\pm$ 0.09)\% & 1.7 $\sigma$ & ~\cite{1 inv-fb} (2006) \\ 6.1 fb$^{-1}$ & (-0.252 $\pm$ 0.088 $\pm$ 0.092)\% & 3.2 $\sigma $ & ~\cite{6 inv-fb} (2010) \\ 9.0 fb$^{-1}$ & (-0.276 $\pm$ 0.067 $\pm$ 0.063)\% & 3.9 $\sigma $ & ~\cite{9 inv-fb} (2011) \\ 10.4 fb$^{-1}$ & (???? $\pm$ 0.064 $\pm$ 0.055)\% & ? $\sigma $ & in preparation (2013) \\ \hline \end{tabular} \caption{Evolution of the D0 measurement of $A_{CP}$ for the like-sign dimuon charge asymmetry.} \label{tab:evolution} \end{center} \end{table} \section{Experimental Methodology} Why is D0 ~\cite{D0 detector} a good place to measure the like-sign dimuon charge asymmetry? The CP-symmetric initial $p\overline{p}$ state does not have a charge asymmetry in the central region, or when integrated over a symmetric range of $\pm$ $\eta$. Due to the large amount of hadronic absorption in the U-LAr Calorimeter and the tracking through the muon toroids ~\cite{D0 muon system}, D0 has excellent muon identification ~\cite{D0 muon reconstruction}. The magnetic field directions in the central tracking solenoid magnet ~\cite{D0 detector} and in the muon toroids ~\cite{D0 muon system} are cycled through all combinations on a regular basis, which allows for cancellation of first-order effects due to instrumental asymmetries. We observe a sample of 2.2 $\times$ $10^9$ single $\mu^\pm$, 2.2 $\times$ $10^7$ opposite sign $\mu^+ \mu^-$, and 6.2 $\times$ $10^6$ $\mu^\pm \mu^\pm$ like-sign dimuons. For analysis, the data are primarily divided into three muon (transverse) Impact Parameter (IP) bins (IP=1, 2, 3) corresponding to (0-50 $\mu m$), (50-120 $\mu m$), and (120-3000 $\mu m$), and a sum integrating over ALL IP. The muons of interest from $b$ decays are predominantly at large IP, while the muons from the decays of kaons and pions are predominantly at small IP, since the parent kaons and pions have already been tracked from the primary vertex before decaying. Each of these four IP sets is also sub-divided into a combination of nine ($p_\top,|\eta|$) bins: Bins \# 1-3: 0 $\le |\eta| \le$ 0.7, $p_\top$ = 4.2-5.6, 5.6-7, 7-25 GeV; Bins \# 4-5: 0.7 $\le |\eta| \le$ 1.2, $p_\top$ = 3.5-5.6, 5.6-25 GeV; Bins \# 6-9: 1.2 $\le |\eta| \le$ 2.2, $p_\top$ = 1.5-3.5, 3.5-4.2, 4.2-5.6, 5.6-25 GeV. Standard D0 single- and multi-muon triggers and analyses ~\cite{1 inv-fb,6 inv-fb,9 inv-fb} are used, along with slightly tighter requirements on tracking quality. To ensure that the muon candidates penetrate through the muon toroids, we require either $p_\top > $ 4.2 GeV or $|p_z| >$ 5.2 GeV. We also require $p_\top < $ 25 GeV to avoid muons from $W^\pm$ and $Z^0$ decays. The dimuon invariant mass $M_{\mu\mu}$ is required to be greater than 2.8 GeV to avoid cases in which both muons come from the decay chain of the same $b$ quark, $b \rightarrow \mu^- $ $\nu$ $c$ $(\rightarrow \mu^+)$. Based on the numbers of events observed in each muon charge configuration, $n^\pm$, $N^{++}$, and $N^{--}$, the raw (observed) asymmetries in each (IP, $p_\top, |\eta|$) bin are defined as: $A = (N^{++}-N^{--})$/$(N^{++}+N^{--})$ and $a = (n^+ - n^-)$/$(n^+ + n^-)$ for like-sign dimuons and for single inclusive muons, respectively. The background subtracted residual CP asymmetries are $a_{CP} = a - a_{bkg}$ and $A_{CP} = A - A_{bkg}$, where $a_{bkg} = a_\mu + f_K a_K + f_\pi a_\pi + a_p $.
$f_K$ is the fraction of charged kaons in the $\mu$ sample, measured using dedicated channels with final-state kaons reconstructed as muons. $a_K$ is the asymmetry due to the difference in the inelastic cross sections between $K^+$ and $K^-$. $f_K a_K$ is typically +0.62\% and is the dominant background term at low IP. $a_\mu$ is the muon detector charge asymmetry measured with $J/\psi \rightarrow \mu^+ \mu^-$. $a_\mu$ is typically -0.29\% and is the next most important background term. $f_\pi a_\pi$ and $a_p$ are considerably smaller. In the current analysis, $f_K$ and $f_\pi$ are cross-checked using tracks measured in both the central tracker and in the local muon detector trackers. The differences in the two measurements of the muon fractions are included in the systematic uncertainties. \section{Checks and Projections of Sensitivities} The standard model predicts magnitudes of the CP asymmetries $a_{CP}$ for single inclusive muons, in all of the (IP, $p_\top, |\eta|$) bins, that are well below the sensitivity limits of the D0 10.4 fb$^{-1}$ data. Therefore, D0 expects the measurements of these CP asymmetries for single inclusive muons to be consistent with zero. The single muon data serves as a closure test or consistency check that we are not generating false asymmetries through the apparatus, the acceptances, the analysis, or the background subtractions. To illustrate the preliminary resolutions and data scatter for the 10.4 fb$^{-1}$ data, Figure~1 shows the raw, observed asymmetries $a$ (upper histogram), measured background asymmetries $a_{bkg}$ (upper data points), and the background subtracted CP violating asymmetries $a_{CP} = a - a_{bkg}$ (lower data points) for each of the 9 ($p_\top, |\eta|$) bins, for ALL IP and for the three IP bins. The $a_{CP} = a - a_{bkg}$ plot for ALL IP demonstrates consistency with the expected zero asymmetry, along with the uncertainty spread for the average over the nine ($p_\top, |\eta|$) bins. \begin{figure}[htb] \centering \includegraphics[height=5.00in]{aCP_plots} \caption{The raw, background, and CP violating asymmetries for single inclusive muons for ALL IP and the three IP ranges and the 9 bins in $(p_\top,|\eta|)$.} \label{fig:aCP} \end{figure} Before the interference term was included in the like-sign dimuon phenomenology, the 9 fb$^{-1}$ analysis ~\cite{9 inv-fb} used only two IP bins, IP $<$ 120 $\mu m$ and IP $>$ 120 $\mu m$, and the sum over all IP. This produced the three linear correlation bands between $a_{sl}^d$ and $a_{sl}^s$, along with their correlated 68\% and 95\% CL uncertainty ellipses, in Figure~2. Given the fitting over three independent IP bins and the 9 ($p_\top, |\eta|$) bins, and the upgraded background subtraction and analysis, D0 anticipates that the \emph{areas} of these uncertainty ellipses will decrease by $\approx$ 44\%. The D0 direct measurements of $a_{sl}^d$ = (0.68 $\pm$ 0.47)\% from $B^0 \rightarrow D^{(*)-} \mu^+ X$ ~\cite{asld} and $a_{sl}^s$ = (-1.12 $\pm$ 0.76)\% from $B_s^0 \rightarrow D_s^- \mu^+ X$ ~\cite{asls} are overlaid for comparison. \begin{figure}[htb] \centering \includegraphics[height=3.90in]{aslq_overlay} \caption{The $a_{sl}^d$ vs. $a_{sl}^s$ measurement and uncertainty ellipse contours for the prior 9 fb$^{-1}$ D0 same-sign dimuon charge asymmetry analysis ~\cite{9 inv-fb} compared to the SM predictions.
The horizontal and vertical solid and dashed lines represent the measured central values and $\pm$ 1 $\sigma$ bands from recent D0 inclusive semi-leptonic decay measurements of $a_{sl}^d$ and $a_{sl}^s$~\cite{asld,asls}. } \label{fig:aslq_overlay} \end{figure} \section{Summary} D0 is preparing the final release of the CP-violating like-sign dimuon analysis based on the full 10.4 fb$^{-1}$ Run II data set. This result is anticipated to have an uncertainty on the asymmetry $A_{CP}$ of $\pm$ 0.084\% (stat. + syst.), allowing a more stringent comparison with the predictions of the standard model. This analysis will also decrease the area of the uncertainty ellipse for the semi-leptonic decay asymmetries $a_{sl}^d$ and $a_{sl}^s$ by $\approx$ 44\%. Remaining questions to be addressed are whether the entire like-sign dimuon charge asymmetry could be due to a large value of $\Delta \Gamma_d/\Gamma_d$; whether there are still missing SM contributions not included in the calculation of $A_{CP}$; and whether the D0 observation of significant deviations from the SM predictions is real. Addressing the latter will require verification by other experiments. The final D0 paper on CP violation for like-sign dimuons based on this analysis was submitted for publication in early October 2013 \cite{arXiv}. \Acknowledgments We thank the staffs at Fermilab and collaborating institutions, and acknowledge support from the DOE and NSF (USA); CEA and CNRS/IN2P3 (France); FASI, Rosatom and RFBR (Russia); CNPq, FAPERJ, FAPESP and FUNDUNESP (Brazil); DAE and DST (India); Colciencias (Colombia); CONACyT (Mexico); KRF and KOSEF (Korea); CONICET and UBACyT (Argentina); FOM (The Netherlands); STFC and the Royal Society (United Kingdom); MSMT and GACR (Czech Republic); CRC Program and NSERC (Canada); BMBF and DFG (Germany); SFI (Ireland); The Swedish Research Council (Sweden); and CAS and CNSF (China). I would also like to thank the members of the D0 Collaboration, and especially Guennadi Borissov (Lancaster University) and Bruce Hoeneisen (Universidad San Francisco de Quito), for their discussions, guidance, and for sharing their materials.
\section{Search for \boldmath{\ensuremath{C\!P}\xspace} Violation in the decay {\boldmath$D^+\to K_{S}^0\pi^+$}~\cite{DtoKspi}} \mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}~searched for \ensuremath{C\!P\!V}\xspace in the decay $D^\pm\to\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^\pm$ by measuring the parameter $A_{\ensuremath{C\!P}\xspace}$ defined as: \begin{equation} A_{\ensuremath{C\!P}\xspace}=\frac{\Gamma(D^+\to\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^+)-\Gamma(D^-\to\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^-)} {\Gamma(D^+\to\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^+)+\Gamma(D^-\to\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^-)}, \end{equation} where $\Gamma$ is the partial decay width for this decay. This decay mode has been chosen because of its clean experimental signature. Although direct \ensuremath{C\!P}\xspace violation due to interference between Cabibbo-allowed and doubly Cabibbo-suppressed amplitudes is predicted to be negligible within the SM~\cite{Lipkin:1999qz}, $\ensuremath{K^0}\xspace-\ensuremath{\Kbar^0}\xspace$ mixing induces a time-integrated \ensuremath{C\!P}\xspace violating asymmetry of $(-0.332\pm 0.006)\,\%$~\cite{Nakamura:2010zzi}. Contributions from non-SM processes may reduce the value of the measured $A_{\ensuremath{C\!P}\xspace}$ or enhance it up to the level of one percent~\cite{Lipkin:1999qz,Bigi:1994aw}. Therefore, a significant deviation of the $A_{\ensuremath{C\!P}\xspace}$ measurement from pure $\ensuremath{K^0}\xspace-\ensuremath{\Kbar^0}\xspace$ mixing effects would be evidence for the presence of new physics beyond the SM. Due to the smallness of the expected value, this measurement requires a large data sample and precise control of the systematic uncertainties. Previous measurements of $A_{\ensuremath{C\!P}\xspace}$ have been reported by the CLEO-c ($(-0.6\pm 1.0 \ensuremath{\mathrm{(stat)}}\xspace \pm 0.3 \ensuremath{\mathrm{(syst)}}\xspace)\%$~\cite{:2007zt}) and Belle collaborations ($(-0.71\pm 0.19 \ensuremath{\mathrm{(stat)}}\xspace \pm 0.20 \ensuremath{\mathrm{(syst)}}\xspace)\%$~\cite{Ko:2010ng}). We select $D^\pm\to\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^\pm$ decays by combining a $\ensuremath{K^0_{\scriptscriptstyle S}}\xspace$ candidate reconstructed in the decay mode $\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\to\pi^+\pi^-$ with a charged pion candidate. A \ensuremath{K^0_{\scriptscriptstyle S}}\xspace candidate is reconstructed from two oppositely charged tracks with an invariant mass within $\pm$ 10 MeV\ensuremath{/c^2}\xspace of the nominal \ensuremath{K^0_{\scriptscriptstyle S}}\xspace mass~\cite{Nakamura:2010zzi}. To obtain the final candidate events, a Boosted Decision Tree (BDT) algorithm~\cite{Speckmayer:2010zz} is constructed from seven discriminating variables for each $D^\pm$ candidate: the measured proper decay time $\tau(D^\pm)$, the decay distance in the transverse plane $L_{xy}(D^\pm)$, the CM momentum magnitude $p^*(D^\pm)$, the momentum magnitudes and transverse components with respect to the beam axis for both the \ensuremath{K^0_{\scriptscriptstyle S}}\xspace and pion candidates. A binned maximum likelihood (ML) fit to the $m(\ensuremath{K^0_{\scriptscriptstyle S}}\xspace \pi^\pm)$ distribution for the retained $D^\pm$ candidates is used to extract the signal yield. The total probability distribution function (PDF) is the sum of signal and background components. 
The signal PDF is modeled as a sum of three Gaussian functions, the first two of them with a common mean. The background PDF is taken as a sum of two components: a background from $D^\pm_s\to\ensuremath{K^0_{\scriptscriptstyle S}}\xspace K^\pm$, where the $K^\pm$ is misidentified as $\pi^\pm$, and a combinatorial background from other sources. The data and the fit are shown in Fig.~\ref{dtokspi}. All of the fit parameters are extracted from the fit to the data sample apart from the normalization of the background due to $D^\pm_s\to \ensuremath{K^0_{\scriptscriptstyle S}}\xspace K^\pm$, which is fixed to the value predicted by the MC simulation. \begin{figure}[tb] \begin{center} \includegraphics[width=8cm]{data_w_mD_all.eps} \vspace{-0.3cm} \caption{Invariant mass distribution for $\ensuremath{K^0_{\scriptscriptstyle S}}\xspace \pi^\pm$ candidates in the data (black points). The solid curve shows the fit to the data. The dashed line is the sum of all backgrounds, while the dotted line is the combinatorial background only. The vertical scale of the plot is logarithmic.} \label{dtokspi} \vspace{-0.7cm} \end{center} \end{figure} We determine $A_{\ensuremath{C\!P}\xspace}$ by measuring the signal yield asymmetry $A$ defined as: \begin{equation} A=\frac{N_{D^+}-N_{D^-}}{N_{D^+}+N_{D^-}}, \end{equation} where $N_{D^+}$ ($N_{D^-}$) is the number of fitted $D^+\to\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^+$ ($D^-\to\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^-$) decays. The quantity $A$ receives two other contributions in addition to $A_{\ensuremath{C\!P}\xspace}$. There is a physics component due to the forward-backward (FB) asymmetry ($A_{FB}$) in $\ensuremath{e^+e^-}\xspace\to\ensuremath{c\overline c}\xspace$, arising from $\gamma^*$-$Z^0$ interference and higher-order QED processes. This production asymmetry, combined with the forward-backward detection asymmetry arising from the boost of the center-of-mass (CM) system relative to the laboratory frame, creates a difference in the number of reconstructed $D^+$ and $D^-$ decays. There is also a detector-induced component due to the difference in the reconstruction efficiencies of $D^+\to \ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^+$ and $D^-\to \ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^-$, generated by differences in the track reconstruction and identification efficiencies for $\pi^+$ and $\pi^-$. While $A_{FB}$ is measured together with $A_{\ensuremath{C\!P}\xspace}$ using the selected dataset, we correct the dataset itself for the reconstruction and identification effects using control data sets. \mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}~developed a data-driven method to determine the charge asymmetry in track reconstruction as a function of the magnitude of the track momentum and its polar angle, which is shown along with the associated errors in Fig.~\ref{trackratio}. \begin{figure}[tb] \begin{center} \includegraphics[width=12cm]{plot_Ratio.eps} \vspace{-0.3cm} \caption{Map of the ratio of the detection efficiencies for $\pi^+$ and $\pi^-$ (top) and the corresponding statistical errors (bottom). The map is produced using the numbers of $\pi^-$ and $\pi^+$ tracks in the selected control sample.} \label{trackratio} \vspace{-0.7cm} \end{center} \end{figure} Neglecting the second-order terms that contain the product of $A_{\ensuremath{C\!P}\xspace}$ and $A_{FB}$, the resulting asymmetry can be expressed simply as the sum of the two.
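Before giving the per-bin formulas, the decomposition can be made concrete with a small illustrative Haskell sketch (our own notation, not \mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}\ analysis code): since the $A_{\ensuremath{C\!P}\xspace}$ term is even and the $A_{FB}$ term is odd in $\cos\theta^*_D$, the half-sum and half-difference of the asymmetries measured in mirrored $\cos\theta^*_D$ bins separate the two contributions, and the subsequent $\chi^2$ fit of $A_{\ensuremath{C\!P}\xspace}$ to a constant reduces to an inverse-variance weighted mean.
\begin{verbatim}
-- Separate the CP and FB components from the asymmetries measured in a
-- mirrored pair of cos(theta*) bins: A(+|c|) and A(-|c|).
separate :: (Double, Double) -> (Double, Double)
separate (aPos, aNeg) = ((aPos + aNeg) / 2, (aPos - aNeg) / 2)

-- chi^2 fit of per-bin values to a constant, i.e. an inverse-variance
-- weighted mean; returns the central value and its uncertainty.
weightedMean :: [(Double, Double)] -> (Double, Double)
weightedMean pts = (sum [v * w | (v, w) <- vw] / sw, sqrt (1 / sw))
  where vw = [(v, 1 / (s * s)) | (v, s) <- pts]
        sw = sum (map snd vw)

main :: IO ()
main = do
  print (separate (0.1, -0.9))                        -- placeholder inputs
  print (weightedMean [(-0.40, 0.20), (-0.35, 0.25)]) -- placeholder bins
\end{verbatim}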
The parameter $A_{\ensuremath{C\!P}\xspace}$ is independent of kinematic variables, while $A_{FB}$ is an odd function of $\cos\theta^*_D$, where $\theta^*_D$ is the polar angle of the $D^\pm$ candidate momentum in the $\ensuremath{e^+e^-}\xspace$ CM frame. If we compute $A(+|\cos\theta^*_D|)$ for the $D^\pm$ candidates in a positive $\cos\theta^*_D$ bin and $A(-|\cos\theta^*_D|)$ for the candidates in its negative counterpart, the contribution to the two asymmetries from $A_{\ensuremath{C\!P}\xspace}$ is the same, while the contribution from $A_{FB}$ has the same magnitude but opposite sign. Therefore $A_{\ensuremath{C\!P}\xspace}$ and $A_{FB}$ can be written as a function of $|\cos\theta^*_D|$ as follows: \begin{align} A_{FB}(|\cos\theta^*_D|) &= \frac{A(+|\cos\theta^*_D|) - A(-|\cos\theta^*_D|)}{2} \\ \intertext{and} A_{\ensuremath{C\!P}\xspace}(|\cos\theta^*_D|) &= \frac{A(+|\cos\theta^*_D|) + A(-|\cos\theta^*_D|)}{2}. \label{eq:AcpAfb_intro} \end{align} The selected sample is divided into ten subsamples corresponding to ten $\cos\theta^*_D$ bins of equal width and a simultaneous binned ML fit is performed on the invariant mass distributions of $D^+$ and $D^-$ candidates for each subsample to extract the signal yield asymmetries. Using the asymmetry measurements in five positive and in five negative $\cos\theta^*_D$ bins, we obtain five $A_{FB}$ and five $A_{CP}$ values. As $A_{CP}$ does not depend upon $\cos\theta^*_D$, we compute a central value of this parameter using a $\chi^2$ minimization to a constant. The $A_{\ensuremath{C\!P}\xspace}$ and $A_{FB}$ values are shown in Fig.~\ref{acp}, together with the central value and $\pm 1\,\sigma$ confidence interval for $A_{\ensuremath{C\!P}\xspace}$. We determine $A_{CP}$ to be: \begin{equation} A_{\ensuremath{C\!P}\xspace}=(-0.39 \pm 0.13 \pm 0.10)\% \end{equation} where the first error is statistical and the second systematic. \begin{figure}[tb] \begin{center} \includegraphics[width=0.4\textwidth,clip=true]{fitAcp_w.eps} \includegraphics[width=0.4\textwidth,clip=true]{fitAfb_w.eps} \vspace{-0.3cm} \caption{$A_{\ensuremath{C\!P}\xspace}$ (top) and $A_{FB}$ (bottom) asymmetries for $D^\pm\to\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\pi^\pm$ candidates as a function of $|\cos\theta^*_D|$ in the data sample. The solid line represents the central value of $A_{\ensuremath{C\!P}\xspace}$ and the hatched region is the $\pm1\,\sigma$ interval, both obtained from a $\chi^2$ minimization assuming no dependence on $|\cos\theta^*_D|$.} \label{acp} \vspace{-0.7cm} \end{center} \end{figure} \section{Search for {\boldmath\ensuremath{C\!P}\xspace} Violation using {\boldmath\ensuremath{T}\xspace}-Odd Correlations in {\boldmath$D^+_{(s)}\rightarrow K^0_S K^+\pi^+\pi^+$}~\cite{delAmoSanchez:2011fb}} A search for \ensuremath{C\!P}\xspace violation in the decays \ensuremath{\Dp \to \Kp \KS \pip \pim}\xspace and \ensuremath{\Ds \to \Kp \KS \pip \pim}\xspace using \ensuremath{T}-odd\xspace correlations is described here. We define a kinematic triple product that is odd under time reversal using the vector momenta of the final state particles in the \ensuremath{D^+_{(s)}}\xspace rest frame as \begin{equation} \ensuremath{C_T}\xspace \equiv \vec{p}_{\ensuremath{K^+}\xspace} \cdot \left( \vec{p}_{\ensuremath{\pi^+}\xspace} \times \vec{p}_{\ensuremath{\pi^-}\xspace} \right). \label{eq:Ct} \end{equation} Under the assumption of \ensuremath{C\!PT}\xspace invariance, \ensuremath{T}\xspace violation is equivalent to \ensuremath{C\!P}\xspace violation. 
We study the \ensuremath{T}-odd\xspace correlations by measuring the observable expressed in Eq.~(\ref{eq:Ct}) and then evaluating the asymmetry \begin{equation} \ensuremath{A_T}\xspace \equiv \frac{\Gamma(\ensuremath{C_T}\xspace>0) - \Gamma(\ensuremath{C_T}\xspace<0)}{\Gamma(\ensuremath{C_T}\xspace>0) + \Gamma(\ensuremath{C_T}\xspace<0)}, \label{eq:At} \end{equation} where $\Gamma$ is the decay rate for the process under study. The observable defined in Eq.~(\ref{eq:At}) can have a non-zero value due to final state interactions even if the weak phases are zero~\cite{Bigi:2009zzb}. The \ensuremath{T}-odd\xspace asymmetry measured in the \ensuremath{C\!P}\xspace-conjugate decay process, \ensuremath{\bar{A}_T}\xspace, is defined as: \begin{equation} \ensuremath{\bar{A}_T}\xspace \equiv \frac{\Gamma(-\ensuremath{\bar{C}_T}\xspace>0) - \Gamma(-\ensuremath{\bar{C}_T}\xspace<0)}{\Gamma(-\ensuremath{\bar{C}_T}\xspace>0) + \Gamma(-\ensuremath{\bar{C}_T}\xspace<0)}, \label{eq:Atb} \end{equation} where $\ensuremath{\bar{C}_T}\xspace\equiv \vec{p}_{\ensuremath{K^-}\xspace} \cdot \left( \vec{p}_{\ensuremath{\pi^-}\xspace} \times \vec{p}_{\ensuremath{\pi^+}\xspace} \right)$. We can then construct: \begin{equation} \ensuremath{\mathcal{A}_T}\xspace \equiv \frac{1}{2}\left( \ensuremath{A_T}\xspace - \ensuremath{\bar{A}_T}\xspace \right), \label{eq:Atv} \end{equation} which is an asymmetry that characterizes \ensuremath{T}\xspace violation in the weak decay process~\cite{Bensalem:2002ys,Bensalem:2002pz,Bensalem:2000hq}. At least four different particles are required in the final state so that the triple product may be defined using momentum vectors only~\cite{Golowich:1988ig}. The $D$ meson decays suitable for this analysis method are \ensuremath{\Dp \to \Kp \KS \pip \pim}\xspace, \ensuremath{\Ds \to \Kp \KS \pip \pim}\xspace and $\ensuremath{D^0}\xspace\to\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$. The search for \ensuremath{C\!P}\xspace violation using \ensuremath{T}-odd\xspace correlations in $\ensuremath{D^0}\xspace\to\ensuremath{K^+}\xspace\ensuremath{K^-}\xspace\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$ has recently been carried out by the \mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}\ Collaboration, and no evidence of \ensuremath{C\!P}\xspace violation has been observed~\cite{delAmoSanchez:2010xj}. The \ensuremath{D^+}\xspace and \ensuremath{D^+_s}\xspace meson decay candidates are reconstructed in the production and decay sequence: \begin{equation} \ensuremath{e^+e^-}\xspace\to X\ensuremath{D^+_{(s)}}\xspace; \ensuremath{D^+_{(s)}}\xspace\to\ensuremath{K^+}\xspace\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace; \ensuremath{K^0_{\scriptscriptstyle S}}\xspace\to\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace, \label{eq:reaction} \end{equation} using the events with at least five charged particles. To obtain the final set of signal candidates, the \ensuremath{p^*}\xspace, the difference in vertex probabilities that the parent meson originates from a common vertex and the primary vertex, and the signed transverse decay length are combined in a likelihood-ratio test. Fig.~\ref{fig:fig2} shows the resulting $\ensuremath{K^+}\xspace\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$ mass spectra in the \ensuremath{D^+}\xspace and \ensuremath{D^+_s}\xspace regions. 
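The triple-product observables above are straightforward to evaluate once the final-state momenta are known in the \ensuremath{D^+_{(s)}}\xspace rest frame. The Haskell sketch below (our own illustrative code, not the \mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}\ implementation) computes \ensuremath{C_T}\xspace per candidate and a simple counting asymmetry; in the analysis itself the yields entering \ensuremath{A_T}\xspace and \ensuremath{\bar{A}_T}\xspace are instead obtained from the simultaneous mass fits described next.
\begin{verbatim}
type Vec3 = (Double, Double, Double)

cross :: Vec3 -> Vec3 -> Vec3
cross (ax, ay, az) (bx, by, bz) =
  (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

dot :: Vec3 -> Vec3 -> Double
dot (ax, ay, az) (bx, by, bz) = ax * bx + ay * by + az * bz

-- C_T = p_K . (p_pi+ x p_pi-), with momenta in the D rest frame.
cT :: Vec3 -> Vec3 -> Vec3 -> Double
cT pK pPiPlus pPiMinus = dot pK (cross pPiPlus pPiMinus)

-- Counting asymmetry between C_T > 0 and C_T < 0 candidates.
asym :: [Double] -> Double
asym cts = (np - nm) / (np + nm)
  where np = fromIntegral (length (filter (> 0) cts))
        nm = fromIntegral (length (filter (< 0) cts))

-- The CP-violating combination (A_T - Abar_T) / 2.
aTcp :: Double -> Double -> Double
aTcp at atBar = (at - atBar) / 2

main :: IO ()
main = print (cT (1, 0, 0) (0, 1, 0) (0, 0, 1))   -- placeholder momenta
\end{verbatim}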
For each region, the signal is described by the superposition of two Gaussian functions with a common mean value. The background is parametrized by a first-order polynomial in the \ensuremath{D^+}\xspace region, and by a second-order polynomial in the \ensuremath{D^+_s}\xspace region. \begin{figure} \centering \includegraphics[width=6cm]{fitDp.eps} \includegraphics[width=6cm]{fitDs.eps} \caption{\label{fig:fig2} The $\ensuremath{K^+}\xspace\ensuremath{K^0_{\scriptscriptstyle S}}\xspace\ensuremath{\pi^+}\xspace\ensuremath{\pi^-}\xspace$ mass spectrum a) in the \ensuremath{D^+}\xspace, and b) in the \ensuremath{D^+_s}\xspace mass region. The curves result from the fits described in the text. The distributions of the Pull values are also shown.} \end{figure} We extract the integrated yields $N(\ensuremath{D^+}\xspace) = 21210 \pm 392$ and $N(\ensuremath{D^+_s}\xspace) = 29791 \pm 337$ from the fits, where the uncertainties are statistical only. We next divide the data sample into four sub-samples depending on the $D_{(s)}$ charge and on whether \ensuremath{C_T}\xspace (\ensuremath{\bar{C}_T}\xspace) is greater or less than zero, and fit the corresponding mass spectra simultaneously to extract the yields and the values of the asymmetry parameters \ensuremath{A_T}\xspace and \ensuremath{\bar{A}_T}\xspace. The triple product asymmetries for the Cabibbo-suppressed decays $\ensuremath{D^0}\xspace\to K^+K^-\pi^+\pi^-$~\cite{delAmoSanchez:2010xj} and \ensuremath{\Dp \to \Kp \KS \pip \pim}\xspace, and for the Cabibbo-favored decays \ensuremath{\Ds \to \Kp \KS \pip \pim}\xspace, are summarized in Tab.~\ref{tab:todd}. The average of the triple product asymmetries is also included in the table: \begin{equation} \Sigma_T = \frac{1}{2}(\ensuremath{A_T}\xspace + \ensuremath{\bar{A}_T}\xspace). \end{equation} This quantity is not a CP-violating parameter, but may provide more information on the final-state interactions in these decays. \begin{table}[h] \begin{center} \caption{Triple-product asymmetries \ensuremath{A_T}\xspace, \ensuremath{\bar{A}_T}\xspace, \ensuremath{\mathcal{A}_T}\xspace, and $\Sigma_T$ for the Cabibbo-suppressed decays $\ensuremath{D^0}\xspace\to K^+K^-\pi^+\pi^-$~\cite{delAmoSanchez:2010xj}, \ensuremath{\Dp \to \Kp \KS \pip \pim}\xspace~\cite{delAmoSanchez:2011fb} and the Cabibbo-favored decays \ensuremath{\Ds \to \Kp \KS \pip \pim}\xspace~\cite{delAmoSanchez:2011fb}. The values are quoted in units of $10^{-3}$.} \begin{tabular}{|l|c|c|c|} \hline \hline Asymmetry & \ensuremath{D^0}\xspace/\ensuremath{\Dbar^0}\xspace & \ensuremath{D^+}\xspace/\ensuremath{D^-}\xspace & \ensuremath{D^+_s}\xspace/\ensuremath{D^-_{s}}\xspace \\ \hline \ensuremath{A_T}\xspace & -68.5 $\pm$ 7.3 $\pm$ 5.8 & 11.2 $\pm$ 14.1 $\pm$ 5.7 & -99.2 $\pm$ 10.7 $\pm$ 8.3 \\ \ensuremath{\bar{A}_T}\xspace & -70.5 $\pm$ 7.3 $\pm$ 3.9 & 35.1 $\pm$ 14.3 $\pm$ 7.2 & -72.1 $\pm$ 10.9 $\pm$ 10.7 \\ \ensuremath{\mathcal{A}_T}\xspace & 1.0 $\pm$ 5.1 $\pm$ 4.4 & -12.0 $\pm$ 10.0 $\pm$ 4.6 & -13.6 $\pm$ 7.7 $\pm$ 3.4 \\ $\Sigma_T$ & -69.5 $\pm$ 6.2 & 23.1 $\pm$ 11.0 & -85.6 $\pm$ 10.2 \\ \hline \hline \end{tabular} \label{tab:todd} \end{center} \end{table} The final measurements of \ensuremath{\mathcal{A}_T}\xspace in all decays are consistent with zero; however, the values of the \ensuremath{T}\xspace-odd asymmetries are considerably larger in \ensuremath{D^0}\xspace and \ensuremath{D^+_s}\xspace decays. The differences in these values for the various decays may indicate a difference in the final-state interactions.
The final-state interactions may be responsible for the hierarchy of lifetimes and branching fractions~\cite{GronauRosner}. \section{Introduction} In the Standard Model (SM), \ensuremath{C\!P}\xspace violation (\ensuremath{C\!P\!V}\xspace) arises from the complex phase of the CKM quark-mixing matrix~\cite{Kobayashi:1973fr}. Measurements of the \ensuremath{C\!P\!V}\xspace asymmetries in the $K$ and $B$ meson systems are consistent with expectations based on the SM and, together with theoretical inputs, lead to the determination of the parameters of the CKM matrix. \ensuremath{C\!P\!V}\xspace has not yet been observed in the charm sector, where the theoretical predictions based on the SM for \ensuremath{C\!P\!V}\xspace asymmetries are at the level of $10^{-3}$ or below~\cite{Buccella:1994nf}. An observation of \ensuremath{C\!P}\xspace asymmetries at the level of one percent or greater would be a clear indication of new physics. \input{CPV.tex} \section{Conclusion} Measurements with the final \mbox{\slshape B\kern-0.1em{\smaller A}\kern-0.1em B\kern-0.1em{\smaller A\kern-0.2em R}}~dataset reach a precision at the level of the SM prediction for \ensuremath{C\!P}\xspace violation in charm decays. The systematic uncertainties are at the level of the statistical uncertainties. Current and future measurements from LHCb, Belle, and SuperB will face the challenge of reducing these systematic uncertainties. \bigskip \input{bib.tex} \end{document}
\section{Introduction} The definition of functions by recursion on the description of datatypes is the basic idea of generic programming. This method is based on defining a datatype, introduced as the \emph{universe}~\cite{mlof84:bibliopolis}, which contains datatype descriptions, such as ``a list is either empty or a pair consisting of a parameter and a sublist''. Indeed, the universe constructors correspond to the common notions ``either'', ``pair'', ``parameter'' and ``substructure'' abstracted out of informal descriptions such as the preceding one. Then a decoding function is introduced, which interprets instances of the universe, usually called universe \emph{codes}, into actual datatypes. In a dependently typed setting we can define generic functions over the universe of codes and the associated decoded datatypes. In other words, the codes provide enough information to properly traverse the structure of decoded datatype instances. Thereby, traversal becomes an operation that can be described for any recursive structure by means of generic iteration and recursion principles and, thus, code duplication is avoided by abstracting out basic operations on the datatypes. One can wonder how many other practical behaviors can undergo this kind of generalisation. In this work we introduce a universe of \emph{regular trees}~\cite{Morris:2004} extended with variables (i.e. names) and binding information. We first define generic formation and elimination (i.e. induction/recursion) operators over this universe. The inclusion of names and the notion of locality allow us to introduce a generic \ensuremath{\alpha}-equivalence relation, which we choose to base on name-swapping. Then we derive \ensuremath{\alpha}-iteration and induction principles that capture Barendregt’s Variable Convention (BVC), which allows one to proceed in proofs and definitions by conveniently choosing bound names so as to avoid conflicts. At this generic level we are able to prove several properties, mainly concerning the interaction of the iteration and recursion principles with the swapping operation and \ensuremath{\alpha}-equivalence relation. We next obtain \ensuremath{\lambda}-calculus\ and System F as instances of our universe and define corresponding substitution operations as instances of the generic \ensuremath{\alpha}-iteration principle. We are thereby able to derive the lemmas on compatibility of substitution with \ensuremath{\alpha}-equivalence by direct instantiation of the generic properties referred to above. Finally, we prove the substitution composition lemma for System F, showing how our approach allows us to mimic the BVC, proving particular results on instances of the framework as is usually done in pen-and-paper style. \subsection{Related work} \label{sec:relatedwork} Programming languages supporting native constructions to declare and manipulate abstract syntax with binders are presented by Shinwell et al. in~\cite{ShinwellPG03,SHINWELL200653}, where an ML extension, \emph{FreshML}, and an OCaml extension, \emph{Fresh O'Caml}, are respectively developed. These languages allow one to deconstruct datatypes with binders in a safe way, that is, in the case of an abstraction inspection, a renaming with a freshly generated binder is computed for the abstraction body. In this way, the language user has access only to a fresh binder, and the renamed body of the opened abstraction. This mechanism guarantees that values with binders are operationally equivalent if they represent \ensuremath{\alpha}-equivalent objects.
This result is proved in~\cite{ShinwellPG03} by introducing a denotational semantics of the object language FreshML into FM-sets (Fraenkel and Mostowski's sets). They prove that this denotational semantics matches the operational one. In this way, they are able to prove that values of the introduced abstract syntax with binders properly represent \ensuremath{\alpha}-equivalence classes of the object-level syntax. In~\cite{Cheney} Cheney carries out a similar work, but instead of developing a language extension, he implements a Haskell library called \emph{FreshLib}. As the author does not implement a language from scratch, this work introduces generic programming techniques in its implementation to support the required level of genericity. All previous works address common operations dealing with general structures with binders. Although some of these developments give proofs about the soundness of their approaches, their main concern is the implementation of meta-programs. In~\cite{Lee2012}, Lee et al. use generic programming techniques to develop mechanisations of formal meta-theory in the Coq proof assistant. This work allows the user to choose among nominal, locally nameless, or de Bruijn first-order syntax. For each of these representations, they offer several infrastructure operations and their associated lemmas. For instance, in the locally nameless setting, two different substitutions are needed, for bound and free variables respectively. In the case of System F, where both term and type variables have binding constructions, this representation involves six different substitution operations. Hence, as the number of syntactic sorts supporting binding constructions increases in the object language, there is a combinatorial explosion of the number of operations and lemmas involved in its formalisation for the locally nameless and de Bruijn first-order syntaxes. They manage to address these issues by defining these operations and associated lemmas in a generic, re-usable way. Moreover, they provide a small annotation language to describe the binding structure of the object language, from which they can automatically derive an isomorphism between the object language and their generic universe syntax. However, introducing inductive relations in this framework requires the user to provide a mapping between the concrete relation, defined at the object language level, and the generic relation. They are able to instantiate some cases of the POPLmark challenge~\cite{Aydemir2005} in their framework, validating their approach both for the locally nameless and the de Bruijn first-order syntax, and comparing some metrics of their approach against other solutions. However, their particular choice of universe makes it impossible to have more than one sort of binder per datatype. Hence, they cannot represent in their setting a language such as Session Types~\cite{YOSHIDA200773}, where there exist three distinct sorts of binders: parameters, channels and ports within a concurrent calculus. We believe their work addresses reuse and usability well, but lacks extensibility and abstraction. By using this framework it is possible to reuse several operations and lemmas that hide some of the work required by the underlying binder representation. However, in order to introduce new operations and prove results, the user may have to deal with the underlying generic abstract syntax language.
Their work seems to support the nominal syntax, although no \ensuremath{\alpha}-conversion relation, nor any other classic relation, property or function over named terms, is presented. Indeed, they do not further develop the nominal syntax beyond the basic definitions of a nominal abstract syntax. In~\cite{licata-harper-09}, Licata and Harper codify a universe that mixes binding and computation constructions in Agda, where computations are represented as meta-level functions injected into the universe constructions, i.e., they embed a HOL syntax in their development. Their representation is based on a well-scoped de Bruijn representation, that is, de Bruijn terms associated with a context indexing the free variables. For this universe, they provide a generic substitution operation, and prove context weakening and strengthening lemmas. In 2002 Pitts and Gabbay introduced \emph{Nominal Logic}~\cite{GP02:newapproach}, a first-order many-sorted logic with equality, containing primitives for renaming via name-swapping, for freshness of names, and for name-binding. The swapping operation has much nicer logical properties than the more general, non-bijective forms of renaming. This operation provides a sufficient foundation for a theory of structural induction/recursion for the syntax modulo \ensuremath{\alpha}. In~\cite{UrbanT05}, Urban and Tasson use ideas from Nominal Logic to construct a set of \ensuremath{\lambda}-calculus terms modulo alpha, that is, identifying \ensuremath{\alpha}-convertible terms. The construction is based on a HOAS syntax on top of Isabelle/HOL, deriving recursion and induction principles over this quotient set. Our main motivation is to show it is feasible to formalise within constructive type theory \ensuremath{\alpha}-iteration/induction principles for a classical named syntax, deriving these principles from just simple structural induction on first-order terms, where equality remains the simple definitional one, and without performing any kind of quotient on terms. Then, we want to study how feasible it is to use this development in practical examples. This work is structured as follows: in Section~\ref{sec:rtrees} we present our regular tree universe, in Section~\ref{sec:nameswapping} we introduce name-swapping, in Section~\ref{sec:alpha} we give an \ensuremath{\alpha}-conversion relation and iteration/induction schemes modulo \ensuremath{\alpha}, which allow us to define the capture-avoiding substitution operation and automatically derive some of its basic properties. In Section~\ref{sec:bvc} we introduce the proof technique that mimics the BVC, and we apply it in the substitution composition lemma. Finally, in the last section we discuss related work and conclusions. We carry out the whole development within Constructive Type Theory as implemented in the system Agda~\cite{Norell2009}. We will show fragments of the Agda code, the complete version being available at: \href{https://github.com/ernius/genericBindingFramework}{https://github.com/ernius/genericBindingFramework}. \section{Universe of Regular Trees with Binders} \label{sec:rtrees} \subsection{Universe of (Codes of) Functors} We choose to build up a universe whose objects are codes to be interpreted as functions from \AgdaDatatypeFoot{Set} to \AgdaDatatypeFoot{Set}\footnote{\AgdaDatatypeFoot{Set} is the type of (small) datatypes.}, i.e. as \emph{functors}. The actual datatypes generated by this mechanism arise as fixed points of such functors.
To this effect, we introduce in Figure~\ref{fig:regulartree}:\begin{itemize} \item the datatype \AgdaDatatypeFoot{Functor} of codes, and \item the (mutually recursive-inductive) definition of the decoding function \AgdaFunctionFoot{⟦\_⟧} and of the actual datatype \AgdaDatatype{μ}\,$F$ associated with any given functor code $F$. \end{itemize} \begin{figure}[ht] \AgdaTarget{Functor} \begin{minipage}[t]{0.45\linewidth} \vspace{0.5em} \small\ExecuteMetaData[GenericProgramming/GPBindings.tex]{functor} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \small \ExecuteMetaData[GenericProgramming/GPBindings.tex]{interpret} \AgdaTarget{μ} \small\ExecuteMetaData[GenericProgramming/GPBindings.tex]{mu} \end{minipage} \caption{Regular tree universe with binders.} \vspace{-1em} \label{fig:regulartree} \end{figure} Notice first that the (inductive) definition of \AgdaDatatype{μ}\,$F$ by means of the constructor $\langle\_\rangle$ indeed introduces it as the least fixed point of the functor corresponding to the code $F$. Now let us examine the codes and corresponding functors. The first three constructors of datatype \AgdaDatatypeFoot{Functor} in Figure~\ref{fig:regulartree} represent the embedding of: the unit type, a recursive position, and an arbitrary (i.e. externally given) datatype. The fourth constructor embeds a datatype representable in our universe, while the next two constructors represent the sum and product of types. Finally, the last two introduction rules are specific to our desired domain of abstract syntaxes with binders. As our framework supports different sorts of names, the variable and binder constructors receive as a parameter an identifier of the sort of variables that they respectively introduce or bind. The binder constructor also receives the descriptor of the structure serving as scope of the bound variable. In many cases of interest (e.g. the \ensuremath{\lambda}-calculus\ and System F to be examined shortly) this subterm descriptor will be just a recursive position. However, a compound structure is often needed, as is the case in e.g. languages with a \texttt{letrec} primitive. We can observe how the variable and binder constructions inject a fixed set of variables (names) $V$\ into the interpreted datatype. This set $V$\ is assumed to be infinite with a decidable equality. Notice too that in the cases of the variable injection and the binder functors the sort argument $S$\ has no impact on the interpreted set. Indeed, there is only one set of names, $V$. The sort identifier will be relevant to implement generic operations related to binding issues, as shown in subsequent sections. For example, the types of natural numbers and of lists of natural numbers can be defined as follows:\\ \vspace{-1em} \begin{minipage}[b]{0.4\linewidth} \qquad $\text{FNat} = |1|\ |{+}|\ \text{|R|}$ \end{minipage} \begin{minipage}[b]{0.4\linewidth} $\text{FListNat} = |1|\ |{+}|\ (\text{|Ef|}\ \text{FNat})\ |{\times}|\ \text{|R|}$ \end{minipage} \begin{minipage}[b]{0.4\linewidth} \qquad $\text{Nat} = \mu\ \text{FNat}$ \end{minipage} \begin{minipage}[b]{0.4\linewidth} $\text{ListNat} = \mu\ \text{FListNat}$ \end{minipage} \vspace{-1em} \hfill In Figure~\ref{fig:lcalc} we illustrate the use of the variable and binder constructions by encoding the $\lambda$-calculus. We show the corresponding classical concrete syntax definition using comments, written after a dash to the right of each line.
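Before examining that encoding, and for readers less familiar with Agda, the underlying idea can also be transcribed into a (non-dependently typed) Haskell sketch. The fragment below covers only the plain part of the universe (unit, recursive position, sums, products, and a single embedded type, here fixed to \texttt{Int}), leaving out variables and binders; all names (\texttt{Code}, \texttt{Interp}, \texttt{Mu}, \texttt{foldMu}, \dots) are ours and merely illustrative, and the generic fold is already written in the fused map/fold style adopted later for the Agda development.
\begin{verbatim}
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies,
             TypeOperators, ScopedTypeVariables #-}
import Data.Kind (Type)

-- Codes for functors: unit, recursive position, one embedded datatype,
-- sums and products (variables and binders are omitted in this sketch).
data Code = U | R | E | Code :+: Code | Code :*: Code

-- Run-time singletons for codes, so generic functions can inspect them.
data SCode (c :: Code) where
  SU    :: SCode 'U
  SR    :: SCode 'R
  SE    :: SCode 'E
  (:++) :: SCode f -> SCode g -> SCode (f ':+: g)
  (:**) :: SCode f -> SCode g -> SCode (f ':*: g)

-- Interpretation of a code as a functor in the recursive position x.
type family Interp (c :: Code) (x :: Type) :: Type where
  Interp 'U         x = ()
  Interp 'R         x = x
  Interp 'E         x = Int
  Interp (f ':+: g) x = Either (Interp f x) (Interp g x)
  Interp (f ':*: g) x = (Interp f x, Interp g x)

-- Least fixed point of the interpreted functor.
data Mu (c :: Code) = In (Interp c (Mu c))

-- Generic fold, fused with the map over the auxiliary code g.
foldMu :: forall f a. SCode f -> (Interp f a -> a) -> Mu f -> a
foldMu sf alg (In t) = alg (go sf t)
  where
    go :: forall g. SCode g -> Interp g (Mu f) -> Interp g a
    go SU        ()        = ()
    go SR        (In u)    = alg (go sf u)
    go SE        n         = n
    go (l :++ _) (Left x)  = Left  (go l x)
    go (_ :++ r) (Right y) = Right (go r y)
    go (l :** r) (x, y)    = (go l x, go r y)

-- Natural numbers:  FNat = |1| |+| |R|,  Nat = mu FNat.
type FNat = 'U ':+: 'R

sFNat :: SCode FNat
sFNat = SU :++ SR

zeroN, twoN :: Mu FNat
zeroN = In (Left ())
twoN  = In (Right (In (Right zeroN)))

-- A first generic program: convert a generic natural number to an Int.
toInt :: Mu FNat -> Int
toInt = foldMu sFNat (either (const 0) (+ 1))

main :: IO ()
main = print (toInt twoN)   -- prints 2
\end{verbatim}
We now return to the encoding of the $\lambda$-calculus in Figure~\ref{fig:lcalc}.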
This definition has only one sort of variables identified with the sort \AgdaDatatypeFoot{SortλTermVars}. \begin{figure}[h!] \begin{minipage}[t]{0.6\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{lambdacalculus} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{lambdacalculusmu} \end{minipage} \vspace{-1em} \caption{\ensuremath{\lambda}-calculus.} \label{fig:lcalc} \end{figure} \vspace{-.5em} We next introduce notation resembling the concrete syntax of the \ensuremath{\lambda}-calculus\ and hiding away our universe code constructions.\\ \vspace{-1em} \begin{minipage}{0.2\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{lambdaconstvar} \end{minipage} \begin{minipage}{0.37\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{lambdaconstapp} \end{minipage} \begin{minipage}{0.33\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{lambdaconstlam} \end{minipage} \hfill Next we present the codification of the System F. As this language also needs bindings at the type level, this encoding illustrates the use of two distinct sorts of identifiers, namely \AgdaDatatypeFoot{SortFTypeVars} and \AgdaDatatypeFoot{SortλTermVars}: \begin{minipage}[t]{0.65\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/SystemF.tex]{systemF} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/SystemF.tex]{systemFmuty} \\ \\ \\ \small\ExecuteMetaData[GenericProgramming/Examples/SystemF.tex]{systemFmutrm} \end{minipage} \hfill In the preceding constructions we have chosen a simplification of the universe of regular tree datatypes presented in~\cite{Morris:2004}, where recursive types are represented using $\mu$-types (from~\cite{Pierce:2002}). However, instead of the nominal approach traditionally used with recursive type binders, they use a well-scoped de Bruijn representation. Therefore, in order to properly interpret the full universe, a definition indexed by a context with the multiple $\mu$-recursive positions definitions is required. Our representation in Agda (Figure~\ref{fig:regulartree}), simplifies this burden at the expense of not being able to represent mutually recursive datatypes. In other words, our universe construction has expressive power equivalent to admitting only a single top-level $\mu$-recursive type binder. \vspace{-1em} \subsection{Map and Fold} \label{sec:map-fold} The classical definition of \emph{fold} based on \emph{map} that is usually introduced in category theory does not pass Agda's termination checker. The recursive call to fold is hidden inside a call to map, and because of this the termination checker cannot determine how map is using it. To make the fold operation pass the termination checker we have to fuse map and fold into a single function, as done in~\cite{Norell2009} for a similar regular tree universe. In Figure~\ref{fig:foldt} we show our implementation of the function \AgdaFunctionFoot{foldmap}. We make use of Agda's implicit arguments feature, denoted by curly braces, to omit terms that the type checker can figure out for itself. For instance, we declare the $A$ set argument as an implicit argument. 
The presented \AgdaFunctionFoot{foldmap} function needs to keep two functors, since the fold (recursive) part works always over the same functor argument $F$, while, for the map part, the auxiliary functor argument $G$\ gives the position of functor $F$ during the traversal of the structure. Therefore, this function only uses the functor $F$\ in the recursive case rule (the \AgdaInductiveConstructorFoot{|R|} case) in which the right hand side expression basically begins a new traversal of the functor $F$, in a way similar to the original definition of fold. It does so by providing with a fresh copy of $F$\ in the position of the auxiliary argument $G$. The rest of the rules are equivalent to a map over the functor $G$. Note that this definition terminates because the argument of type \AgdaDatatypeFoot{⟦ G ⟧ (μ F)} decreases in each recursive call. The new \AgdaFunctionFoot{fold} operation is defined as a recursive instance of \AgdaFunctionFoot{foldmap}. \begin{figure}[h!] \small\ExecuteMetaData[GenericProgramming/GPBindings.tex]{foldmap} \\ \small\ExecuteMetaData[GenericProgramming/GPBindings.tex]{foldmap2} \vspace{-2em} \caption{Terminating fold operation.} \vspace{-.5em} \label{fig:foldt} \end{figure} As an example, we define a function \AgdaFunctionFoot{vars} that counts the number of variable occurrences in a term of the $\lambda$-calculus. We do so by instantiating it as a case of the \AgdaFunctionFoot{fold} operation in fig.~\ref{fig:vars} \begin{figure}[h!] \begin{minipage}{0.45\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{varsfold1} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{varsfold2} \end{minipage} \caption{Fold application example.} \label{fig:vars} \vspace{-1em} \end{figure} Next we present a particular useful instantiation of the fold operator, named \AgdaFunction{foldCtx}. This instantiation aims at reproducing some techniques related to the nominal syntax considered in our work. We introduce an extra argument with type \AgdaDatatypeFoot{μ C}, which is used by the folded function $f$. This function is partially applied to this extra argument, and then passed as an argument of fold. Hence, this argument acts as an explicit invariant context for the function $f$\ through the entire fold operation. Another difference with the original fold operation is that the result of this instance is a datatype \AgdaDatatypeFoot{μ H} encoded in our universe instead of an arbitrary set. \\ \vspace{-1em} {\small\ExecuteMetaData[GenericProgramming/GPBindings.tex]{foldCtx}} \vspace{-1em} From this fold instance we can directly derive the naive substitution operation for the \ensuremath{\lambda}-calculus. In order to do this, we next give the functor descriptor \AgdaDatatypeFoot{cF} for the context argument. It represents the pair formed by the variable to be replaced and the substituted term: \\ \vspace{-1em} {\small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{substcontext}} \vspace{-1em} Next we define the function \AgdaFunctionFoot{substaux} (Figure~\ref{fig:substaux}) to be folded which, given a term structure with the results of the recursive calls in its recursive positions, constructs the final result of the substitution. For the variable case, we check, as usual, whether the substitution is to be applied, whereas the application and abstraction cases directly reconstruct the corresponding terms from the recursive call. 
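The same ``substitution as an iteration with an invariant context'' recipe can be sketched on a concrete first-order syntax. The short Haskell transcription below is only illustrative (the names \texttt{foldTerm}, \texttt{vars} and \texttt{substNaive} are ours); it plays the role of \AgdaFunctionFoot{fold} and \AgdaFunctionFoot{foldCtx} instantiated to the $\lambda$-calculus.
\begin{verbatim}
type Name = String

data Term = V Name | App Term Term | Lam Name Term
  deriving Show

-- Plain iteration over terms: the argument functions only ever see the
-- results of the recursive calls, never the original subterms.
foldTerm :: (Name -> a) -> (a -> a -> a) -> (Name -> a -> a) -> Term -> a
foldTerm v a l = go
  where go (V x)     = v x
        go (App t u) = a (go t) (go u)
        go (Lam x t) = l x (go t)

-- Counting variable occurrences, as in the 'vars' example above.
vars :: Term -> Int
vars = foldTerm (const 1) (+) (const id)

-- Naive (capture-permitting) substitution [x := n], obtained as an
-- iteration whose invariant context is the pair (x, n).
substNaive :: Name -> Term -> Term -> Term
substNaive x n = foldTerm var App Lam
  where var y = if y == x then n else V y

main :: IO ()
main = do
  print (vars (Lam "x" (App (V "x") (V "y"))))          -- prints 2
  print (substNaive "x" (V "y") (Lam "y" (V "x")))      -- capture: Lam "y" (V "y")
\end{verbatim}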
Note that in the abstraction case we do not check whether the abstracted variable is different from the one being replaced, as in Barendregt's substitution definition in~\cite{bar:84}. In fact, this comparison would be pointless because, as we are using an iteration principle, we do not have access to the original abstraction body subterm. Note that we hide the universe codes on the right side of this definition by using the previously introduced \ensuremath{\lambda}-calculus\ constructors. \vspace{-.5em} \begin{figure}[h!] {\small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{substaux}} \vspace{-2em} \caption{Naive substitution auxiliary function.} \vspace{-.5em} \label{fig:substaux} \end{figure} Finally, we instantiate the \AgdaFunctionFoot{foldCtx} function with \AgdaFunctionFoot{substaux}, and its appropriate context pair to get the naive substitution operation. \\ \vspace{-1em} {\small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{naivesubst}} \vspace{-2.45em} \subsection{Primitive Induction}\label{sec:pimind} We now develop a more generic elimination rule than the fold operation defined above. This elimination rule captures proof by induction, and is based on the recursion rule given by Benke et al. in~\cite{Benke:2003}. However, our development departs from their work in the following points: Firstly, they derive an elimination rule for a simpler universe construction, based on one-sorted term algebras, and defined through signatures instead of functors. For instance, their universe does not allow the injection of previously defined datatypes, which is necessary for defining e.g. lists of natural numbers so as natural numbers become parts of the lists. Secondly, their induction principle would not pass Agda's termination checker due to reasons similar to the ones discussed for the first version of the fold operation. To define the desired function, to be called \AgdaFunctionFoot{foldInd}, we first introduce the auxiliary function \AgdaFunctionFoot{fih} (Figure~\ref{fig:fih}). \vspace{-.5em} \begin{figure}[h!] \small \ExecuteMetaData[GenericProgramming/GPBindings.tex]{primIndih} \vspace{-2em} \caption{\AgdaFunctionFoot{fih} function.} \vspace{-1em} \label{fig:fih} \end{figure} This function receives a predicate $P$\ over the fixed point of a functor $F$, and an auxiliary functor $G$ (with a similar functionality to the one used in the \AgdaFunctionFoot{foldmap} function). It returns a corresponding predicate of type \AgdaFunctionFoot{⟦ G ⟧ (μ F) → Set}. This resulting predicate represents $P$\ holding for every recursive position \AgdaDatatypeFoot{μ F} in an element of type \AgdaDatatypeFoot{⟦ G ⟧ (μ F)}. We can now present our induction principle. We proceed in a similar way as we did above for the fold function. First, we introduce the fold-map fusion function \AgdaFunctionFoot{foldmapFh} (Figure~\ref{fig:ind}). Then, we use this function to directly derive the induction principle as a recursive instance of the fold-map fusion. \begin{figure}[h!] \small \ExecuteMetaData[GenericProgramming/GPBindings.tex]{primInd} \\ \small \ExecuteMetaData[GenericProgramming/GPBindings.tex]{primInd2} \vspace{-2em} \caption{Induction principle.} \vspace{-1em} \label{fig:ind} \end{figure} We next give an example of the use of this induction principle, namely proving that the application of the function \AgdaFunctionFoot{vars} to any lambda term is greater than zero. 
We introduce the predicate \AgdaFunctionFoot{Pvars} representing the property to be proved and an auxiliary lemma \AgdaFunctionFoot{plus>0}, stating that the sum of two positive numbers is also positive. \\ \vspace{-1em} \begin{minipage}[b]{0.3\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{varsproof1} \end{minipage} \begin{minipage}[b]{0.4\linewidth} \small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{varsproof2} \end{minipage} \vspace{.5em} The proof that \AgdaFunctionFoot{Pvars} holds for every term \AgdaFunctionFoot{M} is a direct application of the induction principle. The variable case is direct, while the application case is the application of the lemma \AgdaFunctionFoot{plus>0} to the induction hypotheses. Finally, the abstraction case is a direct application of the induction hypothesis. \vspace{.5em} \begin{minipage}[t]{0.55\linewidth} {\small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{proofpvars}} \end{minipage} \begin{minipage}[t]{0.3\linewidth} {\small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{proof}} \end{minipage} \vspace{-2em} \section{Name-Swapping}\label{sec:nameswapping} \vspace{-1em} We now turn to considering a very basic primitive of name-swapping, which will be used for defining $\alpha$-conversion without a mention to substitution. This constitutes the foundation of the implementation of the general idea that principles of recursion and induction ought to be defined so as to work modulo $\alpha$-conversion, thus allowing to mimic the usual pen-an-paper conventions that allow the choice of convenient representatives of the terms involved in a definition or proof. The name-swapping operation completely traverses a data structure, swapping occurrences (either free, bound or binding) of two given names of some sort. Its implementation (Figure~\ref{fig:swap}) is similar to that of fold. \begin{figure}[h!] \small\ExecuteMetaData[GenericProgramming/Swap.tex]{swapF} \\ \small\ExecuteMetaData[GenericProgramming/Swap.tex]{swap} \vspace{-2em} \caption{Name-swapping operation.} \vspace{-1em} \label{fig:swap} \end{figure} We use an auxiliary function \AgdaFunctionFoot{swapF}, that takes functors $F$ and $G$, and traverses the $G$ structure until a recursive or embedded position is reached, from where we restart the $G$ argument with either the original recursive functor $F$\ or the embedded functor respectively. Note that this treatment differs from the one in the definition of fold, where this case is a base case. Here we must also traverse the embedded functor instance, as we are swapping all the variables in the structure, including the variables present in any embedded structure. Because of this, we cannot derive name-swapping as an instance of fold. In the variable and abstraction cases we use name-swapping over variables~\footnote{Requiring the decidable equality over names.}, denoted by the (mixfix) operator \AgdaFunctionFoot{(_∙_)ₐ_} \ as in~\cite{CopelloTSBF16}. We prove a generic lemma about the interaction between name-swapping and the iteration principle. This lemma is presented in Figure~\ref{fig:swapfoldCtx}, and states that the fold instance with context information is well-behaved with respect to name-swapping, given that the respectively folded operation is also well-behaved. Its proof goes by a direct induction on terms. This example shows how we are able to develop generic proofs over our universe with binders. \begin{figure}[h!] 
\small \ExecuteMetaData[GenericProgramming/Swap.tex]{swapfoldCtx} \vspace{-2em} \caption{Fold with context is well-behaved with respect to name-swapping.} \vspace{-.5em}\label{fig:swapfoldCtx} \end{figure} We are able to directly apply the preceding lemma to the $\lambda$-calculus case in order to prove the result in Figure~\ref{fig:swapsubst}. This states that name-swapping commutes with substitution, which is particularly useful. We introduce the operator \AgdaFunctionFoot{(_∙_)_} \ to denote the swapping of variables in terms. In the proof we use of the auxiliary lemma \AgdaFunctionFoot{lemma-substauxSwap} which states that the function \AgdaFunctionFoot{substaux}, used to define substitution, is well-behaved with respect to name-swapping. This example shows how feasible it is in our framework to instantiate generic proofs for deriving useful lemmas holding for particular instances of the generic universe. \begin{figure}[h!] \small \ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{swapsubst} \vspace{-2em} \caption{Substitution is well-behaved with respect to name-swapping.} \vspace{-1em} \label{fig:swapsubst} \end{figure} In a similar manner we introduce a generic function returning the free variables of terms, and prove several properties about its interaction with swapping, fold, and \ensuremath{\alpha}-conversion. \vspace{-1em} \section{Alpha Equivalence Relation.} \label{sec:alpha} \vspace{-.5em} \begin{figure}[h!] \small \ExecuteMetaData[GenericProgramming/Alpha.tex]{alpha} \vspace{-2em} \caption{Alpha equivalence relation.} \vspace{-.5em} \label{fig:alpha} \end{figure} In Figure~\ref{fig:alpha} we introduce the generic definition of the \ensuremath{\alpha}-equivalence relation over our universe, named \AgdaDatatypeFoot{∼α}. Its definition follows a process similar to the one used before to implement generic functions over our universe. First, we define an auxiliary relation \AgdaDatatypeFoot{∼αF}, which is inductively defined introducing an auxiliary functor $G$, used to traverse the functor $F$\ structure. For the interesting binder case, we follow an idea similar to the one used in~\cite{CopelloTSBF16}, that is, we define that two abstractions are \ensuremath{\alpha}-equivalent if there exists some list of variables $xs$, such that for any given variable $z$\ not in $xs$, the result of swapping the corresponding binders with $z$\ in the abstraction bodies is \ensuremath{\alpha}-equivalent. Note that the swapping is performed only over the sort of variables bound by this binder position, leaving any other sort of variables unchanged. We are able to prove that this is an equivalence relation, and also that it is preserved under name-swapping in a similar way as done in our previous work~\cite{CopelloTSBF16}. As we did before with name-swapping, we study how the iteration principle interacts with the introduced \ensuremath{\alpha}-equivalence relation. We begin proving that the fold operation is \ensuremath{\alpha}-compatible if it is applied to an also \ensuremath{\alpha}-compatible function. We say a function is \ensuremath{\alpha}-compatible iff it returns \ensuremath{\alpha}-convertible results when it is applied to \ensuremath{\alpha}-convertible arguments. In Figure~\ref{fig:foldalphaf} we state this lemma, whose proof goes by induction on terms. The only interesting case is the binders case, where we make use of the preservation of \ensuremath{\alpha}-equivalence under name-swapping. \begin{figure}[h!] 
\small \ExecuteMetaData[GenericProgramming/Alpha.tex]{lemma-foldfalpha} \vspace{-2em} \caption{Fold function \ensuremath{\alpha}-compatibility property.} \vspace{-1em} \label{fig:foldalphaf} \end{figure} As a direct corollary we get that the fold with context instance is \ensuremath{\alpha}-compatible in its context argument provided the folded function is also \ensuremath{\alpha}-compatible on its arguments (Figure~\ref{fig:foldalphafCtx}).\\ \begin{figure}[h!] {\small \ExecuteMetaData[GenericProgramming/Alpha.tex]{lemmafoldCtxalphactx}} \vspace{-2em} \caption{Fold context function \ensuremath{\alpha}-compatibility corollary.} \label{fig:foldalphafCtx} \vspace{-1em} \end{figure} We define other relations over our universe in a similar way as we have done for the \ensuremath{\alpha}-equivalence relation. For instance, the \AgdaDatatypeFoot{notOccurBind} relation holds if some given variable does not occur in any binder position within a term. In this relation we discard the name sort information. We do so to simplify our next development as we will explain later. We find useful to extend this relation to lists of variables, named as \AgdaDatatypeFoot{ListNotOccurBind}, which holds if all the variables in a given list do not occur in any binder position (associated with any sort) in a term. Using this relation we are able to prove the lemma stated in Figure~\ref{fig:foldalphacomp}. This lemma states that the fold with context principle is \ensuremath{\alpha}-compatible on its two arguments if the provided function is \ensuremath{\alpha}-compatible and well-behaved with respect to name-swapping. Note that this lemma extends the one given before in Figure~\ref{fig:foldalphafCtx}, although it requires extra freshness premises, and that the folded function is preserved under name-swapping. \vspace{-.5em} \begin{figure}[h!] \small \ExecuteMetaData[GenericProgramming/Alpha.tex]{lemmafoldCtxalpha} \vspace{-2em} \caption{Fold context \ensuremath{\alpha}-compatibility property.} \label{fig:foldalphacomp} \end{figure} \vspace{-2em} \subsection{Alpha Fold} \label{sec:alphafold} We are now able to introduce a fold operation that works at the level of \ensuremath{\alpha}-equivalence classes of terms, that is, it only defines \ensuremath{\alpha}-compatible functions. First, we introduce the function \AgdaFunctionFoot{bindersFreeElem} that takes a list of variables $xs$\ and an element $e$, and returns an element \ensuremath{\alpha}-equivalent to $e$ whose binders are not in the given list. This function will be useful to reproduce the BVC, which basically states that we can always pick a term with its binders fresh from a given context, which in this function is represented as a list of variables. We prove that this function has the important property of being strongly \ensuremath{\alpha}-compatible, i.e. that it returns the same result for \ensuremath{\alpha}-convertible terms. \\ \vspace{-1em} {\small\ExecuteMetaData[GenericProgramming/AlphaInduction.tex]{bindersfreealphaelem}} \vspace{-1em} Based on this function, we next directly implement the \ensuremath{\alpha}-fold principle as an instance of the fold with context function. \\ \vspace{-1em} {\small \ExecuteMetaData[GenericProgramming/AlphaInduction.tex]{foldCtxalpha} } \vspace{-1em} This iteration principle first finds a fresh term for a given context $c$, and then directly applies the fold operation over it. 
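On a concrete first-order $\lambda$-syntax the same two-stage recipe can be sketched as follows. This is only an illustrative Haskell transcription (the names \texttt{bindersFree}, \texttt{fresh} and \texttt{subst} are ours): \texttt{bindersFree} plays the role of \AgdaFunctionFoot{bindersFreeElem}, and the ``context'' is just a list of names.
\begin{verbatim}
type Name = String

data Term = V Name | App Term Term | Lam Name Term
  deriving Show

names :: Term -> [Name]
names (V x)     = [x]
names (App t u) = names t ++ names u
names (Lam x t) = x : names t

-- A name not occurring in the given list.
fresh :: [Name] -> Name
fresh used = head [v | k <- [0 :: Int ..], let v = 'x' : show k,
                       v `notElem` used]

-- Swap two names everywhere: free, bound and binding occurrences alike.
swap :: Name -> Name -> Term -> Term
swap a b = go
  where sw x | x == a    = b
             | x == b    = a
             | otherwise = x
        go (V x)     = V (sw x)
        go (App t u) = App (go t) (go u)
        go (Lam x t) = Lam (sw x) (go t)

-- Return an alpha-equivalent term whose binders avoid the given context.
bindersFree :: [Name] -> Term -> Term
bindersFree ctx = go
  where go (V x)     = V x
        go (App t u) = App (go t) (go u)
        go (Lam x t) = let t' = go t
                           z  = fresh (ctx ++ names t')
                       in Lam z (swap x z t')

-- Capture-avoiding substitution [x := n]: freshen first, then iterate.
subst :: Name -> Term -> Term -> Term
subst x n t = go (bindersFree (x : names n) t)
  where go (V y)     = if y == x then n else V y
        go (App a b) = App (go a) (go b)
        go (Lam y b) = Lam y (go b)

main :: IO ()
main = print (subst "x" (V "y") (Lam "y" (App (V "x") (V "y"))))
\end{verbatim}
Unlike \AgdaFunctionFoot{bindersFreeElem}, this naive sketch is not strongly \ensuremath{\alpha}-compatible: its choice of fresh names depends on the bound names of the particular representative, which is precisely the kind of defect the generic development rules out.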
We developed this iteration principle following a different approach from the one taken in our previous work~\cite{CopelloTSBF16}, where we renamed the binders during the fold traversal. Instead, we chose to separate these two stages in order to reuse the previously defined fold operation and its properties. We can now properly justify the name ``alpha'' given to the introduced iteration principle. Firstly, as \AgdaFunctionFoot{bindersFreeElem} returns syntactically equal terms when applied to \ensuremath{\alpha}-convertible terms, our function is trivially strongly \ensuremath{\alpha}-compatible on its last term argument. Secondly, as a direct consequence of the lemma already proved for our iteration principle \AgdaFunctionFoot{foldCtx} in Figure~\ref{fig:foldalphafCtx}, this new principle inherits its \ensuremath{\alpha}-compatibility in its context argument from \AgdaFunctionFoot{foldCtx}, provided that the received function is also \ensuremath{\alpha}-compatible on its arguments. Thus, the presented iteration principle works at the level of \ensuremath{\alpha}-equivalence classes when the given function works at the same level. Now we can derive the capture-avoiding substitution operation for the lambda calculus example by a direct application of the introduced \ensuremath{\alpha}-fold principle. In fact this definition is exactly the same as the one given before for the naive substitution, but now using the \ensuremath{\alpha}-fold operation instead of the fold one. \\ \vspace{-1em} {\small\ExecuteMetaData[GenericProgramming/Examples/LambdaCalculus.tex]{subst} } \vspace{-1em} Substitution lemmas stating that substitution is well-behaved with respect to \ensuremath{\alpha}-conversion are inherited from the \ensuremath{\alpha}-compatibility of the iteration principle, only requiring the \ensuremath{\alpha}-compatibility property of the \AgdaFunctionFoot{substaux} function. As this auxiliary function is not recursively defined, this proof is just a simple case analysis, while the proof involving the recursive data-type traversal is resolved at the generic level. The next lemma, shown in Figure~\ref{fig:foldfoldalpha}, relates the presented \ensuremath{\alpha}-fold principle with the previously defined one, giving sufficient conditions under which the two principles return \ensuremath{\alpha}-convertible terms. First, the folded function must be \ensuremath{\alpha}-compatible on its two arguments, and also well-behaved with respect to name-swapping. Secondly, we need a freshness premise stating that the free variables in the context do not occur bound in the applied term. \vspace{-.5em} \begin{figure}[h!] \small\ExecuteMetaData[GenericProgramming/AlphaInduction.tex]{lemmafoldCtxalpha} \vspace{-2em} \caption{Fold and \ensuremath{\alpha}-fold relations.} \vspace{-.5em} \label{fig:foldfoldalpha} \end{figure} We can instantiate this lemma to the $\lambda$-calculus to get sufficient conditions under which the two presented substitution operations return \ensuremath{\alpha}-convertible results. Its proof requires two lemmas about the \AgdaFunctionFoot{substaux} function, one stating that it is \ensuremath{\alpha}-compatible, and the other one stating that it is well-behaved under name-swapping. Both lemmas were already used in previous proofs. \vspace{-1em} \subsection{Alpha Induction Principle} In this section we generalise previous works~\cite{CopelloTSBF16,Copello:LSFA17}, developing an \ensuremath{\alpha}-induction principle for \ensuremath{\alpha}-compatible predicates. 
Our presentation introduces an explicit premise about the \ensuremath{\alpha}-compatibility of the predicate being proved, which in general is not explicitly mentioned in informal developments, but is certainly required to ensure that the predicate in question is actually about \emph{abstract} terms, i.e. not dependent on the choice of bound names. Hence, to prove some property over any term, this principle requires the user to prove the property only for terms with fresh enough binders, i.e. distinct from the variables in some given context, as is usually done under the BVC. We derive this principle following a procedure similar to the one used to infer the principle in Section~\ref{sec:pimind}. We show below the interesting recursive and binder cases, the remaining ones being equivalent to those presented in Figure~\ref{fig:fih}. We define an auxiliary function \AgdaFunctionFoot{fihalpha}, which transforms a given predicate $P$\ over a datatype \AgdaDatatypeFoot{μ F} to a predicate over the datatype \AgdaDatatypeFoot{⟦ G ⟧ (μ F)}. This predicate states that $P$\ holds for every recursive position \AgdaDatatypeFoot{μ F} in a datatype \AgdaDatatypeFoot{⟦ G ⟧ (μ F)}. Besides, it adds freshness premises with respect to some given list of variables $xs$\ in the recursive and binder cases of its definition: in the binder case, it states that the binder is not in the given list $xs$, while, in the recursive case, it states that no variable in $xs$\ occurs in a binder position in the recursive subterm $e$. \\ \vspace{-1em} {\small \ExecuteMetaData[GenericProgramming/AlphaInduction.tex]{alphainductionhypotheses} \small \ExecuteMetaData[GenericProgramming/AlphaInduction.tex]{alphainductionhypothesescases} } \vspace{-1em} We state this principle in Figure~\ref{fig:alphaind}. Its proof is similar to the \ensuremath{\alpha}-fold principle's proof. We first use the function \AgdaFunctionFoot{bindersFreeElem} (from Section~\ref{sec:alphafold}) over the parameter $e$\ and the freshness context $xs$\ to get an \ensuremath{\alpha}-equivalent term $e'$\ with binders not occurring in the list $xs$. Then we apply the primitive induction principle (Figure~\ref{fig:ind}) over the fresh term $e'$\ to prove the following predicate $P'$: \vspace{-.3em} \begin{center} $P'(x) \equiv (\forall c \in xs \Rightarrow c\ notOccurBind\ x) \Rightarrow P(x)$ \end{center} Finally, we apply the proof of predicate $P'$\ to the term $e'$ and its freshness hypothesis to obtain that $P\ e'$\ must hold. Hence, as the predicate $P$\ is \ensuremath{\alpha}-compatible, and $e$ \ensuremath{\sim_\alpha} $e'$, we get that $P\ e$\ should also hold. \begin{figure}[h!] \small \ExecuteMetaData[GenericProgramming/AlphaInduction.tex]{alphainductionprinciple} \vspace{-2em} \caption{Alpha induction principle.} \vspace{-.5em} \label{fig:alphaind} \end{figure} The proof of $P'$\ is done using an auxiliary lemma which recursively reconstructs a proof of \AgdaFunctionFoot{fihalpha} $P\ xs\ e$\ given that \AgdaFunctionFoot{fih} $P\ xs\ e$\ holds and that the binders of $e$\ do not occur in the context $xs$. This proof is just a generalisation of the one already presented in~\cite{Copello:LSFA17} for an equivalent \ensuremath{\alpha}-induction principle for the \ensuremath{\lambda}-calculus. In this previous work we were also able to prove the Church-Rosser theorem for the \ensuremath{\lambda}-calculus\ using this equivalent induction principle. 
Therefore, we conjecture that, following the same procedure, we would be able to obtain the confluence result for \ensuremath{\beta}-reduction within our generic framework. However, in the next section we sketch another approach to prove the substitution composition lemma, a crucial lemma in the confluence proof. \vspace{-1em} \section{Codification of a BVC Proof Technique.} \label{sec:bvc} In Figure~\ref{fig:alphaproof} we show a result that validates the BVC and usual practices in common pen-and-paper proofs within our generic framework. It states that for any \ensuremath{\alpha}-compatible predicate $P$, we can prove $P\ e$ for any term $e$ by just proving it for terms whose binders are all different from their own free variables and from the variables in an arbitrary list $xs$. As with the previous induction principle, this technique requires the \ensuremath{\alpha}-compatibility of the predicate being proved. \vspace{-.5em} \begin{figure}[h!] \small\ExecuteMetaData[GenericProgramming/AlphaInduction.tex]{alphaproof} \vspace{-2.5em} \caption{BVC proof principle.} \vspace{-.5em} \label{fig:alphaproof} \end{figure} To prove $P\ e$ for arbitrary $e$, we proceed as follows: We first find a fresh enough term $e'$\ such that \ $e' \ensuremath{\sim_\alpha} e$ \ using the function \AgdaFunctionFoot{bindersFreeElem}. Then, we can use the hypothesis for the fresh term $e'$\ to derive that $P\ e'$ \ holds. Finally, $P\ e$ must also hold, as $P$ is \ensuremath{\alpha}-compatible. We do not show the code of the proof, since it is similar to others previously presented. Next we illustrate the use of this result by proving the substitution composition lemma for System F. First, we prove this lemma for the naive substitution operation; the property to be proved is introduced below. Note that an extra freshness premise stating that $x$\ does not occur bound in the term $L$ is required, since we use the naive substitution. \\ \vspace{-1em} {\small \ExecuteMetaData[GenericProgramming/Examples/SystemF.tex]{substnaivecompositionpredicate} } \vspace{-1em} The proof is done using the structural induction principle (Figure~\ref{fig:ind}). We show below the interesting abstraction case: {\small\ExecuteMetaData[GenericProgramming/Examples/SystemF.tex]{substncompositionabstractioncase}} \vspace{-1em} This equational proof is constructed following the usual pen-and-paper practice: First we push the substitutions inside the abstraction. Then, by the induction hypothesis, we know that the compositions of substitutions in the abstraction bodies are \ensuremath{\alpha}-convertible, and hence we are able to prove that the entire abstractions are \ensuremath{\alpha}-convertible too, using the auxiliary lemma \AgdaFunctionFoot{lemma∼+B}. Finally, we push the substitutions back outside the abstraction to conclude the proof. Now we prove the substitution composition lemma for the capture-avoiding substitution operation using the introduced \ensuremath{\alpha}-proof technique. We begin by defining the functor \AgdaDatatypeFoot{TreeTermF} describing a triple of terms. Then, we introduce the predicate \AgdaDatatypeFoot{PSComp} over triples, stating the composition lemma for the substitution. {\small\ExecuteMetaData[GenericProgramming/Examples/SystemF.tex]{substcompositionpredicate} } \vspace{-.5em} We prove that \AgdaDatatypeFoot{PSComp} is \ensuremath{\alpha}-compatible with respect to triples of terms by a direct equational proof using essentially the previous substitution lemmas. In Figure~\ref{fig:substcompproof} we show the core of the proof. 
It uses the preceding substitution lemmas to replace the classical substitution operations with the naive ones. This can be done because we have freshness premises stating that, in the introduced context of triples, all binders are different from the free variables in the involved terms, and also from the variables $x$ and $y$. Finally, we work in very much the same way as we did at the beginning to recover the classical substitutions from the naive ones. There are many auxiliary lemmas and boilerplate code concerning the freshness premises involved in the last proof, which we do not show in this presentation. These are hidden inside auxiliary lemmas such as \AgdaFunctionFoot{y:fvL-NB-M[x≔N]ₙ} and \AgdaFunctionFoot{y:fvL-NB-N}, which occur in the proof. The first of these lemmas, for instance, proves that neither the variable $y$\ nor the free variables of $L$\ occur bound in $M[x≔N]_n$, which is easy to verify from the freshness premises. However, we believe further work is necessary to automatise some of these proofs, or even to rewrite the freshness relations in order to ease their handling. Finally, we can use the introduced \ensuremath{\alpha}-proof principle with the previous proof obligations to finish the proof. Note how, by applying the \ensuremath{\alpha}-proof technique to a triple of terms, we were able to obtain sufficient freshness premises to develop, in a direct manner, a proof similar in structure to pen-and-paper ones. This is possible because in our generic framework we can state the \ensuremath{\alpha}-equivalence of any structure (triples in this case), and not just language terms. \vspace{-1ex} \begin{figure}[h!] {\small \ExecuteMetaData[GenericProgramming/Examples/SystemF.tex]{substitutioncompositionproof}}\vspace{-2em} \vspace{-1em} \caption{Proof of the substitution composition lemma.} \vspace{-1em} \label{fig:substcompproof}\end{figure} \section{Conclusions} \label{sec:concl} We address the formalisation of a general first-order named syntax with multi-sorted binders by applying a combination of generic programming and nominal techniques to derive fold operations, name-swapping, the \ensuremath{\alpha}-conversion relation, and \ensuremath{\alpha}-induction/iteration principles for any language abstract syntax with binders. We derive the \ensuremath{\lambda}-calculus\ and System F as instances of the introduced general framework. For these examples we derive both the naive and the capture-avoiding substitutions as direct instances of the corresponding fold and \ensuremath{\alpha}-fold principles. We directly inherit the classical substitution lemmas for \ensuremath{\alpha}-conversion, and the good behaviour of substitution under name-swapping, from fold properties already proved at the generic level. We prove a lemma stating sufficient conditions under which the fold and \ensuremath{\alpha}-fold functions return \ensuremath{\alpha}-convertible results. Therefore, as substitution operations are direct instances of these iteration principles, we get in an almost free manner a result about the relation between the naive and the capture-avoiding substitution operations for the \ensuremath{\lambda}-calculus\ and System F. This result is particularly useful in the last proof, which is conducted using the introduced \ensuremath{\alpha}-proof technique, which enables us to mimic the BVC in a generic setting. Our work uses generic programming techniques to develop the meta-theory of abstract syntax with binders in a general way, as in related works. 
However, we choose to maintain names for binders, as is usually done in informal practice. On the other hand, contrary to the historical standpoint, and following ideas in~\cite{GP02:newapproach}, we give \ensuremath{\alpha}-conversion a more fundamental role than that of the definition of substitution. Indeed, we verify that name-swapping is powerful enough to define a theory of structural induction/recursion modulo \ensuremath{\alpha}\ in a general way. We generalise the \ensuremath{\alpha}-recursion/induction principles developed in \cite{CopelloTSBF16,Copello:LSFA17}. In these previous works we renamed binders within the fold traversal. Instead, in this work we separate these stages, managing to reuse the fold operation and its properties. We also present an \ensuremath{\alpha}-proof technique which is not based on an induction principle as in the previous works, and thus can be used to prove properties over relations or composite datatypes. This is the case in the proof of the substitution composition lemma, where it is used to obtain freshness premises over a triple of terms. This is possible because of the generic character of the approach, which allows us to state \ensuremath{\alpha}-equivalence or freshness premises over any composite datatype; that is, we are able to state freshness in any mathematical context, thus reflecting the BVC more accurately. Generic programming techniques are capable of further improvements, such as the one considered in~\cite{Delaware:2013:MLC}, where a more modular assembly is introduced, enabling a more structured approach to the reuse of meta-theory formalisations through the composition of modular inductive definitions and proofs. The present work does not directly support such modular reuse, but it would be interesting to explore this improvement. In~\cite{DBLP:journals/corr/AnandM17} Reynolds' parametricity theory is used to prove the \ensuremath{\alpha}-compatibility property of a big-step semantics using reflection within Coq. The authors introduce an interface for lambda calculus terms and, by a formalisation of Reynolds' parametricity, prove that polymorphic functions (over this interface) applied to related inputs produce related outputs. Then, given two concrete implementations of their lambda terms interface, one with de Bruijn syntax (where \ensuremath{\alpha}-convertible terms are syntactically equal) and a nominal one, they obtain as a ``free theorem'' that, on \ensuremath{\alpha}-convertible inputs, the big-step function produces \ensuremath{\alpha}-convertible outputs in the nominal syntax. It remains as future work to study how our generic framework could be used to internalise this kind of ``free theorem'' by introducing a de Bruijn interpretation of our universe, and then translating results between interpretations. \\ \noindent \textbf{Acknowledgements.} We gratefully acknowledge DoD support under award FA9550-16-1-0082 (MURI Program). \vspace{-1em} \bibliographystyle{eptcs}
\section{Introduction} In neural language modelling, a neural network estimates a distribution over sequences of words or characters that belong to a given language \citep{bengio:2003}. In neural machine translation, the network estimates a distribution over sequences in the target language conditioned on a given sequence in the source language. In the latter case the network can be thought of as composed of two sub-networks, a \emph{source network} that processes the source sequence into a representation and a \emph{target network} that uses the representation of the source to generate the target sequence \citep{kalchbrenner13emnlp}. Recurrent neural networks (RNN) are powerful sequence models \citep{hochreiter1997long} and are widely used in language modelling \citep{DBLP:conf/interspeech/MikolovKBCK10}, yet they have a potential drawback. RNNs have an inherently serial structure that prevents them from being run in parallel along the sequence length. Forward and backward signals in an RNN also need to traverse the full distance of the serial path to reach from one point to another in the sequence. The larger the distance, the harder it is to learn dependencies between the points \citep{chapter-gradient-flow-2001}. A number of neural architectures have been proposed for modelling translation \citep{kalchbrenner13emnlp, DBLP:conf/nips/SutskeverVL14, DBLP:journals/corr/ChoMGBSB14,DBLP:journals/corr/BahdanauCB14,DBLP:journals/corr/KalchbrennerDG15,kaiser2016active}. These networks either have running time that is super-linear in the length of the source and target sequences, or they process the source sequence into a constant-size representation, burdening the model with a memorization step. Both of these drawbacks grow more severe as the length of the sequences increases. We present a neural translation model, the \emph{ByteNet}, and a neural language model, the \emph{ByteNet Decoder}, that aim at addressing these drawbacks. The ByteNet uses convolutional neural networks with dilation for both the source network and the target network. The ByteNet connects the source and target networks via stacking and unfolds the target network dynamically to generate variable-length output sequences. We view the ByteNet as an instance of a wider family of sequence-mapping architectures that stack the sub-networks and use dynamic unfolding. The sub-networks themselves may be convolutional or recurrent. The ByteNet with recurrent sub-networks may be viewed as a strict generalization of the RNN Enc-Dec network \citep{DBLP:conf/nips/SutskeverVL14,DBLP:journals/corr/ChoMGBSB14} (Sect.~\ref{modelcomp}). The ByteNet Decoder has the same architecture as the target network in the ByteNet. In contrast to neural language models based on RNNs \citep{DBLP:conf/interspeech/MikolovKBCK10} or on feed-forward networks \citep{bengio:2003,Arisoy:2012:DNN:2390940.2390943}, the ByteNet Decoder is based on a novel convolutional structure designed to capture a very long range of past inputs. The ByteNet has a number of beneficial computational and learning properties. From a computational perspective, the network has a running time that is \emph{linear} in the length of the source and target sequences (up to a constant $c \approx \log d$ where $d$ is the size of the desired dependency field). 
The computation in the source network during training and decoding and in the target network during training can also be run efficiently \emph{in parallel} along the strings -- by definition this is not possible for a target network during decoding (Sect.~\ref{NTM}). From a learning perspective, the representation of the source string in the ByteNet is \emph{resolution preserving}; the representation sidesteps the need for memorization and allows for maximal bandwidth between the source and target networks. In addition, the distance traversed by forward and backward signals between any input and output tokens in the networks corresponds to the fixed depth of the networks and is largely independent of the distance between the tokens. Dependencies over large distances are connected by short paths and can be learnt more easily. We deploy ByteNets on raw sequences of characters. We evaluate the ByteNet Decoder on the Hutter Prize Wikipedia task; the model achieves 1.33 bits/character showing that the convolutional language model is able to outperform the previous best results obtained with recurrent neural networks. Furthermore, we evaluate the ByteNet on raw character-level machine translation on the English-German WMT benchmark. The ByteNet achieves a score of 18.9 and 21.7 BLEU points on, respectively, the 2014 and the 2015 test sets; these results approach the best results obtained with other neural translation models that have quadratic running time \citep{DBLP:conf/acl/ChungCB16,wu2016}. We use gradient-based visualization \citep{DBLP:journals/corr/SimonyanVZ13} to reveal the latent structure that arises between the source and target sequences in the ByteNet. We find the structure to mirror the expected word alignments between the source and target sequences. \begin{figure*} \vspace{-1cm} \centering \includegraphics[scale=.5]{architecture/detailed} \caption{The architecture of the ByteNet. The target network (blue) is stacked on top of the source network (red). The target network generates the variable-length target sequence using dynamic unfolding. The ByteNet Decoder is the target network of the ByteNet.} \label{architecture} \end{figure*} \section{Neural Translation Model} \label{NTM} Given a string $\vec{s}$ from a source language, a neural translation model estimates a distribution $p(\vec{t}|\vec{s})$ over strings $\vec{t}$ of a target language. The distribution indicates the probability of a string $\vec{t}$ being a translation of $\vec{s}$. A product of conditionals over the tokens in the target $\vec{t} = t_0,...,t_N$ leads to a tractable formulation of the distribution: \begin{equation} p(\vec{t} | \vec{s}) = \prod_{i=0}^{N}p(t_{i} | t_{<i},\vec{s}) \label{eq:likelihood} \end{equation} Each conditional factor expresses complex and long-range dependencies among the source and target tokens. The strings are usually sentences of the respective languages; the tokens are words or, as in the present case, characters. The network that models $p(\vec{t}|\vec{s})$ is composed of two sub-networks, a source network that processes the source string into a representation and a target network that uses the source representation to generate the target string \citep{kalchbrenner13emnlp}. The target network functions as a language model for the target language. A neural translation model has some basic properties. The target network is autoregressive in the target tokens and the network is sensitive to the ordering of the tokens in the source and target strings. 
It is also useful for the model to be able to assign a non-zero probability to any string in the target language and retain an open vocabulary. \subsection{Desiderata} \label{desiderata} Beyond these basic properties the definition of a neural translation model does not determine a unique neural architecture, so we aim at identifying some desiderata. (i) The running time of the network should be \emph{linear} in the length of the source and target strings. This is more pressing the longer the strings or when using characters as tokens. The use of operations that run \emph{in parallel} along the sequence length can also be beneficial for reducing computation time. (ii) The size of the source representation should be linear in the length of the source string, i.e. it should be \emph{resolution preserving}, and not have constant size. This is to avoid burdening the model with an additional memorization step before translation. In more general terms, the size of a representation should be proportional to the amount of information it represents or predicts. A related desideratum concerns the path traversed by forward and backward signals in the network between a (source or target) input token and a predicted output token. Shorter paths whose length is decoupled from the sequence distance between the two tokens have the potential to better propagate the signals \citep{chapter-gradient-flow-2001} and to let the network learn long-range dependencies more easily. \begin{figure*} \vspace{-1cm} \centering \includegraphics[width=0.8\linewidth]{conceptualh.pdf} \caption{Dynamic unfolding in the ByteNet architecture. At each step the target network is conditioned on the source representation for that step, or simply on no representation for steps beyond the source length. The decoding ends when the target network produces an end-of-sequence (EOS) symbol.} \label{dynunf} \end{figure*} \section{ByteNet} We aim at building neural language and translation models that capture the desiderata set out in Sect.~\ref{desiderata}. The proposed ByteNet architecture is composed of a target network that is \emph{stacked} on a source network and generates variable-length outputs via \emph{dynamic unfolding}. The target network, referred to as the ByteNet Decoder, is a language model that is formed of one-dimensional convolutional layers that use dilation (Sect.~\ref{dilation}) and are masked (Sect.~\ref{masked}). The source network processes the source string into a representation and is formed of one-dimensional convolutional layers that use dilation but are \emph{not} masked. Figure~\ref{architecture} depicts the two networks and their combination in the ByteNet. \subsection{Dynamic Unfolding} To accommodate source and target sequences of different lengths, the ByteNet uses dynamic unfolding. The source network builds a representation that has the same width as the source sequence. At each step the target network takes as input the corresponding column of the source representation until the target network produces the end-of-sequence symbol. The source representation is zero-padded on the fly: if the target network produces symbols beyond the length of the source sequence, the corresponding conditioning column is set to zero. In the latter case the predictions of the target network are conditioned on source and target representations from previous steps. Figure~\ref{dynunf} represents the dynamic unfolding process. 
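The following Python fragment is a schematic sketch of this mechanism, not the actual model code: the decoder step function, the end-of-sequence symbol and the maximum number of extra steps are placeholder assumptions.

{\small\begin{verbatim}
# Illustrative sketch of dynamic unfolding.
import numpy as np

def dynamic_unfold(source_repr, decoder_step, EOS, max_extra=500):
    # source_repr: array of shape [source_length, d] from the source network.
    d = source_repr.shape[1]
    target = []
    for i in range(source_repr.shape[0] + max_extra):
        # Step i is conditioned on column i of the source representation;
        # beyond the source length the conditioning column is zero-padded.
        cond = source_repr[i] if i < source_repr.shape[0] else np.zeros(d)
        token = decoder_step(cond, target)  # autoregressive in the tokens so far
        if token == EOS:
            break
        target.append(token)
    return target
\end{verbatim}}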
\subsection{Masked One-dimensional Convolutions} \label{masked} Given a target string $\vec{t} = t_0,...,t_{n}$ the target network embeds each of the first $n$ tokens $t_0,...,t_{n-1}$ via a look-up table (the $n$ tokens $t_1,...,t_{n}$ serve as targets for the predictions). The resulting embeddings are concatenated into a tensor of size $1 \times n \times 2d$ where $d$ is the number of inner channels in the network. The target network applies masked one-dimensional convolutions \citep{van2016pixel} to the embedding tensor that have a masked kernel of size $k$. The masking ensures that information from future tokens does not affect the prediction of the current token. The operation can be implemented either by zeroing out some of the weights on a wider kernel of size $2k-1$ or by padding the output map. \begin{figure*}[t] \vspace{-1cm} \centering \hspace{2.2cm} \begin{subfigure}{} \includegraphics[scale=.25]{relublock} \end{subfigure} \hfill \begin{subfigure}{} \includegraphics[scale=.25]{block_mu} \end{subfigure} \hspace{2cm} \caption{Left: Residual block with ReLUs \citep{resnets} adapted for decoders. Right: Residual Multiplicative Block adapted for decoders and corresponding expansion of the MU \citep{vpn}.} \label{fig:residual} \vspace{-0.5cm} \end{figure*} \subsection{Dilation} \label{dilation} The masked convolutions use dilation to increase the receptive field of the target network~\citep{DBLP:journals/corr/ChenPKMY14,DBLP:journals/corr/YuK15}. Dilation makes the receptive field grow exponentially in terms of the depth of the networks, as opposed to linearly. We use a dilation scheme whereby the dilation rates are doubled every layer up to a maximum rate $r$ (for our experiments $r=16$). The scheme is repeated multiple times in the network always starting from a dilation rate of 1~\citep{wavenet,vpn}. \subsection{Residual Blocks} Each layer is wrapped in a residual block that contains additional convolutional layers with filters of size 1 \citep{resnets}. We adopt two variants of the residual blocks, one with ReLUs, which is used in the machine translation experiments, and one with Multiplicative Units \citep{vpn}, which is used in the language modelling experiments. Figure~\ref{fig:residual} diagrams the two variants of the blocks. \subsection{Sub-Batch Normalization} We introduce a modification to Batch Normalization (BN) \citep{DBLP:conf/icml/IoffeS15} in order to make it applicable to target networks and decoders. Standard BN computes the mean and variance of the activations of a given convolutional layer along the batch, height, and width dimensions. In a decoder, the standard BN operation at training time would average activations along all the tokens in the input target sequence, and the BN output for each target token would incorporate the information about the tokens that follow it. This breaks the conditioning structure of Eq.~\ref{eq:likelihood}, since the succeeding tokens are yet to be predicted. To circumvent this issue, we present Sub-Batch Normalization (SubBN). It is a variant of BN, where a batch of training samples is split into two parts: the \emph{main} batch and the \emph{auxiliary} batch. For each layer, the mean and variance of its activations are computed over the auxiliary batch, but are used for the batch normalization of the main batch. At the same time, the loss is computed only on the predictions of the main batch, ignoring the predictions from the auxiliary batch. 
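As an illustration of the Sub-BN computation, the sketch below splits a batch of activations into a main half and an auxiliary half, estimates the normalization statistics on the auxiliary half only, and applies them to both halves. The equal split, the tensor shapes and the handling of the auxiliary activations are assumptions made for the example.

{\small\begin{verbatim}
# Illustrative NumPy sketch of Sub-Batch Normalization at training time.
import numpy as np

def sub_batch_norm(x, gamma, beta, eps=1e-5):
    # x: activations of shape [batch, length, channels]; the first half of the
    # batch is the main batch and the second half is the auxiliary batch.
    main, aux = np.split(x, 2, axis=0)
    mean = aux.mean(axis=(0, 1), keepdims=True)  # statistics from the auxiliary batch
    var = aux.var(axis=(0, 1), keepdims=True)
    norm = lambda z: gamma * (z - mean) / np.sqrt(var + eps) + beta
    # Both halves are normalized with the auxiliary statistics, but the loss is
    # later computed only on the predictions of the main batch.
    return norm(main), norm(aux)
\end{verbatim}}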
\subsection{Bag of Character $n$-Grams} The tokens that we adopt correspond to characters in the input sequences. An efficient way to increase the capacity of the models is to use input embeddings not just for single tokens, but also for $n$-grams of adjacent tokens. At each position we sum the embeddings of the respective $n$-grams for $1\leq n \leq 5$ component-wise into a single vector. Although the portion of seen $n$-grams decreases as the value of $n$ increases -- a cutoff threshold is chosen for each $n$ -- all characters ($n$-grams for $n=1$) are seen during training. This fallback structure provided by the bag of character $n$-grams guarantees that at any position the input given to the network is always well defined. The length of the sequences corresponds to the number of characters and does not change when using bags of $n$-grams. \begin{figure*}[t] \vspace{-1cm} \centering \hspace{2.4cm} \begin{subfigure}{} \includegraphics[scale=0.5]{BytenetCR} \label{fig:mt_decoder_block} \end{subfigure} \hfill \begin{subfigure}{} \includegraphics[scale=0.5]{BytenetRR} \label{fig:lm_decoder_block} \end{subfigure} \hspace{2.4cm} \caption{Recurrent ByteNet variants of the ByteNet architecture. Left: Recurrent ByteNet with convolutional source network and recurrent target network. Right: Recurrent ByteNet with bidirectional recurrent source network and recurrent target network. The latter architecture is a strict generalization of the RNN Enc-Dec network. } \label{recurrentBytenet} \end{figure*} \section{Model Comparison} \label{modelcomp} In this section we analyze the properties of various previously and currently introduced neural translation models. For the sake of a more complete analysis, we also consider two recurrent variants in the ByteNet family of architectures, which we do not evaluate in the experiments. \subsection{Recurrent ByteNets} The ByteNet is composed of two stacked source and target networks where the top network dynamically adapts to the output length. This way of combining source and target networks is not tied to the networks being strictly convolutional. We may consider two variants of the ByteNet that use recurrent networks for one or both of the sub-networks (see Figure~\ref{recurrentBytenet}). The first variant replaces the convolutional target network with a recurrent one that is similarly stacked and dynamically unfolded. The second variant replaces the convolutional source network with a recurrent network, namely a bidirectional RNN. The target RNN is placed on top of the bidirectional source RNN. We can see that the RNN Enc-Dec network \citep{DBLP:conf/nips/SutskeverVL14,DBLP:journals/corr/ChoMGBSB14} is a Recurrent ByteNet where all connections between source and target -- except for the first one that connects $s_0$ and $t_0$ -- have been severed. The Recurrent ByteNet is thus a generalization of the RNN Enc-Dec and, modulo the type of sequential architecture, so is the ByteNet. 
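Returning to the bag of character $n$-grams described above, the sketch below illustrates one possible way of forming the per-position input vectors by summing the embeddings of the $n$-grams ($1\leq n \leq 5$) ending at each character position. The embedding tables and the alignment of $n$-grams to positions are assumptions of the example rather than details taken from the model.

{\small\begin{verbatim}
# Illustrative sketch of bag-of-character-n-gram input embeddings.
import numpy as np

def bag_of_ngram_embeddings(chars, tables, d, max_n=5):
    # chars: input string; tables[n]: dict from an n-gram to a vector of size d.
    out = np.zeros((len(chars), d))
    for i in range(len(chars)):
        for n in range(1, max_n + 1):
            if i - n + 1 < 0:
                continue
            gram = chars[i - n + 1 : i + 1]
            vec = tables[n].get(gram)  # pruned or unseen n-grams contribute nothing
            if vec is not None:
                out[i] += vec          # component-wise sum into a single vector
    return out                         # shape [len(chars), d]: length is unchanged
\end{verbatim}}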
\subsection{Comparison of Properties} In our comparison we consider the following neural translation models: the Recurrent Continuous Translation Model (RCTM) 1 and 2 \citep{kalchbrenner13emnlp}; the RNN Enc-Dec \citep{DBLP:conf/nips/SutskeverVL14,DBLP:journals/corr/ChoMGBSB14}; the RNN Enc-Dec Att with the attentional pooling mechanism \citep{DBLP:journals/corr/BahdanauCB14} of which there are a few variations \citep{luong-pham-manning:2015:EMNLP,chung2016hierarchical}; the Grid LSTM translation model \citep{DBLP:journals/corr/KalchbrennerDG15} that uses a multi-dimensional architecture; the Extended Neural GPU model \citep{kaiser2016active} that has a convolutional RNN architecture; the ByteNet and the two Recurrent ByteNet variants. The two grounds of comparison are the desiderata (i) and (ii) set out in Sect~\ref{desiderata}. We separate the computation time desideratum (i) into three columns. The first column indicates the time complexity of the network as a function of the length of the sequences and is denoted by \textbf{Time}. The other two columns \textbf{Net$_\mathbf{S}$} and \textbf{Net$_\mathbf{T}$} indicate, respectively, whether the source and the target network uses a convolutional structure (CNN) or a recurrent one (RNN); a CNN structure has the advantage that it can be run in parallel along the length of the sequence. We also break the learning desideratum (ii) into three columns. The first is denoted by \textbf{RP} and indicates whether the source representation in the network is resolution preserving. The second \textbf{Path$_\mathbf{S}$} column corresponds to the length in layer steps of the shortest path between a source token and any output target token. Similarly, the third \textbf{Path$_\mathbf{T}$} column corresponds to the length of the shortest path between an input target token and any output target token. Shorter paths lead to better forward and backward signal propagation. Table~\ref{properties} summarizes the properties of the models. The ByteNet, the Recurrent ByteNets and the RNN Enc-Dec are the only networks that have linear running time (up to the constant $c$). The RNN Enc-Dec, however, does not preserve the source sequence resolution, a feature that aggravates learning for long sequences such as those in character-level machine translation \citep{DBLP:journals/corr/LuongM16}. The RCTM 2, the RNN Enc-Dec Att, the Grid LSTM and the Extended Neural GPU do preserve the resolution, but at a cost of a quadratic running time. The ByteNet stands out also for its \textbf{Path} properties. The dilated structure of the convolutions connects any two source or target tokens in the sequences by way of a small number of network layers corresponding to the depth of the source or target networks. For character sequences where learning long-range dependencies is important, paths that are sub-linear in the distance are advantageous. 
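A back-of-the-envelope sketch makes the path argument concrete: with a dilation scheme that doubles the rate up to a maximum and is then repeated, the receptive field grows roughly exponentially with depth, so the number of layers separating two tokens grows only logarithmically with their distance. The kernel size, maximum rate and number of repetitions below are example values rather than the exact experimental configuration.

{\small\begin{verbatim}
# Illustrative receptive-field computation for stacked dilated convolutions.
def receptive_field(kernel_size=3, max_rate=16, repeats=5):
    dilations = []
    for _ in range(repeats):
        r = 1
        while r <= max_rate:
            dilations.append(r)
            r *= 2
    # Each layer extends the field by (kernel_size - 1) * dilation positions.
    field = 1 + sum((kernel_size - 1) * r for r in dilations)
    return len(dilations), field

print(receptive_field())  # a few tens of layers already span hundreds of positions
\end{verbatim}}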
\begin{table} \vspace{-1cm} \small \centering \bgroup \def\arraystretch{1.5} \begin{tabular}{l c c c c c c} \toprule \textbf{Model} & \textbf{Net$_\mathbf{S}$} & \textbf{Net$_\mathbf{T}$} & \textbf{Time} & \textbf{RP} & \textbf{Path$_\mathbf{S}$} & \textbf{Path$_\mathbf{T}$} \\ \hline \multicolumn{1}{l}{RCTM 1 } & CNN & RNN & $|S||S|+|T|$ & no & $|S|$ & $|T|$ \\ \multicolumn{1}{l}{RCTM 2 } & CNN & RNN & $|S||S| + |T|$ & yes & $|S|$ & $|T|$ \\ \multicolumn{1}{l}{RNN Enc-Dec } & RNN & RNN & $|S| + |T|$ & no & $|S|+|T|$ & $|T|$ \\ \multicolumn{1}{l}{RNN Enc-Dec Att } & RNN & RNN & $|S||T|$ & yes & 1 & $|T|$ \\ \multicolumn{1}{l}{Grid LSTM } & RNN & RNN & $|S||T|$ & yes & $|S| + |T|$ & $|S|+|T|$ \\ \multicolumn{1}{l}{Extended Neural GPU } & cRNN & cRNN & $|S||S| + |S||T|$ & yes & $|S|$ & $|T|$ \\ \hline \multicolumn{1}{l}{Recurrent ByteNet} & RNN & RNN & $|S|+|T|$ & yes & $\max(|S|,|T|)$ & $|T|$ \\ \multicolumn{1}{l}{Recurrent ByteNet} & CNN & RNN & $c|S|+|T|$ & yes & $c$ & $|T|$ \\ \multicolumn{1}{l}{ByteNet} & CNN & CNN & $c|S|+c|T|$ & yes & $c$ & $c$ \\ \bottomrule \end{tabular} \egroup \caption{ Properties of various previously and presently introduced neural translation models. The ByteNet models both have linear running time and are resolution preserving. } \label{properties} \end{table} \section{Character Prediction} We first evaluate the ByteNet Decoder separately on a character-level language modelling benchmark. We use the Hutter Prize version of the Wikipedia dataset and follow the standard split where the first 90 million bytes are used for training, the next 5 million bytes are used for validation and the last 5 million bytes are used for testing \citep{chung2015gated}. The total number of characters in the vocabulary is 205. The ByteNet Decoder that we use for the result has 25 residual blocks split into five sets of five blocks each; for the five blocks in each set the dilation rates are, respectively, 1, 2, 4, 8 and 16. The masked kernel has size 3. This gives a receptive field of 315 characters. The number of hidden units $d$ is 892. For this task we use residual multiplicative blocks and Sub-BN (\figref{fig:residual} Right); we do not use bags of character $n$-grams for the inputs. For the optimization we use Adam \citep{DBLP:journals/corr/KingmaB14} with a learning rate of $10^{-2}$ and a weight decay term of $10^{-5}$. We do not reduce the learning rate during training. At each step we sample a batch of sequences of 515 characters each, use the first 315 characters as context and predict only the latter 200 characters. Table~\ref{wiki} lists recent results of various neural sequence models on the Wikipedia dataset. All the results except for the ByteNet result are obtained using some variant of the LSTM recurrent neural network \citep{hochreiter1997long}. The ByteNet Decoder achieves 1.33 bits/character on the test set. 
\begin{table}[t] \vspace{-1cm} \small \begin{center} \begin{tabular}{llc} \toprule \textbf{Model} & \textbf{Test} \\ \midrule Stacked LSTM \citep{DBLP:journals/corr/Graves13} & 1.67 \\ GF-LSTM \citep{chung2015gated} & 1.58 \\ Grid-LSTM \citep{DBLP:journals/corr/KalchbrennerDG15} & 1.47 \\ Layer-normalized LSTM \citep{chung2016hierarchical} & 1.46 \\ MI-LSTM \citep{wu2016multiplicative} & 1.44 \\ Recurrent Highway Networks \citep{DBLP:journals/corr/SrivastavaGS15} & 1.42 \\ Recurrent Memory Array Structures \citep{rocki2016recurrent} & 1.40 \\ HM-LSTM \citep{chung2016hierarchical} & 1.40 \\ Layer Norm HyperLSTM \citep{2016arXiv160909106H} & 1.38 \\ Large Layer Norm HyperLSTM \citep{2016arXiv160909106H} & 1.34 \\ \textbf{ByteNet Decoder} & $\mathbf{1.33}$ \\ \bottomrule \end{tabular} \end{center} \caption{Negative log-likelihood results in bits/byte on the Hutter Prize Wikipedia benchmark. } \label{wiki} \end{table} \begin{table}[t] \small \begin{center} \begin{tabular}{lccc} \toprule \textbf{Model} & \textbf{WMT Test '14} & \textbf{WMT Test '15} \\ \midrule Phrase Based MT & ${20.7}^{(1)}$ & $24.0^{(2)}$ \\ \midrule RNN Enc-Dec & $11.3^{(3)}$ \\ RNN Enc-Dec + reverse & $14.0^{(3)}$ \\ RNN Enc-Dec Att & ${16.9}^{(3)}$ \\ RNN Enc-Dec Att + deep \citep{zhou2016deep} & 20.6 \\ RNN Enc-Dec Att + local p + unk replace & ${20.9}^{(3)}$ \\ RNN Enc-Dec Att + BPE in + BPE out & ${19.98}^{(4)}$ & ${21.72}^{(4)}$ \\ RNN Enc-Dec Att + BPE in + char out & ${21.33}^{(4)}$ & ${23.45}^{(4)}$ \\ GNMT + char in + char out \citep{wu2016} & $22.8$ \\ \textbf{ByteNet} & \textit{18.9} & \textit{21.7} \\ \bottomrule \end{tabular} \end{center} \caption{BLEU scores on En-De WMT NewsTest 2014 and 2015 test sets. The ByteNet is character-level. The other models are word-level unless otherwise noted. Result (1) is from \citep{freitag14:wmtEuBridge}, result (2) is from \citep{DBLP:conf/wmt/WilliamsSNHK15}, results (3) are from \citep{luong-pham-manning:2015:EMNLP} and results (4) are from \citep{DBLP:conf/acl/ChungCB16} } \label{mt} \end{table} \begin{figure*}[t] \vspace{-1cm} \centering \hspace{2.1cm} \begin{subfigure}{} \includegraphics[scale=.18, trim={0.1cm 0 0 0},clip]{sentence_length/de_en.pdf} \end{subfigure}% \hfill \begin{subfigure}{} \includegraphics[scale=.18, trim={0.1cm 0 0 0},clip]{sentence_length/ru_en.pdf} \end{subfigure}% \hspace{2.1cm} \caption{Lengths of sentences in characters and their correlation coefficient for the En-De and the En-Ru WMT NewsTest-2013 validation data. 
The correlation coefficient is similarly high ($\rho>0.96$) for all other language pairs that we inspected.} \label{fig:corr} \end{figure*} \begin{table}[t] \small \begin{center} \begin{tabular}{lccc} \toprule \textit{At the same time, around 3000 demonstrators attempted to reach the official residency of} \\ \textit{Prime Minister Nawaz Sharif.} \vspace{0.2cm} \\ \textit{Gleichzeitig versuchten rund 3000 Demonstranten, zur Residenz von Premierminister} \\ \textit{Nawaz Sharif zu gelangen.} \vspace{0.2cm} \\ \textit{Gleichzeitig haben etwa 3000 Demonstranten versucht, die offizielle Residenz des} \\ \textit{Premierministers Nawaz Sharif zu erreichen.} \\ \midrule \textit{Just try it: Laura, Lena, Lisa, Marie, Bettina, Emma and manager Lisa Neitzel} \\ \textit{(from left to right) are looking forward to new members.} \vspace{0.2cm} \\ \textit{Einfach ausprobieren: Laura, Lena, Lisa, Marie, Bettina, Emma und Leiterin Lisa Neitzel} \\ \textit{(von links) freuen sich auf Mitstreiter.} \vspace{0.2cm} \\ \textit{Probieren Sie es aus: Laura, Lena, Lisa, Marie, Bettina, Emma und Manager Lisa Neitzel} \\ \textit{(von links nach rechts) freuen sich auf neue Mitglieder.} \\ \midrule \textit{He could have said, ``I love you," but it was too soon for that.} \vspace{0.2cm} \\ \textit{Er hätte sagen können ``ich liebe dich", aber dafür war es noch zu früh.} \vspace{0.2cm} \\ \textit{Er hätte sagen können: ``I love you", aber es war zu früh.} \\ \bottomrule \end{tabular} \end{center} \caption{Raw output translations generated from the ByteNet that highlight interesting reordering and transliteration phenomena. For each group, the first row is the English source, the second row is the ground truth German target, and the third row is the ByteNet translation.} \label{tab:samples} \vspace{-0.5cm} \end{table} \section{Character-Level Machine Translation} We evaluate the full ByteNet on the WMT English to German translation task. We use NewsTest 2013 for development and NewsTest 2014 and 2015 for testing. The English and German strings are encoded as sequences of characters; no explicit segmentation into words or morphemes is applied to the strings. The outputs of the network are strings of characters in the target language. There are about 140 characters in each of the languages. The ByteNet used in the experiments has 15 residual blocks in the source network and 15 residual blocks in the target network. As in the ByteNet Decoder, the residual blocks are arranged in sets of five with corresponding dilation rates of 1,2,4,8 and 16. For this task we use residual blocks with ReLUs and Sub-BN (\figref{fig:residual} Left). The number of hidden units $d$ is 892. The size of the kernel in the source network is $1 \times 5$, whereas the size of the masked kernel in the target network is $1 \times 3$. We use bags of character $n$-grams as additional embeddings at the source and target inputs: for $n>2$ we prune all $n$-grams that occur less than 500 times. For the optimization we use Adam with a learning rate of $0.003$. Each sentence is padded with special characters to the nearest greater multiple of 25. Each pair of sentences is mapped to a bucket based on the pair of padded lengths for efficient batching during training. We find that Sub-BN learns bucket-specific statistics that cannot easily be merged across buckets. 
We circumvent this issue by simply searching over possible target intervals as a first step during decoding with a beam search; each hypothesis uses Sub-BN statistics that are specific to a target length interval. The hypotheses are ranked according to the \emph{average} likelihood of each character. Table~\ref{mt} contains the results of the experiments. We note that the lengths of the translations generated by the ByteNet are especially close to the lengths of the reference translations and do not tend to be too short; the brevity penalty in the BLEU scores is 0.995 and 1.0 for the two test sets, respectively. We also note that the ByteNet architecture seems particularly apt for machine translation. The correlation coefficient between the lengths of sentences from different languages is often very high (\figref{fig:corr}), an aspect that is compatible with the resolution preserving property of the architecture. Table~\ref{tab:samples} contains some of the unaltered generated translations from the ByteNet that highlight reordering and other phenomena such as transliteration. The character-level aspect of the model makes post-processing unnecessary in principle. We further visualize the sensitivity of the ByteNet's predictions to specific source and target inputs. Figure~\ref{fig:gradheatmaps} represents a heatmap of the magnitude of the gradients of source and target inputs with respect to the generated outputs. For visual clarity, we sum the gradients for all the characters that make up each word and normalize the values along each column. In contrast with the attentional pooling mechanism \citep{DBLP:journals/corr/BahdanauCB14}, this general technique allows us to inspect not just dependencies of the outputs on the source inputs, but also dependencies of the outputs on previous target inputs, or on any other neural network layers. \begin{figure} \centering \vspace{-1.4cm} \includegraphics[width=0.8\linewidth]{demonstration_heatmap} \vspace{-0.9cm} \caption{Magnitude of gradients of the predicted outputs with respect to the source and target inputs. The gradients are summed for all the characters in a given word. In the bottom heatmap the magnitudes are nonzero on the diagonal, since the prediction of a target character depends highly on the preceding target character in the same word.} \label{fig:gradheatmaps} \end{figure} \section{Conclusion} We have introduced the ByteNet, a neural translation model that has linear running time, decouples translation from memorization and has short signal propagation paths for tokens in sequences. We have shown that the ByteNet Decoder is a state-of-the-art character-level language model based on a convolutional neural network that significantly outperforms recurrent language models. We have also shown that the ByteNet generalizes the RNN Enc-Dec architecture and achieves promising results for raw character-level machine translation while maintaining linear running time complexity. We have revealed the latent structure learnt by the ByteNet and found it to mirror the expected alignment between the tokens in the sentences. \bibliographystyle{plainnat} { \small